Search Results

Search found 26618 results on 1065 pages for 'amazon instance store'.

  • Migrating an Active Directory domain controller to AWS

    - by Xavier Hutchinson
    I am required to migrate an Active Directory server into AWS, along with a couple of other servers (SQL and IIS), to create a dev and test environment for our network / development. My plan at this time is to simply rebuild the Active Directory server in AWS from scratch - which is quite time consuming indeed! I was wondering if anyone had a recommendation for a better and more efficient approach to migrating a copy of a physical Active Directory server to the cloud? The server is Windows Server 2012. Thank you!

    Read the article

  • Best configuration and deployment strategies for Rails on EC2

    - by Micah
    I'm getting ready to deploy an application, and I'd like to make sure I'm using the latest and greatest tools. The plan is to host on EC2, as Heroku would be cost-prohibitive for this application. In the recent past, I used Chef and the Opscode platform for building and managing the server infrastructure, then Capistrano for deploying. Is this still considered a best (or at least "good") practice? The Chef setup is great once done, but pretty laborious to set up. Likewise, Capistrano has been good to me over the past several years, but I thought I'd take some time to look around and see if there have been any landscape shifts that I missed.
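
    For reference, the Chef-plus-Capistrano split the question describes usually boils down to a short deploy cycle once the recipes are written; a sketch of the standard Capistrano 2 tasks (the recipe itself and all host details would live in config/deploy.rb, which is not shown in the question):

        # one-time: create the releases/shared directory structure on the server
        cap deploy:setup
        # sanity-check dependencies and permissions on all target hosts
        cap deploy:check
        # ship the current revision and restart the app
        cap deploy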

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

        master.ourdomain.com - the file syncing "hub" of the webservers
        www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync, even though they aren't syncing against each other directly.

    I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances:

        - changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and
        - a sleeping webserver then wakes up to absorb load.

    Under those circumstances, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master.

    Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old. So before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile      = "/home/user/log/lsyncd/log.txt",
            statusFile   = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon     = false,
        }

        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }

        sync {
            default.rsyncssh,
            source      = "/home/user/www/",
            host        = "[email protected]",
            targetdir   = "/home/user/www/",
            rsyncOpts   = "-ltus",
            excludeFrom = "/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
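
    One way to get that "pull before push" ordering, sketched as a wrapper script the init system would launch instead of starting lsyncd directly (the master hostname and paths here are assumptions based on the question):

        #!/bin/bash
        # start-lsyncd.sh: one-way pull from master first, then hand off to lsyncd.
        # --delete makes the local tree an exact copy of master's before any
        # two-way syncing begins.
        set -e
        rsync -az --delete user@master.ourdomain.com:/home/user/www/ /home/user/www/
        exec lsyncd /etc/lsyncd/lsyncd.conf.lua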

    Read the article

  • How do I mount an Exchange mail store from the Windows Command Line?

    - by Cypher
    Our Exchange server is running Exchange Server 2003 Standard on the Windows Server 2003 platform. We're dealing with the mail store size issue, where if the mail store goes over the limit, it gets dismounted. While we are working with the powers-that-be on a policy that will prevent this from happening in the future, I would like to see if it is possible to re-mount the mail store via the Windows CLI. I'm already monitoring the Event Logs and alerting on mail store warnings and dismounts - I'm just tired of getting up at 5am to manually re-mount the store while the political wars ensue. My alerting tools have the ability to execute a batch script when an alert is generated. I would greatly prefer a native CLI option. I'm not too keen on running some random VBScript found on the Internet, and I don't really care to spend my time debugging someone else's code. PowerShell might be an option, if it can be triggered from the CLI.

    Read the article

  • Retrieve a domain name based on an IP Address?

    - by Neil Kodner
    I'm reviewing some apache logs, specifically with respect to downloaded files. I'm interested in knowing, if possible, which domain is responsible for the download, given an IP address. I've given nslookup a try and it seems to (mostly) get the job done but it returns all sorts of extraneous information. Ideally, I pass in an IP and receive a domain back. Before I write a shell script to parse the output of nslookup to capture the domain, I'd like to know if this is the best way of approaching this problem, or if there is a more tried-and-true method of doing this. Specifically, I'd like to know if an address resolves to an amazonaws.com domain. I understand that this might be difficult because EC2 machines are dynamically created and destroyed - I'd like to know if the IP addresses for AWS/EC2/EMR machines fit any sort of addressing pattern.
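
    For the record, a reverse lookup doesn't require parsing nslookup's chatter; dig can return just the PTR record. A minimal sketch of the IP-in, domain-out script described above (the amazonaws.com test is simply the pattern match the question asks about):

        #!/bin/bash
        # reverse-resolve an IP and flag AWS hosts by their PTR record
        ip="$1"
        ptr=$(dig +short -x "$ip")
        if [ -z "$ptr" ]; then
            echo "$ip: no PTR record"
        elif [[ "$ptr" == *.amazonaws.com. ]]; then
            echo "$ip -> $ptr (AWS)"
        else
            echo "$ip -> $ptr"
        fi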

    Read the article

  • Schedule EC2 instances

    - by mattcodes
    I want to be able to schedule some simple EC2 EBS-backed instances (already configured) to start at 8am and stop at 4pm; that is the only time I'll be using my integration server. Is there a simple service (paid or not) that I can use to handle this? All I've found so far is to buy a cheap VPS at Linode or somewhere, install the EC2 tools, and schedule via crontab, but what a PITA that is, too. On the other end is something enterprisey like RightScale, but that's not my idea of simple.
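
    If the cheap-VPS-plus-crontab route does win out, the crontab itself is tiny; a sketch using the stock EC2 API tools (the instance ID is a placeholder):

        # start at 08:00 and stop at 16:00, weekdays only
        0 8  * * 1-5 ec2-start-instances i-12345678
        0 16 * * 1-5 ec2-stop-instances  i-12345678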

    Read the article

  • MongoDB data directory transfer and upgrade

    - by KPL
    I just transferred my data directory (from Mongo 1.6.5) to a new server and installed Mongo 2.0 on it. I set the data directory path and did sudo service mongod restart. It failed, and the log file output says this:

        ***** SERVER RESTARTED *****
        Sun Oct 9 07:51:47 [initandlisten] MongoDB starting : pid=8224 port=27017 dbpath=/database/mongodb 64-bit host=domU-12-31-39-09-35-81
        Sun Oct 9 07:51:47 [initandlisten] db version v2.0.0, pdfile version 4.5
        Sun Oct 9 07:51:47 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
        Sun Oct 9 07:51:47 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Oct 9 07:51:47 [initandlisten] options: { auth: "true", config: "/etc/mongod.conf", dbpath: "/database/mongodb", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", nojournal: "true" }
        Sun Oct 9 07:51:47 [initandlisten] couldn't open /database/mongodb/local.ns errno:1 Operation not permitted
        Sun Oct 9 07:51:47 [initandlisten] error couldn't open file /database/mongodb/local.ns terminating
        Sun Oct 9 07:51:47 dbexit:
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close listening sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to flush diaglog...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: closing all files...
        Sun Oct 9 07:51:47 [initandlisten] closeAllFiles() finished
        Sun Oct 9 07:51:47 [initandlisten] shutdown: removing fs lock...
        Sun Oct 9 07:51:47 dbexit: really exiting now

    I have already run it with --upgrade once.
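
    For what it's worth, errno:1 is EPERM, which points at file ownership rather than the version upgrade: files copied from another server usually still carry the old machine's ownership. A hedged fix, assuming the daemon runs as the mongod user (check with ps first):

        # see which user mongod actually runs as
        ps -o user= -C mongod
        # hand the copied data files to that user, then retry
        sudo chown -R mongod:mongod /database/mongodb
        sudo service mongod restart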

    Read the article

  • EC2 server in VPC stops responding after joining domain

    - by Geoff
    We have an EC2 Windows Server set up and running in our VPC, connected to our network via a Juniper 5GT. This is working well, with the tunnel up and stable. If I then join the server to our local domain, it appears to work - I can log on using domain credentials and use domain accounts when applying security to folders, etc. After I log out, if I give it around an hour, the box becomes unresponsive. I can't ping it, although a tracert goes all the way barring the last hop - so the tunnel is OK. I can't RDP into it. If I reboot it, then it works for a while before doing the same thing. Un-joining it from the domain fixes the problem, and it stays up and stable. The event logs don't show anything obvious, at least to me. Any ideas?

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers. DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems. I can ssh into the server using LDAP authentication. But once I'm on the machine, ping won't resolve the LDAP host, even though DNS seems to work fine. Here's ping:

        $ ping ldap.mycompany.ec2
        ping: unknown host ldap.mycompany.ec2

    Here's the output of dig:

        $ dig ldap.mycompany.ec2

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.mycompany.ec2
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;ldap.mycompany.ec2.           IN      A

        ;; ANSWER SECTION:
        ldap.mycompany.ec2.     3600   IN      CNAME   ec2-hostname.compute-1.amazonaws.com.
        ec2-hostname.compute-1.amazonaws.com. 55 IN A   aaa.bbb.ccc.ddd

        ;; Query time: 12 msec
        ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
        ;; WHEN: Tue May 31 11:16:30 2011
        ;; MSG SIZE  rcvd: 107

    And here is resolv.conf:

        $ cat /etc/resolv.conf
        search mycompany.ec2
        nameserver 10.32.159.xxx
        nameserver 10.244.19.yyy

    And here is my hosts file:

        $ cat /etc/hosts
        10.122.15.zzz   bamboo4 bamboo4.mycompany.ec2
        127.0.0.1       localhost localhost.localdomain

    And here's nsswitch.conf:

        $ cat /etc/nsswitch.conf
        passwd:     files ldap
        shadow:     files ldap
        group:      files ldap
        sudoers:    ldap files
        hosts:      files dns
        bootparams: nisplus [NOTFOUND=return] files
        ethers:     files
        netmasks:   files
        networks:   files
        protocols:  files
        rpc:        files
        services:   files
        netgroup:   files ldap
        publickey:  nisplus
        automount:  files ldap
        aliases:    files nisplus

    So DNS works the way I would expect. And I can ping the LDAP server by IP address. And I can even access the box with SSH using LDAP authentication. Any suggestions?
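
    One detail worth testing when comparing these outputs: dig queries the DNS server directly, while ping resolves through the NSS stack (the hosts: line in nsswitch.conf above). getent exercises the same path ping uses, so it separates a DNS problem from an NSS one:

        # if this fails while dig succeeds, the fault is in NSS, not DNS
        getent hosts ldap.mycompany.ec2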

    Read the article

  • Trying to send email from nagios

    - by batman
    I'm very new to Nagios. I'm trying to send email alerts, but that doesn't seem to be working. In my Nagios log I can see this:

        SERVICE ALERT: Appserver;Tmp directory;CRITICAL;HARD;1;

    Host notifications are delivered via email fine; only the service alerts are not working. And when I look at the sendEmail log I can see this:

        Sep 14 12:38:39 x.x.x.x sendEmail[23005]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:39:39 x.x.x.x.x sendEmail[23129]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:40:39 x-x-x-x-x sendEmail[23233]: ERROR => You must specify a 'from' field! Try --help.

    Where am I making the mistake? Thanks in advance.
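
    The sendEmail log already names the missing piece: its -f (from) flag. A hedged sketch of a fixed service-notification command definition (the command name, addresses, and SMTP host are all assumptions):

        define command {
            command_name  notify-service-by-email
            # -f supplies the 'from' field sendEmail refuses to run without
            command_line  /usr/bin/sendEmail -f nagios@example.com -t $CONTACTEMAIL$ -s smtp.example.com -u "$SERVICEDESC$ on $HOSTNAME$ is $SERVICESTATE$" -m "$SERVICEOUTPUT$"
        }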

    Read the article

  • Synchronize large objects to S3 efficiently

    - by emk
    I need to synchronize about 30GB of git repositories to S3. These repos may contain some very large pack files, on the rough order of 2GB. I know that S3 has recently added support for large objects, and has new APIs that allow the objects to be uploaded as several parallel chunks. Is there a good command-line tool for Linux that allows me to efficiently synchronize large objects with S3 in a fashion similar to s3sync?
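
    Depending on when you read this, the official AWS CLI may also be an option; its sync command is s3sync-like and handles multipart uploads for large files automatically (bucket and paths here are placeholders):

        # recursive, incremental sync; files above the multipart threshold
        # are uploaded in parallel chunks without any extra flags
        aws s3 sync /srv/git-repos s3://my-backup-bucket/git-repos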

    Read the article

  • HAProxy and 2 webservers

    - by enrico
    I have a website that is split across two different servers:

        - a chat server in node.js
        - the normal website (lighttpd + php + whatever)

    Now, I have set up HAProxy on the same machine as the node.js chat, so that when my website is accessed, it redirects to the chat login (e.g. mysite.com/messenger). What I want to do now is put a link on the chat page pointing to the other part of the website, which has a normal file tree (home.php, photos.php, settings.php, etc.), but I really have no clue how this whole redirection works. Also, what about URL rewriting? If I have, say, info.php?item=phone and want to change it to mysite.com/phone ... is this something I should do with HAProxy or with lighttpd? Thanks in advance.
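
    On the first question: HAProxy's usual tool for this is a path-based ACL rather than a redirect. A minimal sketch (the backend names and ports are made up for illustration):

        frontend http-in
            bind *:80
            # anything under /messenger goes to the node.js chat,
            # everything else to lighttpd
            acl is_chat path_beg /messenger
            use_backend chat if is_chat
            default_backend web

        backend chat
            server chat1 127.0.0.1:3000

        backend web
            server web1 127.0.0.1:8080

    As for the pretty-URL rewriting (info.php?item=phone to mysite.com/phone), that is more naturally done in lighttpd's url.rewrite rules, since HAProxy's job here is choosing where to send a request rather than rewriting it for the application.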

    Read the article

  • EC2 LAMP (Red Hat distro): error changing MySQL password

    - by t q
    I am on a plain Linux EC2 instance and wish to change my MySQL password. I've tried:

        sudo mysqladmin -u root -p '***old***' password '***new****'

    It then prompts me to enter a password, I enter ***old***, and I keep getting an error message:

        mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'root'@'localhost' (using password: YES)'

    Question: how do I change my current password?
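
    A likely culprit, judging only from the command shown: mysqladmin expects the old password glued directly to -p with no space. With a space, -p makes the client prompt interactively and the quoted old password is parsed as a stray argument:

        # note: no space between -p and the old password
        mysqladmin -u root -p'***old***' password '***new***'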

    Read the article

  • Cloud based backup solutions based on open standards?

    - by Rick
    I am looking for a solution to back up and consolidate important media from a couple of Windows laptops and a Mac laptop. I would like a solution that's based on open standards, so my data isn't trapped by proprietary formats and proprietary protocols. I would like the ability to switch clients or change providers in the future. For example, something like Jungle Disk plus S3 sounds like a great option. However, I am having trouble confirming how, or if, this can be set up to meet these criteria. Are there any real or de-facto standards for treating S3 as a filesystem? If so, what Windows and Mac clients support these standards?

    Read the article

  • Issues Deploying Functional WAR to Elastic Beanstalk with Tomcat7

    - by BFar
    I am currently deploying OpenTripPlanner (http://github.com/OpenPlans/OpenTripPlanner.git) to Elastic Beanstalk. I'm able to successfully build and deploy OpenTripPlanner with my own customized settings on an EC2. I have set it up so that the appropriate WAR file can be placed in the Tomcat/webapps folder, and when Tomcat is started up, it will auto-deploy and even download OpenTripPlanner's graph.obj from an S3. All of that works just fine, except when I try to deploy to Elastic Beanstalk. When I upload to Elastic Beanstalk, the log shows that my WAR file is successfully unpacked and the graph.obj is successfully downloaded from my S3. Beyond that point, though, nothing happens and I can't load the site in my browser. The health is RED, and I can't figure out what is going on. I've tried looking into ports and DNS issues, but I can't determine what's wrong. Anyone have any ideas? Why would a WAR that works on Tomcat7 outside of Beanstalk fail to be accessible?
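
    A hedged first debugging step, assuming the AWS CLI and an environment named my-env: pull the environment's recent events and log tails, which usually state why health went RED (common culprits are the load balancer's health-check URL returning non-200, or Tomcat listening on a port the proxy doesn't expect):

        # surface recent environment events
        aws elasticbeanstalk describe-events --environment-name my-env --max-items 20
        # request, then fetch, the tail of the instance logs
        aws elasticbeanstalk request-environment-info  --environment-name my-env --info-type tail
        aws elasticbeanstalk retrieve-environment-info --environment-name my-env --info-type tail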

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS that is running an Apache2 webserver (incidentally Python-based and using Django). We recently needed to add support for PHP5 to let us use a third-party PHP library (incidentally, for serving minified versions of js and css files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to, without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step for us if everything worked, or actually running any PHP scripts on the server). Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
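
    One hypothesis worth testing (an assumption, not a confirmed diagnosis): on Ubuntu, the php5 metapackage typically pulls in libapache2-mod-php5, which forces Apache onto the prefork MPM; on a small instance, that switch under a Python/Django stack can swap-thrash the box into a sustained 100% CPU state. If the third-party library only needs the PHP interpreter, installing just the CLI package leaves Apache untouched:

        # inspect what the metapackage would drag in before installing anything
        apt-cache depends php5
        # install only the command-line interpreter; Apache and its MPM stay as-is
        sudo apt-get install php5-cli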

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers report that we have an incomplete chain. We have tried the following to resolve this:

        - Upgraded haproxy to the latest 1.5.3
        - Created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root)
        - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file

    The ".pem" file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise about anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance...

    UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainname.com.pem
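
    One guess, based on how haproxy builds its served chain: the chain comes from the crt file itself, and the order inside that file matters (on a bind line, ca-file is for verifying client certificates, not for completing the served chain). The file should read server certificate first, then intermediates, then the private key; a sketch, with typical but assumed Comodo bundle filenames:

        # server cert -> intermediates (leaf-most first) -> private key
        cat domainname_com.crt \
            COMODORSADomainValidationSecureServerCA.crt \
            COMODORSAAddTrustCA.crt \
            domainname_com.key > /etc/ssl/domainname.com.pem

        # then check what haproxy actually serves
        openssl s_client -connect localhost:443 -servername domainname.com < /dev/null | head -40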

    Read the article

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. Now, I don't mean to undermine real sysadmins, as I know their value, but what I am trying to get at is that someone like me can set up and maintain a cluster of servers (front-end webservers, with some db servers) using tools like EC2 and Capistrano, with the help of Google. This isn't something I would do as a long-term thing, but as a startup, one-man operation, I think I can pull this off until business takes off and I can hire this important role out. With EC2, I get my firewall, so I basically open up port 80 on my public-facing server, which will run haproxy and route requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you guys think about my perception. My application is a web application that will be running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.

    Read the article

  • aws s3 works with script but not on cron

    - by user3800017
    Hi guys, my first post! Hope it's not the last. I have a few servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is that the script works fine when run by hand, but when I add it to the crontab, the script executes except for the s3 mv part! Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
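
    Two hedged suspects, judging only from the script shown: cron's stripped-down environment (no PATH entry for aws, and no HOME, which the AWS CLI needs to find its credentials), and the shell glob, since aws s3 mv takes a single source and wants --recursive for a directory. A sketch of both fixes (paths assumed):

        # in the crontab: give cron a usable environment
        PATH=/usr/local/bin:/usr/bin:/bin
        HOME=/root
        0 2 * * * /opt/s3-backup.sh >> /var/log/s3-backup.log 2>&1

        # in the script: move the directory's contents recursively
        # instead of relying on a glob
        aws s3 mv /opt/req/bkup/ s3://req/ --recursive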

    Read the article

  • s3cmd runs on the command line but not from cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve this. BTW, I am using Ubuntu 9.10. I log in as a user, then sudo -s, and this command worked:

        s3cmd put file s3://bucket

    Now here is the simple script intended for testing:

        #! /bin/bash
        env >/tmp/cronjob.log
        s3cmd put file s3://bucket

    and the crontab entry, via crontab -e:

        * * * * * /opt/script 2>&1 | logger

    Then tailing the syslogs shows:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying on S3Fox Organizer, the file is not uploaded. (I tried changing to #! /bin/sh (no effect), putting the cron entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What are other options to try, or other ways to debug this problem? Thanks
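
    Since env is already being dumped to /tmp/cronjob.log, the remaining suspect is where s3cmd finds its configuration: it reads $HOME/.s3cfg, and the HOME cron gives root is not the one the interactive sudo -s session had. Pointing at the config file explicitly removes the guesswork (the path is an assumption):

        #! /bin/bash
        # tell s3cmd exactly which config to use instead of relying on $HOME,
        # and capture its errors somewhere visible
        s3cmd -c /home/user/.s3cfg put file s3://bucket 2>>/tmp/cronjob.err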

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so), installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm, I get the same error. Running apt-get update by hand, the package then installs fine. Any ideas on what might be happening, or ideas for debugging?
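
    A hedged guess at the race: on a fresh Ubuntu AMI, cloud-init rewrites /etc/apt/sources.list to the region's mirrors shortly after boot, so an apt-get update that runs too early indexes the wrong sources and the install that follows finds no candidate. Waiting for cloud-init's done-marker before the first update would serialize things; a sketch of the remote commands (the marker path follows cloud-init's usual layout, an assumption here):

        # block until cloud-init has finished first-boot setup, then update
        while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 2; done
        apt-get -qy update
        apt-get -qy install --no-install-recommends mdadm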

    Read the article

  • route53 for multiple identical domains

    - by Yaniv Aknin
    My main domain is example.com, but I also bought example.org and example.net. I've configured my webservers at *.example.com to handle requests from the other domains and redirect them correctly to example.com, but I'd rather not re-configure all my DNS records at example.org and example.net to be the same as example.com's. Other than writing some ugly synchronization script, what should I do to have Route 53 answer queries against my "other" domains with the same data as the "main" domain?

    Read the article

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of the Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help figure out whether memory usage is already high enough, or whether I can still add some extra workers. All of this is under maximum load:

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it, etc.
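
    Reading the '-/+ buffers/cache' row above: roughly 1291 MB is genuinely in use out of 1756 MB, leaving about 465 MB of headroom, and swap is nearly untouched. Dividing that headroom by the resident size of one worker gives a rough count of how many more will fit; a one-liner to measure that (the process-name match is an assumption):

        # resident set size (KB) of each resque worker, smallest first
        ps axo rss,args | grep [r]esque | sort -n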

    Read the article

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this problem arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock to constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data-centre)? If so, would you be willing to share any details?

    Read the article

  • How to combine AWS and dedicated external servers?

    - by rfw21
    I have an extensive network of servers all currently hosted on AWS EC2. For reasons of cost I plan to gradually migrate to dedicated servers where possible. So: How can I best combine AWS and non-AWS servers in my network? Ideally, I should be able to assign internal IP addresses to the external servers, include them in AWS security groups and ensure that all private traffic between my AWS servers and external servers is secure.

    Read the article
