Search Results

Search found 1324 results on 53 pages for 'ec2'.

  • OSSEC is not running

    - by batman
    I have two EC2 instances. On one I have installed the OSSEC server and on the other the OSSEC agent. Here is my server's INBOUND config (security group/firewall): port 514, source 0.0.0.0/0; port 1514, source 0.0.0.0/0. But it does not seem to be working. In my agent log file I keep getting:

        2012/08/28 06:52:52 ossec-agentd: INFO: Using IPv4 for: x.x.x.x.x.x .
        2012/08/28 06:53:13 ossec-agentd(4101): WARN: Waiting for server reply (not started). Tried: 'x.x.x.x.x'.

    Edit: running sudo netstat --inet -nlp | grep ossec, I get:

        udp 0 0 0.0.0.0:1514 0.0.0.0:* 26027/ossec-remoted

    Where am I making the mistake?
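
    Since ossec-remoted is listening on UDP 1514, one thing worth checking is that the security group rule for 1514 is a UDP rule (a TCP-only rule would silently drop agent traffic) and that the agent key exchange was completed. A minimal check along those lines, assuming the server's address is 10.0.0.1 (a placeholder):

        # Probe UDP 1514 from the agent box; -u selects UDP, -w3 is a 3s timeout
        $ echo ping | nc -u -w3 10.0.0.1 1514

        # On the server, list the agents that have keys registered
        $ sudo /var/ossec/bin/manage_agents   # choose (L) to list added agents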

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the Write-Ahead Log might not be saved correctly and might in fact get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync, however I was unable to make any configuration changes with hdparm, since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf:
        HDIO_DRIVE_CMD(identify) failed: Invalid argument

    It looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
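
    Rather than configuring the cache blind, one way to test what the storage stack actually does with flushes is pg_test_fsync from postgresql-contrib (named test_fsync in older releases); a sketch, with the data-directory path assumed:

        # Compare fsync/fdatasync behaviour on the EBS-backed data directory
        $ pg_test_fsync -f /var/lib/postgresql/9.1/main/test.out

        # Confirm the live setting the server is actually running with
        $ psql -c "SHOW wal_sync_method;"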

  • Mounting both /dev/sda and /dev/sda1 - how can this be?

    - by itsadok
    I work on an Amazon EC2 instance that somebody else set up. We have an EBS volume mounted on /dev/sda, even though the root device is already on /dev/sda1, and we're also using /dev/sda2:

        user@server:~$ mount
        /dev/sda1 on / type ext3 (rw)
        ... (snip)
        /dev/sda2 on /mnt type ext3 (rw)
        /dev/sda on /vol type xfs (rw,noatime)
        ...

    This doesn't seem to fit with what I know about the way /dev works. How is this possible, and more importantly: will this cause trouble in the future? I'm running Ubuntu 9.04 (Jaunty).
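
    On Xen-based EC2 instances, /dev/sda, /dev/sda1 and /dev/sda2 need not be a disk and its partitions at all: each can be an independently attached virtual block device that merely reuses the familiar names. A quick way to see how the kernel views them (a sketch):

        # Every block device the kernel knows about, with sizes in 1K blocks
        $ cat /proc/partitions

        # Identify the filesystem (or partition table) on each device node
        $ sudo file -s /dev/sda /dev/sda1 /dev/sda2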

  • zlib/libxml2 duplicate package?

    - by Fusion
    I've been updating my Amazon EC2 micro instance every month until now. When I try to run yum update I receive this error:

        zlib-1.2.5-7.11.amzn1.x86_64 has installed conflicts libxml2 < ('0', '2.7.7', None): libxml2-2.7.6-4.12.amzn1.x86_64
        zlib-1.2.5-7.11.amzn1.x86_64 is a duplicate with zlib-1.2.3-27.9.amzn1.x86_64

    Full yum update output: http://pastebin.com/Dfq0yphN

    I've tried to update zlib and libxml2 separately. zlib: same "duplicate" error. libxml2: Transaction Check Error: package libxml2-2.7.8-10.24.amzn1.x86_64 is already installed. What can I do?
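
    Duplicate package entries like this usually mean an earlier yum transaction was interrupted partway through. The yum-utils tools can often repair that state; a sketch (take a snapshot or backup first):

        # Finish or roll back any interrupted yum transaction
        $ sudo yum-complete-transaction

        # List packages installed twice, then remove the older duplicates
        $ sudo package-cleanup --dupes
        $ sudo package-cleanup --cleandupes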

  • How to upgrade a single instance's size without downtime

    - by Justin Meltzer
    I'm afraid there may not be a way to do this since we're not load balancing, but I'd like to know if there is any way to upgrade an EC2 EBS-backed instance to a larger size without downtime. First of all, we have everything on one instance: both our app and our database (MongoDB). This is along the lines I'm thinking: I know you can create snapshots of your EBS volumes and an AMI of your instance. We already have an AMI, and we create hourly snapshots. If I spin up a new, separate instance of a larger size and then restore (not sure if that's the right term) the snapshots so that our database is up to date, I could switch the A record of our domain from the old IP address to the new one. However, I'm afraid that after copying over the data from the snapshot, in the time it takes to change the A record and have that change propagate, the data could become stale. Is there a way to prevent this, and is there a better way to do this than I am suggesting?
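
    Note that an EBS-backed instance can also be resized in place: stop it (the EBS root survives a stop, unlike a terminate), change the instance type, and start it again. That isn't zero downtime, but it is typically minutes and avoids the stale-data problem entirely. A sketch with the classic EC2 API tools and a made-up instance ID:

        $ ec2-stop-instances i-12345678
        $ ec2-modify-instance-attribute i-12345678 --instance-type m1.large
        $ ec2-start-instances i-12345678

    Lowering the A record's TTL well in advance (e.g. to 60 seconds) also shrinks the propagation window if you do go the snapshot-and-switch route.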

  • DNS settings for SaaS in the cloud?

    - by Jeremy
    I am building a SaaS product. When a user signs up for an account they must select an alias for their site: --------.getlaunchpoint.com. Right now I have an A record for *.getlaunchpoint.com that points to the server's IP address. However, with Azure I am not given an IP address; the suggested implementation is to make use of a CNAME. I need to create a CNAME for *.getlaunchpoint.com -> getlaunchpoint.cloudapp.net, but GoDaddy does not support CNAME wildcards. Searching on Google I'm getting conflicting information... are wildcard CNAMEs a bad practice? I run into the same problem with Amazon EC2 if I want to make use of load balancers, because you cannot tie a public IP address to an Amazon load balancer; Amazon also suggests the use of a CNAME. Any help would be appreciated.
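
    Wildcard CNAMEs are valid DNS (wildcard semantics are defined in RFC 1034 and clarified in RFC 4592); the limitation described here is in GoDaddy's control panel, not in the protocol. A DNS provider that accepts them (Route 53, for example) would take a zone entry along these lines, sketched in standard zone-file syntax:

        ; every otherwise-unmatched name under the domain aliases to Azure
        *.getlaunchpoint.com.    IN    CNAME    getlaunchpoint.cloudapp.net.

    The usual caveats are that a CNAME cannot coexist with other records at the same name, and the zone apex (getlaunchpoint.com itself) cannot be a CNAME.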

  • Easy way to update apache on a server cluster with shared NFS conf?

    - by Simon
    We have a setup where a cluster of web servers, connected to a db/files/conf server shared over NFS, serves our sites behind an Elastic Load Balancer on Amazon EC2. The setup works correctly, but keeping it up to date is becoming hellish, because the Apache/PHP configuration the web servers use is shared through NFS. So if we try to run an apt-get upgrade on a server in the cluster, it aborts because the web server is not able to write the configuration back to the NFS server. Every time we want to update the machines, or install a package like php-curl, we need to create a new AMI so the changes are reflected in newly launched instances. Could there be a simpler way of doing things? Thanks in advance!
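
    One pattern that sidesteps the conflict is to leave the package-managed configuration local on each node and share only the site-specific part over NFS via an Include, so apt-get never needs to write to the NFS mount. A sketch (paths are assumptions):

        # /etc/apache2/apache2.conf stays local and package-owned; at the end:
        Include /mnt/nfs/apache/sites-shared/*.conf

    With that split, apt-get upgrade touches only local files, while the shared vhost definitions still live in one place.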

  • Address already in use - Amazon AWS

    - by Peter
    I've run into a really weird issue. I was debugging a server 500 error on our EC2 instance and found that we didn't have the ionCube loaders installed. So I went to install them: I created a new file at /etc/php.d/zend.ini, initially with the value extension=/usr/local/ioncube/ioncube_loader_lin_5.3.so, and restarted httpd, at which point it told me:

        The ionCube Loader is a Zend-Engine extension and not a module
        Please specify the Loader using 'zend_extension' in php.ini
        PHP Fatal error: Unable to start ionCube Loader module in Unknown on line 0

    So I changed the contents of zend.ini to zend_extension=/usr/...etc. Now when I attempt to restart httpd I get this error:

        Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs

    I can't even run /etc/init.d/httpd stop without it erroring. I've since removed zend.ini to see if that's what caused it, and it doesn't seem to be. Any ideas?
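
    "Address already in use" here usually means the earlier failed restart left orphaned httpd processes holding port 80, so the init script's PID file no longer matches reality. A sketch for finding and clearing them:

        # See which process is actually bound to port 80
        $ sudo netstat -tlnp | grep ':80 '

        # Kill whatever holds the port (or: sudo killall httpd), then start clean
        $ sudo fuser -k 80/tcp
        $ sudo service httpd start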

  • How can I run a job when the server load is low?

    - by jberryman
    I have a command that runs a disk snapshot (on EC2, freezing an XFS disk and running an EBS snapshot command), which is set to run on a regular schedule as a cron job. Ideally I would like to be able to have the command delayed for a period of time if the disk is being used heavily at the moment the task is scheduled to run. I'm afraid that using nice/ionice might not have the proper effect, as I would like the script to run with high priority while it is running (i.e. wait for a good time, then finish fast). Thanks.
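
    For exactly this "run when the machine is quiet" requirement, the batch command from the at package is worth a look: it holds jobs until the load average falls below a threshold (1.5 by default, tunable with atd's -l flag), then runs them at normal priority. A sketch of both that and a hand-rolled cron wrapper, with the snapshot script name made up:

        # Queue the snapshot to run once load drops (atd must be running)
        $ echo /usr/local/bin/ebs-snapshot.sh | batch

        # Or, in the cron job itself: wait up to an hour for load < 1.0
        for i in $(seq 12); do
            load=$(cut -d' ' -f1 /proc/loadavg)
            awk -v l="$load" 'BEGIN { exit !(l < 1.0) }' && break
            sleep 300
        done
        exec /usr/local/bin/ebs-snapshot.sh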

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu, with the latest stable nginx; network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering if this system is in good health for now, e.g.: what is the queue length (I know nginx can handle a lot of concurrent requests, but I mean how many requests have to wait before being served), and what is the average queue time for a given request? I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
        90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470
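
    nginx itself doesn't queue requests in a way the stub_status page exposes; connections it hasn't accepted yet wait in the kernel's listen backlog (the Waiting figure above is idle keep-alive connections, not a queue). So two checks are informative here, sketched below; exact output formats vary by kernel:

        # Kernel counters: listen-queue overflows indicate real queueing/drops
        $ netstat -s | grep -i listen

        # For listening sockets, Recv-Q is the current accept-queue depth
        # and Send-Q is its configured limit
        $ ss -lnt

        # If user CPU (%us) pegs during peaks, SSL is the bottleneck
        $ top -bn1 | head -5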

  • Is it secure to store the cert/key on a private AMI?

    - by Phillip Oldham
    Are there any major security implications to bundling a private AMI which contains the private key/certificate and environment variables? For resiliency I'm creating an EC2 image which should be able to boot and configure itself without any intervention. After boot it will attempt to: attach and mount specific EBS volume(s); associate a specific Elastic IP; and start issuing backups of the EBS volume(s) to S3. However, to do this it will need the private key/pem files, and will need certain environment variables to be available on start-up. Since this is a private AMI, I'm wondering if it will be "safe" to store these variables/files directly in the image, so that I don't need to specify any user-data information and can therefore start a new instance remotely (from my iPhone, if needed) should the instance be terminated for any reason.

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a web server, i.e. it will run Apache, PHP and MySQL. There will be between 2 and 8 small websites running, with only very little traffic and workload. High availability is, however, very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime, or only a minimum of downtime, and no data loss. I have considered Amazon AWS with Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals to $185/month for the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it takes all available memory. It does not finish by itself. I have to restart the server. There is nothing suspicious in Tomcat access logs. I guess one of our users could submit a very "heavy" request which brought the server down. Is it possible this request is not in Tomcat logs since it never finished?
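
    If you're using the standard AccessLogValve, then yes: it writes a log line only when a request completes, so a request that hangs the server never shows up. A couple of things that can help, sketched here (12345 is a placeholder for the Tomcat PID):

        # While the CPU is pegged, dump what every thread is doing
        $ jstack -l 12345 > /tmp/threads-1.txt
        $ sleep 10
        $ jstack -l 12345 > /tmp/threads-2.txt
        # Threads sitting in the same application code across both dumps
        # point at the "heavy" request

    Adding %D (milliseconds taken) to the access-log pattern also helps spot slow-but-finishing requests before one finally hangs.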

  • Amazon Linux AMI release 2010.11.1 corresponds to which RHEL version (4/5/6)?

    - by Jayesh
    I am using the default Amazon Linux AMI on an EC2 instance: Amazon Linux AMI release 2010.11.1. I can see that it's a Red Hat based system, but after trying many things (/etc/issue, uname -a, lsb_release), I cannot tell which version of RHEL or CentOS it is based on. I need to get some packages that are not available in Amazon's package repos. I have a list of custom yum repos that I can use, but since I don't know which RHEL version the Amazon AMI is based on, I cannot choose from the different versions of the repos. How can I find out whether it's running RHEL 4/5/6 (or their CentOS counterparts)?
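
    Amazon Linux doesn't track a single RHEL release, so there is no exact answer; the closest practical approach is to compare core package versions against the RHEL lineages. A sketch:

        # Amazon Linux's own release string
        $ cat /etc/system-release

        # glibc 2.5 corresponds to RHEL 5, glibc 2.12 to RHEL 6;
        # pick third-party repos matching whichever these resemble
        $ rpm -q glibc kernel python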

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent on server issues; any help is much appreciated. When I try to start a Python/Django shell on a Linux box, I get OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:

                         total       used       free     shared    buffers     cached
        Mem:               590        560         29          0          3         37
        -/+ buffers/cache:            518         71
        Swap:                0          0          0

    But I cannot see what is eating up the memory with top or ps aux:

        PID USER  PR NI  VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
          1 root  20  0 24336 908   0 S  0.0  0.2 0:00.68 init
          2 root  20  0     0   0   0 S  0.0  0.0 0:00.00 kthreadd
          3 root  20  0     0   0   0 S  0.0  0.0 0:04.85 ksoftirqd/0

    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.
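
    Note that top sorts by CPU by default, so a memory hog can hide below the fold; also, with no swap configured, fork() can fail even when some memory looks free. A couple of checks, as a sketch:

        # Sort all processes by resident memory instead of CPU
        $ ps aux --sort=-rss | head -15

        # Memory that belongs to no process: tmpfs mounts and kernel slab
        $ df -h -t tmpfs
        $ egrep 'Slab|Shmem' /proc/meminfo

    Adding a small swap file is a common stopgap on swapless EC2 instances, since Errno 12 often comes from fork/exec needing to reserve memory rather than from true exhaustion.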

  • Persistent Spot Instance Request with CloudFormation

    - by PapelPincel
    Is it possible to create a persistent spot instance request with AWS CloudFormation? I'm going through the Auto Scaling and EC2 CloudFormation template references, but there is no mention of a property that makes spot requests persistent. When the spot price rises above the bid, AWS brings the instances down; I would like the instances to be started automatically when the spot price drops again. This can be set manually when creating a new spot instance request by checking the "Persistent Request" option in the Request Instances Wizard.
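
    As far as I can tell, the only spot-related hook CloudFormation offers is the SpotPrice property on an Auto Scaling launch configuration; the Auto Scaling group then keeps re-requesting instances whenever capacity is lost, which in practice approximates a persistent request. A sketch of the relevant template fragment (the AMI ID and bid are placeholders):

        "LaunchConfig" : {
          "Type" : "AWS::AutoScaling::LaunchConfiguration",
          "Properties" : {
            "ImageId" : "ami-12345678",
            "InstanceType" : "m1.small",
            "SpotPrice" : "0.05"
          }
        }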

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a Fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so) installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or ideas for debugging?
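
    One plausible culprit on a freshly launched AMI is a race with cloud-init, which rewrites /etc/apt/sources.list on first boot: an apt-get update that runs before the rewrite indexes the wrong mirror set, and the subsequent install then sees no candidate. A sketch of a guard, using cloud-init's completion marker (path may vary by version):

        # Wait for first-boot initialization to finish before touching apt
        while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 2; done
        sudo apt-get -qy update
        sudo apt-get -qy install --no-install-recommends mdadm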

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot, and created volumes from the said snapshots, but specified the volume size to be 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally, but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, reassembled the RAID, but it still reports the old size. Edit: trying to grow the RAID using mdadm --grow yields:

        mdadm: raid0 array /dev/md0 cannot be reshaped

    Is there any other way I can grow a RAID0 array?
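
    The filesystem couldn't grow because the md device itself is still 2GB: a RAID0 array's size is fixed at assembly, and, as the error says, RAID0 cannot be reshaped. One workaround (risky; keep the snapshots until verified) relies on the fact that re-creating a RAID0 with the same member order, chunk size and metadata version reproduces the same data layout over the now-larger devices. A sketch, with device names and chunk size as assumptions that must match the original array (check mdadm -D /dev/md0 first):

        $ sudo umount /vol
        $ sudo mdadm --stop /dev/md0
        # Same level, same device order, same chunk size as the original
        $ sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sdf /dev/sdg
        $ sudo mount /dev/md0 /vol
        $ sudo xfs_growfs /vol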

  • MMS gets hostname from uname and can't connect to it

    - by Adam Monsen
    I'm trying to get 10gen's MongoDB Monitoring Service monitoring my 3-node replica set. The replica set is running in an AWS VPC, and each node runs on a different [virtual] machine. Assume their IPs are 192.168.1.1 (primary or secondary), 192.168.1.2 (primary or secondary), and 192.168.1.3 (arbiter). From a quick look at the source, MMS appears to get the hostname of the machine it is running on like so:

        platform.uname()[1]

    For my VPC EC2 instance, this returns something like ip-192-168-1-1. MMS then tries to connect to this hostname, which does not resolve. I'd rather just use IP addresses (since they're always static), but it seems like the hardcoded use of platform.uname()[1] in mmsAgent.py precludes that. So, what's an elegant way out of this? Hack /etc/hosts? I'm not setting up a DNS server just for this. Maybe I'm just misunderstanding how to configure MMS.
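
    The /etc/hosts route is less of a hack than it sounds, since the VPC addresses are static: mapping each node's uname-derived name to its address, on every machine running the agent, makes the generated hostnames resolve. A sketch of the entries:

        # /etc/hosts on each MMS agent host
        192.168.1.1   ip-192-168-1-1
        192.168.1.2   ip-192-168-1-2
        192.168.1.3   ip-192-168-1-3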

  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
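
    One approach: if the user already exists when the package's postinst script runs, Debian/Ubuntu packaging typically leaves it alone rather than creating a new one, so pre-creating postgres with a pinned UID should work. A sketch, with 5432 as an arbitrarily chosen fixed UID:

        # Create group and user with fixed IDs before installing the package
        $ sudo groupadd -g 5432 postgres
        $ sudo useradd -u 5432 -g postgres -d /var/lib/postgresql -s /bin/bash postgres
        $ sudo apt-get -qy install postgresql-9.1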

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this problem arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data-centre)? If so, would you be willing to share any details?

  • Detaching EBS volumes (in LVM) takes a long time

    - by Cheezo
    I have an EC2 instance (EBS-backed root partition) with EBS volumes configured via LVM. I have formatted them as ext4 and can mount them to store data, etc. Now I want to take a snapshot of the root partition, so I go and detach the other, non-root EBS volumes (configured in LVM). Here a regular detach does not work, and I have to "force" detach almost always. In another similar setup with RAID instead of LVM, after stopping the RAID I can easily detach. The whole setup is running Ubuntu Maverick 10.10. Please assist.
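
    A detach hangs when the kernel still holds the device open, and with LVM the volume group keeps the underlying devices open until it is deactivated, which is the step that stopping the RAID performs in the other setup. A sketch of the equivalent sequence (names are examples):

        $ sudo umount /data
        $ sudo vgchange -an datavg           # deactivate the volume group
        $ ec2-detach-volume vol-12345678     # should now detach without force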

  • Setting up Amazon CloudWatch to get an alert when your server is down

    - by Saif Bechan
    I have an instance running on Amazon EC2 that I turned into a web server. Now I have been looking at CloudWatch, but I do not know if it is the correct tool for the job. Basically I want to be informed when the server is down, for whatever reason. Maybe the server got hacked, or it shut down for some reason; I want to get a notification of that. I have enabled CloudWatch and tried to set up an alert, but I only see things like network in/out, CPU usage, and other metrics. I do not know if these will do the trick.
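
    The built-in metrics can't tell a healthy-but-idle server from a dead one, so a common workaround is a heartbeat: push a custom metric every minute and alarm when the data stops arriving (the alarm's INSUFFICIENT_DATA state). A sketch using the CloudWatch command-line tools, with made-up names:

        # crontab entry on the web server
        * * * * * mon-put-data --metric-name Heartbeat --namespace "MySite" --value 1

    For a true external check (which also catches a hacked-but-running server serving garbage), a third-party HTTP monitor pointed at the site is the usual complement.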

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS store across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so if one such attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?
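
    Worth noting: an EBS volume can only be attached to one instance at a time, so any shared view has to be built on top of it. Besides NFS, a replicated GlusterFS volume across the web nodes is one way to get shared POSIX storage without a single file server to maintain; a sketch with example hostnames:

        # On one node, define a 2-way replicated volume over both web servers
        $ sudo gluster volume create attachments replica 2 web1:/data/brick web2:/data/brick
        $ sudo gluster volume start attachments

        # Mount it on each node
        $ sudo mount -t glusterfs web1:/attachments /mnt/attachments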

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales I thought Rackspace Cloud was running on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't, so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure. Which cloud hosting solutions don't rely on a single server? I've yet to find a list by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as 2 mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with mirrored data between them; until we need multiple database servers, I'm wondering if a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.
