Search Results

Search found 14056 results on 563 pages for 'odata services'.


  • Providing a static IP for resources behind AWS Elastic Load Balancer (ELB)

    - by tharrison
    I need a static IP address that handles SSL traffic from a known source (a partner). Our servers are behind an AWS Elastic Load Balancer (ELB), which cannot provide a static IP address; there are many threads about this here. My thought is to create an EC2 instance whose sole purpose in life is to be a reverse proxy with its own IP address, accepting HTTPS requests and forwarding them to the load balancer. Are there better solutions?

    Read the article

  • What differences are there between an official Ubuntu AMI image and a base install from an ISO?

    - by David Winter
    When creating a new instance on AWS using an official Ubuntu 12.04 server AMI, what differences are there compared to a standard server install on a computer of my own? For example, the default user is 'ubuntu', an SSH public key is added to that user's authorized_keys file, sudo is passwordless for that user, PasswordAuthentication is disabled for SSH, and so on. Configurations have been changed from their defaults, and I'd like to know if there is a list, or somewhere I could find out what modifications were made.

    Read the article

  • AWS: Should my EC2 and RDS instances be in the same Availability Zone?

    - by DOOManiac
    I just noticed that all of our EC2 instances are in zone us-west-2b, but our Multi-AZ RDS instance is in us-west-2a. Performance-wise everything seems to be okay, and it would be a hassle to "move" the instances to one place since you have to stop and re-create them all. However, if either of the two zones goes down, we will have some downtime; if everything is in one zone, then at least we have a higher chance of not being in the zone that has downtime. Is this something worth fixing, or am I over-thinking it? (I was about to purchase some EC2 Reserved Instances, which are tied to specific AZs, so I wanted to make sure before going through with it.) Thanks!

    Read the article

  • Using an AWS EC2 server to host a busy website and I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from and some sort of load-balancing mechanism to distribute traffic. But maybe not; I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.
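
    As a rough sketch of the "two web instances behind a load balancer, database elsewhere" layout described above, the following uses boto3 with a classic ELB; the region, load balancer name and instance IDs are hypothetical placeholders, and MySQL would live on RDS or its own instance rather than behind the balancer.

        # Hypothetical sketch: two web servers behind a classic ELB (boto3).
        import boto3

        elb = boto3.client("elb", region_name="us-east-1")

        # Create a classic load balancer listening on HTTP port 80.
        elb.create_load_balancer(
            LoadBalancerName="zend-web-lb",
            Listeners=[{
                "Protocol": "HTTP",
                "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP",
                "InstancePort": 80,
            }],
            AvailabilityZones=["us-east-1a", "us-east-1b"],
        )

        # Register the two web-serving instances; the MySQL database is not
        # registered here -- it would sit on RDS or a separate EC2 instance.
        elb.register_instances_with_load_balancer(
            LoadBalancerName="zend-web-lb",
            Instances=[{"InstanceId": "i-0aaaaaaa"}, {"InstanceId": "i-0bbbbbbb"}],
        )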

    Read the article

  • What's the max Windows 7 access possible while still restricting tampering with a single service?

    - by Crawford Comeaux
    I'm developing an ADHD management system for myself. Without going into detail (and as silly as it may sound for a grown man to need something like this), I need to build a me-proof service to run on my Windows 7 Ultimate laptop. I still need fairly complete access to the system, though. How can I set things up so that I'm unable to "easily" (i.e. within 3-5 minutes, without rebooting) stop the service or prevent it from running?

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a web server, i.e. it will run Apache, PHP and MySQL. There will be between 2 and 8 small websites running with only very little traffic and workload. High availability is, however, very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime or only a minimum of downtime - and no data loss. I have considered Amazon AWS using their Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals $185/month in the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards

    Read the article

  • Picking only the value field out of Cloudwatch Dimensions, Java

    - by GroovyUser
    I have some data that was retrieved from the CloudWatch APIs; specifically, I used listMetrics. The data that I got from this call is:

        {Metrics: [
          {Namespace: Metric from grails, MetricName: hello123, Dimensions: [{Name: name, Value: 1425, }], },
          {Namespace: Metric from grails, MetricName: hello123, Dimensions: [{Name: name, Value: 1068, }], },

    That is the correct data, as I would expect. I need a way to return only the Value fields, not the other things. Is there any way to do this in Java? Thanks in advance.
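
    The question asks for Java; purely as a sketch of the traversal, here is the equivalent in Python with boto3 (the AWS SDK for Java exposes the same structure through getMetrics(), getDimensions() and getValue()). The region is a placeholder.

        # Pull only the Value field of every dimension returned by listMetrics.
        import boto3

        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
        response = cloudwatch.list_metrics()

        values = [
            dimension["Value"]
            for metric in response["Metrics"]
            for dimension in metric["Dimensions"]
        ]
        print(values)  # e.g. ['1425', '1068']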

    Read the article

  • Does AWS resolve same-datacenter hostnames to 10.* addresses for different customer accounts?

    - by Scott Ritchie
    If I bring up two Amazon EC2 instances and run nslookup on one for the other's hostname, amazon will return a 10.* address. This is routable within amazon, and works just fine. But does this work between different accounts? If I use one of my nodes to nslookup a hostname belonging to another customer (but still in the same datacenter) will it resolve as a 10.* address or will it give the standard public IP?
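
    One way to answer this empirically is to resolve the other hostname from inside one of your own instances and check which range the answer falls in; a small Python sketch (the hostname below is a made-up placeholder):

        # Resolve a public EC2 DNS name and report whether it maps to a 10.* address.
        import socket
        import ipaddress

        hostname = "ec2-203-0-113-10.us-west-2.compute.amazonaws.com"  # placeholder

        resolved = socket.gethostbyname(hostname)
        private = ipaddress.ip_address(resolved) in ipaddress.ip_network("10.0.0.0/8")
        print(resolved, "(private)" if private else "(public)")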

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 webserver (incidentally Python-based and using Django). We recently needed to add support for PHP5 to let us use a third-party PHP library (incidentally for serving minified versions of js and css files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to - without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step for us if everything worked) or actually running any PHP scripts on the server. Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?

    Read the article

  • Opening an oracle database crashes the service [SOLVED]

    - by tundal45
    I am experiencing a weird issue with Oracle where the service started fine after a crash. The database mount went fine as well. However, when I issue the alter database open; command, the database does not open, gives a generic "cannot connect to the database" error, and crashes the service. Oracle support has not seen this issue before, so it's pretty scary. The fact that there are no logs that give any leads as to what could be causing this is also scary. I was wondering if the good folks over at Server Fault had seen something like this or have some insights on things that I could try. It's Oracle 10g running on Windows Server 2003. Thanks, Ashish

    Read the article

  • Two threads in initializer on rails not working

    - by Luccas
    Initially I was using one thread to listen to a queue from Amazon, and it worked perfectly. aws.rb:

        Thread.new do
          my_queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
          my_queue.poll do |msg|
          ...

    but now I have appended another thread to listen to another queue:

        ...
        Thread.new do
          my_another_queue = AWS::SQS::Queue.new(SQSADDR['my_another_queue'])
          my_another_queue.poll do |msg|
          ...

    and now it seems not to work. Only the last one receives responses... What is going on?

    Read the article

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to automatically back up my server weekly and monthly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I got my first update manually by using this command in the terminal:

        sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    I am then prompted for that user's password, and Bob's my uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? Like running this script automatically every Sunday? EDIT I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.

    Read the article

  • Webcam security camera software that runs as a service

    - by hurfdurf
    I've been looking for Windows webcam software that will run as a Windows service without any user login. The goal is to use the webcam as a cheap security camera and log the results to secure networked storage (a Windows share, not FTP). The requirements are:
    - Motion detection
    - Video capture
    - Runs as a service (should start recording immediately after reboot)
    Nice to have:
    - Round-robin storage, e.g. a 10GB limit, oldest files overwritten/deleted when space gets low
    I've read the other webcam questions but still haven't stumbled across anything suitable. Evaluations thus far:

        Title                    MotionDetect  Service  Snapshots  Video  SpaceLimit  License
        Yawcam                   Yes           Yes      Yes        No     No          GPL
        WebCam ZoneTrigger       Yes           No       Yes        Yes    No          Commercial
        Dorgem                   Yes           No       Yes        Yes    No          GPL
        AbelCam                  Yes           No       Yes        Yes    No          Commercial
        Logitech                 Yes           No       Yes        Yes    No          Paired with camera
        IspyConnect              Yes           No       Yes        Yes    Yes         Free
        SecureCam (SourceForge)  Yes           No       Yes        Yes    No          GPL
        AbelCam                  Yes           No       Yes        Yes    No          Commercial
        Active WebCam            Yes           Yes(?)   Yes        Yes    Volume      Free/Commercial
        WebCam Surveyor          Yes           No       Yes        Yes    No          Commercial
        WebCamsPy                NA            NA       NA         NA     NA          GPL

    Camera: Logitech Webcam Pro 9000, Windows 7 32-bit. WebCamsPy failed to initialize, so it couldn't be tested. So far, the contenders: Active WebCam comes the closest, and claims to run as a service, but I haven't been able to get it to record after a cold boot even though a service is running. Yawcam can be set up as a service but doesn't record video. IspyConnect has exactly the type of space limit I want and looks great, but doesn't run as a service (it also seems to be a bit of a CPU hog). Any other suggestions? I'm locked into Windows so can't use Linux Motion, which looks almost perfect. Any pointers to rich Windows webcam/motion detection libraries out there that could easily be turned into a command-line program would also be appreciated.

    Read the article

  • Temporary user-profiles on Windows Server 2008 TS

    - by sinni800
    Hello, for a publicly accessible terminal server I have created a user profile which only allows running a few programs (demonstration of applications). This results in many people connecting to the same user name on the server, essentially sharing the same profile. How can I copy the original, empty profile to a separate directory on every logon and delete it afterwards, so everybody starts with a clean copy of the "Guest" account?

    Read the article

  • What type of amazon instance should I use and do I need auto scaling and load balancing?

    - by Navetz
    Hi, I am looking to release a website that will initially have large amounts of uploads from users. The first will be 65GB and the rest will probably be close to 1TB. They could happen simultaneously. My question is: what type of Amazon server instance would be best for this? The website is just being released, so the traffic won't be very high. I have been using a micro instance for development, but it is time to launch and I need more power. Should I use auto scaling and a load balancer to increase the number of instances when I need them, or will a small or medium instance do the trick? If I do use auto scaling and load balancing, how do I handle things like sessions and database/file lookups? Does one instance become the primary instance and the rest become clones?
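
    For orientation only, a hypothetical boto3 sketch of what the auto-scaling route involves: a launch configuration describing the web instances and a group that keeps a few of them registered with a load balancer. The AMI ID, names, sizes and region are placeholders; session state and the database would still need to live outside the web instances (e.g. RDS plus a shared session store) so the clones stay interchangeable.

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")

        # A launch configuration describes the instances the group will start.
        autoscaling.create_launch_configuration(
            LaunchConfigurationName="upload-web-lc",
            ImageId="ami-12345678",     # hypothetical AMI with your app baked in
            InstanceType="m1.small",
        )

        # Keep between 1 and 4 identical instances behind an existing classic ELB.
        autoscaling.create_auto_scaling_group(
            AutoScalingGroupName="upload-web-asg",
            LaunchConfigurationName="upload-web-lc",
            MinSize=1,
            MaxSize=4,
            AvailabilityZones=["us-east-1a", "us-east-1b"],
            LoadBalancerNames=["upload-web-lb"],
        )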

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting back to private. Here's my fstab:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0

    My developer mentioned the "-o default_acl (default="private")" option. The documentation refers to "canned ACLs", but I don't understand what these are.
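
    For reference, a "canned ACL" is just one of a handful of named, predefined permission sets such as private or public-read. A minimal boto3 sketch of re-applying public-read to the bucket and to a single object (the bucket name is taken from the fstab above; the object key is a placeholder):

        import boto3

        s3 = boto3.client("s3")

        # Apply the 'public-read' canned ACL to the bucket itself...
        s3.put_bucket_acl(Bucket="production", ACL="public-read")

        # ...and to an individual object, since bucket and object ACLs are separate.
        s3.put_object_acl(Bucket="production", Key="some/object.txt", ACL="public-read")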

    Read the article

  • log4j-1.2.8.jar gets deleted from the path when a Webservice is created from Eclipse

    - by Seema
    When I try to create a web service from Eclipse, the log4j-1.2.8.jar which is configured in the project's build path just gets deleted, and when I try to invoke the web service it gives an error as below:

        2014-06-05 11:47:48,742 ERROR ServiceRequester:55 - RemoteException
        2014-06-05 11:47:48,742 ERROR ServiceRequester:56 - ------
        AxisFault
         faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.generalException
         faultSubcode:
         faultString: java.lang.NoClassDefFoundError: org/apache/log4j/Logger; nested exception is:
                java.lang.NoClassDefFoundError: org/apache/log4j/Logger
         faultActor:
         faultNode:
         faultDetail:
                {http://xml.apache.org/axis/}hostname:INPUSCPC07719
        java.lang.NoClassDefFoundError: org/apache/log4j/Logger; nested exception is:
                java.lang.NoClassDefFoundError: org/apache/log4j/Logger

    We also tried to place this jar in a different path than where the project is located, but it still deletes the jar from that path too. Can anyone help with this?

    Read the article

  • Terminal Server 2003 Performance Troubleshooting

    - by MikeM
    Let me get your thoughts on terminal server performance problems. The server hosts on average 25 users who, after running some numbers, on average use 600MB of memory with their main applications running (web browser, Adobe Reader, IP phone client). All users are on the same LAN as the server. We constantly experience slow response and short session lockups. Combined CPU usage is on average 10%. What appears strange to me is that the system shows 29GB of physical memory with 25GB of it free. The page file usage is about 50%, averaging 9GB used. Some server specs:
    OS: Server 2003 32-bit Enterprise with the /PAE flag
    RAM: 32GB
    CPU: 2x Quad Core @ 2.27GHz
    HD: RAID5 1.2GB
    After doing basic troubleshooting with Performance Monitor, I am led to believe that the performance problems are caused by the 32-bit OS limitation in addressing the full 32GB of physical memory, even though the /PAE flag is used. Can anyone suggest something - troubleshooting steps that could lead to a more conclusive answer? Thanks

    Read the article

  • monitoring load on AWS EC2

    - by hortitude
    I'm interested in monitoring our EC2 instances to ensure we scale up when necessary. Right now we are monitoring idle CPU time as our metric. We aren't measuring disk IO, as we are not a very disk-intensive application. When running on our own hardware in a datacenter, I also usually monitor "load" from the top command. My question is: does it make sense to monitor "load" in a shared environment such as EC2? If so, how do you interpret the results?
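
    CloudWatch does not report Unix-style load average out of the box (that would have to be pushed as a custom metric with put_metric_data), but the hypervisor-reported CPU figure is already there. A small boto3 sketch of pulling it; the instance ID and region are placeholders.

        import boto3
        from datetime import datetime, timedelta

        cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

        # Average CPU utilisation over the last hour in 5-minute buckets.
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234"}],  # placeholder
            StartTime=datetime.utcnow() - timedelta(hours=1),
            EndTime=datetime.utcnow(),
            Period=300,
            Statistics=["Average"],
        )

        for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
            print(point["Timestamp"], round(point["Average"], 1), "%")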

    Read the article

  • ec2 ami device mapping

    - by hortitude
    I have a large EC2 Ubuntu image and I'm just looking through the devices. I noticed from the metadata that

        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ami
        sda1
        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        sdb

    However, when I look at what is actually mounted, there is /dev/xvda1 and /dev/xvdb (and there is no /dev/sd*). I know that both names look somewhat valid from the AWS documentation, but it looks to me like there is a mismatch between the instance metadata and what is actually on the machine. Why don't they match?
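
    The usual explanation is that the Xen block device driver in these Ubuntu kernels registers the disks as xvd* while the metadata service still reports the sd* names from the block device mapping. A quick Python sketch, run on the instance itself, that prints both views side by side:

        import os
        import urllib.request

        BASE = "http://169.254.169.254/latest/meta-data/block-device-mapping/"

        # What the metadata service claims.
        for name in urllib.request.urlopen(BASE).read().decode().split():
            device = urllib.request.urlopen(BASE + name).read().decode().strip()
            print(f"metadata: {name} -> {device}")

        # What the kernel actually exposes.
        actual = sorted(d for d in os.listdir("/dev") if d.startswith(("xvd", "sd")))
        print("kernel:   " + ", ".join("/dev/" + d for d in actual))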

    Read the article

  • If my Remote Desktop Connection Broker server goes down, can users still access my two Terminal Servers?

    - by Frank Owen
    I would like to set up the Remote Desktop Connection Broker to allow better load balancing of the two terminal servers we have, as well as allowing users to re-establish a connection to the correct server if they get disconnected. My worry is: if I set this up and the server this service is running on goes down, does the terminal server stop accepting connections, or will users just lose the benefit of having RDCB turned on? I don't want to add another point of failure to this equation unless I have to.

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are showing that we have an incomplete chain. We have tried the following to try to resolve this:
    - Upgraded haproxy to the latest 1.5.3
    - Created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root)
    - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file
    The ".pem" file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise about anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance...
    UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainaname.com.pem

    Read the article

  • Tracking costs within one AWS account

    - by caius howcroft
    I have what I'm sure is a very common problem. Our company has many projects and groups working for different clients. We do a lot of our development work in the cloud and deploy our solutions there. We have a VPC set up that isolates projects from each other in their own subnets, and that VPC is getting a hardware VPN connection back to HQ. We need to keep track of the cost run up by every project. The way I currently implement this is by providing my own tools for starting and stopping instances, which log which user (and thus which project) to bill the instance to. This works okay for BoxUsage costs but not for other costs. I could create a separate account for each project and use consolidated billing; this, I think, would allow me to pay once but track costs per "project", but I would then not be able to share common resources (like bringing account B's running instances inside the same VPC). Does anyone have any suggestions? Cheers C
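
    A common alternative to one account per project is tagging: stamp every resource a project starts with a Project tag and activate that tag as a cost-allocation tag in the billing console, so the detailed billing report breaks charges down by it. A minimal boto3 sketch (instance ID, region and tag values are placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Tag an instance (EBS volumes and snapshots can be tagged the same way).
        ec2.create_tags(
            Resources=["i-0abc1234"],
            Tags=[
                {"Key": "Project", "Value": "client-alpha"},
                {"Key": "Owner", "Value": "team-a"},
            ],
        )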

    Read the article

  • How to address an EC2 instance from both inside and outside datacenter?

    - by Alexandr Kurilin
    I'm trying to find a good way of being able to address my EC2 database instance from both inside and outside of the datacenter. Other EC2 instances need to be able to call into it, and other clients like pgAdmin might need to connect to it from the outside world as well. It's my understanding that using the internal and external DNS names is not sustainable long term, as each reboot leads to a change. I'm thinking of associating an Elastic IP with the instance and giving it an A record (say db1.mydomain.com), which I will then use both within and outside the datacenter. Further instances in the same role will get the same treatment and a DNS record of db2.mydomain.com, etc. Now, is there a cleaner and more stable way of achieving this result? Am I going about this the wrong way? Suggestions?
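
    A boto3 sketch of the Elastic IP plus DNS record approach described above; the allocation ID, instance ID, hosted zone ID and address are all placeholders. One thing to weigh up: from inside EC2 the AWS-supplied public DNS name resolves to the private address, whereas connections made to the Elastic IP itself are treated as public traffic.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-west-2")
        route53 = boto3.client("route53")

        # Attach an already-allocated Elastic IP to the database instance.
        ec2.associate_address(
            InstanceId="i-0db11111",
            AllocationId="eipalloc-11111111",
        )

        # Point db1.mydomain.com at that address (UPSERT creates or updates).
        route53.change_resource_record_sets(
            HostedZoneId="Z1EXAMPLE",
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "db1.mydomain.com",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                    },
                }]
            },
        )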

    Read the article
