Search Results

Search found 14056 results on 563 pages for 'odata services'.


  • Are there any torrent clients that run as a service?

    - by snorfys
    I turn off all of my computers at night except for my server. I'd like to be able to have my torrent downloads run on my server, which won't be logged in, so I'm wondering if there are any clients that can run as a service. I'm ideally looking for one that can be passed a path to a .torrent file and will begin downloading it (I'll write the program that will interact with it).
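
    If the server runs Linux, transmission-daemon is one client built to run headless as a system service; a minimal hedged sketch of driving it from your own program (default RPC port 9091; authentication may need enabling in its settings):

        # add a .torrent by path; your program can shell out to this
        transmission-remote localhost:9091 --add /path/to/file.torrent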

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code to the web servers, I do the following for each machine in turn: shut down Apache, update the code, then start the server again and proceed to the next one. I think that while my server is shut down, ELB will not distribute requests to it, but what about the requests it is still serving? I think a better approach is:

    1. Stop accepting new requests from the ELB
    2. Wait a while, and shut down the web server only once all requests have been responded to
    3. Update the code
    4. Start the server again

    But how do I perform (1) and (2) from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
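
    A hedged sketch of steps (1) and (2) using the AWS CLI's classic ELB commands; the load balancer name and instance ID are placeholders:

        # take the instance out of rotation, drain, deploy, put it back
        aws elb deregister-instances-from-load-balancer \
            --load-balancer-name my-elb --instances i-0123456789abcdef0
        sleep 30                          # allow in-flight requests to finish
        sudo apachectl -k graceful-stop   # stop accepting, finish active requests
        # ... update the code ...
        sudo apachectl -k start
        aws elb register-instances-with-load-balancer \
            --load-balancer-name my-elb --instances i-0123456789abcdef0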

    Read the article

  • How to secure a group of Amazon EC2 instances

    - by ks78
    I have several Amazon EC2 instances running Ubuntu 10.04, and I've recently started using Amazon's Route 53 as my DNS. The purpose of doing that was to allow the instances to refer to each other by name rather than private IP (which can change). I've pointed my domain name (via GoDaddy) to Amazon's name servers, allowing me to access my EC2 webservers.

    However, I noticed I can now access the EC2 instances which I don't want to be public, such as the dedicated MySQL server. I was thinking Amazon's Security Groups would still be in effect when using Route 53, but that doesn't seem to be the case.

    Before I started using Route 53, I was thinking of having one instance run a reverse proxy, which would help protect the web servers behind it, and then IP-restricting all the other instances. I know IP restricting can be done using the firewall within each instance, but should I ever need to access them from another IP address, I'd need a way in. Amazon's control panel made it a breeze to open a port when necessary.

    Does anyone have any suggestions for keeping EC2 instances secure, but also accessible to their administrator? Also, what's the best topology for a group of EC2 instances, consisting of web servers and a dedicated database server, from a security perspective? Does having a reverse proxy server even make sense?
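
    For what it's worth, security groups apply regardless of how a name resolves. A minimal hedged sketch of locking things down with EC2-Classic-style group rules (group names and the admin IP are placeholders; VPC uses group IDs instead of names):

        # allow MySQL into the "db" group only from instances in the "web" group
        aws ec2 authorize-security-group-ingress \
            --group-name db --protocol tcp --port 3306 --source-group web
        # allow SSH into "db" only from the administrator's current IP
        aws ec2 authorize-security-group-ingress \
            --group-name db --protocol tcp --port 22 --cidr 203.0.113.10/32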

    Read the article

  • Separation of memory oriented process and CPU oriented process

    - by Jeevan Dongre
    I am a DevOps guy working for an e-commerce company. I am running my e-commerce application, built with Ruby on Rails (Spree Commerce), on 2 medium instances in production. One is a high-memory instance with 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU; AWS calls these m1.medium and c1.medium respectively. My question: is it possible to separate the processes into CPU-intensive and memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some heads up. Thank you.
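
    A simple starting point for spotting the heavy processes on each box, as an illustrative sketch (standard GNU ps):

        ps aux --sort=-%cpu | head -n 10   # biggest CPU consumers
        ps aux --sort=-%mem | head -n 10   # biggest memory consumers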

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable.

    The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are done by the service provider, though we keep our own backups too. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course.

    Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?
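
    The "DNS geo-detection" piece, at least, exists off the shelf. A hedged sketch using Route 53 latency-based routing (zone ID, domain, and IP are placeholders; health checks can be attached for the "operational" part):

        aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
          --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{
            "Name":"www.example.com","Type":"A","SetIdentifier":"us-east",
            "Region":"us-east-1","TTL":60,
            "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'
        # repeat with a different SetIdentifier/Region/IP for each mirror location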

    Read the article

  • EC2 Auto-Scaling with Spot and On-Demand Instances?

    - by platforms
    I'm looking to optimize the cost of our auto-scaling EC2 groups by having them launch spot instances instead of on-demand instances. What I really want is to be able to keep some servers in the group as on-demand instances, regardless of what happens to the spot instance pricing market. Then I want any additional servers in the group, above my configured minimum, to be spot instances. I'm generally OK with the delay in adding servers via spot requests.

    I can't seem to find any way to do this, and I've tried to scour the AWS documentation. It appears that an ASG can either be on-demand or spot, but not a hybrid. I could possibly manually add an on-demand instance to the Elastic Load Balancer assigned to the auto-scaling group, but then the load of that server would not be factored into the auto-scaling measurements and triggers. I suppose I could enter a ridiculously high bid price in order to ensure that I always get the servers I need, but then I look at the pricing history and see occasional large spikes.

    The AWS documentation is at odds with itself: in one place it says that if you enter a server minimum, that number is "ensured" to be there, but when you read about spot instances, there are no such assurances. The price differential for spot is compelling, so I'd like to leverage that as much as I can while still maintaining an always-on baseline. Is this possible?
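
    One workaround (not a native hybrid ASG, just a common pattern) is two groups attached to the same ELB: a fixed on-demand group for the baseline and a spot group for the overflow. A hedged AWS CLI sketch; all names, the AMI, and the bid price are placeholders:

        aws autoscaling create-launch-configuration \
            --launch-configuration-name lc-ondemand \
            --image-id ami-12345678 --instance-type m1.medium
        aws autoscaling create-launch-configuration \
            --launch-configuration-name lc-spot \
            --image-id ami-12345678 --instance-type m1.medium --spot-price "0.05"
        # baseline: always-on on-demand capacity
        aws autoscaling create-auto-scaling-group \
            --auto-scaling-group-name asg-baseline \
            --launch-configuration-name lc-ondemand --min-size 2 --max-size 2 \
            --availability-zones us-east-1a --load-balancer-names my-elb
        # burst: spot capacity scaled on top of the baseline
        aws autoscaling create-auto-scaling-group \
            --auto-scaling-group-name asg-burst \
            --launch-configuration-name lc-spot --min-size 0 --max-size 10 \
            --availability-zones us-east-1a --load-balancer-names my-elb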

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:

    - Single m1.small instance (web server)
    - Single RDS m1.small db
    - Joomla 1.5

    Generally, the site is performant, but is fairly low-traffic - say around 50-100 visits/hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:

    - CPU usage on the web server slowly climbs to 100%
    - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
    - Database connections shoot up to about 140, from a normal average of about 2 or 3

    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the mysql process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.

    UPDATE: At least, can someone confirm that I'm definitely right in saying that the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
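
    Slow query logging is actually reachable on RDS through a DB parameter group; a hedged sketch (the group name is a placeholder, and RDS writes the log to a table by default):

        aws rds modify-db-parameter-group --db-parameter-group-name mygroup \
            --parameters \
            "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
            "ParameterName=long_query_time,ParameterValue=2,ApplyMethod=immediate"
        # then, from any MySQL client:
        #   SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;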

    Read the article

  • VMWare Player Run as Service

    - by Marshal
    I have VMware Player running a VM at all times, but I want the server to start that virtual machine on boot, without anyone logging in. This would basically let me restart the server and have the VM already running by the time the login screen appears. Any ideas? I have tried reading up on it, but websites suggest VMware Server, which VMware no longer supports. So I'm looking to either make VMware Player run as a service, or hear any other ideas. I'm running Windows 7 on my server, by the way. Thanks for looking.
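
    A hedged sketch for Windows: start the VM at boot via a scheduled task running as SYSTEM, using VMware's vmrun utility (shipped with the VIX API, which installs alongside Player; all paths and names are placeholders):

        schtasks /create /tn "StartMyVM" /sc onstart /ru SYSTEM ^
            /tr "\"C:\Program Files (x86)\VMware\VMware VIX\vmrun.exe\" start \"C:\VMs\myvm.vmx\" nogui"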

    Read the article

  • In SSRS 2000 dynamically setting the datasource

    - by Christopher Kelly
    Is there a way in SSRS 2000 to set the datasource that a report is using via the webservice? I am currently generating reports using the SSRS2000_ReportService.ReportingService webservice and want to dynamically switch between a couple of shared data sources on demand. I am using C# but could adapt other languages if needed.

    Read the article

  • backup aws ec2 to separate account

    - by Paul de Goede
    I want to back up my AWS snapshots to a completely separate AWS account for additional security (if my AWS credentials were acquired, someone could delete all my snapshots and volumes). But I'm a bit stumped on how to do this. There doesn't seem to be a way to store a volume or snapshot in S3 such that another user could access that data in S3 and store it in a separate AWS account. Does anyone have any suggestions on how to achieve this? Thanks
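
    One hedged approach that avoids S3 entirely: share each snapshot with the backup account, then copy it from that account, so the copy is owned (and deletable) only by the backup account. The account ID, region, and snapshot ID below are placeholders; this works for unencrypted snapshots:

        aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
            --attribute createVolumePermission --operation-type add \
            --user-ids 111122223333
        # then, running with the backup account's credentials:
        aws ec2 copy-snapshot --region us-east-1 --source-region us-east-1 \
            --source-snapshot-id snap-0123456789abcdef0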

    Read the article

  • Multi domain server with dedicated SSL's HA

    - by user3692800
    I am hosting a server with 150 domain names (websites), and each SSL certificate requires a dedicated IP address. So: a Windows 2008 server with 150 IP addresses and 150 websites. I need a high-availability solution. I am thinking of setting it up on AWS, but ELB will not be a solution... and the maximum number of IPs I can get per instance is 12. So what can I do to have all 150 sites hosted on one instance and be HA, with an instance in a different availability zone?

    Read the article

  • Force HTTPS with AWS Elastic load balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS on my Elastic Load Balancer. I tried using Apache mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?
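
    For comparison, a commonly suggested variant of the same idea (assuming the rules live in the site's VirtualHost or an .htaccess with mod_rewrite enabled, and that the ELB has an HTTPS listener so the header is actually set):

        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} =http
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]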

    Read the article

  • How to add a second domain to an EC2 instance with Elastic & Route 53

    - by memeLab
    I've got my domain site.com running on EC2, using an Elastic IP and Route 53. I want to park site.net so that it resolves to the same site. I've looked up "Migrating an Existing Domain to Route 53" in the docs, but can't find any mention of how to add a second domain! I figured I'd have to create an A record, but when I do so, the record is created as site.net.site.com... not quite what I'm after! I've also done searches for mixes of 'route 53', 'park domain', 'addon domain', 'second domain', but no dice. My prior experience is with cPanel and Plesk, so I'm a bit lost! Any pointers would be appreciated! TIA
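
    The site.net.site.com result suggests the record was created inside site.com's hosted zone, where Route 53 appends the zone name. A hedged sketch of the usual fix: give site.net its own hosted zone, add the A record there, and point the registrar at that zone's name servers:

        aws route53 create-hosted-zone --name site.net \
            --caller-reference parked-$(date +%s)
        # add an A record for site.net in the NEW zone, pointing at the same
        # Elastic IP, then update site.net's name servers at the registrar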

    Read the article

  • Citrix Mixed Server Session problems

    - by oldskool
    Hello, we run a Citrix farm with 4 x Windows 2008 x64 servers (XenApp 5.0 FP3) and one Windows 2003 x86 server (XenApp 4.6). Yesterday we had a strange problem with session logins: existing sessions kept working without problems, but new logins were not possible. In the AMC we could not see the sessions on the servers. After a reboot, everything worked fine again. The 32-bit server has been in our farm for 3 days (for one application). Could this behavior be caused by the mixed farm? (During this time we also got a lot of WMI warnings that the provider CitrixEventProv has been registered but does not correctly impersonate user requests.)

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently, and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this?

    I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down.

    I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.
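
    The S3+CloudFront route mentioned above is the usual way to break the local-filesystem dependency; a minimal hedged sketch (the bucket name is a placeholder):

        # push CMS uploads to S3 so no single web server owns the content
        aws s3 sync /var/www/uploads s3://my-static-assets/uploads --acl public-read
        # serve the bucket through CloudFront and reference those URLs from the CMS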

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these IP rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3  proto kernel  scope link  src 10.1.1.190
        local 10.1.1.190 dev eth3  proto kernel  scope host  src 10.1.1.190
        broadcast 10.1.1.255 dev eth3  proto kernel  scope link  src 10.1.1.190
        broadcast 10.1.3.0 dev eth0  proto kernel  scope link  src 10.1.3.12
        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12
        broadcast 10.1.3.255 dev eth0  proto kernel  scope link  src 10.1.3.12
        broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
        local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
        local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
        broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI, they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:

        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:

    1. I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003.
    2. Use iptables to mangle these packets and update the local table route to include the mark information.

    But I'm not sure if these are the right approaches.
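
    A hedged sketch of option (1), re-prioritizing the local table so the ENI rule matches first. Ordering matters: add the copy before deleting the pref-0 rule, and test from a console session, since a mistake here can cut off networking:

        sudo ip rule add pref 100 from all lookup local          # re-add local lower down
        sudo ip rule del pref 0                                  # remove it from the top slot
        sudo ip rule add pref 50 from 10.1.1.190 lookup 10003    # ENI traffic matches first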

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    OK, so I've been racking my brain for DAYS on this dilemma. I have a VPC setup with a public subnet and a private subnet. The NAT is in place, of course. I can connect via SSH to an instance in the public subnet, as well as to the NAT. I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine.

    But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem user@1.2.3.4 -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10.

    Now I heard iptables is the job for this, so I went ahead, researched, and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection-refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions, in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
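
    A hedged guess at the two most common missing pieces for this kind of DNAT forwarding on EC2 (the instance ID is a placeholder):

        # 1) the kernel must actually forward packets between interfaces
        sudo sysctl -w net.ipv4.ip_forward=1
        # 2) EC2 drops traffic not addressed to the instance itself unless the
        #    source/destination check is disabled on the forwarding instance
        aws ec2 modify-instance-attribute \
            --instance-id i-0123456789abcdef0 --no-source-dest-check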

    Read the article

  • Process keeps creating dump files

    - by Pieter
    We have a Delphi application running on a terminal server that keeps generating dump files. For the same PID, it keeps creating dump files at an interval of around 1 second until the process is killed manually. Another weird thing is the names of the dump files:

        ±_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp
        ÷_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_42.dmp
        k_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_39.dmp
        Ô_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_41.dmp
        Ž_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp

    The dump files aren't telling us much, and we would like a suggestion on where we should start looking.

    Read the article

  • AWS RDS Mysql with benstalk Hibernate app: Character encoding issue

    - by TeraTon
    I'm running a webapp on Amazon with Tomcat 7 and Spring, which uses Hibernate as the persistence layer, with the database on Amazon RDS. The application and UTF-8 encoding work properly on localhost, but for some reason when I deploy to Amazon, the UTF-8 encoding breaks. I use MySQL 5.5.27 on Amazon RDS, and the table that we wish to update has its collation set to utf8 - utf8_unicode_ci. And in Hibernate I have set:

        <prop key="hibernate.connection.charSet">UTF-8</prop>

    UTF-8 characters get replaced by ??? and this is of course especially bad for passwords and usernames + email, as it basically kills them. Has anyone else encountered character encoding breaking when deploying to Amazon?
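
    One hedged suggestion (not from the original post): with MySQL Connector/J, the encoding usually also has to be set on the JDBC URL itself, and on the server side via the RDS parameter group. The hostname and group name below are placeholders:

        # JDBC URL:
        #   jdbc:mysql://mydb.example.rds.amazonaws.com:3306/mydb?useUnicode=true&characterEncoding=UTF-8
        # and in the DB parameter group (static parameter, applied on reboot):
        aws rds modify-db-parameter-group --db-parameter-group-name mygroup \
            --parameters "ParameterName=character_set_server,ParameterValue=utf8,ApplyMethod=pending-reboot"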

    Read the article

  • Could SQL Server 2008 replication be used with NLB to allow unlimited scaling of reporting servers?

    - by John Keranos
    We are currently using transactional replication in SQL Server 2008 to keep a secondary reporting server synchronized with a primary database server. This has been working well and keeps some of the load off the primary server. Would it be possible to scale this solution to multiple reporting servers? We're expecting an increased load of read-only queries, and it would be nice to be able to add reporting servers as needed. The general idea was the following:

    - Each reporting server would use a "pull" subscription to get the data from the primary database publication. These reporting databases could be a couple of minutes behind the primary server without it being an issue.
    - The reporting servers would be NLB'd together.
    - All read-only queries would be directed to the NLB, which should spread the load across the servers.

    Read the article

  • "service"-command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/. However, when I start the service using the service command, it doesn't work. From man service:

        service runs a System V init script in as predictable environment as
        possible, removing most environment variables and with current working
        directory set to /.

    So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hardcode the path into the script, but I'd like to know how to use it with the service command.
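
    A hedged sketch of the usual convention: have the init script source a per-service config file, which survives service(8)'s environment scrubbing. "myapp" and the variable are placeholders; Red Hat-style distributions use /etc/sysconfig/<name>, Debian-style use /etc/default/<name>:

        echo 'MYAPP_HOME=/opt/myapp' | sudo tee /etc/sysconfig/myapp
        # and near the top of /etc/init.d/myapp:
        #   [ -f /etc/sysconfig/myapp ] && . /etc/sysconfig/myapp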

    Read the article
