Search Results

Search found 4367 results on 175 pages for 'cloud'.


  • Memcached session manager in Azure: Connection gets forcibly closed

    - by Edgar Pérez
    I am using Memcached Session Manager to handle Tomcat sessions in non-sticky mode. My deployment in Azure consists of a Worker Role with two instances which connect to an Azure VM running my Memcached server. Everything works pretty well: my session is persisted and retrieved by either of the two instances transparently. The problem arises when the session is idle for about 4 minutes; everything points to the Azure load balancer closing the spymemcached connection to the VM after some period of inactivity. My MSM configuration is this:

        <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                 memcachedNodes="n1:my-azure-vm.cloudapp.net:11211"
                 sticky="false"
                 sessionBackupAsync="false"
                 sessionBackupTimeout="10000"
                 lockingMode="uriPattern:/path1|/path2"
                 requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js|ttf|eot|svg|woff)$"
                 transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
                 customConverter="de.javakaffee.web.msm.serializer.kryo.HibernateCollectionsSerializerFactory"/>

    The stack trace printed by the spymemcached client is this:

        INFO net.spy.memcached.MemcachedConnection: Reconnecting due to exception on {QA sa=/10.194.132.206:13000, #Rops=1, #Wops=0, #iq=0, topRop=net.spy.memcached.protocol.binary.StoreOperationImpl@1d95da8, topWop=null, toWrite=0, interested=1}
        java.io.IOException: An existing connection was forcibly closed by the remote host
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(Unknown Source)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
            at sun.nio.ch.IOUtil.read(Unknown Source)
            at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
            at net.spy.memcached.MemcachedConnection.handleReads(MemcachedConnection.java:303)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:264)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:184)
            at net.spy.memcached.MemcachedClient.run(MemcachedClient.java:1298)

    Given this idle-time limitation in Azure, is there any other way to make MSM work in the Azure cloud?
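
    One workaround I am considering is keeping the connection from ever looking idle to the load balancer by tightening the OS keepalive timers on the VM. This is only a sketch, assuming the memcached VM is Linux, and it only helps for sockets that actually enable SO_KEEPALIVE; the timer values are illustrative:

        # /etc/sysctl.conf on the memcached VM - probe well before the
        # ~4 minute Azure idle timeout (values are illustrative)
        net.ipv4.tcp_keepalive_time = 120
        net.ipv4.tcp_keepalive_intvl = 30
        net.ipv4.tcp_keepalive_probes = 8

        # apply without a reboot
        sysctl -p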

    Read the article

  • Implement QoS/Bandwidth Management or Upgrade Bandwidth?

    - by Michael
    A question that I'm faced with currently. Here's my setup:

        Cisco ASA 5510
        15Mbps Internet Connection @ $1350/month

    The bandwidth was originally meant for 35-45 people, but we've grown quite quickly to roughly 60-65 people. Needless to say, when I check the bandwidth logs they're almost always spiked at 15Mbps. I did use Wireshark to do some poking around to see what was hogging our bandwidth, but with everything running through CDNs and cloud services it proved difficult to get a good grasp of where our bandwidth was going. So the question is: do I ONLY implement bandwidth management through the ASA, OR upgrade the Internet to 50Mbps ($1600/month) and then implement bandwidth management through the ASA? Any suggestions on how to segment the 15Mbps connection if we decide ONLY to go with the bandwidth management solution? Thanks.

    UPDATE 1
    Installed PRTG and used its packet-content sensor to monitor the traffic. As I suspected, still pretty vague. My top connections include the following:

        a204-2-160-16.deploy.akamaitechnologies.com
        ec2-50-16-212-159.compute-1.amazonaws.com
        a204-2-160-48.deploy.akamaitechnologies.com
        a72-247-247-133.deploy.akamaitechnologies.com
        mediaserver-sv5-t1-1.pandora.com

    Other than the Pandora destination, the rest doesn't tell me much about how to properly control the bandwidth. Any thoughts or suggestions? Thanks. M
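
    For the segmentation question, the general shape of a policing config on the ASA would be something like the sketch below. This is illustrative only (verify the syntax against your ASA OS version); the ACL and the 2Mbps rate are placeholders:

        access-list STREAMING-ACL extended permit tcp any any eq 80
        class-map STREAMING
         match access-list STREAMING-ACL
        policy-map OUTSIDE-POLICY
         class STREAMING
          police output 2000000 37500
        service-policy OUTSIDE-POLICY interface outside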

    Read the article

  • TeamCity EC2 Integration via ISA Server

    - by Tim Long
    I have a TeamCity server which is actually installed on SBS 2003 Premium with ISA Server (firewall/proxy) installed. My ADSL connection has multiple IP addresses, which all resolve directly to my SBS external NIC. The NIC is therefore multi-homed and I have allocated one of the IP addresses specifically to TeamCity. In ISA, I've created an access rule to allow the traffic in. I can access my TeamCity server externally and view the web interface, that all works fine. I want to use the Amazon EC2 integration in TeamCity to launch build agents 'in the cloud'. The problem I am having is that when the agent starts, it sees the server and registers, then just sits there waiting. On the server side, the agent appears as 'disconnected'. Examining the settings, the agent's IP address appears to be that of the external NIC. What I think might be happening is that the traffic is undergoing Network Address Translation (NAT) so that TeamCity always thinks the agent is locally installed and therefore can't communicate with the actual remote agent. This seems to happen even though I have a permanent static IP address dedicated to TeamCity. So, the question is this. How can I make traffic to a specific IP address pass through the ISA server un-NATted?

    Read the article

  • 16-bit MS-DOS Subsystem: csrss.exe

    - by Wesley
    Hi all, I just booted up my Samsung N120 netbook (with Windows XP Home SP3) and a dialog box came up with a command prompt window behind it. The dialog box is titled "16 bit MS-DOS Subsystem" and the message is as follows:

        C:\DOCUME~1\SAMSUNG\csrss.exe
        The NTVDM CPU has encountered an illegal instruction.
        CS:0544 IP:0117 OP:63 00 64 00 34
        Choose 'Close' to terminate the application.

    This only started on my most recent boot-up. One thing to note: when I downloaded the Dropbox installer and opened it up, Panda Cloud Antivirus detected a suspicious file, which was csrss.exe, and "neutralized it." However, an actual virus or trojan was not detected immediately before the file was detected and neutralized. Just under two weeks ago, a trojan and two viruses were detected for some odd reason. (I only went to websites I knew, and I do not torrent or browse adult sites.) Anyhow, the two viruses came up in temporary files and the trojan was "neutralized." Anyway, the main question is: how can I repair the csrss.exe file such that Windows XP starts up properly? A screenshot can be posted upon request. Thanks in advance!
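
    One thing I want to verify first is whether this csrss.exe is the real system file at all: the legitimate one lives in C:\WINDOWS\system32, so a copy under a user profile would be an impostor that something is re-launching at logon. A few quick checks from cmd.exe (paths taken from the dialog above):

        dir C:\WINDOWS\system32\csrss.exe
        dir "C:\Documents and Settings\SAMSUNG\csrss.exe"
        rem Look for a startup entry pointing at the profile copy:
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"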

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; fairly new to AWS and Puppet. So coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

        - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
        - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
        - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, and what makes the rather high cost for any reasonable performance worth it (those Amazon machines are slower than my phone...)
        - deployment controlled by Capistrano; this makes things a lot easier

    So in AWS you get those super nasty public/private machine DNS names... no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from the tag "ShortName", thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Anyone any clue how to achieve this? Thanks
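
    One route I am looking at is exposing the tag to Puppet as a custom fact and matching nodes on that instead of the hostname. A minimal sketch, assuming the aws CLI is installed with a region configured, the instance role may call ec2:DescribeTags, and Facter supports external facts (script dropped into /etc/facter/facts.d/):

        #!/bin/bash
        # Emits the ShortName tag as a "shortname" fact for Puppet.
        # Assumes AWS_DEFAULT_REGION or ~/.aws config is set.
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORTNAME=$(aws ec2 describe-tags \
          --filters "Name=resource-id,Values=${INSTANCE_ID}" \
                    "Name=key,Values=ShortName" \
          --query 'Tags[0].Value' --output text)
        echo "shortname=${SHORTNAME}"

    Manifests could then branch on $::shortname (e.g. a regex match on a perl-cluster prefix) rather than on the private DNS node name.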

    Read the article

  • How do I fix the issue causing "incomplete startup packet" log messages when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and Postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database system is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database system is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so:

        2013-08-26 13:01:38 CDT LOG: entering standby mode
        2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
        2013-08-26 13:01:38 CDT LOG: incomplete startup packet
        2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
        2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
        cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
        2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:41 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT LOG: incomplete startup packet
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    thanks! brad
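
    One detail worth checking: "FATAL: the database system is starting up" is exactly what a standby answers to ordinary client connections while hot_standby is off, and "incomplete startup packet" is typically just a port probe or monitor opening and dropping a connection, so neither necessarily means replication is broken. A sketch of the relevant slave-side settings, assuming a stock 9.2 layout (connection details are placeholders):

        # postgresql.conf on the slave
        hot_standby = on        # allow read-only connections during recovery

        # recovery.conf on the slave
        standby_mode = 'on'
        primary_conninfo = 'host=MASTER_IP port=5432 user=replicator'
        restore_command = 'cp /var/lib/postgresql/9.2/archive/%f %p'

    The cp error for segment ...3A should be harmless: the standby asks the archive for the next WAL file before falling back to streaming.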

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status: Connecting to 1.1.1.1:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command: USER username
        Response: 331 Password required for username.
        Command: PASS ***************
        Response: 230 User username logged in.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    and I get this from my /var/log/secure file:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers
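
    The login succeeding but LIST timing out right after PASV points at the passive data ports being blocked rather than at PAM. A commonly suggested fix is to pin ProFTPD's passive range and open it in the firewall; a sketch (range and address are placeholders):

        # /etc/proftpd.conf
        PassivePorts 49152 65534
        # If behind NAT, advertise the public address in PASV replies:
        # MasqueradeAddress 1.1.1.1

        # and allow the range through iptables
        iptables -A INPUT -p tcp --dport 49152:65534 -j ACCEPT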

    Read the article

  • Active Directory Domain Services - Network Name Cannot be Found

    - by Arief
    I have a really weird problem that I cannot explain. I am trying to redirect all users' home folders to a new server. I have copied all the files, including the permissions, to the new server. All I need to do is update each user profile's home folder by changing the server name. However, I get a "network name cannot be found" message when I enter the new server name. My AD server can resolve the name with both ping and nslookup; the only thing I don't understand is why the MMC cannot resolve it. I tried the IP address instead and I still get the same error message. Thank you so much for your help.

    UPDATE: I know what the problem seems to be, but I don't know how to fix it. The new server that will host the home folders is actually sitting in the cloud, in a different IP range than the domain controller. The domain controller is sitting locally in the office on 10.0.0.0/24 addresses. The new server, sitting in a data centre, is on 172.10.10.10/24 addresses. Static routes have been set up on both ends, and DNS as well. I believe this is the issue. Does anyone know how to overcome this situation? Thank you.
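
    When the MMC fails like this even though ping and nslookup work, one usual suspect is the SMB port rather than name resolution, since the dialog has to reach the share over SMB to validate the path. Some quick checks from the domain controller (hostname and IP are placeholders):

        ping newserver.example.local
        nslookup newserver.example.local
        net view \\newserver.example.local
        telnet 172.10.10.10 445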

    Read the article

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read some articles, e.g. "Keepalived over GRE tunnel for failover on VPS environment": http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/ - but I don't know how to configure this, or what this architecture is called. I only know how to set up a master/backup configuration in keepalived. What I want to know is how keepalived works with the unicast patch module, since ELB is expensive. I want to design this:

        XMPP Server (EC2)
        |
        -------------------------------------------------
        keepalived Master (EC2) - keepalived Backup (EC2)
        HAProxy #1                HAProxy #2
        -------------------------------------------------
        |
        Cassandra#1 Cassandra#2 Cassandra#3 Cassandra#4

    This was the first overall design:

        [Flow] ELB -- XMPP Server -- ELB -- Cassandra

        ELB
        |
        XMPP#1 XMPP#2 XMPP#3 XMPP#4
        |
        ELB
        |
        Cassandra#1 Cassandra#2 Cassandra#3 Cassandra#4

    Then I changed the first design; this is the second:

        [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra

        ELB
        |
        XMPP#1 XMPP#2 XMPP#3 XMPP#4
        |
        -------------------------------------------------
        keepalived Master (EC2) - keepalived Backup (EC2)
        HAProxy#1                 HAProxy#2
        -------------------------------------------------
        |
        Cassandra#1 Cassandra#2 Cassandra#3 Cassandra#4

    Or is this one OK: [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra?

        ELB
        |
        HAProxy#1 HAProxy#2 HAProxy#3 HAProxy#4
        |
        XMPP#1 XMPP#2 XMPP#3 XMPP#4
        |
        Cassandra#1 Cassandra#2 Cassandra#3 Cassandra#4

    Thanks!
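
    For the keepalived part specifically, the unicast patch (later merged into mainline keepalived) lets VRRP run without multicast by listing peers explicitly, which is what EC2 requires. A minimal sketch for the HAProxy pair (addresses are placeholders; note that on EC2 the floating address also has to be moved via the EC2 API from a notify script, since ARP-based VIP takeover doesn't work there):

        # /etc/keepalived/keepalived.conf on HAProxy #1 (master)
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            unicast_src_ip 10.0.0.10    # this node
            unicast_peer {
                10.0.0.11               # HAProxy #2
            }
            virtual_ipaddress {
                10.0.0.100              # floating VIP
            }
        }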

    Read the article

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial - it's what the app does on behalf of their site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly until we encounter SSL. Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions.

    BACKGROUND: We are a growing 'big(ish)'-data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet in the face of spending time to maybe rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

        1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know, after more optimization, with respect to Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
        2. What are the key metrics I need to gather from Apache and MySQL, other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
        3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially having it be a gamble, aside from basic homework) to dive/break into a whole new environment and try to - or end up - finding a way of more efficiently producing your data/site product?
        4. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else where your company is very niche and very, very technical (scientifically - or anybody for that matter)?

    Thanks very much for your input - I think this thread could be valuable to others as well.
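
    On the metrics question, a hedged starting point on a stock Linux/MySQL box (all standard tools; intervals and thresholds are illustrative, and the slow-log settings assume MySQL 5.1+):

        # sample disk, CPU/memory and network trends at 60s intervals
        iostat -dxk 60
        vmstat 60
        sar -n DEV 60

        # count slow queries and overall query volume
        mysqladmin extended-status | grep -Ei 'slow_queries|questions'

        # /etc/my.cnf - log anything slower than 1s for later analysis
        slow_query_log = 1
        long_query_time = 1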

    Read the article

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project, and despite using Git for version control etc., we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP (this way I have control of, and responsibility for, everything that goes live), using FileZilla. I've done this for years, but we now have some large PHP classes (300KB) and a lot of traffic. So in short, every time I upload a key class - "general", for example - the site goes down until the file finishes uploading. This is only 5-6 seconds at a time, but it is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files... but really there must be a better way? I've heard about rsyncing code across from another server, but I don't see how this prevents switching to the new file whilst uploading. We only have one server (for DB and Apache) but also use some cloud servers (for OpenX as an example).
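
    The usual answer to the half-uploaded-file window is to upload to one side and then switch with an atomic rename, e.g. a whole-release symlink swap. A minimal sketch (paths are placeholders; it assumes the Apache docroot points at /var/www/current):

        #!/bin/bash
        # upload/rsync the new code into a fresh release directory
        RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
        rsync -a ./build/ "$RELEASE/"

        # swap the symlink in one rename() - no request ever sees a partial file
        ln -s "$RELEASE" /var/www/current.tmp
        mv -T /var/www/current.tmp /var/www/current

    As a side note, rsync by default writes each incoming file to a temporary name and renames it into place, so even a plain per-file rsync avoids the truncated-file window that FTP has.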

    Read the article

  • How can I download a file directly to a web server?

    - by matt
    I'd like to download files directly to a hosted server, whether it's one I set up myself or a hosted service like Dropbox. For example, when I download a podcast, instead of downloading it to my computer and then uploading it to the server, how can I have it download directly to the cloud? My interest here is reducing the traffic I'm using over a metered data plan on my laptop, so I don't want my computer acting as a physical intermediary caching the file. Ideally, there would be some way for me to take a download link and tell it to go directly to my server. How can I accomplish this? I realize that this question potentially involves a "webapp" and potentially involves "server administration", and since my goal is to cut my computer out of the loop I can see people saying this is off-topic and should be on another site. My issue is this: I don't know if this is going to be a webapp solution or a server solution, but I do know that regardless I'm going to be using a computer to get it done, and I am replacing a function that's currently done on my computer, so I figured I'd ask it here. If I was wrong and this definitely should be at Web Apps, feel free to let me know or just migrate it.
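
    For a server you control, the simplest shape of this is to run the download on the server itself over SSH; only the URL and the command travel over the laptop's metered link. A sketch (host, paths and URL are placeholders):

        # fetch straight to the server's disk
        ssh user@myserver.example.com 'wget -P /srv/podcasts "http://example.com/episode42.mp3"'

        # or detach it so it keeps going after you disconnect
        ssh user@myserver.example.com 'nohup wget -P /srv/podcasts "http://example.com/episode42.mp3" >/dev/null 2>&1 &'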

    Read the article

  • AWS Application - Subscription Payments Scheduled but Not Initializing

    - by nicorellius
    I briefly browsed the AWS forums, but those are not nearly as easy to use and efficient as Stack Exchange-derived ones, so here I am... The company I work for has an app on the AWS cloud. In general the app works well. However, since it is new, we haven't had many customers, and the ones who signed up are now coming to the point where their subscriptions to our service should be renewed. In comes my question. When I query the database for the payment status of certain subscriptions using the console, I get what I would expect:

        Instrument XYZ is on a Monthly subscription (subscription=xxxyyyzz).
        The subscription is active with an expiration date of 19 Apr 2010 10:43 Z
        The payment token will expire on 1 Dec 2012 12:00 Z
        There are xy runs remaining.
        The subscription fee of $XX.xx will be charged on 17 Apr 2010 10:43 Z

    OK, this is great. According to this, I would expect the next payment to be initiated on 17 Apr. The reason I am asking is that for a different user last month I got this same output, but the payment never went through, i.e. it was not initiated through Amazon Payments. The user didn't see the payment go through and neither did we. It should be noted that the initial payments were received. The sign-up process works, and if the user goes to "pay" from within our app, they will be directed to https://payments.amazon.com to make their payment. Any ideas?

    Read the article

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router. Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP; port forwarding is only supported when the traffic is being directed to the LAN subnet. I am willing to restructure my network topology if required, with the following conditions:

        - Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory.
        - Router #1 cannot be moved or disconnected from the cloud.
        - The server cannot be given a second IP address.
        - I would prefer the server to have a private IP address - e.g. 10.0.0.10
        - I'd like to keep Router #2, but it can have a private IP - e.g. 10.0.1.10

    Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.

    Read the article

  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say:

        - a load balancer in a failover pair
        - a reverse proxy tier
        - a web app tier
        - a db tier

    Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? i.e. how does each tier get the IP addresses of each node in the tier below? For example, if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to them? (See the health-check sketch below.) I could run an agent that would update all the configs on all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B, to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic only to live nodes in the following tier? If I was hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are these relevant here? What are the virtual IPs they talk about, and how are they managed? Are there still single points of failure when using them?
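
    On the "don't send traffic to dead nodes" point, active health checks in the proxy tier are the standard self-hosted answer; this is essentially what an ELB does for you. A sketch in haproxy syntax (addresses and the /healthz path are placeholders):

        backend webapp
            option httpchk GET /healthz
            server app1 10.0.1.11:8080 check inter 2000 fall 3 rise 2
            server app2 10.0.1.12:8080 check inter 2000 fall 3 rise 2

    The virtual IP from heartbeat/keepalived addresses the other half of the question: the LB pair shares one floating address, and the backup claims it when the master dies, so the tier above only ever needs one IP per LB pair rather than the IPs of every LB node.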

    Read the article

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious if others have encountered this as well, and if there is anything at all that can be done. We have a setup with multiple AWS EC2 Linux machines sitting behind an ELB. The EC2 machines are running Nginx. Let's refer to these as my production machines (because they are!). I also have a Rackspace cloud machine running Apache - completely separate. Let's call this the test server. Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check, the IP often changes. In fact, I noticed that when I check on http://www.whatismyip.com, the IP seems to be stable (doesn't change) across refreshes. But on http://www.whatismyipaddress.com, on refreshing, the IP changes! (So my ISP is doing weird stuff.) Now, back to my setup. I noticed a couple of things:

        1. Checking the REMOTE_ADDR variable from PHP when connecting to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP that doesn't change.
        2. Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that does change on refreshes.
        3. Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP.

    Has anyone experienced this odd behavior? Is there nothing that I can do? And which IP should I "trust" (the one Apache gives, or the one the ELB and Nginx give)? Thanks! Chris
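
    If the goal is for PHP behind the ELB to see the client address in REMOTE_ADDR, nginx's realip module can substitute the X-Forwarded-For value for connections coming from trusted addresses. A sketch (the CIDR is a placeholder for wherever your ELB lives):

        # inside http {} or server {} in nginx.conf
        set_real_ip_from 10.0.0.0/8;
        real_ip_header X-Forwarded-For;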

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    Strange things are happening on my live server that are normal on my local server. My local server is a Mac, and my live server is Linux. Consider my trying to access some files:

        http://redddor.babonmultimedia.com/assets/images/map-1.jpg

    This works correctly.

        http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php

    This returns a 404. I'm pretty sure my file is there and there is no typo. How come it gives me a 404? There is only one .htaccess, on the root of the server, and its configuration is like this:

        # For full documentation and other suggested options, please see
        # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions
        # including for unexpected logouts in multi-server/cloud environments
        # and especially for the first three commented out rules
        #php_flag register_globals Off
        #AddDefaultCharset utf-8
        #php_value date.timezone Europe/Moscow
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /
        <IfModule mod_security.c>
        SecFilterEngine Off
        </IfModule>
        # Fix Apache internal dummy connections from breaking [(site_url)] cache
        RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
        RewriteRule .* - [F,L]
        # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin
        #RewriteCond %{HTTP_HOST} .
        #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        #RewriteRule (.*) http://www.example.com/$1 [R=301,L]
        # Exclude /assets and /manager directories and images from rewrite rules
        RewriteRule ^(manager|assets)/*$ - [L]
        RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L]
        # For Friendly URLs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        # Reduce server overhead by enabling output compression if supported.
        #php_flag zlib.output_compression On
        #php_value zlib.output_compression_level 5
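
    One difference worth ruling out first: the default Mac filesystem is case-insensitive and Linux is not, so a path that works locally can 404 live if any directory or file name differs in case. Also note that the exclusion rule only matches the directory roots ("/*" means zero or more slashes, not "anything underneath"), so check.php relies solely on the !-f condition to escape the friendly-URL rewrite. Some quick checks on the server (the docroot path is a placeholder):

        # verify the exact on-disk casing of the path
        ls -l /var/www/html/assets/modules/evogallery/
        # watch the Apache error log while requesting the URL
        tail -f /var/log/apache2/error.log

    Broadening the exclusion to "RewriteRule ^(manager|assets)/ - [L]" would leave everything under those directories untouched by the friendly-URL rules.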

    Read the article

  • Temporary boot problem after thunder storm - likely causes?

    - by alastairs
    The village where I live sat under a thunder cloud for most of Friday, and we suffered a few power fluctuations (specifically, what seemed to be split-second outages). When I got back home from work, I found that my PCs had shut down during one of these outages. When I went to boot one of them back up, I couldn't get anything to display on screen, nor did the boot seem to complete correctly. I tried a number of things - unplugging different bits of hardware, swapping graphics adaptors, etc. - to no avail. I thought I was looking at a fried motherboard or CPU. Power seemed to be distributed correctly to the peripherals (the drives all appeared to be working), so I figured it couldn't be the PSU. Eventually I unplugged it from the mains and left it overnight (approx. 12 hrs unplugged). I tried it again this morning and it booted up correctly. Woo-hoo! I have all my equipment protected by surge-protected power strips, so I don't think a spike caused these problems. Obviously it has something to do with the power fluctuations, and maybe the PSU in the problem machine got itself confused somehow. The questions are, for future reference and to help people with similar problems:

        1. What are the likely causes of the boot failure I experienced?
        2. Is a UPS a simple and cost-effective solution, or might other things help prevent this happening in future?
        3. What UPS can you recommend (my budget is limited)?

    Read the article

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows, for work and from home. There are good tools that allow me to manage the many required logins/passwords platform-independently. But mostly they expect me to carry a thumb drive around, or they require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable; besides, company policy often rightly prevents this. Is there a tool that allows me to add passwords locally and then sync its DB with the "mother ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature lists of) some good cross-platform tools, but I need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new." I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

    Read the article

  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers are logging in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the Usermin module, or would require some tedious steps.) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
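
    Under the hood, whatever tool does this only needs to perform the following steps per user, so even a thin web wrapper over a script would satisfy the policy. A sketch (the username and pasted key are placeholders):

        #!/bin/bash
        # Append a pasted public key for a user, with the permissions sshd insists on.
        USERNAME=alice
        PUBKEY='ssh-dss AAAA...== alice@laptop'   # pasted-in key (placeholder)
        HOMEDIR=$(getent passwd "$USERNAME" | cut -d: -f6)
        install -d -m 700 -o "$USERNAME" -g "$USERNAME" "$HOMEDIR/.ssh"
        echo "$PUBKEY" >> "$HOMEDIR/.ssh/authorized_keys"
        chown "$USERNAME:$USERNAME" "$HOMEDIR/.ssh/authorized_keys"
        chmod 600 "$HOMEDIR/.ssh/authorized_keys"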

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops... How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/Virtual Box VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure on what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.
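
    One push-based shape that fits these constraints is each laptop running a scheduled rsync-over-SSH job to central storage, with the VM directories excluded. A sketch (host, paths and exclusion patterns are placeholders; scheduling could be a per-machine launchd interval job):

        rsync -az --delete \
          --exclude 'VirtualBox VMs/' \
          --exclude 'Documents/Virtual Machines.localized/' \
          "$HOME/" backup@backuphost.example.com:"/backups/$(hostname -s)/"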

    Read the article

  • VMWare Newbie - looking for hardware recommendations and help :) [closed]

    - by Dan
    I am looking for some hardware recommendations on an upcoming virtualization project. We are a small company (80 users - 25 in site 1, 55 in site 2) currently using Windows Server 2003 - no VM servers yet. Our AD is set up where site 1 is the root domain and site 2 is a subdomain/subnet, connected by T1 with VPN for failover. The current DCs also serve as file servers, print servers, and antivirus servers. Email is in the cloud. Additionally, in site 1 we have 3 more member servers: one running IBM WebSphere for a customer-specific app, one running Infor PowerLink (no real heavy load), and another that we use for Visual Studio apps and that also runs DirSync for Exchange Online. No heavy workloads on any of these machines, really. We also have an AS/400 box that we run ERP/CRM software on, which site 2 connects to over the WAN link. In site 2 we also have a SQL machine running on Win2K Server. Database files are not large - less than 5 GB - and there is a light-to-medium workload on this machine. File servers in each site store less than 500 GB of data and probably won't grow to more than 1 TB in the next 5 years. I am looking to go to VMware in both sites and virtualize all servers. What recommendations do you have for server and storage hardware? Is it safe to virtualize all of your DCs? Any help or advice would be greatly appreciated. Thanks.

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500GB... This amount of data will increase over time too, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, these might not need to be backed up again) and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems:

        - Mozy seems overly expensive per gig at 50c
        - CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!)
        - Backblaze doesn't support Windows Server
        - Carbonite business pricing is $600 up front for 500GB of storage... For $229, they will not back up Windows Servers.

    So, other than those, JungleDisk (at 15c per gig) or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article

  • Remote Desktop *from* Windows 2008 R2 Server

    - by freefaller
    Summary: how do I create an RDC connection from a Windows 2008 server to another server? Our client will only allow us to connect to their server via a static IP address (which is fair enough), but unfortunately, as we're a very small company, we don't have one in the office. As a workaround, we had the connection working through our old Windows 2003 server (dynamic-cloud from 1and1)... however, we have just rebuilt the server to run under Windows 2008 R2 (don't ask, but it was necessary), and now I simply cannot get the connection working. I have added an "Outbound Rule" to Windows Firewall with Advanced Security (TCP, all local ports, remote port 3389 - I have also tried it the other way around). I have added a packet filter IP security rule with the same details. The 1and1 firewall rules (through their online control panel) allow 3389 TCP and UDP. But it is simply not connecting (yes, the remote server is definitely on and able to accept connections), with the general error of:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    Is there anything obvious I've missed - or something I can use to find out where the request is being blocked? The new server is using the exact same IP address as before, so I don't believe that would be an issue. Unless it's trying to use an IPv6 address rather than the old IPv4 address it used before? I apologise that I am not a network person by trade, but I know more than anybody else in my office!!
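
    A couple of quick ways to see where the connection dies, from cmd.exe on the 2008 R2 box (the IP is a placeholder, and the Telnet client is an installable feature on 2008 R2):

        rem does anything answer on the RDP port at all?
        telnet 203.0.113.10 3389

        rem force IPv4 to rule out the IPv6 theory, then retry RDP
        ping -4 theirserver.example.com
        mstsc /v:203.0.113.10

    If telnet connects (a blank screen) but mstsc fails, the path is fine and the problem is higher up; if telnet times out, something between the two boxes is still dropping 3389.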

    Read the article
