Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • Software RAID 10 on Linux

    - by vpetersson
    For a long time, I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is out, it's time for an upgrade. The servers I'm using are HP ProLiant ML115 (very good value), which have four internal 3.5" slots. I'm currently using one drive for the system and a software RAID5 array for the remaining three disks. The problem is that this creates a single point of failure on the boot drive. Hence I'd like to switch to a RAID10 array, as it would give me both better I/O performance and more reliability. The only problem is that good controller cards that support RAID10 (such as 3Ware) cost almost as much as the server itself. Moreover, software RAID10 does not seem to work very well with GRUB. What is your advice? Should I just keep running RAID5? Has anyone been able to successfully install a software RAID10 without boot issues?
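
    One common workaround, since GRUB legacy cannot boot from a RAID10 set, is to carve a small RAID1 /boot out of all four disks and give the remaining space to RAID10. A rough sketch with mdadm only (partition names are illustrative, not a tested recipe):

        mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # small /boot mirror, readable by GRUB
        mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # RAID10 for / and everything else
        grub-install /dev/sda && grub-install /dev/sdb                       # install GRUB on every member so any disk can boot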

    Read the article

  • Entering IT field with only hobby experience?

    - by EA Bisson
    I can build computers, install and build servers, network Mac, Linux, and Windows machines, do support, etc. I do all of this at home, for friends, and as a hobby. I have worked with computers every day since I was in elementary school (Commodore 64, Windows 3.1, etc.). I have an IT bachelor's in administrative management (so basically nothing useful), and I am getting another bachelor's in server administration, including about 5 certifications. I am the IT go-to gal at every position, usually because I know more than the IT people and have better people skills. My job history is random: office admin, hair braider, Disney ride operator, camp counselor, etc. I've found a job I want: an entry-level (server) specialist position. What do I put on a resume?

    Read the article

  • china and gmail attacks

    - by doug
    "We have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” [source] I don't know much about how internet works, but as long the chines gov has access to the chines internet providers servers, why do they need to hack gmail accounts? I assume that i don't understand how submitting/writing a message(from user to gmail servers) works, in order to be sent later to the other email address. Who can tell me how submitting a message to a web form works?

    Read the article

  • Remote desktop sessions - Unwanted automatic log off after period of time

    - by alex
    I'm having an issue whenever I connect to any of our servers via RDP: after a certain period of time, the session seems to be closed, shutting down all the applications I had open. This is particularly annoying if I am running a long process, for example copying a file, because it gets cut off. I then re-connect via RDP and it effectively loads a new session. Is this set somewhere in Group Policy, or somewhere else? This is happening on Windows 2008 (it may also affect our 2003 servers, although I haven't noticed).
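
    On Server 2008 these limits usually come from Group Policy under Computer Configuration > Administrative Templates > Windows Components > Terminal Services > Terminal Server > Session Time Limits (or from the per-server tsconfig settings). A quick way to see whether such a policy is in force, assuming the usual policy registry values:

        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxIdleTime
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxDisconnectionTime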

    Read the article

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well, except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of caching (and negative-caching) stale results?
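
    For the nscd route, a minimal hosts-only cache is just a few lines of /etc/nscd.conf (the TTLs below are illustrative). The usual caveat is that nscd serves entries for its configured TTLs rather than honouring each record's DNS TTL:

        # keep successful host lookups for 300 s, negative answers for 20 s
        enable-cache            hosts   yes
        positive-time-to-live   hosts   300
        negative-time-to-live   hosts   20
        suggested-size          hosts   211
        persistent              hosts   no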

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers, and have come up against a trade-off I'm not entirely sure how to answer. These are our options: 48GB at 1333MHz, or 96GB at 1066MHz. My thinking is that RAM for a database server should be plentiful (we have a great deal of data and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB DIMMs at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less of the faster RAM? Extra info: DIMM slots available: 6; servers: Dell blades; CPU: 6-core (single socket only, due to Oracle licensing).
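
    As a rough back-of-the-envelope comparison (assuming DDR3 on a 64-bit channel): 1333 MT/s gives about 1333 x 8 = 10.7 GB/s of peak bandwidth per channel versus 1066 x 8 = 8.5 GB/s, i.e. roughly a 20% bandwidth penalty, while 96GB doubles how much of the database can sit in the buffer cache. Unless the working set already fits comfortably in 48GB, the extra capacity usually buys more than the faster clock.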

    Read the article

  • Moving SharePoint 2010 to a new domain

    - by Chris
    I have a small SharePoint 2010 farm (1 WFE/app server and 1 SQL server). Our organisation is currently migrating to our holding company's global domain, so we now have a new local DC on site with trusts between the current domain and the new domain. I am going to need to move our SP farm to the new domain and possibly rename servers to fit the global naming convention (we are trying to avoid this for now, but it might become a requirement). Is there a way to script (stsadm/PowerShell) moving the user profiles and permissions across to the new domain? And on the server side, is it as simple as joining the servers to the new domain and updating all the service/farm accounts to accounts in the new domain? I have googled this a bit, but everything I have found so far refers to MOSS 2007 or earlier. Any help/advice would be appreciated.
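
    Once the servers are joined to the new domain and the farm/service accounts are updated, the account identities inside SharePoint still have to be remapped. A hedged sketch with stsadm (logins are illustrative; SharePoint 2010 also offers the Move-SPUser PowerShell cmdlet for the same job):

        stsadm -o migrateuser -oldlogin OLDDOM\jsmith -newlogin NEWDOM\jsmith -ignoresidhistory
        stsadm -o migrategroup -oldlogin OLDDOM\sp_users -newlogin NEWDOM\sp_users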

    Read the article

  • Unix Server Partitioning & Filesystem Layout

    - by user1717735
    There's a lot of contradictory information about Unix server partitioning out on the internet, so I need some advice on how to proceed. So far, on the servers in our test environment, I didn't really care about partitioning and configured a single monolithic / plus a swap partition. This partitioning scheme doesn't seem like a good idea for our production servers. I have found a good starting point here, but it seems very vague on the details. Basically, I have a server on which I will be running a basic LAMP stack (Apache, PHP, and MySQL). It will have to handle file uploads (up to 2GB). The system has a 2TB RAID 1 array. I plan to set: / 100GB; /var 1000GB (Apache and MySQL files will live here); /tmp 800GB (handles PHP's temporary files); /home 96GB; swap 4GB. Does this sound sane, or am I over-complicating things?
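
    Whatever sizes you settle on, the mount options are worth as much as the split itself. A sketch of the kind of /etc/fstab this layout might use (device names and filesystem are illustrative); note that since /tmp mostly holds PHP's in-flight uploads, 800GB there is probably far more than needed and most of that space could go to /var instead:

        /dev/sda1   /       ext4   defaults,errors=remount-ro   0 1
        /dev/sda2   /var    ext4   defaults,noatime             0 2
        /dev/sda3   /tmp    ext4   defaults,noexec,nosuid       0 2
        /dev/sda4   /home   ext4   defaults,nosuid              0 2
        /dev/sda5   none    swap   sw                           0 0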

    Read the article

  • Bad certificate error with RabbitMQ using SSL

    - by David Tinker
    I am trying to get RabbitMQ working with SSL on a couple of Gentoo servers. I get the following error in /var/log/rabbitmq/[email protected] when I try to connect to the management console using https:

        SSL: certify: ssl_connection.erl:1641:Fatal error: bad certificate

    I followed the instructions here: http://www.rabbitmq.com/ssl.html The annoying thing is that I have 2 cloned servers, and it is working on one and not the other. As far as I can tell the machines are configured identically. I wrote a script to generate the certs etc. and have run it on both. I am not using client certificates. Does anyone know how I can figure out what's wrong with my certificate(s)? I am using Erlang 15.2, RabbitMQ 2.7.9, and OpenSSL 0.9.8k.
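
    One way to narrow this down is to compare the certificates on the working and the failing box directly with openssl (the paths follow the rabbitmq.com SSL guide and are illustrative; 5671 is the conventional AMQPS port, so substitute whichever listener is actually failing):

        openssl x509 -noout -modulus -in /path/to/server/cert.pem | openssl md5   # these two hashes
        openssl rsa  -noout -modulus -in /path/to/server/key.pem  | openssl md5   # must be identical
        openssl verify -CAfile /path/to/testca/cacert.pem /path/to/server/cert.pem
        openssl s_client -connect localhost:5671 -CAfile /path/to/testca/cacert.pem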

    Read the article

  • nsclient++ intermittent connection refused on same subnet

    - by jshin47
    I have set up Nagios and NSClient++ on a number of my Windows servers. They are all in the same subnet, so no routing or firewalling is taking place between the endpoints, and I have verified that the firewalls on the servers are not causing trouble. The problem is that the scheduled checks sometimes fail with "connection refused" and sometimes work! It is a frustrating problem to resolve because I do not know what to look for. One place I did look is in the NSClient++ logs, where I am seeing this recurring error:

        ...\trunk\modules\CheckSystem\PDHCollector.cpp:148: Failed to query performance counters: \238...

    This sounds promising, but I couldn't find much on Google about this error as it pertains to NSClient++.
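
    That PDHCollector error usually points at a damaged Windows performance counter registry on that particular server rather than anything on the network, which would also explain why otherwise identical boxes behave differently. The standard (hedged) repair is to rebuild the counters, resync WMI, and then restart the NSClient++ service:

        lodctr /R
        winmgmt /resyncperf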

    Read the article

  • Why can't I use SSL certs imported via Server Admin in a custom Apache install?

    - by morgant
    I've got a couple of Mac OS X 10.6.8 Server web servers that run a custom AMP255 (Apache 2.x, MySQL 5.x, and PHP 5.x) stack installed using MacPorts. We've got a lot of Mac OS X Server machines and generally install SSL certs via Server Admin, and they "just work" in the built-in services; however, these web servers have always had SSL certs installed in a non-standard location and used only for Apache. Long story short, we're trying to standardize this part of our administration and install certs via Server Admin, but have run into the following issue: when the certs are installed via Server Admin and referenced in our Apache conf files, Apache prompts for a password upon trying to start. It does not seem to be any password we know, certainly not the admin or keychain passwords! We've added the _www user to the certusers group (mainly just to ensure it has the proper access to the private key in /etc/certificates/). So, with the custom-installed certs we have the following files (basically just pasted in from the company we purchase our certs from):

        -rw-r--r--  1 root  admin  1395 Apr 10 11:22 *.domain.tld.ca
        -rw-r--r--  1 root  admin  1656 Apr 10 11:21 *.domain.tld.cert
        -rw-r--r--  1 root  admin  1680 Apr 10 11:22 *.domain.tld.key

    And the following in the VirtualHost in /opt/local/apache2/conf/extra/httpd-ssl.conf:

        SSLCertificateFile /path/to/certs/*.domain.tld.cert
        SSLCertificateKeyFile /path/to/certs/*.domain.tld.key
        SSLCACertificateFile /path/to/certs/*.domain.tld.ca

    This setup functions normally. If we use the certs installed via Server Admin, which both Server Admin and Keychain Assistant show as valid, they're installed in /etc/certificates/ as follows:

        -rw-r--r--  1 root  wheel      1655 Apr  9 13:44 *.domain.tld.SOMELONGHASH.cert.pem
        -rw-r--r--  1 root  wheel      4266 Apr  9 13:44 *.domain.tld.SOMELONGHASH.chain.pem
        -rw-r-----  1 root  certusers  3406 Apr  9 13:44 *.domain.tld.SOMELONGHASH.concat.pem
        -rw-r-----  1 root  certusers  1751 Apr  9 13:44 *.domain.tld.SOMELONGHASH.key.pem

    And if we replace the aforementioned lines in our httpd-ssl.conf with the following:

        SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem

    it prompts for the unknown password. I have also tried httpd-ssl.conf configured as follows:

        SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.concat.pem

    And as:

        SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCACertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem

    We've verified (in Keychain Assistant) that the certificate is configured to allow all applications to access it. A diff of the /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem and *.domain.tld.key files shows that the former is encrypted and the latter is not, so we're assuming Server Admin/Keychain Assistant is encrypting the key for some reason. I know I can create an unencrypted key file as follows:

        sudo openssl rsa -in /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem -out /etc/certificates/*.domain.tld.SOMELONGHASH.key.no_password.pem

    But I can't do that without entering the password. I thought maybe I could export an unencrypted copy of the key from Keychain Admin, but I'm not seeing such an option (not to mention that the .pem options are greyed out in all export options). Any assistance would be greatly appreciated.

    Read the article

  • Configuring HAProxy with memcache with failover

    - by Lawrie Matthews
    I'm configuring a new set of servers for an existing WordPress site, and it's been requested that memcache be available and made more resilient. The idea proposed is to have HAProxy send requests to one of the two servers; if that memcache instance is inaccessible, it should switch to the second, but should not switch back to the first when it comes back up unless the second is then unavailable. This doesn't appear to be a particularly common use case, and I've not found much along these lines except possibly setting up the first node with an enormous rise value, such as:

        server server1 10.112.58.16:11211 check inter 5s fall 3 rise 99999999
        server server2 10.112.58.19:11211 check backup

    which falls over as expected when server1 is unavailable. It won't ever fall back to server1, though, even if server2 goes offline. Can this be made to work?

    Read the article

  • vCenter Server: This host currently has no management network redundancy

    - by goober
    Background: I'm on a new VMware install consisting of 1 vCenter server (containing the inventory service, SSO, vCenter Server, and web client server) and 2 ESX servers configured in an HA group. Problem: When I view the summary for any one of my servers, I receive the notice "This host currently has no management network redundancy". This is expected in our scenario and we're okay with it. Attempted solutions: As I understand from this article and this discussion, the proper way to remove the message is to ignore it by setting the "das.ignoreRedundantNetWarning" property to "true". I took the following steps: logged into vCenter; right-clicked my HA cluster and chose "Edit Settings..."; clicked the "vSphere HA" section; clicked "Advanced Options..."; added the "das.ignoreRedundantNetWarning" option with a value of "true". Question: How do I get this warning to go away, and are there any reasons why adding this option may not have worked? References: "Network redundancy message when configuring VMware High Availability in vCenter Server" [VMware Knowledge Base]; "How remove a notice 'has no management network redundant'" [VMware Community].
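
    One detail that is easy to miss: adding das.ignoreRedundantNetWarning does not clear a warning that is already showing; HA has to be reconfigured on each host afterwards (right-click the host and choose "Reconfigure for vSphere HA", or disable and re-enable HA on the cluster) so the agents pick up the new setting. The option can also be set from PowerCLI along these lines (hedged sketch; verify the cmdlet and -Type value against your PowerCLI version, and the cluster name is illustrative):

        New-AdvancedSetting -Entity (Get-Cluster "Cluster1") -Type ClusterHA -Name "das.ignoreRedundantNetWarning" -Value "true"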

    Read the article

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that: Can run on both server and desktop machines; Virtualizes hardware and has the option of exposing each real machine to the cloud; Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop); Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and Is geography-aware to control movement of VMs between real networks. I'm aware that this may be the holy grail of virtualization, and I've searched a lot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination: A standard cloud server farm. A bare-metal network backup utility with PXEBoot. VNC and/or VDI. (VNC obviously would require the real hardware to be running.) This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows... I have a network of real servers and desktops in multiple locations. I've virtualized servers before using VirtualBox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc.) that limits their mobility. Geographic constraints also limit movement in that VMs can be moved easily within offices, but transferring between offices is not always preferable. What I would like to find is a system that can virtualize everything from bare metal easily by maintaining an abstraction layer on each client and server machine that exposes the hardware available and runs as a cloud. Then certain VMs would be "locked" to specific hardware (so that, e.g., the VM runs only on its own desktop). This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since if a VM is taken down, another VM can connect to the real hardware for minimum downtime.
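
    For the USB-over-Ethernet piece in isolation, the Linux usbip tools already do roughly this. A hedged sketch using the current in-kernel tool syntax (older releases shipped different wrapper scripts, and the bus ID and hostname are illustrative):

        # on the machine that physically owns the device
        modprobe usbip-host && usbipd -D
        usbip bind -b 1-1.2
        # on the VM that should see the device
        modprobe vhci-hcd
        usbip attach -r server.example.com -b 1-1.2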

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops running Ubuntu 10.04 from getting their users from an old OpenLDAP implementation to a newer CentOS Active Directory. I hadn't had any problems until I reached a Debian Lenny server. I've set up the server like the others, configuring /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf is not a file pam_ldap accepts (it worked with Ubuntu, though), so I renamed it to /etc/pam_ldap.conf. Same result. However, once I changed the name of this file, when I log in using SSH I get this in the LDAP server logs:

        [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0

    The password isn't working. I don't know what could be wrong; everything else seems to be OK. The same user/password works from other clients:

        [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es"

    I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? In pam_ldap.conf I've set this, as on all the other servers:

        # Do not hash the password at all; presume
        # the directory server will do it, if
        # necessary. This is the default.
        pam_password md5

    I also tried clear, but it didn't work. Anyway, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the libpam-dotfile package to test login, it works:

        # pamtest ssh jorge.suarez
        Trying to authenticate <jorge.suarez> for service <ssh>.
        Password:
        Authentication successful.
        # pamtest foo jorge.suarez
        Trying to authenticate <jorge.suarez> for service <foo>.
        Password:
        Authentication successful.

    But "su" doesn't work either ("Id. descoñecido" is "unknown id"):

        # su jorge.suarez
        Id. descoñecido: jorge.suarez

    This is the output from getent passwd:

        # getent passwd
        root:x:0:0:root:/root:/bin/bash
        daemon:x:1:1:daemon:/usr/sbin:/bin/sh
        bin:x:2:2:bin:/bin:/bin/sh
        sys:x:3:3:sys:/dev:/bin/sh
        sync:x:4:65534:sync:/bin:/bin/sync
        games:x:5:60:games:/usr/games:/bin/sh
        man:x:6:12:man:/var/cache/man:/bin/sh
        lp:x:7:7:lp:/var/spool/lpd:/bin/sh
        mail:x:8:8:mail:/var/mail:/bin/sh
        news:x:9:9:news:/var/spool/news:/bin/sh
        uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
        proxy:x:13:13:proxy:/bin:/bin/sh
        www-data:x:33:33:www-data:/var/www:/bin/sh
        backup:x:34:34:backup:/var/backups:/bin/sh
        list:x:38:38:Mailing List Manager:/var/list:/bin/sh
        irc:x:39:39:ircd:/var/run/ircd:/bin/sh
        gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
        nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
        libuuid:x:100:101::/var/lib/libuuid:/bin/sh
        Debian-exim:x:101:103::/var/spool/exim4:/bin/false
        statd:x:102:65534::/var/lib/nfs:/bin/false
        sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
        luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash
        messagebus:x:105:107::/var/run/dbus:/bin/false
        sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash
        ntp:x:107:110::/home/ntp:/bin/false
        haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false
        vde2-net:x:109:114::/var/run/vde2:/bin/false
        uml-net:x:110:115::/home/uml-net:/bin/false
        polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false
        Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false

    Nscd was stopped from the beginning.
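
    Two things stand out here. On Debian, getent goes through libnss-ldap, which reads /etc/libnss-ldap.conf rather than /etc/ldap.conf or /etc/pam_ldap.conf, so after the rename NSS is probably left with no configuration at all, and that alone would explain the empty getent output. For the err=49 (invalid credentials) on the bind, it is worth reproducing the exact bind outside of PAM, for example (server address and DNs taken from the logs above):

        ldapsearch -x -H ldap://10.1.176.237 \
            -D "uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" -W \
            -b "ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" "(uid=jorge.suarez)"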

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy_pass calls to the haproxy fast_thin and slow_thin listeners (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance across 6 Thin servers (127.0.0.1:3000..3005). Static files like /blog are currently fine. The chain is: nginx on port 80 -> haproxy on 3100 and 3200 -> Thin on 3000..3005, and then Rails. Here is /etc/nginx/nginx.conf:

        user nginx;
        worker_processes 2;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            include /etc/nginx/conf.d/*.conf;
        }

    then /etc/nginx/conf.d/default.conf:

        upstream fast_thin {
            server 127.0.0.1:3100;
        }
        upstream slow_thin {
            server 127.0.0.1:3200;
        }
        server {
            listen 80;
            server_name www.gitwatcher.com;
            rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
        }
        server {
            listen 80;
            server_name gitwatcher.com;
            access_log /var/www/gitwatcher/log/access.log;
            error_log /var/www/gitwatcher/log/error.log;
            root /var/www/gitwatcher/public;
            # index index.html;
            location /about {
                proxy_pass http://fast_thin;
                break;
            }
            location /trends {
                proxy_pass http://slow_thin;
                break;
            }
            location /categories {
                proxy_pass http://slow_thin;
                break;
            }
            location /signout {
                proxy_pass http://slow_thin;
                break;
            }
            location /auth/github {
                proxy_pass http://slow_thin;
                break;
            }
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) {
                    rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                    rewrite (.*) $1.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://slow_thin;
                    break;
                }
            }
        }

    then the haproxy config file /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            nbproc 1 # number of processing cores
        defaults
            log global
            retries 3
            maxconn 2000
            contimeout 5000
            mode http
            clitimeout 60000 # maximum inactivity time on the client side
            srvtimeout 30000 # maximum inactivity time on the server side
            timeout connect 4000 # maximum time to wait for a connection attempt to a server to succeed
            option httplog
            option dontlognull
            option redispatch
            option httpclose # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
            option abortonclose # enable early dropping of aborted requests from pending queue
            option httpchk # enable HTTP protocol to check on servers health
            option forwardfor # enable insert of X-Forwarded-For headers
            balance roundrobin # each server is used in turns, according to assigned weight
            stats enable # enable web-stats at /haproxy?stats
            stats auth haproxy:pr0xystats # force HTTP Auth to view stats
            stats refresh 5s # refresh rate of stats page
        listen rails_proxy 127.0.0.1:3100
            # - equal weights on all servers
            # - maxconn will queue requests at HAProxy if limit is reached
            # - minconn dynamically scales the connection concurrency (bound my maxconn) depending on size of HAProxy queue
            # - check health every 20000 microseconds
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000
        listen slow_proxy 127.0.0.1:3200
            # cluster for slow requests, lower the queues, check less frequently
            server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

        ---
        chdir: /var/www/gitwatcher
        environment: production
        address: 0.0.0.0
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 100
        require: []
        wait: 30
        servers: 6
        daemonize: true

    If I look at the open listening ports, I get the following:

        root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
        nginx     834  root    8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx     835  nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx     837  nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
        haproxy  1908  haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN)
        haproxy  1908  haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN)

    and root@fullness:/var/www/gitwatcher# iptables -L gives me the following:

        Chain INPUT (policy DROP)
        target prot opt source   destination
        ACCEPT all  --  anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT all  --  anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT tcp  --  anywhere anywhere tcp dpt:22222
        ACCEPT tcp  --  anywhere anywhere tcp dpt:http
        ACCEPT tcp  --  anywhere anywhere tcp dpt:https
        ACCEPT all  --  anywhere anywhere
        DROP   all  --  anywhere anywhere
        Chain FORWARD (policy ACCEPT)
        target prot opt source   destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source   destination
        ACCEPT all  --  anywhere anywhere

    Any help?
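
    Worth noting: the lsof output above shows nginx and haproxy listening, but no thin processes holding 3000-3005, and haproxy returns 503 when every backend in a listen block is down. A hedged first set of checks (assuming the stock thin gem and config shown above):

        thin start -C /etc/thin/gitwatcher.yml            # or: /etc/init.d/thin start
        lsof -nP -iTCP -sTCP:LISTEN | grep ':300'         # thin should now own 3000-3005
        curl -I http://127.0.0.1:3000/                    # hit one Thin directly, bypassing the proxies
        curl -I http://127.0.0.1:3200/                    # then back through the haproxy slow_proxy listener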

    Read the article

  • Cheapest server per gigabit throughput [closed]

    - by nethgirb
    I'm looking for a set of servers for performance testing a network, and secondarily testing some applications on the servers. Their most important task is simply to pump out data: from an application like memcached or just dumped from a large file in memory into a TCP flow (i.e., disk performance doesn't matter). This should happen over one or more 1 gigabit Ethernet ports, and the machines should run Linux (ideally), or perhaps Mac OS X or some other *nix. Other than that, there are few constraints (e.g., even something ARM-based could be fine). So here's the question: What's the cheapest server per gigabit? Price and power are both considerations.
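
    Whatever boxes you end up with, saturation is easy to verify with iperf before layering the real applications on top; a minimal sketch (address and stream count are illustrative):

        iperf -s                        # on the receiving machine
        iperf -c 10.0.0.1 -P 4 -t 30    # on the sender: 4 parallel TCP streams for 30 seconds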

    Read the article

  • Virtualmin - Added Virtual Server - Stopped access to Rails app?

    - by Dan
    Sorry if this sounds pretty simple; I'm new to Virtualmin and to running servers in general. I recently purchased a VPS and installed Virtualmin with no problems. I then installed mod_rails and uploaded my first Rails app, which I got working by adding the following to my Apache httpd.conf file:

        <VirtualHost *:80>
            ServerName testing.mydomain.com
            DocumentRoot /home/myapp/public
            <Directory /home/myapp/public>
                Allow from All
                AllowOverride all
                Options -MultiViews
            </Directory>
            RailsBaseURI /
        </VirtualHost>

    I then tried adding a virtual server through Virtualmin, using mydomain.com. The site this created (plus several sub-servers) is working as expected. However, my original Rails app is no longer accessible; its URL now sends me to the parent application (i.e. mydomain.com). The Rails app is not located within the parent's application directory; would this be a problem? Can anyone help? Any advice appreciated. Thanks.
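
    When no <VirtualHost> has a ServerName or ServerAlias matching the requested hostname, Apache falls back to the first vhost defined for that address:port, so the Virtualmin-generated mydomain.com vhost is most likely being read before the hand-written testing.mydomain.com one. A quick way to see the order Apache actually uses:

        apachectl -S    # lists every *:80 vhost, its ServerName, and the file/line that defines it

    If that is the case, either recreate the Rails site as a Virtualmin sub-server so Virtualmin manages its vhost, or make sure the testing.mydomain.com block is included ahead of the generated ones.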

    Read the article

  • Getting MSExchange transport Error on Server 2003 SP2

    - by Scott
    I am getting the following error message and do not know how to fix it:

        Event Type:     Error
        Event Source:   MSExchangeTransport
        Event Category: (8)
        Event ID:       3017
        Date:           4/29/2010
        Time:           1:21:12 PM
        User:           N/A
        Computer:       NETSRV
        Description:    A non-delivery report with a status code of 5.3.5 was generated for recipient
                        rfc822;[email protected] (Message-ID <19104335.51321272561635734.JavaMail.SYSTEM@PARROT).
                        Causes: A looping condition was detected. (The server is configured to route mail
                        back to itself). If you have multiple SMTP Virtual Servers configured on your
                        Exchange server, make sure they are defined by a unique incoming port and that the
                        outgoing SMTP port configuration is valid to avoid looping between local virtual servers.

    Thanks for any help you can provide.

    Read the article

  • Big IP F5 outbound HTTP issues

    - by mbuk2k
    We've tried upgrading our F5 BIG-IP 3400 from 9.x to 10.2, and everything went over fine apart from one thing: we're unable to establish any outbound HTTP (80) connections from any servers that are assigned to a virtual server. This is something that worked before and is required for certain calls our servers need to make. Interestingly, HTTPS (443) connections work fine; it's literally just anything outbound over port 80 that seems to fail. Does anyone know if anything has changed between 9.4 and 10.2 that would require additional configuration to allow outbound HTTP connections? Any advice would be appreciated, thank you.

    Read the article

  • How to use instances with s3 load balancing?

    - by Slay
    I have some questions about instances and load balancing in Amazon S3. I can configure an instance in S3, but I do not understand how to deal with many instances. Currently, my instance is loaded with MySQL, PHP, etc. (all in one). How do I ensure my instances scale? E.g., if I have a site that is supposed to be handled by 3 instances plus Amazon RDS, do I need to host my code base on all 3 instances? How do people normally do this? Facebook, for example, has 1000+ servers; do they host their code base on all 1000+ of them? Thanks.
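
    The usual patterns are either baking the application into the machine image the instances boot from, or pushing the same code to every instance at deploy time (Capistrano, rsync, etc.) while the database lives only on RDS. A minimal sketch of the push approach (hostnames and paths are illustrative):

        for host in app1 app2 app3; do
            rsync -az --delete ./myapp/ deploy@"$host":/var/www/myapp/
            ssh deploy@"$host" 'sudo /etc/init.d/apache2 reload'
        done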

    Read the article

  • Tool to monitor file size, file existence, parse xml, etc

    - by Artur Carvalho
    I'm trying to find a tool that helps me monitor several things. Here are some requirements: shows results on a web page; checks existence of files/folders; checks sizes of files/folders; can parse XML files; can have different statuses depending on, for instance, whether it's after 9pm; pings workstations/servers to confirm they are on or off; creates daily/weekly/monthly reports (PDF, HTML, CSV); shows daily/weekly/monthly scheduled tasks; checks whether specific users are logged in to a machine; and checks which users are logged in on a machine. I've looked into some solutions but could not find what I wanted. Tools like Nagios are usually more focused on servers, and Spiceworks is not specific enough. At this point I'm using a little PowerShell script that does several of these things, but before losing more time reinventing the wheel: what tools are out there? Thank you in advance.
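
    As a point of comparison with the existing PowerShell script, the basic file/size/ping checks stay small in any scripting language; a hedged shell sketch that writes a simple HTML status page (paths, hosts, and thresholds are made up):

        #!/bin/sh
        out=/var/www/html/status.html
        {
            echo "<html><body><pre>"
            date
            [ -e /data/export/feed.xml ] && echo "feed.xml: present" || echo "feed.xml: MISSING"
            du -sh /data/export
            xmllint --xpath 'count(//item)' /data/export/feed.xml 2>/dev/null
            for h in ws01 ws02 srv01; do
                ping -c 1 -W 2 "$h" >/dev/null 2>&1 && echo "$h: up" || echo "$h: down"
            done
            echo "</pre></body></html>"
        } > "$out"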

    Read the article

  • Backup and restore Subversion user permissions

    - by Earth Engine
    We use svnsync to create fully functional backup servers, and we have a script to do so. However, if we want to create a new backup server, we have to copy the htpasswd and groups.conf files across (which is not hard) and, after running svnsync, manually assign users/groups to repositories. Also, if we change an assignment on the main server, there is no easy way to apply that change to all backup servers. Since we have 50+ projects and 30+ users, this is a tedious and error-prone exercise. Are there any tools that can help us back up and restore these settings automatically? We are using VisualSVN under Windows, so Windows scripts are preferable to shell scripts.
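
    The authentication files live outside the repository data, so svnsync will never carry them; the simplest approach is to have the same scheduled job that runs svnsync also copy them across, e.g. with robocopy (the paths below are the VisualSVN defaults and are illustrative):

        robocopy "C:\Repositories" "\\backupserver\Repositories" htpasswd authz groups.conf /R:2 /W:5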

    Read the article

  • Configure IE to use MS Word Viewer as .doc viewer on Citrix server with Office installed

    - by Adam Towne
    We have a small number of Citrix servers that all have Office installed. Only a small subset of users have access to Office; everyone is set to open Office documents with the free viewers on the Citrix servers, and we control access to Office through NTFS permissions. We now have a large number of users who need to be able to view Office documents from a web page. Opening Office files normally works fine, but when users open Office documents from a link in a web page, IE ignores the file associations and attempts to open the document with the full Office program. How can I change the program that IE uses to open Office documents, or how can I force it to use the file associations I set in the operating system?

    Read the article
