Search Results

Search found 9715 results on 389 pages for 'servers'.


  • Sending messages between two Python servers

    - by Will
    I have two servers - one Django, the other likely to be written in Python - and one is putting 'tasks' into a database while the other processes these tasks. They share a database, but I want the processor to react quickly to new tasks rather than polling periodically. Are there any straightforward ways for two Python servers to talk to one another, or does the task processor have to expose web-hooks or something? It feels like there ought to be a blessed way to do this...
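
    One minimal sketch of the socket approach (the host name, port and function names below are illustrative assumptions, not anything from the question): the processor listens on a TCP port, and the producer opens a throwaway connection as a "new task" nudge right after it writes the row, so the database stays the source of truth and it only re-reads the task table when something has actually changed.

        # processor_sketch.py -- illustrative only: block until the producer pings
        # us, then drain the shared task table instead of polling on a timer.
        import socket

        def process_pending_tasks():
            pass  # placeholder: query the shared database for new task rows

        def wait_for_pings(host="0.0.0.0", port=9999):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(5)
            while True:
                conn, _ = srv.accept()   # any connection means "new work exists"
                conn.close()
                process_pending_tasks()

        # producer side, e.g. right after the Django view saves the task row:
        def notify_processor(host="processor.internal", port=9999):
            try:
                socket.create_connection((host, port), timeout=1).close()
            except OSError:
                pass  # processor is down; it can catch up when it next starts

    A message broker (e.g. Redis pub/sub, or a task queue such as Celery) solves the same problem with less hand-rolling, at the cost of another moving part.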

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date, and because of user error some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind: using the revision system would allow one file to be the master and carry all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with application code, where each file is much smaller. How does SVN stand up with larger files, like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and double the disk space needed?

    Read the article

  • Running multiple services on different servers with IPv6 and a FQDN

    - by Mark Henderson
    One of the things NAT has permitted us to do in the past decade is split physical services onto different servers whilst hiding behind a single interface. For example, I have example.com behind a NAT on 192.0.2.10. I port-forward :80 and :443 to my web server. I also port-forward :25 to my mail server, :3389 to a terminal server and :8080 to the web interface of my computer that downloads torrents, and the story goes on. So I have 5 port forwardings going to 4 different computers on example.com. Then, I go and get me some neat IPv6. I assign example.com an IPv6 address of 2001:db8:88:200::10. That's great for my websites, but I want to go to example.com:8080 to get to my torrents, or example.com:3389 to log on to my terminal server. How can I do this with IPv6, as there is no NAT? Sure, I could create a bunch of new DNS entries for each new service, but then I have to update all my clients who are used to just typing example.com to get to either the website or the terminal server. My users are dumber than two bricks, so they won't remember to connect to rdp.example.com. What options do I have for keeping NAT-style functionality with IPv6? In case you haven't figured it out, the above scenario is not a real scenario for me, or perhaps anyone yet, but it's bound to happen eventually. You know, with devops and all.

    Read the article

  • Domain authentication over OPEN wireless pre-logon (Windows 7 Pro) - No logon servers avail

    - by Shadow00Caster
    I have a plethora of laptops that are joined to an AD domain. I have an enterprise wireless system set up; the users of these laptops will be on an OPEN, unsecured SSID which will ultimately sit behind a captive portal using RADIUS-AD auth, with firewall rules that allow pre-portal access to the proper IPs/ports of the DCs for authentication. I already have other laptops/users connecting to another SSID with 802.1X and SSO, and everything works perfectly pre-logon. My problem is with this open network: for some reason I cannot get the machines to authenticate to AD. The laptops connect to the wireless network - I confirm this on the controller and can ping the laptop at startup. I captured packets on the wires of the 2 DCs that these machines authenticate against; I can see a DNS SOA update from the laptop I'm testing with and can ping that test laptop from both DCs. When I try to log on: "There are currently no logon servers available to service the logon request." The capture shows no incoming connections to either DC even though the laptop is connected and pingable. Any help is greatly appreciated.

    Read the article

  • How to restrict zone transfers to specific authorized servers only

    - by JonoB
    I recently failed a PCI compliance scan because of the following: This DNS server allows unrestricted zone transfers. Attackers may be able to use this information to gain knowledge on the structure of your networks to aid in device discovery prior to an actual attack. And the suggested solution is as follows: Reconfigure this DNS server to restrict zone transfers to specific authorized servers only. I am running a dedicated Linux CentOS server. My understanding is that I have to edit the /etc/named.conf file, which I have done, and the relevant part is as follows: options { acl "trusted" { 127.0.0.1; xxx.xxx.xxx.001; //this is one of the server's IPs xxx.xxx.xxx.002; //this is another server's IP }; allow-recursion { trusted; }; allow-notify { trusted; }; allow-transfer { trusted; }; }; I then restarted the named service (/etc/rc.d/init.d/named restart) and requested a re-scan, which failed again for the same reason. Am I missing something obvious here?

    Read the article

  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello All, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either one isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get to Dotster for our public DNS entries just like the rest of the world. Then, on top of that, I want to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example: public.webdomain.com - should get this from Dotster; outside.webdomain.com - should get this from Dotster as well; testing.webdomain.com - should get this from my internal DNS controller. The problem that I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use, too - I have tried both Linux Bind9 and a Windows 2008 Domain Controller. I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, in the case where a requested entry doesn't exist, fail over to the specified public DNS server - or is this just not the way DNS works and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to Dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/Coldfusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal but it's worked thus far. We're moving up to two distinct servers and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is to allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I: Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or, Set up a VPN server on the database server as the only public-facing server connection so that there aren't any routing issues to deal with? I know this is Network 101 stuff but I thought I'd ask before just blundering through it since it could affect the company a bit. Thanks very much!

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500GB... This amount of data will increase over time too, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again) and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems: Mozy seems overly expensive per gig at 50c; CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!); Backblaze doesn't support Windows Server; Carbonite business pricing is $600 up front for 500GB of storage, and for $229 they will not back up Windows Servers. So, other than those, JungleDisk (at 15c per gig) or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (that serve different applications - this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration: http { upstream backend1 { server 127.0.0.1:8080 fail_timeout=0; server 127.0.0.1:8081 fail_timeout=0; } upstream backend2 { server 127.0.0.1:8090 fail_timeout=0; server 127.0.0.1:8091 fail_timeout=0; } server { listen 80; server_name my_server.com; root /home/my_server; location /serve_me { fastcgi_pass backend1; include fastcgi_params; } location / { fastcgi_pass backend2; include fastcgi_params; } } } It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make queries that start with /serve_me get directed to backend1? Thanks, Barry.

    Read the article

  • PHP Script Won't Run - Apache2/MySQL Servers Running, PHP Installed - Ubuntu 10.04

    - by nicorellius
    I am trying to install a CRM on a Linux (Ubuntu 10.04) laptop to do some testing. Installing the current versions of Apache, MySQL and PHP, and getting the CRM to run, is easy. It's when I try to go backwards and run it on a previous set of versions that I run into problems. This is what I have done: I have installed Apache 2.2.14, MySQL 5.0.83, and PHP 5.2.8. When I type something like mysql --version I get back what I would expect: version and distribution info. The same goes for Apache2 and PHP. The Apache server is running and so is mysqld. But when I go to my browser and look at http://localhost/<CRM dir>/install.php, Firefox offers to open the PHP file or save it, as if it doesn't recognize the file. What should happen is that I should get a welcome page and the installation wizard for this CRM distribution should start. I have tried so many different things I probably screwed up something along the way. I have restarted the servers over and over, and even recompiled the versions of MySQL and PHP with no problems. I am hoping I am overlooking something simple, because I am lost. Any help is appreciated.

    Read the article

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there and I may go seeking them. The problem here is that on my local XAMPP server (sitting on a network with Comcast as the ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com or smtp.gmail.com and the corresponding credentials, using the appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. This doesn't make sense to me. Now I'm pretty new to all this, but how is it that this is the case? Why can my local server work just fine, out of the box, but their servers not? I have asked them these questions, but they are very vague and I cannot get any good answers from them. Versions: PEAR Version: 1.5.0 PHP Version: 4.4.9 Zend Engine Version: 1.3.0 My apologies in advance for my ignorance. Thanks for the help in advance.

    Read the article

  • ASP.NET Web API returns 404 for PUT only on some servers

    - by Greg Bacchus
    Ok, I have been racking my brain and the internet for a solution to this. I just can't figure it out. I have written a site that uses ASP.NET MVC Web API and all was working nicely until I put it on the staging server. The site works fine on my local machine and the dev web server. Both the dev and staging servers are Win Server 2008 R2. The problem is this: basically the site works, but there are some API calls that use the HTTP PUT method. These fail on staging, returning a 404, but work fine elsewhere. The first problem that I came across and fixed was in Request Filtering, but I am still getting the 404. I have turned on tracing in IIS and get the following problem: 168. -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 16 HttpStatus 404 HttpReason Not Found HttpSubStatus 0 ErrorCode 2147942402 ConfigExceptionInfo Notification MAP_REQUEST_HANDLER ErrorCode The system cannot find the file specified. (0x80070002) The configs are the same on dev and staging; as a matter of fact the whole site is a direct copy. Why would the GETs and POSTs work, but not the PUTs? Thanks Greg

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and the number will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent? 1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of those applications, but I don't know if this is actually the best solution. 2: Make one box perfect and image it. Serve the image over PXE and, whenever I want to make modifications, just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
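
    On the PXE side, one hedged answer to the "can these be correctly generated at boot?" question is a small boot-time script (run from rc.local or an init script before the network service starts) that reads each NIC's real MAC from sysfs and rewrites the HWADDR line. The paths below assume the stock RHEL/CentOS layout and the script is only a sketch:

        # regen_hwaddr.py -- illustrative sketch: let one shared PXE image boot with
        # correct HWADDR= lines by reading the MACs the kernel actually sees.
        import glob
        import os

        SCRIPTS = "/etc/sysconfig/network-scripts"   # assumed RHEL/CentOS location

        for sys_path in glob.glob("/sys/class/net/eth*"):
            dev = os.path.basename(sys_path)
            with open(os.path.join(sys_path, "address")) as f:
                mac = f.read().strip()
            cfg = os.path.join(SCRIPTS, "ifcfg-%s" % dev)
            if not os.path.exists(cfg):
                continue                              # no config for this interface
            with open(cfg) as f:
                lines = [l for l in f if not l.startswith("HWADDR=")]
            lines.append("HWADDR=%s\n" % mac)
            with open(cfg, "w") as f:                 # rewrite with the real MAC
                f.writelines(lines)

    The same idea extends to the InfiniBand (ib*) interfaces, and the configuration-management route (option 1) achieves the equivalent with a per-host template.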

    Read the article

  • Windows 2008 R2 Servers Sending Arp Requests for IPs outside Subnet

    - by Kyle Brandt
    By running a packet capture on my routers I see some of my servers sending ARP requests for IPs that exist outside of their network. For example, if my network is: Network: 8.8.8.0/24 Gateway: 8.8.8.1 (MAC: 00:21:9b:aa:aa:aa) Example Server: 8.8.8.20 (MAC: 00:21:9b:bb:bb:bb) By running a capture on the interface that has 8.8.8.1 I see requests like: Sender MAC: 00:21:9b:bb:bb:bb Sender IP: 8.8.8.20 Target MAC: 00:21:9b:aa:aa:aa Target IP: 69.63.181.58 Has anyone seen this behavior before? My understanding of ARP is that requests should only go out for IPs within the subnet... Am I confused in my understanding of ARP? Also, these seem to happen in bursts, and it doesn't happen when I do something like ping an IP outside of the network. Update: In response to Ian's questions: I am not running anything like Hyper-V. I have multiple interfaces but only one is active (using BACS failover teaming). The subnet mask is 255.255.255.0 (and even if it were something different, it wouldn't explain an IP like 69.63.181.58). When I run MS Network Monitor or Wireshark I do not see these ARP requests. What happens is that in the capture on the router I see a burst of about 10 requests for IPs outside of the network from the host machine. On the machine itself, using Wireshark or NetMon, I see a flood of ARP responses for all the machines on the network; however, I don't see any requests in the capture asking for those responses. So it seems like maybe it is refreshing the ARP cache but including IPs that are outside of the network. Also, when it does this, why doesn't NetMon show the ARP requests?

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning to use a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will be running MS SQL databases (application, SharePoint, BI) and the other half will be file services and IIS. To enhance the capacity of the storage, we'll be adding an MD1220 enclosure with another 24x900GB to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS with RAID 10 only and the other DAS with RAID 5. That will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS to get the best possible IOPS/MBps, given that each DAS will have to have 2 hot spares. For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 - 2 hot spares) with both controllers assigned to the one disk group, or is it better to have 2 disk groups of 11 disks each, assigned to one of the two controllers? Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 * 4 disks (RAID 10), with two groups assigned to 1 controller and 3 groups to the other controller. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group? I would assume that there is no right or wrong, but it all depends very much on the specific application behaviour, so I am looking for some general ideas on what the pros and cons are of the different options. If there are other meaningful options, feel free to propose them.

    Read the article

  • Which type of RAM do our servers support?

    - by Mikunos
    I need to increase the RAM in our Dell servers, but from the lshw output I cannot see whether the installed RAM is UDIMM or RDIMM. Handle 0x1100, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 1 Locator: DIMM_A1 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244850B Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 Handle 0x1101, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 1 Locator: DIMM_A2 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244855D Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 Handle 0x1102, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 2 Locator: DIMM_A3 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244853E Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 How do we work out which is the right RAM to buy? Thanks

    Read the article

  • Unable to Connect to just Google Servers

    - by Akshat Mittal
    I am having an extremely strange problem: I am unable to connect to just Google's servers. I am not able to access any site related to Google - Google.com | YouTube | Google+ | Webmaster Tools | jQuery CDN - nothing is working. I am able to open any other website (as I am posting this question on SuperUser), and even the Google DNS servers (8.8.8.8 and 8.8.4.4) are offline. Please help!! Update 1: Google DNS are back online and YouTube is back online, but websites on the domain google.com are not working (e.g. play.google.com, maps.google.com, google.com/search, etc). Update 2: I am able to access Google.com (only) with (one of) its IP addresses listed below: 74.125.227.41 74.125.227.46 74.125.227.32 74.125.227.33 74.125.227.34 74.125.227.35 74.125.227.36 74.125.227.37 74.125.227.38 74.125.227.39 74.125.227.40 Update 3: I consulted my friends nearby and they said that they are also experiencing the same problem. Seems like this is a major problem in this area (or India!!). The problem is now solved!! I am able to open Google.com
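
    For what it's worth, a rough probe along the lines of the troubleshooting above - separating "DNS is broken" from "routing to Google is broken" - could look like this (the host name and the fallback IP are simply the ones mentioned in the question):

        # probe.py -- rough sketch: test name resolution and direct-IP reachability
        # separately, to tell a DNS problem apart from a routing/filtering problem.
        import socket

        def can_connect(addr, port=80, timeout=3):
            try:
                socket.create_connection((addr, port), timeout).close()
                return True
            except OSError:
                return False

        host = "www.google.com"
        try:
            ip = socket.gethostbyname(host)
            print("%s resolves to %s" % (host, ip))
            print("TCP connect to resolved IP: %s" % can_connect(ip))
        except socket.gaierror as err:
            print("DNS resolution for %s failed: %s" % (host, err))

        # one of the addresses listed above, usable even when resolution fails
        print("TCP connect to 74.125.227.41: %s" % can_connect("74.125.227.41"))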

    Read the article

  • Servers / RAM for social network - how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question I am lost on is: do I need separate servers for web vs database vs image handling, since there is photo sharing? Or does 1 server handle it all? Also, is more RAM better? If I get 50GB of RAM, is that better than having 8GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (I may switch to a NoSQL DB later if demand calls for it). I will be using memcache also. Concept-wise it is similar to Yelp, so geographic-based with lots of user content and image sharing + live feeds and privacy levels. The user plan is an open question; without testing the demand for this I can't give a number. But the concept is unique - no one out there has the set of features I am releasing - so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes more than that then I will upgrade.

    Read the article

  • Nginx order of servers

    - by scrat
    I have 3 sites on my server. All are running on gunicorn and use unix sockets to communicate with nginx, which routes requests. I have three records in nginx.conf like: server { listen 80; server_name site1.com; location / { proxy_pass http://unix:/tmp/site1.sock; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } for site1, site2, site3. If they are ordered so that the config for site1 goes first, followed by the configs for site2 and site3, everything works fine. But when I change the order, for example to site2, site1, site3, then site1 gets routed to site2. What am I doing wrong? Full server nginx.conf before the server configs: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: doing several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is more whether we want to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.

    Read the article

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for. Today we have our own mail server that looks like "mail.domain.com" - which we use to send out mail to our customers (with a modified PHPMailer script). Usually around 5000 mails every day. Everything from customer support to invoices goes through there. The from-header is set to "[email protected]". We are now thinking of migrating to Google Apps for internal use (with 70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mails (they have a limit of 500 outgoing mails per day), so we really want to keep using our current system for sending automated mail to our customers - and use Gmail's SMTP for our internal use. So, how do we set up our SPF records (Sender Policy Framework) for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (the ones sent from our own server, and those sent through Gmail). In short: we want to be able to use the same e-mail address (for sending) on two different SMTP servers (and therefore two different IP addresses). Anyone with a good knowledge of SPF who knows how to go about this? Or if it is even possible? Anything else I should think of when switching to Google Apps?

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and Server 2 runs Ubuntu 10. I'm doing the transfer from Server 2's command line like this: rsync -e ssh -avzn usr@server1:/remote/path /local/path The first file movement I did using tar, but I didn't think of piping it through ssh and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It caught some missing stuff and now it says there's nothing more to do (DRY RUN). But I have at least two files that are missing inside a subfolder. I ran that same rsync on that folder, but it's still just a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
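
    One hedged way to settle "did everything arrive?" without trusting a single rsync pass: build a checksum manifest on each server and diff the two outputs. The script below is only a sketch; run it with the source path on server 1 and the destination path on server 2, then compare the results.

        # manifest.py -- print "md5  relative/path" for every file under a root,
        # sorted so the output from the two servers can be compared with diff.
        import hashlib
        import os
        import sys

        root = sys.argv[1]
        entries = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                digest = hashlib.md5()
                with open(full, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
                entries.append("%s  %s" % (digest.hexdigest(),
                                           os.path.relpath(full, root)))
        for line in sorted(entries):
            print(line)

    rsync's own --checksum (-c) option is the built-in equivalent: it compares file contents rather than size and modification time, at the cost of reading every byte on both ends.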

    Read the article

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades, running the exact same OS (RHEL 5.5). Here's the output of free for both of them: ### server1: total used free shared buffers cached Mem: 8017848 2746596 5271252 0 212772 1768800 -/+ buffers/cache: 765024 7252824 Swap: 14188536 0 14188536 ### server2: total used free shared buffers cached Mem: 8017848 4494836 3523012 0 212724 3136568 -/+ buffers/cache: 1145544 6872304 Swap: 14188536 0 14188536 If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS and, if I remember correctly, I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical. The problem is, I am collecting memory usage and other metrics over a period of time (e.g. vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations on the results. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
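
    As a rough sketch of the used-minus-cache arithmetic in question, this reads /proc/meminfo directly (the field names are the standard Linux ones; all values are in kB) and reproduces the "-/+ buffers/cache" line that free prints:

        # mem_used.py -- memory held by applications, excluding buffers and page
        # cache, i.e. the "-/+ buffers/cache" used figure that free reports.
        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    info[key] = int(rest.strip().split()[0])   # value in kB
            return info

        m = meminfo()
        used = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
        print("used excluding buffers/cache (kB): %d" % used)
        print("free as applications see it (kB): %d" % (m["MemTotal"] - used))

    For per-process numbers, summing the VmRSS lines from /proc/<pid>/status gives an upper bound, since pages shared between processes get counted once per process.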

    Read the article

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names, and I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5. I came up with this working haproxy.cfg; as you can see, I had to set a different hostname for each health check, since the health check ignores http-send-name-header Host. I would have preferred to use variables or other methods and keep things more concise. global log 127.0.0.1 local0 notice maxconn 2000 user haproxy group haproxy defaults log global mode http option httplog option dontlognull retries 3 option redispatch timeout connect 5000 timeout client 10000 timeout server 10000 stats enable stats uri /haproxy?stats stats refresh 5s balance roundrobin option httpclose listen inbound :80 option httpchk HEAD / HTTP/1.1\r\n server instance1 127.0.0.101 check inter 3000 fall 1 rise 1 server instance2 127.0.0.102 check inter 3000 fall 1 rise 1 listen instance1 127.0.0.101:80 option forwardfor http-send-name-header Host option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2 listen instance2 127.0.0.102:80 option forwardfor http-send-name-header Host option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

    Read the article

  • Seeing traffic destined for other people's servers in wireshark

    - by user350325
    I rent a dedicated server from a hosting provider. I ran Wireshark on my server so that I could see incoming HTTP traffic destined for my server. Once I ran Wireshark and filtered for HTTP, I noticed a load of traffic, but most of it was not for anything hosted on my server: it had destination IP addresses that were not mine, with various source IP addresses. My immediate reaction was to think that somebody was tunnelling their HTTP traffic through my server somehow. However, when I looked closer I noticed that all of this traffic was going to hosts on the same subnet, and all of these IP addresses belonged to the same hosting provider that I was using. So it appears that Wireshark was intercepting traffic destined for other customers whose servers are attached to the same part of the network as mine. Now, I always assumed that on a switched network this should not happen, as the switch will only send data to the required host and not to every box attached. I assume in this case that other customers would also be able to see data going to my server. As well as the obvious privacy concerns, this would surely make ARP poisoning easy and allow others to steal IP addresses (and therefore domains and websites)? It would seem odd that a network provider would configure the network in such a way. Is there a more rational explanation here?

    Read the article
