Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

Page 185/537

  • NFS or GFS for LVS 10 Server Setup

    - by Michael Robinson
    Currently we have a 10-server LVS hosting setup. The people we hired to set it up did not know anything about GFS, which was our preferred central storage file system solution. As we had a tight time constraint, we just told them to use whatever they were familiar with, which is NFS. I have since done some research and it seems that NFS is not ideal for the type of high-traffic site we are hoping to build. I couldn't find much info online about the significant differences between the two. As we have to set up all the servers again right now, should we stick with NFS or find someone who knows how to set up GFS and go with that? We need a setup that is highly reliable and scalable, as we expect large increases in traffic and load after the initial setup is done.
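
    For reference, a typical NFS-backed LVS setup shares one export across all the real servers. A minimal sketch, assuming a storage node at 10.0.0.1 and a hypothetical export path /srv/www (the names are illustrative, not taken from the question):

        # /etc/exports on the NFS server (10.0.0.1)
        /srv/www  10.0.0.0/24(rw,sync,no_subtree_check)

        # on each LVS real server: mount the shared document root
        mount -t nfs 10.0.0.1:/srv/www /var/www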

    Read the article

  • Where did this incorrect cached DNS lookup come from?

    - by Stephen Jennings
    Somehow, I've been having a chronic issue where my computer will get an invalid DNS lookup in its cache for either of the two Exchange servers I use from Mail.app. My workplace runs one of the Exchange servers and I run the other (they are totally unrelated, hosted by different companies, etc.). The problem manifests as a certificate domain error. When it happens, I can run nslookup mail.mydomain.com and I see the incorrect IP address (usually owned by either Apple or Akamai), but if I run nslookup mail.mydomain.com 8.8.8.8, I get the correct address. My real quest is to find out why this keeps happening, and to do that, I'd like to know which server is supplying me this bad DNS entry. Is there a way to check my DNS cache to see where this bad lookup came from?
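
    A quick way to see what the resolver holds on a Mac (the question mentions Mail.app) is to query and flush the system cache; a sketch, assuming OS X's standard tools:

        # show the resolvers and search domains currently in use
        scutil --dns

        # look up a cached entry via the system resolver (vs. querying 8.8.8.8 directly)
        dscacheutil -q host -a name mail.mydomain.com

        # flush the cache (the command varies by OS X release)
        dscacheutil -flushcache          # 10.5/10.6
        sudo killall -HUP mDNSResponder  # later releases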

    Read the article

  • Weird routing problems with VPN

    - by Borek
    In our VPN setup I have to add a route to my routing table like this: route add 1.2.3.0 mask 255.255.255.0 172.16.1.1 -p. Our internal addresses 1.2.3.x then use 172.16.1.1 as their gateway, and both my local internet and the work VPN can work at the same time. However, when I disconnect from the VPN and reconnect again, I can't ping our servers even though the connection status is "Connected". When I do route print, my previously added route is listed but it doesn't seem to work. So I try to execute that 'route add' command again and, as expected, it tells me "The route addition failed: The object already exists." But - and that's the point - when I now try to ping our servers again, everything works! So every time, I have to execute this route add command, which fails but fixes the issue at the same time. Any ideas what I might be doing wrong? My PC is Windows 7 x64, I am Administrator, UAC is enabled and the command prompt is run with elevated privileges.
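
    One way to see what is actually happening, and to recreate the route cleanly after each reconnect, is to delete and re-add it; a sketch using the standard Windows route command (addresses taken from the question):

        rem inspect the matching persistent and active routes
        route print 1.2.3.*

        rem drop the stale entry, then re-add it as a persistent route
        route delete 1.2.3.0 mask 255.255.255.0
        route add 1.2.3.0 mask 255.255.255.0 172.16.1.1 -p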

    Read the article

  • Multiple PXE server same subnet

    - by Termiux
    I've been struggling with this for some time. I have a few test machines that boot from the network; they receive their boot data from the DHCP server, which tells them which server to boot from, where the boot file is, and so on. However, I need to add a second PXE server in the same subnet (creating another VLAN is not an option right now). I read somewhere that I may be able to send certain parameters to certain machines based on their MAC address (this way choosing which computers boot from which server), but I cannot find how to do this. Does anyone know how? This would be my solution, but I cannot find the answer. My DHCP server is Windows Server 2003, and I have two servers running custom flavors of Linux acting as TFTP servers. Some machines must boot from server 1, and the others must be able to boot from server 2. Thanks.
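
    On a Windows DHCP server, one way to steer individual MACs is to create a reservation per machine and set options 66/67 on that reservation; a sketch using netsh (the scope, addresses, MAC and boot file name are illustrative, not from the question):

        rem reserve an address for a machine that should boot from the second PXE server
        netsh dhcp server scope 10.0.3.0 add reservedip 10.0.3.50 00AABBCCDDEE testbox2

        rem point that reservation at the second TFTP server and its boot file
        netsh dhcp server scope 10.0.3.0 set reservedoptionvalue 10.0.3.50 066 STRING "10.0.3.20"
        netsh dhcp server scope 10.0.3.0 set reservedoptionvalue 10.0.3.50 067 STRING "pxelinux.0"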

    Read the article

  • Rewrite request URI based on Host header in HAProxy

    - by DorinC
    I would like to set up HAProxy to forward HTTP requests to some backend servers, but I need it to also rewrite the URI part based on the Host. I've read through the doc but it seems that reqirep isn't suitable for this purpose. Any idea if this is even possible with HAProxy? Here are the details of what I'm trying to accomplish. Requests that come in on http://www.original-domain.com/ would be balanced between:

        http://server1/domains/www.original-domain.com/
        ...
        http://serverN/domains/www.original-domain.com/

    The proxy should be able to handle this for any number of domains (original-domain.com can be anything, it's not limited to a fixed set of values). For this to work, HAProxy would need to rewrite a request like this:

        GET /original-uri HTTP/1.1
        Host: original-domain.com

    to:

        GET /domains/original-domain.com/original-uri HTTP/1.1
        Host: serverN

    and forward that request to one of the internal servers.
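
    For what it's worth, newer HAProxy releases (1.6 and later, which may postdate this question) can do this kind of Host-based path rewrite directly; a minimal sketch, with the backend and server names assumed:

        frontend www
            bind :80
            default_backend farm

        backend farm
            # prepend /domains/<original host>/ to the path before forwarding
            http-request set-path /domains/%[req.hdr(host),lower]%[path]
            server server1 192.168.0.11:80
            server serverN 192.168.0.19:80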

    Read the article

  • Which directive could make apache/rewrite redirect products/ to products.php

    - by Fernando
    Hello, I am having trouble with two different Apache servers. They are both 2.2.x, with different minor versions. On both of them I have the same PHP application with this .htaccess:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    My issue is that on server A, when I access products/ it redirects me to products.php, while on server B, when I access products/ it redirects me to index.php, which is the correct and wanted behavior. As the mod_rewrite rules are identical on both servers, any ideas about other directives that could be causing this problem? Thanks!
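
    A likely suspect (an educated guess, not something stated in the question) is mod_negotiation: if server A has MultiViews enabled, a request for /products can be mapped to products.php by content negotiation before mod_rewrite runs. Disabling it in the same .htaccess would test that theory:

        # rule out content negotiation mapping /products to products.php
        # (requires AllowOverride Options, or All, for this .htaccess)
        Options -MultiViews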

    Read the article

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 with 10.0.3.1. I have an LXC guest running with IP 10.0.3.10. This is my firehol config:

        version 5

        trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
        trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`
        blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

        interface lxcbr0 virtual
            policy return
            server "dhcp dns" accept

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            route all accept

        interface any world
            protection strong
            #Outgoing these protocols are allowed to everywhere
            client "smtp pop3 dns ntp mysql icmp" accept
            #These (incoming) services are available to everyone
            server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
            #Outgoing, these protocols are only allowed to known servers
            client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"

    On my host I can connect only to "trusted servers" on port 80. In my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add/change so that my guest(s) inherit the rules of the eth0 interface?
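
    The reason the guest can reach port 80 everywhere is most likely the router block: forwarded traffic from lxcbr0 never hits the "interface any world" rules (those only govern the host's own traffic), and "route all accept" lets everything from lxcbr0 out through eth0. A hedged sketch of a tighter router, replacing the blanket rule; the exact FireHOL 5 semantics of route/dst should be checked against its documentation:

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            # let the guests reach DNS/NTP anywhere, but HTTP(S) only to known servers
            route "dns ntp" accept
            route "http https" accept dst "${trusted_servers}"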

    Read the article

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance, second is network performance. I’m trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming etc. The solution should work with both physical and virtual servers. I don’t need sophisticated analysis with different workloads, just one set of benchmarks which I would run against a reference server and later compare to new ones. One problem is that most benchmarks are not giving accurate results when running on a VM.
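
    For a one-shot, repeatable baseline, something as simple as fio for disk and iperf for the network covers both metrics (shown here for Linux; similar tools exist for Windows, and the job sizes, target directory and peer hostname are placeholders):

        # sequential throughput and random IOPS on the volume under test
        fio --name=seq  --rw=write    --bs=1M --size=4G --directory=/data --direct=1
        fio --name=rand --rw=randread --bs=4k --size=4G --directory=/data --direct=1 --iodepth=32 --ioengine=libaio

        # network throughput against the reference server
        iperf -s                  # on the reference server
        iperf -c refserver -t 30  # on the server being verified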

    Read the article

  • Postfix sendmail -f configuration

    - by William
    I have Postfix installed on two servers. One of them writes e-mail (satellite) and the other one delivers the e-mails (smarthost). When I send e-mail from the satellite server I'm using the sendmail command. My problem is that when the e-mail arrives, the Return-Path is set to user@hostname, where user is the user running sendmail and hostname is the server's hostname. If I use the -f parameter with sendmail I can change that, but I'm hoping there is a way to do it in a Postfix configuration file. Is this possible, or should I just deal with having to configure all my software to add the -f argument? Thanks in advance.
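
    Postfix can rewrite the envelope sender in main.cf, which avoids passing -f everywhere; a sketch using sender canonical maps on the satellite host (the map entry itself is illustrative):

        # /etc/postfix/main.cf
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        www-data@satellite.example.com   noreply@example.com

        # then rebuild the map and reload
        postmap /etc/postfix/sender_canonical
        postfix reload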

    Read the article

  • How should I isolate computers with different roles on a network

    - by fishhead
    I work in an industrial plant and we have one network (a single physical wire) that is used both for office purposes and for process systems. The office computers are only used for typical office needs, but occasionally connect to the process computers to obtain information from a SQL server or for some other purpose. A new initiative is rolling downhill from corporate to standardize how computers are used at work: they would be severely locked down and only a standard set of applications would be allowed to execute. One of the requirements is to have non-office computers isolated from the company domain. Our non-office computers are a mix of man-machine interfaces and SQL servers, all running non-standard software. My question is: how can we divorce the control-system computers from the company domain but still have access to those servers from the company domain? Thanks.
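
    A common pattern (offered as a sketch, not a prescription) is to put the control systems on their own VLAN/subnet behind a router or firewall and permit only the specific office-to-control traffic that is needed, e.g. SQL Server on TCP 1433. In Cisco-style ACL notation, with made-up subnets and addresses:

        ! office network 10.1.0.0/16, control network 10.2.0.0/16
        ip access-list extended OFFICE-TO-CONTROL
         permit tcp 10.1.0.0 0.0.255.255 host 10.2.0.10 eq 1433   ! the SQL server only
         deny   ip  10.1.0.0 0.0.255.255 10.2.0.0 0.0.255.255     ! everything else blocked
         permit ip  any any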

    Read the article

  • MySQL replication not working on leap day

    - by danneth
    Though it is outside my "core" knowledge, I maintain a two-way replicated MySQL database (primary and backup). It has mostly been working fine: all changes are almost instantly replicated between the two servers. But now I've noticed something strange: I have a couple of cases where there was no replication on Feb 29th. Admittedly I have not yet confirmed that all replication was lost, but all the cases I've found so far have this issue. Not too long ago I changed the timezone from UTC to CET on the backup; it has been CET on the primary all along. Am I fixating on this because it happened on the leap day, or could there be something to it? The servers are both CentOS 5.4 with MySQL 5.0.
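
    A first check that costs nothing is to confirm both replication health and the timezone each server thinks it is in, since a UTC/CET mismatch can make events appear to land on the wrong day; standard MySQL statements:

        -- on each server: replication thread status and lag
        SHOW SLAVE STATUS\G
        SHOW MASTER STATUS;

        -- compare clocks and timezone settings on primary and backup
        SELECT @@global.time_zone, @@system_time_zone, NOW(), UTC_TIMESTAMP();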

    Read the article

  • Horrible performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs, and 22 SATA disks for the actual data striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J Connected to this cluster are a few other servers with the native client running nginx to serve files stored on it, on the order of 3-10 MB each. Right now a storage server has an outgoing bandwidth of 300 Mbit/s and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and the RAID is inaccessible for 10 seconds. The file system used is XFS and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver way more throughput.
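
    When chasing this kind of behavior it usually helps to watch per-device latency while the stalls happen and to double-check that the XFS geometry and controller cache really match expectations; a sketch with standard Linux tools (the mount point is a placeholder):

        # per-device utilization, queue depth and await, refreshed every 5 seconds
        iostat -x 5

        # confirm the filesystem's stripe unit/width (sunit/swidth) against the RAID config
        xfs_info /export/brick1

        # check whether the controller's BBU/cache policy has fallen back to write-through
        MegaCli64 -LDInfo -Lall -aALL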

    Read the article

  • Single sign-on for SharePoint to MySite?

    - by Chris W
    I've got a fairly simple SharePoint 2010 farm set up: 2 WFE servers with Network Load Balancing hosting the main portal site. As per Microsoft's best practice recommendations, I've set up My Sites in a separate web application. As some of the user base are not using domain-joined PCs, they have to log in once for the portal (http://portal) and then again when they access My Sites, since they're crossing into a separate web application on a separate host (http://mysite). Portal and MySite are both hosted on the same physical WFE servers. Is there an easy way to set up something to stop this happening and just have them log in once? I understand that there are plans for us to deploy ISA in the not too distant future - could we use ISA to manage authentication to the two sites so that the users only need to log in once?

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives. For the record, Splunk rocks, but the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct (for $30K we get more value and functionality than if we spent the same building our own system), but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company; we know how to write things). We built a great system on MongoDB and .NET that gives us the functionality we needed, built in about one engineering week, and we have now completed our implementation. We use two MongoDB servers (master and slave), and are able to log and index any amount of log data (5 GB/day, 15 GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution that is $1,000-3,000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that is really usable. Thank you all for your input (even if it was self-promotion).
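
    As an illustration of how small the core of such a system can be (this is not the poster's implementation, just a sketch of the listener/indexer idea in Python with pymongo; the port, database names and field layout are made up):

        import socket
        from datetime import datetime, timezone
        from pymongo import MongoClient, ASCENDING

        client = MongoClient("mongodb://localhost:27017")
        logs = client.syslog.messages
        logs.create_index([("received", ASCENDING)])   # time-ordered queries
        logs.create_index([("message", "text")])       # crude full-text search

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5514))                   # one of several syslog UDP ports

        while True:
            data, (src, _) = sock.recvfrom(65535)
            logs.insert_one({
                "received": datetime.now(timezone.utc),
                "source": src,
                "message": data.decode("utf-8", errors="replace"),
            })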

    Read the article

  • Prevent Server Restart after Windows Updates

    - by eidylon
    As a small hosting company, we have a number of servers in our office, and these servers are critical to the business: web server, mail server, DB server, etc. On a semi-regular basis, when the machines get automatic updates, they just automagically reboot themselves in the middle of the night. A number of them have software which must be running in the console session (bad practice, I know, but out of my control). When they reboot themselves, these programs obviously shut down, leaving customers upset and services interrupted. How do you set a Windows Server 2003 R2 machine to NEVER automagically reboot itself after updates? And perhaps, if possible, to instead email someone so that they are aware a reboot is pending and can schedule it for the best time? Thanks in advance!
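
    On Windows Server 2003 R2 this is controlled by the Automatic Updates policy; the equivalent registry values (set via Group Policy or a .reg import) are sketched below. AUOptions=3 means "download and notify", so nothing installs or reboots unattended; note that NoAutoRebootWithLoggedOnUsers on its own only suppresses the reboot while someone is logged on.

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "AUOptions"=dword:00000003
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001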

    Read the article

  • Websphere SSL handshake with active directory cluster

    - by ring bearer
    We have a WebSphere-based application that uses Active Directory (AD) based security configuration. Under WebSphere "Global security" we have configured the Active Directory server and connection parameters. The Active Directory server is actually a cluster of four servers, say serverdc01, serverdc02, serverdc03 and serverdc04. Each of these servers has its own root certificate with CN=serverdc01, CN=serverdc02, and so on. To set up SSL communication, I need to retrieve the Active Directory certificate and save it in WebSphere's trust store. When I retrieve the certificate by entering the AD server name and port, I randomly get the certificate of one of serverdc01, serverdc02, ..., and then I save that certificate to the trust store. The question is: do I have to save the certificate from each of serverdc01, serverdc02, ... in the cluster to WebSphere's trust store? What are the general strategies so that each server in the cluster does not require its own root certificate?
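
    The usual strategy (a general practice, not WebSphere-specific guidance from the question) is to have all four domain controllers issue their LDAPS certificates from one internal CA and import only that CA's root certificate into the trust store, so whichever DC the lookup lands on chains to a trusted signer. A sketch of importing a CA root with the JDK keytool that ships with WebSphere (file names and store password are illustrative):

        # import the issuing CA's root certificate into the signer trust store
        keytool -importcert -alias ad-root-ca -file ad-root-ca.cer \
                -keystore trust.p12 -storetype PKCS12 -storepass WebAS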

    Read the article

  • Planning trunk capacity for multiple GbE switches

    - by wuckachucka
    Without measuring throughput (it's at the top of the list; this is just theoretical), I want to know the most standard method for trunking VLANs on multiple Gigabit (GbE) switches to a core Layer 3 GbE switch. Say you have three VLANs:

    - VLAN10 (10.0.0.0/24) Servers: your typical Windows DC/file server, Exchange, and an Accounting/SQL server.
    - VLAN20 (10.0.1.0/24) Sales: needs access to everything on VLAN10; doesn't need access to VLAN30 and vice versa.
    - VLAN30 Support: needs access to everything on VLAN10; doesn't need access to VLAN20 and vice versa.

    Here's how I think this should work in my head:

    - Switch #1: Ports 2-20 are assigned to VLAN20; all the Sales workstations and printers are connected here. Optional 10GbE combo port #1 is trunked to the L3 switch's 10GbE combo port #1.
    - Switch #2: Ports 2-20 are assigned to VLAN30; all the Support workstations and printers are connected here. Optional 10GbE combo port #1 is trunked to the L3 switch's 10GbE combo port #2.
    - Core L3 switch: Ports 2-10 are assigned to VLAN10; all three servers are connected here.

    A standard 10/100 x 24 switch usually comes with one or two 1GbE uplink ports; carrying this logic over to a 10/100/1000 x 24, the "optional" 10GbE combo ports that most higher-end switches offer shouldn't really be optional. Keep in mind I haven't tested anything yet; I'm primarily moving in this direction for growth (I don't want to buy 10/100 switches and have to replace them within a couple of years) and security (being able to control access between VLANs with L3 routing/packet-filtering ACLs). Does this sound right? Do I really need the 10GbE ports? It seems very non-standard and expensive, but it "feels" right when you think about 40 or 50 workstations trunking up to the L3 switch over standard 1GbE ports. If, say, 20 workstations want to download a 10 GB image from the servers concurrently, wouldn't the trunk be the bottleneck? At least if the trunk were 10GbE, you'd have 10 x 1GbE nodes able to reach their theoretical max. What about switch stacking? Some of the D-Links I've been looking at have HDMI interfaces for stacking. As far as I know, stacking two switches creates one logical switch, but is this just for management I/O, or do the switches use the (assuming it's HDMI 1.3) 10.2 Gbps for carrying data back and forth?
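
    As a rough worked check of the trunk math (ideal line rates, ignoring protocol overhead):

        20 clients x 1 Gb/s    = 20 Gb/s potential demand
        1 GbE uplink          ~= 1 Gb/s  (~125 MB/s)  -> ~6 MB/s per client; a 10 GB image takes ~27 min
        10 GbE uplink         ~= 10 Gb/s (~1.25 GB/s) -> ~62 MB/s per client; the same image takes ~2.7 min
        2 x 1 GbE LACP bundle ~= 2 Gb/s aggregate, but a single client's flow still tops out at 1 Gb/s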

    Read the article

  • JSP / Tomcat / Apache setup overview on Fedora Core

    - by Richard T
    Hi Folks, For someone with so much Java experience, boy do I feel clueless - thanks in advance for your help in my grokking the present (Feb, 2010) JSP environment. Here's what I am hoping to learn: Do I understand correctly that most people use Apache to "front-end" their Tomcat servers, such that Apache "talks" directly to web clients and "proxies" the Tomcat servers? Do I understand correctly that Apache isn't capable of serving JSP directly but requires a server (like Tomcat)? Is there an RPM package for Fedora Core so I don't have to build one myself? Or does Fedora Core's package installer do a good job on this one from source code? (Some do, some don't!) While I'm here asking questions: does Tomcat come with a working example that one can start hacking on as a way to get started quickly? If not, got a good suggestion? Thanks folks, RT
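
    To the first two questions: yes, a common pattern is Apache in front handling static content and TLS, with JSP requests handed to Tomcat over AJP; a minimal sketch of that proxy configuration (port 8009 is Tomcat's default AJP connector, and the Fedora package names may vary by release):

        # Fedora packaging (names approximate): yum install httpd tomcat6 tomcat6-webapps

        # httpd.conf / conf.d snippet: hand everything under /app to Tomcat's AJP connector
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        ProxyPass        /app ajp://localhost:8009/app
        ProxyPassReverse /app ajp://localhost:8009/app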

    Read the article

  • Mails bounce because of invalid character ('@') in username

    - by user1598585
    I have a working exim setup with virtual users, working alright, except for when I try to send email to certain servers. These servers reject my emails because of "#5.1.3 Invalid character ('@') in username". The offending header parts seem to be:

        Return-path: <"[email protected]"@smtp.example.com>

    and

        ...(envelope-from <"[email protected]"@smtp.example.com>)...

    The problem is that I cannot find where and why the usernames are being generated like this. My router for submission is:

        dnslookup:
          driver = dnslookup
          domains = ! +local_domains
          transport = remote_smtp
          ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
          no_more

    And the respective transport:

        remote_smtp:
          driver = smtp

    What can be producing this problem?
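
    One plausible cause (an assumption to verify, not something stated in the question) is that the submitting user is not trusted by Exim, so the sender it supplies is quoted as a local part and the local host name is appended, producing "user@domain"@smtp.example.com. Two main-configuration options control this behavior; a sketch:

        # Exim main configuration section - illustrative values
        # let these system users set the envelope sender outright
        trusted_users = mail : www-data

        # or allow untrusted callers to set a sender without it being rewritten
        untrusted_set_sender = *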

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 x 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008.

    So, ZFS, Zumastor or other?
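
    For reference, the ZFS send/receive workflow described above looks roughly like this (a sketch with made-up dataset names and dates; it presumes the VM images live on a ZFS dataset, which is exactly the part that is awkward on a Proxmox/LVM stack):

        # take a point-in-time snapshot every hour
        zfs snapshot tank/vms@2012-06-01-0700

        # first run: full stream to the backup host
        zfs send tank/vms@2012-06-01-0600 | ssh backup zfs receive backup/vms

        # afterwards: only the blocks written between two snapshots, compressed in transit
        zfs send -i tank/vms@2012-06-01-0600 tank/vms@2012-06-01-0700 \
          | bzip2 | ssh backup "bunzip2 | zfs receive backup/vms"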

    Read the article

  • DNS record question

    - by Just plain me
    So I have two Windows domains in separate forests. One forest consists of what is left of a bought-out company's domain. They have 5 servers that still hold important data and need to be worked with on a daily basis by a large group of employees. We have a forest-level trust set up to ease file access. We manually created DNS A records for the 5 servers so their short names would resolve to the IP addresses. I need the FQDNs to resolve too, though. Should I create CNAME records to achieve this? I hope this question makes sense; I am learning DNS on the fly... :)
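
    Either record type can be added from the DNS console or the command line; a sketch with dnscmd on the Windows DNS server (the zone, host names and address are placeholders):

        rem an A record for the server's FQDN in the old forest's zone
        dnscmd /recordadd oldcompany.local server1 A 10.10.1.5

        rem or a CNAME pointing the FQDN at an existing A record
        dnscmd /recordadd oldcompany.local server1 CNAME server1.newcompany.local.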

    Read the article

  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an Active-Active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, the setup is to be configured as follows: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers as a SQL cluster in active-passive mode for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate it if someone could reply with both the pros and cons of this setup.

    Read the article

  • Supervisor HTTP Server Port Issue.

    - by Catalina
    I have Supervisor set up to manage a few processes. It works perfectly fine when I boot my server; however, when I stop it and try to start it again, it fails and gives me this error message:

        * Starting Supervisor daemon manager...
        Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
        For help, use /usr/bin/supervisord -h
        ...fail!

    I'm running nginx on port 80 and 4 web servers on ports 8000, 8001, 8002 and 8003. Does anyone have any idea of what is going on? When I reboot, everything works fine.
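
    This error comes from supervisord's own control interface (the inet_http_server or unix_http_server socket), not from nginx or the managed apps; a common cause is that the previous supervisord never exited or left a stale socket behind. A few checks, as a sketch (the config and socket paths depend on the distribution):

        # is the old daemon still running?
        ps aux | grep [s]upervisord

        # what port/socket does the config point at, and who owns it?
        grep -A2 '\[inet_http_server\]\|\[unix_http_server\]' /etc/supervisor/supervisord.conf
        sudo netstat -tlnp | grep 9001

        # remove a stale control socket before restarting (path is illustrative)
        sudo rm -f /var/run/supervisor.sock
        sudo service supervisor start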

    Read the article

  • Unable to connect to a remote SQL Server Instance over a VPN

    - by Jack Njiri
    I'm running SQL Server 2005 on two different servers running Windows XP. The two servers are in different physical locations and are connected via a dedicated point-to-point data link in a virtual private network (VPN). I'm only able to connect to the remote instance of SQL Server by specifying the IP address in the server name property; if I provide the actual server name, say 'ServerA', I get an error message. Everything works fine except configuring replication at the subscriber level, which requires the actual name of the instance, not an IP address or alias. I have already configured both instances to allow remote connections and I'm running the SQL Server Browser. How do I connect to the remote instance by providing the instance name? Alternatively, how do I configure a subscription to a remote publisher without supplying the remote instance name?
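
    Since name resolution across the VPN is what is failing, the usual workarounds are a static hosts entry for the remote server, or a client-side alias that maps the name to the IP; a sketch (the address is a placeholder):

        # C:\Windows\System32\drivers\etc\hosts on the subscriber
        10.8.0.5    ServerA

    Alternatively, a client-side alias created in SQL Server Configuration Manager (SQL Native Client Configuration -> Aliases), or via the older cliconfg.exe tool, can map 'ServerA' to the remote IP and port so the instance name resolves for replication.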

    Read the article
