Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 318/500 | < Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >

  • How to tell httpd to preserve the proxied error message?

    - by ZNK - M
    I have an httpd server proxying requests to two different Tomcat servers. One of my servers handles authentication and returns a specific HTTP error code, 521, when the user already has a running session. My issue is that httpd automatically maps this 521 error code to a 500 (Internal Server Error), and then my client cannot handle it properly. I have tried disabling ProxyErrorOverride and removing /error/HTTP_INTERNAL_SERVER_ERROR.html.var, but it does not change anything. How can I tell httpd not to alter the proxied response?

      <IfModule proxy_module>
          ProxyPass /context1 http://127.0.0.1:8001/context1
          ProxyPass /context2 http://127.0.0.1:8002/context2
          ProxyPreserveHost Off
          ProxyErrorOverride Off
      </IfModule>

    Thanks in advance. Environment: httpd 2.2.22 (Win32) with mod_ssl, Tomcat 7.25, Windows 7 64-bit.
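    As a minimal sketch of one thing worth double-checking (the VirtualHost wrapper and ProxyPassReverse lines below are assumptions, not taken from the question): ProxyErrorOverride Off only helps if no other scope, such as a globally included conf file, sets it back to On, so keeping it next to the ProxyPass directives makes the intent explicit.

      <VirtualHost *:80>
          ProxyPreserveHost Off
          # Off = pass the backend's status line and body through unchanged
          ProxyErrorOverride Off
          ProxyPass        /context1 http://127.0.0.1:8001/context1
          ProxyPassReverse /context1 http://127.0.0.1:8001/context1
          ProxyPass        /context2 http://127.0.0.1:8002/context2
          ProxyPassReverse /context2 http://127.0.0.1:8002/context2
      </VirtualHost>

    Comparing what each hop actually returns can also narrow it down (URLs here are placeholders):

      curl -sI http://127.0.0.1:8001/context1/login   # straight to Tomcat
      curl -sI http://localhost/context1/login        # through httpd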

    Read the article

  • How do I install automake and autoconf on RedHat Enterprise 5?

    - by Kevin Sedgley
    I am attempting to install "uploadprogress" for a PHP application and have failed on dependencies: first on phpize, then php-devel, then on autoconf and automake. I have tried yum, and various repositories, with no luck. I think it's to do with the ultra-tight but annoying setup they have on Rackspace Cloud servers. Does anyone know where I can find a repository that I can point yum at that contains php-devel, autoconf, automake, etc.? Thanks ever so much. Release details: Red Hat Enterprise Linux Server release 5.3 (Tikanga) Linux version 2.6.18-128.7.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Wed Aug 19 04:17:26 EDT 2009 Linux Serv001 2.6.18-128.7.1.el5xen #1 SMP Wed Aug 19 04:17:26 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
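    For reference, a rough sketch of the usual route on EL5, assuming a third-party repository such as EPEL can be added (the release-RPM URL is a placeholder to replace with a real mirror):

      # add an extra repository (placeholder URL -- substitute your mirror's path)
      rpm -Uvh http://mirror.example.com/epel-release-5.noarch.rpm
      # pull the build chain in one go; phpize ships with php-devel
      yum install php-devel php-pear autoconf automake gcc make
      # uploadprogress is a PECL extension, so it should now build
      pecl install uploadprogress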

    Read the article

  • Motherboard rejects identical hard drive, one works the other doesn't

    - by Payson Welch
    I have an interesting situation. I have a Dell XS23-SB server that has four blades in it. The blades use Supermicro X7DWT motherboards and interface with the SATA drives through a backplane. I took two identical drives from a factory RAID 0 enclosure (GDrive); one works on all four servers, the other does not. I verified that they both work by plugging them into a hard drive cradle. This behavior can be repeated with other drives: some drives work and some don't. However, when I test them, they ALL work on my PC. What could cause this?

    Read the article

  • Network adapters reliability

    - by casey_miller
    Can you help me understand the reliability of network adapters? Most of the time, servers have at least two NICs bonded to provide a sort of HA, so that if one NIC fails, the second still does the job. I wonder which factors matter when you use network adapters. I know that the most important and weakest part of any computer system is usually storage (i.e. HDDs), but how reliable are network adapters, actually? There are more expensive adapters and cheaper ones. In which cases do they actually fail, and under what circumstances? Is it intensive usage, or simply total powered-on time? In your experience, how often have you found yourself replacing NICs because they failed? And what is the typical lifetime of commodity NICs? Thanks.
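    Since the question leans on bonding for HA, here is a minimal sketch of an active-backup bond with iproute2 (the interface names eth0/eth1, bond0 and the address are assumptions):

      # create an active-backup bond and enslave two NICs (run as root)
      ip link add bond0 type bond mode active-backup miimon 100
      ip link set eth0 down && ip link set eth0 master bond0
      ip link set eth1 down && ip link set eth1 master bond0
      ip link set bond0 up
      ip addr add 192.0.2.10/24 dev bond0
      # if either physical NIC dies, traffic keeps flowing over the survivor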

    Read the article

  • auspex LFS backups

    - by user1250465
    I have some backup tapes that came from an Auspex file server. The backups were written to tape with the SunOS version of the cpio command. Now that I need to restore them (of course there are no Auspex servers in existence any more), the backups won't restore because the headers are not standard. I have dumped the tape images to disk. PAX, CPIO, and TAR cannot read the images. I've tried all of the cpio format options. The errors I get are "name too long", "byte swapped in header", or just junk output. I can open up the images and read the contents of the files, but cannot restore the images. I have found that SunOS had a special header in cpio v2.5 images. I have found the source for cpio; now I need the definition of the SunOS header inside cpio images.
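    Given the "byte swapped in header" error, GNU cpio's swap options may be worth a try; a hedged sketch (the image filename is made up):

      # -b swaps both bytes and halfwords while extracting;
      # -s swaps only bytes, -S swaps only halfwords
      cpio -ivb < tape-image.cpio
      cpio -ivs < tape-image.cpio
      cpio -ivS < tape-image.cpio
      # or pre-swap the whole image and retry the usual formats
      dd if=tape-image.cpio of=tape-image.swapped conv=swab
      cpio -iv -H odc < tape-image.swapped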

    Read the article

  • Windows server 2003 mapping home drive wrong

    - by Sandman2010
    Hey all, first question... We have around 30 servers in an Active Directory environment, with 600 student computers and 100 staff desktops running XP SP2/3. The Windows Server 2003 box keeps the staff home drives on a NAS, and in the last few days, after some server updates, it has started mapping home drives to \\servername\home instead of \\servername\home\%username%. It's simple to remap the network drive, but it is annoying. We don't use a login script to map the home drive, but we do use a VB script for other network drives, and if we add the home drive mapping to that, it works. Shouldn't the profile option in the user's AD account map it correctly? Which do you all recommend: AD profile mapping or VBScript mapping for home drives? Thanks, Steven
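    For what it's worth, a minimal sketch of the VBScript side (the H: drive letter and the \\servername\home share are assumptions to adjust):

      ' map the home drive per user; UserName comes from the logon session
      Set objNetwork = CreateObject("WScript.Network")
      homePath = "\\servername\home\" & objNetwork.UserName
      ' the last argument updates the user profile so the mapping persists
      objNetwork.MapNetworkDrive "H:", homePath, True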

    Read the article

  • http server connectivity puzzle

    - by jpmartins
    I have been seeing a strange connection issue in the production environment. The setup has two IBM HTTP Servers (IHS) and a network IP load-balancer in front of them (round-robin). One moment the system is working fine; the next, requests stop arriving at the IHS. A telnet directly to port 80 of the IHS connects successfully, but a connection to port 80 through the IP of the load-balancer fails! The puzzle comes next: the network admins say the load-balancer is working fine. When we finally reboot the IHS servers, requests start flowing again... The situation has happened three times in the last month and no obvious pattern was found. Any debugging ideas?
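    A few things that could be checked on an IHS box while the problem is live, sketched as commands (the interface name and the balancer IP are placeholders):

      # is the listen queue overflowing? (dropped/overflowed SYN counters)
      netstat -s | grep -iE 'listen|overflow'
      # are connections piling up in SYN_RECV or CLOSE_WAIT on port 80?
      netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c
      # do the balancer's health-check probes actually reach the box?
      tcpdump -ni eth0 port 80 and host <load-balancer-ip>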

    Read the article

  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder: /www/projectname/web/ Logs go in: /www/projectname/log However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it. And the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
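    In the meantime, a rough sketch of getting a per-domain picture from the existing logs (the access.log filename is an assumption):

      # hits per project, biggest first
      wc -l /www/*/log/access.log | sort -rn | head
      # approximate bytes served per project; $10 is the response size
      # in the common/combined log format
      for f in /www/*/log/access.log; do
          awk -v f="$f" '{ b += $10 } END { printf "%12d  %s\n", b, f }' "$f"
      done | sort -rn | head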

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code monkey who has increasingly taken on sysadmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a Tier IV data center (literally across the street). This meant doing much more ourselves: things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause. The first thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

      Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
      Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
      Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
      Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
      Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                    Remaining  LifeTime(hours)  LBA_of_first_error
      Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure   90%        6576             3421766910
      Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure   90%        6087             3421766910
      Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure   10%        5901             656821791
      Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure   90%        5818             651637856
      Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART read errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and that somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
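    For readers following along, a sketch of pulling the per-disk SMART data behind the 3ware card on demand instead of waiting for smartd (the port range 0-23 is an assumption for a two-dozen-disk chassis):

      # query each physical disk attached to the 3ware controller twa0
      for port in $(seq 0 23); do
          echo "=== 3ware port $port ==="
          smartctl -a -d 3ware,$port /dev/twa0 | \
              grep -E 'Serial|Reallocated_Sector|Current_Pending|Raw_Read_Error'
      done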

    Read the article

  • Test if master DNS has transferred a copy to the slave

    - by su55
    Hello, I set up my master and slave using FreeBSD. I'm currently running BIND 9.x, and so far everything is working successfully. Just one small problem: I can't get the master copy of my DNS zone to transfer to the slave server. I included transfer-allow {192.168.1.111;}; // this is the slave server's IP. I ran the rndc reload command to check, but I don't see the copy in /etc/named/master/. Any help would be appreciated, and if you would like the layout of my DNS, I can provide that too.
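    A hedged aside: in BIND 9 the option is spelled allow-transfer (not transfer-allow), and the transferred copy appears on the slave in whatever file its own zone stanza names, not under the master's directory. A minimal sketch, with the zone name, master IP and file paths as assumptions:

      // on the master, inside the zone block
      zone "example.com" {
          type master;
          file "/etc/namedb/master/example.com.db";
          allow-transfer { 192.168.1.111; };   // the slave's IP
          also-notify   { 192.168.1.111; };
      };

      // on the slave
      zone "example.com" {
          type slave;
          file "/etc/namedb/slave/example.com.db";   // this is where the copy lands
          masters { 192.168.1.110; };                // assumed master IP
      };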

    Read the article

  • How to enable an active/active file server cluster in windows 2008 r2 Enterprise

    - by Phygg
    I've just created a cluster for my file servers in a Windows 2008 R2 Enterprise SP1 environment. The goal is an active/active cluster for web server data. How do I go about telling the cluster to be active on both nodes? Do I have to explicitly tell the cluster to be active/active? Here is a link to the instructions I followed when configuring the failover cluster: http://technet.microsoft.com/en-us/library/ff182326(WS.10).aspx. If anyone can help me grasp the concept (or maybe I'm way off and need a non-active node along with two active nodes to do this), I would appreciate it.

    Read the article

  • Linux-Vserver: How to upgrade Debian 5.0 to 6.0 on vservers and the main machine?

    - by Bartosz Kowalczyk
    I have a server running Debian Lenny. I installed Linux-VServer on it a few years ago, so I now have five vserver guests plus the main system. Each guest is also Debian Lenny. Now I want to upgrade from Lenny to Squeeze on these servers (each vserver and the main machine). How do you do this? Should I upgrade as with a normal system? Should I first upgrade every vserver and then the main machine, and do I have to restart all machines and vservers? Please advise me how to do it.
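    Not vserver-specific, but for reference a sketch of the standard Lenny-to-Squeeze sequence that would be repeated inside each guest and on the host (which to do first is exactly the open question here):

      # point APT at squeeze and upgrade in two stages, as the release notes suggest
      sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
      apt-get update
      apt-get upgrade        # minimal upgrade first
      apt-get dist-upgrade   # then the full distribution upgrade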

    Read the article

  • HP ML150 G6 upgrading RAM/CPU beyond specs?

    - by Morten Green Hermansen
    I am being told that some limits on some HP servers can be exceeded. Do any of you have any experience with that? An ML150 G6 is limited to 48GB RAM, but I have been talking to a German company that guarantees me that this server can be upgraded to 384GB RAM (using 32GB memory modules and 2 CPUs): http://www.compuram.de/en/memory,HP+%28-Compaq%29,Server,Proliant,ML150+G6.htm Can this really be true? The server I have is using E5504 CPUs, but will I be able to upgrade to any CPU that uses an LGA1366 socket? Anything from a low-wattage L5640 all the way to the 6-core, high-wattage versions like an X5650 (if cooling and power are adequate, of course)? Is there any limitation with power regulators or the chipset (Intel 5500)? I am looking forward to any reply. Thanks in advance and best regards, - Morten Green Hermansen, Fanitas

    Read the article

  • Need advice on storage hardware setup for a client with an 80TB-per-year data footprint increase

    - by dasko
    Hi everyone, I currently have a client that will be adding replicated data from satellite locations at a rate of approximately 80TB per year. That said, in year 2 we will have 160TB, and so on year after year. I want to do some sort of RAID 10 or RAID 6 setup, and I want to keep the servers to approximately 4U high and rack mounted. All suggestions on a replication strategy are welcome. We will want to have one instance of the data in house and the other co-located (any suggestions on co-location sites, too?). The obvious hardware will be something like a rack-mount server with hot-swap trays and dual Xeon-based processors. The data is used as an archive of information, and the files will be small. I can add to or expand this question if it is too vague. Thanks for looking.

    Read the article

  • IIS7.5 App Pool recycling. What is the best schedule for Recycling

    - by mikedopp
    I have been using IIS 7.5 since its release. I am also using Commerce Server 2007 SP2. Due to Commerce Server's need for memory and processor, I have the app pool the website is assigned to recycling at midnight every night. My question: what is the best timetable for recycling heavy web app pools? I am looking to keep speed up and not bump potential customers while recycling multiple times a day, if possible. Another issue is that every few days the same app pool will hang and I have to force a reset of IIS to get it working again.
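    Whatever schedule wins out, it can be set from the command line; a sketch using appcmd (the pool name "CommercePool" and the 03:00 time are assumptions):

      %windir%\system32\inetsrv\appcmd set apppool "CommercePool" ^
          /+recycling.periodicRestart.schedule.[value='03:00:00']
      rem and, if preferred, turn off the default interval-based recycle
      %windir%\system32\inetsrv\appcmd set apppool "CommercePool" ^
          /recycling.periodicRestart.time:00:00:00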

    Read the article

  • DNS Replication issue

    - by BillN
    We host the DNS for our domain. Two weeks ago, the developer requested that we set up a new zone, 'dev.ourdomain.com', and place two host records in it: my.dev.ourdomain.com and admin.dev.ourdomain.com. We added the zone to our DNS and added A records for the hosts. Now, a week later, some DNS servers like Google (8.8.8.8) and GTEI (4.2.2.2) will resolve the hosts, but others like OpenDNS (208.67.222.222) and AT&T Uverse (68.94.156.1) cannot resolve them. Any ideas?
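    A sketch of narrowing this down with dig, reusing the hostnames from the question (run from any machine with dig installed):

      # ask a resolver that fails and one that works
      dig @208.67.222.222 my.dev.ourdomain.com A +short
      dig @8.8.8.8        my.dev.ourdomain.com A +short
      # walk the delegation from the root to spot stale NS records or missing glue
      dig my.dev.ourdomain.com A +trace
      # confirm every listed nameserver actually answers for the new zone
      dig ourdomain.com NS +short
      dig @<each-ns-from-above> my.dev.ourdomain.com A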

    Read the article

  • University Assignment: Datacenter/Networking Infrastructure for Hosting Company [closed]

    - by TCB13
    My university assigned me to design (on paper) a data center for a web hosting company. The company should provide the following services: shared web hosting, dedicated servers, and VPS (virtual private servers). The bandwidth (as requested) is limited to 10 Gbps. Is there any good book or other material I can read (max 100 pages) about how to design a good data center for hosting: what the best practices are, what should be done from a (logical) network perspective, what security policies should be implemented, and how the data center should be built (physically)? Thank you ;)

    Read the article

  • 12-24 rack, 10-32 server thumbscrew. How to mount?

    - by JJ.
    We have just purchased an APC rack (model AR204A) with 12-24 threaded holes. We couldn't get a "square hole" model in time for our setup deadline. Unfortunately, our rack servers (Lenovo RD240) appear to have 10-32 thumbscrews for securing the server to the rack. We've successfully mounted the server rails to the rack using 12-24 screws; however, the 10-32 thumbscrews on the server front won't "grab" the 12-24 holes in the rack, so there is nothing to stop the server from sliding right off the rack if pushed from the back. The thumbscrews on the server don't seem to be removable, so we can't simply use 12-24 screws instead. Any suggestions on how to work around this problem? Is there any way to "convert" a 12-24 hole to a 10-32 thread (or a similar approach)? Thanks in advance.

    Read the article

  • MDaemon vs Exchange (2007-2010). Which way should we choose ?

    - by Deniz
    We are on the verge of a mail server decision. We currently use two mail servers: MDaemon 10 and Exchange 2003. We are planning to move to a single, company- and customer-wide solution. Our main candidates are MDaemon 11 and Exchange 2007 or 2010. We would like to learn about other users' experiences with those solutions: server-side experience, user-side experience, TCO, support options, etc. And are there other solutions (maybe MDaemon 11 + Exchange, or anything else) you could suggest?

    Read the article

  • Can't access dfs namespace over vpn

    - by cpf
    Hi Server Fault, I've recently configured two servers in AD at the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All is well, but when users connect to either site through VPN (from home, for example), they can't use the domain-based path \\domain.com\data. Internally this works perfectly, and resolving domain.com when connected through the VPN returns the correct IP. I've tried Google to figure things out; what I found is that more people have this issue, but no real solution. Can anyone explain why this is happening? A solution would be especially helpful! Thanks in advance.
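    A sketch of checks to run from a VPN-connected client (standard Windows tools; \\fileserver1\data is a placeholder for a server-specific path):

      rem can the client locate a domain controller over the VPN at all?
      nltest /dsgetdc:domain.com
      rem what referral targets is the DFS client receiving?
      rem (first form is the older dfsutil syntax, second is Vista/2008-and-later)
      dfsutil /pktinfo
      dfsutil cache referral
      rem does a server-specific path work even when \\domain.com\data fails?
      dir \\fileserver1\data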

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

      global
          maxconn 4096   # Total Max Connections. This is dependent on ulimit
          daemon
          nbproc 4       # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.
      defaults
          mode tcp
      listen smtp_proxy 199.83.95.71:25
          mode tcp
          option tcplog
          balance roundrobin   # Load Balancing algorithm
          ## Define your servers to balance
          server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
          server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    And when I start the haproxy service I get this error:

      Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting.

    Please tell me where I am making a mistake. Help will be appreciated.
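    "cannot bind socket" usually means the address is not configured on the host or something else already owns the port; a sketch of the checks, reusing the IP and port from the config above:

      # is 199.83.95.71 actually assigned to an interface on this host?
      ip addr | grep 199.83.95.71
      # is something else (a local MTA, for instance) already listening on :25?
      netstat -tlnp | grep ':25 '
      # if so, move or stop that service, or bind the listen section to a
      # different address/port before starting haproxy again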

    Read the article

  • Server 2003 Functional Domain DFS Replication Problem (Files being moved to conflicted folder for no reason)

    - by Az
    We have two Windows 2003 servers configured with a DFS namespace, and we are running into problems with the redirected profiles we have set up. Basically, one server is the FSMO master for all roles, and we have another DC that is the DFS namespace primary server. We have profile redirection set up using the \\dfsnamespace\userprofile formula. The FSMO master DC locks up occasionally (don't ask :), and when it does and we bring it back up, all of the user profiles hosted on the DFS namespace get overwritten when a user logs in. The current profile gets moved to the conflicting and deleted items folder. This strikes me as really odd, considering the whole point of using DFS was to provide some redundancy in case one server went down. Can anyone help? Thanks in advance! -Nate

    Read the article

  • How can I uninstall a clustered SQL instance if the cluster has been destroyed?

    - by Bob
    This is my first time going through this scenario, and apparently I did it very wrong. On the DB servers I deleted the cluster group that held SQL and Reporting Services. I then destroyed the cluster. Then I tried to uninstall SQL. No dice. SQL still thinks it's part of the non-existent cluster and will not let me uninstall it. I went into the Maintenance menu of the SQL setup and tried to Remove Node... nope. If I can't find a way to get SQL off the box, I will have to rebuild the OS.

    Read the article

  • VMware two vSwitches Guests can't communicate between them

    - by Aaron R.
    I have some servers in a two-vSwitch configuration. From VMGuest1, I am not able to ping either VMGuest3 or VMGuest4. I can, however, ping Host1 and Host2, which are attached to pSwitch1. The behavior is the same with VMGuest3 or 4 trying to ping VMGuest1 or 2. I don't have promiscuous mode enabled for any of these switches, nor do I have a bridge set up inside ESXi for the virtual switches. I know that one of these options is usually necessary when trying to get connectivity between two virtual switches. These switches are connected, however, through their respective physical switches, which are bridged together. Ping just times out, and the ARP requests look like this:

      [root@vmguest1:~]# arp -a vmguest3
      vmguest3.example.com (1.2.3.4) at <incomplete> on eth0
      [root@vmguest1:~]# arp -a host1
      host1.example.com (1.2.3.5) at 00:0C:64:97:1C:FF [ether] on eth0

    VMGuest1 can reach hosts on pSwitch1, so why can't it get to hosts on vSwitch1 through pSwitch1 the same way?

    Read the article

  • Torrent: Webseed or seeding client on server?

    - by Eliasdx
    I want to share a file using BitTorrent that I also offer over HTTP. The torrent should be seeded by a dedicated server, a virtual server, and the people who are downloading it, to lower bandwidth costs. My question: should I set up a BitTorrent client (rtorrent) on the servers and let them seed the file, or should I use webseeds? I also want to limit the bandwidth the servers use to seed, which is possible with rTorrent. How many BitTorrent clients support webseeds? I found the feature in µTorrent and had never heard of it before.

    Read the article
