Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • test if master DNS has transferred copy to slave

    - by su55
    Hello, I set up my master and slave DNS servers on FreeBSD. I'm currently running BIND 9.x, and so far everything is working. Just one small problem: I can't get the master copy of my zone to transfer to the slave server. I included transfer-allow {192.168.1.111;}; // this is the slave server's IP. I ran the rndc reload command to check, but I don't see the copy in /etc/named/master/. Any help would be appreciated, and if you'd like to see the layout of my DNS, I can provide that too.
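
    For reference, a minimal sketch of the two zone stanzas involved (the zone name, file paths and the master's address are assumptions; note that the BIND 9 option is spelled allow-transfer):

      // on the master
      zone "example.com" {
          type master;
          file "/etc/named/master/example.com.db";
          allow-transfer { 192.168.1.111; };   // the slave's IP
          notify yes;
      };

      // on the slave
      zone "example.com" {
          type slave;
          masters { 192.168.1.110; };          // the master's IP (assumed)
          file "/etc/named/slave/example.com.db";
      };

    The transferred copy is written on the slave, to the file named in the slave's own zone statement, so that is where to look; running rndc retransfer example.com on the slave forces a fresh transfer for testing.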

    Read the article

  • 12-24 rack, 10-32 server thumbscrew. How to mount?

    - by JJ.
    We have just purchased an APC rack (model AR204A) with 12-24 threaded holes. We couldn't get a "square hole" model in time for our setup deadline. Unfortunately, our rack servers (Lenovo RD240) appear to have 10-32 thumbscrews for securing the server to the rack. We've successfully mounted the server rails to the rack using 12-24 screws; however, the 10-32 thumbscrews in the server front won't "grab" the 12-24 holes in the rack, so there is nothing to stop the server from sliding right off the rack if pushed from the back. The thumbscrews on the server don't seem to be removable, so we can't simply use 12-24 screws instead. Any suggestions on how to work around this problem? Is there any way to "convert" a 12-24 hole to a 10-32 thread (or a similar approach)? Thanks in advance.

    Read the article

  • HTTP server connectivity puzzle

    - by jpmartins
    I have been seeing a strange connection issue in the production environment. The setup has two IBM HTTP Servers (IHS) and a network IP load balancer in front of them (round-robin). One moment the system is working fine; the next, requests stop arriving at the IHS. Telnet directly to port 80 of the IHS is established successfully, but connection to port 80 through the IP of the load balancer fails! The puzzle comes next: the network admins say the load balancer is working fine. When we finally reboot the IHS servers, requests start flowing again... The situation has happened three times in the last month and no obvious pattern was found. Any debug ideas?
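
    A debugging sketch for the next incident, comparing the direct path with the load-balanced one (host names and the health-check path are placeholders):

      # hit each IHS node directly, then the load-balancer VIP
      curl -sv -o /dev/null http://ihs-node1/health.html
      curl -sv -o /dev/null http://ihs-node2/health.html
      curl -sv -o /dev/null http://lb-vip/health.html

      # on an IHS node: are connections from the balancer arriving and piling up?
      tcpdump -ni any port 80 and host <lb-address>
      netstat -an | grep ':80 ' | awk '{print $6}' | sort | uniq -c

    If the tcpdump shows SYNs arriving from the balancer with no response, the problem is on the IHS side (listen backlog, MaxClients); if nothing arrives at all, it points back at the balancer despite what its status page says.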

    Read the article

  • OpenLDAP 2.4 on CentOS 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an OpenLDAP 2.4 server on CentOS 6 whose config I copied from the OpenLDAP 2.3 servers I run on CentOS 5 machines. On OpenLDAP 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the OpenLDAP 2.4 setup. I have configured it with loglevel -1, but I have not seen any clue as to what might be wrong, and reading the OpenLDAP 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, because if I run the service manually with "/usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"", the server does listen on port 636 and I can query it with "ldapsearch -H ldaps://myserver:636". Is there something I am missing to get the server to listen on port 636 without having to launch it manually every time? Is this linked to CentOS 6 or to OpenLDAP 2.4? Thank you. Cheers,
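
    One likely culprit (an assumption worth checking, not a confirmed diagnosis): on CentOS 6 the slapd init script takes its listener URLs from /etc/sysconfig/ldap rather than from the TLS directives, so ldaps:/// has to be enabled there. A sketch:

      # /etc/sysconfig/ldap -- depending on the package minor version this is either
      SLAPD_LDAPS=yes
      # or an explicit URL list:
      SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"

      service slapd restart
      netstat -tlnp | grep :636
      ldapsearch -H ldaps://myserver:636 -x -b "" -s base

    That would also explain why launching slapd by hand with -h "ldap:/// ldaps:/// ldapi:///" works: the -h flag supplies exactly the URLs the init script otherwise omits.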

    Read the article

  • Can't access DFS namespace over VPN

    - by cpf
    Hi Server Fault, I've recently configured two servers in AD at the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All well, but when users connect to either site through VPN (from home, for example) they can't use the domain-based path: \\domain.com\data. Internally this works perfectly, and resolving domain.com when connected through VPN returns the correct IP. I've tried Google to figure things out; what I found was that more people have this issue, but no real solution. Can anyone explain why this is happening? A solution would be especially helpful! Thanks in advance.
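
    A few checks worth running from a connected VPN client (dfsutil is assumed to be installed with the DFS management tools), since domain-based namespaces only work if the client can resolve and reach the namespace/domain controllers returned in the referral:

      nslookup domain.com
      dfsutil /pktinfo
      dfsutil /spcflush

    If /pktinfo shows referral targets the VPN client cannot reach (for example DC addresses not routed over the tunnel), that is usually the explanation.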

    Read the article

  • Apply Group Policy to Remote Desktop Services users but not when they log on to their local system

    - by Kevin Murray
    Running Windows Server 2008 Service Pack 2 with the Remote Desktop Services role. I want to hide the server's drives using a GPO, but not the users' local drives when they are logged on to their local systems. Using a GPO, I went to "User Configuration - Policies - Administrative Templates - Windows Components - Windows Explorer" and enabled "Hide these specified drives in My Computer" and "Prevent access to drives from My Computer", and in both used "Restrict all drives". Then under "Security Filtering" for the GPO, I restricted it to the system running Remote Desktop Services and the specific users who will be using RDS. I then applied the GPO to our domain and it worked a little too well. Not only was I successful in getting the GPO to work for RDS users, but it also affected those same users on their local systems as well. I've tried everything I can think of, but can't figure out how to apply this just to the RDS sessions and not to their local systems. What am I missing?
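
    One common pattern for this situation (a sketch of the usual approach, not a confirmed fix for this particular domain) is to link the GPO only to an OU containing the RDS server and enable loopback processing, so the user settings in it apply only during sessions on that machine; the result can then be checked from inside an RDS session:

      rem In the GPO linked to the RDS server's OU:
      rem   Computer Configuration > Policies > Administrative Templates > System > Group Policy
      rem   "User Group Policy loopback processing mode" = Enabled (Merge)

      rem From inside a Remote Desktop session, verify which GPOs applied to the user:
      gpupdate /force
      gpresult /scope user /r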

    Read the article

  • Windows 8.1 killed my wifi

    - by char1es
    Was running Windows 8 on a Lenovo G780 and updated to Windows 8.1. Wi-Fi does not work anymore; I always receive a "DNS server not responding" error. I have tried using public DNS servers from Google, but still no results. I've restarted my router with no results. All other devices on my network are having no trouble at all. I've tried updating the wireless driver, but the manufacturer's website claims that the Windows 8.1 driver should come with the update from Windows, so I can't find a wireless driver... Anyone else having this error, and does anyone have any ideas on how to fix it? EDIT: here are the driver details: Broadcom 802.11n Network Adapter Provider: Microsoft Driver Date: 2013-05-31 Driver Version: 6.30.223.102 Digital Signer: Microsoft Windows Thanks
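
    Not a confirmed fix, but the usual first-round resets for a post-upgrade "DNS server not responding" error, run from an elevated command prompt and followed by a reboot (the interface name "Wi-Fi" is an assumption; check it with netsh interface show interface):

      ipconfig /flushdns
      netsh winsock reset
      netsh int ip reset
      rem pin an explicit DNS server on the wireless adapter while testing
      netsh interface ip set dns name="Wi-Fi" static 8.8.8.8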

    Read the article

  • need advice on storage hardware setup for a client with an 80TB-per-year data footprint increase

    - by dasko
    Hi everyone, I currently have a client that will be adding replicated data from satellite locations at a rate of approximately 80TB per year. That said, in year 2 we will have 160TB, and so on year after year. I want to do some sort of RAID 10 or RAID 6 setup. I want to keep the servers to approximately 4U high and rack mounted. All suggestions on a replication strategy are welcome. We want to have one instance of the data in house and the other co-located (any suggestions on co-location sites too?). The obvious hardware will be something like a rack-mount server with hot-swap trays and dual Xeon-based processors. The data is used for archiving information; files will be small. I can add to or expand this question if it is too vague. Thanks for looking.
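
    A quick sanity check on drive counts for the first year's 80TB, assuming 4TB drives (a sketch; substitute the real drive size). RAID 10 gives back half the raw capacity, RAID 6 gives all but two drives per group:

      drive_tb=4; need_tb=80
      echo "RAID 10 drives needed: $(( (need_tb * 2 + drive_tb - 1) / drive_tb ))"
      echo "RAID 6  drives needed: $(( (need_tb + drive_tb - 1) / drive_tb + 2 ))"

    Comparing those counts against the bay count of whatever 4U chassis is under consideration shows quickly whether one chassis per year of growth is realistic.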

    Read the article

  • Linux-VServer: How to upgrade Debian 5.0 to 6.0 on vservers and the main machine?

    - by Bartosz Kowalczyk
    I have a server running Debian Lenny. I installed Linux-VServer on this server a few years ago. In summary, I now have 5 vserver guests plus the main system. Each guest is Debian Lenny. Now I want to upgrade from Lenny to Squeeze on these servers (each vserver and the main machine). Have you done this? Should I upgrade the usual way? Should I upgrade every vserver first and then the main machine, and do I have to restart all the machines and vservers? Please advise me how to do it.
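
    Not Linux-VServer-specific advice, but the standard Lenny-to-Squeeze steps Debian documents for any one system look like this (guests can usually be entered from the host with vserver <name> enter; whether to do the host or the guests first is worth confirming against the Linux-VServer and Debian Squeeze release notes, since the host kernel changes):

      sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
      apt-get update
      apt-get upgrade
      apt-get dist-upgrade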

    Read the article

  • Do two port-forward rules translate to "and"?

    - by blsub6
    I just set up an Exchange server to replace my DeskNow mail server, and I want to start testing internet mail delivery to the Exchange server. I can only point the MX records in my DNS at my one external IP address, so I was thinking that I could set up a rule on my internet-facing firewall that port-forwards the SMTP packets to two different servers. My question is: if I do that, will the SMTP packets be forwarded to just the first internal IP on the list? Or will the packet be cloned and sent to both IPs?
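
    For what it's worth, typical NAT implementations rewrite each packet to a single destination; they do not duplicate it, and only the first rule that matches is applied. An iptables-style sketch of why the second rule never fires (addresses are placeholders, and the actual firewall in use here isn't stated):

      # the first matching DNAT wins; port-25 traffic never reaches the second rule
      iptables -t nat -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.0.10:25
      iptables -t nat -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.0.11:25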

    Read the article

  • Looking at desktop virtualization, but some users need 3D support. Is HP Remote Graphics a viable solution?

    - by Ryan Thompson
    My company is looking at desktop virtualization and planning to move all of the desktop compute resources into the server room or data center, providing users with thin clients for access. In most cases, a simple VNC or Remote Desktop solution is adequate, but some users are running visualizations that require 3D capability, something that VNC and Remote Desktop cannot support. Rather than making an exception and providing desktop machines for these users, complicating our rollout and future operations, we are considering adding servers with GPUs and using HP's Remote Graphics to provide access from the thin client. The demo version appears to work acceptably, but there is a bit of a learning curve, it's not clear how well it would work for multiple simultaneous sessions, and it's not clear whether it would be a good solution for non-3D sessions. If possible, as with the hardware, we want to deploy a single software solution instead of a mishmash. If anyone has had experience managing a large installation of HP Remote Graphics, I would appreciate any feedback you can provide.

    Read the article

  • Torrent: Webseed or seeding client on server?

    - by Eliasdx
    I want to share a file over BitTorrent which I also offer over HTTP. The torrent should be seeded by a dedicated server, a virtual server, and the people who are downloading it, to lower bandwidth costs. My question: should I set up a BitTorrent client (rtorrent) on the servers and let them seed the file, or should I use webseeds? I also want to limit the bandwidth the server uses for seeding, which is possible with rTorrent. How many BitTorrent clients support webseeds? I found the feature in µTorrent and had never heard of it before.
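
    The two options can also be combined: a torrent can carry web seed URLs (BEP 19) pointing at the existing HTTP copy and still be seeded by rtorrent with an upload cap. A sketch, assuming mktorrent is available and with placeholder URLs:

      mktorrent -a http://tracker.example.org/announce \
                -w http://files.example.org/bigfile.iso \
                -o bigfile.torrent bigfile.iso

      # ~/.rtorrent.rc on the seeding servers: cap upload bandwidth (KB/s)
      upload_rate = 500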

    Read the article

  • IIS 7.5 App Pool recycling: what is the best schedule for recycling?

    - by mikedopp
    I have been using IIS 7.5 since its release. I am also using Commerce Server 2007 SP2. Due to Commerce Server's need for memory and processor, I have the app pool the website is assigned to recycling at midnight every night. My question is: what is the best timetable for recycling heavy web app pools? I want to keep the site fast without bumping potential customers if I recycle multiple times a day. Another issue is that every few days the same app pool will hang and I have to force a reset of IIS to get it working again.
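
    A sketch of one possible adjustment using appcmd: drop the fixed midnight restart, add specific off-peak recycle times, and add a private-memory limit as a safety valve (pool name and thresholds are assumptions; by default IIS does overlapped recycling, so the new worker process starts before the old one is retired):

      %windir%\system32\inetsrv\appcmd set apppool "CommercePool" /recycling.periodicRestart.time:00:00:00
      %windir%\system32\inetsrv\appcmd set apppool "CommercePool" /+recycling.periodicRestart.schedule.[value='03:00:00']
      %windir%\system32\inetsrv\appcmd set apppool "CommercePool" /recycling.periodicRestart.privateMemory:1572864

    For the app pool that hangs every few days, the recycling event-log options (logEventOnRecycle) and a memory-based recycle threshold are usually more informative than simply recycling more often on a timer.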

    Read the article

  • Deactivating website in ISPConfig shows another site

    - by Mattias
    A long time ago, one of our clients set up a subdomain pointing to our IP address. We added a website (Sites > Website > Add new website) that points to one of our servers. The project is now closed and the client wants us to remove the content. When we deactivate this site (by unticking Active), it automatically defaults to another website we have in our list (!?). So, because the client is still pointing to our IP, when entering project.client.com another client's project shows up by default. How is this possible? Any suggestions? I can of course give you more details if you tell me what details you need. Thanks
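
    This is consistent with how Apache chooses name-based virtual hosts: when no vhost matches the requested hostname, the first vhost defined for that IP is served. A sketch of a catch-all default vhost that serves an empty page instead of another client's site (paths are assumptions, and ISPConfig's own vhost management should be kept in mind so the file isn't overwritten):

      # e.g. /etc/apache2/sites-enabled/000-default -- must sort before the other vhosts
      <VirtualHost *:80>
          ServerName default
          DocumentRoot /var/www/empty-default
      </VirtualHost>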

    Read the article

  • Should I limit end-user gigabit ports to avoid saturating uplink/trunk connections?

    - by Joel Coel
    We have a campus with 16 buildings and older 850nm 1Gbps fiber links between the buildings, which all come back to a core switch for our servers that also uses 1Gbps ports. We're finally starting to replace our aging 10/100 end-user switches, and much of what we're looking at is 1Gbps units. My question is: since the trunk/uplink lines are still 1Gbps, if I were to install 1Gbps switches for end users, should I limit the ports to 100Mbps until I can also upgrade the trunks, to avoid letting a badly behaved host saturate a trunk line (since we're a college, we have plenty of misbehaving hosts) and thereby create a DoS situation for a building, or will TCP congestion control typically take care of that for me? What about heavy UDP traffic (games, video chats, even a small amount of BitTorrent)?
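
    If limiting edge ports does turn out to be the interim answer, it is a cheap, reversible change on most managed switches. A hypothetical IOS-style snippet (the vendor, model and interface numbering here are pure assumptions):

      interface range GigabitEthernet1/0/1 - 48
       speed 100
      !
      interface GigabitEthernet1/0/49
       description uplink to building fiber - leave at 1000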

    Read the article

  • Server 2003 Functional Domain DFS Replication Problem (Files being moved to conflicted folder for no reason)

    - by Az
    We have two Windows 2003 servers configured with a DFS namespace, and we are running into problems with the redirected profiles we have set up. Basically, one server is the FSMO master for all roles, and we have another DC that is the DFS namespace primary server. We have profile redirection set up using the \\dfsnamespace\userprofile formula. The FSMO master DC locks up occasionally (don't ask :), and when it does and we bring it back up, all of the user profiles hosted on the DFS namespace get overwritten when a user logs in. The current profile gets moved to the conflicting and deleted items folder. This strikes me as really odd, considering the whole point of using DFS was to provide some redundancy in case one server went down. Can anyone help? Thanks in advance! -Nate

    Read the article

  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder: /www/projectname/web/ Logs go in: /www/projectname/log However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it. And the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
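
    One low-tech way to rank the projects during a spike, assuming each log directory contains an Apache/Nginx-style access log called access.log and that the bytes-sent value is the 10th field (both are assumptions to adjust to the real log format and file names):

      for d in /www/*/log; do
          printf '%-40s %s\n' "$d" \
              "$(tail -n 10000 "$d"/access.log 2>/dev/null | awk '{sum += $10} END {print sum+0}')"
      done | sort -k2 -rn

    The output lists each project's log directory with the bytes served across its last 10,000 requests, which usually makes the offender obvious without watching tail -f on every vhost.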

    Read the article

  • Is there any danger in disabling Windows Firewall on an Azure worker role?

    - by NullReference
    I'm trying to troubleshoot a bug on our Azure worker role where we occasionally get the error "Unable to read data from the transport connection: An established connection was aborted by the software in your host machine". This error occurs when we are connecting to outside resources like Google auth servers. A few people have recommended disabling the firewall/antivirus on the server. I'm just wondering what kind of security risk we would take by doing this. The server doesn't have IIS installed, but would it be vulnerable to hacking without the firewall? Thanks
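
    Before switching the firewall off permanently, it may be worth confirming it is actually the component aborting the connections; the state can be inspected and toggled per profile from an elevated prompt, which keeps the experiment reversible:

      netsh advfirewall show allprofiles
      rem temporarily disable for a test window only, then turn it back on:
      netsh advfirewall set allprofiles state off
      netsh advfirewall set allprofiles state on

    Inbound exposure on a worker role is also constrained by the endpoints declared in the service definition, which is part of why opinions differ on how risky disabling the host firewall really is.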

    Read the article

  • SSL Certificates, two-way authentication and loadbalancers

    - by 5arx
    We're looking to implement two-way authentication with client certificates for a privileged subset of our application users. The idea is that if a certificate is detected, the user will be asked for an additional password/PIN, and that will be used to verify the certificate and the user. Ordinary users will continue to authenticate themselves via the standard login mechanism. Our production environment (hosted by a well-known company) comprises load-balanced application servers, and I'm unclear how this setup will handle the certificates and whether there are any pitfalls I should be aware of. I would very much appreciate some thoughts, comments or real-world advice on the subject.
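
    The usual pitfall to check first is where TLS terminates: if the load balancer terminates SSL, the client certificate never reaches the application servers unless the balancer either passes TLS through or forwards the certificate (for example in a header it injects). A quick probe of the production endpoint with a test client certificate (host and file names are placeholders):

      openssl s_client -connect app.example.com:443 -cert client.pem -key client.key -state

    If the handshake output never shows a certificate request from the server (no "Acceptable client certificate CA names" section), the balancer is answering the handshake itself, and the client-certificate step would have to be configured there rather than on the app servers.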

    Read the article

  • telnet - is there a maximum line limit?

    - by benc
    I am working on several servers that use HTTP for transport of commands. What I have encountered is that some of the commands I am trying to issue by hand are very long GETs, several lines long, and when I telnet from my Mac to my Solaris system, I cannot seem to cut and paste the line successfully. I get a couple of bouncing sounds (which I assume is a Control-G bell) and then it never pastes everything. From trying to break it up into smaller pieces, I am getting the impression that telnet, or my bundled telnet client or server, has a maximum line length that I had never bumped into before. I did some googling and superusering, but did not find anything definitive.
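
    A way to sidestep the interactive line-editing buffer entirely, whatever its limit turns out to be, is to put the request in a file and feed it to nc, or let curl build the request (host and path below are placeholders):

      printf 'GET /very/long/path?a=1&b=2 HTTP/1.1\r\nHost: target.example.com\r\nConnection: close\r\n\r\n' > req.txt
      nc target.example.com 80 < req.txt

      # or simply
      curl -v 'http://target.example.com/very/long/path?a=1&b=2'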

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on sysadmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves: things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and more. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at Server Fault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. The first thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

      Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
      Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
      Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
      Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
      Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure  90%        6576             3421766910
      Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure  90%        6087             3421766910
      Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure  10%        5901             656821791
      Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure  90%        5818             651637856

    So I went to check the Cacti graphs for the disks in the array. They show that, yes, disk 7 is slipping away just like syslog says it is. But they also show that disk 8's SMART read errors are fluctuating, and there are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate with the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and that somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
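
    For digging further into the individual drives behind the 3ware controller, smartmontools can address each port directly and 3ware's own CLI reports per-unit and per-drive status (controller and port numbers below are assumptions to match to the real layout):

      smartctl -a -d 3ware,7 /dev/twa0     # full SMART data for the drive on port 7
      smartctl -a -d 3ware,8 /dev/twa0     # and for the suspect neighbour on port 8
      tw_cli /c0 show                      # controller, unit and drive status, if tw_cli is installed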

    Read the article

  • How to enable an active/active file server cluster in Windows 2008 R2 Enterprise

    - by Phygg
    I've just created a cluster for my file servers in a Windows 2008 R2 Enterprise SP1 environment. The goal is an active/active cluster for web server data. How do I go about telling the cluster to be active on both nodes? Do I have to tell the cluster to be active/active? Here is a link to the instructions I followed when configuring the failover cluster: http://technet.microsoft.com/en-us/library/ff182326(WS.10).aspx So if anyone can help me grasp the concept, or if I'm way off and I need a node that is not active along with two active nodes to do this, I would appreciate it.
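
    As a point of reference (a sketch of the usual approach, not necessarily the right design for this data): a Windows failover cluster has no global active/active switch; each clustered role is online on exactly one node at a time, so "active/active" typically means creating two file server roles and letting each node own one, with either node able to host both during a failure. Ownership can be inspected and moved with the FailoverClusters PowerShell module (role names are assumptions):

      Import-Module FailoverClusters
      Get-ClusterGroup
      Move-ClusterGroup "FileServer-Web1" -Node NODE1
      Move-ClusterGroup "FileServer-Web2" -Node NODE2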

    Read the article

  • Avoiding spam filters on my CentOS 5.5 64bit server?

    - by Andrew Fashion
    I run a social network on my web server, with about 15,000 members right now. My administration section lets me mass-email all my users. Currently it uses the built-in PHP mail function. What is the best way to configure my server so this mail doesn't get caught by spam filters? Can I install anything on the server, or should I just make the social network use SMTP? The admin panel lets me choose SMTP or the built-in mail function. I'm not too familiar with mailing from servers, as I usually use AWeber for my mailing, but I cannot use AWeber for this because they will not let me just import 15,000 emails. Let me know, thanks.
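
    Whichever sending path is chosen, deliverability depends heavily on DNS: the sending IP should have matching forward and reverse DNS, and the domain should publish an SPF record (and ideally sign with DKIM). A sketch of the checks, with placeholder IP and domain:

      dig +short -x 203.0.113.10        # reverse DNS should name the mail host
      dig +short TXT example.com        # should include an SPF record, for example:
      # "v=spf1 ip4:203.0.113.10 -all"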

    Read the article

  • Benefits of log rotation

    - by Manfred Moser
    I have been using log rotation for years and never thought much about it being a problem until I came across a question on Stack Overflow (http://stackoverflow.com/questions/1508734/disable-java-log-rotation/) where someone wants to disable log rotation. To me, with experience of build servers and even production servers having to be cleaned up manually because logs were not rotated, discs were filling up, and machines suddenly ground to a halt, that all seems crazy, but it occurred to me that maybe it is not so obvious after all. So what are the benefits of log rotation? And what are the drawbacks (e.g. is it more difficult to debug/analyze)? What tools do you find useful for working with rotated log files? Splunk, I assume, but what else?
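
    For context, a typical logrotate policy is only a few lines; this sketch (the path, retention and reload command are assumptions for an arbitrary app) keeps roughly eight weeks of compressed history and bounds disk usage without anyone having to think about it:

      # /etc/logrotate.d/myapp
      /var/log/myapp/*.log {
          weekly
          rotate 8
          compress
          delaycompress
          missingok
          notifempty
          postrotate
              /etc/init.d/myapp reload >/dev/null 2>&1 || true
          endscript
      }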

    Read the article

  • Low-speed web application: server problem or application?

    - by Ashian
    Hi, I have a web application written in ASP.NET (C#) with SQL Server 2005. We host it on two dedicated servers (IIS and SQL Server). For some months now, on some days of the week we get many reports about speed issues. We have some other applications on this server using the same database. When we have the speed problem, all applications on these servers have it, but applications on other servers in the same data center work correctly. RAM and CPU usage are OK. How can I check whether the problem is related to the internet connection or to my application design? Which parameters must be checked? Some other information: in the applications, users can upload several files to the server, each file up to 3 MB. We use a SQL web admin application on the same server that has the same problem; this is a standard application which works perfectly on other servers. Thanks
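
    One way to separate network problems from server-side ones is to log a handful of performance counters on both boxes during a slow day and compare them with a normal day; typeperf can do this from the command line (the counters below are standard Windows/ASP.NET counters, and the 15-second interval is an assumption):

      typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
               "\ASP.NET Applications(__Total__)\Requests/Sec" ^
               "\Network Interface(*)\Bytes Total/sec" -si 15 -o slowday.csv

    If request rates and network throughput stay flat while response times climb, the bottleneck is more likely in the application or database tier than in the internet connection.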

    Read the article
