Search Results

Search found 12072 results on 483 pages for 'x86 servers'.

Page 182/483 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • NFS Datastore Appears Empty!

    - by daemonchild
    Hi guys, I've got an NFS server problem. The datastore connects and appears to be valid both in the vSphere client and under /vmfs/volumes. The issue is that it appears to be empty! I can create files (e.g. touch /vmfs/volumes/nfs_common/thefile) and they are correctly written to the NFS store; I can verify this by looking on the NFS server itself. But the vmkernel only sees an empty datastore; the file disappears. Another FreeBSD box can mount the same NFS share and see the files correctly. Some useful data:

    - ESXi 4.0.0 Build 208167
    - NFS is unfsd running on a Buffalo Linkstation Pro Duo (a bit hacky, I know)
    - The share has file system permissions set to 777 at the moment

    My /etc/exports is as follows, and as I say it connects fine:

        /mnt/array1/ESX_Shared 192.168.16.0/255.255.255.0(insecure,rw,sync,no_root_squash,no_subtree_check)

    The ESXi servers can also successfully mount NFS shares from other NFS servers. Any ideas, guys? Thanks, Tom
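
    For anyone hitting the same symptom, one quick sanity check is to re-list and re-add the datastore from the ESXi Tech Support Mode shell; a minimal sketch using the names from the post (the NFS server address is a placeholder, since it is not given above):

        # List configured NAS datastores and their mount state
        esxcfg-nas -l

        # Remove and re-add the datastore (label and share taken from the post)
        esxcfg-nas -d nfs_common
        esxcfg-nas -a -o <nfs-server-ip> -s /mnt/array1/ESX_Shared nfs_common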

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing and was hoping someone might be able to help. The problem I'm running into is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following:

    ARR:

    - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
    - Anonymous authentication turned on for the Default Web Site

    Web farm:

    - Two servers running IIS 7.5 on Windows 2008 R2
    - Three web sites set up using port binding to differentiate between virtual hosts; the ports being used are 8000, 8001, and 8002
    - Application pools for Windows Authentication all use a common domain account
    - SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?
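
    As a checkpoint for the SPN registrations described above, they can be listed and (re)created with setspn; the account and host names below are placeholders standing in for the values redacted in the post:

        rem List the SPNs currently registered to the shared app-pool account
        setspn -L DOMAIN\AppPoolAccount

        rem Register host and FQDN SPNs for one of the sites (port 8002)
        setspn -S http/virtualhost-name:8002 DOMAIN\AppPoolAccount
        setspn -S http/virtualhost-name.fully.qualified.domain:8002 DOMAIN\AppPoolAccount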

  • Deciphering Seagate drive model numbers?

    - by Stefan Lasiewski
    I'm comparing Seagate's Enterprise and Desktop drives for a variety of old and new servers. These servers come from different generations, so options like size (73GB, 2TB) and interface (SATA vs SAS 3.0Gbps vs SAS 6Gbps vs SCSI Ultra320) vary widely. I'm trying to compare the sizes, speeds and interfaces, but I'm getting thrown off by the different model numbers. Also, their website is not the best. Does anyone know of a documented explanation of the Seagate model numbers? And is there a single spreadsheet which compares the features for all drives (or all 'Enterprise' drives)? Seagate drives have model numbers like this:

    - Model ST3600057SS, 6-Gb/s SAS, 600 GB, None, at Cheetah® 15K Hard Drives
    - Model ST373455LW, Ultra320 SCSI, 73.4 GB, 68-Pin LW, at Cheetah® 15K Hard Drives
    - Model ST32000644NS, SATA 3Gb/s, 2 TB, None, at Constellation™ ES Hard Drives
    - Model ST973452SS, 6-Gb/s SAS, 73 GB, None, at Savvio® 15K Hard Drives
    - Model ST9200011FS, SATA 3Gb/s, 200 GB, at Pulsar™ Solid State Drives

    I understand the model numbers read something like this:

        ST - SOMETHING1 - SIZE - SOMETHING2 - INTERFACE

    Where the fields mean something like this:

    - ST: for 'Seagate'? 'Seagate Technologies'?
    - SOMETHING1: this field has a number, but I'm not sure what it represents.
    - SIZE: size in gigabytes. This is a number like '73' or '300' or '2000'.
    - SOMETHING2: this field also has a number, but I'm not sure what it means.
    - INTERFACE: this field seems to indicate the interface. 'SS' means SAS, 'FC' means Fibre Channel, but I don't see how to distinguish between 6Gbps SAS and 3Gbps SAS, or different SATA or FC speeds.

    I don't see a field which indicates the RPM (15K, 10K, 7.2K) etc. Is this part of the model number?
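
    For what it's worth, the layout hypothesized above can be tested mechanically; a minimal sketch in Python that separates only the three unambiguous parts (leading form digit, digit run, interface suffix), treating any finer split of the digit run as the post's own open question:

        import re

        # Hypothesized layout from the post: ST + leading digit + digit run + interface suffix.
        # Where SOMETHING1/SIZE/SOMETHING2 divide inside the digit run is unknown.
        PATTERN = re.compile(r"^ST(?P<lead>\d)(?P<digits>\d+)(?P<interface>[A-Z]+)$")

        for model in ["ST3600057SS", "ST373455LW", "ST32000644NS", "ST973452SS", "ST9200011FS"]:
            m = PATTERN.match(model)
            if m:
                print(model, m.groupdict())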

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically on the fly, from a template? For example, if I append a line like this to the KERNEL:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    then the script ks.cgi can determine what host this is (via the MAC address) and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it using our favorite configuration management product. We're experimenting with xCAT, but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.

    Update: A roll-your-own solution is discussed in the O'Reilly book Managing RPM-Based Systems with Kickstart and Yum, Chapter 3, "Customizing Your Kickstart Install: Dynamic ks.cfg", which echoes some of the comments in this thread:

        To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

            boot: linux ks=http://your.kickstart.server/gen_config?host-server25

        In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25.

    But where, oh where, is the code for ks.cgi?
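
    No canonical ks.cgi ships anywhere, but a minimal sketch of the flat-file variant the book describes might look like this; the host-map file, template path, and parameter names are all invented for illustration:

        #!/usr/bin/env python
        # ks.cgi: emit a per-host Kickstart file from a template (illustrative sketch)
        import cgi, string

        HOSTS = "/etc/ks-hosts.conf"        # hypothetical flat file: "hostname key=value ..."
        TEMPLATE = "/etc/ks-template.cfg"   # Kickstart template using $hostname/$ip/$nodetype

        form = cgi.FieldStorage()
        host = form.getfirst("host", "default")

        data = {"hostname": host, "nodetype": "development", "ip": ""}
        with open(HOSTS) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == host:
                    data.update(kv.split("=", 1) for kv in fields[1:] if "=" in kv)

        # URL parameters override the data store (e.g. ?NODETYPE=production&IP=192.168.2.80)
        for key in ("NODETYPE", "IP"):
            if form.getfirst(key):
                data[key.lower()] = form.getfirst(key)

        print("Content-Type: text/plain\n")
        with open(TEMPLATE) as f:
            print(string.Template(f.read()).safe_substitute(data))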

  • Vyatta server reboots by itself

    - by Fernando
    I have an issue regarding some hardware; maybe you can help me. First, I set up a Supermicro SuperServer SYS-5016I-NTF with an Intel Xeon X3470 and 4 GB of RAM, with a Hotlava Tambora 64G4 card (Intel 82599EB chipset, 4x 10G SFP+ ports). I installed Vyatta community edition 6.3 and used it as a router making BGP connections with two operators. No load at all; temperature ranges normal. But the issue is that it reboots by itself in a random way. Not very often, once every few days, but that is unacceptable for production purposes.

    So I tried different hardware and installed Vyatta community edition 6.3 on a Dell PowerEdge 2950, with a Xeon E5345 @ 2.33GHz and 4 GB of RAM, the same Vyatta configuration as the Supermicro server, and the same Hotlava card model (I bought two of them). Well, I get reboots with this equipment as well, at the same frequency as above. I have checked syslog: no strange entries until the boot process starts to be logged, so it seems the server reboots suddenly. I have installed the latest driver for the chipset of the Hotlava card. The servers are placed in a datacenter with UPS. So finally, two things in common in both servers:

    - The Hotlava card. Has anyone had issues with this card, or the chipset? Could it be this card?
    - Vyatta 6.3 community edition. I don't think this is the problem; it's a regular Debian with packages to glue together different services.

    Or maybe it's something I am missing. Any ideas or suggestions? Thank you very much... Fernando

  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS2011 machine to Message Labs (server-13.tower-134.messagelabs.com):

        553-Message filtered. Refer to the Troubleshooting page at
        553-http://www.symanteccloud.com/troubleshooting for more
        553 information. (#5.7.1) ##

    I have changed the IP and hostnames in the below. I am not on any IP or domain blacklists. I have set up SPF (which includes the Mailchimp servers):

        v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all

    I am sure I have set up my HELO names correctly under the Exchange Management Console. Sending a test email from the SBS server and looking at the header shows the following:

        X-Orig-To: [email protected]
        X-Originating-Ip: [95.74.157.22]
        Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com)
          by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>)
          (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP
          id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400
        Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by
          MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi id
          14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400

    Is the main HELO name there OK, and do I need to worry about the second Received block where MYSBSSVR.mydomain.local is mentioned? I have asked the ISP to set the reverse DNS for my IP to remote.mydomain.com, but they have instead put remote.MYDOMAIN.com; would this difference in case cause HELO lookups to classify this as not matching? Anything else I can do to find out why I am being filtered?
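
    On the reverse-DNS question: DNS names compare case-insensitively, so remote.MYDOMAIN.com versus remote.mydomain.com should not matter by itself. A quick way to check the HELO/rDNS agreement from any Unix-ish box, using the (sanitized) values from the post:

        # PTR for the sending IP
        dig -x 95.74.157.22 +short

        # Forward-confirm: the PTR target should resolve back to the same IP
        dig remote.mydomain.com A +short

        # Check the published SPF record
        dig mydomain.com TXT +short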

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open or commercial) that will allow remote access to customer sites for tech support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently we end up using anything from RDP on a public IP, to various VPNs that clients happen to have, to webex-type sessions that require lots of interaction from both sides to get things working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes messing around trying to get a connection up.

    There are multiple customer sites, which should NOT have access to each other. At each site, there is anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Requirements:

    - Support connection to machines even if they're behind a firewall/router with no public IP
    - Be able to selectively allow/deny access from the customer site
    - Customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites)
    - Support multiple users from our end
    - If not a VPN connection (where RDP could be used over top), should support:
      - Remote desktop access, including copy/paste
      - File transfers
    - Preferably would have some way to list all remote systems, showing online/offline

    Anyone have any suggestions?

  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores. The datastores are configured for master-slave replication. There's no inherent automatic failover, but I of course want this. That is, I'd love for access to the master datastore to always just work without having to configure a client library to detect when a master is down and fail over to the slave.

    I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually.

    So, my thought was to use Wackamole to keep the 2 virtual private IPs available/up. Clients would then just always use the same private IP to access the master datastore, and the same but distinct IP for the slave datastore, even if those IPs were hosted on the same node. My datastore servers are accessed over a private network; I am unsure if this messes with Wackamole though.

    Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use MySQL" please :) Thanks.

  • How do I prevent or override a group policy on Windows 7?

    - by Kevin
    A few months ago my company was purchased by a large corporation. We recently switched our network over to the large corporate network, which has more restrictions and requirements. One of these is the requirement to use a proxy server for Internet traffic. However, some of our internal servers are not recognized by the corporate DNS, so we need to provide the fully qualified domain name. On Windows 7, we make changes to the Internet Properties for IE8 and Chrome to include our domain name as an exception to the proxy server (e.g., *.foobar.com).

    The problem is that a group policy that does not include our domain name is continually pushed out to my systems throughout the day. This requires me to make the appropriate changes to the Internet Properties several times a day in order to access our internal servers. Is there a way that I can prevent the group policy from being pushed to my systems, or detect when the group policy is pushed and override it? I am an administrator on all of my systems. I do have Firefox installed, which is not subject to the same group policy push, but I need to have IE8 and Chrome working.
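
    Since IE's exception list (which Chrome also honors, as it uses the system proxy settings) lives in the ProxyOverride registry value, one workaround is a scheduled task that simply re-applies it after each policy refresh. A sketch, using the example domain from the post; file path, task name, and interval are arbitrary:

        rem fix-proxy.cmd: re-apply the proxy exception list the GPO keeps overwriting
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyOverride /t REG_SZ /d "*.foobar.com;<local>" /f

        rem Run it hourly via Task Scheduler
        schtasks /create /tn FixProxyOverride /sc hourly /tr "C:\scripts\fix-proxy.cmd"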

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following. I would like to have a FreeBSD file server with ZFS managing disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with the following issues:

    1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough?
    2. I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the VM system disks?

    For (2) I have the option of having a RAID setup on the SAS controller and using that as a system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs.

    BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
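
    If the controller does get passed through, the pool layout inside the FreeBSD guest is ordinary ZFS; a minimal sketch with made-up device names:

        # Mirrored pairs across the passed-through SAS disks (device names illustrative)
        zpool create tank mirror da0 da1 mirror da2 da3

        # One dataset per role; the mail spool could be exported to the mail VM over NFS
        zfs create -o sharenfs=on tank/mail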

  • Domain Controller died, now get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). Anyway, I am working remotely via VPN at the moment, and since this happened I am getting an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE only). It seems like somewhere my credentials are not being passed correctly, or the ones cached on my laptop do not match the ones the Domain Controller has. This only seems to affect Windows 2003 servers.

    Our IT support thinks that the only way to sort it out is to connect my laptop directly to the network. I am not planning to go to the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but this didn't help.

    So basically my question is: is there any way I can reset my credential cache without having to reconnect to the network? Or is it IE that is causing the problem perhaps, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7 using the SonicWall VPN client.
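
    On the Windows 7 side, the locally cached Kerberos tickets and stored credentials can at least be inspected and flushed without reconnecting to the network; a sketch with built-in tools:

        rem Show, then purge, cached Kerberos tickets for the current logon session
        klist
        klist purge

        rem List stored credentials; stale entries for the 2003 servers could be deleted here
        cmdkey /list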

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache, five web servers behind a Varnish server load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
          { .backend = web4; }
          { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site, running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newbackend;
          } else {
            set req.backend = default;
          }
        }

    That is, using a different backend for a subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) with my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and I would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this?

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.director = newdirector;
          } else {
            set req.director = balancer;
          }
        }

    All suggestions welcome. Thanks!
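
    In the VCL versions contemporary with this setup (Varnish 2.x/3.x), a director is selected exactly like a single backend, via req.backend; there is no req.director variable. A sketch of the combined configuration, with the Django backend addresses invented for illustration:

        backend django1 { .host = "10.0.0.21"; .port = "8000"; }
        backend django2 { .host = "10.0.0.22"; .port = "8000"; }

        director newdirector round-robin {
          { .backend = django1; }
          { .backend = django2; }
        }

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newdirector;
          } else {
            set req.backend = balancer;
          }
        }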

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. The hosting servers all exist on the same network and are therefore potentially browseable by other terminal servers. This has raised some security issues, and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are, even though other users cannot see their data.

    What I'd like to do is isolate each client's terminal server(s) into their own VLAN. In addition, I'm thinking that each TS would have its own DC, which could just run on the TS for that company; overhead for a DC is fairly minimal. This would isolate users on that TS from seeing the other companies completely.

    Firstly, does this sound like a sensible plan? Second, if it is sensible, how would I go about pulling the accounts from the HOSTING domain to a new domain? Ideally, without the need for users to change their passwords?

  • Cluster Core Resource state of Exchange 2010 DAG

    - by Christoph
    I have two Exchange 2010 servers in a DAG and a witness server to implement mailbox resiliency. The two Exchange servers are in two subnets, and the Windows failover cluster therefore has two IP address resources. I know that Exchange uses "core functionality" of Windows Server failover clustering, but it does not use all features. My setup also seems to work, but if I run the validation in the Windows Failover Cluster Manager, it complains about one of the IP address resources being offline. However, I cannot bring this resource online, because the server complains that "the specified cluster node is not the owner of the resource, or the node is not a possible owner of the resource". If I "Simulate failure of this resource", it becomes offline and the other IP becomes online.

    I have the vague idea that Exchange might use the state of the IP resource to identify the Primary Active Manager, but I am not sure. As it is obviously important that failover really works, I would like to be sure. Therefore, my question is: is it normal that only one IP address resource in an Exchange 2010 DAG failover cluster is active at a time? If not, how do I bring both resources online at the same time, given the error described above?

  • IIS SMTP unable to relay for domain on local network.

    - by MartinHN
    Hi. I have a network with the following servers:

    - EXCH01: Exchange server for @domain.com e-mails.
    - TEST-REP01: Local reporting server, with IIS SMTP installed and configured.

    On TEST-REP01, I have a Windows service that reads reports in a SQL Server database that are ready for delivery. All reports are sent one at a time, using the local SMTP server. It works perfectly, unless the recipient e-mail address is [email protected], the domain that the EXCH01 server manages. Then I get the following error:

        Mailbox unavailable. The server response was: 5.7.1 Unable to relay for [email protected]

    What can I do to troubleshoot this further? I can't seem to find useful information in the SMTP log:

        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -

    Relay access on the local IIS (TEST-REP01) is set to allow 127.0.0.1, TEST-REP01 and other servers such as TEST-WEB01. The DNS domain on the local network is not domain.com, but companyname-domain.local.

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for webhosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I kinda messed up the firewall, so hackers could send stuff from my server. My primary IP is x.y.29.218. Anyway, I got blacklisted in several places, but those blacklistings have been gone for a week or so; Google, however, still has my IP blacklisted. I am taking serious damage because of that: many clients want to switch from my hosting, etc.

    I've fixed the hole with the CSF firewall SMTP_BLOCK option and have also enabled the WHM SMTP Tweak. Currently all I see from Main >> Email >> View Mail Statistics (Errors section) in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error
        from remote mail server after end of data: host aspmx.l.google.com [a.b.39.27]:
        550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of
        550-5.7.1 unsolicited mail originating from your IP address. To protect our
        550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review
        550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free. How can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. Please, someone who has any knowledge of how to deal with this situation, give me some kind of help: any help, suggestions, etc. I've tried everything I know, and I still don't know much, because this is the first time I deal with real physical servers (I just started to webhost) rather than some kind of pre-set-up VPS solution.

    Many, many thanks to whoever has time to offer some help.
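
    On the "send from the spare IP" part: Exim's smtp transport has an interface option that sets the source address for outgoing connections (in WHM/cPanel the same idea is reachable through its Exim configuration editor). A minimal sketch of the raw Exim side, with the spare IP as a placeholder:

        # In the transports section of exim.conf: bind outgoing SMTP
        # to the spare IP (the address below is a placeholder)
        remote_smtp:
          driver = smtp
          interface = x.y.29.219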

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005 and am having trouble correcting it, so I will ask SF's experts for help. I have a Windows 2003 server (PROD) that ships transaction log backups to another two servers:

    - STAND1: Windows 2003 server with SQL Server 2005.
    - STAND2: Windows 2008 R2 server with SQL Server 2005.

    The problem is that log shipping to STAND2 breaks for ~90 minutes at certain times of the day and then recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged by the COPY job:

        *** Error: The specified network name is no longer available

    The copy agent was breaking dozens of times a day, only to the STAND2 server, and after the changes below it "only" breaks about twice a day:

    - The frequency of the backup job was changed from 5 minutes to 10 minutes.
    - Instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database.
    - The backup job no longer runs 24 hours a day, only for the 14 hours a day when people are working on the database.
    - I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS.

    Now I don't know what to do. Any help will be much appreciated! Thanks!

  • pfSense Load Balancer and Virtual IP

    - by jshin47
    I have two identical web servers on 10.2.1.13 and 10.2.1.113. I would like to set up the pfSense load balancer to balance requests to both of these. I set up pools that included HTTP and HTTPS for both of these hosts, then set up virtual servers that responded on HTTP and HTTPS and referred traffic to their respective pools.

    However, I set up the virtual server to listen on 10.2.1.213, a LAN IP rather than a WAN IP, because I want LAN traffic to be able to use the load balancer virtual server as well. So, I set up a Virtual IP for 10.2.1.213 on the LAN interface, and a NAT port forwarding rule for HTTP and HTTPS traffic on a WAN IP to forward to 10.2.1.213.

    It seems like this should work, but it fails. What eventually happens is that when I try to access the page from the WAN, I am directed to the login page for my pfSense device rather than the page I am expecting. When I try to access 10.2.1.213 from the LAN, the request times out. What is going wrong here? I have tried it with and without NAT reflection to no avail. Please advise.

  • append $myorigin to localpart of 'from', append different domain to localpart of incomplete recipient address

    - by PJ P
    We have been having some trouble getting Postfix to behave in a very specific fashion in which sender and recipient addresses with only a localpart (i.e. no @domain) are handled differently.

    We have a number of applications that use mailx to send messages, and we would like to know the username and hostname of the sending party. For example, if root sends an email from db001.company.local, we would like the email to be addressed from [email protected]. This is accomplished by ensuring $myorigin is set to $myhostname.

    We also want unqualified recipients to have a different domain appended. For example, if a message is sent to 'dbadmin', it should qualify to '[email protected]'. However, by the nature of Postfix and $myorigin, an unqualified recipient would instead qualify to [email protected]. We do not want to adjust the aliases on all servers to forward appropriately (in fact, not every possible recipient has an entry in /etc/passwd). All company employees have mailboxes on Exchange, which Postfix eventually routes to, and no local Linux/Unix mailboxes are used or accessed. We would love to tell our application owners to ensure they use a fully qualified email address for all recipients, but the powers that be dictate that any negligence must be accommodated.

    If we were to keep $myorigin equal to $myhostname, we could resolve this issue by having an entry such as the following in recipient_canonical_maps:

        @$myorigin    @company.com

    However, unfortunately, we cannot use variables in these map files. We also want to avoid having to manually enter and maintain the actual hostname in recipient_canonical_maps for each server. Perhaps once our servers are 'puppetized' we can dynamically adjust this file, but we're not there yet. After an afternoon of fiddling I've decided to reach out. Any thoughts? Thanks in advance.
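
    For the recipient half of the problem, one hostname-independent way to express the rewrite, sketched under the assumption that every internal host sits beneath a common .company.local suffix, is a regexp canonical table: regexp (or pcre) maps take patterns rather than literal domains, and recipient_canonical_maps touches only recipient addresses, leaving the rewritten sender addresses alone.

        # main.cf
        recipient_canonical_maps = regexp:/etc/postfix/recipient_canonical

        # /etc/postfix/recipient_canonical
        # Recipients qualified with any internal hostname get the Exchange domain
        /^([^@]+)@[^.]+\.company\.local$/    ${1}@company.com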

  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2k3, and the remote servers are Windows Server 2k3. When we have new versions of the software, I upload the new software to a specific directory and rename it; each time the clients boot, they pull their software from that specific directory. With just a few sites, it's no problem for me to RDP in and copy the files over. As the number grows, this will quickly become quite unwieldy.

    So I'm thinking that WebDAV would be part of a solution, so that I could simply push the newest version to our server (Windows SBS Server 2003) and make it available for the sites to grab. However, on the remote server side, what are some suggestions for automating the download? I only want the servers to download the files during downtime (between 3 AM and 9 AM), and I only want them to download if there is a new version available. I had thought of writing a program that checked the files on the WebDAV server at a regular interval, compared a hash of the current software to a hash of the software on the server, and only downloaded if they were different, but I'm wondering if there is something I am unaware of that can automate the process.
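
    The hash-compare idea is only a handful of lines if rolled by hand; a sketch in Python 3 (assuming a Python runtime on the remote servers; the URL, paths, and the convention of publishing a .sha256 text file next to the package are all invented for illustration). Task Scheduler would run it inside the 3 AM to 9 AM window:

        import hashlib, urllib.request

        URL = "https://webdav.example.com/software/package.exe"
        LOCAL = r"C:\software\package.exe"

        def sha256_of(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        # The published hash is tiny, so fetch it every run; the package only when it changes
        remote_hash = urllib.request.urlopen(URL + ".sha256").read().decode().strip()
        try:
            local_hash = sha256_of(LOCAL)
        except FileNotFoundError:
            local_hash = ""

        if remote_hash != local_hash:
            urllib.request.urlretrieve(URL, LOCAL)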

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:

    - 2 servers which have the fs mounted themselves and mirror the data
    - locking support (among reachable nodes)
    - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while; that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2.

    Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

  • dns in a small network with router and AD domain

    - by Felix
    I have a small office network with a router (running OpenWRT), a Windows domain controller (used to be 2008 R2; I just backed it up and upgraded to 2012), about a dozen AD clients (3 servers and Windows workstations) and several non-AD clients (network printer, PBX).

    The problem is that the clients can't access the servers by name (only by IP). I tried all kinds of permutations. Right now the domain controller runs the DNS server for all desktops, but unless I put an entry in the hosts file, I can only get to servers by IP. I have the router as DHCP server (since not all devices are on AD), and except for the domain controller all IP addresses, including "static" ones, are assigned by the router.

    Most frustrating, some servers sometimes just work! For example, I can often get to the Linux box by name (it is part of the domain using BeyondTrust Integration Services), but I can never get to the SQL Server box. It seems like non-domain devices see more names than domain members...

    This network should be fairly typical, but I couldn't get any guidance about how to set up the DNS/DHCP service to make all nodes happy. The closest is this question, but still it's different! Thanks

  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using the mother website, Stack Overflow. To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to this whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on already existing load balancers and found some (HAProxy, nginx) that could solve my problems, but the point is, I am still in a dilemma about whether they could answer the following requirements of mine:

    - The client and server in my architecture are distributed.
    - The load balancer should take care of the firewall.
    - The LB server should balance the load among all servers present in the WWW cloud.
    - The LB server should have some sort of configuration file, with the help of which it is possible to configure the servers.
    - Heartbeat: with the help of which it would be possible to check if any server is down; if any server is down, the request should be passed to some other server.
    - Various load balancing algorithms for the incoming requests.
    - Easy error handling.
    - It should be fairly possible to prioritize the incoming requests.

    Is there any already available load balancer solution on the market that could satisfy these requirements? If not, is there any base code available with the help of which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore
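
    As a starting point for the from-scratch route, the core of a load balancer is surprisingly small. A toy round-robin TCP proxy in Python (addresses invented), which deliberately omits the firewalling, health checks, and request prioritization from the list above:

        import itertools, socket, threading

        BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # illustrative
        pool = itertools.cycle(BACKENDS)                       # round-robin selection

        def pipe(src, dst):
            # Copy bytes one way until either side closes
            try:
                while (data := src.recv(4096)):
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                dst.close()

        def handle(client):
            backend = socket.create_connection(next(pool))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 80))
        server.listen(64)
        while True:
            conn, _ = server.accept()
            handle(conn)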

  • PTR record not valid for all domains

    - by charnley
    We have an issue sending emails to certain domains, namely Time Warner and Cox. Last week we decommissioned our Exchange 2003 server, and now our Exchange 2010 server is doing all of the transport for our domain. We run our own authoritative name servers, so we are in charge of the DNS and have modified our PTR record to reflect the new server. All mail flow is working except to these 2 domains.

    When I telnet on port 25 to the mail servers for Cox and Time Warner, I receive errors. For Cox the error is:

        554... rejected - no rDNS

    And when I telnet on port 25 to the Time Warner mail server, we get this:

        554 5.7.1 - Connection refused. IP name lookup failed for x.x.x.x

    I have run through the outbound SMTP test on the Microsoft Remote Connectivity Analyzer and get 100% successful results. MXToolbox comes up with all successful tests on SMTP as well, showing a correct reverse banner check and no blacklisting. DNSQueries.com shows a valid reverse DNS entry for us as well. Outbound emails to these 2 domains continue to sit in the queue. Any ideas or advice would be greatly appreciated. Thanks!

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh file server, which stores newly uploaded content that's probably (and usually) rapidly accessible. It has 500GB of RAIDed SSD storage and can push over 3GBit of traffic.
    - 3 cheap node servers, containing 2 x 750GB SATA drives in RAID1, where files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.).

    Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script, with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drives fast enough.

    What would be a good way to make files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?
