Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • X:\ is not accessible. Insufficient system resources exist to complete the requested service. Help

    - by Katherine
    I keep getting the above error message on multiple computers that I administer. I wasn't sure whether to post this on SuperUser or ServerFault, so my apologies if it belongs there. Basically, I have at least 5 computers of varying ages (some fresh out of the box!) throwing the above error. X:\ is one of our network drives that is mapped for users. Most of the time, shutting down the biggest application fixes the problem, but it's becoming an increasing issue and I can't keep running around fixing it manually. I have tried to do some research, but most of it just states the obvious without supplying a permanent fix. The machines are all running Windows XP SP3 with at least 2 GB of RAM.

    Sorry for the delay in getting back to people; a lot of good questions. To respond: it is a Windows 2003 server that houses the file share. We have about 175 users, though I cannot say how many are accessing the share at any single moment; considering that this is our largest file share, I would say probably at least 100. The files we work with are large (~50 MB), but not that big considering that we do a lot of graphics and video work. That said, the error occurs simply when trying to gain access to the server itself, not actual files. When I say "close a program", I mean any program; it doesn't matter which. It varies from machine to machine and from day to day: some days it is Firefox, some days Outlook, some days Excel. There doesn't seem to be a common bond behind which application could be causing the problem. Thank you for the articles and the recommendation on paging files; I will look into that. None of our computers are set to hibernate, so I am going to rule that out.
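
    This error on XP clients is frequently tied to kernel paged-pool exhaustion, so one commonly cited mitigation is enlarging the paged pool via the registry. The sketch below is an assumption on my part, not a confirmed fix for this environment; the value name comes from Microsoft's KB articles on paged-pool errors, and it should be tested on one machine before any wider rollout:

        REM Hedged sketch: let the kernel size the paged pool to its maximum.
        REM Requires a reboot to take effect; verify against the relevant KB first.
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
            /v PagedPoolSize /t REG_DWORD /d 0xffffffff /f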

    Read the article

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is Viscosity on Mac OS X 10.8.2, and after some testing we can rule out the client as part of the problem. Everything has been working fine except for Apple's iCloud services. Web surfing, email, FTP, NNTP, and Skype all work as expected; it's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud services all start working again. Connect again, and iCloud stops working. Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: client -> work VPN -> my OpenVPN box -> Internet, or client -> AirPort Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them, but I do have full control over my client and the OpenVPN server. I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and seems to still be in their moderation queue. I tried the freenode IRC channels, but nobody was awake, so here I am. I have Googled extensively for this and can find nothing related. Help me get iCloud working again!
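
    Since everything except Apple's long-lived push connections works, one avenue worth trying (my assumption, not a confirmed fix) is a path-MTU problem aggravated by the double NAT; iCloud's push traffic on port 5223 is known to be sensitive to this. A minimal MSS-clamp sketch for the server's firewall script:

        # Hedged sketch: clamp TCP MSS to the path MTU on forwarded VPN traffic.
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu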

    Read the article

  • Authenticate by libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing:

        # groups testuser
        testuser : sftp

    Testuser is a member of the sftp group; in fact, all MySQL-based user accounts are hardcoded to it. The sftp group is chrooted and forced to use internal-sftp, so its members cannot do anything but access their home directories. Then I configured pam-mysql and PAM to allow MySQL logins. This also works... when SELinux is not enforcing. When I do setenforce 1, users can no longer log in. Error: Permission denied, please try again.

    This is my pam_mysql.conf file:

        users.host=localhost
        users.db_user=nss-pam-user
        users.db_passwd=***********
        users.database=sftpusers
        users.table=users
        users.user_column=username
        users.password_column=password
        users.password_crypt=6
        verbose=1

    My /etc/pam.d/sshd:

        #%PAM-1.0
        auth       sufficient   pam_sepermit.so
        auth       include      password-auth
        auth       required     pam_mysql.so config_file=/etc/pam_mysql.conf
        account    sufficient   pam_nologin.so
        account    include      password-auth
        account    required     pam_mysql.so config_file=/etc/pam_mysql.conf
        password   include      password-auth
        # pam_selinux.so close should be the first session rule
        session    required     pam_selinux.so close
        session    required     pam_loginuid.so
        # pam_selinux.so open should only be followed by sessions to be executed in the user context
        session    required     pam_selinux.so open env_params
        session    optional     pam_keyinit.so force revoke
        session    include      password-auth

    And to be complete, the contents of some log files.

    /var/log/secure:

        Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown
        Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser)
        Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser
        Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2

    /var/log/audit/audit.log:

        type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'
        type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed'
        type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'

    I tried letting audit2why explain the problem, but it remains silent even though there are some errors. Does anyone see the problem? Thanks!

    EDIT: It turns out it's almost working. With setenforce 0 I can mkdir foobar, but if I do a single ls I get an error: Received message too long 16777216
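
    If audit2why stays silent, the denial may be covered by a dontaudit rule, so nothing ever reaches the log. A diagnostic sketch (assuming the standard CentOS policycoreutils tooling; review any generated module before loading it):

        # Hedged sketch: temporarily disable dontaudit rules, reproduce the
        # failed login, then inspect and (only if sensible) allow the denials.
        semodule -DB                 # rebuild policy with dontaudit disabled
        grep denied /var/log/audit/audit.log | audit2allow -M pammysqllocal
        semodule -B                  # restore dontaudit rules
        # semodule -i pammysqllocal.pp   # load only after reviewing the .te file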

    Read the article

  • Configuring gmail for use on mailing lists

    - by reemrevnivek
    This is really two questions in one. First, are netiquette guidelines still accurate in their restrictions on ASCII vs. HTML, posting style, and line length? (Here's a recent MetaFilter discussion of the topic.) Second, if they are not, should these guidelines still be respected? If they are (or if they should still be respected), how can modern mail programs be configured to work properly with them?

    Most mailing list etiquette statements appear to have been written by sysadmins who loved their command lines and refuse to change anything. Many still reference RFC 1855, written in 1995. Just reading that paginated TXT should give you an idea of the climate at the time. Here's a short, fairly random list of mailing list etiquette statements with some extracted formatting guidelines:

        Mozilla - HTML discouraged, interleaved posting.
        FreeBSD - No HTML, don't top-post, line length at 75 characters.
        Fedora - No HTML, bottom-post.

    You get the idea; you've all seen etiquette statements before. So, assuming that the rules should be obeyed (usually a good idea), what can be done to allow me to still use a modern mail program and exchange mail with friends who use the same programs? We like to format our mail. Bold headings, code snippets (sometimes syntax-highlighted, if the copy-paste pulls RTF text, as from Xcode and Eclipse), free line breaks determined by your browser width, and the (very) occasional image make a message easier to read. Threaded conversations are a wonderful thing. Broadband connections are, I'm sure, the rule for most users of SU and of developer mailing lists, disk space is cheap, and so the overhead of HTML is laughable. However, I don't want to post a question to a mailing list and have the guru who can answer it automatically delete it, or come off as uncaring. Until I hear otherwise, I'll continue to respect the rules as best I can.

    For a common example of the problem: Gmail, by default, sends HTML-formatted messages with bottom-posted quotes (which are folded in; just read the last message immediately above) and uses the frame width to wrap lines rather than a character count. ASCII can be selected, and quotes can be moved and reversed, but line wraps of quotes don't work, and manual line breaks are tedious to add (and more tedious to read if they're very short in comparison to the width of the frame). Is there a free mail program or forwarding setup that can help with this exercise? Should an "RFC 1855 mode" lab be written? Or do I have to go to the command line for my mailing lists and Gmail for my other mail?

    Read the article

  • Cannot access website from inside network

    - by musclez
    I have a website running on my internal network, available at the example IP 192.168.1.5. When I type this into the browser, it redirects to my domain name, i.e. "example.com", and gives me Error code: ERR_CONNECTION_REFUSED. Any other machine inside the network can access the website, and the website is also accessible from outside the network. Other services from the server, like file sharing or FTP, are available to all machines on the network, including the one I'm having HTTP issues with.

    The issue may be linked to a proxy service, but as far as I understand, the service has been completely disabled and its executables have been uninstalled from the machine. I am wondering if there is some residual proxy information remaining on the machine that limits the connection. I'm fairly positive that "example.com" is what is being blocked by the local machine, not an IP address being blocked or a faulty connection. When I examine the hosts file, there are no redirects to the local machine for "example.com". There was a rule, as on my other machines within the network:

        192.168.1.5 example.com

    but I have since removed it for troubleshooting purposes. What intrigued me is that when I use the actual IP, the browser redirects the IP address to the domain and THEN says ERR_CONNECTION_REFUSED.

    Server-side results. The server logs are reporting this:

        example.com ::1 - - [Date & time] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Unix) (internal dummy connection)"

    However, this seems to be irrelevant, as it is not triggered when I try to connect to the server from the specified machine.

    Fiddler results:

        Host: example.com
        Proxy-Connection: keep-alive

    Chrome-side:

        [Fiddler] The connection to 'example.com' failed. Error: ConnectionRefused (0x274d). System.Net.Sockets.SocketException No connection could be made because the target machine actively refused it 01.23.45.67:80

    01.23.45.67:80 would be the external IP, which the server and the machine in question both share. I have done some reading into 0x274d, and it's coming back with .NET web.config information; I am still at a loss as to what to do with that. I have Wireshark running as well. There is a lot of sensitive information in the readout and I'm not sure what to extract from it. Either way, if it helps, I can access that information if anyone would like me to. Thanks for the help!
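
    If it helps others reproduce this, here is a minimal test that separates name resolution from the TCP connection itself (a sketch; curl's --resolve flag pins example.com to the LAN address without touching the hosts file):

        # Hedged sketch: pin the name to the LAN IP and watch the handshake.
        curl -v --resolve example.com:80:192.168.1.5 http://example.com/
        # Compare with a raw TCP probe, to rule out a host-level block on port 80:
        telnet 192.168.1.5 80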

    Read the article

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power-cycled, and with that I power-cycled my router. Since then I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue, as I've power-cycled my networking equipment in the past without problems, and none of my settings appear to have been lost.

    I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive"; from there I enter \\SERVER\Storage, as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive:

        Windows cannot access \\Server\Storage
        Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.
        Details: Error code: 0x80070035 The network path was not found.

    When I click Diagnose I get the following:

        Problems found: file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connections on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer.

    I've tried this from multiple computers with the same result. To resolve the problem, so far I've tried:

        - Disabling the firewall on SERVER
        - Reinstalling File Services
        - Modifying NetBT\Parameters registry values
        - Adding a custom inbound rule for port 445
        - Adding port forwarding on my router for port 445
        - Recreating the shared folders
        - Checking and rechecking the shared folder permissions
        - Resetting my user account password on the server used to access the shared folder

    I'm pulling my hair out with this problem, mainly because it came out of nowhere: it was working fine the night before, and the next day it just stopped. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too, and that functionality still works correctly.
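
    A couple of quick probes may help narrow this down (a sketch, assuming the Telnet client feature is enabled on the Windows 7 box; the IP below is a placeholder for the server's actual LAN address):

        REM Hedged sketch: is anything listening on 445 at all?
        telnet SERVER 445
        REM Separate name resolution from SMB: try mapping by IP instead of name.
        net use Z: \\192.168.1.10\Storage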

    Read the article

  • Windows 2003 GPO Software Restrictions

    - by joeqwerty
    We're running a Terminal Server farm in a Windows 2003 domain, and I found a problem with the Software Restrictions GPO settings being applied to our TS servers. Here are the details of our configuration and the problem.

    All of our servers (domain controllers and terminal servers) are running Windows Server 2003 SP2, and both the domain and forest are at Windows 2003 level. Our TS servers are in an OU where we have specific GPOs linked and inheritance blocked, so only the TS-specific GPOs are applied to these servers. Our users are all remote and do not have workstations joined to our domain, so we don't use loopback policy processing. We take a "whitelist" approach to allowing users to run applications, so only applications that we approve and add as path or hash rules are able to run. We have the Security Level in Software Restrictions set to Disallowed, and Enforcement is set to "All software files except libraries".

    What I've found is that if I give a user a shortcut to an application, they're able to launch the application even if it's not in the Additional Rules list of whitelisted applications. If I give a user a copy of the main executable for the application and they attempt to launch it, they get the expected "this program has been restricted..." message. It appears that the Software Restrictions are indeed working, except when the user launches an application using a shortcut as opposed to launching the main executable itself, which seems to contradict the purpose of using Software Restrictions. My questions are:

        - Has anyone else seen this behavior?
        - Can anyone else reproduce this behavior?
        - Am I missing something in my understanding of Software Restrictions?
        - Is it likely that I have something misconfigured in Software Restrictions?

    EDIT To clarify the problem a little bit: no higher-level GPOs are being enforced. Running gpresult shows that, in fact, only the TS-level GPOs are being applied, and I can indeed see my Software Restrictions being applied. No path wildcards are in use. I'm testing with an application at "C:\Program Files\Application\executable.exe", and the application executable is not in any path or hash rule. If the user launches the main application executable directly from the application's folder, the Software Restrictions are enforced. If I give the user a shortcut that points to the application executable at "C:\Program Files\Application\executable.exe", they are able to launch the program.

    EDIT Also, LNK files are listed in the Designated File Types, so they should be treated as executable, which should mean they are bound by the same Software Restriction settings and rules.
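
    One way to double-check what the servers actually received (a diagnostic sketch, not a fix; these are the standard SRP registry locations, but verify against your policy before drawing conclusions):

        REM Hedged sketch: inspect the effective SRP configuration on a TS box.
        REM ExecutableTypes should include LNK; TransparentEnabled governs scope.
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v ExecutableTypes
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v TransparentEnabled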

    Read the article

  • debian gateway using iptables

    - by meijuh
    I am having problems setting up a Debian gateway server. My goal:

        - eth1 is the WAN interface.
        - eth0 is the LAN interface.
        - Ports 22 (SSH) and 80 (HTTP) on the gateway are reachable from the outside world (SSH and HTTP run on this server).

    What I did was the following. First, create a file /etc/iptables.rules with contents:

        *nat
        -A POSTROUTING -o eth1 -j MASQUERADE
        COMMIT
        *filter
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth1 -j DROP
        COMMIT

    Second, edit /etc/network/interfaces as follows:

        # The loopback network interface
        auto lo
        iface lo inet loopback
            pre-up iptables-restore < /etc/iptables.rules

        auto eth0
        allow-hotplug eth0
        iface eth0 inet dhcp

        #auto eth1
        #allow-hotplug eth1
        #iface eth1 inet dhcp
        allow-hotplug eth1
        iface eth1 inet static
            address 217.119.224.51
            netmask 255.255.255.248
            gateway 217.119.224.49
            dns-nameservers 217.119.226.67 217.119.226.68

    Third, uncomment the line net.ipv4.ip_forward=1 in /etc/sysctl.conf to allow packet forwarding. The static settings for eth1, such as the IP address, I got from my router (which I want to replace); I simply copied these. I have a (Windows) DNS + DHCP server at 10.180.1.10, which assigns IP address 10.180.1.44 to eth0. What this server does is not really interesting; it only maps domain names on our local network and assigns one static IP to the gateway.

    What works: on the gateway itself I can ping 8.8.8.8 and google.nl. So that is okay.

    What does not work: (1) every machine connected to eth0 (indirectly, via a switch) cannot ping any IP or domain, so I guess the gateway cannot be found; (2) when I configure my Linux laptop with a static IP of 10.180.1.41, a mask, and a gateway (10.180.1.44), I cannot ping any IP or domain either. This means that maybe my iptables rules are incorrect or not loaded correctly, or maybe I have to reconfigure the DNS/DHCP on my Windows machine. I have not reset the Windows machine's networking or restarted the DNS/DHCP services; should I do this? I did not install dnsmasq as described here: http://blog.noviantech.com/2010/12/22/debian-router-gateway-in-15-minutes/. I don't think this is necessary?
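
    One observation, offered as an assumption rather than a guaranteed fix: the *filter section never explicitly allows forwarded traffic, so everything rides on the FORWARD chain's default policy being ACCEPT. Making the forwarding explicit also keeps the ruleset safe if that policy ever changes. A minimal sketch to add before the final COMMIT in /etc/iptables.rules:

        # Hedged sketch: explicitly permit LAN->WAN forwarding and return traffic.
        -A FORWARD -i eth0 -o eth1 -j ACCEPT
        -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    It is also worth confirming on the gateway that forwarding is live right now (sysctl.conf changes only apply after sysctl -p or a reboot): cat /proc/sys/net/ipv4/ip_forward should print 1.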

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc. -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 iptables entries at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

        - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point:
        - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert
        - Must not take down the firewall for an extended period while the ruleset reloads (again, a consequence of the above point)
        - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
        - Must be DFSG-free
        - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
        - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
        - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

        - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
        - Should support raw tables
        - Should allow you to specify particular ICMP types in both incoming packets and REJECT rules
        - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features the tool supports (either natively or via existing or easily-writable plugins), the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
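
    For reference, the atomic-load pattern the "musts" describe looks roughly like this (a sketch with placeholder paths; note that iptables-restore's --test flag only exists in newer releases):

        # Hedged sketch: validate, then swap the whole ruleset in one kernel call,
        # so there is no window with a partially-applied firewall.
        iptables-restore --test /etc/iptables/ruleset.v4   # newer iptables only
        iptables-restore < /etc/iptables/ruleset.v4
        ip6tables-restore < /etc/iptables/ruleset.v6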

    Read the article

  • Why might one host be unable to access the Internet, when it can ping the router and when all other hosts can?

    - by user1444233
    I have a Draytek Vigor 2830n. It serves a 192.168.3.0 LAN and performs load-balancing across dual WAN ports, although I've turned off the second WAN to simplify testing. There are many hosts on the LAN. All IPs are allocated through DHCP, most freely allocated from the pool, but one or two are bound to NIC MAC addresses.

    All hosts can access the Internet, save one. That host (192.168.3.100, or "dot100" for short) gets allocated an IP address (and the right gateway address, DNS server addresses, subnet, etc.). dot100 can ping itself. It can ping the gateway and access the latter's web interface via port 80. It's responsive and loss-free (a sustained ping over a couple of minutes reports no data loss). Yet, for some reason that evades me, dot100 can't ping an external IP address or domain name. I suspect it has never been able to, because it was getting some Internet access from a second adaptor (on a different subnet), but that has now been turned off, which exposed the problem. On dot100, I've tried:

        - two operating systems (Windows 8 and Knoppix), to rule out anti-virus programs etc.
        - two physical adaptors
        - two cables, on each adaptor
        - two IPs (e.g. .100 and .103 assigned by MAC, and .26 from the pool)
        - both dynamic and assigned (MAC-bound) DHCP-allocated IPs

    but none of these experiments yielded any variation in the result. dot100 is a crucial host: it's the file server for the network, so I need it to be reliably allocated a consistent IP. Can anyone offer a potential solution or a way forward with the analysis, please?

    My guess: my analysis so far leads me to believe it's a router issue. I've checked the web interface very carefully. There are no filters set up in Firewall - General Setup or Filter Setup. I suspect a corrupted internal routing table, but the web UI shows this as the routing table:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *    0.0.0.0/       0.0.0.0          via 62.XX.XX.X      WAN1
        *    62.XX.XX.X/    255.255.255.255  via 62.XX.XX.X      WAN1
        S    82.YY.YYY.YYY/ 255.255.255.255  via 82.YY.YYY.YYY   WAN1
        C    192.168.1.0/   255.255.255.0    directly connected  WAN2
        C~   192.168.3.0/   255.255.255.0    directly connected  LAN2
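
    In case it helps the analysis, here is a sketch of the checks I'd run from the Knoppix boot on dot100 (the gateway address below is an assumption based on the 192.168.3.0 LAN):

        # Hedged sketch: walk outward hop by hop, separating DNS from routing.
        ip addr show
        ip route show            # expect: default via 192.168.3.1 (or similar)
        ping -c 3 192.168.3.1    # gateway, reportedly fine
        ping -c 3 8.8.8.8        # external IP, bypassing DNS entirely
        traceroute -n 8.8.8.8    # shows exactly where packets die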

    Read the article

  • PPTP VPN Not Working - Peer failed CHAP authentication, PTY read or GRE write failed

    - by armani
    Brand-new install of CentOS 6.3. I followed this guide: http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_1.htm and got PPTPd running [v1.3.4]. I got the VPN to authenticate users against our Active Directory using winbind, smb, etc. All my tests to confirm I'm still authenticated to the AD server pass ["kinit -V [email protected]", "smbclient", "wbinfo -t"].

    VPN users were able to connect for, like... an hour. I tried connecting from my Android phone using domain credentials and saw that I got an IP allocated for internal VPN users (I've since changed the range, but even setting it back to the initial one doesn't work). Ever since then, no matter what settings I try, I pretty much consistently get this in /var/log/messages (and the VPN client fails):

        [root@vpn2 ~]# tail /var/log/messages
        Aug 31 15:57:22 vpn2 pppd[18386]: pppd 2.4.5 started by root, uid 0
        Aug 31 15:57:22 vpn2 pppd[18386]: Using interface ppp0
        Aug 31 15:57:22 vpn2 pppd[18386]: Connect: ppp0 <--> /dev/pts/1
        Aug 31 15:57:22 vpn2 pptpd[18385]: GRE: Bad checksum from pppd.
        Aug 31 15:57:24 vpn2 pppd[18386]: Peer armaniadm failed CHAP authentication
        Aug 31 15:57:24 vpn2 pppd[18386]: Connection terminated.
        Aug 31 15:57:24 vpn2 pppd[18386]: Exit.
        Aug 31 15:57:24 vpn2 pptpd[18385]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: Client 208.54.86.242 control connection finished

    Now, before you go blaming the firewall (all other forum posts I find seem to go there): this VPN server is on our DMZ network. We're using a Juniper SSG-5 gateway, and I've assigned a WAN IP to the VPN box itself, zoned into the DMZ zone. I have full "Any IP / Any Protocol" open traffic rules between the DMZ<--Untrust zone and the DMZ<--Trust zone. I'll limit this later to just the authenticating traffic it needs, but for now I think we can rule out the firewall blocking anything.

    Here's my /etc/pptpd.conf (omitting comments):

        option /etc/ppp/options.pptpd
        logwtmp
        localip [EXTERNAL_IP_ADDRESS]
        remoteip [ANOTHER_EXTERNAL_IP_ADDRESS, AND HAVE TRIED AN ARBITRARY GROUP LIKE 5.5.0.0-100]

    Here's my /etc/ppp/options.pptpd (omitting comments):

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        ms-dns 192.168.200.42    # This is our internal domain controller
        ms-wins 192.168.200.42
        proxyarp
        lock
        nobsdcomp
        novj
        novjccomp
        nologfd
        auth
        nodefaultroute
        plugin winbind.so
        ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1"

    Any help is GREATLY appreciated. I can give you any more info you need, and it's a new test server, so I can perform any tests/reboots required to get it up and going. Thanks a ton.
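
    One isolating test worth sketching (EXAMPLE and testuser below are placeholders): exercise the same winbind helper pppd calls, first as root and then as an unprivileged user, since permissions on winbindd's privileged pipe are a classic failure point here. That's my hypothesis, not a confirmed diagnosis:

        # Hedged sketch: does the helper itself authenticate against AD?
        /usr/bin/ntlm_auth --request-nt-key --domain=EXAMPLE --username=testuser
        # And does it still work without root privileges?
        sudo -u nobody /usr/bin/ntlm_auth --request-nt-key --domain=EXAMPLE --username=testuser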

    Read the article

  • Finding the underlying cause of Windows 7 account corruption

    - by Carl Jokl
    I have been having trouble with my sister's computer, which I built. It is running Windows 7 Ultimate x64. The problem is that the user accounts keep becoming corrupted. The problem first manifests itself as Windows saying the profile failed to load properly and logging the user in with a temporary profile. Eventually the account will not allow login at all, with an error message along the lines of the authentication service failing the login.

    I have found information about this problem and how to fix it: something corrupts the account profile, and backing up and recreating the accounts fixes the problem. I have been able to fix things and get logins working again, but over a period of usually about a week it happens again; bit by bit the accounts corrupt, and then it is back to square one. I am frustrated because I don't know the underlying cause of the problem, i.e. what is corrupting the accounts in the first place. At the moment I am just treating the symptoms. I was hoping someone with more experience of this problem might be able to help me find the root cause.

    Some articles suggest that Norton Internet Security, which is installed, is a big culprit for this problem. I could try uninstalling Norton and see if it helps. The one thing that is different about this computer compared with any other I have built is that it has a solid-state drive -- actually it has both a hard drive and a solid-state drive. The documents and settings, i.e. the Users directory, are stored on the hard drive. This was done following an article I found on the Internet about moving the user account data onto a separate drive on Windows 7. Moving the user accounts is more of a pain under Windows 7, and this solution involved creating a low-level file system link to the folder from the boot drive (solid state) to the hard drive. The idea is that the computer behaves just as if it is accessing the Users folder from the boot drive, but the data is actually stored on the hard drive. This may have nothing to do with the cause of the problem, but since the problem is user account corruption, it is a possibility I have not been able to rule out. Any help would be appreciated, as I would be glad to see the back of this problem.
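
    For what it's worth, the temporary-profile symptom usually leaves traces in the ProfileList registry key (a .bak suffix on the profile's SID subkey, or odd State/RefCount values), which may make it possible to catch the corruption as it happens rather than after login fails completely. A diagnostic sketch, not a fix:

        REM Hedged sketch: dump the profile registrations; look for SID subkeys
        REM ending in .bak or ProfileImagePath entries that point the wrong way.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s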

    Read the article

  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

        Internet
           |
        Switch ----- Server w/ public IP
           |
        DD-WRT router (192.168.1.1)
           |
        RFC1918 clients (192.168.1.0/24)

    What I want is for the RFC1918 clients and the public-IP server to speak directly with each other. On the server with the public IP I have this route:

        192.168.1.0/24 dev eth0  scope link

    and I can see that packets are in fact reaching the DD-WRT router for 192.168.1.1, even if I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gets no result, as the DD-WRT router is not announcing that network on its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN DD-WRT box, of course has a load of routes, VLANs and interfaces:

        xxx.xxx.xxx.1 dev vlan2  scope link
        192.168.1.0/24 dev br0  proto kernel  scope link  src 192.168.1.1
        192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.244
        84.215.64.0/18 dev vlan2  proto kernel  scope link  src xxx.xxx.xxx.xxx
        169.254.0.0/16 dev br0  proto kernel  scope link  src 169.254.255.1
        127.0.0.0/8 dev lo  scope link
        0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    xxx.xxx.xxx.xxx being the public IP, and xxx.xxx.xxx.1 being the default route for the public IP. I am not sure where to continue with this. I would reckon that I need both routing on the DD-WRT router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic; it won't go outside of the walls.

    EDIT 1: Following the tip from stew, I do indeed get the correct ARP flowing, and after adding an iptables rule allowing traffic from that specific public-IP machine, I get traffic between the systems! Oddly enough, though, the speed I get from the public-IP server to the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    Edit 2: OK, disconnecting the external Internet connection still gives the same crappy transfer speed, so it has to be something else.

    Edit 3: OK, I guess there are other reasons for this crappy speed. Case closed. :)

    Read the article

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem

    Note: First I'd like to understand WHY this is happening; of course, a solution would be nice too. :)

    When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting, sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps recurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens; it's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, and so on.) Inspection with Wireshark shows that the following happens:

        1. The server sends data packets, which are acknowledged by the client.
        2. The server sends a packet, but the SEQ indicates some packets were lost (6 packets in one occurrence).
        3. The server sends a few more packets, and the client acknowledges these using selective acknowledgement.
        4. The server stops sending data for a while (because the lost packets were not acknowledged, or because the router stops forwarding them?).
        5. Eventually, the server does a retransmission and traffic resumes as normal.

    This all seems like normal behavior when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this?

    My own idea is the following: my Internet connection is pretty fast (100 mbps), so when a large-file download starts, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be something like 64 KB waiting to be acknowledged?

    Note: I've disabled TCP window scaling and dynamic window size through netsh options in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights.

    Things that are (probably) NOT the cause / I have experimented with:

        - The browser
        - Various TCP options in Windows 7 (netsh etc.)
        - Router settings such as MTU, beacon interval, UPnP, ...
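
    For completeness, the netsh knobs referred to above look like this (a sketch of what was presumably tried; run from an elevated prompt, and revert with autotuninglevel=normal afterwards):

        REM Hedged sketch: disable receive-window auto-tuning and inspect state.
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global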

    Read the article

  • Very slow printing from print server

    - by evolvd
    The print server is a VM on Xen; the VM is Windows 2003 32-bit. During the issue the VM is not being taxed in any way: CPU, memory, HD read/write, and network speed are all good. The problem I see is in the transfer of the print file from the print server to the printer: the 80 MB file is transferred from the client to the print server in about 2 minutes, but it then takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start happening. The printer is rebooted every evening and is only used for one large print job in the morning.

    The server has been rebooted with no effect. I changed the spool option to send the entire spool to the server before printing starts, and it had no effect. This printer problem did happen to come about after some changes to the Xen environment: the Xen servers changed from using HBA NIC cards to software iSCSI, and a new switch was put in. I don't think this is related to the problem, since all the speeds on the VMs are better now. The change happened on Saturday and the first print to this printer happened on Monday morning. I'm just putting that out there; like I said, I don't think it is related, but I don't want to rule it out.

    At this point I don't have many other options besides the physical layer. I can swap out the network cable that goes to the printer, and I might be able to print the same job to another printer, but I won't be able to test those things until this afternoon. Any other ideas or tests I could run to try to find the reason for the slow speed? I forgot to say that this only happens when printing to this one printer.

    ===Update===
    I found out that a few printers currently have this issue, not just the one. There are over 30 printers on the server, though, so I know it's not happening to all of them. I printed a large PDF directly from the server and it printed at normal speed. If a client machine sends the large print request, it gets to the server fine but is then slow to get from the server to the printer; if sent directly from the server, it gets to the printer at normal speed. The question now is why there is a speed difference when the job comes from a client machine, and why it would start now.

    Read the article

  • FTP not listing files behind firewall (setsockopt (ignored): Permission denied)

    - by KennyDs
    We are developing a Magento application that has a module that works with FTP. Today we deployed this on the testing environment, which is set up in the following way: a gateway server with the following iptables rules:

        # iptables -L -n -v
        Chain INPUT (policy ACCEPT 2 packets, 130 bytes)
        pkts bytes target prot opt in   out  source     destination
        0    0     ACCEPT all  --  lo   *    0.0.0.0/0  0.0.0.0/0
        165  13720 ACCEPT all  --  *    *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        Chain FORWARD (policy ACCEPT 7 packets, 606 bytes)
        pkts bytes target prot opt in   out  source     destination
        0    0     ACCEPT all  --  eth1 eth0 0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        15   965   ACCEPT all  --  eth0 eth1 0.0.0.0/0  0.0.0.0/0
        0    0     REJECT all  --  eth1 eth1 0.0.0.0/0  0.0.0.0/0  reject-with icmp-port-unreachable
        Chain OUTPUT (policy ACCEPT 126 packets, 31690 bytes)
        pkts bytes target prot opt in   out  source     destination

    These are set at runtime via the following bash script:

        #!/bin/sh
        PATH=/usr/sbin:/sbin:/bin:/usr/bin
        #
        # delete all existing rules.
        #
        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -X
        # Always accept loopback traffic
        iptables -A INPUT -i lo -j ACCEPT
        # Allow established connections, and those not coming from the outside
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow outgoing connections from the LAN side.
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        # Masquerade.
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        # Don't forward from the outside to the inside.
        iptables -A FORWARD -i eth1 -o eth1 -j REJECT
        # Enable routing.
        echo 1 > /proc/sys/net/ipv4/ip_forward

    The gateway server is connected to the WAN via eth1 and to the internal network via eth0. One of the servers behind the gateway has the following problem when trying to list files over FTP:

        $ ftp -vd myftpserver.com
        Connected to myftpserver.com
        220 Welcome to MY FTP Server
        ftp: setsockopt: Bad file descriptor
        Name (myftpserver.com:magento): XXXXXXXX
        ---> USER XXXXXXXX
        331 User XXXXXXXX, password please
        Password:
        ---> PASS XXXX
        230 Password Ok, User logged in
        ---> SYST
        215 UNIX Type: L8
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp> ls
        ftp: setsockopt (ignored): Permission denied
        ---> PORT 192,168,19,15,135,75
        421 Service not available, remote server has closed connection

    When I try listing the files in passive mode, same result. When I run the same command on the gateway server itself, everything works fine, so I believe the iptables rules are not forwarding properly. Does anyone have an idea which rule I need to add to make this work?
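
    One likely missing piece (my reading of the trace, not a certainty): the client is announcing its private address in the PORT command, which the remote server cannot reach. The kernel's FTP helpers can both track the data connection and rewrite the PORT payload during NAT, but they have to be loaded on the gateway. A minimal sketch:

        # Hedged sketch: load the FTP conntrack and NAT helpers so data
        # connections are tracked as RELATED and PORT commands are rewritten
        # with the masqueraded address.
        modprobe nf_conntrack_ftp
        modprobe nf_nat_ftp
        # Persist across reboots (Debian-style):
        echo nf_conntrack_ftp >> /etc/modules
        echo nf_nat_ftp >> /etc/modules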

    Read the article

  • batch file infinite loop when parsing file

    - by Bart
    Okay, this should be a really simple task, but it's proving more complicated than I think it should be. I'm clearly doing something wrong and would like someone else's input. What I would like to do is parse through a file containing paths to directories and set permissions on those directories. An example line of the input file (there are several lines, all formatted the same way, with a different path to a directory):

        E:\stuff\Things\something else (X)\

    (The file in question is generated under Cygwin, using find to list all directories with "(X)" in the name. The file is then passed through unix2win to make it Windows-compatible. I've also tried manually creating the input file from within Windows, to rule out the file's creation method as the problem.)

    Here's where I'm stuck. I wrote the following quick-and-dirty batch file in Windows XP and it worked without any issues at all, but it will not work in Server 2008. Batch file code to run through the file and set permissions:

        FOR /F "tokens=*" %%A IN (dirlist.txt) DO echo y| cacls "%%A" /T /C /G "Domain Admins":f "Some Group":f "some-security-group":f

    What this is SUPPOSED to do (and does in XP) is loop through the specified file (dirlist.txt) and run cacls.exe on each directory it pulls from the file. The "echo y|" is there to automagically confirm when cacls helpfully asks "are you sure?" for every directory in the list. Unfortunately, what it actually DOES is fall into an infinite loop. I've tried surrounding everything after "DO" with quotes, which prevents the endless loop but confuses cacls so it throws an error. Interestingly, I've tried running the code after "DO" manually (obviously replacing the variable with the full path, copied straight from the file) at a command prompt, and it runs as expected. I don't think it's the file or the loop, as adding quotes to the command prevents the loop from continuing past where it's supposed to... I really have no idea at this point. Any help would be appreciated. I have a feeling it's going to be something incredibly stupid... but I'm pulling my hair out, so I thought I'd ask.
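
    One workaround worth sketching for Server 2008: icacls ships in the box, replaces cacls, and does not prompt for confirmation, so the echo y| pipe (a frequent source of odd FOR /F interactions) drops out entirely. The group names below are the ones from the question; check the inheritance flags against what the old /T /G grant actually produced before relying on this:

        REM Hedged sketch: same loop, icacls instead of cacls, no piped "y".
        FOR /F "usebackq delims=" %%A IN ("dirlist.txt") DO icacls "%%A" /T /C /grant "Domain Admins":(OI)(CI)F "Some Group":(OI)(CI)F "some-security-group":(OI)(CI)F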

    Read the article

  • Unable to telnet out on port 25 on windows server 2008

    - by NickGPS
    Hi all, I just set up a Windows 2008 R2 server and am trying to get a basic mail server up and running so that I can send emails from my applications. I set up a virtual SMTP server in IIS 6 and tried doing a local telnet to port 25, which seemed to work fine: there were no errors during this stage, and I can see the mail message appear in the Queue folder. The problem is that mail never leaves the Queue folder. I then tried to telnet to a remote mail server on port 25 but couldn't connect:

        telnet 209.85.227.27 25
        Could not open connection to the host, on port 25: Connection failed

    I checked my firewall, and there is a default setting to allow all outgoing TCP traffic with no restriction. I even set up a specific rule for outgoing port 25 traffic, but to no avail. I then ran a SmtpDiag.exe command:

        .\SmtpDiag.exe [email protected] [email protected]

    and received the following output:

        Searching for Exchange external DNS settings.
        Computer name is WIN-SERVERNAME.
        Failed to connect to the domain controller. Error: 8007054b
        Checking SOA for gmail.com.
        Checking external DNS servers.
        Checking internal DNS servers.
        SOA serial number match: Passed.
        Checking local domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Local DNS test passed.
        Checking remote domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Remote DNS test passed.
        Checking MX servers listed for [email protected].
        Connecting to gmail-smtp-in.l.google.com [209.85.227.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gmail-smtp-in.l.google.com.

    Are there any other diagnostics I can run to figure out whether it's my firewall or something else? I have removed the antivirus to make sure it wasn't causing the problem. Any ideas would be much appreciated.
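
    One more data point worth collecting, on the assumption (mine, unverified) that the hosting provider or upstream ISP is filtering outbound port 25 specifically, which is a very common default: compare port 25 against a submission port that mail providers leave open, such as 587. A PowerShell 2.0 sketch:

        # Hedged sketch: if 587 connects but 25 times out, the block is upstream,
        # not in the local firewall.
        (New-Object Net.Sockets.TcpClient).Connect("gmail-smtp-in.l.google.com", 25)
        (New-Object Net.Sockets.TcpClient).Connect("smtp.gmail.com", 587)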

    Read the article

  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router so I could share my Internet connection and keep a webserver online too, using NAT. Users' IPs ($_SERVER['REMOTE_ADDR']) were fine; I was seeing the real addresses of users. But as traffic grew, I had to install a Linux server (Debian) to share my Internet connection, because my old router couldn't keep up with the traffic anymore. I shared the Internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing real user IPs I see my gateway IP (the Linux box's internal IP) as every user's IP address. How do I solve this issue? I've edited my post to paste the rules I'm currently using:

        #!/bin/sh
        # I made a script to set the rules

        # I flush everything here.
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X

        # I drop everything as a general rule, but this is disabled under testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP

        # these are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT

        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT

        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see my real IP while echoing it with PHP, but I don't have Internet access. What should I do to have Internet access and see the real IP while ports are forwarded too? (xx.xx.xx.xx is my public IP; I hid it for security reasons.)
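
    A hedged reading of those rules: the unconditional MASQUERADE on eth0 also rewrites the source of the DNATed inbound connections on their way to 192.168.42.3, which is exactly the symptom described. Restricting the masquerade to LAN-sourced traffic heading somewhere other than the LAN is the usual shape of the fix (a sketch; adjust the subnet and interface to the real topology):

        # Hedged sketch: only masquerade traffic that originates on the LAN and
        # is not destined back into it, so forwarded port-80 connections keep
        # the client's real source address.
        iptables -t nat -A POSTROUTING -s 192.168.42.0/24 ! -d 192.168.42.0/24 -o eth0 -j MASQUERADE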

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK, and it has all been running extremely well. However, at around 4 PM Monday to Friday we see the same issue arise: one or two PHP threads spike to 100% CPU and gradually eat up RAM until the server(s) fall over.

    98%+ of our requests are HTTPS. These come into our layer-7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data to an application server (we have 2 of those at the moment) on port 80. Our application servers run Varnish on port 80, which takes the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which vhost it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup.

    Throughout the day the load typically goes no higher than 0.8 on either application server, but at around 4 PM our problem arises. I've managed to run strace on a few of the affected threads when they cause the problem, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem -- especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) with Intel i3 540s at 3.07 GHz and 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find it here.

    Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
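
    Given the stat() loop on the zoneinfo file, one cheap thing to rule out (a hedged guess, not a confirmed diagnosis): PHP falling back to guessing the default timezone on every date call because date.timezone was never pinned in php.ini. Setting it explicitly costs nothing and removes that code path:

        ; Hedged sketch (php.ini): stop PHP probing the system timezone database
        ; at runtime by declaring the timezone explicitly.
        date.timezone = "Europe/London"

    Restart the spawn-fcgi pool after the change so every PHP-CGI worker picks it up.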

    Read the article

  • What issues ensue from having multiple versions of Office installed?

    - by Michael Sorens
    My ultimate question is embodied in the title, but I thought it might be helpful to others if I detail what instigated my inquiry and my examination of the problem. To me, the first rule of software updates is primum non nocere -- first, do no harm. So, with my Windows 7 system containing both Office 2003 and Office 2010, I blithely proceeded to install this month's updates from Microsoft, containing updates for both versions of Office. While Microsoft officially does not recommend running multiple versions (see, for example, Running Multiple Versions of Microsoft Excel), it is possible; I have had two versions installed for a year or more and have never run into an issue before. One thing that is always mentioned is installation order, i.e., the one you want to open files by default should be installed last. I wanted 2010 as my default, so I had indeed installed 2003 first and then, years later, 2010.

    With this round of Windows updates, either the 2010 patches were installed before the 2003 ones, knocking out the file association, or the 2003 patch was more comprehensive, in the sense of touching the file association while the 2010 patch did not. In any case, after the updates, double-clicking a .xls file opened 2003 rather than 2010. Web search indicated either:

        1. Use the file associations control panel to re-associate .xls files with the correct version of Excel. (I looked at this first, but it showed what seemed to be an unversioned "Excel" associated with .xls files, so I did not check further. This turned out to be an error on my part; more later.)
        2. Re-install the versions in the desired order; I find this unreasonable.
        3. Run the repair option of the Office installer on the desired version; still more work than one should need.
        4. Run Excel from the command line with "/regserver" on the one to be the default and "/unregserver" on the other. A good idea, but further searching indicated that neither 2007 nor 2010 supports "/regserver", contrary to some posts (e.g. Default Program With Multiple Versions Installed).

    Since this was a Windows Update issue, and Microsoft provides free support for such issues, I inquired there as well, but succeeded only in getting the suggestion to uninstall all other versions, period -- not acceptable to me. What worked for me was going back to the file associations control panel and manually selecting the Office 2010 version of Excel. While it appeared no different in the control panel, it did fix the double-click issue. So if all it takes is this simple fix after an update, I can live with that.

    What I am wondering is: has anyone seen any other problems related to having multiple versions of Office installed?
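
    For anyone who wants the command-line view of the same association (useful for seeing what that "unversioned Excel" entry actually points at), here is a sketch; the ProgID Excel.Sheet.8 is the standard one for .xls, but the install path below is the default x86 location and should be verified:

        REM Hedged sketch: inspect which ProgID and executable .xls is wired to.
        assoc .xls
        ftype Excel.Sheet.8
        REM Re-pointing it at Office 2010 would look roughly like this:
        ftype Excel.Sheet.8="C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" "%1"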

    Read the article

  • VMware server 2.0 SYN/ACK repeating issues

    - by user65579
    VMware Server 2.0.0, Build 122956. I am having some issues connecting into a guest VM (Ubuntu Linux 4.4.3-4, Lucid) running under VMware Server 2.0 on a Windows server host. All connections to and from the VM work fine, except FTP. I thought the issue was the FTP daemon at first, but we have since ruled out both the daemon and the server itself.

    When you try to connect to the FTP server from outside the host OS, it fails with "421 Service not available", but connecting from the local VM or from the host OS goes through fine. I have run many packet sniffs using Wireshark/tcpdump from the VM, the host OS, and the connecting client; the most informative is the host OS. I have attached a PNG of the relevant packets that were captured. I viewed some other sniffed network traffic (WWW specifically) and it seems to do the same SYN/ACK repeating, but users see no issues. I have disabled the firewall and the issue persists; I have tried specific allow rules to ensure the data is allowed, with no change.

    It appears that VMware attempts the ICMP redirect and it works, but then VMware repeats the packets sent, so you get three SYN/ACKs for every one SYN from the client. VMware also appears to be attempting to establish an FTP connection between the HOST OS and the GUEST OS: I see a second SYN sent from the host OS to the guest to initiate a new connection, and it gets the appropriate SYN/ACK followed by an ACK, but the client never sees any of this from its end. E.g.:

        SYN from client
        SYN/ACK from host OS to client
        SYN/ACK from guest OS to client
        SYN/ACK from host OS to client

    The same thing happens when the connection reset is attempted: RSTs start being sent and repeated, the server responds with a valid header to continue the FTP handshake, but the RST acknowledgement is already issued and things are closed.

    I am not 100% sure whether this is a bug in VMware or a VM network misconfiguration. Does anyone have any thoughts on where exactly the issue could be, or things to try to verify or rule out? I have linked to a picture of the relevant packets sniffed from the host OS: http://img18.imageshack.us/img18/7789/vmwareftpconnection.jpg

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" tutorial is exactly what I'm trying to do, so I've followed it as completely as I know how, and I've applied or tried answers supplied here on Stack Overflow, to no effect.

    According to the "Reverse Proxy Scenario", you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES, all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6; the Rails app server is Mac OS X 10.6).

    What works right now is access to the webapp via the reverse proxy as:

        http://www.ourdomain.org/apphost/railsappname/controllername/action

    But none of the JavaScript files, CSS files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule were configured incorrectly (so of course I've focused on that, and can't seem to get anything added or deleted in the process of passing the HTML in from apphost and out through the Apache server). For instance, hovering over an action link in the HTML returned by the web app, you'll get:

        http://www.ourdomain.org/railsappname/controllername/action

    Here's what my Apache directives look like:

        LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
        LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so
        ProxyHTMLLogVerbose On
        LogLevel Debug
        ProxyPass /apphost/ http://apphost.local/
        <Location /apphost/>
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyPassReverse /
            ProxyHTMLExtended On
            ProxyHTMLURLMap railsappname/ apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>

    After every change I make to httpd.conf I religiously check apachectl -t, just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine don't seem to overrule what I'm doing here. And yet nothing I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to show me what it's working on and doing to the HTML coming from my web app. That's what I understood ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.
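
    A hedged guess at the mapping, not a confirmed fix: the app emits absolute paths ("/railsappname/..."), while the ProxyHTMLURLMap patterns above are relative, so they may never match. Anchoring both patterns at the root, and pointing ProxyPassReverse at the backend URL, would look like this sketch:

        # Hedged sketch: match absolute paths in the returned HTML.
        ProxyPass /apphost/ http://apphost.local/
        <Location /apphost/>
            ProxyPassReverse http://apphost.local/
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyHTMLExtended On
            ProxyHTMLURLMap /railsappname/ /apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>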

    Read the article

  • GPU not powering on

    - by Lerp
    So I got home from work yesterday and went to turn my computer on as per usual, to be greeted by this: the screens remained black. So I rebooted; I got as far as GRUB before my screens went black again. I rebooted again; they didn't turn on. I rebooted again; I got as far as the Windows login screen. This time I unplugged it, opened it up and cleaned it, but no luck: the GPU was still being temperamental. I repeated the process of turning it off and on several times until, one time, it worked as normal. I happily played games for the rest of the night (5-6 hours?) thinking everything was jolly good now.

    Well, I got home from work today and it is doing the SAME thing. Sometimes everything displays normally for a few seconds to minutes, then the screens go black; sometimes the screens don't come on at all.

    Summary and additional points:

        - The screens sometimes turn on before shortly turning off, and sometimes they don't; I cannot seem to determine any pattern between when they do or do not turn off.
        - The build has been working fine for about 8 months now, so I know it's not hardware incompatibility.
        - If I plug a monitor into the on-board graphics, I can use the PC normally (just in low-graphics mode).
        - I have two monitors, and it's a case of both turning on or neither, so I think I can rule out the monitors being dead.
        - I have tried replacing the GPU.
        - I have tried replacing the RAM.
        - I have tried flashing the CMOS.
        - I have tried cleaning the inside.
        - The GPU is a Radeon HD 7870.

    My questions:

        - Is my GPU dead? It's not very old, and I would rather have a way of being certain it's the GPU before I fork out money I can't really afford. I do not have a second PC here to test it in.
        - If my GPU is dead, why does it sometimes work and sometimes not?

    Update: OK, it was working again... at least I thought it was. I left it running for 10-20 minutes with the screens black, turned it off and straight back on, and it worked for all of 10 minutes. I was updating this post in joy, thinking I could play some games for the rest of the night, when BAM, it went black again. So yeah, I don't know :C

    Read the article

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running the free edition of VMware ESXi, with Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a /29; let's call them 69.1.1.1 - 69.1.1.5). Currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5).

    I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway's (69.1.1.1). This is because our machines need to communicate with third-party APIs that expect these requests to come from a specific IP. From what I could find, SNAT (source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find a specific piece of documentation for it at Zentyal.

    I've tried setting this up a couple of different ways with no results, and at this point I have no idea whether I'm going about this completely wrong, or whether my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get the following form to set up "SNAT" rules in Zentyal. Perhaps someone can offer some guidance and definitions for the fields?

        - SNAT address: Is this the public IP I want to masquerade as?
        - Outgoing interface: Should this be my external NIC (the one connected to the public Internet), or the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
        - Source: Is this the source on the internal network (one of the private IPs), a public IP I want to masquerade as, or something else entirely?
        - Destination: Is this a place on the Internet (e.g., "only do this for the site Google.com"/an IP), or am I allowing myself to become confused again?
        - Service: I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly, any kind of help here would be very appreciated. I'm not a NetOps person or anything even close; I spend most of my day writing code, and my entire "team" at this company consists of me, myself, and I. So while I try to broaden my knowledge base at every possible opportunity, I can only learn so much so fast, and I feel like with networking especially there's just so much, coupled with a learning curve for each solution that (from my limited perspective) likes to use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).
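
    In raw iptables terms, the rule that form is building looks like the sketch below (all values are assumptions for illustration: eth1 as Zentyal's WAN NIC, 10.34.251.12 as the internal machine, 69.1.1.2 as the address the API provider expects). If the sketch behaves, it maps back to the form as: SNAT address = 69.1.1.2, outgoing interface = the WAN NIC, source = the internal host or subnet, destination = any, service = any:

        # Hedged sketch: make one internal host leave via a specific public IP.
        iptables -t nat -A POSTROUTING -s 10.34.251.12 -o eth1 -j SNAT --to-source 69.1.1.2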

    Read the article
