Search Results

Search found 10850 results on 434 pages for 'shihab returns'.

  • Unable to ping domain.local, but can ping server.domain.local

    - by Force Flow
    I have a single Windows 2008 server running Active Directory, Group Policy, and DNS. DHCP is running from the firewall (there are multiple branch locations and each location has its own firewall supplying DHCP; but for this problem, the server and workstation are at the same location).

    On an XP workstation, if I try to visit \\domain.local or ping domain.local, the workstation can't find it. A ping returns:

        Ping request could not find host domain.local

    If I try to visit \\server or \\server.domain.local, or ping server or server.domain.local, I'm able to connect normally. If I ping or visit domain.local on the server itself, I'm also able to connect normally.

    A records are in place in the DNS service for server, domain.local, and server.domain.local. A reverse lookup zone is also enabled, and PTR records are in place.

    If I wait 20-30 minutes, I am eventually able to ping and visit domain.local, but when attempting to ping, it takes 30 seconds to return an IP address. I am also unable to join a new workstation to the domain during this wait period; if I try, the error message returned is "network path not found". Is there something I'm missing?
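
    For reference, a quick way to narrow down where the lookup of the zone apex fails (a hypothetical diagnostic, not part of the original question; 192.168.1.10 stands in for the DC's address, and nltest is only present on XP if the Support Tools are installed):

        :: confirm which DNS servers the workstation is actually using (DHCP comes from the firewall here)
        ipconfig /all
        :: ask the DC's DNS service directly for the zone apex record (domain.local itself)
        nslookup -type=A domain.local 192.168.1.10
        :: compare with whatever server the workstation normally queries
        nslookup -type=A domain.local
        :: confirm a domain controller can be located for the domain
        nltest /dsgetdc:domain.local

    If the direct query answers instantly but the default one does not, the workstation is pointed at a DNS server that does not host the zone.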

  • Unable to Access Certain Websites

    - by codejoust
    Through a local network, all computers except one Ubuntu machine can access:

        1. Adobe.com
        2. Icann.org
        3. Apache.org
        4. Example.com

    The Ubuntu machine returns (in Firefox): "Though the site seems valid, the browser was unable to establish a connection." Furthermore, when I traceroute those websites from the Ubuntu machine, they all return ubuntu.local, and it ends there:

        traceroute to icann.org (192.0.32.7), 30 hops max, 40 byte packets
         1  ubuntu.local (192.168.1.105)  3000.791 ms !H  3000.808 ms !H  3000.814 ms !H

    I've checked the hosts file and there isn't anything in there, and I have an Apache server on this machine, so if the names were redirected to localhost I'd probably see the localhost webroot page. Thanks in advance!

        user@ubuntu:~$ netstat -nr
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth1
        192.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 eth1
        0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth1

    The Ubuntu machine is one of six on the network. I'm using OpenDNS for DNS, so I don't think that should be a problem.
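
    One observation worth testing (a hypothetical check, not part of the original question): the 192.0.0.0 / 255.0.0.0 entry in the routing table above claims the whole 192.0.0.0/8 block as directly attached to eth1, which would explain the "!H" (host unreachable) results for addresses such as icann.org (192.0.32.7). A quick way to test that theory:

        # show which route the kernel would pick for one of the failing addresses
        ip route get 192.0.32.7
        # temporarily remove the suspect route (something may re-add it at the next network restart)
        sudo ip route del 192.0.0.0/8 dev eth1
        # retry
        ping -c 3 192.0.32.7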

  • Windows Explorer slow to open networked computer, fast to navigate once opened

    - by Scott Noyes
    I open Windows Explorer and enter an IP for a computer on my home network (\\192.168.1.101). It takes 30 seconds or more to present a list of the shared folders. It does not appear to be an initial handshaking/authentication thing; even if I allow the view to load and then immediately load the same again, it is always slow. Once they appear, navigating through folders and opening files is fast. Also, navigating directly to a folder (\\192.168.1.101\My Music) is fast, even if it's the first connection since a restart.

    Using \\computerName instead of the IP address gives exactly the same results. Pings return in 1 ms. net view \\computerName (or \\ipAddress) returns the list of shared folders quickly. This makes me suspect an Explorer issue rather than a network issue.

    Suspecting that the remote computer was being automatically indexed or something, I went into Tools - Folder Options - View and unchecked "Automatically search for network folders and printers," but that made no difference. De-selecting the "Folders" icon near the address bar makes no difference. Adding the IP address and computer name to the hosts file makes no difference.

    Both computers involved are laptops running Windows XP. Both have WiFi and cable adapters. Mine is not connected via cable. The result is the same whether the target is plugged in to the cable or not (although the IP address changes: 192.168.1.101 over cable, 192.168.1.103 over WiFi). We are using DHCP assigned by the router.

  • Mac OS X - configuring an ntpd server on a LAN with a D-Link DIR-655

    - by Mark C
    Hey all, this question is pretty specific, but I hope someone will have seen this error elsewhere. I am configuring a machine running OS X 10.5.8 to be an NTP server for machines connected to a LAN that is not connected to the Internet. I am not too worried about knowing the "right" time on all the machines, but rather about making sure everyone has the same notion of time.

    I configured the NTP daemon on the Mac by turning on "Set date and time automatically" in System Preferences, using the server's own clock, 127.127.1.0, as the reference clock. I figured I should see if the server can NTP-query itself before proceeding to the clients.

    The weird part is that when I run the ntpq -p command in a command prompt while connected to my D-Link DIR-655 (firmware 1.33), it hangs for about a minute or so each time before finally giving me some output. I thought the problem might have to do with port forwarding, so I configured the router to forward port 123 to the IP of the server, but that did not improve the situation. When I run ntpq -p on my school's network, on a Linksys WRT54G router, or with the wireless AirPort card turned off, I have absolutely no problems; the command returns a response instantly.

    Is this normal? I can see why a query might take a minute or so, but I don't understand why one router does it faster than the other. I tried messing around with the ntp.conf file, adding the burst, minpoll, and maxpoll options:

        server 127.127.1.0 burst minpoll 4 maxpoll 5

    figuring that perhaps I am polling too often and the configuration file is slowing me down. But even with this, ntpq still hangs on the D-Link DIR-655 and does just fine on the other routers. Any thoughts on where the lag is coming from, or whether the lag is even a problem?
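
    For reference, a minimal ntp.conf for an isolated-LAN time server usually pairs the local reference clock with a fudge line so it advertises a sane stratum; this is a generic sketch under that assumption (the 192.168.0.0/24 restrict line is a placeholder LAN), not the poster's actual configuration:

        # serve this machine's own clock
        server 127.127.1.0
        # advertise it at a modest stratum so a real upstream source would win if one ever appears
        fudge 127.127.1.0 stratum 10
        # let LAN clients query but not modify the daemon
        restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap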

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports the values as negative numbers, e.g. -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values.

    If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
        oidname: DEVICE-MIB::AverageReceiveSNR.0
        response: = INTEGER: 25
        Processing line 2
        oidname: DEVICE-MIB::CurrentNoiseFloor.0
        response: = INTEGER: -97
        SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
        Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct, but it will never generate a notification when the value is out of range:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
        oidname: DEVICE-MIB::CurrentNoiseFloor.0
        response: = INTEGER: -97
        SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
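
    On the last question, a wrapper plugin is one workaround. The sketch below is hypothetical: it hard-codes this noise-floor OID, the -85/-80 dBm thresholds, SNMP v1 and a "public" community, and calls snmpget directly while emitting the usual Nagios exit codes:

        #!/bin/sh
        # check_noise_floor.sh <host> - hypothetical wrapper for negative dBm values
        HOST="$1"
        OID="DEVICE-MIB::CurrentNoiseFloor.0"
        VAL=$(snmpget -t 1 -r 5 -m ALL -v 1 -c public -Oqv "$HOST" "$OID" 2>/dev/null)
        case "$VAL" in
            ''|*[!0-9-]*) echo "NOISE UNKNOWN - no integer returned for $OID"; exit 3 ;;
        esac
        if [ "$VAL" -ge -80 ]; then
            echo "NOISE CRITICAL - ${VAL} dBm | noise_floor=${VAL}"; exit 2
        elif [ "$VAL" -ge -85 ]; then
            echo "NOISE WARNING - ${VAL} dBm | noise_floor=${VAL}"; exit 1
        fi
        echo "NOISE OK - ${VAL} dBm | noise_floor=${VAL}"; exit 0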

  • Problems using wondershaper on KVM guest

    - by Daniele Testa
    I am trying to limit bandwidth on one of my KVM guests using wondershaper. Something like this works fine:

        wondershaper br23 9000 9000

    Doing a wget with the setting above gives a download speed of about 1 MB/sec, like it should. However, it seems this is the highest setting I can use, because setting it to this does not work:

        wondershaper br23 10000 10000

    Doing the same wget with this setting downloads at full speed, about 70 MB/sec in my case. Running a status check returns the following:

        qdisc cbq 1: root refcnt 2 rate 10000Kbit (bounded,isolated) prio no-transmit
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        qdisc sfq 10: parent 1:10 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc sfq 20: parent 1:20 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc ingress ffff: parent ffff:fff1 ----------------
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        class cbq 1: root rate 10000Kbit (bounded,isolated) prio no-transmit
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:1 parent 1: rate 10000Kbit (bounded,isolated) prio 5
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:10 parent 1:1 leaf 10: rate 10000Kbit prio 1
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:20 parent 1:1 leaf 20: rate 9000Kbit prio 2
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:30 parent 1:1 leaf 30: rate 8000Kbit prio 2
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0

    What am I doing wrong? Does wondershaper have some kind of upper limit?
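
    As a point of comparison (a hypothetical sanity check, not something from the question), the same cap can be applied directly with tc, bypassing wondershaper's cbq setup entirely; if a plain token-bucket filter at 10 Mbit does limit the guest, the issue is in wondershaper rather than in tc:

        # replace the root qdisc on the bridge with a simple 10 Mbit token bucket
        tc qdisc replace dev br23 root tbf rate 10mbit burst 32kbit latency 400ms
        # inspect it
        tc -s qdisc show dev br23
        # remove it again when done testing
        tc qdisc del dev br23 root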

  • How do I make Nginx redirect all requests for files which do not exist to a single php file?

    - by Richard
    I have the following nginx vhost config:

        server {
            listen 80 default_server;
            access_log /path/to/site/dir/logs/access.log;
            error_log /path/to/site/dir/logs/error.log;
            root /path/to/site/dir/webroot;
            index index.php index.html;
            try_files $uri /index.php;

            location ~ \.php$ {
                if (!-f $request_filename) {
                    return 404;
                }
                fastcgi_pass localhost:9000;
                fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
                include /path/to/nginx/conf/fastcgi_params;
            }
        }

    I want to redirect all requests that don't match existing files to index.php. This works fine for most URIs at the moment, for example:

        example.com/asd
        example.com/asd/123/1.txt

    Neither asd nor asd/123/1.txt exists, so they get redirected to index.php and that works fine. However, if I put in the URL example.com/asd.php, nginx tries to look for asd.php and, when it can't find it, returns 404 instead of sending the request to index.php. Is there a way to get asd.php also sent to index.php if asd.php doesn't exist?
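
    A sketch of one common way to get that behaviour (hedged, not tested against this exact setup): drop the if/return 404 and let try_files do the existence check inside the PHP location as well, so a missing .php file falls through to /index.php:

        location ~ \.php$ {
            # if the requested .php file does not exist, hand the request to index.php instead of returning 404
            try_files $uri /index.php;
            fastcgi_pass localhost:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
            include /path/to/nginx/conf/fastcgi_params;
        }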

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

        1) The user's browser requests http://www.example.com/files/file1.zip
        2) The request goes to server A, based on the DNS A record for example.com.
        3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
        4) Server A forwards the request to server B.
        5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain.

    From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup: server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
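
    For context, a minimal sketch of the step 4/5 mechanics using LVS direct routing (hypothetical placeholder addresses from the documentation range; this does plain layer-4 DSR rather than the per-URL routing described, but it illustrates the MAC-rewriting forwarder and the loopback VIP on the real server):

        # on server A (the director): forward port 80 of the shared VIP to server B without NAT
        ipvsadm -A -t 203.0.113.10:80 -s rr
        ipvsadm -a -t 203.0.113.10:80 -r 203.0.113.20:80 -g   # -g = gatewaying, i.e. direct return

        # on server B (the real server): hold the VIP on loopback and never ARP for it
        ip addr add 203.0.113.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

    The per-request, per-URL decision in step 3 is the part a layer-4 director cannot make, which is why a custom forwarder or proxy ends up being needed for that piece.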

  • IIS URL Rewrite working inconsistently?

    - by Don Jones
    I'm having some oddness with the URL rewriting in IIS 7. Here's my Web.config (below). You'll see "imported rule 3," which grabs attempts to access /sitemap.xml and redirects them to /sitemap/index. That rule works great. Right below it is imported rule 4, which grabs attempts to access /wlwmanifest.xml and redirects them to /mwapi/wlwmanifest. That rule does NOT work. (BTW, I do know it's "rewriting," not "redirecting"; that's what I want.)

    So... why would two identically-configured rules not work the same way? Order makes no difference; imported rule 4 doesn't work even if it's in the first position. Thanks for any advice!

    EDIT: Let me represent the rules in .htaccess format so they don't get eaten :)

        RewriteEngine On

        # skip existing files and folders
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]

        # get special XML files
        RewriteRule ^(.*)sitemap.xml$ /sitemap/index [NC]
        RewriteRule ^(.*)wlwmanifest.xml$ /mwapi/index [NC]

        # send everything to index
        RewriteRule ^.*$ index.php [NC,L]

    The "sitemap" rewrite rule works fine; the "wlwmanifest" rule returns a "not found." Weird.

  • Linksys wireless router will not hardware reset.

    - by Jack M.
    Hello, all. I'm unable to make my router perform a hardware reset, and I cannot understand why. All was working well, except that my iPhone could not connect to the wireless. I found that the router was only allowing AES encryption in WPA2 Personal mode, so I upgraded the firmware to Ver.1.06.1, and everything went screwy.

    The router is no longer showing up in the WiFi list (as Linksys, or under its previous network name). Wiring into the router gives me an IP address from my ISP (24.121.121.XXX). I attempt a hardware reset, but the power light never starts flashing and the router does not seem to reboot; my wired machine stays online with no interruption in WoW. Pulling the power cord to force a reset returns it to the same state. I even went so far as to pull up my previous IP address (from DynDNS) and try to connect to that, but it won't even ping.

    What I'm trying to find out is: did the new firmware fry the thing, or is there some way to fix this? Thanks in advance for any help.

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram:

    I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram is the route I would like traffic to take when the user is inside the LAN; the red route is seemingly what it does take. The website has a domain name (like most websites do).

    From the user PC, if I ping the domain name, I get the internal IP of the webserver (because of a hosts file entry); if I nslookup the domain name, I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.)

    If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news, DNS seems to be doing what I want it to). All things so far seem to be going well.

    The only thing that is confusing me, and preventing the Moodle single sign-on from working, is that if I get Moodle to show my IP address, it says it is an external one (outside my NATting firewall) when it should show an internal IP.

    This is the issue. Any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert, and the request goes directly to the webserver)? Anything? Thanks all!

  • MySQL Error: FUNCTION LEVENSHTEIN already exists

    - by kgrote
    I've got an ExpressionEngine database. I exported a couple of tables from it, then dropped those tables. When I try to re-import the tables in phpMyAdmin, I get this error:

        SQL query:
        --
        -- Database: `my_db`
        --
        DELIMITER $$
        --
        -- Functions
        --
        CREATE DEFINER=`my_username`@`%` FUNCTION `LEVENSHTEIN`(s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS int(11)
            DETERMINISTIC
        BEGIN
            DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT;
            DECLARE s1_char CHAR;
            DECLARE cv0, cv1 VARBINARY(256);
            SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0;
            IF s1 = s2 THEN
                RETURN 0;
            ELSEIF s1_len = 0 THEN
                RETURN s2_len;
            ELSEIF s2_len = 0 THEN
                RETURN s1_len;
            ELSE
                WHILE j <= s2_len DO
                    SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1;
                END WHILE;
                WHILE i <= s1_len DO
                    SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1;
                    WHILE j <= s2_len DO
                        SET c = c + 1;
                        IF s1_char = SUBSTRING(s2, j, 1) THEN
                            SET cost = 0;
                        ELSE
                            SET cost = 1;
                        END IF;
                        SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost;
                        IF c > c_temp THEN
                            SET c = [...]

        MySQL said: Documentation
        #1304 - FUNCTION LEVENSHTEIN already exists

    I get this error even if I drop all tables from the DB and try to import anything. The only way I can get the error to go away is to totally delete the database and re-create it. What's causing that error, and how can I stop it from happening?
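
    A note on the mechanics, offered as an assumption to verify: stored functions belong to the database rather than to any table, so dropping the tables leaves LEVENSHTEIN in place while the dump tries to CREATE it again. One way to make the import repeatable (a sketch, assuming the function should simply be recreated from the dump) is to drop it first:

        -- run before importing, so the CREATE FUNCTION in the dump does not collide
        DROP FUNCTION IF EXISTS LEVENSHTEIN;

    Alternatively, re-export without the routines so the dump contains only the tables.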

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating from an older version), and it always returns:

        Reading ./localconfig...
        Checking for DBD-mysql (v4.00)   ok: found v4.005
        Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        There was an error connecting to MySQL:
        Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL version:

        [root@bugzilla-core TMP]# mysql --version
        mysql  Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

        [root@bugzilla-core TMP]# mysql_config
        Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/data01/mysql-5.0.60/include -g]
          --include        [-I/data01/mysql-5.0.60/include]
          --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
          --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
          --socket         [/tmp/mysql.sock]
          --port           [0]
          --version        [5.0.60sp1]
          --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped, and I'm not sure where to go from here. Scouring the webs hasn't returned anything fruitful. Any ideas?

  • Website does not resolve in browser but traceroute is successful

    - by Colum
    I am trying to figure out an issue. My internet is working fine, but this one website is not resolving. It works via a proxy, and traceroute works:

         1  192.168.1.1 (192.168.1.1)  4.205 ms  0.568 ms  0.510 ms
         2  * * *
         3  67.59.255.13 (67.59.255.13)  10.583 ms  7.949 ms  7.557 ms
         4  67.59.255.61 (67.59.255.61)  10.256 ms  9.576 ms  13.083 ms
         5  64.15.8.126 (64.15.8.126)  9.943 ms  11.929 ms  11.452 ms
         6  64.15.0.217 (64.15.0.217)  14.655 ms  14.092 ms  13.771 ms
         7  64.15.0.118 (64.15.0.118)  33.201 ms  34.875 ms  36.544 ms
         8  xe-6-0-3.ar1.ord1.us.nlayer.net (69.31.111.169)  34.027 ms  34.957 ms  34.231 ms
         9  ae1-30g.cr1.ord1.us.nlayer.net (69.31.111.133)  82.683 ms  35.138 ms  37.592 ms
        10  xe-3-0-0.cr2.iad1.us.nlayer.net (69.22.142.26)  41.657 ms  34.063 ms  34.519 ms
        11  ae2-30g.ar2.iad1.us.nlayer.net (69.31.31.186)  35.780 ms  36.361 ms  33.968 ms
        12  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  35.086 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.234)  38.031 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  36.833 ms
        13  cr1.iad2.inforelay.net (66.231.176.246)  32.595 ms  cr2.iad1.inforelay.net (66.231.176.10)  31.771 ms  cr1.iad2.inforelay.net (66.231.176.246)  32.622 ms
        14  cr1.iad2.inforelay.net (66.231.176.246)  32.956 ms  33.625 ms !X  41.058 ms
        15  * cr1.iad2.inforelay.net (66.231.176.246)  35.312 ms !X  *
        16  * cr1.iad2.inforelay.net (66.231.176.246)  32.814 ms !X  *
        17  cr1.iad2.inforelay.net (66.231.176.246)  35.459 ms !X  *  53.137 ms !X

    Ping returns this:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2
        Request timeout for icmp_seq 3
        Request timeout for icmp_seq 4
        Request timeout for icmp_seq 5
        Request timeout for icmp_seq 6

    But what I cannot figure out is why my browsers (Firefox, Safari, Opera) cannot resolve the domain. I am on a WiFi connection. What could be the problem? BTW, I am on a Mac (10.6.5).
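
    A hedged diagnostic sketch (the hostname below is a placeholder for the site that won't resolve): compare what the Mac's resolver framework returns with a direct DNS query, and clear the cache in case a stale negative entry is stuck:

        # what the system resolver framework returns
        dscacheutil -q host -a name www.example.com
        # what the configured DNS servers return when queried directly
        dig www.example.com
        # flush the resolver cache (10.6 syntax)
        dscacheutil -flushcache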

  • Windows 7 VPN Error 619

    - by TravisPUK
    So I am running Windows 7 Enterprise. This morning I was able to VPN using the built-in VPN (Connect to Work Network, etc.). I had to change my network's IP address range, and now the VPN will not work. It just stalls on the "Verifying user name and password..." message and then returns the 619 error. Anybody know why changing my machine's IP address would cause this problem? Where should I be looking to try and fix this issue? I have tried this on a Windows XP machine that also had the IP address range change, and it still connects fine using exactly the same connection details.

    EDIT: The internal network range changed from 192.x.x.x to 10.x.x.x. This was done on the entire Active Directory. All machines are running fine, and the Windows XP machine that can still reach the same client VPN (mentioned above) is on the same network. Both the XP and the Win 7 machines are using DHCP served by the domain controller. The client domain is not performing any IP range checks/restrictions.

    The VPN endpoint is outside the internal network; the connection is made via the Internet and does not pass through anything other than the normal domain machines, i.e. DNS etc. It passes through a router, and the router has the relevant VPN passthrough options configured. All internal machines are working correctly with other forms of VPN, i.e. Cisco, Sonic etc. (these were tested on other machines; they are not installed on the Vista or Win7 machines).

    After further testing, this is occurring on all Win7 and Vista machines: they can no longer connect to the client VPN, while all XP machines can still connect fine. This has been tested on three Vista, two Win7, and five XP machines. All machines are on DHCP, and tests have been done both with the firewalls turned on and off, as well as with fixed IPs. Thanks, Travis

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi all, there are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc). After sifting through the config files, I figured I'd give nginx a try.

    Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear; it can't read the file:

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
        drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script

    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
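
    One thing the listing above cannot show is whether the nobody user can traverse every parent directory; each of /Users/me, /Users/me/Documents and /Users/me/Documents/workspace needs the execute bit for "other" (or for nobody's group). A hypothetical way to check from a shell:

        # try to read the file exactly as the worker user would
        sudo -u nobody cat /Users/me/Documents/workspace/mobile/index.html
        # show the mode of each directory along the path
        for d in /Users /Users/me /Users/me/Documents /Users/me/Documents/workspace /Users/me/Documents/workspace/mobile; do
            ls -ld "$d"
        done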

  • dovecot/postfix: can send & receive via webmin, however squirrel mail and outlook fail to connect

    - by Jonathan
    I have just finished setting up Dovecot and Postfix on my server (CentOS 5.5 / Apache) earlier today. So far I've been able to get email working through Webmin (I can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

        Connected to xxx.xxx.xx.xxx.
        Escape character is '^]'.
        +OK Dovecot ready.
        USER mailtest
        +OK
        PASS *********
        +OK Logged in.
        -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
        Connection closed by foreign host.

    Which further logs the following error:

        dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
        dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird / Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work :( I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
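
    On the permissions hunch, a quick check is to look at the ownership and mode of the Maildir tree the log names and, if it is not owned by the mail user, hand it back; the paths come from the log above, everything else is an assumption about this setup:

        # see who owns the tree Dovecot is complaining about
        ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur
        # a common fix if it is not owned by the mailtest user
        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir

    It may also be worth double-checking that mail_location in dovecot.conf really points at MailDir with that exact capitalization.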

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & GRUB legacy). Here's the workflow for my scripts:

    backup.sh (run from an external system, e.g. a LiveCD):

        1. make an fdisk script ($fscript) from fdisk -l's output [works]
        2. mount the partitions from the system's fstab [works]
        3. tar the crucial stuff into file.tgz [works]

    restore.sh (run from an external system, e.g. a LiveCD):

        1. run fdisk $dest < $fscript to restore partitioning [works]
        2. format and mount partitions from the system's fstab [fails]
        3. extract from file.tgz [works when mounting manually]
        4. restore GRUB [fails]

    I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) uses different names in /etc/fstab and /boot/grub/menu.lst: the partition name is, for example, "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" -- but it basically is a simple symlink. My questions about this:

        - What is the point of such symlinks, especially if we're restoring on a different disk?
        - Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead?
        - If the answer to the previous is no, do you know a way to replace those symlinks on the fly in a text file? I tried the script below, but it only works if the line starts with the symlink path (the case for fstab, not menu.lst):

        ### search and replace /dev/disk/by-id/... with /dev/sdx
        while read oldVolume rest; do  # get first element, ignore rest of line
            if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
                newVolume=$(readlink $oldVolume)          # replace pointer by pointee, returns "../../sdx"
                echo /dev/${newVolume##*/} $rest >> TMP   # format to "/dev/sdx", write line
            else
                echo $oldVolume $rest >> TMP              # nothing to do
            fi
        done < $file
        mv -f TMP $file  # save changes

    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
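
    On the last question, a hedged sketch of a variant that rewrites by-id paths wherever they occur in a line rather than only in the first field (readlink -f resolves straight to /dev/sdXN, so no ../../ handling is needed); it is untested against these exact files:

        # replace every /dev/disk/by-id/<something> token anywhere in each line of $file
        while IFS= read -r line; do
            for tok in $(grep -o '/dev/disk/by-id/[^[:space:]]*' <<< "$line"); do
                dev=$(readlink -f "$tok")          # e.g. /dev/sda2
                [ -n "$dev" ] && line=${line//$tok/$dev}
            done
            printf '%s\n' "$line"
        done < "$file" > TMP && mv -f TMP "$file"

    One caveat: this only works while the original disk (and therefore the by-id links) is still present, since readlink has to resolve them.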

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest-performing instance, and although the occurrences have been reduced, our DB still all of a sudden hits 100%. It comes out of nowhere; sometimes 2am, sometimes midday.

    I've ruled out a DoS: our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). SHOW PROCESSLIST while we have issues reports about 500 queries in the queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, the server returns to normal. Sometimes it takes up to 3 hours.

    Our worst-performing queries take 0.02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU psycho phase, those queries never finish executing.

    Please help!!!!! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
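
    A hedged diagnostic sketch for the next spike (statement targets are generic placeholders, and on RDS the slow query log is switched on through the instance's parameter group rather than from the client):

        -- capture what is actually piling up during the spike
        SHOW FULL PROCESSLIST;
        -- check whether InnoDB is waiting on locks, flushing, or long-running transactions
        SHOW ENGINE INNODB STATUS\G
        -- afterwards, confirm the optimizer's plan for any suspect query
        EXPLAIN SELECT ...;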

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and, after a little tweaking, the website is working fine. The website is currently using, and working with, Anonymous Authentication.

    I have gone back to Windows Components / Server Manager, Roles - Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see:

        Anonymous Authentication (enabled)
        ASP.NET Impersonation (disabled)
        Forms Authentication (disabled)
        Windows Authentication (enabled)

    When I check my default website - Authentication, I see the same as above, but with "Retrieving status" and an error dialog saying:

        There was an error while performing this operation.
        Details:
        Filename: c:\inetpub\wwwroot\screwturnwiki\web.config
        Line number: 96
        Error: This configuration section cannot be used in this path. This happens when the section is locked at the parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False".

    I have tried hand-editing the web.config with no success. Uninstalling Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options.

    FYI, I am using ScrewTurn Wiki with the Active Directory plug-in.
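
    For what it's worth, the usual way to unlock that section at the server level is appcmd; a hedged example (run from an elevated prompt, and adjust the section name if line 96 of the web.config refers to a different one):

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/windowsAuthentication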

  • Apache showing 500 error during Active Directory LDAP authentication

    - by Tyllyn
    I have Apache (on Windows Server) set up to authenticate one directory through Active Directory. Config settings are as follows:

        <LocationMatch "/trac/[^/]+/login">
            Order deny,allow
            Allow from all
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative Off
            AuthLDAPURL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN trac@<dc-redacted>.local
            AuthLDAPBindPassword "<password-redacted>"
            AuthType Basic
            AuthName "Protected"
            require valid-user
        </LocationMatch>

    Watching Wireshark, I see the following get sent through when I visit the page. To the AD server:

        bindRequest(1) "trac@<dc-redacted>.local" simple

    And from the AD server:

        bindResponse(1) success

    I'm assuming this means that the auth was successful... but Apache doesn't think so. It returns a 500 error to me, and the Apache logs show the following:

        [Thu Nov 18 16:21:12 2010] [debug] mod_authnz_ldap.c(379): [client 192.168.x.x] [7352] auth_ldap authenticate: using URL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*), referer: http://192.168.x.x/trac/Trac/login
        [Thu Nov 18 16:21:12 2010] [info] [client 192.168.x.x] [7352] auth_ldap authenticate: user authentication failed; URI /trac/Trac/login [ldap_search_ext_s() for user failed][Filter Error], referer: http://192.168.x.x/trac/Trac/login

    Now, that log line shows a failed auth for a blank user. I am confused. Any idea what I am doing wrong... and how I can get the Apache authentication working? :) Thanks!

  • How to configure IIS 7.5 to allow special chars in URLs for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special chars in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL:

        http://mydomain.com/FileWith%inTheName.html

    This would be encoded in the URL and requested as:

        http://mydomain.com/FileWith%25inTheName.html

    This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it to the correct physical file (the 'Physical Path' field on the server error page points to the physically available file).

    There are hints on how to enable this, so I followed the instructions on these websites step by step:

        http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/
        http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html

    The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and about an additional web.config change for this. I tried all that, and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes.

    So, what do I have to do in addition to the settings described in the above posts to support legacy URLs which contain percentage characters? Additional info: the application pool mode is Integrated.

    Pushing this again after some days. No ideas, anyone?
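
    For reference, the web.config change those posts describe is usually the request-filtering double-escaping switch; a hedged sketch of that piece alone (per the question, this by itself may not be enough for an ASP.NET application):

        <configuration>
          <system.webServer>
            <security>
              <!-- let IIS pass %-encoded sequences such as %25 through to the handler -->
              <requestFiltering allowDoubleEscaping="true" />
            </security>
          </system.webServer>
        </configuration>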

  • Monitor goes black for a few seconds

    - by privatehuff
    I have a Hanns G 28" monitor, model # HG281D. It has its issues (the viewing angle sucks), but it has been functional and solid, great for desktop stuff. It worked without any sign of problems for 6-12 months.

    However, now the monitor "goes black" for about 2-3 seconds, almost like when you click "Detect Display". It does not turn off (the power light does not go amber). The computer is completely unaffected, and the video mode never changes when the picture returns. The computer is fully responsive and will keep playing music or taking my keypresses during the time I can't see anything (it just happened and I kept typing, etc).

    It happens on multiple computers across several operating systems. (I have an 8-port IOGEAR KVM switch with several computers connected.) But it seems to happen only on certain computers: I have a hackintosh that does it, a Windows 7 PC that does not, a Lenovo laptop that does not; my old Ubuntu 8.10 box did not do it, but my new Mint 8 box does.

    I've checked the connections and tried changing out the power cable and the VGA cable. Sometimes it won't happen for hours (or days), and sometimes it happens several times per hour. It was happening many months ago, did not happen for months, and has now started happening again. Does this make any sense? What could it be?

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to two different networks (one public, one private), and I want it to provide an NFS share for only the private network. The host is Ubuntu 8.04 and the private IP address is 192.168.1.202.

    I changed /etc/default/portmap to add:

        OPTIONS="-i 192.168.1.202"

    The command lsof -n | grep portmap returns:

        portmap 10252 daemon  cwd   DIR   202,0     4096       2 /
        portmap 10252 daemon  rtd   DIR   202,0     4096       2 /
        portmap 10252 daemon  txt   REG   202,0    15248   13461 /sbin/portmap
        portmap 10252 daemon  mem   REG   202,0    83708   32823 /lib/tls/i686/cmov/libnsl-2.7.so
        portmap 10252 daemon  mem   REG   202,0  1364388   32817 /lib/tls/i686/cmov/libc-2.7.so
        portmap 10252 daemon  mem   REG   202,0    31304   16588 /lib/libwrap.so.0.7.6
        portmap 10252 daemon  mem   REG   202,0   109152   16955 /lib/ld-2.7.so
        portmap 10252 daemon    0u  CHR     1,3              960 /dev/null
        portmap 10252 daemon    1u  CHR     1,3              960 /dev/null
        portmap 10252 daemon    2u  CHR     1,3              960 /dev/null
        portmap 10252 daemon    3u  unix 0xecc8c3c0      4332992 socket
        portmap 10252 daemon    4u  IPv4          4332993     UDP 192.168.1.202:sunrpc
        portmap 10252 daemon    5u  IPv4          4332994     TCP 192.168.1.202:sunrpc (LISTEN)
        portmap 10252 daemon    6u  REG    0,12      289  3821511 /var/run/portmap_mapping

    I defined the following in /etc/hosts:

        192.168.1.202 server.local

    In /etc/default/nfs-common I changed STATDOPTS to:

        STATDOPTS="--name server.local"

    Yet when I run /etc/init.d/nfs-common start, it fails to start. The log shows:

        Jun  8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
        Jun  8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
        Jun  8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).

    An strace -f rpc.statd -n server.local results in a lot of lines, including this one:

        sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
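
    A hedged way to confirm what the strace line suggests (rpc.statd registering with the portmapper on 127.0.0.1:111, which portmap no longer binds) is to query the portmapper on both addresses and see which one answers:

        # lists registered RPC programs if portmap answers on that address
        rpcinfo -p 192.168.1.202
        rpcinfo -p 127.0.0.1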

  • Msg 10054, Level 20, State 0, Line 0 error when altering a stored procedure to add a couple of cursors

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We are trying to deploy this script to an instance running 2005 SP3, and I am at a bit of a loss for why it is not working. When I execute the CREATE, it runs for about 30 seconds and yields the following error:

        Msg 10054, Level 20, State 0, Line 0
        A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    In my tinkering I discovered that by removing the cursors that actually do the work, I can create the stored procedure (not very helpful for me, though). If I add the cursors back in using an ALTER, the error returns.

    I would be curious whether someone has experienced this problem and knows of a solution or workaround. I am not opposed to posting the source; it is just lengthy. Things I have checked: the error logs, and there are no dump files in the log directory. Thanks in advance for the help.
