Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also setup a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall, no dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using neither TCP1723 nor GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly. We have the issues no matter if there are active incoming connections or not. I'm not sure what my next debugging step would be; any suggestions? EDIT: The event log on the server has the following warning from RasMan: A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets. Obviously this points to GRE being a potential problem. But seeing as I have other clients connectiong without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I'm suspecting this might be a red herring. EDIT: Here are the anonymized events logged by the client in chronological order: CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are: Dial-in User = XXX\YYY VpnStrategy = PPTP DataEncryption = Require PrerequisiteEntry = AutoLogon = No UseRasCredentials = Yes Authentication Type = CHAP/MS-CHAPv2 Ipv4DefaultGateway = No Ipv4AddressAssignment = By Server Ipv4DNSServerAssignment = By Server Ipv6DefaultGateway = Yes Ipv6AddressAssignment = By Server Ipv6DNSServerAssignment = By Server IpDnsFlags = Register primary domain suffix IpNBTEnabled = Yes UseFlags = Private Connection ConnectOnWinlogon = No. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY. 
CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806. Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request", while the server shows the incoming client requests but apparently never replies. Given that this is GRE traffic, I think this rules out the GRE traffic being blocked. The question is, why doesn't the server reply? This is the Configuration Request the server receives from the non-functioning client (meaning no response is sent to the client request): And this is the Configuration Request the server receives from the working client: To me they seem identical, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.
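
    One way to get more detail out of the failing client is RAS tracing; a minimal sketch using the built-in netsh command (ZZZ is the connection name used above, and the exact set of log files written under %windir%\tracing varies by build):

        :: On the Server 2008 R2 client: enable tracing, reproduce the failed dial, then disable it
        netsh ras set tracing * enabled
        rasdial ZZZ
        netsh ras set tracing * disabled

        :: ppp.log / rasman.log under %windir%\tracing usually show which LCP/CCP phase stalls
        notepad %windir%\tracing\ppp.log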

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo virtualized Debian 4 (I know, I'm going to upgrade this one ASAP, but there are dependencies). We run some websites on this one. A few days ago exim4 wasn't able to send mails to SOME people. I'll use live.com as an example domain! So some of these people got mails and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space. Now I wanted to install "dig" to look at how it resolves the DNS request, and this Debian tells me it doesn't know dig. Long story made short: Debian is able to download sites with the exact IP or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right and the routing too! Some examples of my attempts below: wget live.com downloads the site; ping live.com, ping http://www.live.com and ping http://live.com return: ping: unknown host live.com EDIT: I now use heise.de instead of live.com, and I found out I can ping the heise.de server by using its IP address. myserver:~# ping 193.99.144.85 PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data. 64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms 64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms 64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms 64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms 64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms --- 193.99.144.85 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4001ms rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms EDIT 2: myserver:/etc/apt# dig heise.de ; <<>> DiG 9.3.4-P1.2 <<>> heise.de ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3 ;; QUESTION SECTION: ;heise.de. IN A ;; ANSWER SECTION: heise.de. 2266 IN A 193.99.144.80 ;; AUTHORITY SECTION: heise.de. 1622 IN NS ns.pop-hannover.de. heise.de. 1622 IN NS ns.s.plusline.de. heise.de. 1622 IN NS ns.plusline.de. heise.de. 1622 IN NS ns2.pop-hannover.net. heise.de. 1622 IN NS ns.heise.de. ;; ADDITIONAL SECTION: ns.plusline.de. 265 IN A 212.19.48.14 ns.pop-hannover.de. 5113 IN A 193.98.1.200 ns2.pop-hannover.net. 15150 IN A 62.48.67.66 ;; Query time: 2 msec ;; SERVER: 193.200.112.80#53(193.200.112.80) ;; WHEN: Tue Oct 9 13:03:50 2012 ;; MSG SIZE rcvd: 216
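
    Since wget and dig can resolve the name while ping cannot, it is worth comparing the libc/NSS lookup path (which ping uses) with a direct DNS query; a minimal sketch, reusing heise.de from the examples above:

        # getent goes through the same NSS/gethostbyname path that ping uses
        getent hosts heise.de

        # dig asks the servers in /etc/resolv.conf directly, bypassing nsswitch
        dig +short heise.de

        # Compare how lookups are ordered and which resolvers are configured
        grep hosts /etc/nsswitch.conf
        cat /etc/resolv.conf

    If getent fails while dig succeeds, the problem is in the NSS configuration (or a broken hosts line) rather than in DNS itself.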

    Read the article

  • PC: Quick Freeze, then BSOD, then forced reboot, then freezes again

    - by cr0z3r
    Lately I have been experiencing a weird issue. My PC will hang for a second and then BSOD - it stays there, so I have to reboot. Once Windows starts again and I'm logged in, after 1-10min it freezes again, this time without BSOD. RECAP: 1 second freeze BSOD, hangs there I have to force-reboot Once PC rebooted, I log back into Windows second freeze between a 1-10min range, no BSOD (alternatively, I get a freeze with a constant sound/noise, no BSOD) I contacted my PC provider, who told me my graphics-card might be flawed (Quadro 4000), so I used a Quadro 2000 that they lend me. The issue still occurred. The issue now seemed to belong to a flawed RAM module. Following my provider's steps, I removed all but the first from the left column and kept using my PC for a week or so without any issues. I then added the bottom-right module, and so on, until all modules were back inside - I had no problems. Now it seemed that a simple take-out-put-back-in of the RAM modules had fixed the issue. However, after a few months, the issue was back. I redid all the RAM-swapping I had done before, and concluded that the lower-right module was flawed. My provider changed it for another, and everything was great until now. My PC froze again for barely a second, hanged on a BSOD, I rebooted it, logged-in to Windows to get a freeze (without BSOD or reboot) 40 seconds later. Something worth noting, is that every time I reboot after the BSOD, it is something within Chrome that freezes my PC (e.g. this time I clicked the "restore" button as Chrome mentioned it had exited unexpectedly - from the previous freeze obviously - and it instantly froze). Finally, the Event Viewer lists 2 critical events in the past hour as "ID: 41, Type: Kernel-Power". PC-Specs: http://i.imgur.com/VZpbr.jpg Previous Dump-files: https://dl.dropbox.com/u/716600/DumpFiles_08_2012.zip I would like to thank anybody in advance for your help. You're great. UPDATE 1: I realized that I did not mention an awkward fact about this issue. After I have gotten the 1-sec freeze followed by a BSOD, after I rebooted because the BSOD hanged, and after I logged back in to get another (this time, eternal) freeze and rebooted once again, the PC does not boot back up. The power-light is on, but my monitor says "no signal", as if the PC wouldn't really be turned on. This truly seems a like a power-related issue, doesn't it? UPDATE 2: I just got a freeze, but without BSOD. My screen froze (while on Chrome, which is starting to seem suspicious to me) with an ongoing sound/noise. I had to force-reboot my PC. I would say this is a graphics-card issue, but this issue also happened when I was using the Quadro 2000 from my provider. UPDATE 3: I just got a BSOD while trying to render something (quite heavy, actually) in 3ds Max 2012. I left the BSOD "running", as it said it was writing dump files to disk. However, the percentage number stayed at 0, so after 15 minutes I force-rebooted. I then used the software WhoCrashed (thank you Dave) which reported the following from the C:\Windows\Memory.dmp file: On Thu 22.11.2012 22:13:45 GMT your computer crashed crash dump file: C:\Windows\memory.dmp uptime: 01:05:27 This was probably caused by the following module: Unknown () Bugcheck code: 0x124 (0x0, 0xFFFFFA80275AC028, 0xF200001F, 0x100B2) Error: WHEA_UNCORRECTABLE_ERROR Bug check description: This bug check indicates that a fatal hardware error has occurred. This bug check uses the error data that is provided by the Windows Hardware Error Architecture (WHEA). 
This is likely to be caused by a hardware problem. This problem might be caused by a thermal issue. A third party driver was identified as the probable root cause of this system error. It is suggested you look for an update for the following driver: Unknown. Google query: Unknown WHEA_UNCORRECTABLE_ERROR
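
    For a 0x124 (WHEA_UNCORRECTABLE_ERROR) stop, the memory dump normally contains a hardware error record that narrows the fault down to CPU/cache, bus, or memory. A sketch of how that record is commonly inspected in WinDbg, using the second bugcheck parameter reported above as the record address:

        $$ Open C:\Windows\memory.dmp in WinDbg, then:
        !analyze -v
        $$ Decode the WHEA error record (address = bugcheck parameter 2 from the WhoCrashed report)
        !errrec fffffa80275ac028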

    Read the article

  • CakePhp on IIS: How can I Edit URL Rewrite module for SSL Redirects

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem. So my Cake php framework is working well and made possible by the url rewrite module 2.0 which I have successfully installed and configured for the site. The way cake is set up, the webroot folder (for cake, not iis) is set as the default folder for the site and exists inside the following hierarchy inetpub -wwwroot --cakePhp root ---application ----models ----views ----controllers ----WEBROOT // *** HERE *** ---cake core --SomeOtherSite Folder For this implementation, the url rewrite module uses the following rules (from the web.config file) ... <rewrite> <rules> <rule name="Imported Rule 1" stopProcessing="true"> <match url="^(.*)$" ignoreCase="false" /> <conditions logicalGrouping="MatchAll"> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> </conditions> <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" /> </rule> <rule name="Imported Rule 2" stopProcessing="true"> <match url="^$" ignoreCase="false" /> <action type="Rewrite" url="/" /> </rule> <rule name="Imported Rule 3" stopProcessing="true"> <match url="(.*)" ignoreCase="false" /> <action type="Rewrite" url="/{R:1}" /> </rule> <rule name="Imported Rule 4" stopProcessing="true"> <match url="^(.*)$" ignoreCase="false" /> <conditions logicalGrouping="MatchAll"> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> </conditions> <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" /> </rule> </rules> </rewrite> I've Installed my SSL certificate and created a site binding so that if i use the https:// protocol, everything is working fine within the site. I fear that attempts I have made at creating a rewrite are too far off base to understand results. The rules need to switch protocol without affecting the current set of rules which pass along url components to index.php (which is cake's entry point). My goal is this- Create a couple of rewrite rules that will [#1] redirect all user pages (in this general form http://domain.com/users/page/param/param/?querystring=value ) to use SSL and then [#2} direct all other https requests to use http (is this is even necessary?). [e.g. http://domain.com/users/login , http://domain.com/users/profile/uid:12345 , http://domain.com/users/payments?firsttime=true] ] to all use SSL [e.g. https://domain.com/users/login , https://domain.com/users/profile/uid:12345 , https://domain.com/users/payments?firsttime=true] ] Any help would be greatly appreciated.
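
    A hedged sketch of the kind of rule usually added for goal #1, placed above the imported Cake rules so it runs first (the ^users/ pattern is an assumption based on the example URLs; a Redirect action, not Rewrite, is needed so the browser actually switches protocol):

        <rule name="Force HTTPS for user pages" stopProcessing="true">
          <match url="^users/.*" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>

    Rule #2 (forcing everything else back to http) is the mirror image: match everything except ^users/, use a {HTTPS} pattern of ^ON$, and Redirect to http://; it is usually only worth adding if mixed content or caching makes it necessary.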

    Read the article

  • RAID and Partitions, guidance Needed

    - by beauregarde
    Alright I have a Biostar TA790GX3A2+ Mobo 2x Seagate 750Gb Hard drive (with 2 different speeds) an X4 9750 A GeForce 9800GT and 2GB RAM Hardware Specs link text I want to configure my computer with partitions in various RAID arrays. The Partitions I know i want (disk letters are mostly for reference here) C: XP Boot D: XP Swap E: XP Run F: Games G: Data The Partitions I think I want (repeat caveat) H: small FAT for Win Legacy and DOS I: Linux J: Linux Swap K-?M?: Other Linux /whatever partitions N & O: Attic for D1 and D2 What I'd like to do, is have C: written on Disk 1 (D1),.. D: on D2,.. E: and F: striped on D1 & D2,.. G: mirrored or D1 & D2,.. I: on D2 (so i can just switch disc boot priority to open in Ubuntu),.. J: on D1,.. and H: somewhere low on D1 I am inexperienced with VMs, so i am unsure as to whether those run out of XP, or whether i need to reserve a primary partition for them. However, I think they would be preferable for testing new OS's to scheduling a partition for the same purpose. I'm also not married to XP, but -64 IS pretty important to me. QUestion Time 1) Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved? 2) My RAID is software, isnt it? 3) How much should I short a 750GB HD? And should i use that space for my attics, or for my attics and something else, or for something else (.iso's perhaps?)? 4) if XP is striped on D1 & D2, will that interfere egregiously with my Swap writes on D2? If so, would striping both XP and Swap relieve (or at least mitigate) that issue? Should XP and Swap just be written normally on 2 different HDs? 5) Should I keep DL's and Drivers on E: (XP Run), F: (Games), or elsewhere? 6) Is 4GB enough for C:? 7) Is 30GB enough (or too much) for E:? 8) How much to reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think i should put them? 9) Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB? 10) I cant predict what my Max Commit Charge will be, so recommendations for Pagefile size? 5GB? 12GB? 11) VMs, where do I run them? do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS? EC) What havent I considered that I really should? Notes: computer is mostly for playing games and watching media, though I wouldnt rule out the use of particularly blah-intensive anything.

    Read the article

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a mediatemple DV server (basic) 512mb dedicated ram, this is a CentOS based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad and I only could sooth my server issues with several caching "Band-aids," but my sites are not as small as they were a year ago either so the issues have worsen. I have 3 Drupal installs running on separate (plesk) domains, 1 of those drupal installs is a multisite, that consists of 5-6 sites 2 of those sites are bringing in actual traffic. Those caching "Band-aids" I mentioned are APC, which seemed to help alot initially, and Drupal's Boost, which is considered a poorman's Varnish, it makes all my pages static for anonymous users. Last 30day combined estimate on Google Ananlytics: 90k visitors 260k pageviews. Issue: alot of downtime, I am continually checking if my sites are up, and lately I have been finding it down more than 3 times daily. Restarting Apache will bring it back up, for some time. I have google search every error message and looked up ways to optimize my DV server, and I am beyond stump what is my next move. Is this server bad, have I hit a impossibly low restriction such as the 12mb kernel memory barrier (kmemsize), is it on my end, do I need to optimize some more? *I have provided as much information as I can below, any help or suggestions given will be appreciated Common Error messages I see in the log: [error] (12)Cannot allocate memory: fork: Unable to fork new process [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache [error] python_handler: no interpreter callback found. [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***) [alert] Child 8125 returned a Fatal error... Apache is exiting! 
[emerg] (43)Identifier removed: couldn't grab the accept mutex [emerg] (22)Invalid argument: couldn't release the accept mutex cat /proc/user_beancounters: Version: 2.5 uid resource held maxheld barrier limit failcnt 41548: kmemsize 4582652 5306699 12288832 13517715 21105036 lockedpages 0 0 600 600 0 privvmpages 38151 42676 229036 249036 0 shmpages 16274 16274 17237 17237 2 dummy 0 0 0 0 0 numproc 43 46 300 300 0 physpages 27260 29528 0 2147483647 0 vmguarpages 0 0 131072 2147483647 0 oomguarpages 27270 29538 131072 2147483647 0 numtcpsock 21 29 300 300 0 numflock 8 8 480 528 0 numpty 1 1 30 30 0 numsiginfo 0 1 1024 1024 0 tcpsndbuf 648440 675272 2867477 4096277 1711499 tcprcvbuf 301620 359716 2867477 4096277 0 othersockbuf 4472 4472 1433738 2662538 0 dgramrcvbuf 0 0 1433738 1433738 0 numothersock 12 12 300 300 0 dcachesize 0 0 2684271 2764800 0 numfile 3447 3496 6300 6300 3872 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 14 14 200 200 0 TOP: (In January the load avg was really high 3-10, I was able to bring it down where it is currently is by giving APC more memory play around with) top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20 Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si Mem: 916144k total, 156668k used, 759476k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached MySQLTuner: (after optimizing every table and repairing any table with overage I got the fragmented count down to 86) [--] Data in MyISAM tables: 285M (Tables: 1105) [!!] Total fragmented tables: 86 [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M) [--] Reads / Writes: 79% / 21% [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads) [!!] Query cache prunes per day: 675307 [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)
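
    Given the non-zero failcnt values above (kmemsize, tcpsndbuf, numfile, shmpages; the numfile failures line up with the "Too many open files in system" errors), one low-effort check is whether those counters keep climbing while the sites are up; a minimal sketch:

        # failcnt (last column) only ever increases, so a growing value means the
        # container is still hitting that Virtuozzo limit
        while true; do
            date >> /root/beancounters.log
            grep -E 'kmemsize|tcpsndbuf|numfile|shmpages' /proc/user_beancounters >> /root/beancounters.log
            sleep 300
        done

    Correlating the timestamps in that log with the Apache "Unable to fork" errors should show whether the outages are limit-driven (a host/plan problem) or something inside the stack.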

    Read the article

  • windows 2003 server : can't join domain

    - by phill
    I originally tried to rejoin a computer to a network which led to a "cannot find domain" error. The username/password box don't even come up. some tests i ran: I can ping the server, however I can't ping the domain name domain1.local. nslookup can't find the domain either. It looks to the isp's dns instead of my own to resolve the local machines. So i go to the dns and run netdiag.exe and gives me this error. DNS test . . . . . . . . . . . . . : Failed [WARNING] Cannot find a primary authoritative DNS server for the name 'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE] The name 'srv.domain1.local.' may not be registered in DNS. [WARNING] The DNS entries for this DC are not registered correctly on DNS se rver '68.94.156.1'. Please wait for 30 minutes for DNS server replication. [WARNING] The DNS entries for this DC are not registered correctly on DNS se rver '68.94.157.1'. Please wait for 30 minutes for DNS server replication. [FATAL] No DNS servers have the DNS records for this DC registered. Redir and Browser test . . . . . . : Passed List of NetBt transports currently bound to the Redir NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B} The redir is bound to 1 NetBt transport. List of NetBt transports currently bound to the browser NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B} The browser is bound to 1 NetBt transport. then running dcdiag C:\Program Files\Support Toolsdcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\SRV Starting test: Connectivity The host 1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local cou ld not be resolved to an IP address. Check the DNS server, DHCP, server name, etc Although the Guid DNS name (1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local) couldn't be resolved, the server name (srv.domain1.local) resolved to the IP address (192.168.1.21) and was pingable. Check that the IP address is registered correctly with the DNS server. ......................... SRV failed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\SRV Skipping all tests, because server SRV is not responding to directory service requests Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : domain1 Starting test: CrossRefValidation ......................... domain1 passed test CrossRefValidation Starting test: CheckSDRefDom ......................... domain1 passed test CheckSDRefDom Running enterprise tests on : domain1.local Starting test: Intersite ......................... 
domain1.local passed test Intersite Starting test: FsmoCheck ......................... domain1.local passed test FsmoCheck From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Note: there is only one NIC on the server. Any ideas? Thanks in advance
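
    Assuming the client and the DC both point only at the DC's own DNS (192.168.1.21 per the dcdiag output) rather than the ISP's 68.94.x.x servers, the usual re-registration sequence is sketched below (Windows 2003 support-tools commands, run on the machine indicated):

        :: On the client, after fixing the DNS server entry on the NIC
        ipconfig /flushdns
        ipconfig /registerdns

        :: On the DC, re-register its SRV/GUID records and re-test
        net stop netlogon
        net start netlogon
        netdiag /test:dns /fix
        dcdiag /test:connectivity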

    Read the article

  • http.conf setup to simplify using 'localhost:81'

    - by Will
    I'm installing portable wampserver within my dropbox folder so I can access anywhere. I have this achieved and accessible using http://locahost:81 I want to access it by using a different address (dropping the :81 port number) such as http://myothersite. I'm fairly certain I need to add a virtualhosts directove somewhere within this, but I am not Apache experienced! This is the current Apache httpd.conf file: ServerRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/bin/apache/apache2.2.17" Listen 81 ServerAdmin admin@localhost ServerName localhost:81 DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> <Directory "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"> Options Indexes FollowSymLinks AllowOverride all # onlineoffline tag - don't remove Order Deny,Allow Deny from all Allow from 127.0.0.1 </Directory> <IfModule dir_module> DirectoryIndex index.php index.php3 index.html index.htm </IfModule> <FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/apache_error.log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/access.log" common #CustomLog "logs/access.log" combined </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "cgi-bin/" </IfModule> <IfModule cgid_module> #Scriptsock logs/cgisock </IfModule> <Directory "cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .php3 </IfModule> # Server-pool management (MPM specific) #Include conf/extra/httpd-mpm.conf # Multi-language error messages #Include conf/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include conf/extra/httpd-autoindex.conf # Language settings #Include conf/extra/httpd-languages.conf # User home directories #Include conf/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include conf/extra/httpd-info.conf # Virtual hosts #Include conf/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include conf/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include conf/extra/httpd-dav.conf # Various default settings #Include conf/extra/httpd-default.conf # Secure (SSL/TLS) connections #Include conf/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/alias/*" Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/MyWebAp ps/etc/alias/*"
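
    A hedged sketch of the usual approach: keep the existing :81 setup, add a Listen for port 80 plus a name-based virtual host whose ServerName is the friendly name, and map that name to 127.0.0.1 in the Windows hosts file (paths reuse the httpd.conf above; "myothersite" is the example name from the question):

        Listen 80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName myothersite
            DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"
        </VirtualHost>

    Then add "127.0.0.1  myothersite" to C:\Windows\System32\drivers\etc\hosts so the browser can resolve http://myothersite without a port number.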

    Read the article

  • Consistent PHP _SERVER variables between Apache and nginx?

    - by Alix Axel
    I'm not sure if this should be asked here or on ServerFault, but here it goes... I am trying to get started on nginx with PHP-FPM, but I noticed that the server block setup I currently have (gathered from several guides including the nginx Pitfalls wiki page) produces $_SERVER variables that are different from what I'm used to seeing in Apache setups. After spending the last evening trying to "fix" this, I decided to install Apache on my local computer and gather the variables that I'm interested in under different conditions so that I could try and mimic them on nginx. The Apache setup I've on my computer has only one mod_rewrite rule: RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] And these are the values I get for different request URIs (left is Apache, right is nginx): localhost/ - http://www.mergely.com/GnzBHRV1/ localhost/foo/bar/baz/?foo=bar - http://www.mergely.com/VwsT8oTf/ localhost/index.php/foo/bar/baz/?foo=bar - http://www.mergely.com/VGEFehfT/ What configuration directives would allow me to get similar values on requests handled by nginx? My current configuration in nginx is: server { listen 80; listen 443 ssl; server_name default; ssl_certificate /etc/nginx/certificates/dummy.crt; ssl_certificate_key /etc/nginx/certificates/dummy.key; root /var/www/default/html; index index.php index.html; autoindex on; location / { try_files $uri $uri/ /index.php; } location ~ /(?:favicon[.]ico|robots[.]txt)$ { log_not_found off; } location ~* [.]php { #try_files $uri =404; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_split_path_info ^(.+[.]php)(/.+)$; } location ~* [.]ht { deny all; } } And my fastcgi_params file looks like this: fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; I know that the try_files $uri =404; directive is commented and that it is a security vulnerability but, if I uncomment it, the third request (localhost/index.php/foo/bar/baz/?foo=bar) will return a 404. It's also worth noting that my PHP cgi.fix_pathinfo in On (contrary to what some of the guides recommend), if I try to set it to Off, I'm presented with a "Access denied." message on every PHP request. I'm running PHP 5.4.8 and nginx/1.1.19. I don't know what else to try... Help?
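
    For the third request style (/index.php/foo/bar/...), the variable that most often differs from Apache is PATH_INFO, because try_files resets $fastcgi_path_info before the fastcgi_param lines run. A commonly suggested sketch (not a drop-in fix) is to capture it first; if PATH_INFO is set here, the matching line in the fastcgi_params file should be removed so it is not sent twice:

        location ~* [^/]\.php(/|$) {
            fastcgi_split_path_info ^(.+?[.]php)(/.*)$;
            set $path_info $fastcgi_path_info;      # save it: try_files clears $fastcgi_path_info
            try_files $fastcgi_script_name =404;    # restores the 404 check without breaking PATH_INFO
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $path_info;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
        }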

    Read the article

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue--mainly during our busiest time of the day between 10:30-11:45AM, where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below. System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name. at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results) at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem) at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs) at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Now I'm using ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate to. I was originally using SQL Server state for a couple years as well prior to having this issue. The problem with SQL wqas that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it! I have attempted to create multiple application pools, adding additional servers, chaning TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request -- as I understand that this basically causes every page to make round-trips to state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company and they insist that it is something with my environment because other clients do not have these sort of issues. However, I've talked to other clients and they tell me when they've seen issues like they, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application... Microsoft's position on the issue is that the session state needs to be reduced and the returncode being reported back from the state server indicates buffers are full. To better understand the scope of issues (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the time period of high activity! 
If any one has any other ideas I could try or better ways to troubleshoot, I'd appreciate it.
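
    Not a fix for the underlying capacity problem, but for reference the state-server settings that usually get tuned live in web.config; a hedged sketch (the server name is a placeholder, 42424 is the default aspnet_state port, and stateNetworkTimeout is the per-request TCP timeout in seconds, default 10):

        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=stateserver01:42424"
                        stateNetworkTimeout="30" />
        </system.web>

    If the state service's buffers really are filling up, raising the timeout only delays the errors; shrinking what gets stored in session (or partitioning users across more than one state server) is the direction Microsoft's advice above points to.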

    Read the article

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9 It's working well, however Mono occasionally segfaults and uses the entire CPU which causes the website to stop working and display 'Service Temporarily Unavailable' Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and I saw the following error messages from the kernel: Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4<6>mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4<6>mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4<6>mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4<6>mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4<6>mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4<6>mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4 Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4 At the end of Apache's error.log are the following errors: [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 System.ArgumentNullException: null key Parameter name: key at System.Collections.Hashtable.get_Item (System.Object key) [0x00000] at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000] at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000] at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000] at System.AppDomain.CreateDomain (System.String friendlyName, 
System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000] at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000] at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool) at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000] at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000] at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000] [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1 Along with the above errors, Apache's error.log is packed with hundreds (if not thousands) of the following error: Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request. At the moment, I'm thinking there might be something wrong with configuration here (it's basically running on out-of-the-box config)
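
    The "Maximum number (20) of concurrent mod_mono requests ... Droping request" lines correspond to mod_mono's per-application request limits. A hedged sketch of raising them and recycling the backend before it wedges (directive names are from the mod_mono documentation, the values are guesses, and an older release such as 1.9 may not support all of them, so check mod_mono(8) for your version):

        # Allow more simultaneous requests to the mod-mono-server backend ("default" is the app alias)
        MonoMaxActiveRequests default 40
        MonoMaxWaitingRequests default 40

        # Restart the backend periodically instead of waiting for it to segfault
        MonoAutoRestartMode default Requests
        MonoAutoRestartAfter default 10000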

    Read the article

  • BSOD & System Failure after trying to install a new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. Started the system, and it hung after giving the password. After a few minutes, got the same 0x000000f4 error. So, while it is in the upright position, fixed the Hard Disk cable and now it is booting fine. Waiting for more observations and answers.

    Read the article

  • DNS lookups failing somewhere between firewall and router

    - by TessellatingHeckler
    we have a setup of ADSL line - Cisco 837 ADSL router - Zyxel ZyWall 35 firewall/NAT - Switch == Intel load balanced NICS in a server. It has been fine for years, suddenly DNS resolution stopped working on the server. No changes that I know of, so I can't work backwards from there. It was configured with the ISP's DNS servers, neither network device does DNS relaying. Wireshark shows the request go out but nothing comes back. The server networking stack seems OK though, because if we query an internal DNS server on a remote site, that works. I can logon to the Cisco, and DNS resolves OK from the command line. I can logon to the ZyWall, and DNS does not resolve from the command line. So the problem seems to be the firewall, patch cable or router, yes? On the router: interface Ethernet0 ip address aaa.bbb.ccc.ddd 255.255.255.ddd ip tcp adjust-mss 1450 hold-queue 100 out On the firewall: DNS server set to 8.8.8.8 (Google's), DNS traffic allowed LAN-WAN. What else should I look for? Update: Following This guide I've got traffic logging on the Cisco. I have also got access to a public DNS server which I can run tcpdump on to see things from the other side. And as per the below comments, I've tested with Dig and see that DNS over TCP works, and over UDP does not. Currently: DNS request from the server using TCP shows up in the firewall log, and in the Cisco log, and in tcpdump on the DNS server, the answer comes back, it works fine. DNS request from the server using UDP shows up in the firewall log, and in the Cisco log, does NOT show in tcpdump on the DNS server, times out. DNS request from the cisco (using UDP) does show up in tcpdump on the DNS server, answer received, works fine. Ping requests from the server and the cisco to the DNS server show up in tcpdump on the DNS server. DNS request from the server using UDP does show up on the firewall. Summary: TCP seems fine throughought. UDP works over the ADSL and to the Cisco, and it works from the server to the Cisco, but it doesn't cross the Cisco properly, it seems. I did see the Cisco showing as connected at 10Mb/full-duplex internally, and the firewall showing as 100Mb/full-duplex externally. I have forced the firewall to 10Mb and rebooted both devices. That seemed to help get UDP traffic (server-firewall-cisco) instead of (server-firewall), but did not fix it. Update: Sanitized Cisco config: version 12.2 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname cisco ! logging queue-limit 100 enable secret 5 {password} enable password 7 {password} ! ip subnet-zero ip domain name example.org ip name-server {nameserver_IP} ! ! ip audit notify log ip audit po max-events 100 no ftp-server write-enable ! interface Ethernet0 ip address {Inside_public_IP} 255.255.255.248 ip tcp adjust-mss 1460 hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive pvc 0/38 encapsulation aal5mux ppp dialer dialer pool-member 1 ! dsl operating-mode auto ! interface Dialer1 ip unnumbered Ethernet0 encapsulation ppp dialer pool 1 dialer idle-timeout 0 dialer persistent no cdp enable ppp chap hostname {ADSL_Username} ppp chap password 7 {ADSL_Password} ! ip classless ip route 0.0.0.0 0.0.0.0 Dialer1 no ip http server no ip http secure-server ! access-list 23 permit {IP} dialer-list 1 protocol ip permit no cdp run snmp-server enable traps tty ! {con, vty} end
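
    To see whether the server's UDP/53 packets actually arrive at and leave the Cisco 837, an ACL-filtered packet debug on the router is the usual next step; a hedged sketch (run from enable mode, keep the ACL narrow so the debug does not flood the console, and note that fast-switched transit packets may not appear unless route caching is temporarily disabled on the interface):

        ! Match DNS over UDP in both directions
        access-list 199 permit udp any any eq domain
        access-list 199 permit udp any eq domain any

        ! Log matching packets with ingress/egress interface detail
        debug ip packet 199 detail

        ! When finished
        undebug all
        no access-list 199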

    Read the article

  • Auth-Type :- Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.11x EAP authentication setup using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so: groupname_attribute = cn groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))" and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options however. copy_request_to_tunnel = yes use_tunneled_reply = yes I'm running eapol_test like this to test the setup: eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP with the following peap-mschapv2.conf file: network={ ssid="Example-EAP" key_mgmt=WPA-EAP eap=PEAP identity="mgorven" anonymous_identity="anonymous" password="foobar" phase2="autheap=MSCHAPV2" } With the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" and running freeradius-Xx, I can see that the LDAP group retrieval works, and that the SSID is extracted. Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven)))) Debug: rlm_ldap::ldap_groupcmp: User found in group employees ... Info: expand: %{7} -> Example-EAP Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" DEFAULT Auth-Type := Reject But this immediately rejects the Access-Request in the outer tunnel because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests like so: DEFAULT Ldap-Group == "employees" DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1" Auth-Type := Reject, Reply-Message = "User does not belong to any groups which may access this SSID." Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept. Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member. Info: [files] users: Matched entry DEFAULT at line 209 Info: ++[files] returns ok ... Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel) Info: WARNING: Empty section. Using default return values. ... Info: [peap] Got tunneled reply code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Got tunneled reply RADIUS code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Tunneled authentication was successful. Info: [peap] SUCCESS Info: [peap] Saving tunneled attributes for later ... Sending Access-Accept of id 11 to 172.16.2.44 port 60746 Reply-Message = "User does not belong to any groups which may access this SSID." 
User-Name = "mgorven" and eapol_test reports: RADIUS message: code=2 (Access-Accept) identifier=11 length=233 Attribute 18 (Reply-Message) length=64 Value: 'User does not belong to any groups which may access this SSID.' Attribute 1 (User-Name) length=9 Value: 'mgorven' ... SUCCESS Why isn't the request being rejected, and is this the right way to implement this?

    Read the article

  • nginx 502 Bad Gateway on every external site

    - by Leandros
    I just installed nginx and followed the guides on the official site, to set it up with php5-fpm, but it just won't work. Not even the default site, without php is working outside of my server. Tried listen = 127.0.0.1:7777 and listen = /var/run/php5-fpm.sock Both don't work. I can access http://localhost with lynx on my server, but not from somewhere else (with external ip obviously). Yes, the php5-fpm deamons are running, yes the port (80 and 7777) is opened. Don't work with php-cgi as well. My config: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; proxy_buffers 16 16k; proxy_buffer_size 32k; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; } Server config: (symlinked to sites-enabled) server { server_name skilloverflow.de *.skilloverflow.de; root /var/www/blog.skilloverflow.de/htdocs; index index.php; error_log /var/log/nginx/skilloverflow.error.log; access_log /var/log/nginx/skilloverflow.access.log; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { # This is cool because no php is touched for static content. # include the "?$args" part so non-default permalinks doesn't break when using query string try_files $uri $uri/ /index.php?$args; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_pass 127.0.0.1:7777; fastcgi_index index.php; include fastcgi_params; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } # deny access to apache .htaccess files location ~ /\.ht { deny all; } # deny access to apache .htaccess files location ~ /\.ht { deny all; } } PHP Version: 5.4.17-1 nginx version: 1.2.1 Debian 6.0.7 Linux 2.6.32 Edit: Lighttpd is still installed, does that matter? It's not running though. Edit 2: No error or access log is generated. They're all empty.
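
    Since even the static default site is unreachable from outside while lynx on the box works, it is worth confirming what is actually bound to port 80 (lighttpd being installed could still claim it at boot) and whether PHP-FPM listens where fastcgi_pass points; a minimal sketch:

        # Who owns ports 80 and 7777?
        netstat -tlnp | egrep ':80 |:7777 '

        # Does the FPM pool's listen address match fastcgi_pass?
        grep -r '^listen' /etc/php5/fpm/pool.d/

        # Exercise the vhost locally with the real Host header
        curl -v -H 'Host: skilloverflow.de' http://127.0.0.1/

    If nginx owns port 80 and the local curl works, the empty logs plus the external failure point at a firewall or provider-level block rather than at nginx or FPM.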

    Read the article

  • Moved DNS and Email Hosting, Now Can't Send/Receive To/From Domains Hosted on Previous Host

    - by maxfinis
    Our company had 4 domains whose emails and DNS were hosted by one company, and then we moved the email and DNS hosting for 3 of the 4 domains to a new company. Now, the 3 domains that were moved can't send or receive emails to and from the one domain still left on the old server. All other email functions work fine for all 4 domains. There are no bouncebacks, error messages, or emails stuck in queue, and no evidence of these missing emails hitting the new servers. The new hosting company confirms that everything is fine on their end, and assures me that it's most likely an old zone file still remaining on the old nameserver, and so the emails sent from the old host is routed to what it believes is still the authoritative nameserver. Because the old zone file's MX records still contain the old resource, the requests never leave the old nameserver to go online to do a fresh search for the real (new) authoritative nameserver. The compounding problem is that the old company is rather inept and doesn't seem to have the technical expertise to identify the problem, much less fix it. (I know, I know.) Is the problem truly that this old zone file just needs to be deleted from the old company's nameserver? If so, what's the best way for me to describe this to them? If not, what do you think could be the issue? Any help is much appreciated. I'm not in IT, so all this is new to me. I know it seems weird for me (the client) to have to do this legwork, but I just want to get this resolved. Here's what I've done: Ran dig to verify that the old server's MX records still point to the old authoritative server, instead of going online to do a fresh search: ~$ dig @old.nameserver.com domainthatwasmoved.com mx ; << DiG 9.6.0-APPLE-P2 << @old.nameserver.com domainThatWasMoved.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 61227 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; QUESTION SECTION: ;domainthatwasmoved.com. IN MX ;; ANSWER SECTION: domainthatwasmoved.com. 3600 IN MX 10 mail.oldmailserver.com. ;; ADDITIONAL SECTION: mail.oldmailserver.com. 3600 IN A 65.198.191.5 ;; Query time: 29 msec ;; SERVER: 65.198.191.5#53(65.198.191.5) ;; WHEN: Sun Dec 26 16:59:22 2010 ;; MSG SIZE rcvd: 88 Ran dig to try to see where the new hosting company's servers look when emails are sent from the 3 domains that were moved, and got refused: ~$ dig @new.nameserver.net domainStillAtOldHost.com mx ; << DiG 9.6.0-APPLE-P2 << @new.nameserver.net domainStillAtOldHost.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: REFUSED, id: 31599 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;domainStillAtOldHost.com. IN MX ;; Query time: 31 msec ;; SERVER: 216.201.128.10#53(216.201.128.10) ;; WHEN: Sun Dec 26 17:00:14 2010 ;; MSG SIZE rcvd: 34

    Read the article

  • Strange enduser experience with Liferay, Glassfish and Apache on RedHat

    - by Pete Helgren
    I've tried multiple forums to get to the bottom of this. I hope I can get some direction here. Here is the stack I am working with: Red Hat Enterprise Linux Server release 5.6 (Tikanga) Liferay 6.0.6 on Glassfish 3.0.1 MySQL 5.0.77 Apache 2.2.3 The Liferay portal provides a variety of portlets to end users. Static content (web pages), static resources (primarily pdf and mp3 files 1mb - 80mb in size), file upload and download capabilities (primarily 40-60mb mp3 files) and online streaming of those MP3 files. Here are the strange end-user experiences: Under normal load: (20-30) users uploading, downloading or streaming files and 20-30 accessing static content (some of it downloads), we see the following: 1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long). 2) Clicking on a link triggers the download of the page content rather than rendering it. 3) Clicking a link causes the page to dump binary data to the end user rather than the expected content. 4) Clicking a link returns the text of a javascript file rather than rendering the page. Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to the browser or client OS. The strange events seem to occur much more frequently when using an SSL connection rather than regular http. Apache serves as a proxy server only (reverse). It basically passes all the requests through to Glassfish. There isn't any static content proxy served by Apache. We rebuilt the entire stack from scratch and redeployed the portlet wars and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache. The problems are more frequent as the server load grows. This morning the load is pretty light and we are seeing few problems, but the use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue. This seems very much like a caching issue but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Redhat? We are stumped and most forums we have posted to (LifeRay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet... EDIT: Opening the files that appear to be pages downloading rather than rendering in a hex editor, we see that the first 4000 characters are "junk" and then the "HTTP/1.1 ...." 'normal' header is seen. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a hex editor). Perhaps a clue? Ideas?
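    Responses carrying another request's body (or a raw HTTP header 4000 bytes into the payload) often point at backend connections being shared or re-spliced somewhere between Apache and Glassfish, so one way to narrow it down is to temporarily disable proxy keep-alive on the Apache side and see whether the corruption stops. A hedged sketch only — the file path, the HTTP/1.0 downgrade and the idea that mod_proxy connection reuse is the culprit are all assumptions to test, not a confirmed fix:

        # in the reverse-proxy vhost (Apache 2.2, mod_proxy), temporarily add:
        #     SetEnv force-proxy-request-1.0 1
        #     SetEnv proxy-nokeepalive 1
        # then validate and reload:
        apachectl configtest && apachectl graceful

    If the garbled responses disappear with backend keep-alive off, the problem sits in connection reuse between the proxy and Glassfish (or in the SSL buffering in front of it) rather than in Liferay itself, which at least tells you which layer to keep peeling.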

    Read the article

  • Fix overscan in Linux with Intel graphics Vizio HDTV

    - by Padenton
    I am connecting my server to my HDTV so that I can conveniently display it there. My VIZIO HDTV cuts off all 4 edges. I already realize it is not optimal to be running a GUI on a server; this server will not have much external traffic so I prefer it for convenience. I have already spent countless hours searching for a fix, but all I could find required an ATI or NVIDIA graphics card, or didn’t work. In Windows, the Intel driver has a setting for underscan, though it seems only to be available by a glitch. Here are my specs: Ubuntu Linux (Quantal 12.10) (Likely to switch to Arch) This is a home server computer, with KDE for managing it (for now, at least) Graphics: Intel HD Graphics 4000 from Ivy Bridge Motherboard: ASRock Z77 Extreme4 CPU: Intel Core i5-3450 My monitors: Dell LCD monitor Vizio VX37L_HDTV10A 37" on HDMI input I have tried all of the following with both HDMI-to-HDMI and DVI-to-HDMI cables connected to the ports on my motherboard: Setting properties in xrandr Making sure drivers are all up to date Trying several different modes The TV was “cheap”; max resolution 1080i. I am able to get a 1920x1080 modeline, in both GNU/Linux and Windows, without difficulty. There is no setting in the menu to fix the overscan (I have tried all of them, I realize it’s not always called overscan). I have been in the service menu for the TV, which still does not contain an option to fix it. No aspect ratio settings, etc. The TV has a VGA connector but I am unsure if it would fix it, as I don’t have a VGA cable long enough, and am not sure it would get me the 1920x1080 resolution which I want. Using another resolution does not fix the problem. I tried custom modelines with the dimensions of my screen’s viewable area, but it wouldn’t let me use them. Ubuntu apparently doesn’t automatically generate an xorg.conf file for use. I read somewhere that modifying it may help solve it. I tried X -configure several times (with reboots, etc.) but it consistently gave the following error messages: In log file: … (WW) Falling back to old probe method for vesa Number of created screens does not match number of detected devices. Configuration failed. In output: … (++) Using config file: "/root/xorg.conf.new" (==) Using system config directory "/usr/share/X11/xorg.conf.d" Number of created screens does not match number of detected devices. Configuration failed. Server terminated with error (2). Closing log file. I tried using the 'overscan' property in xrandr: root@xxx:/home/xxx# xrandr --output HDMI1 --set overscan off X Error of failed request: BadName (named color or font does not exist) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 11 (RRQueryOutputProperty) Serial number of failed request: 42 Current serial number in output stream: 42 'overscan on', 'underscan off', 'underscan on' were all also tried. I originally tried with Ubuntu 12.04, but failed, and so updated to 12.10 when it was released. All software is up to date. I am not opposed to reinstalling my OS, and likely will anyway (my preference being Arch).
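    Without a working underscan/overscan property, the usual fallbacks are a slightly smaller custom mode or an xrandr transform; the custom-mode route is the easier one to verify. A sketch only — the 1824x1026 size is an example roughly 5% inside 1920x1080, the output name comes from xrandr -q, and the TV may still rescale or refuse non-standard timings (which may be what happened with the earlier modeline attempts):

        # 1) generate CVT timings for a reduced active area
        cvt 1824 1026 60

        # 2) register the mode, pasting the Modeline values cvt printed in place of "..."
        xrandr --newmode "1824x1026_60.00" ...

        # 3) attach it to the HDMI output and switch to it
        xrandr --addmode HDMI1 "1824x1026_60.00"
        xrandr --output HDMI1 --mode "1824x1026_60.00"

    If the panel insists on scaling every input to full screen, only a driver-level underscan property or a TV-side option can truly fix it, so this is worth a try but not guaranteed.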

    Read the article

  • How should I config Apache2.2?

    - by user1607641
    I tried to configure Apache 2.2.17 with PHP5.4 using this manual. But when I restarted my PC and went to http://localhost/phpinfo.php, I received this message: Not Found. The requested URL /phpinfo.php was not found on this server. Here is my httpd.conf - (full version with comments) and a copy with the comments stripped for brevity: ServerRoot "C:/Program Files/Apache2.2" #Listen 12.34.56.78:80 Listen 80 LoadModule actions_module modules/mod_actions.so LoadModule alias_module modules/mod_alias.so LoadModule asis_module modules/mod_asis.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule cgi_module modules/mod_cgi.so LoadModule dir_module modules/mod_dir.so LoadModule env_module modules/mod_env.so LoadModule include_module modules/mod_include.so LoadModule isapi_module modules/mod_isapi.so LoadModule log_config_module modules/mod_log_config.so LoadModule mime_module modules/mod_mime.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule php5_module "C:/Languages/PHP/php5apache2_4.dll" # PHP AddHandler application/x-httpd-php .php PHPIniDir "C:/Languages/PHP" <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User daemon Group daemon </IfModule> </IfModule> ServerAdmin [email protected] #ServerName localhost:80 DocumentRoot "C:/Sites" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> <Directory "C:/Sites"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.php index.html </IfModule> <FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "logs/error.log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "logs/access.log" common </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "C:/Program Files/Apache2.2/cgi-bin/" </IfModule> <IfModule cgid_module> #Scriptsock logs/cgisock </IfModule> <Directory "C:/Program Files/Apache2.2/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types #AddType application/x-gzip .tgz #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz AddType application/x-compress .Z AddType application/x-gzip .gz .tgz #AddHandler cgi-script .cgi #AddHandler type-map var #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> #MIMEMagicFile conf/magic # Some examples: #ErrorDocument 500 "The server made a boo boo." 
#ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://localhost/subscription_info.html #MaxRanges unlimited #EnableMMAP off #EnableSendfile off # Server-pool management (MPM specific) #Include conf/extra/httpd-mpm.conf # Multi-language error messages #Include conf/extra/httpd-multilang-errordoc.conf # Fancy directory listings #Include conf/extra/httpd-autoindex.conf # Language settings #Include conf/extra/httpd-languages.conf # User home directories #Include conf/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include conf/extra/httpd-info.conf # Virtual hosts Include conf/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include conf/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include conf/extra/httpd-dav.conf # Various default settings #Include conf/extra/httpd-default.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Directory with sites is C:/Sites and PHP is located at C:/Languages/PHP.
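    Two things stand out as worth checking, offered as guesses rather than a diagnosis: the module being loaded is php5apache2_4.dll, which is the Apache 2.4 build, on an Apache 2.2 installation, and a plain 404 usually means Apache simply isn't finding phpinfo.php under the DocumentRoot. Assuming the thread-safe PHP 5.4 build for Apache 2.2 is present in C:/Languages/PHP, the relevant httpd.conf lines would look roughly like this:

        LoadModule php5_module "C:/Languages/PHP/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/Languages/PHP"
        DirectoryIndex index.php index.html

    Also confirm the test file really is C:/Sites/phpinfo.php and that no other web server (IIS, for example) owns port 80, since a mismatched DLL would normally stop Apache from starting at all rather than produce a 404.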

    Read the article

  • http request via iptables --to-destination ip redirect results in no response

    - by Wouter Vegter
    I have two Ubuntu servers, each with its own IP address. Let's call them server1 and server2, with IPs 1.1.1.1 and 2.2.2.2 respectively. I have nginx running on server2. The sole purpose I want server1 to have is to redirect all incoming http (so port 80) requests to server2 without clients noticing that their request is being redirected. I tried the following command on server1: iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2 But when I enter 1.1.1.1 in my browser I get no response: the page keeps trying to load without giving any message or error (I get a time-out after 2-3 mins). But when I remove the above iptables rule, I immediately get a "page not found" error when I enter 1.1.1.1 in my browser; so something is working, but not as it should: when I enter 1.1.1.1 I want the HTML page hosted on 2.2.2.2 to load, because when I enter 2.2.2.2 in my browser I do see the webpage. Could anyone please help me with this? I have been searching on this for quite some time (on Serverfault & Google), so that's why I ask. Many thanks for reading my question! Update: Thank you all for your information. Unfortunately I still get no response. I have the following iptables configuration: root@ip-10-48-238-216:/home/ubuntu# sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination root@ip-10-48-238-216:/home/ubuntu# sudo iptables -t nat -L Chain PREROUTING (policy ACCEPT) target prot opt source destination DNAT tcp -- anywhere anywhere tcp dpt:www to:2.2.2.2 Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination When I run tcpdump and make a request via Chrome to 1.1.1.1, I get the following: root@ip-10-48-238-216:/home/ubuntu# sudo tcpdump -i eth0 port 80 -vv tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 13:56:18.346625 IP (tos 0x0, ttl 52, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xb398 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0 13:56:18.346662 IP (tos 0x0, ttl 51, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9ee0 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0 13:56:18.598747 IP (tos 0x0, ttl 52, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xac40 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0 13:56:18.598777 IP (tos 0x0, ttl 51, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9788 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0 ^C 4 packets captured 4 packets received by filter 0 packets dropped by kernel The mentioned addresses relate to the following: 212-123-161-112.ip.telfort.nl.16386 : my personal computer ww1dc1.shopreme.com.www : dns of server2 (2.2.2.2) ip-10-48-238-216.eu-west-1.compute.internal.www : amazon web services ec2
internal address of server1 (1.1.1.1) However, the tcpdump log on server2 (2.2.2.2) stays empty and I get no response back in my browser. I am able to ping from server1 to server2. net.ipv4.ip_forward is set to 1, and so is /proc/sys/net/ipv4/ip_forward. Could there be anything else that is missing?
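    Two usual culprits here: the reply path (with DNAT alone, server2 answers the client directly from 2.2.2.2, so the client drops the replies) and, on EC2 specifically, the source/destination check that stops an instance from forwarding traffic not addressed to it. A common sketch is to NAT in both directions and allow forwarding explicitly; eth0 as server1's outward interface and the addresses are taken from the question:

        # on server1: rewrite the destination, then source-NAT so replies return via server1
        iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2
        iptables -t nat -A POSTROUTING -p tcp -d 2.2.2.2 --dport 80 -j MASQUERADE
        iptables -A FORWARD -p tcp -d 2.2.2.2 --dport 80 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        sysctl -w net.ipv4.ip_forward=1

    The trade-off is that nginx on server2 then sees server1's address instead of the visitor's. On EC2 you would also disable server1's source/dest check (or NAT to server2's private address instead of its public one) for the forwarded packets to be delivered at all, which would explain the empty tcpdump on server2.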

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used. My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. Example below. Is there any way to reduce the amount of logging from these actions. (By user? By source?) Is there a cleaner way for the developers to perform these ssh executions without spawning so many sessions? Seems inefficient. Can I reuse the existing connections? Example log output: Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): 
session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
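    If the black-box scripts simply call ssh as the admin user over and over, OpenSSH connection multiplexing can usually be layered underneath them through the service user's ~/.ssh/config, so the many commands share one authenticated connection (and one pair of session log lines) instead of opening a new session each time. A sketch, assuming the client side runs OpenSSH 5.6 or newer for ControlPersist — stock EL5 does not have that option, so there a long-lived master session would have to be started explicitly:

        # as the admin service user on the pulling (secondary) server
        mkdir -p ~/.ssh/sockets && chmod 700 ~/.ssh/sockets

        # then add to ~/.ssh/config ("primary.example.com" is a placeholder for the
        # host the scripts ssh into; "Host *" also works):
        #     Host primary.example.com
        #         ControlMaster auto
        #         ControlPath ~/.ssh/sockets/%r@%h:%p
        #         ControlPersist 10m

    For the log noise itself, sshd's LogLevel cannot be lowered for just one user, but rsyslog can filter the matching "session opened/closed for user admin" lines before they reach /var/log/secure, at the cost of losing that audit trail.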

    Read the article

  • Samba with remote LDAP authentication doesn't see users properly

    - by LucasBr
    I'm trying to set up a Samba server authenticated against a remote LDAP server, and I'm having some problems that I can't figure out how to solve. I was able to run getent passwd on the Samba server and could see all the users from the LDAP server, but when I tried to access \\SAMBASERVER from my Windows box I got this in /var/log/samba/log.mywindowsbox: <...snip...> [2012/10/19 13:05:22.449684, 2] smbd/sesssetup.c:1413(setup_new_vc_session) setup_new_vc_session: New VC == 0, if NT4.x compatible we would close all old resources. [2012/10/19 13:05:22.449692, 3] smbd/sesssetup.c:1212(reply_sesssetup_and_X_spnego) Doing spnego session setup [2012/10/19 13:05:22.449701, 3] smbd/sesssetup.c:1254(reply_sesssetup_and_X_spnego) NativeOS=[] NativeLanMan=[] PrimaryDomain=[] [2012/10/19 13:05:22.449717, 3] libsmb/ntlmssp.c:747(ntlmssp_server_auth) Got user=[lucas] domain=[BUSINESS] workstation=[MYWINDOWSBOX] len1=24 len2=24 [2012/10/19 13:05:22.449747, 3] auth/auth.c:216(check_ntlm_password) check_ntlm_password: Checking password for unmapped user [BUSINESS]\[lucas]@[MYWINDOWSBOX] with the new password interface [2012/10/19 13:05:22.449759, 3] auth/auth.c:219(check_ntlm_password) check_ntlm_password: mapped user is: [SAMBASERVER]\[lucas]@[MYWINDOWSBOX] [2012/10/19 13:05:22.449773, 3] smbd/sec_ctx.c:210(push_sec_ctx) push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1 [2012/10/19 13:05:22.449783, 3] smbd/uid.c:429(push_conn_ctx) push_conn_ctx(0) : conn_ctx_stack_ndx = 0 [2012/10/19 13:05:22.449791, 3] smbd/sec_ctx.c:310(set_sec_ctx) setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1 [2012/10/19 13:05:22.449922, 2] lib/smbldap.c:950(smbldap_open_connection) smbldap_open_connection: connection opened [2012/10/19 13:05:23.001517, 3] lib/smbldap.c:1166(smbldap_connect_system) ldap_connect_system: successful connection to the LDAP server [2012/10/19 13:05:23.007713, 3] smbd/sec_ctx.c:418(pop_sec_ctx) pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0 [2012/10/19 13:05:23.007733, 3] auth/auth_sam.c:399(check_sam_security) check_sam_security: Couldn't find user 'lucas' in passdb. [2012/10/19 13:05:23.007743, 2] auth/auth.c:314(check_ntlm_password) check_ntlm_password: Authentication for user [lucas] -> [lucas] FAILED with error NT_STATUS_NO_SUCH_USER [2012/10/19 13:05:23.007760, 3] smbd/error.c:80(error_packet_set) error packet at smbd/sesssetup.c(111) cmd=115 (SMBsesssetupX) NT_STATUS_LOGON_FAILURE [2012/10/19 13:05:23.010469, 3] smbd/process.c:1489(process_smb) Transaction 3 of length 142 (0 toread) <...snip...> My /etc/samba/smb.conf file follows: [global] dos charset = 850 unix charset = LOCALE workgroup = BUSINESS netbios name = SAMBASERVER bind interfaces only = true interfaces = lo eth0 eth1 smb ports = 139 hosts deny = All hosts allow = 192.168.78. 192.168.255. 127.0.0.1 10.149.122. 192.168.0.
name resolve order = wins bcast hosts log level = 3 syslog = 0 log file = /var/log/samba/log.%m max log size = 100000 domain logons = No wins support = Yes wins proxy = No client ntlmv2 auth = Yes lanman auth = Yes ntlm auth = Yes dns proxy = Yes time server = Yes security = user encrypt passwords = Yes obey pam restrictions = Yes ldap password sync = Yes unix password sync = Yes passdb backend = ldapsam:"ldap://192.168.78.206" ldap ssl = off ldap admin dn = uid=root,ou=Users,dc=business,dc=intranet ldap suffix = ldap group suffix = ou=Groups ldap user suffix = ou=Users ldap machine suffix = ou=Computers ldap idmap suffix = ou=Idmap ldap delete dn = Yes add user script = /usr/sbin/smbldap-useradd -m "%u" delete user script = /usr/sbin/smbldap-userdel "%u" add group script = /usr/sbin/smbldap-groupadd -p "%g" delete group script = /usr/sbin/smbldap-groupdel "%g" add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g" delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g" set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u" add machine script = /usr/sbin/smbldap-useradd -W -t5 "%u" idmap backend = ldap:"ldap://192.168.78.206" idmap uid = 16777216-33554431 idmap gid = 16777216-33554431 load printers = No printcap name = /dev/null map acl inherit = Yes map untrusted to domain = Yes enable privileges = Yes veto files = /lost+found/ /publicftp/ So, \\SAMBASERVER says it couldn't find my user, but I can see it with getent passwd. What can I do so that SAMBASERVER sees and authenticates my user? Thanks in advance!
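    When getent passwd works (nsswitch/LDAP) but Samba's own passdb misses the user, the ldapsam lookup is usually searching the wrong base or the entries lack the Samba attributes. In the [global] section above, ldap suffix is empty while the user/group suffixes are relative ou= values, which is one plausible cause. A sketch of what to check — the base DN is inferred from the ldap admin dn above, and the password and user name are placeholders:

        # smb.conf: give ldapsam a search base to prepend to "ou=Users" etc.
        #   ldap suffix = dc=business,dc=intranet

        # store the LDAP admin password for Samba (if not already in secrets.tdb),
        # then see what passdb can actually enumerate
        smbpasswd -w 'ldap-admin-password'
        pdbedit -L -v | grep -i lucas

        # confirm the entry carries sambaSamAccount attributes at all
        ldapsearch -x -H ldap://192.168.78.206 -b dc=business,dc=intranet '(uid=lucas)' objectClass sambaSID

    If the account shows up in LDAP but without sambaSamAccount/sambaSID, populating those attributes (for example with the smbldap tools already referenced in the config) would be the missing step rather than the suffix.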

    Read the article

  • ls hangs for a certain directory

    - by Jakobud
    There is a particular directory (/var/www) where, when I run ls (with or without options), the command hangs and never completes. There are only about 10-15 files and directories in /var/www. Mostly just text files. Here is some investigative info: [me@server www]$ df . Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_dev-lv_root 50G 19G 29G 40% / [me@server www]$ df -i . Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vg_dev-lv_root 3.2M 435K 2.8M 14% / find works fine. Also, I can type cd /var/www/ and press TAB before pressing Enter, and it will successfully tab-complete and list all the files/directories in there: [me@server www]$ cd /var/www/ cgi-bin/ create_vhost.sh html/ manual/ phpMyAdmin/ scripts/ usage/ conf/ error/ icons/ mediawiki/ rackspace sqlbuddy/ vhosts/ [me@server www]$ cd /var/www/ I have had to kill my terminal sessions several times because of the ls hanging: [me@server ~]$ ps | grep ls gdm 6215 0.0 0.0 488152 2488 ? S<sl Jan18 0:00 /usr/bin/pulseaudio --start --log-target=syslog root 23269 0.0 0.0 117724 1088 ? D 18:24 0:00 ls -Fh --color=always -l root 23477 0.0 0.0 117724 1088 ? D 18:34 0:00 ls -Fh --color=always -l root 23579 0.0 0.0 115592 820 ? D 18:36 0:00 ls -Fh --color=always root 23634 0.0 0.0 115592 816 ? D 18:38 0:00 ls -Fh --color=always root 23740 0.0 0.0 117724 1088 ? D 18:40 0:00 ls -Fh --color=always -l me 23770 0.0 0.0 103156 816 pts/6 S+ 18:41 0:00 grep ls kill doesn't seem to have any effect on the processes, even as sudo. What else should I do to investigate this problem? It just randomly started happening today. UPDATE dmesg is a big list of things, mostly related to an external USB HDD that I've mounted too many times and the max mount count has been reached, but that is an unrelated problem I think. Near the bottom of dmesg I'm seeing this: INFO: task ls:23579 blocked for more than 120 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. ls D ffff88041fc230c0 0 23579 23505 0x00000080 ffff8801688a1bb8 0000000000000086 0000000000000000 ffffffff8119d279 ffff880406d0ea20 ffff88007e2c2268 ffff880071fe80c8 00000003ae82967a ffff880407169ad8 ffff8801688a1fd8 0000000000010518 ffff880407169ad8 Call Trace: [<ffffffff8119d279>] ? __find_get_block+0xa9/0x200 [<ffffffff814c97ae>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814c964b>] mutex_lock+0x2b/0x50 [<ffffffff8117a4d3>] do_lookup+0xd3/0x220 [<ffffffff8117b145>] __link_path_walk+0x6f5/0x1040 [<ffffffff8117a47d>] ? do_lookup+0x7d/0x220 [<ffffffff8117bd1a>] path_walk+0x6a/0xe0 [<ffffffff8117beeb>] do_path_lookup+0x5b/0xa0 [<ffffffff8117cb57>] user_path_at+0x57/0xa0 [<ffffffff81178986>] ? generic_readlink+0x76/0xc0 [<ffffffff8117cb62>] ? user_path_at+0x62/0xa0 [<ffffffff81171d3c>] vfs_fstatat+0x3c/0x80 [<ffffffff81258ae5>] ? _atomic_dec_and_lock+0x55/0x80 [<ffffffff81171eab>] vfs_stat+0x1b/0x20 [<ffffffff81171ed4>] sys_newstat+0x24/0x50 [<ffffffff810d40a2>] ? audit_syscall_entry+0x272/0x2a0 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b And also, strace ls /var/www/ spits out a whole BUNCH of information. I don't know what is useful here...
The last handful of lines: ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=68, ws_col=145, ws_xpixel=0, ws_ypixel=0}) = 0 stat("/var/www/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0 open("/var/www/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3 fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC) getdents(3, /* 16 entries */, 32768) = 488 getdents(3, /* 0 entries */, 32768) = 0 close(3) = 0 fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 9), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3093b18000 write(1, "cgi-bin conf create_vhost.sh\te"..., 125cgi-bin conf create_vhost.sh error html icons manual mediawiki phpMyAdmin rackspace scripts sqlbuddy usage vhosts ) = 125 close(1) = 0 munmap(0x7f3093b18000, 4096) = 0 close(2) = 0 exit_group(0) = ?
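    The D states and the mutex_lock inside do_lookup/vfs_stat point at one specific directory entry whose inode lookup never returns — typically a dead network or FUSE mount point, or failing storage under that path. Plain getdents works (which is why tab completion and the strace'd bare ls succeed), while ls with -F/--color stats every entry and wedges. A sketch for isolating the entry, assuming coreutils' timeout is available:

        # names only, no per-entry stat (should return immediately)
        ls -1f --color=never /var/www

        # stat each entry with a timeout to find the one that blocks
        for f in /var/www/* /var/www/.*; do
            timeout 5 stat -c '%n  ok' -- "$f" || echo "$f  <-- hangs or fails"
        done

        # look for stale mounts below the path and for I/O errors
        mount | grep /var/www
        dmesg | tail -n 40

    Processes already stuck in D cannot be killed; they only clear once the underlying I/O completes, the hung mount or device is dealt with, or the machine is rebooted.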

    Read the article

  • Postfix MySql Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello, I have a Postfix setup with Dovecot and MySql. The server is running Debian Squeeze. The MySql server is a slave that has data pushed to it from a primary (postfix) mail server (running a different OS). The emails are stored on a replicated GlusterFS volume. I am able to check email using Thunderbird over IMAP. However, SMTP requests fail. After turning on query logs for the MySql server I have noticed that no query statement is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are. I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. Your help is much appreciated. The file /var/log/mail.log shows Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44] Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44] This is my dovecot.conf file log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/var/mail/virtual/%d/%n/ auth_mechanisms = plain login disable_plaintext_auth = no namespace { inbox = yes location = prefix = INBOX. separator = . type = private } passdb { args = /etc/dovecot/dovecot-mysql.conf driver = sql } protocols = imap pop3 service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { mode = 0600 user = postfix } user = root } ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { args = /etc/dovecot/dovecot-mysql.conf driver = sql } protocol lda { auth_socket_path = /var/run/dovecot/auth-master mail_plugins = sieve postmaster_address = [email protected] } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } Here is my dovecot-mysql.conf file: connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX driver = mysql default_pass_scheme = MD5-CRYPT password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1' user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1' Here is my output from 'postconf -n': append_dot_mydomain = no biff = no bounce_template_file = /etc/postfix/bounce.cf broken_sasl_auth_clients = yes config_directory = /etc/postfix delay_warning_time = 0h dovecot_destination_recipient_limit = 1 inet_interfaces = all local_recipient_maps = $virtual_mailbox_maps local_transport = virtual mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 maximal_queue_lifetime = 1d message_size_limit = 25600000 mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost myhostname = mailbox2.cws.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3 myorigin = /etc/mailname proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains
$virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps readme_directory = no recipient_delimiter = + relay_domains = relayhost = smtp_connect_timeout = 10 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) smtpd_client_message_rate_limit = 50 smtpd_client_recipient_rate_limit = 500 smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks smtpd_delay_reject = yes smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_tls_security_options = $smtpd_sasl_security_options smtpd_sasl_type = dovecot smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes transport_maps = hash:/etc/postfix/transport virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf virtual_gid_maps = static:1001 virtual_mailbox_base = /var/mail/virtual/ virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf virtual_transport = dovecot virtual_uid_maps = static:1001
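    Since no SQL query is even issued during SMTP AUTH, the failure is happening before Dovecot ever runs the password_query — either Postfix's smtpd is not reaching the private/auth socket, or the auth process rejects the request before the lookup. A sketch of how to see which, with placeholder credentials:

        # 1) dovecot.conf: make the auth process explain itself, then reload
        #      auth_verbose = yes
        #      auth_debug = yes
        service dovecot reload

        # 2) the socket Postfix is told to use must exist inside its chroot
        ls -l /var/spool/postfix/private/auth

        # 3) build an AUTH PLAIN token by hand and replay the login over port 25
        printf '\0%s\0%s' 'user@example.com' 'secret' | base64
        # then: telnet mailbox2.cws.net 25  ->  EHLO test  ->  AUTH PLAIN <token from above>

    With auth_debug on, /var/log/mail.log should show whether the request reaches Dovecot at all and, if it does, exactly which username string is handed to the password_query — a common gotcha is the client sending the name without the domain, in which case '%u' never matches the username column in the mailbox table.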

    Read the article

  • nginx + php-fpm cycle redirection error on linode new vps

    - by chifliiiii
    I'm new to nginx, and I'm trying to make my first server run. I followed this guide as I'm trying to use it for a multisite WordPress site. After installing everything, I get a 500 Internal Server Error. If I check the logs, I see this: 2012/09/27 08:55:54 [error] 11565#0: *8 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "www.mydomain.com" 2012/09/27 08:59:32 [error] 11618#0: *1 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /phpmyadmin HTTP/1.1", host: "www.mydomain.com" My conf files are the following: nano /etc/nginx/sites-available/mydomain.com server { listen 80 default_server; server_name mydomain.com *.mydomain.com; root /srv/www/aciup.com/public; access_log /srv/www/mydomain.com/log/access.log; error_log /srv/www/mydomain.com/log/error.log; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # Directives to send expires headers and turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } # this prevents hidden files (beginning with a period) from being served location ~ /\. { access_log off; log_not_found off; deny all; } # Pass uploaded files to wp-includes/ms-files.php. rewrite /files/$ /index.php last; if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } # Rewrite multisite '.../wp-.*' and '.../*.php'. if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } location ~ \.php$ { client_max_body_size 25M; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } nano /etc/nginx/nginx.conf user www-data; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 2048; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; keepalive_timeout 5; tcp_nodelay on; server_tokens off; gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Any help will be appreciated.
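    The error log says server: localhost and an internal redirect to /index.html — neither of which appears in the vhost above — so the requests look like they are being answered by the stock default site rather than this server block, and the root (/srv/www/aciup.com/public, versus the mydomain.com log paths) deserves a second look too. A quick sketch of what to verify; paths follow the Debian/Linode layout used above:

        # which vhosts are actually enabled, and does the config parse?
        ls -l /etc/nginx/sites-enabled/
        nginx -t

        # does the document root in the server block really exist and hold index.php?
        ls -l /srv/www/aciup.com/public/

        # if the shipped default site is still linked, it can shadow the new one
        rm /etc/nginx/sites-enabled/default
        service nginx reload

    If the document root is missing or empty, try_files keeps falling through to a file that is not there, which is consistent with the redirection-cycle message.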

    Read the article

< Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >