Search Results

Search found 13605 results on 545 pages for 'mail header'.


  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this. The problem I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get: Task 'emailaddress' reported error (0x8004010F) : 'The operation failed. An object cannot be found.' ---- error. I'm using cached exchange mode, which if I turn off Outlook hangs completely from the moment I start it up. (Note I've replaced my actual email address with 'emailaddress') Background information I migrated mailboxes, public store, etc. from a Small Business Server 2003 with Exchange 2003 box to a Server 2008 R2 with Exchange 2010 based primarily on an experts exchange how to article. The exchange server is up and running as an internet facing exchange server with all of the roles necessary to send and receive mail and in that capacity is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem. After research I discovered this is most likely because I failed to run the “update-globaladdresslist” (or get / update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me: WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information – first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated. (Note that I’ve replaced my domain with “domainname.com” and my organization name with “containername”) What I’ve tried I don’t want to use the old OAB, or GAL, I don’t care about either, our GAL and distribution lists needed to be organized anyway, so at this point I really just want to get rid of the old reference to the “first administrative group” and move on. I’ve tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old GAL, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists but I dont want to break something else. 
Commands run to try to create a new GAL and tell exch10 to use that GAL: New-globaladdresslist –name NAMEOFNEWGAL Set-globaladdresslist GUID –name NAMEOFNEWGAL This did nothing for me except now when I run get-globaladdresslist or with the | FL pipe I see two GALs listed, the “default global address list” and the “NAMEOFNEWGAL” that I created. After a little more research this morning it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to maybe remove the default address list via adsiedit and recreate with a command something like new-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients. This would be acceptable but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and if I'd have to remove multiple child references/records. ** Of interest** I'm getting an event ID 9337 in my application log OALGen did not find any recipients in address list ‘\Global Address List. This offline address list will not be generated. -\NAMEOFMYOAB --------- on my Exchange 2010 box, which pretty much to me seems to confirm my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!
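
    For reference, this is roughly the cmdlet sequence I'm planning to test for rebuilding the address lists from scratch - untested, and NAMEOFNEWGAL / NAMEOFNEWOAB / the server name are placeholders for values that don't exist in my org yet:

      # Rebuild the GAL and push the change out (placeholder names, untested)
      New-GlobalAddressList -Name "NAMEOFNEWGAL" -IncludedRecipients AllRecipients
      Update-GlobalAddressList -Identity "NAMEOFNEWGAL"

      # Generate a fresh OAB on the 2010 box and point the mailbox database at it
      New-OfflineAddressBook -Name "NAMEOFNEWOAB" -AddressLists "NAMEOFNEWGAL" -Server EXCH2010
      Update-OfflineAddressBook -Identity "NAMEOFNEWOAB"
      Get-MailboxDatabase | Set-MailboxDatabase -OfflineAddressBook "NAMEOFNEWOAB"

    If that works, Outlook in cached mode should be able to pull the new OAB on its next sync instead of erroring with 0x8004010F.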

    Read the article

  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making an web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly: http://api.soundcloud.com/tracks/59815100/stream This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example: http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHTTPRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHTTPRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up: location /soundcloud/tracks/ { # rewrite URL to match api.soundcloud.com's URL structure rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break; proxy_set_header Host api.soundcloud.com; proxy_pass http://api.soundcloud.com; # the 302 will redirect to /soundcloud/media instead of the original domain proxy_redirect http://ec-media.soundcloud.com /soundcloud/media; } location /soundcloud/media/ { rewrite \/soundcloud\/media\/(.*) /$1 break; proxy_set_header Host ec-media.soundcloud.com; proxy_pass http://ec-media.soundcloud.com; } So myserver/soundcloud/tracks/59815100 returns a 302 to /myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com, it's ak-media.soundcloud.com. There are possibly even more servers out there and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step? So myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head so apologies if I've missed something obvious, or it's beyond the scope of nginx. Thanks a lot for reading.
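
    One direction I'm considering, in case it helps anyone with the same problem (untested sketch - the regex form of proxy_redirect needs a reasonably recent nginx, and proxy_pass with a variable host requires a resolver directive): fold whatever *-media host appears in the 302 into the local path, then pick it back out when proxying the media request:

      location /soundcloud/tracks/ {
          rewrite ^/soundcloud/tracks/(\d*)$ /tracks/$1/stream break;
          proxy_set_header Host api.soundcloud.com;
          proxy_pass http://api.soundcloud.com;
          # capture whichever soundcloud media host the 302 points at and keep it as a path segment
          proxy_redirect ~^http://([a-z0-9.-]+\.soundcloud\.com)/(.*)$ /soundcloud/media/$1/$2;
      }

      location ~ ^/soundcloud/media/([a-z0-9.-]+)/(.*)$ {
          resolver 8.8.8.8;              # required because the upstream host below is a variable
          proxy_set_header Host $1;
          proxy_pass http://$1/$2$is_args$args;
      }

    The other idea (having nginx follow the 302 itself and return the MP3 in one round trip) would mean something along the lines of intercepting the 302 with error_page and a named location, which I haven't managed to get working yet.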

    Read the article

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificate configured and working in Apache. I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried with the SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0,1,2,..) as headers. Any ideas why this option is not working? I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAIN. It shows null in the header. Any ideas why the certificate chain is not being filled? This is my first Apache configuration: <VirtualHost 192.168.56.100:443> ServerName www.test.org ServerAdmin webmaster@localhost DocumentRoot /var/www ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined SSLEngine on SSLProxyEngine on SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> <Location /carbon> ProxyPass http://www.test.org:9763/carbon ProxyPassReverse http://www.test.org:9763/carbon </Location> <Location /services/GbTestProxy> SSLVerifyClient require SSLVerifyDepth 5 SSLOptions +ExportCertData ProxyPass http://www.test.org:8888/services/GbTestProxy ProxyPassReverse http://www.test.org:8888/services/GbTestProxy </Location> </VirtualHost> This is my second Apache configuration: <VirtualHost 192.168.56.100:443> ServerName www.test.org ServerAdmin webmaster@localhost DocumentRoot /var/www ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined SSLEngine on SSLProxyEngine on SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> <Location /carbon> ProxyPass http://www.test.org:9763/carbon ProxyPassReverse http://www.test.org:9763/carbon </Location> <Location /services/GbTestProxy> SSLVerifyClient require SSLVerifyDepth 5 RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s" RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s" RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s" RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s" RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s" RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s" RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s" RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s" ProxyPass http://www.test.org:8888/services/GbTestProxy ProxyPassReverse http://www.test.org:8888/services/GbTestProxy </Location> </VirtualHost> Hope someone can help. Regards, nidkil
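
    For what it's worth, the variant I want to try next combines +ExportCertData with the header exports, and uses the underscore form of the chain variable names - as far as I can tell the environment variables are SSL_CLIENT_CERT_CHAIN_0, SSL_CLIENT_CERT_CHAIN_1, ... and they are only populated when the client actually presents intermediate certificates. Untested sketch:

      <Location /services/GbTestProxy>
          SSLVerifyClient require
          SSLVerifyDepth 5
          # export the PEM certificates into the SSL_* environment variables first
          SSLOptions +ExportCertData +StdEnvVars
          RequestHeader set SSL_CLIENT_CERT        "%{SSL_CLIENT_CERT}s"
          # note the underscore before the index in the variable name
          RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN_0}s"
          RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN_1}s"
          RequestHeader set SSL_CLIENT_VERIFY      "%{SSL_CLIENT_VERIFY}s"
          ProxyPass        http://www.test.org:8888/services/GbTestProxy
          ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
      </Location>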

    Read the article

  • nikto probe warning messages

    - by julio
    Hi-- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5 etc. -- standard Lamp stack. I am using suhosin and have tried my best to plug the obvious stuff, since I'm the only user-- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly as far as I can tell) probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited priv, no file etc.) So I ran nikto to see what I'm doing wrong, and there's a list of things I've never seen, and can't find using "find" or any other method I'm aware of. See below: + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000. + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade. + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed. + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content. + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-3092: /administrator/: This might be interesting... + OSVDB-3092: /Agent/: This might be interesting... + OSVDB-3092: /includes/: This might be interesting... + OSVDB-3092: /logs/: This might be interesting... + OSVDB-3092: /tmp/: This might be interesting... + ERROR: /servlet/Counter returned an error: error reading HTTP response + OSVDB-3268: /icons/: Directory indexing is enabled: /icons + OSVDB-3268: /images/: Directory indexing is enabled: /images + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. 
See link + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version. I understand about the trace and index, but what about the vbulletin and autologin? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff-- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.
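
    In the meantime I've started double-checking each flagged path by hand, so I only chase the ones that actually return 200 with real content rather than nikto false positives - something along these lines (the host and path list are just examples):

      #!/bin/bash
      # Print the HTTP status each nikto-flagged path really returns
      for p in /administrator/ /Agent/ /includes/ /logs/ /tmp/ /sitemap.xml /autologon.html; do
          code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost$p")
          echo "$code  $p"
      done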

    Read the article

  • Having trouble getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an axis camera that is connected to our site (camba.tv) through axis one click connection component (which acts as proxy). We can communicate with this camera only through http by setting the proxy to our OCCC server's address. If we want to get RTSP streams (h.264) we are only left with "RTSP over HTTP" option. For this I have followed axis VAPIX 3 documentation section 3.3. I issue requests through fiddler but don't get any response. But when i put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) in windows media player (with proxy set to OCCC server 212.78.237.156:3128) the player is able to get RTSP stream over HTTP after logging in. I have created a request trace of communication between camera and windows media player through wireshark and the request that brings the stream looks like http://1.00408cbea38b/axis-media/media.amp HTTP/1.1 x-sessioncookie: 619 User-Agent: Axis AMC Host: 1.00408CBEA38B Proxy-Connection: Keep-Alive Pragma: no-cache Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth" HTTP/1.0 200 OK Content-Type: application/x-rtsp-tunnelled Date: Tue, 02 Nov 2010 11:45:23 GMT RTSP/1.0 200 OK CSeq: 1 Content-Type: application/sdp Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/ Date: Tue, 02 Nov 2010 11:45:23 GMT Content-Length: 410 v=0 o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B s=Media Presentation e=NONE c=IN IP4 0.0.0.0 b=AS:50000 t=0 0 a=control:* a=range:npt=0.000000- m=video 0 RTP/AVP 96 b=AS:50000 a=framerate:30.0 a=transform:1,0,0;0,1,0;0,0,1 a=control:trackID=1 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA== RTSP/1.0 200 OK CSeq: 2 Session: 3F4763D8; timeout=60 Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY" Date: Tue, 02 Nov 2010 11:45:24 GMT RTSP/1.0 200 OK CSeq: 3 Session: 3F4763D8 Range: npt=0- RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902 Date: Tue, 02 Nov 2010 11:45:24 GMT [Binary Stream Content] But when i copy this request to fiddler, I only get 200 status code with content-type set to application/x-rtsp-tunneled and there is no stream data. The only thing i do different with stream is to use Basic in authorization header instead of Digest and I do not get 401 (Un authorized) status code. Can anyone explain what's happening here? How can I write request sequences to get stream in fiddler? If it is needed, I can upload the wireshark request dump somewhere.
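
    My current suspicion is the auth scheme: the working Windows Media Player trace uses Digest, while I've been sending Basic from Fiddler. To rule that out I plan to replay the request outside Fiddler with curl and compare - the credentials and proxy below are placeholders for my own values:

      # Replay the tunnelled RTSP-over-HTTP GET with Digest auth through the OCCC proxy
      curl -v --digest -u root:PASSWORD \
           --proxy http://212.78.237.156:3128 \
           -H "x-sessioncookie: 619" \
           -H "User-Agent: Axis AMC" \
           "http://1.00408cbea38b/axis-media/media.amp" \
           --output stream.bin

    Also, as far as I understand the tunnelling scheme, the GET only opens the server-to-client half; the RTSP commands (DESCRIBE/SETUP/PLAY) travel base64-encoded over a separate POST to the same URL carrying the same x-sessioncookie, so replaying the GET on its own may never produce stream data regardless of the auth scheme.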

    Read the article

  • Why do ping and dig return a different IP address than nslookup?

    - by user1032531
    When pinging my domain name which points to my home public IP from two different servers on my LAN, it shows them pinging different IP. Further investigation shows dig and nslookup providing different results. See below. A little history. My IP used to be 11.22.33.444 and is hosted by Comcast. I changed routers, and it somehow got changed to 55.66.77.888. I've since updated my 1and1 domain name to point to the 55.66.77.888. desktop is a basic server, runs the web server, and connects wirelessly to my LAN. laptop is a GUI and connected via CAT5. Both operate Centos6.4. My old router was a D-Link, and used their "Virtual Server" feature to pass port 80 to desktop. My new router is a Linksys, and I use their "Port Forwarding" feature to pass port 80 to desktop (however, I haven't gotten this part working yet). What is going on??? Why the different IPs? Obviously, it most somehow be stored on the server, but why does the actual machine even know the public IP since it is on a LAN? How do I purge the old IP? [root@desktop etc]# dig +short myDomain.com 11.22.33.444 [root@desktop etc]# nslookup www.myDomain.com Server: 8.8.8.8 Address: 8.8.8.8#53 Non-authoritative answer: Name: www.myDomain.com Address: 55.66.77.888 [root@desktop etc]# dig myDomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> myDomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13822 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myDomain.com. IN A ;; ANSWER SECTION: myDomain.com. 16031 IN A 11.22.33.444 ;; Query time: 21 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Oct 21 04:36:52 2013 ;; MSG SIZE rcvd: 44 [root@desktop etc]# [root@laptop ~]# dig +short myDomain.com 55.66.77.888 [root@laptop ~]# nslookup www.myDomain.com Server: 192.168.0.1 Address: 192.168.0.1#53 Non-authoritative answer: Name: www.myDomain.com Address: 55.66.77.888 [root@laptop ~]#
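
    Update with what I'm checking next, in case it is useful: ping resolves through the NSS resolver (/etc/hosts first, then the servers in resolv.conf), while dig and nslookup talk to a DNS server directly and can each be pointed at different ones, so the commands below should show where each answer is coming from (nothing here is specific to my setup):

      cat /etc/hosts                            # a stale static entry here overrides DNS for ping
      cat /etc/resolv.conf                      # which nameservers the stub resolver actually uses
      getent hosts www.myDomain.com             # what the resolver library (what ping sees) returns
      dig +short www.myDomain.com @8.8.8.8      # ask Google DNS directly
      dig +short www.myDomain.com @192.168.0.1  # ask the router's DNS directly
      # if nscd or dnsmasq happens to be caching locally, flush it
      service nscd restart 2>/dev/null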

    Read the article

  • Where / how does Apache generate the HTML code used in the default directory listing?

    - by Ellen B
    I am looking to modify the HTML that apache generates for its default directory listing. I already know how to create a HEADER.html file that gets included for every directory listing. I am attempting to change the actual html that Apache generates for the file listing itself; right now my MacOS apache generates this for example: <table><tr><th><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr><tr><th colspan="5"><hr></th></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="ios-prototype/">ios-prototype/</a> </td><td align="right">07-Dec-2012 16:47 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="magneto-git/">magneto-git/</a> </td><td align="right">07-Dec-2012 16:46 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><th colspan="5"><hr></th></tr> </table> I want a different HTML structure (like, say, an OL) generated when my server spits back directory listings. (FYI I'm doing a bunch of mobile browser prototyping with my local webserver & need to make it not totally horrible to browse with fingers to the right test directory — the table structure sucks, and while I can mod a lot of it with CSS it's still going to be ganky.)
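
    From what I can tell so far, that table markup comes from mod_autoindex's FancyIndexing/HTMLTable behaviour rather than from a template I can edit, so the realistic knobs are to turn fancy indexing off (which produces a plain unordered list of links) or to keep it and restyle it with a stylesheet. Roughly, and untested on my MacOS build (the directory path is just an example):

      <Directory "/Library/WebServer/Documents">
          Options +Indexes
          # plain <ul> output instead of the table:
          IndexOptions -FancyIndexing
          # ...or keep the richer listing but restyle it:
          # IndexOptions FancyIndexing HTMLTable
          # IndexStyleSheet "/css/autoindex.css"
      </Directory>

    A <ul> of anchors plus CSS would at least be finger-friendly, even if it isn't the exact OL structure I originally wanted.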

    Read the article

  • Change default DNS server in Arch Linux

    - by AntoineG
    I'm in Vietnam and most social websites (Facebook, Twitter and the like - even reddit) are blocked by the ISP's DNS server. I tried to change the DNS server of my Arch box by editing resolv.conf, but that failed miserably since dhcpcd regenerates the file automatically every time I connect to the LAN. I've been looking around for a fix without success; either I'm bad at Googling, or it is non-trivial to do.
    EDIT 1: Apparently posting it here made me feel guilty enough to push my search a bit further. I found the same article that Ankur posted below. This is what I did, in case anybody ever faces the same problem:
      $ sudo gvim /etc/dhcpcd.conf
    Add "nohook resolv.conf" at the tail of the file.
      $ sudo gvim /etc/resolv.conf
    Add to the file (OpenDNS servers):
      nameserver 208.67.222.222
      nameserver 208.67.220.220
    Or (Google DNS):
      nameserver 8.8.8.8
      nameserver 8.8.4.4
    Then verify it worked (requires the dnsutils package):
      $ dig www.facebook.com
      ; <<>> DiG 9.9.1-P1 <<>> www.facebook.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16994
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;www.facebook.com. IN A
      ;; ANSWER SECTION:
      www.facebook.com. 89 IN A 69.171.224.53
      ;; Query time: 87 msec
      ;; SERVER: 208.67.222.222#53(208.67.222.222)
      ;; WHEN: Thu Jun 28 00:43:23 2012
      ;; MSG SIZE rcvd: 61
    See ;; SERVER: 208.67.222.222#53(208.67.222.222) - it worked.

    Read the article

  • Issue in setting up a VPN connection (IKEv1) using Android (ICS VPN client) with a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up VPN connection(IKEv1) using android (ICS vpn client) and Strongswan 4.5.0 server. Below is the set up: Strongswan server is running on ubuntu linux machine which is connected to some wifi hotspot. Using the steps in this guide link, I generated CA, server and client certificate. Once certificates are generated, following (clientCert.p12 and caCert.pem) are sent to mobile via mail and installed on android device. Below are the ip addresses assigned to various interfaces Linux server wlan0 interface ip where server is running: 192.168.43.212, android device eth0 interface ip address: 192.168.43.62; Android device is also attached with the same wifi hotspot. On the Android device, I uses IPsec Xauth RSA option for setting up VPN authentication configuration. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.62 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add      With the above configurations when I enable VPN on android device, VPN connection is not successful and it gets timed out in Authentication phase. I ran wireshark on both the android device and strongswan server, from the tcpdump below are the observations. Initially Identity Protection (Main mode) exchanges happens between device and server and all are successful. After all successful Identity Protection (Main mode) exchanges server is sending Transaction (Config mode) to device. In reply android device is sending Informational message instead of Transaction (Config mode) message. Further server is keep on sending Transaction (Config mode) message and device is again sending Identity Protection (Main mode) messages. Finally timeout happens and connection fails. I also capture Strongswan server logs and below are the snippets from the server logs which also verifies the same(described above). Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message: Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie: Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | responder cookie: Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0 Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82 Apr 27 21:09:40 Linux pluto[12105]: | length: 92 Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25 Apr 27 21:09:40 Linux pluto[12105]: | state object not found Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9 Can anyone please provide update on this issue. 
Why the VPN connection gets timed out and why the ISAKMP exchanges are not proper between Android and strongswan server.
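
    While waiting for ideas I'm collecting more data from the strongSwan side during a connection attempt - standard commands, nothing specific to this setup - since the failure seems to happen right when the server switches to the Config Mode (ModeCfg) exchange:

      ipsec statusall                                     # connections/SAs as the daemon sees them
      tail -f /var/log/auth.log                           # pluto logs the Main Mode / Config Mode steps here
      tcpdump -ni wlan0 udp port 500 or udp port 4500     # raw ISAKMP and NAT-T traffic to the phone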

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc..). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be Samba server and we have decided to make it as virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache) then raided with md and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important topic to find out is whether the virtualization layer will not introduce high overload into disk I/Os. So far I've been able to get up to 180MB/s in VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big but it is more than the network can handle so I do not care so much. The interesting thing is that I've found that the disk reads are not buffered in the VM unless I create and mount FS on the disk or I use the disks somehow. Simply put: Lets do dd to read disk for the first time (the /dev/vdd is an old Raptor disk 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that they are cached somewhere but not in buffers of the VM. Also the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more that the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now lets mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read was at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing and the results are same on faster (newer) HDDs and on SSD (except for the initial read speed of course). To sum it up. When I read from the device via dd first time, it gets read from the disk. Next time I reread it gets cached in the host but not in the guest (thats actually the same issue, more on that later). When I mount the filesystem but try to read the device directly it gets cached in VM (via buffers). As soon as I stop "using" it, buffers are discarded and the device is not cached anymore in the VM. When I looked into buffers value on the host I realized that the situation is the same. The block I/O gets buffered only when the disk is in use, in this case it means "exported to a VM". 
On host, after all the measurement done: 3165552 buffers On the host, after the VM shutdown: 119176 buffers I know it is not important as the disks will be mounted all the time but I am curious and I would like to know why it is working like this.
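
    For anyone who wants to reproduce the measurement, this is essentially the loop I've been using (run as root; the device name is an example, and dropping caches between runs keeps the numbers honest):

      #!/bin/bash
      # Compare a cold read, a warm re-read, and a re-read while the filesystem is mounted
      DEV=/dev/vdd
      sync; echo 3 > /proc/sys/vm/drop_caches          # start from a cold cache
      dd if=$DEV of=/dev/null bs=256k count=10000      # cold read
      grep Buffers /proc/meminfo
      dd if=$DEV of=/dev/null bs=256k count=10000      # re-read: does it come from cache?
      grep Buffers /proc/meminfo
      mount $DEV /mnt/tmp                              # now the device is "in use"
      dd if=$DEV of=/dev/null bs=256k count=10000
      grep Buffers /proc/meminfo
      umount /mnt/tmp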

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. Im not a stupid PC user asking to fix my pc problem. I am intrigued and am having a deep technical look at whats going on. I have come across a Windows XP machine that is sending unwanted p2p traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x) is the machines IP. GNUTELLA CONNECT/0.6 Listen-IP: x.x.x.x:8059 Remote-IP: 76.164.224.103 User-Agent: LimeWire/5.3.6 X-Requeries: false X-Ultrapeer: True X-Degree: 32 X-Query-Routing: 0.1 X-Ultrapeer-Query-Routing: 0.1 X-Max-TTL: 3 X-Dynamic-Querying: 0.1 X-Locale-Pref: en GGEP: 0.5 Bye-Packet: 0.1 GNUTELLA/0.6 200 OK Pong-Caching: 0.1 X-Ultrapeer-Needed: false Accept-Encoding: deflate X-Requeries: false X-Locale-Pref: en X-Guess: 0.1 X-Max-TTL: 3 Vendor-Message: 0.2 X-Ultrapeer-Query-Routing: 0.1 X-Query-Routing: 0.1 Listen-IP: 76.164.224.103:15649 X-Ext-Probes: 0.1 Remote-IP: x.x.x.x GGEP: 0.5 X-Dynamic-Querying: 0.1 X-Degree: 32 User-Agent: LimeWire/4.18.7 X-Ultrapeer: True X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230 GNUTELLA/0.6 200 OK So it seems that the malware has hooked into explorer.exe and hidden its self quite well as a Norton Scan doesn't pick anything up. I have looked in Windows firewall and it shouldn't be letting this traffic through. I have had a look into the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc... My question is what can I do to look into this deeper? What does malware achieve by sending p2p traffic? I know to fix the problem the easiest way is to reinstall Windows but I want to get to the bottom of it first, just out of interest. Edit: Had a look at Deoendency Walker and Process Explorer. Both great tools. Here is a image of the TCP connections for explorer.exe in Process Explorer http://img210.imageshack.us/img210/3563/61930284.gif
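
    For the deeper look, my next step is to enumerate what explorer.exe has actually loaded and what is configured to inject into it, rather than just watching the sockets - standard built-in and Sysinternals tools, run from wherever the tools are unpacked:

      rem map ports to owning processes again, with PIDs
      netstat -bno
      rem list every DLL module loaded into explorer.exe
      tasklist /m /fi "IMAGENAME eq explorer.exe"
      rem Sysinternals equivalents / extras
      listdlls.exe explorer.exe
      autorunsc.exe -m
      sigcheck.exe -u -e c:\windows\system32

    Anything unsigned or oddly named in those lists (a rogue DLL pulled in via AppInit_DLLs, a shell extension, or similar) would explain a Gnutella servent living inside the Explorer process.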

    Read the article

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up squid. It's running a redirect script that does some text search/replace on html pages, and then saves them to a location on the same machine on the nginx path - then issues the redirect to that URL (it's an art project :D). The relevant lines in squid.conf are http_port 3128 transparent redirect_program /etc/squid/jefferson_redirect.py The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/ The issue: I'm getting strange http redirect behavior. For example, here is the normal request/response from a PHP script that issues a header("Location:"); - a 302 redirect: http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive HTTP/1.1 302 Found Date: Tue, 13 Apr 2010 05:15:43 GMT Server: Apache X-Powered-By: PHP/5.2.11 Location: http://www.google.com/search?q=yreka Content-Type: text/html Vary: User-Agent,Accept-Encoding Content-Encoding: gzip Content-Length: 2108 Keep-Alive: timeout=3, max=100 Connection: Keep-Alive Here's what it looks like when running through the squid proxy (note that "redirector.mysite.com" is not the site running squid or nginx): http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Proxy-Connection: keep-alive If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT HTTP/1.0 200 OK Server: nginx/0.7.62 Date: Tue, 13 Apr 2010 05:21:10 GMT Content-Type: text/html Content-Length: 17865 Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT Accept-Ranges: bytes X-Cache: MISS from jefferson X-Cache-Lookup: HIT from jefferson:3128 Via: 1.1 jefferson:3128 (squid/2.7.STABLE6) Connection: keep-alive Proxy-Connection: keep-alive It is basically working - but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged, while displaying the google page (mostly broken as it's using URLs relative to redirector.mysite.com) I've experienced a similar thing with google results pages: when clicking to another page from google, I get a google URL, with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
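
    In case the goal is really to have the client end up on the rewritten URL (rather than squid silently fetching it), my understanding of the squid 2.x redirector protocol is that the helper has to prefix its answer with 302: to make squid issue a real HTTP redirect instead of an internal rewrite. A minimal sketch of what I'm about to try in jefferson_redirect.py (the rewrite function and the myserver URL are placeholders for the real logic):

      #!/usr/bin/env python
      # Minimal squid redirector: emit "302:<url>" so squid sends an HTTP redirect
      # to the browser instead of transparently rewriting the fetch.
      import sys

      def rewrite(url):
          # placeholder for the real text search/replace + save-to-nginx-path logic
          return "http://myserver/rewritten/" + url.split("://", 1)[-1]

      while True:
          line = sys.stdin.readline()
          if not line:
              break
          url = line.split(" ")[0]                  # squid passes: URL client/fqdn ident method ...
          sys.stdout.write("302:%s\n" % rewrite(url))
          sys.stdout.flush()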

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi, We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard
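
    For workaround #1, my understanding is that the shared web.config could drop Windows auth from the pipeline for the anonymous sites with something along these lines - treat it as a sketch, since I'd still have to confirm the exact module name ("WindowsAuthentication" is what I believe IIS7 registers it as) and that the modules section isn't locked at the server level:

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <system.webServer>
          <modules>
            <!-- anonymous-only sites: take Windows auth out of the integrated pipeline -->
            <remove name="WindowsAuthentication" />
          </modules>
        </system.webServer>
      </configuration>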

    Read the article

  • Process not listed by ps or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. If I go to its /bin/ dir and configure its setenv.sh and prog.sh to use local directories and my current user account. Then I try to run it via "./prog.sh start". Here are all the relevant bits of prog.sh: USER=(my current account) _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar"" case "${ACTION}" in start) nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null & echo $! >${_PID} echo "Prog running. PID="`cat ${_PID}` ;; stop) PID=`cat ${_PID} 2>/dev/null` echo "Shutting down prog: ${PID} kill -QUIT ${PID} 2>/dev/null kill ${PID} 2>/dev/null kill -KILL ${PID} 2>/dev/null rm -f ${_PID} echo "STOPPED `date`" >>${_LOGFILE} ;; When I actually do ./prog.sh start, it starts. But I can't find it at all on the process list. Nor can I kill it manually, using the same command the shell script uses. But I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out). ./prog.sh start Prog running. PID=1234 ps eaux | grep 1234 ps eaux | grep -i prog.jar ps eaux >> pslist.txt (It's not there either by PID or any clear name I can find: prog, java or jar.) cd /proc/1234/ -bash: cd: /proc/1234/: No such file or directory kill -QUIT 1234 kill 1234 kill -KILL 1234 -bash: kill: (1234) - No such process ./prog.sh stop Shutting down prog: 1234 As far as I can tell, the process is running yet not in any way listed by the system. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is, how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking! Edit: ps eu | grep prog.sh (after having restarted; so random PID) 50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR] I just realized that the stop) part of prog.sh isn't actually a guarantee that the process it claims to be stopping is running -- it just tries to kill the PID and suppresses all output then deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer so certain that the process is always running when I ps for it, although the code sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
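
    Things I plan to run next to prove whether the JVM is really there - stock commands, nothing specific to prog, and prog.pid below is a stand-in for whatever ${_PID} points at. One theory worth checking is that the PID prog.sh records belongs to the backgrounded su/nohup wrapper rather than to the java process itself:

      # search the full command line, not just the truncated command column
      ps auxww | grep '[p]rog.jar'
      pgrep -fl prog.jar

      # if a PID turns up, poke at it directly
      cat /proc/$(pgrep -f prog.jar)/cmdline 2>/dev/null | tr '\0' ' '; echo
      ls -l /proc/$(pgrep -f prog.jar)/cwd 2>/dev/null

      # compare with the PID the script recorded
      cat prog.pid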

    Read the article

  • Networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that Tunnel Adapter LAN is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. Few queries popped up in my mind based on this :- If i do 'ipconfig', Apart from ethernet adapter LAN details, I get a series of statments as below - Tunnel adapter Local Area Connection* 6 Tunnel adapter Local Area Connection* 7 Tunnel adapter Local Area Connection* 12 Tunnel adapter Local Area Connection* 13 Tunnel adapter Local Area Connection* 14 Tunnel adapter Local Area Connection* 15 Tunnel adapter Local Area Connection* 16 Except for the *16, all the other Tunnel Adapter Local Area Connections show Media Disconnected. Why is the numbering for the Tunnel adapter LAN not sequential? It is like 6, 7, 12, 13, 14, 15, 16. A strange numbering scheme! I tried to figure it out by thinking of some arithmetic series. But, it does not seem to fit in. There is a huge gap between 7 and 12. Any ideas? What is the need for so many Tunnel Adapter LAN connections? Can you tell me a scenario that requires all of those ? I did ipconfig /all to get more information. From the listing, I understand that: 16, 15, 14, 12 are Microsoft 6to4 Adapters 13, 6 are isatap Adapters 7 is Teredo Tunneling Pseudo-interface I understand that the above are for automatic tunneling so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end-user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs. That is, It is to grant IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address. That is to transmit IPv6 packets between dual-stack nodes on top of an IPv4 network. That is, to put in simple words, ISATAP is an intra-site mechanism, while the 6to4 and Teredo are for inter-site tunnelling mechanisms. It seems that Teredo should alone enabled by default in Vista, But my system does not show it to be enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) to be enabled by default? Any specific reasons for it? If i do ipconfig /all, why is only one Teredo present while four 6to4 are present ? I searched the internet for answers to the above queries, but I am unable to find clear answers.
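
    For anyone who simply wants the pseudo-interfaces gone (or wants to toggle them to see which connections disappear from ipconfig), these are the netsh switches I'm aware of - run from an elevated prompt, and reversible by setting the state back to default:

      rem show the current state of the transition technologies
      netsh interface teredo show state
      netsh interface 6to4 show state
      netsh interface isatap show state

      rem disable them
      netsh interface teredo set state disabled
      netsh interface 6to4 set state disabled
      netsh interface isatap set state disabled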

    Read the article

  • Nginx + PHP - No input file specified

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; root /www keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works which means the file exists and the permissions are correct. Why is this happening and what are the root causes of things breaking for just the testapp.com v-host?
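
    To figure out whether the bad path comes from this vhost or from the PHP-FPM side, my next step is to hand PHP-FPM the exact SCRIPT_FILENAME nginx claims it cannot open, bypassing nginx entirely. This assumes the cgi-fcgi tool (from the libfcgi package) is available and that the pool user is www-data - both assumptions on my part:

      # ask PHP-FPM directly for the file nginx says it cannot open
      SCRIPT_FILENAME=/www/app/www/index.php \
      REQUEST_METHOD=GET \
      cgi-fcgi -bind -connect 127.0.0.1:9000

      # check what the file looks like to the pool user (permissions / chroot issues)
      sudo -u www-data stat /www/app/www/index.php
      ps aux | grep '[p]hp-fpm'          # confirm the user and config the pool actually runs with

    If cgi-fcgi returns the page, the file and permissions are fine and the problem is in what this particular server block sends; if it also reports the file as missing, the FPM pool itself (user, chroot or chdir settings) is the place to look.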

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet - or at least I can't find any information relating. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup: QNAP 410 as iSCSI Target with multiple LUNs. (CRC is turned on (Data Digest and Header Digest) Server 2003 with iSCSI Initiator version 2.08 - build 3825 (I'm copying files from anothe machine to shares on Server 2003 = into TrueCrypt volume ergo onto the NAS) I have mounted the LUN and formatted it with TrueCrypt using NTFS (Full format, not a quick one). What happens is some files, mainly RAR/Compressed files, appear as if they copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check transfer over iSCSI without TrueCrypt in between, a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet, I will try that. I also haven't tried it without CRC but fail to see how that would affect this. I will update with my findings later. In the meantime does anyone have any ideas as to what could be wrong? Thanks for your time. Update: I copied a set of files, the ones I was having issues with, to the server then from there I copied those into two places within the TrueCrypt volume (Mounted on the NAS). A seperate directory create in the root of the volume The same initial directory I was using in the first instance Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows Shares. I say this because I originally setup the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - day wasted as well. While I had this setup though I copied my entire file set to the volume files and all files copied without error - over the network, from a pc, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So any ideas?

    Read the article

  • Facing issues in setting up a VPN connection (IKEv1) using iPhone (default Cisco VPN client) and a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up VPN connection(IKEv1) using iPhone (Defult Cisco VPN client) and Strongswan 4.5.0 server. The Strongswan server is running on Ubuntu Linux, which is connected to some wifi hotspot. This is the guide which was used. I generated CA, server and client certificate, with the only difference mentioned below. “While generating server certificate, as per link CN=vpn.strongswan.org instead of this I changed CN name to CN=192.168.43.212.” Once certificates are generated, following (clientCert.p12 and caCert.pem) are sent to mobile via mail and installed on iphone. After installation I notice that certificates are considered as trusted also. Below are the ip addresses assigned to various interfaces Linux server wlan0 interface ip where server is running: 192.168.43.212 Iphone eth0 interface ip address: 192.168.43.72. iphone is also attached with the same wifi hotspot. Below is the snapshot of client configurations. Description Strong swan Server 192.168.43.212 Account ipsecvpn Password ***** Use certificate ON Certificate client The above username and password are in sync with the ipsec.secrets file. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.72 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add With the above configurations when I enable VPN on iphone, it says Could not able to verify server certificate. I ran Wireshark on a Linux server and observe that initially some ISAKMP message exchanges happens between client and server, which are successful but before authorization, client is sending some informational message and soon after this client is showing error as popup Could not able to verify server certificate. Capture logs on Strongswan server and in server logs below errors are observed: From auth.log Apr 25 20:16:08 Linux pluto[4025]: | ISAKMP version: ISAKMP Version 1.0 Apr 25 20:16:08 Linux pluto[4025]: | exchange type: ISAKMP_XCHG_INFO Apr 25 20:16:08 Linux pluto[4025]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 25 20:16:08 Linux pluto[4025]: | message ID: 9d 1a ea 4d Apr 25 20:16:08 Linux pluto[4025]: | length: 76 Apr 25 20:16:08 Linux pluto[4025]: | ICOOKIE: f6 b7 06 b2 b1 84 5b 93 Apr 25 20:16:08 Linux pluto[4025]: | RCOOKIE: 86 92 a0 c2 a6 2f ac be Apr 25 20:16:08 Linux pluto[4025]: | peer: c0 a8 2b 48 Apr 25 20:16:08 Linux pluto[4025]: | state hash entry 8 Apr 25 20:16:08 Linux pluto[4025]: | state object not found Apr 25 20:16:08 Linux pluto[4025]: **packet from 192.168.43.72:500: Informational Exchange is for an unknown (expired?) 
SA** Apr 25 20:16:08 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 8 seconds for #8 Apr 25 20:16:16 Linux pluto[4025]: | Apr 25 20:16:16 Linux pluto[4025]: | *time to handle event Apr 25 20:16:16 Linux pluto[4025]: | event after this is EVENT_RETRANSMIT in 2 seconds Apr 25 20:16:16 Linux pluto[4025]: | handling event EVENT_RETRANSMIT for 192.168.43.72 "ios1" #8 Apr 25 20:16:16 Linux pluto[4025]: | sending 76 bytes for EVENT_RETRANSMIT through wlan0 to 192.168.43.72:500: Apr 25 20:16:16 Linux pluto[4025]: | a6 a5 86 41 4b fb ff 99 c9 18 34 61 01 7b f1 d9 Apr 25 20:16:16 Linux pluto[4025]: | 08 10 06 01 e9 1c ea 60 00 00 00 4c ba 7d c8 08 Apr 25 20:16:16 Linux pluto[4025]: | 13 47 95 18 19 31 45 30 2e 22 f9 4d 85 2c 27 bc Apr 25 20:16:16 Linux pluto[4025]: | 9e 9b e1 ae 1e 35 51 6f ab 80 f5 73 3c 15 8d 20 Apr 25 20:16:16 Linux pluto[4025]: | 4b 46 47 86 50 24 3f 13 15 7d d5 17 Apr 25 20:16:16 Linux pluto[4025]: | inserting event EVENT_RETRANSMIT, timeout in 40 seconds for #8 Apr 25 20:16:16 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 2 seconds for #10 Apr 25 20:16:16 Linux pluto[4025]: | rejected packet: Apr 25 20:16:16 Linux pluto[4025]: | Apr 25 20:16:16 Linux pluto[4025]: | control: Apr 25 20:16:16 Linux pluto[4025]: | 30 00 00 00 00 00 00 00 00 00 00 00 0b 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | 6f 00 00 00 02 03 03 00 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | 02 00 00 00 c0 a8 2b 48 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | name: Apr 25 20:16:16 Linux pluto[4025]: | 02 00 01 f4 c0 a8 2b 48 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: **ERROR: asynchronous network error report on wlan0 for message to 192.168.43.72 port 500, complainant 192.168.43.72: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]** Anybody please provide some update about this error and how to solve this issue.
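
    One thing I want to rule out on my side is the server certificate itself: the phone connects to 192.168.43.212, and my understanding is that the iPhone client wants to find that address in the certificate's subjectAltName, not just in the CN. So I plan to re-issue the gateway certificate along these lines - treat it as a sketch, since the available --san/--flag options depend on the pki tool version (the same thing can be done with openssl if needed):

      # re-issue the gateway certificate with the server's IP in subjectAltName (sketch)
      ipsec pki --gen --outform pem > serverKey.pem
      ipsec pki --pub --in serverKey.pem | \
        ipsec pki --issue --cacert caCert.pem --cakey caKey.pem \
                  --dn "C=CH, O=strongSwan, CN=192.168.43.212" \
                  --san 192.168.43.212 \
                  --flag serverAuth --flag ikeIntermediate \
                  --outform pem > serverCert.pem

      # sanity-check what actually ended up in the certificate
      openssl x509 -in serverCert.pem -noout -text | grep -A1 "Subject Alternative Name"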

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: Last year we bought an IBM x3500 server with 2 Xeon E5410's, 9GB RAM, 6 HDDs. The original intent for this server was to replace the old exchange e-mail server. It was brought in, set up, and then shortly after we switched to gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I'm thinking about the best use for the two server, and I think I finally have a plan for what I want to do with them. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out what servers to convert to virtual, and chose them so that each host will be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally, (if you're not bored already), we come to my questions: Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 nics total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it even too complicated? IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333mhz front side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase. This only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the fsb down to 800mhz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800mhz fsb dual core xeon I can configure and then buying the E5410's separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want? I don't really intend to do this, but one option to save some money back is to omit the redundant power supply. Since my redundancy plan for these system is to switch over to a completely different host, the extra power isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    I'm seeing a lot of TCP timeouts on a busy cache server; below are my sysctl.conf configuration as well as the output of "netstat -st". Kernel 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux Any advice or help would be highly appreciated. #################### Sysctl.conf cat /etc/sysctl.conf net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 fs.file-max = 65536 net.ipv4.tcp_low_latency = 1 net.core.wmem_max = 8388608 net.core.rmem_max = 8388608 net.ipv4.ip_local_port_range = 1024 65000 fs.aio-max-nr = 131072 net.ipv4.tcp_fin_timeout = 10 net.ipv4.tcp_keepalive_time = 60 net.ipv4.tcp_keepalive_intvl = 10 net.ipv4.tcp_keepalive_probes = 3 kernel.threads-max = 131072 kernel.msgmax = 32768 kernel.msgmni = 64 kernel.msgmnb = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_forward = 1 net.ipv4.tcp_timestamps = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_syncookies = 1 net.ipv4.ip_dynaddr = 1 vm.swappiness = 0 vm.drop_caches = 3 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_ecn = 0 net.ipv4.tcp_max_orphans = 131072 net.ipv4.tcp_orphan_retries = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.tcp_max_syn_backlog = 32768 net.core.netdev_max_backlog = 131072 net.ipv4.tcp_mem = 6085248 16227328 67108864 net.ipv4.tcp_wmem = 4096 131072 33554432 net.ipv4.tcp_rmem = 4096 174760 33554432 net.core.rmem_default = 33554432 net.core.rmem_max = 33554432 net.core.wmem_default = 33554432 net.core.wmem_max = 33554432 net.core.somaxconn = 10000 # ################ Netstat results /# netstat -st IcmpMsg: InType0: 2 InType3: 233754 InType8: 56251 InType11: 23192 OutType0: 56251 OutType3: 437 OutType8: 4 Tcp: 20680741 active connections openings 63642431 passive connection openings 1126690 failed connection attempts 2093143 connection resets received 13059 connections established 2649651696 segments received 2195445642 segments send out 183401499 segments retransmited 38299 bad segments received. 14648899 resets sent UdpLite: TcpExt: 507 SYN cookies sent 178 SYN cookies received 1376771 invalid SYN cookies received 1014577 resets received for embryonic SYN_RECV sockets 4530970 packets pruned from receive queue because of socket buffer overrun 7233 packets pruned from receive queue 688 packets dropped from out-of-order queue because of socket buffer overrun 12445 ICMP packets dropped because they were out-of-window 446 ICMP packets dropped because socket was locked 33812202 TCP sockets finished time wait in fast timer 622 TCP sockets finished time wait in slow timer 573656 packets rejects in established connections because of timestamp 133357718 delayed acks sent 23593 delayed acks further delayed because of locked socket Quick ack mode was activated 21288857 times 839 times the listen queue of a socket overflowed 839 SYNs to LISTEN sockets dropped 41 packets directly queued to recvmsg prequeue. 
79166 bytes directly in process context from backlog 24 bytes directly received in process context from prequeue 2713742130 packet headers predicted 84 packets header predicted and directly queued to user 1925423249 acknowledgments not containing data payload received 877898013 predicted acknowledgments 16449673 times recovered from packet loss due to fast retransmit 17687820 times recovered from packet loss by selective acknowledgements 5047 bad SACK blocks received Detected reordering 11 times using FACK Detected reordering 1778091 times using SACK Detected reordering 97955 times using reno fast retransmit Detected reordering 280414 times using time stamp 839369 congestion windows fully recovered without slow start 4173098 congestion windows partially recovered using Hoe heuristic 305254 congestion windows recovered without slow start by DSACK 933682 congestion windows recovered without slow start after partial ack 77828 TCP data loss events TCPLostRetransmit: 5066 2618430 timeouts after reno fast retransmit 2927294 timeouts after SACK recovery 3059394 timeouts in loss state 75953830 fast retransmits 11929429 forward retransmits 51963833 retransmits in slow start 19418337 other TCP timeouts 2330398 classic Reno fast retransmits failed 2177787 SACK retransmits failed 742371590 packets collapsed in receive queue due to low socket buffer 13595689 DSACKs sent for old packets 50523 DSACKs sent for out of order packets 4658236 DSACKs received 175441 DSACKs for out of order packets received 880664 connections reset due to unexpected data 346356 connections reset due to early user close 2364841 connections aborted due to timeout TCPSACKDiscard: 1590 TCPDSACKIgnoredOld: 241849 TCPDSACKIgnoredNoUndo: 1636687 TCPSpuriousRTOs: 766073 TCPSackShifted: 74562088 TCPSackMerged: 169015212 TCPSackShiftFallback: 78391303 TCPBacklogDrop: 29 TCPReqQFullDoCookies: 507 TCPChallengeACK: 424921 TCPSYNChallenge: 170388 IpExt: InBcastPkts: 351510 InOctets: -609466797 OutOctets: -1057794685 InBcastOctets: 75631402 #
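    Two entries in the sysctl.conf above are common sources of exactly this kind of intermittent timeout; the snippet below is a hedged sketch of the usual adjustments, worth testing one at a time rather than a drop-in fix:

    # tcp_tw_recycle is known to break clients that sit behind NAT, and it
    # depends on TCP timestamps to do anything at all; with tcp_timestamps = 0
    # it is at best useless and at worst harmful, so disable it.
    net.ipv4.tcp_tw_recycle = 0

    # drop_caches is a one-shot trigger, not a tunable; keeping it at 3 in
    # sysctl.conf re-drops the page/dentry caches on every sysctl reload.
    # Remove the line (shown here commented out) rather than carrying it.
    # vm.drop_caches = 3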

    Read the article

  • Making application behind reverse proxy aware of https

    - by akaIDIOT
    HTTPS in Tomcat being the hassle it is, I've been trying to get an Axis2 webapp to work behind a reverse proxy for ages now, and can't seem to get it to work. The proxying itself works like a charm, but the app fails to generate 'links' (or ports as it concerns SOAP) using https. It would seem I need some way to let Axis2 know it is being accessed through https, even though the actual transport to it is done over http (proxied from localhost). The nginx config that proxies https to localhost:8080: server { listen 443; server_name localhost; ssl on; ssl_certificate /path/to/.pem; ssl_certificate_key /path/to/.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; ssl_prefer_server_ciphers on; location / { # force some http-headers (avoid confusing tomcat) proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; # pass requests to local tomcat server listening on default port 8080 proxy_pass http://localhost:8080; } } The proxy itself works fine, and the info pages of the webapp work. The problem lies in the ports generated in the .wsdl: <wsdl:service name="WebService"> <wsdl:port name="WebServiceHttpSoap11Endpoint" binding="ns:WebServiceSoap11Binding"> <soap:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpSoap11Endpoint/"/> </wsdl:port> <wsdl:port name="WebServiceHttpSoap12Endpoint" binding="ns:WebServiceSoap12Binding"> <soap12:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpSoap12Endpoint/"/> </wsdl:port> <wsdl:port name="WebServiceHttpEndpoint" binding="ns:WebServiceHttpBinding"> <http:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpEndpoint/"/> </wsdl:port> </wsdl:service> The Host header does its job; it shows 10.10.3.96 instead of localhost, but as the snippet shows, it says http:// in front of it instead of https://. My client app can't deal with this... Adding proxyPort and proxyName to the Tomcat 6 server.xml in the default <Connector> doesn't help; I'm at a loss on how to get this to work properly.
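    For reference, the usual Tomcat-side counterpart to this nginx setup is to mark the plain-HTTP connector as proxied and secure, so that the servlet API (and therefore Axis2's WSDL generator) reports https URLs. This is only a sketch under the assumption of the stock connector on port 8080; the proxyName value is a placeholder for the public hostname:

    <!-- server.xml: connector sitting behind the TLS-terminating proxy.
         scheme/secure make request.getScheme() return "https" even though
         the hop from nginx is plain http on port 8080. -->
    <Connector port="8080" protocol="HTTP/1.1"
               proxyName="www.example.com" proxyPort="443"
               scheme="https" secure="true" />

    proxyName and proxyPort alone only change the reported host and port, which may be why adding just those two attributes did not remove the http:// prefix from the generated .wsdl.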

    Read the article

  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that Tunnel Adapter LAN is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this: If I do 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements as below - Tunnel adapter Local Area Connection* 6 Tunnel adapter Local Area Connection* 7 Tunnel adapter Local Area Connection* 12 Tunnel adapter Local Area Connection* 13 Tunnel adapter Local Area Connection* 14 Tunnel adapter Local Area Connection* 15 Tunnel adapter Local Area Connection* 16 Except for the *16, all the other Tunnel Adapter Local Area Connections show Media Disconnected. Why is the numbering for the Tunnel adapter LAN not sequential? It is like 6, 7, 12, 13, 14, 15, 16. A strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit. There is a huge gap between 7 and 12. Any ideas? What is the need for so many Tunnel Adapter LAN connections? Can you tell me a scenario that requires all of those? I did ipconfig /all to get more information. From the listing, I understand that: 16, 15, 14, 12 are Microsoft 6to4 Adapters; 13, 6 are ISATAP Adapters; 7 is the Teredo Tunneling Pseudo-interface. I understand that the above are for automatic tunneling so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end-user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address; that is, it transmits IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunnelling mechanisms. It seems that Teredo alone should be enabled by default in Vista, but my system does not show it as enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) as enabled by default. Any specific reasons for that? If I do ipconfig /all, why is only one Teredo adapter present while four 6to4 adapters are present? I searched the internet for answers to the above queries, but I am unable to find clear answers.
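    If it helps to see why each pseudo-interface exists, the state of the individual transition technologies can be inspected from an elevated command prompt. This is a hedged sketch using the netsh contexts available in Vista and later, shown only as a way to map the adapters back to 6to4, Teredo and ISATAP:

    :: Show whether each IPv6 transition technology is active
    netsh interface 6to4 show state
    netsh interface teredo show state
    netsh interface isatap show state

    :: Example: turn one of them off (reversible by setting it back to default)
    netsh interface teredo set state disabled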

    Read the article

  • nginx connection time issue on some IPs

    - by sheldon
    I have recently shifted my server to nginx and php-fpm, getting rid of Apache. This has helped improve the speed of my website. Everything seemed to work fine until I came across this issue: I noticed that nginx keeps throwing connection timeout errors for only certain IPs. One of the IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config: pm.max_children = 50 pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 pm.max_requests = 300 Since I have a server with 4 cores and 2 GB RAM, this is my nginx setup: worker_processes 4; worker_rlimit_nofile 8192; events { worker_connections 1024; use epoll; multi_accept off; } sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 55; recursive_error_pages on; server_name_in_redirect off; server_tokens off; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; connection_pool_size 256; client_header_buffer_size 8k; large_client_header_buffers 4 32k; request_pool_size 4k; output_buffers 4 32k; postpone_output 1460; proxy_buffer_size 32k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; fastcgi_connect_timeout 120; fastcgi_send_timeout 120; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; Where am I going wrong with the config? I have tried various settings but the issue still persists. These are the errors I keep getting: 2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"
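    Since the logged error is nginx timing out while waiting for a response from php-fpm (rather than a network problem between the office and the server), one hedged first step is to make php-fpm report which scripts are running long. The pool directives below are a sketch assuming a standard php-fpm pool file; the log path is only an example:

    ; Log any request that takes longer than 10 seconds, with a backtrace,
    ; so the slow backend scripts hit from the office can be identified
    slowlog = /var/log/php-fpm/slow.log
    request_slowlog_timeout = 10s

    ; Optionally kill runaway requests before nginx's fastcgi_read_timeout (180s) fires
    request_terminate_timeout = 170s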

    Read the article

  • nginx probably delivering wrong filetype for .css file with php tags

    - by Katai
    And again - NGINX is giving me many questions today :) As always, I already tried around for a while, but can't seem to fix this issue: I just configured NGINX to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I could debug it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. It has the CSS, but it does not interpret it! What I mean by this: apparently, the file-type of the CSS (or application-type, whatever) is wrong. The page can access the CSS, but doesn't bother at all to actually use it. What I checked / tried: There are no PHP errors inside the .css, so that one is out. The .css is accessible; I can call the URI manually, or check if the included URL finds it: both work. The .css has no syntax errors (I switched to a CSS file that just has body {background-color: #000; }). It works without NGINX. I deleted the browser cache / restarted NGINX after config rewrites. Here is the configuration: server { listen 80; server_name localhost; access_log /var/log/nginx/board.access_log; error_log /var/log/nginx/board.error_log warn; root /var/www/board/public; index index.php; fastcgi_index index.php; location / { try_files $uri $uri /index.php; } location ~ (\.php|\.css)$ { try_files $uri =404; include /etc/nginx/fastcgi_params; #keepalive_timeout 0; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:7777; } } Firebug 'Network' Response Header: Connection keep-alive Content-Encoding gzip Content-Type text/html Date Sat, 16 Jun 2012 10:08:40 GMT Server nginx/1.0.5 Transfer-Encoding chunked X-Powered-By PHP/5.3.6-13ubuntu3.7 I think I just answered my own question. Is the Content-Type text/html the problem? How can I remove that? My personal guess is that I have to use this in some way: include /etc/nginx/mime.types; default_type application/octet-stream; But I'm not sure... does anyone have an idea how to solve this? TL;DR: the CSS file is delivered correctly, but it doesn't seem to be 'used' as CSS by the browser. (Tested; it works on Apache.)
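    The self-diagnosis looks right: when PHP (via php-fpm) produces the response, it sends Content-Type: text/html by default, and browsers will not apply a stylesheet served with that type. A hedged sketch of the simplest fix — assuming the .css files are already being run through PHP anyway — is to have the stylesheet declare its own type before any output; the mime.types include only affects files nginx serves statically, not FastCGI responses:

    <?php
    // At the very top of the PHP-parsed stylesheet (e.g. style.css),
    // before any CSS output: override php-fpm's default text/html header.
    header('Content-Type: text/css; charset=utf-8');
    ?>
    body { background-color: #000; }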

    Read the article

< Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >