Search Results

Search found 1926 results on 78 pages for 'cookie monster'.

Page 44/78 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but get redirected to an http request with a trailing slash - which ends in connection refused - I only want these services to be available through https. So if a link points to https://example.com/internal/errorlogs, loading https://example.com/internal/errorlogs in a browser gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads. I've tried with varied trailing forward slashes appended to the proxypath and location in proxy.conf to no effect, and have also added server_name_in_redirect off; This happens on more than one app under nginx, and works through an Apache reverse proxy. Config files; proxy.conf location /internal { proxy_pass http://localhost:8081/internal; include proxy.inc; } .... more entries .... sites-enabled/main server { listen 443; server_name example.com; server_name_in_redirect off; include proxy.conf; ssl on; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Proto https; curl output -$ curl -I -k https://example.com/internal/errorlogs/ HTTP/1.1 200 OK Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:07 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 14327 -$ curl -I -k https://example.com/internal/errorlogs HTTP/1.1 301 Moved Permanently Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:11 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 127 Location: http://example.com/internal/errorlogs/
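    A minimal, untested sketch of one commonly suggested direction for this symptom: when nginx terminates SSL and the proxied response carries an http:// Location header, that header can be rewritten back to https instead of being passed through untouched with proxy_redirect off. The sketch assumes proxy_redirect off is removed from proxy.inc for this location:

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;   # assumes proxy_redirect off has been removed there
            proxy_redirect http://example.com/ https://example.com/;
        }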

    Read the article

  • IKE Phase 1 Aggressive Mode exchange does not complete

    - by Isaac Sutherland
    I've configured a 3G IP Gateway of mine to connect using IKE Phase 1 Aggressive Mode with PSK to my openswan installation running on Ubuntu server 12.04. I've configured openswan as follows: /etc/ipsec.conf: version 2.0 config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off protostack=netkey conn net-to-net authby=secret left=192.168.0.11 [email protected] leftsubnet=10.1.0.0/16 leftsourceip=10.1.0.1 right=%any [email protected] rightsubnet=192.168.127.0/24 rightsourceip=192.168.127.254 aggrmode=yes ike=aes128-md5;modp1536 auto=add /etc/ipsec.secrets: @left.paxcoda.com @right.paxcoda.com: PSK "testpassword" Note that both left and right are NAT'd, with dynamic public IP's. My left ISP gives my router a public IP, but my right ISP gives me a shared dynamic public IP and dynamic private IP. I have dynamic dns for the public ip on the left side. Here is what I see when I sniff the ISAKMP protocol: 21:17:31.228715 IP (tos 0x0, ttl 235, id 43639, offset 0, flags [none], proto UDP (17), length 437) 74.198.87.93.49604 > 192.168.0.11.isakmp: [udp sum ok] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->0000000000000000: phase 1 I agg: (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1 (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180)))) (ke: key len=192) (nonce: n len=16 data=(da31a7896e2a19582b33...0000001462b01880674b3739630ca7558cec8a89)) (id: idtype=FQDN protoid=0 port=0 len=17 right.paxcoda.com) (vid: len=16) (vid: len=16) (vid: len=16) (vid: len=16) 21:17:31.236720 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 456) 192.168.0.11.isakmp > 74.198.87.93.49604: [bad udp cksum 0x649c -> 0xcd2f!] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->5b9776d4ea8b61b7: phase 1 R agg: (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1 (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180)))) (ke: key len=192) (nonce: n len=16 data=(32ccefcb793afb368975...000000144a131c81070358455c5728f20e95452f)) (id: idtype=FQDN protoid=0 port=0 len=16 left.paxcoda.com) (hash: len=16) (vid: len=16) (pay20) (pay20) (vid: len=16) However, my 3G Gateway (on the right) doesn't respond, and I don't know why. I think left's response is indeed getting through to my gateway, because in another question, I was trying to set up a similar scenario with Main Mode IKE, and in that case it looks as though at least one of the three 2-way main mode exchanges succeeded. What other explanation for the failure is there? (The 3G Gateway I'm using on the right is a Moxa G3150, by the way.)

    Read the article

  • What are ping packets made of?

    - by Mr. Man
    What exactly is in the packets that are sent via the ping command? I was reading a Wikipedia article about magic numbers and saw this: DHCP packets use a "magic cookie" value of '63 82 53 63' at the start of the options section of the packet. This value is included in all DHCP packet types. So what else is in the packets?

    Read the article

  • HTTP Content-type header for cached files

    - by Brian
    Hello, Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-type is only set correctly the first time I load it - subsequent refreshes are missing Content-type altogether and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, eg. http://www.site.com/script.js?12345 However, I don't want to have to do that, since caching is good and all I want is for the Content-type to be present. I've tried using a RewriteRule to force the type but still didn't solve the problem. Any ideas? Thanks, Brian More Details: HTTP headers WITHOUT random query string value: http://localhost/script.js GET /script.js HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT If-None-Match: "3440e9-119ed-485621404f100" Cache-Control: max-age=0 HTTP/1.1 304 Not Modified Date: Thu, 29 Apr 2010 20:19:44 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Connection: Keep-Alive Keep-Alive: timeout=5, max=100 Etag: "3440e9-119ed-485621404f100" Vary: Accept-Encoding X-Pad: avoid browser bug HTTP headers WITH random query string value: http://localhost/script.js?c947344de8278053f6edbb4365550b25 GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; HTTP/1.1 200 OK Date: Thu, 29 Apr 2010 20:14:40 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT Etag: "3440e9-119ed-485621404f100" Accept-Ranges: bytes Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 24605 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: application/javascript
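    Worth noting: a 304 Not Modified response has no body, so the missing Content-Type and Content-Encoding on refreshes are expected and there is nothing for gzip to act on; only the full 200 response carries the compressed body. For completeness, a hedged sketch of making compression keyed on MIME type explicit (assumes mod_deflate is loaded; it only affects full 200 responses):

        AddOutputFilterByType DEFLATE text/css application/javascript application/x-javascript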

    Read the article

  • Where does Opera store cookies set to expire after restart (time 0)?

    - by marc
    Welcome, I'm looking for information on where Opera stores temporary cookies with an expiry time of 0, meaning cookies that should be deleted after the browser restarts. Does Opera create a temporary file for these cookies and delete it after restart (if yes, which file), or does it store them in memory (RAM)? I've examined the "cookies4.dat" file with a hex editor and it doesn't have those "temp" (time 0) cookies stored inside. regards

    Read the article

  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass; however, several pages load with 404s. The links on the pages have https:// in front, but result in an http request - which ends in a 404 - I only want these services to be available through https. I've tried with varied trailing forward slashes appended to the proxypath and location in proxy.conf, I've also tried commenting out www.conf (just in case its location blocks could have caused any conflicts) to no effect. So if a link points to https://example.com/sickbeard/errorlogs, loading https://example.com/sickbeard/errorlogs in a browser gives a 404, while https://example.com/sickbeard/errorlogs/ loads. nginx error log; 2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com" Config files; proxy.conf location /sickbeard { proxy_pass http://localhost:8081/sickbeard; include proxy.inc; } .... more entries .... sites-enabled/main server { listen 80; include www.conf; } server { listen 443; include proxy.conf; include www.conf; ssl on; } www.conf root /var/www; server_name example.com; location / { autoindex off; allow all; rewrite ^/$ /mainsite last; location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { expires max; } location ~ \.php$ { fastcgi_index index.php; include fastcgi_params; if (-f $request_filename) { fastcgi_pass 127.0.0.1:9000; } } } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
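    A hedged sketch of one commonly suggested adjustment (untested against this setup): the error log shows GET /sickbeard/errorlogs/ being served from www.conf's root (/var/www) rather than being proxied, so keeping the trailing slash consistent on both the location and the proxy_pass target keeps /sickbeard/... requests on the backend:

        location /sickbeard/ {
            proxy_pass http://localhost:8081/sickbeard/;   # trailing slash on both sides
            include proxy.inc;
        }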

    Read the article

  • How to secure memcached?

    - by alfish
    In Debian, I have installed memcached (using this guide) to lower the otherwise unmanageable load on the mysql database. The database is on a separate server, and memcached and Varnish are on the front server. Is it a potential security hole to leave memcached unprotected by a firewall? If so, how should I secure it? The situation is especially worrisome, as I've received (unproven) reports of cookie thefts on the server. Thanks
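    Yes - memcached has no authentication by default, so an open port 11211 lets anyone read or poison the cache, including any session data it holds. A minimal sketch of the usual lockdown; the addresses are assumptions, adapt them to the actual interfaces in use:

        # /etc/memcached.conf - listen only on localhost or a private interface
        -l 127.0.0.1
        # or, if the web tier connects over a private LAN, firewall the port (iptables assumed):
        iptables -A INPUT -p tcp --dport 11211 -s 10.0.0.0/24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 11211 -j DROP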

    Read the article

  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching for no good reason by actively setting all the anti-cache headers: Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type: text/html; charset=UTF-8 Date: Thu, 22 May 2014 08:43:53 GMT Expires: Thu, 19 Nov 1981 08:52:00 GMT Last-Modified: Pragma: no-cache Set-Cookie: ECSESSID=...; path=/ Vary: User-Agent,Accept-Encoding Server: Apache/2.4.6 (Ubuntu) X-Powered-By: PHP/5.5.3-1ubuntu2.3 If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively to only very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information. CacheDefaultExpire 600 CacheMinExpire 600 CacheMaxExpire 1800 CacheHeader On CacheDetailHeader On CacheIgnoreHeaders Set-Cookie CacheIgnoreCacheControl On CacheIgnoreNoLastMod On CacheStoreExpired On CacheStoreNoStore On CacheLock On CacheEnable disk /the/script.php Apache is caching the page alright: [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php? [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php? [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php? [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers. [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php [cache:debug] AH00769: cache: Caching url: /the/script.php [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter. [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached. However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straight forward solution for how to cache this obnoxious piece of code?

    Read the article

  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of Nginx. I'm using devise for my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: */* Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 401 Unauthorized Content-Type: /; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 401 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 WWW-Authenticate: Basic realm="Application" Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly X-Runtime: 0.011918 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 31 You need to sign in or sign up before continuing. 0 When the exact same URL is typed into the address bar, it does this: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 302 Found Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 302 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 Location: http://www.company.com/users/sign_in Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly X-Runtime: 0.010798 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 6f <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html> 0 I expect them to return the same thing regardless.
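    For what it's worth, the visible difference between the two requests is the Accept header: the Outlook-launched request sends Accept: */*, so Rails does not treat it as a navigational HTML request and Devise falls back to an HTTP Basic challenge (401) instead of redirecting. A hedged sketch, assuming Devise 1.x, of turning that fallback off in config/initializers/devise.rb:

        Devise.setup do |config|
          # don't answer non-HTML requests with a 401 Basic challenge; redirect instead
          config.http_authenticatable = false
        end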

    Read the article

  • How to recover Google classic design from its new design?

    - by Steven
    I typed this into my address bar: javascript:void(document.cookie="PREF=ID=20b6e4c2f44943bb:U=4bf292d46faad806:TM=1249677602:LM=1257919388:S=odm0Ys-53ZueXfZG;path=/; domain=.google.com"); However, I don't like the new design of Google. How do I switch back? How can I cancel this effect and reverse it using JavaScript?

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here? We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not login, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following: The issue affects Internet Explorer 9. The user can login from the same machine on Chrome. The user can login from an In Private browser session using IE9. The user can login if the website is added to the Trusted Sites security zone. The user can NOT login from an IE session in safe mode (started with iexplore -extoff). Only one hostname that the website responds to prevents login, the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in trusted sites zone. Series of HTTP requests in the failure case: GET request to protected page, returns a 302 FOUND response to login page. GET request to login page. POST to login page, containing credentials, returns redirect to protected page. GET request to protected page... for some reason auth fails and browser is redirected to login page, as in step 1. Other information: Operating system is Windows 7 Ultimate Edition. AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail? Update 06-Jan-2012 Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4, it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article

  • Google Chrome doesn't keep my "allow all cookies" setting

    - by jldupont
    It seems that Google Chrome doesn't keep my "allow all cookies" setting (dev 5.0.322.2) anymore. Google's sites keep on showing: Your browser's cookie functionality is turned off. Please turn it on. [?] but every time I perform the prescribed steps, Chrome doesn't keep the configuration! Update: I've deleted ~/.config/google-chrome/Default/Preferences and restarted with a clean state. Now it seems to work.

    Read the article

  • View a pdf with quick webview through apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). This site serves PDFs which have quick web view, and if I access a pdf directly from the IIS server the PDF starts to display immediately, but if I go through the proxy I have to wait until the entire pdf downloads before I can view it. In the apache config file I use ProxyPass /path/ http://xxx.xxx.xxx.xxx/ <LocationMatch "/path/"> Header set Cache-Control "no-cache" </LocationMatch> I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that view quickly make a lot of partial requests. This is the initial request and response headers GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1 Host: xxx.xxx.xxx.xxx Proxy-Connection: keep-alive Cache-Control: no-cache Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Cookie: chocolatechip HTTP/1.1 200 OK Via: 1.1 xxxxxxxx Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 15330238 Date: Mon, 25 Aug 2014 12:48:31 GMT Content-Type: application/pdf ETag: "b6262940bbecf1:0" Server: Microsoft-IIS/7.5 Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET This is a partial request and response GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1 Host: xxx.xxx.xxx.xxx Proxy-Connection: keep-alive Cache-Control: no-cache Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept: */* Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Cookie: chocolatechip Range: bytes=0-32767 HTTP/1.1 206 Partial Content Via: 1.1 xxxxxxxx Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 32768 Date: Mon, 25 Aug 2014 12:48:31 GMT Content-Range: bytes 0-32767/15330238 Content-Type: application/pdf ETag: "b6262940bbecf1:0" Server: Microsoft-IIS/7.5 Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET These are the headers I get if I go through the proxy GET /path/xxx.PDF HTTP/1.1 Host: domain:xxxx Connection: keep-alive Cache-Control: no-cache Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 HTTP/1.1 200 OK Date: Mon, 25 Aug 2014 13:28:42 GMT Server: Microsoft-IIS/7.5 Content-Type: application/pdf Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes ETag: "b6262940bbecf1:0"-gzip X-Powered-By: ASP.NET Cache-Control: no-cache Expires: Thu, 24 Aug 2017 13:28:42 GMT Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=300, max=100 Connection: Keep-Alive Transfer-Encoding: chunked I'm guessing it's because the proxy uses Transfer-Encoding: chunked but I'm not sure and wasn't able to turn it off to check. Browser Chrome 36.0.1985.143 m Using the native PDF viewer Any help to get the pdf quick web view through the proxy working would be appreciated.
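    A hedged guess at the cause, with a sketch: the proxied response is being gzip-compressed (Content-Encoding: gzip, Vary: Accept-Encoding, chunked transfer), which removes the fixed Content-Length the PDF viewer needs to issue byte-range requests, so the whole file has to arrive first. One commonly suggested adjustment is to exempt PDFs from compression on the proxy (assumes mod_setenvif and mod_deflate are in play):

        SetEnvIfNoCase Request_URI "\.pdf$" no-gzip dont-vary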

    Read the article

  • Setting http auth type in phpMyAdmin on Debian

    - by Daniel Hollands
    I'm trying to set up the fresh phpMyAdmin install on my Debian 6 server to use http authentication rather than the cookie-based auth that is default when it is installed. To do this, I edited the $cfg['Servers'][$i]['auth_type'] line to use 'http' as its setting in /etc/phpmyadmin/config.inc.php, and restarted the server, but the setting seems to be ignored, as when I go to phpMyAdmin, it is still offering up the regular login box. I've done this twice before (once on Debian and once on Ubuntu), so I'm not sure why it isn't working this time. Thank you
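    A sketch of the usual shape of that setting on Debian, for comparison; the details to check are that the line sits inside the per-server block after $i is incremented, that nothing included later (conf.d/, config-db.php) overrides it, and that the browser's existing phpMyAdmin session cookies are cleared before retesting:

        /* /etc/phpmyadmin/config.inc.php (excerpt) */
        $i++;
        $cfg['Servers'][$i]['auth_type'] = 'http';   // 'cookie' is the Debian default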

    Read the article

  • Apache certificates for some urls not working

    - by Vegaasen
    We are having a rather strange problem with a Apache-installation. Here is a short summary: Currently I'm setting up Apache with https, and server-certificates. This is fairly easy and works straight out of the box - as expected. This is the configuration for this setup: Listen 443 SSLEngine on SSLCertificateFile "/progs/apache/ssl/example-site.no.pem" SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key" SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem" SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem" SSLVerifyClient none SSLVerifyDepth 3 SSLOptions +StdEnvVars +ExportCertData RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s" RewriteEngine On ProxyPreserveHost On ProxyRequests On SSLProxyEngine On ... <LocationMatch /secureStuff/$> SSLVerifyClient require Order deny,allow Allow from All </LocationMatch> ... <Proxy balancer://exBalancer> Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness Order deny,allow Allow from all </Proxy> RewriteCond %{REQUEST_URI} !^/index.html [NC] RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC] ProxyPassReverse / balancer://exBalancer/ Header edit Set-Cookie "(.*)" "$1;HttpsOnly" ... So - everything works fine and as expected for all of the pages that are not a part of the LocationMatch-directive. When requesting something that matches the LocationMatch-directive, I'm asked for a certificate (hence the SSLVerifyClient required attribute) - and getting all the correct certificates in my browser that is based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the apache logs: [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown ( [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!] And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There is no configration differences between the servers, only minor application-wise-changes. I've tried the following: 1) Removing CA-certificate-checking (works) 2) Adding required CA-certificate for the whole site (works) 3) Adding "SSLVerifyClient optional" does not work 4) ++ Server/Application Information Local: -OpenSSL v.1.0.1x -Apache 2.4.3 -Ubuntu -mpm: event -every configuration should be turned on (failing) server: -OpenSSL 0.9.8e -Apache 2.4.2 -SunOS -mpm: worker -every configuration should be turned on Please let me know if more information is needed, I'll provide it instantly. Brief sum-up: -Running apache 2.4 -Server certificates works just fine -Client certificates for some /Locations does not work, fails with errors PS: Could it be related with the OpenSSL version and the "Renegotiation" stuff related to TLS/SSLv3?

    Read the article

  • AD Password About to Expire check problem with ASP.Net

    - by Vince
    Hello everyone, I am trying to write some code to check the AD password age during a user login and notify them of the 15 remaining days. I am using the ASP.Net code that I found on the Microsoft MSDN site and I managed to add a function that checks the if the account is set to change password at next login. The login and the change password at next login works great but I am having some problems with the check for the password age. This is the VB.Net code for the DLL file: Imports System Imports System.Text Imports System.Collections Imports System.DirectoryServices Imports System.DirectoryServices.AccountManagement Imports System.Reflection 'Needed by the Password Expiration Class Only -Vince Namespace FormsAuth Public Class LdapAuthentication Dim _path As String Dim _filterAttribute As String 'Code added for the password expiration added by Vince Private _domain As DirectoryEntry Private _passwordAge As TimeSpan = TimeSpan.MinValue Const UF_DONT_EXPIRE_PASSWD As Integer = &H10000 'Function added by Vince Public Sub New() Dim root As New DirectoryEntry("LDAP://rootDSE") root.AuthenticationType = AuthenticationTypes.Secure _domain = New DirectoryEntry("LDAP://" & root.Properties("defaultNamingContext")(0).ToString()) _domain.AuthenticationType = AuthenticationTypes.Secure End Sub 'Function added by Vince Public ReadOnly Property PasswordAge() As TimeSpan Get If _passwordAge = TimeSpan.MinValue Then Dim ldate As Long = LongFromLargeInteger(_domain.Properties("maxPwdAge")(0)) _passwordAge = TimeSpan.FromTicks(ldate) End If Return _passwordAge End Get End Property Public Sub New(ByVal path As String) _path = path End Sub 'Function added by Vince Public Function DoesUserHaveToChangePassword(ByVal userName As String) As Boolean Dim ctx As PrincipalContext = New PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Domain) Dim up = UserPrincipal.FindByIdentity(ctx, userName) Return (Not up.LastPasswordSet.HasValue) 'returns true if last password set has no value. End Function Public Function IsAuthenticated(ByVal domain As String, ByVal username As String, ByVal pwd As String) As Boolean Dim domainAndUsername As String = domain & "\" & username Dim entry As DirectoryEntry = New DirectoryEntry(_path, domainAndUsername, pwd) Try 'Bind to the native AdsObject to force authentication. Dim obj As Object = entry.NativeObject Dim search As DirectorySearcher = New DirectorySearcher(entry) search.Filter = "(SAMAccountName=" & username & ")" search.PropertiesToLoad.Add("cn") Dim result As SearchResult = search.FindOne() If (result Is Nothing) Then Return False End If 'Update the new path to the user in the directory. _path = result.Path _filterAttribute = CType(result.Properties("cn")(0), String) Catch ex As Exception Throw New Exception("Error authenticating user. 
" & ex.Message) End Try Return True End Function Public Function GetGroups() As String Dim search As DirectorySearcher = New DirectorySearcher(_path) search.Filter = "(cn=" & _filterAttribute & ")" search.PropertiesToLoad.Add("memberOf") Dim groupNames As StringBuilder = New StringBuilder() Try Dim result As SearchResult = search.FindOne() Dim propertyCount As Integer = result.Properties("memberOf").Count Dim dn As String Dim equalsIndex, commaIndex Dim propertyCounter As Integer For propertyCounter = 0 To propertyCount - 1 dn = CType(result.Properties("memberOf")(propertyCounter), String) equalsIndex = dn.IndexOf("=", 1) commaIndex = dn.IndexOf(",", 1) If (equalsIndex = -1) Then Return Nothing End If groupNames.Append(dn.Substring((equalsIndex + 1), (commaIndex - equalsIndex) - 1)) groupNames.Append("|") Next Catch ex As Exception Throw New Exception("Error obtaining group names. " & ex.Message) End Try Return groupNames.ToString() End Function 'Function added by Vince Public Function WhenExpires(ByVal username As String) As TimeSpan Dim ds As New DirectorySearcher(_domain) ds.Filter = [String].Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username) Dim sr As SearchResult = FindOne(ds) Dim user As DirectoryEntry = sr.GetDirectoryEntry() Dim flags As Integer = CInt(user.Properties("userAccountControl").Value) If Convert.ToBoolean(flags And UF_DONT_EXPIRE_PASSWD) Then 'password never expires Return TimeSpan.MaxValue End If 'get when they last set their password Dim pwdLastSet As DateTime = DateTime.FromFileTime(LongFromLargeInteger(user.Properties("pwdLastSet").Value)) ' return pwdLastSet.Add(PasswordAge).Subtract(DateTime.Now); If pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0 Then Return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now) Else Return TimeSpan.MinValue 'already expired End If End Function 'Function added by Vince Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long Dim type As System.Type = largeInteger.[GetType]() Dim highPart As Integer = CInt(type.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Dim lowPart As Integer = CInt(type.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Return CLng(highPart) << 32 Or CUInt(lowPart) End Function 'Function added by Vince Private Function FindOne(ByVal searcher As DirectorySearcher) As SearchResult Dim sr As SearchResult = Nothing Dim src As SearchResultCollection = searcher.FindAll() If src.Count > 0 Then sr = src(0) End If src.Dispose() Return sr End Function End Class End Namespace And this is the Login.aspx page: sub Login_Click(sender as object,e as EventArgs) Dim adPath As String = "LDAP://DC=xxx,DC=com" 'Path to your LDAP directory server Dim adAuth As LdapAuthentication = New LdapAuthentication(adPath) Try If (True = adAuth.DoesUserHaveToChangePassword(txtUsername.Text)) Then Response.Redirect("passchange.htm") ElseIf (True = adAuth.IsAuthenticated(txtDomain.Text, txtUsername.Text, txtPassword.Text)) Then Dim groups As String = adAuth.GetGroups() 'Create the ticket, and add the groups. Dim isCookiePersistent As Boolean = chkPersist.Checked Dim authTicket As FormsAuthenticationTicket = New FormsAuthenticationTicket(1, _ txtUsername.Text, DateTime.Now, DateTime.Now.AddMinutes(60), isCookiePersistent, groups) 'Encrypt the ticket. Dim encryptedTicket As String = FormsAuthentication.Encrypt(authTicket) 'Create a cookie, and then add the encrypted ticket to the cookie as data. 
Dim authCookie As HttpCookie = New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket) If (isCookiePersistent = True) Then authCookie.Expires = authTicket.Expiration End If 'Add the cookie to the outgoing cookies collection. Response.Cookies.Add(authCookie) 'Retrieve the password life Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) 'You can redirect now. If (passAge.Days = 90) Then errorLabel.Text = "Your password will expire in " & DateTime.Now.Subtract(t) 'errorLabel.Text = "This is" 'System.Threading.Thread.Sleep(5000) Response.Redirect("http://somepage.aspx") Else Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUsername.Text, False)) End If Else errorLabel.Text = "Authentication did not succeed. Check user name and password." End If Catch ex As Exception errorLabel.Text = "Error authenticating. " & ex.Message End Try End Sub ` Every time I have this Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) enabled, I receive "Arithmetic operation resulted in an overflow." during the login and won't continue. What am I doing wrong? How can I correct this? Please help!! Thank you very much for any help in advance. Vince
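    For reference, this "Arithmetic operation resulted in an overflow" is commonly traced to LongFromLargeInteger: CUInt(lowPart) throws whenever the low 32 bits of the LargeInteger have the sign bit set. A hedged sketch of a masking variant (untested against this exact setup):

        Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long
            Dim t As System.Type = largeInteger.GetType()
            Dim highPart As Integer = CInt(t.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing))
            Dim lowPart As Integer = CInt(t.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing))
            ' Mask instead of CUInt() so a negative lowPart cannot overflow
            Return (CLng(highPart) << 32) Or (CLng(lowPart) And &HFFFFFFFF&)
        End Function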

    Read the article

  • How to simulate browser file upload with HttpWebRequest

    - by cucicov
    Hi, guys, first of all thanks for your contributions, I've found great responses here. Yet I've ran into a problem I can't figure out and if someone could provide any help, it would be greatly appreciated. I'm developing this application in C# that could upload an image from computer to user photoblog. For this I'm usig pixelpost platform for photoblogs that is written mainly in PHP. I've searched here and on other web pages, but the exmples provided there didn't work for me. Here is what I used in my example: (http://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data) and (http://bytes.com/topic/c-sharp/answers/268661-how-upload-file-via-c-code) Once it is ready I will make it available for free on the internet and maybe also create a windows mobile version of it, since I'm a fan of pixelpost. here is the code I've used: string formUrl = "http://localhost/pixelpost/admin/index.php?x=login"; string formParams = string.Format("user={0}&password={1}", "user-String", "password-String"); string cookieHeader; HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(formUrl); req.ContentType = "application/x-www-form-urlencoded"; req.Method = "POST"; req.AllowAutoRedirect = false; byte[] bytes = Encoding.ASCII.GetBytes(formParams); req.ContentLength = bytes.Length; using (Stream os = req.GetRequestStream()) { os.Write(bytes, 0, bytes.Length); } HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); cookieHeader = resp.Headers["Set-Cookie"]; string pageSource; using (StreamReader sr = new StreamReader(resp.GetResponseStream())) { pageSource = sr.ReadToEnd(); Console.WriteLine(); } string getUrl = "http://localhost/pixelpost/admin/index.php"; HttpWebRequest getRequest = (HttpWebRequest)HttpWebRequest.Create(getUrl); getRequest.Headers.Add("Cookie", cookieHeader); HttpWebResponse getResponse = (HttpWebResponse)getRequest.GetResponse(); using (StreamReader sr = new StreamReader(getResponse.GetResponseStream())) { pageSource = sr.ReadToEnd(); } // end first part: login to admin panel long length = 0; string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x"); HttpWebRequest httpWebRequest2 = (HttpWebRequest)WebRequest.Create("http://localhost/pixelpost/admin/index.php?x=save"); httpWebRequest2.ContentType = "multipart/form-data; boundary=" + boundary; httpWebRequest2.Method = "POST"; httpWebRequest2.AllowAutoRedirect = false; httpWebRequest2.KeepAlive = false; httpWebRequest2.Credentials = System.Net.CredentialCache.DefaultCredentials; httpWebRequest2.Headers.Add("Cookie", cookieHeader); Stream memStream = new System.IO.MemoryStream(); byte[] boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n"); string formdataTemplate = "\r\n--" + boundary + "\r\nContent-Disposition: form-data; name=\"{0}\";\r\n\r\n{1}"; string formitem = string.Format(formdataTemplate, "headline", "image-name"); byte[] formitembytes = System.Text.Encoding.UTF8.GetBytes(formitem); memStream.Write(formitembytes, 0, formitembytes.Length); memStream.Write(boundarybytes, 0, boundarybytes.Length); string headerTemplate = "\r\nContent-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"\r\nContent-Type: application/octet-stream\r\n\r\n"; string header = string.Format(headerTemplate, "userfile", "path-to-the-local-file"); byte[] headerbytes = System.Text.Encoding.UTF8.GetBytes(header); memStream.Write(headerbytes, 0, headerbytes.Length); FileStream fileStream = new FileStream("path-to-the-local-file", FileMode.Open, FileAccess.Read); 
byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0) { memStream.Write(buffer, 0, bytesRead); } memStream.Write(boundarybytes, 0, boundarybytes.Length); fileStream.Close(); httpWebRequest2.ContentLength = memStream.Length; Stream requestStream = httpWebRequest2.GetRequestStream(); memStream.Position = 0; byte[] tempBuffer = new byte[memStream.Length]; memStream.Read(tempBuffer, 0, tempBuffer.Length); memStream.Close(); requestStream.Write(tempBuffer, 0, tempBuffer.Length); requestStream.Close(); WebResponse webResponse2 = httpWebRequest2.GetResponse(); Stream stream2 = webResponse2.GetResponseStream(); StreamReader reader2 = new StreamReader(stream2); Console.WriteLine(reader2.ReadToEnd()); webResponse2.Close(); httpWebRequest2 = null; webResponse2 = null; and also here is the PHP: (http://dl.dropbox.com/u/3149888/index.php) and (http://dl.dropbox.com/u/3149888/new_image.php) the mandatory fields are headline and userfile so I can't figure out where the mistake is, as the format sent in right. I'm guessing there is something wrong with the octet-stream sent to the form. Maybe it's a stupid mistake I wasn't able to trace, in any case, if you could help me that would mean a lot. thanks,
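    One detail worth checking in the request body built above: a multipart/form-data body must end with a closing boundary of the form --boundary-- plus CRLF, and the code only ever writes the intermediate "\r\n--" + boundary + "\r\n" separator after the file part. A hedged sketch of the missing trailer, reusing the same variable names (write it in place of the final boundarybytes write, before ContentLength is computed):

        // terminate the multipart body; without the trailing "--" many servers (PHP included) drop the last part
        byte[] trailer = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "--\r\n");
        memStream.Write(trailer, 0, trailer.Length);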

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross Site Scripting) and SQL Injection, Cross-site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities to common web applications today. CSRF attacks are probably the least well known but they are relatively easy to exploit and extremely and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson. The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you’ve stored in the visitor’s Session collection or into a Cookie (so an attacker can't guess the value). ASP.NET MVC to the rescue ASP.NET MVC provides an HTMLHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page you will get a hidden input and a Cookie with a random string assigned. Next, on your target Action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied. Good, but we can do better Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse can be easily forgotten. Let's see if we can make this easier on the program but moving from an "Opt-In" model of protection to an "Opt-Out" model. Using AntiForgeryToken by default In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need to create a way to Opt-Out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken: [AttributeUsage(AttributeTargets.Method, AllowMultiple=false)] public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute { } Now we are ready to implement the main action filter which will force anti forgery validation on all post actions within any class it is defined on: [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)] public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { if (ShouldValidateAntiForgeryTokenManually(filterContext)) { var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);   //Use the authorization of the anti forgery token, //which can't be inhereted from because it is sealed new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext); }   base.OnActionExecuting(filterContext); }   /// <summary> /// We should validate the anti forgery token manually if the following criteria are met: /// 1. The http method must be POST /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action /// 3. There is no [BypassAntiForgeryToken] attribute on the action /// </summary> private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext) { var httpMethod = filterContext.HttpContext.Request.HttpMethod;   //1. The http method must be POST if (httpMethod != "POST") return false;   // 2. There is not an existing anti forgery token attribute on the action var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);   if (antiForgeryAttributes.Length > 0) return false;   // 3. 
There is no [BypassAntiForgeryToken] attribute on the action var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);   if (ignoreAntiForgeryAttributes.Length > 0) return false;   return true; } } The code above is pretty straight forward -- first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryTokenAttributes on the action being executed. If we have a candidate then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now on our base controller, you could use this new attribute to start protecting your site from CSRF vulnerabilities. [UseAntiForgeryTokenOnPostByDefault] public class ApplicationController : System.Web.Mvc.Controller { }   //Then for all of your controllers public class HomeController : ApplicationController {} What we accomplished If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities, but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers) and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options to set up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in, or even not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components, it is meant to describe the component as much as needed to understand OpenStack network architecture. All the components described here can be further explored using other resources. 
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation as well as simply help to organize a complicated network setup. Using the Oracle OpenStack Tech Preview we are using the latest Unbreakable Enterprise Kernel R3 (UEK3), this kernel provides a complete support for netns. Let's see how namespaces work through couple of examples to control network namespaces we use the ip netns command: Defining a new namespace: # ip netns add my-ns # ip netns list my-ns As mentioned the namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command for example running the ifconfig command: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:16436 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) We can run every command in the namespace context, this is especially useful for debug using tcpdump command, we can ping or ssh or define iptables all within the namespace. Connecting the namespace to the outside world: There are various ways to connect into a namespaces and between namespaces we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces and then we can add those interfaces to namespace. So first let's add a bridge to OVS: # ovs-vsctl add-br my-bridge Now let's add a port on the OVS and make it internal: # ovs-vsctl add-port my-bridge my-port # ovs-vsctl set Interface my-port type=internal And let's connect it into the namespace: # ip link set my-port netns my-ns Looking inside the namespace: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:65536 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21           BROADCAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) Now we can add more ports to the OVS bridge and connect it to other namespaces or other device like physical interfaces. Neutron is using network namespaces to implement network services such as DCHP, routing, gateway, firewall, load balance and more. In the next post we will go into this in further details. Linux Bridge and veth pairs Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is for security groups’ enforcement. Security groups are implemented using iptables and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. Veth pairs are simply a virtual wire and so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge or simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example is using regular Linux server and not an OpenStack node: Creating a veth pair, note that we define names for both ends: # ip link add veth0 type veth peer name veth1 # ifconfig -a . . veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) . . To make the example more meaningful this we will create the following setup: veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3 eth3 – a physical interface with no IP on it, connected to a private network eth2 – a physical interface on the remote Linux box connected to the private network and configured with the IP of 50.50.50.1 Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up: # brctl addbr br-eth3 # brctl addif br-eth3 eth3 # brctl addif br-eth3 veth1 # brctl show bridge name     bridge id               STP enabled     interfaces br-eth3         8000.00505682e7f6       no              eth3                                                         veth1 # ifconfig veth0 50.50.50.50 # ping -I veth0 50.50.50.51 PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data. 64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms 64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms When the naming is not as obvious as the previous example and we don't know who are the paired veth interfaces we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using ip link command, for example: # ethtool -S veth1 NIC statistics: peer_ifindex: 12 # ip link . . 12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 Summary That’s all for now, we quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • Absent Code attribute in method that is not native or abstract

    - by kerry
    I got the following, quite puzzling error today when running a unit test: java.lang.ClassFormatError: Absent Code attribute in method that is not native or abstract in class file javax/servlet/http/Cookie A Google search found this post, which explains that it is caused by having an interface in the classpath, and not an actual implementation. In this case it's the java-ee interface. To fix this I added the jetty servlet api implementation to my pom: jetty javax.servlet test Piece of cake. I have run into this before, so I figured I would capture the fix here in case I run into it again.
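    The inline pom snippet above has lost its XML markup; a plausible reconstruction from the remaining words (groupId jetty, artifactId javax.servlet, test scope - the exact coordinates and version are assumptions):

        <dependency>
            <groupId>jetty</groupId>
            <artifactId>javax.servlet</artifactId>
            <!-- version not shown in the excerpt -->
            <scope>test</scope>
        </dependency>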

    Read the article

  • Mobile redirect strategy

    - by Kevin
    Looking for help on deciding how to redirect users to a mobile optimized version of my site (m.mysite.com). Looking at two methods: Server configuration (.htaccess or even varnish) Webapp (php) The problem I see with #1 is with the "view full site" link on the mobile site. If a user clicks that link and they go to mysite.com won't the server just redirect them back to m.mysite.com? For #2 I could create a cookie that is checked in addition to the user agent. Any suggestions/comments? Is there a better way to "remember" if the user clicked "visit full site"? Thanks, Kevin
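    A minimal PHP sketch of option 2, with assumed names (cookie 'fullsite', hostname m.mysite.com, query flag ?fullsite=1 on the "view full site" link); doing the check in the app avoids the loop where a server-level rule would bounce mysite.com straight back to the mobile host:

        <?php
        // remember an explicit "view full site" choice for 30 days
        if (isset($_GET['fullsite'])) {
            setcookie('fullsite', '1', time() + 30 * 86400, '/', '.mysite.com');
            $_COOKIE['fullsite'] = '1';
        }
        $wantsFullSite = isset($_COOKIE['fullsite']) && $_COOKIE['fullsite'] === '1';
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $looksMobile = preg_match('/Mobile|Android|iPhone|iPod|BlackBerry/i', $ua);
        if ($looksMobile && !$wantsFullSite) {
            header('Location: http://m.mysite.com' . $_SERVER['REQUEST_URI']);
            exit;
        }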

    Read the article

  • Integrating PHPBB With DokuWiki

    - by Christofian
    I have a site with a phpbb forum. I'm working on adding a dokuwiki wiki to the site, and I would like to integrate the two. The phpbb forum is running phpbb 3.0.8, and the dokuwiki installation is running 2011-05-25a "Rincewind". Basically what I want is: for users to be able to log into the dokuwiki wiki with their phpbb account. Users should only be able to register from the phpbb registration interface. Cookie integration would be nice, but is not required. Is there any way to do this?
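    A hedged sketch of the DokuWiki side of this, in conf/local.php: 'authtype' and 'disableactions' are standard DokuWiki settings, but the backend name below is an assumption and depends on whichever phpBB auth plugin/backend ends up being installed:

        <?php
        $conf['authtype']       = 'phpbb3';     // name of the phpBB auth backend (assumption)
        $conf['disableactions'] = 'register';   // registration happens in phpBB only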

    Read the article

  • HTTPS on all pages where user is logged on

    - by Tom Gullen
    I know this is considered best practice to prevent cookie hijacking. I would like to adopt this approach, but ran across a problem on our forum where the users post images which either aren't posted with URLs over HTTPS or the URL itself doesn't support HTTPS. This throws up a lot of ugly browser warnings. I see I have two options: Disable HTTPS for the forum Force all user-posted content to start with // in the URL so it selects the right protocol; if it doesn't support HTTPS, so be it Do I have any other options? How do other sites deal with this?
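    A minimal sketch of option 2 (variable names are assumptions): rewrite user-posted image URLs to protocol-relative form at render time, which only helps for hosts that actually answer on https:

        <?php
        // turn src="http://host/img.png" / src="https://host/img.png" into src="//host/img.png"
        $html = preg_replace('~\bsrc="https?://~i', 'src="//', $html);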

    Read the article
