Search Results

Search found 10670 results on 427 pages for 'session cookie'.

Page 225 of 427

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand up against attacks better? Are there plugins? I'm looking for a way to rate limit and remain up without slowing down. My Setup: user nobody; # no need for more workers in the proxy mode worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; worker_priority -2; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 40480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 9; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 194.145.208.19:80; server_name ipxnow.in www.ipxnow.in; access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log; access_log /usr/local/apache/domlogs/ipxnow.in combined; root /home/ipxnowin/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location @backend { internal; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ /\.ht { deny all; } } and proxy.inc: proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
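
    One way to get the rate limiting asked for above is nginx's stock limit_req / limit_conn modules rather than a plugin. The sketch below is illustrative only: the zone names and limits are made up, not taken from the config above, and limit_conn_zone needs nginx 1.1.8+ (older builds use the limit_zone directive instead).

      # inside the http { } block
      limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
      limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

      # inside the server { } or location { } blocks that proxy to the backend
      limit_req  zone=req_per_ip burst=20 nodelay;
      limit_conn conn_per_ip 20;

    Requests over the limit are answered with an error (503 by default) instead of being queued up behind the proxy, which is what keeps the workers free during a flood.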

    Read the article

  • What cookies are allowed from Internet Explorer using this option?

    - by kiamlaluno
    When I look at the cookie options in Internet Explorer, I read the following description: "Blocca i cookie di terze parti privi di una versione compatta dell'informativa sulla privacy." Its translation is roughly: "Blocks third-party cookies that do not include a compact version of the privacy policy." I don't get which cookies are blocked. From that description, it seems the cookies should include a compact description of the privacy policy, but I don't see how cookies can contain that information, or what happens with cookies set by sites outside the European Union. What cookies are blocked? The screenshot was taken from Internet Explorer 9, but the same wording was also used in previous versions. The settings are the ones shown on the "Privacy" tab; I don't recall whether the English version calls that tab the same thing. EDIT: English screenshot.

    Read the article

  • How does this main domain have a CNAME record?

    - by TRiG
    I was under the impression that only subdomains could have CNAME records: main domains need to define all their own records. However, apt-get.com seems to have only a CNAME record. How can this work? $ dig apt-get.com ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45743 ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;apt-get.com. IN A ;; ANSWER SECTION: apt-get.com. 86336 IN CNAME thie5ku9.dsgeneration.com. thie5ku9.dsgeneration.com. 60 IN A 208.73.211.242 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.246 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.166 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.232 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.161 thie5ku9.dsgeneration.com. 60 IN A 208.73.210.233 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.186 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.188 ;; Query time: 59 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Tue Jun 10 15:05:48 2014 ;; MSG SIZE rcvd: 193 $ dig apt-get.com ns ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43831 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;apt-get.com. IN NS ;; Query time: 26 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Tue Jun 10 15:12:37 2014 ;; MSG SIZE rcvd: 29 $ dig apt-get.com ns @b.gtld-servers.net ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns @b.gtld-servers.net ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38228 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;apt-get.com. IN NS ;; AUTHORITY SECTION: apt-get.com. 172800 IN NS ns1.domainrecover.com. apt-get.com. 172800 IN NS ns2.domainrecover.com. ;; ADDITIONAL SECTION: ns1.domainrecover.com. 172800 IN A 66.45.232.66 ns2.domainrecover.com. 172800 IN A 65.23.159.179 ;; Query time: 70 msec ;; SERVER: 192.33.14.30#53(192.33.14.30) ;; WHEN: Tue Jun 10 15:07:05 2014 ;; MSG SIZE rcvd: 111 The domain does resolve. 
I get the following headers: GET / HTTP/1.1 User-Agent: Testing_Sniffer/4.15 Host: apt-get.com Accept: */* HTTP/1.0 200 (OK) Cache-Control: private, no-cache, must-revalidate Connection: Keep-Alive Pragma: no-cache Server: Oversee Turing v1.0.0 Content-Length: 1347 Content-Type: text/html Expires: Mon, 26 Jul 1997 05:00:00 GMT Keep-Alive: timeout=3, max=96 P3P: policyref="http://www.dsparking.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA" Set-Cookie: parkinglot=1; domain=.apt-get.com; path=/; expires=Wed, 11-Jun-2014 14:10:37 GMT <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd"> <!-- turing_cluster_prod --> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>apt-get.com</title> <meta name="keywords" content="apt-get.com" /> <meta name="description" content="apt-get.com" /> <meta name="robots" content="index, follow" /> <meta name="revisit-after" content="10" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <script type="text/javascript"> document.cookie = "jsc=1"; </script> </head> <frameset rows="100%,*" frameborder="no" border="0" framespacing="0"> <frame src="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A" name="apt-get.com"> </frameset> <noframes> <body><a href="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A">Click here to go to apt-get.com</a>.</body> </noframes> </html>

    Read the article

  • Forms Authentication across Sub-Domains on local IIS

    - by Parminder
    I asked this question at SO http://stackoverflow.com/questions/8278015/forms-nauthentication-across-sub-domains-on-local-iis and am now asking it here. I know a cookie can be shared across multiple subdomains using the setting <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation" timeout="120" path="/" domain=".mydomain.com"/> in Web.config. But how do I replicate the same thing on a local machine? I am using Windows 7 and IIS 7 on my laptop, so I have the site localhost.users/ standing in for my actual site users.mysite.com, localhost.host/ for host.mysite.com, and so on.
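
    A possible local equivalent, sketched under the assumption that you control the hosts file and both IIS bindings (the .local names below are made up, not part of the original question): give both sites host names that share a parent domain, point them at 127.0.0.1, and set the forms cookie domain to that shared parent.

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1   users.mysite.local
      127.0.0.1   host.mysite.local

      <!-- Web.config on both sites -->
      <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation"
             timeout="120" path="/" domain=".mysite.local" />

    Bind each IIS site to its host header, and give both sites the same <machineKey> so a ticket issued by one site validates on the other.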

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my SERVERS ip. Is this an nginx problem? /etc/nginx.conf user nobody; # no need for more workers in the proxy mode worker_processes 4; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 20480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 63.6.1.12:80; server_name photo-rolldomain.com www.domain.com; access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log; access_log /usr/local/apache/domlogs/domain.com combined; root /home/mtech/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location @backend { internal; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ /\.ht { deny all; } }
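
    This is usually not an nginx problem: the proxy.inc above already sends the real client address in X-Forwarded-For, but Apache logs the address of the TCP peer (the server itself) unless a module substitutes the header value. A hedged sketch of the Apache side, assuming Apache 2.4 with mod_remoteip (or mod_rpaf on 2.2) and reusing the proxy address from the vhost above:

      # Apache 2.4 (mod_remoteip)
      LoadModule remoteip_module modules/mod_remoteip.so
      RemoteIPHeader X-Forwarded-For
      RemoteIPInternalProxy 63.6.1.12

      # Apache 2.2 equivalent (mod_rpaf)
      # RPAFenable On
      # RPAFproxy_ips 63.6.1.12
      # RPAFheader X-Forwarded-For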

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert. $ curl --verbose --capath ./certs/ --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: ./certs/ * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: A Verisign intermediate cert Two self-signed certs The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert. $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: ./certs/verisign-intermediate-ca.crt CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using RC4-SHA * Server certificate: * subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com * start date: 2011-04-17 00:00:00 GMT * expire date: 2012-04-15 23:59:59 GMT * common name: example.com (matched) * issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3 * SSL certificate verify ok. 
> HEAD / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > < HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found < Cache-Control: must-revalidate,no-cache,no-store Cache-Control: must-revalidate,no-cache,no-store < Content-Type: text/html;charset=ISO-8859-1 Content-Type: text/html;charset=ISO-8859-1 < Content-Length: 1267 Content-Length: 1267 < Server: Jetty(7.2.2.v20101205) Server: Jetty(7.2.2.v20101205) < * Connection #0 to host example.com left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): In addition, if I try hitting one of the sites using a self signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hash. Finally, I am able to verify the SSL cert with openssl, using its -CApath option. $ openssl s_client -CApath ./certs/ -connect example.com:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify return:1 depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 verify return:1 depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 verify return:1 depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com verify return:1 --- Certificate chain 0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- Server certificate -----BEGIN CERTIFICATE----- <cert removed> -----END CERTIFICATE----- subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- No client certificate CA names sent --- SSL handshake has read 1563 bytes and written 435 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-SHA Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717 Session-ID-ctx: Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2 Key-Arg : None Start Time: 1303258052 Timeout : 300 (sec) Verify return code: 0 (ok) --- QUIT DONE How can I get curl to verify this cert using the --capath option?
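
    One frequent cause of this exact symptom is a hash mismatch: the curl above links against OpenSSL 0.9.8, and a certs directory rehashed with an OpenSSL 1.x c_rehash gets new-style subject hashes that a 0.9.8 client never looks up. A hedged check, reusing the intermediate file name from the --cacert run:

      # print the hash curl's OpenSSL will look for (on a 1.x openssl, -subject_hash_old
      # gives the 0.9.8-style value), then make sure <hash>.0 exists in ./certs/
      openssl x509 -noout -hash -in ./certs/verisign-intermediate-ca.crt
      ln -s verisign-intermediate-ca.crt \
        "./certs/$(openssl x509 -noout -hash -in ./certs/verisign-intermediate-ca.crt).0"

    If the openssl binary on the box is a different generation from the one curl was built with, the two will disagree on the link name, which would also explain why openssl s_client verifies fine while curl does not.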

    Read the article

  • Files downloaded via Firefox and curl have different sizes

    - by Arash Mousavi
    When I download a file from this link with Firefox its size is 74580 B, but when I download it with curl, sending exactly the same headers Firefox sent, its size is 79891 B (I copied all the headers from Firefox and pasted them into the curl command). What is the problem? If you need any additional data, ask me in a comment. My curl command: curl --header 'Host: members.tsetmc.com' --header 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' --header 'Accept-Language: en-US,en;q=0.5' --header 'Referer: http://www.tsetmc.com/Loader.aspx?ParTree=15131F' --header 'Cookie: ASP.NET_SessionId=pwzbckbdpjlzqj45vcdbd455' --header 'Connection: keep-alive' 'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0' -o 'MarketWatchPlus-1393_3_14.xlsx' -L
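
    A hedged way to narrow this down (same URL as above; the copied ASP.NET_SessionId and other headers may still be needed, and that session has likely expired by now): let curl decode content encoding the way a browser does with --compressed, and fetch twice to see whether the export itself changes between requests.

      # if a.xlsx and b.xlsx already differ from each other, the file is generated
      # dynamically and the Firefox/curl size gap is not a header problem at all
      curl -sL --compressed -o a.xlsx 'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0'
      curl -sL --compressed -o b.xlsx 'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0'
      ls -l a.xlsx b.xlsx && cmp a.xlsx b.xlsx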

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working) that I'd like to use to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security. I will only need to use this for a few hours to debug, and I have root on the machine. In other words, I'm hoping there's something as easy to use as this: simple_http_server -p 12445 -f my_test_file I'm aware of Python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
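
    A minimal sketch of such a server in Python, assuming that answering every GET or POST with the canned reply is enough (the port and file name come from the wished-for command line above; the Content-Type is a guess for an XML-RPC reply):

      #!/usr/bin/env python3
      # one_file_server.py PORT FILE -- serve FILE for every request, whatever the
      # path, method, or cookies. (On the Python 2 of that era the imports come
      # from BaseHTTPServer instead of http.server.)
      import sys
      from http.server import BaseHTTPRequestHandler, HTTPServer

      PORT, FILENAME = int(sys.argv[1]), sys.argv[2]

      class OneFileHandler(BaseHTTPRequestHandler):
          def _reply(self):
              # read the canned reply fresh each time so it can be edited while running
              with open(FILENAME, 'rb') as f:
                  body = f.read()
              self.send_response(200)
              self.send_header('Content-Type', 'text/xml')
              self.send_header('Content-Length', str(len(body)))
              self.end_headers()
              self.wfile.write(body)

          def do_GET(self):
              self._reply()

          def do_POST(self):
              self._reply()

      if __name__ == '__main__':
          HTTPServer(('', PORT), OneFileHandler).serve_forever()

    Run as: python3 one_file_server.py 12445 my_test_file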

    Read the article

  • Sticky Load Balancing with AWS

    - by John Wheal
    I have just set up a load balancer with AWS for a few instances, as search engine crawlers were bringing down the site (it has millions of pages). Parts of the site allow you to log in, so I selected Enable Application Generated Cookie Stickiness and everything works fine. I now wonder how this will affect my SEO and the crawlers. Since I selected sticky load balancing, does this mean that a crawler will be stuck on one server and therefore defeat the point of the load balancer? Any recommendations would be appreciated.

    Read the article

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to do so when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions. All settings are defaults. Nor is the incognito mode being used. This occurs on all my computers using Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarly work fine). Of note is that when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once will fail (does nothing; no errors), then it will work on the second attempt. If I refresh the window before doing so, it will work on the first attempt.

    Read the article

  • Apache ProxyPass/ProxyPassReverse to IIS

    - by Dana
    We have an ASP.NET web application which is mapped to a folder on an Apache-hosted PHP site using ProxyPass/ProxyPassReverse. A couple of problems are being encountered. Cookies are being lost, which breaks the site navigation; this can be overcome by setting the ASP app as cookieless. Forms authentication is used on the ASP site, and this is also broken with the ProxyPass in place; I suspect this is cookie-related as well. The ASP site works OK when run from its own domain/IP address. Use of a separate domain / sub-domain is not an option due to client requirements.
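
    Since the application sits under a folder of the Apache site, the cookies coming back from IIS will usually carry the backend's own path (and possibly domain), which the browser then refuses to send back through the proxy. Apache can rewrite both on the response; a hedged sketch, with /app and iis.internal standing in for the real folder and backend host:

      ProxyPass        /app/ http://iis.internal/
      ProxyPassReverse /app/ http://iis.internal/
      ProxyPassReverseCookieDomain iis.internal www.example.com
      ProxyPassReverseCookiePath   /            /app/

    If that brings the session cookie back, the forms-authentication cookie should come back to life too, since it is lost for the same reason.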

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view the windows on the RHEL3 box. I believe this uses xauth and magic cookies? I have followed the X-Windows HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to be able to use the X-Windows server on the RHEL3 box with a client program on the Solaris box (program running on the Solaris host, windows appearing on the Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and the magic cookie?
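
    For the manual xauth route (as opposed to simply trying ssh -X from the Solaris box), the cookie for the RHEL3 display has to end up in the Solaris user's ~/.Xauthority under the display name the client will actually connect to, and the RHEL3 X server has to accept TCP connections (many Linux distributions start X with -nolisten tcp by default). A hedged sketch, with host names made up for illustration:

      # on the RHEL3 box, read the cookie for the local display
      rhel3$ xauth list $DISPLAY
      rhel3/unix:0  MIT-MAGIC-COOKIE-1  <hex cookie>

      # on the Solaris box, add the same cookie under the network display name
      sol8$ /usr/openwin/bin/xauth add rhel3-box:0 MIT-MAGIC-COOKIE-1 <hex cookie>
      sol8$ DISPLAY=rhel3-box:0.0; export DISPLAY
      sol8$ /usr/openwin/bin/xclock &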

    Read the article

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to this serverfault blog post: A Few Speed Improvements where it talks about how static content for stackexchange is served from a separate cookieless domain... How would someone go about doing this on IIS7.5 for an ASP.NET MVC site? The plan so far: Register a domain e.g. static.com, create a new website in IIS Manually copy the js / css / images folders from MVC as is so that they have the same paths on the new server Enable IIS gzip settings (js/css = high compression, images = none) Set caching with far future expiry dates <clientCache cacheControlCustom="public" /> in the web.config Never set any cookies on the static.com site Combine and minify js / css Auto deploy changes in static content with WebDeploy Is this plan correct? And how can you use WebDeploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache: Creating a cookie-free domain to serve static content so it doesn't apply
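
    The caching and compression steps of that plan can live in the static site's own web.config, which also makes them deployable with the rest of the content; a hedged sketch (the one-year max-age is an arbitrary choice, not a requirement):

      <configuration>
        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge"
                         cacheControlMaxAge="365.00:00:00"
                         cacheControlCustom="public" />
          </staticContent>
          <urlCompression doStaticCompression="true" doDynamicCompression="false" />
        </system.webServer>
      </configuration>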

    Read the article

  • Blocking facebook's Like button in firefox

    - by Quiark
    Many sites today use widgets from facebook such as the Like button, list of friends who are fans of that site and so on. While it may be a nice feature, I perceive it to be a serious privacy intrusion, because facebook most likely stores information about which sites you visit. I also heard that when you are not logged into facebook, it still tracks the sites you visit (probably with a cookie) and once you log in attaches the data to your real account. For now, I want to keep using facebook, but I would like to block just these widgets so it can't track me. Is there any Firefox extension which could do that?
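
    One low-tech option, assuming an Adblock Plus style filter extension is acceptable: block Facebook's domains only when they are loaded as third-party resources, so facebook.com itself keeps working while embedded Like buttons and friend boxes on other sites never load. The filter lines below are a sketch of that idea:

      ! block Facebook widgets only when embedded on other sites
      ||facebook.com^$third-party
      ||facebook.net^$third-party
      ||fbcdn.net^$third-party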

    Read the article

  • Changing location in Google Chrome when searching

    - by Alex
    I've recently moved to the Czech Republic from Scotland and I can't find a way to permanently stop Google from automatically defaulting back to google.cz all the time. I've checked to ensure that all my Google accounts and cookie-based settings (e.g. Advanced Search Options) are set to English, but it's still clearly doing an IP address lookup and disregarding everything else. The default search engine URL for Google Chrome (which switches to google.cz automatically) is: {google:baseURL}search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s I've tried hardcoding it to: http://www.google.com/search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s This kind of works, but it won't work for inline searching, i.e. I always have to press Enter in order to get any results, which is a bit annoying as I've gotten so used to AJAX-style searching. I can't be the only one to have this issue. Any help is appreciated.

    Read the article

  • nginx rewrite base url

    - by ptn777
    I would like the root URL http://www.example.com to redirect to http://www.example.com/something/else. This is because some weird WP plugin always sets a cookie on the base URL, which doesn't let me cache it. I tried this directive: location / { rewrite ^ /something/else break; } But 1) there is no redirect and 2) pages start shooting more than 1,000 requests to my server. With this one: location / { rewrite ^ http://www.example.com/something/else break; } Chrome reports a redirect loop. What's the correct regexp to use?
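
    A hedged sketch of the usual fix: match only the exact root URI, so the redirect target itself is never rewritten again (which is what produces both the request storm and the loop above):

      # redirect only "/" itself; /something/else falls through to the normal locations
      location = / {
          return 301 /something/else;
      }

      # equivalent on older nginx versions:
      # rewrite ^/$ /something/else permanent;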

    Read the article

  • IPv6 scope id issue with IE

    - by eych
    I have an ASP.NET website that works with Firefox because FF doesn't need the % in the scope-id to be encoded (%25). The website also works on the same machine using IE because I can leave out the scope-id. However, to access the website from another machine in the network, I need to add the scope-id to the IPv6 address. For some reason, using the scope-id doesn't allow an authentication cookie to be created, and the website keeps going back to the login page. Anyone using IE7+ to access an ASP.NET website on a network using IPv6 with an encoded %?

    Read the article

  • WGet a Page that Requires Logging in

    - by Synetech inc.
    I’m trying to figure out a way to use WGET or a similar tool so that I can schedule a web page to be downloaded regularly as a sort of updating log. The problem is that the page requires that I be logged in, otherwise I get a different, generic page. Further, the page does not take login information as GET parameters in the URL; it uses POST to log in on the login page and cookies to save the login information that’s read by the regular page. I’m currently using GNU Wget 1.10.2 for Windows. I’ve tried using WGET’s cookie functionality but have had mixed results, usually skewing towards it not working. Can anyone please advise on a way to accomplish this? Thanks a lot.
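
    A hedged sketch of the usual two-step wget approach (the URL and form field names below are made up; they have to match the site's real login form, which the page source or a network sniff will show):

      # 1) log in once, POSTing the form fields and saving the cookies the site sets
      wget --keep-session-cookies --save-cookies cookies.txt \
           --post-data "username=me&password=secret" \
           -O login-response.html http://example.com/login.php

      # 2) the scheduled fetch replays those cookies against the real page
      wget --load-cookies cookies.txt -O todays-log.html http://example.com/members/log-page.html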

    Read the article

  • Is it possible with Google searches to ban any and all results from a domain? [closed]

    - by Stu Thompson
    Is it possible to configure Google somehow to permanently ban search results from domains that I know 100% are never, ever going to make me happy? Something cookie/session based maybe? E.g. I want to ban (permanently, forever and always) results from experts-exchange.com. Every time I click results that take me to their page I just want to scream. Update! Google has released a Chrome Extension to allow users to block individual site from Google search results! Personal Blocklist (by Google). (Since this question has been closed, I cannot answer it.)

    Read the article

  • NHibernate LINQ query throws error "Could not resolve property"

    - by Xorandor
    I'm testing out using LINQ with NHibernate but have run into some problems with resolving string.length. I have the following public class DC_Control { public virtual int ID { get; private set; } public virtual string Name { get; set; } public virtual bool IsEnabled { get; set; } public virtual string Url { get; set; } public virtual string Category { get; set; } public virtual string Description { get; set; } public virtual bool RequireScriptManager { get; set; } public virtual string TriggerQueryString { get; set; } public virtual DateTime? DateAdded { get; set; } public virtual DateTime? DateUpdated { get; set; } } public class DC_ControlMap : ClassMap<DC_Control> { public DC_ControlMap() { Id(x => x.ID); Map(x => x.Name).Length(128); Map(x => x.IsEnabled); Map(x => x.Url); Map(x => x.Category); Map(x => x.Description); Map(x => x.RequireScriptManager); Map(x => x.TriggerQueryString); Map(x => x.DateAdded); Map(x => x.DateUpdated); } } private static ISessionFactory CreateSessionFactory() { return Fluently.Configure() .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2008) .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())) .ExposeConfiguration(c => c.SetProperty("connection.connection_string", "CONNSTRING")) .ExposeConfiguration(c => c.SetProperty("proxyfactory.factory_class", "NHibernate.ByteCode.Castle.ProxyFactoryFactory,NHibernate.ByteCode.Castle")) .BuildSessionFactory(); } public static void test() { using (ISession session = sessionFactory.OpenSession()) { var sqlQuery = session.CreateSQLQuery("select * from DC_Control where LEN(url) > 80").AddEntity(typeof(DC_Control)).List<DC_Control>(); var linqQuery= session.Linq<DC_Control>().Where(c => c.Url.Length > 80).ToList(); } } In my test method I first try and perform the query using SQL, this works just fine. Then I want to do the same thing in LINQ, and it throws the following error: NHibernate.QueryException: could not resolve property: Url.Length of: DC_Control I've searched alot for this "could not resolve property" error, but I can't quite figure out, what this means. Is this because the LINQ implementation is not complete? If so it's a bit disappointing coming from Linq2Sql where this would just work. I also tried it setting up the mapping with a hbm.xml instead of using FluentNHibernate but it produced teh same error.

    Read the article

  • Trying to link http://www.example.com to my shopping cart on https://secure.example.com

    - by Pickledegg
    Heres my saga - I'm trying to link http://www.example.com to my shopping cart on https://secure.example.com, but it doesnt seem to be linking correctly. Heres my code: <!--Google Analytics --> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-125xxxxx-1"); //start cart link pageTracker._setDomainName(".example.com"); pageTracker._setAllowHash(false); //end cart link pageTracker._trackPageview(); } catch(err) {}</script> <!--Google Analytics --> Notice the two lines: pageTracker._setDomainName(".example.com"); pageTracker._setAllowHash(false); I added the first line so I could share the cookies between site and cart, and added the setAllowHash to make sure it used the utm values from the cookie, and didnt 'recreate' them when I entered https://secure.example.com. Using firecookie, it does indeed share the same cookie between site and cart, and the cookies domain is 'example.com'. I'm pretty sure though that if it was working right, all my utmz, utma values etc should be copied over and remain the same, but they're changing. I've copied all the params that are being sent to google analytics and pasted then below. It shows what is happening from my homepage, to my product page, then into my cart all the way to the page before ordering. ( I can't practically test the final page myself without buying something, so I'll post the code from our confirmation page later if needed.) Here goes: =============================================================== HOMEPAGE - http://www.example.com ---------------------------------------------------------------------------------------- utmac UA-125xxxxx-1 utmcc __utma=1.1920057171.1269446996.1269446996.1269446996.1;+__utmz=1.1269446996.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); utmcs UTF-8 utmdt GSM Cell Phone Rental from example utmfl 10.0 r45 utmhid 69978133 utmhn www.example.com utmje 1 utmn 1806413990 utmp / utmr - utmsc 24-bit utmsr 1280x800 utmul en-gb utmwv 4.6.5 PRODUCT PAGE - http://www.example.com/products/international-cell-phone-purchase/ ---------------------------------------------------------------- utmac UA-125xxxxx-1 utmcc __utma=1.1920057171.1269446996.1269446996.1269446996.1;+__utmz=1.1269446996.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); utmcs UTF-8 utmdt example | International Cell Phones utmfl 10.0 r45 utmhid 276151647 utmhn www.example.com utmje 1 utmn 155808433 utmp /products/international-cell-phone-purchase/ utmr 0 utmsc 24-bit utmsr 1280x800 utmul en-gb utmwv 4.6.5 CART STAGE 1 - https://secure.example.com/checkout/viewbasket.php ------------------------------------------------ utmac UA-125xxxxx-1 utmcc __utma=60286578.994269564.1269447144.1269447144.1269447144.1;+__utmz=60286578.1269447144.1.1.utmcsr=example.com|utmccn=(referral)|utmcmd=referral|utmcct=/products/international-cell-phone-purchase/; utmcn 1 utmcs UTF-8 utmdt Your Cart utmfl 10.0 r45 utmhid 1802074903 utmhn secure.example.com utmje 1 utmn 1621444199 utmp 1-reviewcart utmr http://www.example.com/products/international-cell-phone-purchase/ utmsc 24-bit utmsr 1280x800 utmul en-gb utmwv 4.6.5 CART STAGE 2 - https://secure.example.com/checkout/docheckout.php ------------------------------------------------ utmac UA-125xxxxx-1 utmcc 
__utma=60286578.994269564.1269447144.1269447144.1269447144.1;+__utmz=60286578.1269447144.1.1.utmcsr=example.com|utmccn=(referral)|utmcmd=referral|utmcct=/products/international-cell-phone-purchase/; utmcs UTF-8 utmdt Checkout utmfl 10.0 r45 utmhid 871670520 utmhn secure.example.com utmje 1 utmn 1153927228 utmp 2-checkout utmr 0 utmsc 24-bit utmsr 1280x800 utmul en-gb utmwv 4.6.5 CART STAGE 3 - https://secure.example.com/checkout/doreview.php ---------------------------------------------- utmac UA-125xxxxx-1 utmcc __utma=60286578.994269564.1269447144.1269447144.1269447144.1;+__utmz=60286578.1269447144.1.1.utmcsr=example.com|utmccn=(referral)|utmcmd=referral|utmcct=/products/international-cell-phone-purchase/; utmcs UTF-8 utmdt Checkout utmfl 10.0 r45 utmhid 1731598159 utmhn secure.example.com utmje 1 utmn 1442257710 utmp 3-checkoutreview utmr 0 utmsc 24-bit utmsr 1280x800 utmul en-gb utmwv 4.6.5 =============================================================== As you can see, the utma values are not being preserved, so it looks like a config issue. I've studied the help does but none of the cases seem to fit mine. I hope someone can offer help on this, its been an ongoing problem of mine for a while, and would be good to finally get rock-solid reliable analytics set up.

    Read the article

  • Missing Password check

    - by AAA
    I am using the code below, it checks for empty fields and verifies email, but even if the password is correct it won't login. the password has been inserted with md5 protection, below is the code. I am new to this so please bare with me. Thanks! PHP: session_start(); //Checks if there is a login cookie if(isset($_COOKIE['ID_my_site'])) //if there is, it logs you in and directes you to the members page { $email = $_COOKIE['ID_my_site']; $pass = $_COOKIE['Key_my_site']; $check = mysql_query("SELECT * FROM accounts WHERE email = '$email'")or die(mysql_error()); while($info = mysql_fetch_array( $check )) { if ($pass != $info['password']) { } else { header("Location: home.php"); } } } //if the login form is submitted if (isset($_POST['submit'])) { // if form has been submitted // makes sure they filled it in if(!$_POST['email'] | !$_POST['password']) { die('You did not fill in a required field.'); } // checks it against the database if (!get_magic_quotes_gpc()) { $_POST['email'] = addslashes($_POST['email']); } $check = mysql_query("SELECT * FROM accounts WHERE email = '".$_POST['email']."'")or die(mysql_error()); //Gives error if user dosen't exist $check2 = mysql_num_rows($check); if ($check2 == 0) { die('That user does not exist in our database. <a href=add.php>Click Here to Register</a>'); } while($info = mysql_fetch_array( $check )) { $_POST['password'] = stripslashes($_POST['password']); $info['password'] = stripslashes($info['password']); $_POST['password'] = md5($_POST['password']); //gives error if the password is wrong if ($_POST['password'] != $info['password']) { die('Incorrect password, please try again.'); } else { // if login is ok then we add a cookie $_POST['email'] = stripslashes($_POST['email']); $hour = time() + 3600; setcookie(ID_my_site, $_POST['email'], $hour); setcookie(Key_my_site, $_POST['password'], $hour); //then redirect them to the members area header("Location: home.php"); } } } else { // if they are not logged in <form action="<?php echo $_SERVER['PHP_SELF']?>" method="post"> <table border="0"> <tr><td colspan=2><h1>Login</h1></td></tr> <tr><td>email:</td><td> <input type="text" name="email" maxlength="40"> </td></tr> <tr><td>Password:</td><td> <input type="password" name="password" maxlength="50"> </td></tr> <tr><td colspan="2" align="right"> <input type="submit" name="submit" value="Login"> </td></tr> </table> </form> } Here is the registration code: PHP: // here we encrypt the password and add slashes if needed $_POST['password'] = md5($_POST['password']); if (!get_magic_quotes_gpc()) { $_POST['password'] = mysql_escape_string($_POST['password']); $_POST['email'] = mysql_escape_string($_POST['email']); $_POST['full_name'] = mysql_escape_string($_POST['full_name']); $_POST['user_url'] = mysql_escape_string($_POST['user_url']); } // now we insert it into the database $insert = "INSERT INTO accounts (Uniquer, Full_name, Email, Password, User_url) VALUES ('".$uniquer."','".$_POST['full_name']."', '".$_POST['email']."','".$_POST['password']."', '".$_POST['user_url']."')"; $add_member = mysql_query($insert); After using ini_set function i got to see the error, i am getting this message but not sure what it means: Notice: Undefined index: password in /var/www/domain.com/htdocs/login.php on line 103 Notice: Use of undefined constant password - assumed 'password' in /var/www/domain.com/htdocs/login.php on line 11

    Read the article

  • multiple-inheritance substitution

    - by Luigi
    I want to write a module (framework specific), that would wrap and extend Facebook PHP-sdk (https://github.com/facebook/php-sdk/). My problem is - how to organize classes, in a nice way. So getting into details - Facebook PHP-sdk consists of two classes: BaseFacebook - abstract class with all the stuff sdk does Facebook - extends BaseFacebook, and implements parent abstract persistance-related methods with default session usage Now I have some functionality to add: Facebook class substitution, integrated with framework session class shorthand methods, that run api calls, I use mostly (through BaseFacebook::api()), authorization methods, so i don't have to rewrite this logic every time, configuration, sucked up from framework classes, insted of passed as params caching, integrated with framework cache module I know something has gone very wrong, because I have too much inheritance that doesn't look very normal.Wrapping everything in one "complex extension" class also seems too much. I think I should have few working togheter classes - but i get into problems like: if cache class doesn't really extend and override BaseFacebook::api() method - shorthand and authentication classes won't be able to use the caching. Maybe some kind of a pattern would be right in here? How would you organize these classes and their dependencies? EDIT 04.07.2012 Bits of code, related to the topic: This is how the base class of Facebook PHP-sdk: abstract class BaseFacebook { // ... some methods public function api(/* polymorphic */) { // ... method, that makes api calls } public function getUser() { // ... tries to get user id from session } // ... other methods abstract protected function setPersistentData($key, $value); abstract protected function getPersistentData($key, $default = false); // ... few more abstract methods } Normaly Facebook class extends it, and impelements those abstract methods. I replaced it with my substitude - Facebook_Session class: class Facebook_Session extends BaseFacebook { protected function setPersistentData($key, $value) { // ... method body } protected function getPersistentData($key, $default = false) { // ... method body } // ... implementation of other abstract functions from BaseFacebook } Ok, then I extend this more with shorthand methods and configuration variables: class Facebook_Custom extends Facebook_Session { public funtion __construct() { // ... call parent's constructor with parameters from framework config } public function api_batch() { // ... a wrapper for parent's api() method return $this->api('/?batch=' . json_encode($calls), 'POST'); } public function redirect_to_auth_dialog() { // method body } // ... more methods like this, for common queries / authorization } I'm not sure, if this isn't too much for a single class ( authorization / shorthand methods / configuration). Then there comes another extending layer - cache: class Facebook_Cache extends Facebook_Custom { public function api() { $cache_file_identifier = $this->getUser(); if(/* cache_file_identifier is not null and found a valid file with cached query result */) { // return the result } else { try { // call Facebook_Custom::api, cache and return the result } catch(FacebookApiException $e) { // if Access Token is expired force refreshing it parent::redirect_to_auth_dialog(); } } } // .. some other stuff related to caching } Now this pretty much works. New instance of Facebook_Cache gives me all the functionality. 
Shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrode the api() method. But here is what is bothering me: I think it's too much inheritance. It's all very tightly coupled - look how I had to specify 'Facebook_Custom::api' instead of 'parent::api' to avoid an api() method loop when extending the Facebook_Cache class. Overall mess and ugliness. So again, this works, but I'm just asking about patterns / ways of doing this in a cleaner and smarter way.

    Read the article

  • Getting EOFException while trying to read from SSLSocket

    - by Isac
    Hi, I am developing a SSL client that will do a simple request to a SSL server and wait for the response. The SSL handshake and the writing goes OK but I can't READ data from the socket. I turned on the debug of java.net.ssl and got the following: [..] main, READ: TLSv1 Change Cipher Spec, length = 1 [Raw read]: length = 5 0000: 16 03 01 00 20 .... [Raw read]: length = 32 [..] main, READ: TLSv1 Handshake, length = 32 Padded plaintext after DECRYPTION: len = 32 [..] * Finished verify_data: { 29, 1, 139, 226, 25, 1, 96, 254, 176, 51, 206, 35 } %% Didn't cache non-resumable client session: [Session-1, SSL_RSA_WITH_RC4_128_MD5] [read] MD5 and SHA1 hashes: len = 16 0000: 14 00 00 0C 1D 01 8B E2 19 01 60 FE B0 33 CE 23 ..........`..3.# Padded plaintext before ENCRYPTION: len = 70 [..] a.j.y. main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] Padded plaintext before ENCRYPTION: len = 70 [..] main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] main, received EOFException: ignored main, called closeInternal(false) main, SEND TLSv1 ALERT: warning, description = close_notify Padded plaintext before ENCRYPTION: len = 18 [..] main, WRITE: TLSv1 Alert, length = 18 [Raw write]: length = 23 [..] main, called close() main, called closeInternal(true) main, called close() main, called closeInternal(true) The [..] are the certificate chain. Here is a code snippet: try { System.setProperty("javax.net.debug","all"); /* * Set up a key manager for client authentication * if asked by the server. Use the implementation's * default TrustStore and secureRandom routines. */ SSLSocketFactory factory = null; try { SSLContext ctx; KeyManagerFactory kmf; KeyStore ks; char[] passphrase = "importkey".toCharArray(); ctx = SSLContext.getInstance("TLS"); kmf = KeyManagerFactory.getInstance("SunX509"); ks = KeyStore.getInstance("JKS"); ks.load(new FileInputStream("keystore.jks"), passphrase); kmf.init(ks, passphrase); ctx.init(kmf.getKeyManagers(), null, null); factory = ctx.getSocketFactory(); } catch (Exception e) { throw new IOException(e.getMessage()); } SSLSocket socket = (SSLSocket)factory.createSocket("server ip", 9999); /* * send http request * * See SSLSocketClient.java for more information about why * there is a forced handshake here when using PrintWriters. */ SSLSession session = socket.getSession(); [build query] byte[] buff = query.toWire(); out.write(buff); out.flush(); InputStream input = socket.getInputStream(); int readBytes = -1; int randomLength = 1024; byte[] buffer = new byte[randomLength]; while((readBytes = input.read(buffer, 0, randomLength)) != -1) { LOG.debug("Read: " + new String(buffer)); } input.close(); socket.close(); } catch (Exception e) { e.printStackTrace(); } I can write multiple times and I don't get any error but the EOFException happens on the first read. Am I doing something wrong with the socket or with the SSL authentication? Thank you.

    Read the article

  • rails: "unknown action" message when action is clearly specified

    - by john
    hi, I had hard time to figure out why I've been getting "unknown action" error message when I was do some editing: Unknown action No action responded to 11. Actions: bin, create, destroy, edit, index, new, observe_new, show, tag, update, and vote you can see that Rails did mention each action in the above list - update. And in my form, I did specify action = "update". I wonder if some friends could kindly help me with the missing links... here is the code: edit.rhtml <h1>Editing tip</h1> <% form_tag :action => 'update', :id => @tip do %> <%= render :partial => 'form' %> <p> <%= submit_tag_or_cancel 'Save Changes' %> </p> <% end %> _form.rhtml <%= error_messages_for :tip %> <p><label>Title<br/> <%= text_field :tip, :title %></label></p> <p><label>Categories<br/> <%= select_tag('categories[]', options_for_select(Category.find(:all).collect {|c| [c.name, c.id] }, @tip.category_ids), :multiple => true ) %></label></p> <p><label>Abstract:<br/> <%= text_field_with_auto_complete :tip, :abstract %></label></p> <p><label>Name: <br/> <%= text_field :tip, :name %></label></p> <p><label>Link: <br/> <%= text_field :tip, :link %></label></p> <p><label>Content<br/> <%= text_area :tip, :content, :rows => 5 %></label></p> <p><label>Tags <span>(space separated)</span><br/> <%= text_field_tag 'tags', @tip.tag_list, :size => 40 %></label></p> class TipsController < ApplicationController before_filter :authenticate, :except => %w(index show) # GET /tips # GET /tips.xml def index @tips = Tip.all respond_to do |format| format.html # index.html.erb format.xml { render :xml => @tips } end end # GET /tips/1 # GET /tips/1.xml def show @tip = Tip.find_by_permalink(params[:permalink]) respond_to do |format| format.html # show.html.erb format.xml { render :xml => @tip } end end # GET /tips/new # GET /tips/new.xml def new @tip = session[:tip_draft] || current_user.tips.build end def create #tip = current_user.tips.build(params[:tip]) #tipMail=params[:email] #if tipMail # TipMailer.deliver_email_friend(params[:email], params[:name], tip) # flash[:notice] = 'Your friend has been notified about this tip' #end @tip = current_user.tips.build(params[:tip]) @tip.categories << Category.find(params[:categories]) unless params[:categories].blank? @tip.tag_with(params[:tags]) if params[:tags] if @tip.save flash[:notice] = 'Tip was successfully created.' session[:tip_draft] = nil redirect_to :action => 'index' else render :action => 'new' end end def edit @tip = Tip.find(params[:id]) end def update @tip = Tip.find(params[:id]) respond_to do |format| if @tip.update_attributes(params[:tip]) flash[:notice] = 'Tip was successfully updated.' format.html { redirect_to(@tip) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @tip.errors, :status => :unprocessable_entity } end end end def destroy @tip = Tip.find(params[:id]) @tip.destroy respond_to do |format| format.html { redirect_to(tips_url) } format.xml { head :ok } end end def observe_new session[:tip_draft] = current_user.tips.build(params[:tip]) render :nothing => true end end

    Read the article
