Search Results

Search found 18563 results on 743 pages for 'url'.

Page 142/743

  • Can I have a date in the URL of a route in ASP.NET?

    - by oo
    The code below doesn't seem to work, but I can't figure out why. I have a user-entered textbox that is a datepicker, and its value is displayed as 21-May-2010. Can I take this value and put it into a URL to send over to a controller action, so that instead of an id (which is an int) I have an id which is a date value? View / JavaScript code: $.get('/Tracker/DailyBlog/' + this.val(), function(data) { $('#dailyblog').html(data); }); Controller action code: public ActionResult DailyBlog(DateTime blogDate) { //go do something } Any idea why this is not working?
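
    A minimal sketch of one route-based fix, assuming the default ASP.NET MVC route is still the only one registered (it binds the third URL segment to a parameter named "id", so an action parameter named blogDate never receives a value). The route name and pattern below are illustrative, not from the original post:

        // Global.asax.cs -- hypothetical extra route, registered before the default one.
        // The third segment is now bound to "blogDate", so MVC can feed it to
        // DailyBlog(DateTime blogDate). Whether "21-May-2010" parses depends on the
        // server culture; an invariant format such as yyyy-MM-dd is safer.
        routes.MapRoute(
            "TrackerDailyBlog",
            "Tracker/DailyBlog/{blogDate}",
            new { controller = "Tracker", action = "DailyBlog" }
        );

    In the view script, $(this).val() (rather than this.val()) is also needed if "this" is a plain DOM element inside the event handler.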

    Read the article

  • Would popup blockers stop a URL which pops up only when the user clicks on something?

    - by tomeaton
    I'm currently building a web application that can track a user's actions on a particular website and pop up a URL if the user takes certain actions, such as: a first click, responding to a question by clicking yes/no, clicking a submit button, or exiting the site. It is important that these URLs are served to the user and are not blocked by pop-up blockers. It is my understanding that the major browsers make certain exceptions that allow pop-ups when they are triggered by some user action, rather than being unsolicited. Is this true? How do I design this web application so that it can serve these pop-ups and not have them blocked?
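
    Broadly yes: most blockers allow a window opened synchronously inside a user-initiated event handler and block one opened later or with no gesture at all. A small sketch of the distinction (the element id and URL are placeholders):

        // Usually allowed: window.open() runs directly inside the click handler.
        document.getElementById('submit-btn').addEventListener('click', function () {
            window.open('https://example.com/offer', '_blank');
        });

        // Usually blocked: the open() call is detached from the user gesture
        // because it happens later, in an asynchronous callback.
        document.getElementById('submit-btn').addEventListener('click', function () {
            setTimeout(function () {
                window.open('https://example.com/offer', '_blank');
            }, 2000);
        });

    Exit-intent pops (opening something as the user leaves the site) have no user gesture to hang off, so blockers generally stop them regardless of how they are coded.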

    Read the article

  • How to allow three optional parameters in the URL by .htaccess?

    - by eij
    I have http://example.com and a PHP routing class that checks whether a given URL exists. I want to add a new route, http://example.com/foo/bar/123, but as soon as I open it, Apache redirects me to an error page. So I'm using a .htaccess file. The code is: RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*) /index.php [L] and it works as long as I use http://example.com/foo, but once I add further segments it sends me to an error. I'm guessing that the rewrite code is wrong. Is it? If so, could you suggest a correct one? If not, where could the problem be?
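
    A minimal sketch of the usual variant, assuming the rules live in the web root's .htaccess. RewriteBase and the QSA flag are the common culprits when nested paths stop reaching index.php, and passing the original path explicitly spares the router from reconstructing it (the "route" parameter name is an assumption):

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Hand the whole path to the front controller and keep any query string.
        RewriteRule ^(.*)$ index.php?route=$1 [QSA,L]

    If /foo/bar/123 still errors with this in place, the error page is more likely coming from the PHP routing class than from Apache.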

    Read the article

  • Get rid of the slash in the URL (e.g. something.com/dogs)

    - by jbenes
    I recently purchased the domain name simply.do. I want to use it as a URL shortening service, but I don't like having to do simply.do/something. Can I remove the slash or replace it with a different symbol? If it helps, I am using a server running Nginx and I will not switch to Apache. Thanks! I would also appreciate any feedback on the domain name. I was hoping to sell simply.do/insurance, simply.do/religion, etc. to various companies. Do you think there is a way I could sell these parts on an auction website? Thanks!

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine: it's my desktop running Ubuntu with Gnome remote-machine: it's a virtual machine, also running Ubuntu but without X In both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file" The command above doesn't work. gedit shows this message: Could not open the file sftp://remote-machine/some/file. gedit cannot handle sftp: locations. Note that: /some/file exists on remote-machine. I can SSH normally from remote-machine to local-machine using my SSH key without any problems! I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work. If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>) it doesn't work - I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this: myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt myuser@local-machine:~$ scp env.txt remote-machine: and then: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file" it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
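
    A minimal sketch of one way around copying env.txt by hand, assuming a single Gnome session is running on local-machine: recover DBUS_SESSION_BUS_ADDRESS from the environment of a process already inside that session (gnome-session here) as part of the same ssh command:

        myuser@remote-machine:~$ ssh local-machine '
          pid=$(pgrep -u "$USER" gnome-session | head -n 1)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '

    The process name and the assumption that its environment still holds a valid session bus address are guesses; any long-lived process owned by the desktop session (nautilus, gnome-panel) would do as well.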

    Read the article

  • Redirecting to a URL that has a question mark in it?

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to WordPress. They use a service for link exchanges that has a WordPress plugin. The issue is that the new links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this: domain.com/link/category-name.html The plugin makes them look like this in WP: domain.com/links/?page=category-name.html How in the world can I get the redirect to work properly? Here's what I have tried: Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html But none of those have worked. Any help is greatly appreciated!
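
    A sketch of a mod_rewrite alternative, assuming the rules can go in the site root's .htaccess: in a RewriteRule substitution the ? simply starts the new query string (no escaping needed), and a single pattern covers every old category page rather than one Redirect per file:

        RewriteEngine On
        # domain.com/link/anything.html  ->  /links/?page=anything.html
        RewriteRule ^link/(.+)\.html$ http://www.artisticimages.biz/links/?page=$1.html [R=301,L]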

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here; I haven't found any example of someone doing the same, nor does what I tried seem to be working. I'm trying to send a 403 to bots that don't respect the robots.txt file (even after downloading it several times), specifically Googlebot. It should honour the following robots.txt definition: User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever it can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally, adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//page/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" It's a WordPress site, by the way. I don't want those pages to show up; even though after the robots.txt info got through they stopped for a while, they only began crawling again later. It just never stops. I do want real people to see these pages. As you can see, Google gets a 403, but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old 'no-no' if's, and they both act the same: (under the http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing, like this: # Block real engines not respecting robots.txt but allow correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do, but it's about the same principle, and I'm having a hard time figuring this out for nginx. So my question would be: why does nginx serve my browser a 404? Why isn't the request passing through? The regex isn't matching my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of examples for blocking based on UA alone, and that's easy. It also looks like the matching location is final, i.e. it's not 'falling through' for regular users; I'm pretty certain this has some correlation with the 404 I get in the browser. As a cherry on top, I also want Google to disregard the parameter wpmp_switcher=mobile; wpmp_switcher=desktop is fine, but I just don't want the same content being crawled multiple times.
    Even though I ended up adding wpmp_switcher=mobile via the Google Webmaster Tools pages (requiring me to sign up), that also stopped things for a while, but today they are back spidering the mobile sections. So, in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone spare a few minutes of their life and push me in the right direction, please? I really appreciate ANY response that makes me think harder ;-)
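
    A sketch of the most likely explanation and fix, keeping the $is_bot map from the question: a regex location that matches a request replaces whatever generic location would otherwise have handled it, so non-bot visitors land in a block with no content handler and get the 404. Telling the same location what to do with humans (the try_files line below assumes a standard WordPress front controller) removes the discrepancy:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403;   # bots ignoring robots.txt
            }
            try_files $uri $uri/ /index.php?$args;   # humans get the normal WordPress handling
        }

    For the wpmp_switcher=mobile crawling, an additional check on $arg_wpmp_switcher inside the same location (returning 403 to bots only) keeps that parameter out of the index without affecting real visitors.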

    Read the article

  • Google and Yahoo redirect my site to malware, but the direct URL works fine. Any computer

    - by UserZer0
    I can go directly to the site doublewing.org (or the www. variant) without issue, but if I click on the link in Google or Yahoo it redirects to spam sites. Swagbucks works, though! This is not on a single computer; it happens on systems isolated from each other (try it, Avast blocks it). The site is running Joomla 1.5.25. I deleted .htaccess and put fresh index.php and index2.php files in place, and I still get the same results. Any ideas?
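
    This pattern (clean when visited directly, redirecting only with a search-engine referer) is typical of injected PHP that keys off HTTP_REFERER, so replacing index.php alone rarely clears it. A sketch of a first sweep for the usual hiding spots, assuming shell access to the document root (the path is a placeholder):

        grep -rEl 'base64_decode|eval\(|HTTP_REFERER' /var/www/joomla --include='*.php' | head -n 40

    Legitimate Joomla files will show up too, so treat the list as candidates to inspect (and diff against a clean 1.5.25 tree), not as a definitive list of infected files.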

    Read the article

  • IIS displaying page differently when localhost is used in URL vs. hostname

    - by maik
    I'm having (yet another) strange problem with IIS. When viewing an ASPX page I've designed on my local machine by browsing to http://localhost/page.aspx, the page looks as expected (and looks the same in IE, Firefox and Chrome). If I change localhost to my_hostname, the page is rendered with a disabled vertical scroll bar. The behavior was first noticed when I published my site to our live server and saw the same discrepancy. After beating my head against the wall I tried what I described above and was able to duplicate my "problem". So with that, I turn to you guys. This wouldn't really be an issue (save for the cross-browser inconsistency) except that it throws off an absolutely positioned <div>, moving it partway off the screen instead of centering it like it should be (and is, when viewed any way other than in IE with an address other than localhost).
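
    One common cause of exactly this localhost-vs-hostname split is IE's Compatibility View: hosts it classes as intranet sites get legacy rendering by default, while localhost does not, so the same markup lays out differently. A hedged sketch of the usual countermeasure, sending an X-UA-Compatible header from web.config so IE uses its newest engine regardless of how the site is addressed (this assumes the discrepancy really is a document-mode difference, which the F12 developer tools can confirm):

        <!-- web.config fragment, IIS 7+ -->
        <system.webServer>
          <httpProtocol>
            <customHeaders>
              <add name="X-UA-Compatible" value="IE=edge" />
            </customHeaders>
          </httpProtocol>
        </system.webServer>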

    Read the article

  • How to redirect Cisco IOS's show output to HTTP URL?

    - by yegle
    I found that Cisco IOS (version 12.2(53)SE1) has a redirect output modifier, and it lists http: URI support: #sh version | redirect ? flash: Uniform Resource Locator ftp: Uniform Resource Locator http: Uniform Resource Locator https: Uniform Resource Locator nvram: Uniform Resource Locator rcp: Uniform Resource Locator scp: Uniform Resource Locator tftp: Uniform Resource Locator However, I cannot find any documentation on cisco.com about the http: support. I tried sh version | redirect http://my_server/ and cannot find anything in my_server's access log. Can anyone give me a hint?
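
    A speculative sketch only: the http: target goes through IOS's HTTP client, which often needs to be configured before uploads will leave the box, and the server has to accept the resulting request (an upload rather than a plain GET), so nothing may appear in an ordinary access log even when it works. The interface name and credentials below are placeholders:

        conf t
         ip http client source-interface Vlan1
         ip http client username uploaduser
         ip http client password 0 uploadpass
        end
        show version | redirect http://my_server/uploads/sh-version.txt

    Capturing the traffic on my_server (tcpdump port 80) is the quickest way to see whether the switch is sending anything at all and which HTTP method it uses.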

    Read the article

  • Do CDNs actually dump the data to the client or pass the URL of the content?

    - by zengr
    I am curious. Say, for example, Facebook is a client of the Akamai CDN. Now, when I log in to my Facebook page, I see all the content (video, images, text, etc.) and I click on a video to view it. Facebook is the client of Akamai for getting that content. So, when a request is made from Facebook to Akamai, does Akamai dump the video/image from its data centers to Facebook's data centers, where it resides for a while (depending on their heuristics) and is flushed after some time? Or do I see (stream) that video directly from Akamai's servers? UPDATE: Data resides in the CDN permanently (agreed), but is a copy of the content sent to Facebook's data centers too?
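
    In the typical setup the browser fetches the media straight from an edge server near the viewer (the page's HTML just embeds CDN URLs), rather than the CDN shipping copies back to the customer's own data centers. A sketch of how to check where an object was actually served from, using Akamai's debug request headers (header names differ on other CDNs, and the URL is a placeholder):

        curl -sI https://static.example-cdn.com/videos/clip.mp4 \
             -H 'Pragma: akamai-x-cache-on, akamai-x-cache-remote-on' | grep -i '^x-cache'

    An X-Cache value such as TCP_HIT means the edge answered from its own cache; TCP_MISS means it had to go back toward the origin for that request.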

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 not found, 410 gone, or 403 unauthorised) when a page is requested that I don't want to serve. But frequently I get completely erroneous requests from hackers scanning my website for vulnerabilities or possibly cross-site scripting attempts. For these customers I do not want to return a clean error - I'd rather do something else like immediately drop the connection with no response or, alternatively, hold the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx has the ability to immediately terminate a connection when a particular pattern is matched.
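
    Apache core has no directive that simply closes the socket without a response (the nginx behaviour mentioned is return 444), but mod_security's drop action does exactly that. A sketch, assuming mod_security2 is installed; the URI patterns are placeholders for whatever the scanners request:

        SecRuleEngine On
        # Close the TCP connection immediately for obvious vulnerability scans.
        SecRule REQUEST_URI "@rx (?:/phpmyadmin|/w00tw00t|\.\./)" \
            "id:100001,phase:1,t:none,drop,nolog"

    Holding the connection open to waste the scanner's time is a different trick; mod_security can only insert per-request delays with its pause action, so a true tarpit usually ends up being done at the firewall level instead.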

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx-to-Apache reverse proxy where nginx handles the static files and Apache the dynamic ones. I have a search engine, and depending on the search parameter I either forward the user directly to the page they are looking for or provide a set of search results. I cache these results in memcached as key: /search.cgi?q=foo value: LOCATION:http://www.example.com/foo.html and key: /search.cgi?q=bar value: CONTENT-TYPE: text/html <html> .... .... </html> I can pull the "Content-type..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location..." Can I?
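
    Stock nginx can only hand the memcached value back as a response body: memcached_pass does no inspection, so a stored "LOCATION:..." string would just be sent to the browser as text. A sketch of the plain flow (the key scheme and backend address are assumptions); turning cached redirects into real 301s needs something that can look at the value first, such as the embedded Lua or Perl modules, or storing only body-type entries in memcached and letting redirect-type searches always fall through to Apache:

        location /search.cgi {
            set $memcached_key $request_uri;        # matches keys like /search.cgi?q=foo
            memcached_pass 127.0.0.1:11211;
            default_type text/html;                 # memcached stores no content type
            error_page 404 502 504 = @apache;       # cache miss -> dynamic backend
        }
        location @apache {
            proxy_pass http://127.0.0.1:8080;
        }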

    Read the article

  • How do I redirect/rewrite to the FQDN URL without setting ServerName?

    - by ChaimKut
    Often on intranets, users will request URLs with a bare hostname rather than the FQDN, for example http://internalHost instead of http://internalHost.example.com. I would like to redirect users / rewrite URLs so that everything uses the FQDN. Here's the catch: I don't want to set ServerName explicitly. (This is for a product that will be deployed on multiple intranets, so we can't know the value of ServerName ahead of time.) According to http://wiki.apache.org/httpd/CouldNotDetermineServerName, Apache uses a reverse lookup to determine a default FQDN. How can I make use of / reference the FQDN that Apache is using in a mod_rewrite rule or redirect?
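
    A sketch of one way to lean on that deduced name, assuming the reverse lookup really does yield the FQDN and the rules live in the main server or vhost config: with UseCanonicalName On, %{SERVER_NAME} reflects the name Apache settled on rather than whatever Host header the client sent, so requests whose Host contains no dot can be bounced to it:

        UseCanonicalName On
        RewriteEngine On
        # Only rewrite when the client used a dotless (short) hostname.
        RewriteCond %{HTTP_HOST} !\.
        RewriteRule ^(.*)$ http://%{SERVER_NAME}$1 [R=301,L]

    If the deduced name itself turns out to be dotless or an IP on some deployments, this would loop, so it is worth testing on each target network.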

    Read the article

  • Rewriting URLs for Tomcat through an Apache AJP connector

    - by StudentKen
    I've made several attempts to resolve this, but all have come to naught. Currently I have Apache set up to forward all URLs at and below the /portal/ path to Tomcat. Unfortunately, Tomcat receives these requests as /portal/appName, i.e. under a subdirectory of webapps rather than the webapps root where my WARs are deployed. Is there a simple solution to this that I'm not seeing? I've been trying to use mod_rewrite to rewrite ^/portal/ to /, but that doesn't yield the expected results (perhaps I'm doing it wrong?).
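
    A sketch using mod_proxy_ajp's path mapping instead of mod_rewrite, assuming the connector is mod_proxy_ajp (with mod_jk the equivalent is a JkMount plus the worker's context path). Mapping /portal/ onto the connector's root means Tomcat never sees the /portal prefix and serves the ROOT context; port 8009 is the usual AJP port but is an assumption here:

        <Location /portal/>
            ProxyPass        ajp://localhost:8009/
            ProxyPassReverse ajp://localhost:8009/
        </Location>

    Redirects and absolute links generated by the webapp will not include the /portal prefix unless the application is made aware of it, so deploying the WAR as ROOT.war and proxying its real context path directly is often the less surprising fix.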

    Read the article

  • Unable to log in through Varnish cache

    - by ArunS
    I am setting up an ActiveCollab site on my new server. The setup is like below: Internet --- varnish ---- apache. I am not able to log in to the site through the Varnish cache, but I can log in to the site through Apache. Here is my VCL file: backend default { .host = "localhost"; .port = "8080"; } acl purge { "localhost"; } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/$") { unset req.http.cookie; } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } if (!(req.url ~ "wp-(login|admin)")) { unset req.http.cookie; } if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") { unset req.http.cookie; set req.url = regsub(req.url, "\?.$", ""); } if (req.url ~ "^/$") { unset req.http.cookie; } } sub vcl_fetch { if (req.url ~ "^/$") { unset beresp.http.set-cookie; } if (!(req.url ~ "wp-(login|admin)")) { unset beresp.http.set-cookie; }} When I try to log in through Varnish, I am redirected back to the login page. If I enter a wrong password, it does ask me to enter the correct password.
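
    A sketch of the likely cause and a minimal adjustment, keeping the Varnish 2/3 syntax of the file above: the vcl_miss and vcl_fetch blocks strip cookies (and Set-Cookie) for everything that does not match wp-(login|admin), which are WordPress paths, so ActiveCollab's session cookie never survives the round trip and the login loops. Passing non-GET requests and anything carrying the application's session cookie avoids that; the cookie-name pattern here is an assumption and should be checked against what ActiveCollab actually sets:

        sub vcl_recv {
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);                 # never cache logins / form posts
            }
            if (req.http.Cookie ~ "acollab") {
                return (pass);                 # authenticated traffic goes to Apache untouched
            }
        }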

    Read the article

  • local .pac-file URL format that works with IE and Safari (Windows)?

    - by legr3c
    Say I want to use a proxy auto-config file that is stored at C:\proxy.pac. To make Internet Explorer use this configuration I have to specify the PAC file in the LAN settings as follows: file://C:/proxy.pac But Safari, which uses the same proxy settings, ignores it in this case. To make Safari use the PAC file I have to reference it as file:///C:/proxy.pac (three slashes at the beginning), which, according to Wikipedia, is the correct format. But then Internet Explorer ignores it. Opera and Chrome, which also use the same proxy settings, are fine with both forms, but is there another option that will work with Safari and Internet Explorer at the same time?
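
    One workaround that sidesteps the file:// slash disagreement entirely is to serve the PAC file over HTTP, since every browser accepts an http:// auto-config URL. A sketch using Python's built-in web server purely as a stand-in (any local web server would do); the folder and port are arbitrary:

        REM Windows cmd; assumes Python 3 is installed.
        mkdir C:\pac
        copy C:\proxy.pac C:\pac\
        cd /d C:\pac
        python -m http.server 8000

    The auto-config URL for all browsers then becomes http://localhost:8000/proxy.pac.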

    Read the article

  • Is it possible to monitor an Asterisk ConfBridge from a URL / Browser?

    - by Lorin S.
    I have an Asterisk server set up with minimal configuration, including the following confbridge definition / extension: *confbridge.conf* [testbridge] type=bridge video_mode=follow_talker max_members=20 mixing_interval=10 internal_sample_rate=auto record_conference=yes *extension.conf* exten => 6100,1,Answer() same => n,Set(CONFBRIDGE(user,admin)=yes) same => n,Set(CONFBRIDGE(user,marked)=yes) same => n,ConfBridge("Ad-hoc",testbridge,default_user,sample_user_menu) Is it possible to monitor the video / audio of the conference without joining via a client?
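
    Watching or listening to the bridge from a browser isn't something ConfBridge exposes directly, but membership and events can be polled over HTTP via AJAM (the AMI-over-HTTP interface), and record_conference=yes already leaves a recording on disk that can be served afterwards. A sketch of the AJAM route, assuming manager.conf and http.conf have the AMI and mini-HTTP server enabled; the host, port and credentials are placeholders:

        curl -c /tmp/ami-cookie "http://pbx.example.com:8088/rawman?action=Login&username=admin&secret=mysecret"
        curl -b /tmp/ami-cookie "http://pbx.example.com:8088/rawman?action=ConfbridgeList&conference=testbridge"

    Live audio/video monitoring still needs a client of some sort, for example a muted participant joined via a SIP or WebRTC endpoint.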

    Read the article

  • Conditional https redirect to http depending on URL? (Apache)

    - by Joel Marcey
    Right now I redirect 100% of the time if someone goes to https://mysite.com: <VirtualHost *:443> ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com RewriteEngine on RewriteRule (.*) http://%{HTTP_HOST} [L,R=permanent] </VirtualHost> However, now I want to redirect conditionally: if a user goes to https://mysite.com/abc/, I want to stay on https; otherwise redirect. How do I do this? I tried reading the docs but just couldn't find what I needed. I am using Apache on Ubuntu Linux.
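
    A sketch of one way to express the exception, assuming only /abc/ (and everything under it) should stay on HTTPS; these rules replace the unconditional RewriteRule inside the same *:443 virtual host:

        RewriteEngine on
        # Leave /abc/ and anything below it on HTTPS.
        RewriteCond %{REQUEST_URI} !^/abc(/|$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [L,R=permanent]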

    Read the article

  • Redirecting to a URL that has a ? in it

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to WordPress - cool, no problem. They use a service for link exchanges that has a WordPress plugin. The issue is that the new Links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this: domain.com/link/category-name.html The plugin makes them look like this in WP: domain.com/links/?page=category-name.html How in the world can I get the redirect to work properly? Here's what I have tried: Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html But none of those have worked. Any help is greatly appreciated!

    Read the article

  • Is there a way to use something like RewriteRule ... [PT] for an external URL?

    - by nbolton
    I have a non-Apache web server running on port 8000, but it cannot be accessed from behind corporate firewalls. So I would like to use my Apache 2 server as a proxy to this other web server. I've tried using: RewriteEngine On RewriteRule /.* http://buildbot.synergy-foss.org:8000/builders/ [PT] ... but this does not work; I get: Bad Request - Your browser sent a request that this server could not understand. However, it worked fine with [R]. Update: when using ProxyPass, I get this error instead: Forbidden - You don't have permission to access / on this server.
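
    A sketch of the usual reverse-proxy setup, assuming mod_proxy and mod_proxy_http are loaded: [PT] only hands the rewritten URI back to Apache's own URI handlers, so pointing at an external URL needs either the [P] flag or ProxyPass, and the Forbidden response from ProxyPass usually comes from a restrictive <Proxy> block in the distribution's default config (Apache 2.2 access-control syntax shown):

        ProxyPass        / http://buildbot.synergy-foss.org:8000/builders/
        ProxyPassReverse / http://buildbot.synergy-foss.org:8000/builders/
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    ProxyRequests should stay Off (the default) so this remains a reverse proxy and not an open forward proxy.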

    Read the article
