Search Results

Search found 18563 results on 743 pages for 'url for'.

Page 118 of 743

  • CodeIgniter: Page not found when passing parameters to a controller???

    - by thedp
    Hello, I'm trying to pass parameters to a controller in CodeIgniter, but I'm getting a 404 Page Not Found error. I don't get it; I did what the guide says: http://codeigniter.com/user_guide/general/controllers.html#passinguri

    <?php
    class Main extends Controller {
        function index($username) {
            echo $username;
        }
    }
    ?>

    How can I get more information about this error from CodeIgniter? Thank you.
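
    For what it's worth, a likely cause: in CodeIgniter the second URI segment selects the method, so a request like /main/joe looks for a method named joe() and returns 404; the parameter only reaches index() via /main/index/joe or a custom route. The segment-to-method-call idea is sketched below in Python purely as an illustration of the dispatch concept (the class and URIs are hypothetical, not CodeIgniter's actual code).

        # Minimal sketch of segment-based dispatch, similar in spirit to
        # CodeIgniter's /class/method/arguments URI scheme. Illustrative only.

        class Main:
            def index(self, username="guest"):
                return "hello " + username

        controllers = {"main": Main}

        def dispatch(uri):
            segments = [s for s in uri.strip("/").split("/") if s]
            cls = controllers[segments[0]]                 # first segment -> class
            method_name = segments[1] if len(segments) > 1 else "index"
            args = segments[2:]                            # remaining segments -> arguments
            method = getattr(cls(), method_name, None)
            if method is None:
                return "404: no method named " + method_name
            return method(*args)

        print(dispatch("/main/index/joe"))   # the parameter reaches index()
        print(dispatch("/main/joe"))         # 404: 'joe' is treated as a method name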

    Read the article

  • Why did mislav-will_paginate start adding so much garbage to urls between rails 2.3.2 and 2.3.5?

    - by user30997
    I've used will_paginate in a number of projects now, but when I moved one of them to Rails 2.3.5, clicking on any of the pagination links (page number, next, prev, etc.) went from producing nice URLs like this:

    http://foo.com/user/1/date/2005_01_31/phone/555-6161

    to this:

    http://foo.com/?options[]=user&options[]=date&options[]=2005_01_31&options[]=phone&options[]=555-6161

    I have a route that looks like this, which is probably the source of the 'options' keyword:

    map.connect '/browse/*options', :controller=>'assets', :action=>'browse'

    It's enough of an annoyance that I'm willing to roll my own paginator to get around it if there isn't a way to get back to where I was before. Is there a way to get will_paginate to turn array-style routes into sane URLs again? Thanks.

    Read the article

  • Calling webservice via server causes java.net.MalformedURLException: no protocol

    - by Thomas
    I am writing a web service which parses an XML file. In the client, I read the whole content of the XML into a String and then pass it to the web service. If I run the web service with main as a Java application (for tests), there is no problem and no error messages. However, when I call it via the server, I get the following error: java.net.MalformedURLException: no protocol. I use the same XML file and the same code (without main), and I just cannot figure out what the cause of the error can be. Here is my code:

    DOMParser parser = new DOMParser();
    try {
        parser.setFeature("http://xml.org/sax/features/validation", true);
        parser.setFeature("http://apache.org/xml/features/validation/schema", true);
        parser.setFeature("http://apache.org/xml/features/validation/dynamic", true);
        parser.setErrorHandler(new myErrorHandler());
        parser.parse(new InputSource(new StringReader(xmlFile)));
        document = parser.getDocument();

    xmlFile is constructed in the client like so:

    String myFile = "C:/test.xml";
    File file = new File(myFile);
    String myString = "";
    FileInputStream fis = new FileInputStream(file);
    BufferedInputStream bis = new BufferedInputStream(fis);
    DataInputStream dis = new DataInputStream(bis);
    while (dis.available() != 0) {
        myString = myString + dis.readLine();
    }
    fis.close();
    bis.close();
    dis.close();

    Any suggestions will be appreciated!

    Read the article

  • Robots Crawling Across Namespace?

    - by Codex73
    I migrated a site from one domain to another and placed a permanent redirect on the old account. My stats logs are capturing this:

    Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    /libro_metaboforte_chap5.php/members/members/file_chap6.php

    I added the robots.txt below, which wasn't present at the time of migration.

    Robots.txt contents:

    User-agent: *
    Allow: /
    Disallow: /members/
    Disallow: /includes/

    .htaccess file contents:

    DirectoryIndex index.php index.html
    Options +FollowSymlinks
    RewriteEngine On
    # Turn on the rewriting engine
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !^/store/?$
    RewriteCond %{QUERY_STRING} !.
    RewriteRule ^.+/?$ index.php [QSA,L]
    RewriteCond %{QUERY_STRING} ^curlang=([a-z]*)$
    RewriteRule ^.+/?$ index.php? [QSA,L]

    I will continue to log incoming bot captures. My .htaccess does rewrite; I just added the robots.txt file. The odd part is that the bot is stepping into doubled directories. I don't know whether the problem was not having robots.txt in place, or the .htaccess rewrites themselves.
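
    For what it's worth, the robots.txt above can be checked directly against the logged path with Python's standard urllib.robotparser; a minimal sketch (note that parsers differ in how they reconcile Allow and Disallow lines):

        from urllib import robotparser

        # The robots.txt rules quoted above, tested against the crawled path.
        rules = """\
        User-agent: *
        Allow: /
        Disallow: /members/
        Disallow: /includes/
        """

        rp = robotparser.RobotFileParser()
        rp.parse(rules.splitlines())

        path = "/libro_metaboforte_chap5.php/members/members/file_chap6.php"
        print(rp.can_fetch("Googlebot", path))   # True: nothing here blocks that URL

    Disallow rules match by path prefix, and this URL begins with /libro_metaboforte_chap5.php, so the /members/ rule never applies to it; the doubled directory is more likely an artifact of relative links combined with the catch-all rewrite than of the missing robots.txt.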

    Read the article

  • Exclude routing parameters in VaryByParam for Asp.Net 4

    - by HasanGursoy
    I have a routing setting in my global.asax file:

    routes.MapPageRoute("video-browse", "video/{id}/{title}/", "~/routeVideo.aspx");

    My routeVideo.aspx page has this caching setting:

    <%@ OutputCache Duration="10" Location="ServerAndClient" VaryByParam="id" %>

    But when I request http://localhost/video/6/example1 and then http://localhost/video/6/example2, the page is generated again. So it seems VaryByParam is effectively behaving like *, but I only want the page regenerated when id changes. Is there a way to reference routing parameters in VaryByParam?

    Read the article

  • .htaccess more than one command

    - by Stefan
    Hey, I have an .htaccess file with the following code:

    <Files ~ "item|profile|category|search">
        ForceType application/x-httpd-php
    </Files>

    What I want is to also add a rewrite rule that redirects anybody who navigates to the site at http://domain.com to http://www.domain.com instead. I have the code for it stored somewhere else, but I can't seem to just place it in the file: it either corrupts or just doesn't work... I wish I knew more about .htaccess files. How can I add this in? Thanks, Stefan

    Read the article

  • PHP URI trouble with if statement

    - by sea_1987
    I am running an if statement that looks like this:

    if($this->uri->segment(1) !== 'search' || $this->uri->segment(1) !== 'employment')
    {
        // do something
    }

    My problem is that the first condition works: if URI segment 1 equals 'search', the method does not run. However, if I am on the employment page and the first segment of the URI is 'employment', the block still runs. Why is this?
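
    One likely culprit is the || operator: no value can be unequal to both strings at once, so the combined condition is always true, and any difference you are seeing between 'search' and 'employment' is probably coming from somewhere else. A minimal sketch of the logic in Python, since the point is language-independent (the should_run helper is just for illustration):

        def should_run(segment):
            # The original condition: always True, because every segment differs
            # from at least one of the two strings.
            buggy = segment != "search" or segment != "employment"
            # What was probably intended: run only when the segment is neither value.
            fixed = segment not in ("search", "employment")
            return buggy, fixed

        for seg in ("search", "employment", "home"):
            print(seg, should_run(seg))
        # search (True, False)
        # employment (True, False)
        # home (True, True)

    In the PHP original, the equivalent fix is to combine the two tests with && rather than ||.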

    Read the article

  • CodeIgniter question: making an anchor load a page containing data from a referenced row in the DB

    - by thrice801
    Hi, I'm trying to learn the CodeIgniter library and object-oriented PHP in general, and I have a question. I've gotten as far as making a page which loads all of the rows from my database, and in there I'm echoing an anchor tag which is a link with the following structure:

    echo anchor("videos/video/$row->video_id", $row->video_title);

    So, I have a class called Videos which extends the controller. Within that class there are index and video, and video is being called correctly (when you click on the video title, it sends you to videos/video/5 for example, 5 being the primary key of the table I'm working with). So basically all I'm trying to do is pass that 5 back to the controller, and then have the particular video page output that row's data from the videos table. My function for video in my controller looks like this:

    function video() {
        $data['main_content'] = 'video';
        $data['video_title'] = 'test';
        $this->load->view('includes/template', $data);
    }

    So basically, instead of 'test', $data['video_title'] should be the returned value of a query that says: in the table "videos", get the row with the video_id of "5", and make $data['video_title'] equal to the value of video_title in the database... I should have this figured out by now but don't. Any help would be appreciated!

    Read the article

  • python regular expression for domain names

    - by user230911
    I am trying to use the following regular expression to extract the domain name from a text, but it just produces nothing. What's wrong with it? I don't know if it's suitable to ask this kind of "fix my code" question; maybe I should read more, I just want to save some time. Thanks.

    pat_url = re.compile(r'''
        (?:https?://)*
        (?:[\w]+[\-\w]+[.])*
        (?P<domain>[\w\-]*[\w.](com|net)([.](cn|jp|us))*[/]*)
        ''')

    print re.findall(pat_url,"http://www.google.com/abcde")

    I want the output to be google.com
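
    If a full regex isn't strictly required, one alternative sketch is to let the URL parser pull out the hostname and keep the last two labels; this is deliberately naive about multi-part suffixes such as .com.cn, which the original pattern tries to handle:

        from urllib.parse import urlsplit

        def domain_of(url):
            # Prefix a scheme if missing so urlsplit populates the hostname.
            if "://" not in url:
                url = "http://" + url
            host = urlsplit(url).hostname or ""
            parts = host.split(".")
            # Naive: keep the last two labels ("google.com"); country-code
            # suffixes like .com.cn really need a public-suffix list.
            return ".".join(parts[-2:]) if len(parts) >= 2 else host

        print(domain_of("http://www.google.com/abcde"))   # google.com

    (The question's snippet is Python 2, where print is a statement; the sketch above uses Python 3 syntax.)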

    Read the article

  • What's the best way to protect primary keys in ASP.NET MVC?

    - by Junior Mayhé
    I'm creating an ASP.NET MVC website and I was wondering what techniques you use to protect the primary key in these MVC URLs. ASP.NET MVC generates this syntax for its URLs:

    /Controller/Action/Id

    Last week I tried encrypting the id using SHA-1, but it generates special symbols like + (plus), / (slash), and other annoying characters which make decryption difficult. Perhaps creating a custom encryption scheme would solve the problem, but I want to hear from you: do you have any ideas for protecting MVC URLs?
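
    The question is about ASP.NET, but the underlying idea is language-independent, so here is a rough sketch in Python of one common approach: rather than trying to decrypt a one-way hash, keep the id visible but sign it with an HMAC so it cannot be tampered with, and use the URL-safe base64 alphabet so no '+' or '/' characters appear. The key and helper names are illustrative only.

        import base64
        import binascii
        import hashlib
        import hmac

        SECRET = b"server-side-secret-key"   # hypothetical key, kept on the server

        def _b64(data):
            return base64.urlsafe_b64encode(data).decode().rstrip("=")

        def _unb64(text):
            return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

        def sign_id(record_id):
            msg = str(record_id).encode()
            mac = hmac.new(SECRET, msg, hashlib.sha256).digest()[:10]
            return _b64(msg) + "." + _b64(mac)      # URL-safe: no '+', '/' or '='

        def verify_id(token):
            try:
                id_part, mac_part = token.split(".")
                msg, mac = _unb64(id_part), _unb64(mac_part)
            except (ValueError, binascii.Error):
                return None
            expected = hmac.new(SECRET, msg, hashlib.sha256).digest()[:10]
            return int(msg.decode()) if hmac.compare_digest(mac, expected) else None

        token = sign_id(42)
        print(token, verify_id(token))   # tampering with the token makes verify_id() return None

    A reversible cipher (or simply a random surrogate key stored alongside the row) works too; the main point is that the '+'/'/' issue goes away once the URL-safe base64 alphabet is used.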

    Read the article

  • What's my best bet for replacing plain text links with anchor tags in a string? .NET

    - by Craig Bovis
    What is my best option for converting plain text links within a string into anchor tags? Say, for example, I have "I went and searched on http://www.google.com/ today"; I would want the URL in that string wrapped in an anchor tag so it becomes a clickable link. The method will also need to be safe from any kind of XSS attack, since the strings are user generated. They will be made safe before parsing, so I just need to make sure that no vulnerabilities are introduced through parsing the URLs.
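
    The question is about .NET, but the order of operations is language-independent, so here is a rough sketch in Python: HTML-encode the whole user string first, and only then wrap anything that looks like a URL in an anchor, so user-supplied markup can never reach the page unescaped. The URL pattern is deliberately simple and is an assumption, not a full URL grammar.

        import html
        import re

        URL_RE = re.compile(r"\bhttps?://[^\s<>\"']+", re.IGNORECASE)

        def linkify(user_text):
            # 1. Escape everything first so user-supplied markup is neutralised.
            escaped = html.escape(user_text)
            # 2. Only then wrap URLs in anchors; the href comes from already-escaped text.
            return URL_RE.sub(lambda m: '<a href="{0}">{0}</a>'.format(m.group(0)),
                              escaped)

        print(linkify('I went and searched on http://www.google.com/ today <b>hi</b>'))
        # I went and searched on <a href="http://www.google.com/">http://www.google.com/</a> today &lt;b&gt;hi&lt;/b&gt;

    The same escape-then-linkify order applies whichever .NET HTML-encoding and regex APIs end up doing the work.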

    Read the article

  • Python filter/remove URLs from a list

    - by Eef
    Hi. I have a text file of URLs, about 14000. Below are a couple of examples:

    http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123
    http://www.domainname.com/images?IMAGE_ID=10
    http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123
    http://www.domainname.com/images?IMAGE_ID=11
    http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123

    I have loaded the text file into a Python list and I am trying to get all the URLs with CONTENT_ITEM_ID separated off into a list of their own. What would be the best way to do this in Python? Cheers
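
    A rough sketch of one way to do the split, assuming the URLs sit one per line in a file called urls.txt (the filename is just an example):

        from urllib.parse import urlsplit, parse_qs

        with open("urls.txt") as fh:
            urls = [line.strip() for line in fh if line.strip()]

        content_item_urls, others = [], []
        for url in urls:
            # Parse the query string and check for the CONTENT_ITEM_ID parameter.
            if "CONTENT_ITEM_ID" in parse_qs(urlsplit(url).query):
                content_item_urls.append(url)
            else:
                others.append(url)

        print(len(content_item_urls), "with CONTENT_ITEM_ID,", len(others), "without")

    A plain 'CONTENT_ITEM_ID=' in url substring test would also work here; parsing the query string just avoids accidental matches inside other parameter values.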

    Read the article

  • Problem with Spring security's logout

    - by uther-lightbringer
    Hello, I've got a problem logging out with Spring Security. First, when I want j_spring_security_logout to handle it for me, I get a 404: j_spring_security_logout not found.

    sample-security.xml:

    <http>
        <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" />
        <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" />
        <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" />
        <form-login login-page="/login.jsp" default-target-url="/messageList.htm"
                    authentication-failure-url="/login.jsp?error=true" />
        <logout/>
    </http>

    Sample URL link to logout in the JSP page:

    <a href="<c:url value="/j_spring_security_logout" />">Logout</a>

    When I try to use a custom JSP page instead (I use the login form for this purpose), I get a better result: at least it reaches the login page. But there is another problem: you don't actually get logged off, as you can directly type a URL that should be guarded and you get past it anyway. Slightly modified from the previous listings:

    <http>
        <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" />
        <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" />
        <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" />
        <form-login login-page="/login.jsp" default-target-url="/messageList.htm"
                    authentication-failure-url="/login.jsp?error=true" />
        <logout logout-success-url="/login.jsp" />
    </http>

    <a href="<c:url value="/login.jsp" />">Logout</a>

    Thank you for your help.

    Read the article

  • Remove index.php in CodeIgniter

    - by Gabriel Bianconi
    Hello. I'm trying to remove the 'index.php' from CodeIgniter URLs. I've tried many solutions and none of them worked. I've already set these variables in config.php:

    $config['index_page'] = "";
    $config['uri_protocol'] = "REQUEST_URI";

    And my current .htaccess is:

    Options +FollowSymLinks
    RewriteEngine On
    RewriteBase /
    RewriteCond %{HTTP_HOST} ^plugb.com$ [NC]
    RewriteRule ^(.*)$ http://www.plugb.com/$1 [R=301,L]
    RewriteCond $1 !^(index\.php|files|robots\.txt)
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php/$1 [L,QSA]

    The www prefix part works fine, but the 'index.php' part doesn't. If you want to check the web page, here it is: http://www.plugb.com/index.php/home

    Read the article

  • svnserve not strictly required?

    - by Kev
    I was reading the Red Bean book and noticed this paragraph:

    Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called “Supporting Multiple Repository Access Methods”). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs—from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

    I realized that, since I'm the only one accessing the repository, ever, none of these caveats seems to apply. Can I safely shut down svnserve, then, and only ever have to worry about upgrading my TortoiseSVN client, not both the client and the server, whenever there's a new version out? (I've tried it already--just needed to use the Relocate feature to switch from svn:// to file://--but I wanted to make sure something wouldn't be sneaking up on me if I left it this way.)

    Read the article

  • Can I use the CSS :visited pseudo class on 'wildcard' links?

    - by rabidpebble
    Let's say I have a site with multiple links as follows:

    www.example.com/product/1
    www.example.com/product/2
    www.example.com/product/3

    I also append tracking info to links from time to time so that I can see how my site is being used; e.g., if somebody visits the products page from the product browser I set a ref parameter:

    www.example.com/product/1&ref=pb
    www.example.com/product/2&ref=pb
    www.example.com/product/3&ref=pb

    The problem with this is that if the user visits a link of the first type and then views a link of the second type, the :visited pseudo-class doesn't seem to apply, because the browser only matches on exact URLs. Is there any way to have "wildcards" apply to links in this sense, so that when the user has seen either the first type or the second type of link it is highlighted? Note: I cannot change this "ref" architecture; it is inherited.

    Read the article

  • How reliable are URIs like /index.php/seo_path

    - by Boldewyn
    I noticed that sometimes (especially where mod_rewrite is not available) this path scheme is used:

    http://host/path/index.php/clean_url_here
    --------------------------^

    This seems to work, at least in Apache, where index.php is called, and one can query the /clean_url_here part via $_SERVER['PATH_INFO']. PHP even kind of advertises this feature. Also, e.g., the CodeIgniter framework uses this technique as the default for its URLs. The question: How reliable is the technique? Are there situations where Apache doesn't call index.php but tries to resolve the path? What about lighttpd, nginx, IIS, AOLserver? Is this a ServerFault question? I think it's got more to do with using this feature inside PHP code, therefore I'm asking here.
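
    As a point of reference, PATH_INFO is part of the CGI convention rather than something PHP-specific, which is one reason it is as portable as it is; a minimal WSGI sketch showing the same split (whether the extra path reaches the script at all still depends on server configuration, e.g. Apache's AcceptPathInfo):

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # For /index.php/clean_url_here-style requests, CGI-style servers put
            # the script part in SCRIPT_NAME and the trailing part in PATH_INFO.
            body = "SCRIPT_NAME=%r PATH_INFO=%r" % (
                environ.get("SCRIPT_NAME", ""), environ.get("PATH_INFO", ""))
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [body.encode()]

        if __name__ == "__main__":
            # http://localhost:8000/clean_url_here -> PATH_INFO='/clean_url_here'
            make_server("", 8000, app).serve_forever()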

    Read the article

  • Chained address rewrite in Wordpress

    - by kemp
    What I need to do is rewrite this address:

    (1) http://localhost/wordpress/fake/text-value

    to

    (2) http://localhost/wordpress/gallery?somevar=text-value

    Notes: the remapping must be transparent (the user always has to see address (1)), and gallery is a permalink to a WordPress page, not a real address. I basically need to rewrite the address first (to modify it) and then feed it back to mod_rewrite again (to let WordPress parse it its own way).

    Problems: if I simply do

    RewriteRule ^fake$ http://localhost/wordpress/gallery [L]

    it works, but the address in the browser changes, which is no good. If I do

    RewriteRule ^fake$ /wordpress/gallery [L]

    I get a 404. I tried different flags instead of [L] but to no avail. How can I get this to work?

    Read the article

  • Using axWebBrowser control to capture page request and postdata.

    - by Arjo
    I'm writing a small utility to capture all requests made to a web server from a Windows application using the axWebBrowser control. So far I have the following working: as the user traverses the website, clicking on links, posting forms, etc., I capture the web page and the data being sent to the server to request the next page. Where I run into a stumbling block is Ajax calls. There are a number of drop-down boxes that filter down the selection as the user types in the search term; I would like to capture the page/script that is called and the data being sent. Any hints or advice would be greatly appreciated.

    Read the article

  • ASP.NET MVC Action with ApplicationPath

    - by Leandro
    Hi, I'm creating an MVC application and I'll use it under a subdomain like _http://myapp.mycompany.com. This subdomain points to the app's subdirectory, but my actions are always generated with the applicationPath (subdirectory): _http://myapp.mycompany.com/myapp/Home/About. I want _http://myapp.mycompany.com/Home/About only, without the applicationPath prefix. Is there any configuration for this? I'm using the correct methods:

    <%= Html.ActionLink("About", "About", "Home") %>

    Is it possible in ASP.NET routing to render actions without the ApplicationPath prefix? (Please ignore the underscore prefix in the URLs; I'm new to Stack Overflow and can't post valid URLs.)

    Read the article

  • How come I cannot make my JavaScript 'executable' in an address bar?

    - by imHavoc
    The second link does not work like the first one. How come?

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
    <title>Dynamic CSS Properties</title>
    <script language="JavaScript">
    function change(){
        //document.getElementById("box1").style.visibility = "visible";
        var spanArray = document.getElementsByTagName('span');
        var number_spans = spanArray.length ;
        for( var i = 0; i < number_spans ; i++ ){
            var target = spanArray[ i ] ;
            // do something with target like set visibility
            target.style.visibility = "visible";
        }
    }
    function change2(){
        var spanArray=document.getElementsByTagName('span');var number_spans=spanArray.length;for(var i=0;i<number_spans;i++){var target=spanArray[i];target.style.visibility="visible";}
    }
    </script>
    </head>
    <body>
    <a href="javascript:change2();">Change</a>
    <br />
    <a href="javascript:var spanArray=document.getElementsByTagName('span');va r number_spans=spanArray.length;for(var i=0;i<number_spans;i++){var target=spanArray[i];target.style.visibility='visible';}; ">Show Spans</a>
    <br />
    <div style="position: relative; overflow: hidden;"><center>
    <br><br>
    <font size="5" color="blue">
    1. just press the <img src="http://up203.siz.co.il/up1/jw2k4az1imny.jpg"> button on the top to see the picture i promise you its so funny!!!!:
    <br><br><br>
    <span style="background: none repeat scroll 0% 0% white;"><span style="visibility: hidden;">
    <a onmousedown="UntrustedLink.bootstrap($(this), &quot;77a0d&quot;, event)" rel="nofollow" target="_blank" onclick="(new Image()).src = '/ajax/ct.php?app_id=4949752878&amp;action_type=3&amp;post_form_id=3917211492ade40ee468fbe283b54b3b&amp;position=16&amp;' + Math.random();return true;" href="http://thebigbrotherisrael.blogspot.com/2010/04/all-family-guy-characters-in-real-life.html">Press here to see the picture!!!</a>
    </span><span style="visibility: visible;"></span></span></font></center></div>
    </body>
    </html>

    Read the article

  • Using Website Information Without WebView

    - by Mr. Monkey
    I am very new to this, and I'm mostly looking for what I need to study to be able to accomplish this. What I want to do is keep the GUI I have built for my app, but pull the information from a website. If I have a website that looks like this (sorry, I can't post pics yet):

    http:// dl.dropbox.com/u/7037695/ErrorCodeApp/FromWebsite.PNG

    (the full website can be seen at http://www.atmequipment.com/Error-Codes), what would I need from the website so that if a user entered an error code here:

    http:// dl.dropbox.com/u/7037695/ErrorCodeApp/InApp.PNG

    it would use the search from the website and populate the error description in my app? I know this is a huge question; I'm just looking for what is actually needed to accomplish it, and then I can start researching from there. Or is it even possible?
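
    At a high level this is usually done by submitting the same search request the website's own form submits and then parsing the answer out of the response. A rough sketch of that pattern in Python; the endpoint, parameter name, markup, and error code below are hypothetical stand-ins, since the real form and response would need to be inspected first (and the site's terms of use checked):

        import re
        import urllib.parse
        import urllib.request

        def lookup_error_code(code):
            query = urllib.parse.urlencode({"search": code})       # hypothetical parameter
            url = "http://www.example.com/error-codes?" + query    # hypothetical endpoint
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
            # Hypothetical markup: <td class="description">...</td>
            match = re.search(r'<td class="description">(.*?)</td>', page, re.S)
            return match.group(1).strip() if match else None

        print(lookup_error_code("12345"))   # hypothetical error code

    If the site offers an API or feed, calling that instead of scraping HTML is usually far more robust; in a mobile app the same request would be made with the platform's HTTP client rather than a WebView.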

    Read the article
