Search Results

Search found 18563 results on 743 pages for 'url shorteners'.


  • Why am I getting a new session ID on every page fetch in my Perl WWW::Mechanize script?

    - by Phill Pafford
    So I'm scraping a site that I have access to via HTTPS. I can log in and start the process, but each time I hit a new page (URL) the session ID cookie changes. How do I keep the logged-in session ID cookie?

        #!/usr/bin/perl -w
        use strict;
        use warnings;
        use WWW::Mechanize;
        use HTTP::Cookies;
        use LWP::Debug qw(+);
        use HTTP::Request;
        use LWP::UserAgent;
        use HTTP::Request::Common;

        my $un = 'username';
        my $pw = 'password';
        my $url = 'https://subdomain.url.com/index.do';

        my $agent = WWW::Mechanize->new(cookie_jar => {}, autocheck => 0);
        $agent->{onerror} = \&WWW::Mechanize::_warn;
        $agent->agent('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100407 Ubuntu/9.10 (karmic) Firefox/3.6.3');
        $agent->get($url);
        $agent->form_name('form');
        $agent->field(username => $un);
        $agent->field(password => $pw);
        $agent->click("Log In");

        print "After Login Cookie: ";
        print $agent->cookie_jar->as_string();
        print "\n\n";

        my $searchURL = 'https://subdomain.url.com/search.do';
        $agent->get($searchURL);

        print "After Search Cookie: ";
        print $agent->cookie_jar->as_string();
        print "\n";

    The output:

        After Login Cookie: Set-Cookie3: JSESSIONID=367C6D; path="/thepath"; domain=subdomina.url.com; path_spec; secure; discard; version=0
        After Search Cookie: Set-Cookie3: JSESSIONID=855402; path="/thepath"; domain=subdomain.com.com; path_spec; secure; discard; version=0

    Also, I think the site requires a client certificate (well, in the browser it does). Would this be the correct way to add it?

        $ENV{HTTPS_CERT_FILE} = 'SUBDOMAIN.URL.COM';  ## Insert this after the use HTTP::Request...

    And for the certificate, is using the first option in this list correct?

        X.509 Certificate (PEM)
        X.509 Certificate with chain (PEM)
        X.509 Certificate (DER)
        X.509 Certificate (PKCS#7)
        X.509 Certificate with chain (PKCS#7)
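
    For comparison, the same keep-one-cookie-jar idea sketched with Python's requests library: a single Session object stores the server-issued cookie and resends it on every request, and a client certificate can be attached to the same Session. The host, form field names, and certificate paths below are placeholders, not details from the site in question.

        # Minimal sketch (Python, requests): reuse one Session so the session cookie
        # issued at login is sent back on every later request. Host, field names and
        # cert paths are placeholders.
        import requests

        session = requests.Session()
        session.headers["User-Agent"] = "Mozilla/5.0"

        # Optional client certificate, if the site really requires one (PEM cert + key).
        session.cert = ("client_cert.pem", "client_key.pem")

        login_url = "https://subdomain.example.com/index.do"
        search_url = "https://subdomain.example.com/search.do"

        # Log in; the JSESSIONID cookie returned here is stored in session.cookies.
        session.post(login_url, data={"username": "username", "password": "password"})
        print("After login:", session.cookies.get_dict())

        # The same Session presents the same cookie here, so the server keeps the session.
        session.get(search_url)
        print("After search:", session.cookies.get_dict())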

    Read the article

  • Testing file existence using NSURL

    - by Peter Hosey
    Snow Leopard introduced many new methods to use NSURL objects to refer to files, not pathnames or Core Services' FSRefs. However, there's one task I can't find a URL-based method for: Testing whether a file exists. I'm looking for a URL-based version of -[NSFileManager fileExistsAtPath:]. Like that method, it should return YES if the URL describes anything, whether it's a regular file, a directory, or anything else. I could attempt to look up various resource values, but none of them are explicitly guaranteed to not exist if the file doesn't, and some of them (e.g., NSURLEffectiveIconKey) could be costly if it does. I could just use NSFileManager's fileExistsAtPath:, but if there's a more modern method, I'd prefer to use that. Is there a simple method or function in Cocoa, CF, or Core Services that's guaranteed/documented to tell me whether a given file (or file-reference) URL refers to a file-system object that exists?

    Read the article

  • What's the best method in ASP.NET to obtain the current domain?

    - by Graphain
    Hi, I am wondering what the best way to obtain the current domain is in ASP.NET? For instance:

        http://www.domainname.com/subdir/ should yield http://www.domainname.com
        http://www.sub.domainname.com/subdir/ should yield http://sub.domainname.com

    As a guide, I should be able to add a URL like "/Folder/Content/filename.html" (say, as generated by Url.RouteUrl() in ASP.NET MVC) straight onto the result and it should work.
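
    Independent of ASP.NET, the general technique is to take the current request's absolute URL and keep only its scheme and authority. A small Python sketch of the idea, using the example URLs above:

        # Illustration: keep scheme + host, drop the path and everything after it.
        from urllib.parse import urlsplit, urlunsplit

        def site_root(url):
            parts = urlsplit(url)
            # Rebuild the URL from the scheme and network location only.
            return urlunsplit((parts.scheme, parts.netloc, "", "", ""))

        print(site_root("http://www.domainname.com/subdir/"))      # http://www.domainname.com
        print(site_root("http://www.sub.domainname.com/subdir/"))  # http://www.sub.domainname.com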

    Read the article

  • Running JavaScript code when a browser bookmark is clicked

    - by Arjun Vasudevan
    I've written code that successfully creates a bookmark in any of the following browsers: IE, Firefox and Opera.

        function bookmark() {
            var title = 'Google';
            var url = 'http://google.com';
            if (document.all) {
                // Check if the browser is Internet Explorer
                window.external.AddFavorite(url, title);
            } else if (window.sidebar) {
                // If the given browser is Mozilla Firefox
                window.sidebar.addPanel(title, url, "");
            } else if (window.opera && window.print) {
                // If the given browser is Opera
                var bookmark_element = document.createElement('a');
                bookmark_element.setAttribute('href', url);
                bookmark_element.setAttribute('title', title);
                bookmark_element.setAttribute('rel', 'sidebar');
                bookmark_element.click();
            }
        }

    Now I want my bookmark to run a piece of JavaScript code instead of navigating to Google when the user clicks on it.

    Read the article

  • pdf file download

    - by rex
    Once a member fills out the sign-up form, they can download the PDF file. Currently, the PDF file is linked directly in the HTML, which means that anyone with the URL can download it. What's the best way to protect the page so that the user doesn't see the URL to the PDF file? I tried creating a Flash file and linking the URL from Flash using the following:

        var myPDF = new URLRequest("temp/test.pdf");
        navigateToURL(myPDF);

    but it opens a new window and shows the URL! Is there a way to make the browser force a download of the file instead of opening it in a new window? Thanks, Rex
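
    One common approach (an assumption on my part, not something from the question): keep the PDF outside the public web root and stream it through a server-side handler that checks the member's session and sends the file as an attachment, so the real path is never exposed. A minimal Python/Flask sketch; the route, file path, and session key are made up.

        # Sketch: gate the PDF behind a login check and stream it with a download header.
        from flask import Flask, abort, send_file, session

        app = Flask(__name__)
        app.secret_key = "change-me"  # placeholder

        @app.route("/download/report")
        def download_report():
            if not session.get("member_id"):   # only members who signed up
                abort(403)
            # The file lives outside the web root; the client never sees this path.
            return send_file("/srv/private/files/report.pdf",
                             as_attachment=True,
                             download_name="report.pdf")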

    Read the article

  • mod_rewrite and htaccess

    - by chris
    I have set up a few rewrite rules based on other questions, but now my CSS breaks. I used to have the URL /eshop/cart.php?products_id=bla and everything worked fine, but with my mod_rewrite URL /product/product-title/ the page loses its base directory. Is there an option to fix this, so I don't have to go back and put the full URL in all the img src tags and so on?

    Read the article

  • How do I use a spark variable in an html helper?

    - by Quintin Par
    Can I use a spark variable inside an html helper? Say we have:

        <var url="Url.Action(“get”)" />
        !{Html.Image("~/Content/up.png")}

    Now, if I need to use the url inside Html.Image as an attribute (part of the 2nd param) to get:

        <img src="~/Content/up.png" type="~/engine/get" />

    how do I go about doing it?

    Read the article

  • Parse URLs of major video streaming sites and generate appropriate code for embedding.

    - by Markus Lux
    Posting a video on tumblr.com allows you to just paste the URL of the video on YouTube, Vimeo, or wherever, and tumblr automatically does the embedding for you. I assume this is nothing more than a mapping between a URL regex and the corresponding HTML construct for embedding the video, or perhaps it parses the response of the URL and gets the construct from there. Is there already any utility, preferably in Java, for doing this? If not, how would you do it?
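
    A minimal sketch of the URL-regex-to-embed-markup mapping in Python (the question asks for Java, but the approach is the same); the patterns and iframe markup below are illustrative only, not a complete or current list:

        # Match a known video URL pattern, pull out the video ID, and drop it into
        # an embed template. Patterns and markup are examples only.
        import re

        EMBED_RULES = [
            (re.compile(r"youtube\.com/watch\?v=([\w-]+)"),
             '<iframe src="https://www.youtube.com/embed/{id}" width="560" height="315"></iframe>'),
            (re.compile(r"vimeo\.com/(\d+)"),
             '<iframe src="https://player.vimeo.com/video/{id}" width="560" height="315"></iframe>'),
        ]

        def embed_code(url):
            for pattern, template in EMBED_RULES:
                match = pattern.search(url)
                if match:
                    return template.format(id=match.group(1))
            return None  # unknown provider

        print(embed_code("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))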

    Read the article

  • Create two semi-transparent images that when stacked produce the target image

    - by posfan12
    Due to CSS limitations I am forced to stack two semi-transparent images on my website. I won't go into detail regarding the CSS, since if I can get this question answered the problem is moot. Anyway, I would like to modify image A in the GIMP such that it will look like it did originally after being stacked on top of image B. Both image A and image B have their opacities set to 50%. Image B is a solid color throughout, whereas image A has some minor details such as a gradient.

    Here's what it looks like before image B is applied on top (and what it should look like in the end):
        http://s421.photobucket.com/albums/pp292/SharkD2161/Support/Website/?action=view&current=website_testing_target_image.png

    Here's what it looks like after image B has been applied on top:
        http://s421.photobucket.com/albums/pp292/SharkD2161/Support/Website/?action=view&current=website_testing_undesired_result.png

    Thanks! Mike
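
    Under the simplifying assumption that the stacked result is just the average of the two 50%-opacity layers, result = 0.5*B + 0.5*A', the modified layer solves to A' = 2*T - B (clamped to the valid range), where T is the target appearance. A rough NumPy sketch of that arithmetic; the file names are placeholders, and real alpha compositing can differ with blend mode and background:

        # Assumes both images have the same size and mode, and that stacking averages them.
        import numpy as np
        from PIL import Image

        target  = np.asarray(Image.open("target.png"), dtype=np.float32)   # what the stack should look like
        layer_b = np.asarray(Image.open("image_b.png"), dtype=np.float32)  # the solid-colour top layer

        layer_a = np.clip(2.0 * target - layer_b, 0, 255).astype(np.uint8)
        Image.fromarray(layer_a).save("image_a_adjusted.png")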

    Read the article

  • Script to dynamically change the source of an iframe tag

    - by Ukavi
    I want code that will dynamically change the source of an iframe tag when a hyperlink is clicked. For example, I have one master page that should load content from other web pages in an iframe when a hyperlink is clicked, but I don't know how to do it. Here is my code:

        <html>
        <head>
        <title> </title>
        <style type="text/css">
        #content {
            margin-right: 30%;
            margin-bottom: 10px;
            padding: 10px;
            border: 1px solid #000;
            margin-left: 17%;
            background-repeat: no-repeat;
            width: 686;
            height: 640;
        }
        </style>
        <script type="text/javascript">
        function hyperlinkChanger(hyperLink) {
            switch (hyperLink) {
                case "home":
                    var url = "home.html";
                    return url;
                    break;
                case "about_Us":
                    var url = "about_us.html";
                    return url;
                    break;
                case "contact_Us":
                    var url = "contact_us.html";
                    return url;
                    break;
            }
        }
        </script>
        </head>
        <body>
        <a href="home.html" name="home" onClick="hyperlinkChanger('home')">Home</a>
        <br />
        <a href="about_us.html" name="about_Us" onClick="hyperlinkChanger('about_Us')">About Us</a>
        <br />
        <a href="contact_us.html" name="contact_Us" onClick="hyperlinkChanger('contact_Us')">Contact us</a>
        <br />
        <div name="content">
            <!-- please help me write a method to call the output of the function hyperlinkChanger -->
            <iframe src="hyperlink function" width="600" height="300">
            </iframe>
        </div>
        </body>
        </html>

    Read the article

  • Display using QtWebKit, whilst parsing xml

    - by Beren Scott
    I wish to use QtWebKit to load a URL for display, but that's the easy part; I can do that. What I wish to do is record / log XML as I go. My intention is to record certain details on the fly and store them in a database. My problem is how to do all of this on the fly without requesting the same URL from the server twice: once for the XML, and a second time to view the page. My hope is to implement a very fast way of recording set data as the user passes over it. For example, rather than having to type out details displayed by a website, I wish to have those details dropped into a database as the user views the website. Now, I am using QtWebKit, and I have the viewing side pretty much solved: I have a loadUrl() routine which calls load(url) inside qwebview.h. The problem is, how do I piggyback XML parsing on top of this?

    Read the article

  • DotNetNuke + XPath = Custom navigation menu DNNMenu HTML render

    - by Rui Santos
    I'm developing a skin for DotNetNuke 5 using the DNN Done Right menu component by Mark Alan, which uses XSL-T to convert the XML sitemap into an HTML navigation. The XML sitemap outputs the following structure:

        <Root>
          <root>
            <node id="40" text="Home" url="http://localhost/dnn/Home.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="0">
              <node id="58" text="Child1" url="http://localhost/dnn/Home/Child1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="1">
                <keywords>Child1</keywords>
                <description>Child1</description>
                <node id="59" text="Child1 SubItem1" url="http://localhost/dnn/Home/Child1/Child1SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2">
                  <keywords>Child1 SubItem1</keywords>
                  <description>Child1 SubItem1</description>
                </node>
                <node id="60" text="Child1 SubItem2" url="http://localhost/dnn/Home/Child1/Child1SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="0" only="0" depth="2">
                  <keywords>Child1 SubItem2</keywords>
                  <description>Child1 SubItem2</description>
                </node>
                <node id="61" text="Child1 SubItem3" url="http://localhost/dnn/Home/Child1/Child1SubItem3.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2">
                  <keywords>Child1 SubItem3</keywords>
                  <description>Child1 SubItem3</description>
                </node>
              </node>
              <node id="65" text="Child2" url="http://localhost/dnn/Home/Child2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="1">
                <keywords>Child2</keywords>
                <description>Child2</description>
                <node id="66" text="Child2 SubItem1" url="http://localhost/dnn/Home/Child2/Child2SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2">
                  <keywords>Child2 SubItem1</keywords>
                  <description>Child2 SubItem1</description>
                </node>
                <node id="67" text="Child2 SubItem2" url="http://localhost/dnn/Home/Child2/Child2SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2">
                  <keywords>Child2 SubItem2</keywords>
                  <description>Child2 SubItem2</description>
                </node>
              </node>
            </node>
          </root>
        </Root>

    My goal is to render this XML block into this HTML navigation structure using only ULs, LIs, etc.:

        <ul id="topnav">
          <li>
            <a href="#" class="home">Home</a>             <!-- Parent Node - Depth0 -->
            <div class="sub">
              <ul>
                <li><h2><a href="#">Child1</a></h2></li>  <!-- Parent Node 1 - Depth1 -->
                <li><a href="#">Child1 SubItem1</a></li>  <!-- ChildNode - Depth2 -->
                <li><a href="#">Child1 SubItem2</a></li>  <!-- ChildNode - Depth2 -->
                <li><a href="#">Child1 SubItem3</a></li>  <!-- ChildNode - Depth2 -->
              </ul>
              <ul>
                <li><h2><a href="#">Child2</a></h2></li>  <!-- Parent Node 2 - Depth1 -->
                <li><a href="#">Child2 SubItem1</a></li>  <!-- ChildNode - Depth2 -->
                <li><a href="#">Child2 SubItem2</a></li>  <!-- ChildNode - Depth2 -->
              </ul>
            </div>
          </li>
        </ul>

    Can anyone help with the XSL coding? I'm just starting out with XSL.
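
    The question asks for XSLT, which isn't attempted here; purely to illustrate the recursive node-to-list mapping, below is a rough Python sketch that walks the sitemap XML above and emits a generic nested <ul>/<li> structure (not the exact div/h2 markup of the target):

        # Walk the sitemap and emit nested lists; element/attribute names come from the XML above.
        import xml.etree.ElementTree as ET

        def render(node, depth=0):
            children = node.findall("node")
            html = "  " * depth + '<li><a href="{url}">{text}</a>'.format(
                url=node.get("url"), text=node.get("text"))
            if children:
                html += "\n" + "  " * depth + "<ul>\n"
                html += "\n".join(render(child, depth + 1) for child in children)
                html += "\n" + "  " * depth + "</ul>\n" + "  " * depth
            return html + "</li>"

        tree = ET.parse("sitemap.xml")                      # the XML structure shown above
        top_nodes = tree.getroot().findall("./root/node")   # <Root><root><node>...
        print("<ul>\n" + "\n".join(render(n) for n in top_nodes) + "\n</ul>")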

    Read the article

  • Extracting URLs (to array) in Ruby

    - by FearMediocrity
    Good afternoon, I'm learning about using regexes in Ruby and have hit a point where I need some assistance. I am trying to extract 0 to many URLs from a string. This is the code I'm using:

        sStrings = ["hello world: http://www.google.com",
                    "There is only one url in this string http://yahoo.com . Did you get that?",
                    "The first URL in this string is http://www.bing.com and the second is http://digg.com",
                    "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
                    "This string contains no urls"]

        sStrings.each do |s|
          x = s.scan(/((http|https):\/\/[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(([0-9]{1,5})?\/.[\w-]*)?)/ix)
          x.each do |url|
            puts url
          end
        end

    This is what is returned:

        http://www.google.com
        http
        .google
        nil
        nil
        http://yahoo.com
        http
        nil
        nil
        nil
        http://www.bing.com
        http
        .bing
        nil
        nil
        http://digg.com
        http
        nil
        nil
        nil
        http://is.gd/12345
        http
        nil
        /12345
        nil
        http://is.gd/4567
        http
        nil
        /4567
        nil

    What is the best way to extract only the full URLs and not the individual capture groups? Thanks Jim
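
    For illustration (in Python rather than Ruby, though the same point applies to scan): when every group in the pattern is non-capturing, the matcher returns whole URLs instead of group fragments. The pattern below is a deliberately simplified stand-in, not the one from the question:

        # Non-capturing group (?:...) means findall returns the full match, not sub-groups.
        import re

        url_pattern = re.compile(r"(?:https?)://[^\s]+", re.IGNORECASE)

        text = ("The first URL in this string is http://www.bing.com "
                "and the second is http://digg.com")

        print(url_pattern.findall(text))
        # ['http://www.bing.com', 'http://digg.com']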

    Read the article

  • How to display DOM-parsed attribute data in a ListView?

    - by Praween k
    Hi, I am building a test output for a DOM parser with a "rider" node that has 7 attributes. The URL is http://ps700.pranasystems.com/tours/8/xml/results/stage1results.xml. I want to display only the "name" and "team" attributes in the device's ListView, but it is not clear to me where to store the output for display. Please help me with how to store that data and display it in a ListView. Thanks in advance. Here is my code:

        public String getSearch(String strURL) {
            URL url;
            URLConnection urlConn = null;
            NamedNodeMap nnm = null;
            int len;
            try {
                url = new URL(strURL);
                urlConn = url.openConnection();
            } catch (IOException ioe) {
                Log.e("Could not Connect: " + ioe.getMessage(), ".");
            }
            DocumentBuilder builder = null;
            Document doc = null;
            try {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                DocumentBuilder db = dbf.newDocumentBuilder();
                doc = db.parse(urlConn.getInputStream());
                Node thisNode, currentNode, node, theAttribute;
                NodeList nchild, nodeList;
                String name;
                ArrayList<Node> result = new ArrayList<Node>();
                nodeList = doc.getElementsByTagName("rider");
                int length = nodeList.getLength();
                for (int i = 0; i < length; i++) {
                    currentNode = nodeList.item(i);
                    NamedNodeMap attributes = currentNode.getAttributes();
                    Log.i("TAG", attributes.toString());
                    for (int a = 0; a < attributes.getLength(); a++) {
                        theAttribute = attributes.item(a);
                    }
                }
                // s1.setAdapter(new ArrayAdapter<Node>(this,
                //         android.R.layout.simple_list_item_1, result));
            } catch (ParserConfigurationException pce) {
                Log.e("Could not Parse XML:" + pce.getMessage(), ".");
            } catch (SAXException se) {
                Log.e("Could not Parse XML: " + se.getMessage(), ".");
            } catch (IOException ioe) {
                Log.e("Invalid XML: " + ioe.getMessage(), ".");
            }
            return strURL;
        }
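
    As an illustration of just the extraction step (in Python, not Android), collecting the name and team attributes of each rider element into a list of display strings looks roughly like this; on Android those strings would then back the ListView's adapter:

        # Pull "name" and "team" from every <rider> element in the feed from the question.
        import urllib.request
        import xml.etree.ElementTree as ET

        url = "http://ps700.pranasystems.com/tours/8/xml/results/stage1results.xml"
        with urllib.request.urlopen(url) as response:
            root = ET.parse(response).getroot()

        rows = ["{} - {}".format(r.get("name"), r.get("team")) for r in root.iter("rider")]
        print(rows[:5])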

    Read the article

  • facebook application error

    - by moustafa
    My application needs the Facebook Facepile plugin (http://developers.facebook.com/docs/reference/plugins/facepile). I tried it, but I am getting this error:

        The Facebook Connect cross-domain receiver URL (http://static.ak.fbcdn.net/connect/xd_proxy.php#?=&cb=f223d517566e616&origin=http%3A%2F%2Fpromolife.com.au%2Ff3e13728ba8ec8&relation=parent.parent&transport=postmessage) must have the application's Connect URL (http://www.testsite.com.au/) as a prefix. You can configure the Connect URL in the Application Settings Editor.

    Why am I getting this error? Please help me.

    Read the article

  • Do not re-create the session ID with .NET cookieless sessions

    - by nLL
    Due to my target audience I am using .NET cookieless sessions in auto-detect mode, and from time to time I get visitors redirected with a cookieless session URL like domain.com/(S(jdhdghdghd))/default.aspx. The problem is that if I call this URL after the session has expired, .NET will re-create it. What I want to find is a way to force .NET to create another session ID instead of reusing the one that came with the URL. Is that possible?

    Read the article

  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC2, which came with VS2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing to MVC as I'm not that far along at this point. I have some questions about the routing capabilities, namely: is routing required to use MVC, or can I more or less ignore it? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)? The reason I don't want to use routing is that I've already defined a custom URL 'rewrite' mechanism of my own (which fires on session_start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's aspx overhead-free approach seems like a better fit based on how I've already started to build the application (I am not using viewstate at all, for example). I guess my big concern is whether the routing can be ignored, or if I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how would I do that? As a new routing routine, or stick with session_start (if that's even possible)? Lastly, I don't want to use anything even remotely 'intelligent/readable' for the URL - for a site like StackOverflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it would seem to me that the friendlier MVC routing URLs (which indirectly show method names) could pose a security risk on a private, non-public website app like I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely - is this possible?

    Read the article

  • Django Template For Loop Removing <img> Self-Closing

    - by Zack
    Django's for loop seems to be removing all of my <img> tag's self-closing...ness (/>). In the template, I have this code:

        {% for item in item_list %}
          <li>
            <a class="left" href="{{ item.url }}">{{ item.name }}</a>
            <a class="right" href="{{ item.url }}">
              <img src="{{ item.icon.url }}" alt="{{ item.name }} Logo." />
            </a>
          </li>
        {% endfor %}

    It outputs this:

        <li>
          <a class="left" href="/some-url/">This is an item</a>
          <a class="right" href="/some-url/">
            <img src="/media/img/some-item.jpg" alt="This is an item Logo.">
          </a>
        </li>

    As you can see, the <img> tag is no longer closed, and thus the page doesn't validate. This isn't a huge issue since it'll still render properly in all browsers, but I'd like to know how to solve it. I've tried wrapping the whole for loop in {% autoescape off %}...{% endautoescape %} but that didn't change anything. All other self-closed <img> tags in the document outside the for loop still properly close.

    Read the article

  • Unable to control requests for static files on Google App Engine

    - by dan
    My simple GAE app is not resolving requests against the /static directory when the URL is multiple levels deep.

    Directory structure:

        /app/static/css/main.css

    App: I have two handlers, one for /app and one for /app/new.

    app.yaml:

        handlers:
        - url: /static
          static_dir: static
        - url: /app/static/(.*)
          static_dir: static\1
        - url: /app/.*
          script: app.py
          login: required

    When the page is loaded from /app, the HTTP request for main.css is successful:

        GET /static/css/main.css

    But when the page is loaded from /app/new, I see the following request:

        GET /app/static/css/main.cs

    That's why I tried adding the /app/static/(.*) handler in app.yaml, but it is not having any effect.

    Read the article

  • django-social-auth for Facebook is redirecting home and not logging in

    - by Scott Rogowski
    I have had django-social-auth working for Google for quite some time now, but am having problems with Facebook. I am at the point where clicking on the /login/facebook/ link takes me to the Facebook authorization page. I then click "go to app" and it redirects me to my home page, but it does not log in or create a user, and it puts a strange "#=" onto the end of my URL. Reading up on that here (https://developers.facebook.com/blog/post/552/) and here (https://github.com/omab/django-social-auth/issues/199), it seems that would happen if the redirect URI was not defined. However, in my Facebook app settings I have the following (replacing my site with example.com):

        App Namespace: "example"
        Site URL: "http://example.com/complete/facebook/"
        Site Domain: "example.com"
        Sandbox Mode: "On"
        Post-Authorize Redirect URL: "http://apps.facebook.com/example/"
        Deauthorize URL: "http://www.example.com/"
        Post-Authorize URL: "http://example.com/complete/facebook/"

    The request that django-social-auth is sending to Facebook is (replacing my info again):

        https://www.facebook.com/dialog/oauth?scope=email&state=*&redirect_uri=http%3A%2F%2Fexample.com%2Fcomplete%2Ffacebook%2F%3Fredirect_state%3D***&client_id=*

    The /complete/facebook/ is what is in the documentation, and Google works as /complete/google/. What am I missing here?

    Read the article

  • Code in FOR loop not executed

    - by androniennn
    I have a ProgressDialog that retrieves data from a database in the background by executing a PHP script. I'm using Google's Gson library. The PHP script works well when executed from a browser:

        {"surveys":[{"id_survey":"1","question_survey":"Are you happy with the actual government?","answer_yes":"50","answer_no":"20"}],"success":1}

    However, the ProgressDialog's background treatment is not working well:

        @Override
        protected Void doInBackground(Void... params) {
            String url = "http://192.168.1.4/tn_surveys/get_all_surveys.php";
            HttpGet getRequest = new HttpGet(url);
            Log.d("GETREQUEST", getRequest.toString());
            try {
                DefaultHttpClient httpClient = new DefaultHttpClient();
                Log.d("URL1", url);
                HttpResponse getResponse = httpClient.execute(getRequest);
                Log.d("GETRESPONSE", getResponse.toString());
                final int statusCode = getResponse.getStatusLine().getStatusCode();
                Log.d("STATUSCODE", Integer.toString(statusCode));
                Log.d("HTTPSTATUSOK", Integer.toString(HttpStatus.SC_OK));
                if (statusCode != HttpStatus.SC_OK) {
                    Log.w(getClass().getSimpleName(), "Error " + statusCode + " for URL " + url);
                    return null;
                }
                HttpEntity getResponseEntity = getResponse.getEntity();
                Log.d("RESPONSEENTITY", getResponseEntity.toString());
                InputStream httpResponseStream = getResponseEntity.getContent();
                Log.d("HTTPRESPONSESTREAM", httpResponseStream.toString());
                Reader inputStreamReader = new InputStreamReader(httpResponseStream);
                Gson gson = new Gson();
                this.response = gson.fromJson(inputStreamReader, Response.class);
            } catch (IOException e) {
                getRequest.abort();
                Log.w(getClass().getSimpleName(), "Error for URL " + url, e);
            }
            return null;
        }

        @Override
        protected void onPostExecute(Void result) {
            super.onPostExecute(result);
            Log.d("HELLO", "HELLO");
            StringBuilder builder = new StringBuilder();
            Log.d("STRINGBUILDER", "STRINGBUILDER");
            for (Survey survey : this.response.data) {
                String x = survey.getQuestion_survey();
                Log.d("QUESTION", x);
                builder.append(String.format("<br>ID Survey: <b>%s</b><br> <br>Question: <b>%s</b><br> <br>Answer YES: <b>%s</b><br> <br>Answer NO: <b>%s</b><br><br><br>",
                        survey.getId_survey(), survey.getQuestion_survey(), survey.getAnswer_yes(), survey.getAnswer_no()));
            }
            Log.d("OUT FOR", "OUT");
            capitalTextView.setText(Html.fromHtml(builder.toString()));
            progressDialog.cancel();
        }

    The HELLO log is displayed. The STRINGBUILDER log is displayed. The QUESTION log is NOT displayed. The OUT FOR log is displayed.

    Survey class:

        public class Survey {
            int id_survey;
            String question_survey;
            int answer_yes;
            int answer_no;

            public Survey() {
                this.id_survey = 0;
                this.question_survey = "";
                this.answer_yes = 0;
                this.answer_no = 0;
            }

            public int getId_survey() { return id_survey; }
            public String getQuestion_survey() { return question_survey; }
            public int getAnswer_yes() { return answer_yes; }
            public int getAnswer_no() { return answer_no; }
        }

    Response class:

        public class Response {
            ArrayList<Survey> data;

            public Response() {
                data = new ArrayList<Survey>();
            }
        }

    Any help on why the for loop is not executed would be appreciated. Thank you for helping.
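
    For reference, a quick Python sketch of the shape of the JSON above: the survey array sits under the top-level key "surveys", so whatever field the deserializer maps has to line up with that key:

        # Inspect the sample payload returned by the PHP script (copied from above).
        import json

        payload = ('{"surveys":[{"id_survey":"1","question_survey":'
                   '"Are you happy with the actual government?",'
                   '"answer_yes":"50","answer_no":"20"}],"success":1}')

        data = json.loads(payload)
        for survey in data["surveys"]:   # the list lives under the "surveys" key
            print(survey["question_survey"], survey["answer_yes"], survey["answer_no"])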

    Read the article

  • SharePoint FullTextQuery doesn't work

    - by user330309
    I have three scopes:

        Scope A with rule: include http://mywebapp/lists/myList
        Scope B with rule: include FileExtension aspx
        Scope C with rule: include http://mywebapp/mysitecollection2/

    and some managed properties mapped to columns from myList: ows_contenttype, ows_created (Property1, 2, 3).

        select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'A'

    returns 15 results.

        select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'B'

    returns 50 results.

        select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'C'

    returns 10 results.

    So why does this query:

        select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'A' or scope = 'B' or scope = 'C'

    not return 75 results? It returns 15 or so.

        FullTextSqlQuery sqlQuery = new FullTextSqlQuery(site);
        sqlQuery.ResultTypes = ResultType.RelevantResults;
        sqlQuery.QueryText = sql;
        sqlQuery.TrimDuplicates = true;
        sqlQuery.EnableStemming = true;
        ResultTableCollection results = sqlQuery.Execute();

    Read the article

  • Why is this HTTP request continually looping?

    - by alex
    I'm probably overlooking something really obvious here. Comments are in to help explain any library-specific code.

        public function areCookiesEnabled()
        {
            $random = 'cx67ds';

            // set cookie
            cookie::set('test_cookie', $random);

            // try and get cookie, if not set to false
            $testCookie = cookie::get('test_cookie', false);

            $cookiesAppend = '?cookies=false';

            // were we able to get the cookie equal ?
            $cookiesEnabled = ($testCookie === $random);

            // if $_GET['cookies'] === false , etc try and remove $_GET portion
            if ($this->input->get('cookies', false) === 'false' AND $cookiesEnabled) {
                url::redirect(str_replace($cookiesAppend, '', url::current())); // redirect
                return false;
            }

            // all else fails, add a $_GET[]
            if ( ! $cookiesEnabled) {
                url::redirect(url::current().$cookiesAppend);
            }

            return $cookiesEnabled;
        }
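
    For comparison, the same set-a-test-cookie-then-redirect pattern sketched in Python/Flask; the route and cookie names are made up:

        # Set a test cookie, redirect once, then check whether the browser sent it back.
        from flask import Flask, make_response, redirect, request

        app = Flask(__name__)

        @app.route("/check-cookies")
        def check_cookies():
            if request.cookies.get("test_cookie"):
                return "Cookies are enabled."
            if request.args.get("retry"):           # already redirected once, still no cookie
                return "Cookies appear to be disabled."
            response = make_response(redirect("/check-cookies?retry=1"))
            response.set_cookie("test_cookie", "cx67ds")
            return response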

    Read the article

  • Getting error 400 / 404 - HttpUtility.UrlEncode not encoding full string?

    - by Justin808
    Why do the following URLs give me the IIS errors below?

        A)  http://192.168.1.96/cms/View.aspx/Show/Small+test'
        A2) http://192.168.1.96/cms/View.aspx/Show/Small%20test'   <-- this works, but is not the result from HttpUtility.UrlEncode()
        B)  http://192.168.1.96/cms/View.aspx/Show/'%26$%23funky**!!~''+page

    Error for A:

        HTTP Error 404.11 - Not Found
        The request filtering module is configured to deny a request that contains a double escape sequence.

    Error for B:

        HTTP Error 400.0 - Bad Request
        ASP.NET detected invalid characters in the URL.

    The last part of the URL after /Show/ is the result of sending the text through HttpUtility.UrlEncode(), so according to Microsoft it is URL-encoded correctly. If I use HttpUtility.UrlPathEncode() rather than HttpUtility.UrlEncode() I get the A2 result, but B ends up looking like:

        http://192.168.1.96/TVCMS-CVJZ/cms/View.aspx/Show/'&$#funky**!!~''%20page

    which is still wrong. Does Microsoft know how to URL-encode at all? Is there a function someone has written to do it the correct way?
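
    The underlying distinction, illustrated in Python since it is language-agnostic: query-string encoding turns spaces into '+', while path-segment encoding uses %20 and escapes most other characters; a value that ends up in a URL path needs the path flavour:

        # Query-style vs path-style URL encoding of the same value.
        from urllib.parse import quote, quote_plus

        value = "'&$#funky**!!~'' page"

        print(quote_plus(value))       # query-string style: space becomes '+'
        print(quote(value, safe=""))   # path-segment style: space becomes %20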

    Read the article

  • Google Custom Search Engine not giving the expected search result.

    - by iecut
    Hi, I have been trying to create a new Google Custom Search Engine, but when I try some queries the search engine does not give me the expected search results. On some queries it works fine, but on other queries it says "no result". I added the URL of the website that I wanted to search, but there are certain pages and keywords that do not come up in the search results when I search for a keyword from that page. I tried adding both the main page URL and the URL of the sub-page that I want to search, but nothing is working. Some sub-pages of the main URL do come up in the search results.

    Read the article
