Search Results

Search found 18288 results on 732 pages for 'meta search'.

Page 160/732

  • flickr.photos.search strange behavior with nested xml response

    - by JohnIdol
    I am querying flickr with the following request:

        http://www.flickr.com/services/rest/?method=flickr.photos.search&api_key=MY_KEY&text=cork&per_page=12&&format=rest

    If I put the above URL in the browser I get the following, as I would expect:

        <rsp stat="ok">
          <photos page="1" pages="22661" perpage="12" total="271924">
            <photo id="4587565363" owner="46277999@N05" secret="717118838d" server="4029" farm="5" title="06052010(001)" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587466773" owner="25901874@N06" secret="32c5a1a57f" server="4002" farm="5" title="black wine cork 2 BW" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4588084730" owner="25901874@N06" secret="b27eef5635" server="4032" farm="5" title="black wine cork 2" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587457789" owner="43115275@N00" secret="ae0daa0ab6" server="4034" farm="5" title="Matthew" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587443837" owner="49967920@N07" secret="2b4c1a58de" server="4066" farm="5" title="Two car garage" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4588050906" owner="45827769@N06" secret="72de138f7e" server="4067" farm="5" title="Dabie Mountains, Central China" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587979300" owner="36531359@N00" secret="e5f8b30734" server="3299" farm="4" title="stroll" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587304769" owner="7904649@N02" secret="995062f550" server="4034" farm="5" title="IMG_4325" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587306827" owner="7904649@N02" secret="b92d7ff916" server="4050" farm="5" title="IMG_4329" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587311657" owner="7904649@N02" secret="bb1b34ccf8" server="4053" farm="5" title="IMG_4336" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587933110" owner="7904649@N02" secret="f733b1d482" server="3300" farm="4" title="IMG_4334" ispublic="1" isfriend="0" isfamily="0" />
            <photo id="4587930650" owner="7904649@N02" secret="9bc202b3ed" server="4018" farm="5" title="IMG_4331" ispublic="1" isfriend="0" isfamily="0" />
          </photos>
        </rsp>

    Now I've got some very simple PHP that queries the exact same URL, called from JavaScript through AJAX - it all works, as I can see by putting a breakpoint in my JavaScript function that handles the response:

        function HandleResponse(response) {
            document.getElementById('ResponseDiv').innerHTML = response;
        }

    Looking at the response value in the snippet above I see what I expect (XML formed as above) - then the strangest thing happens: when I inspect the DOM, instead of the photo elements all being children of photos, they are all nested within each other. What am I missing - how is this possible, given that the response string is definitely formed as I would expect it to be?
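    A likely explanation (not confirmed in the entry) is that assigning raw XML to innerHTML hands it to the browser's HTML parser, which does not treat <photo /> as self-closing, so each photo element appears to swallow the ones that follow it. A minimal sketch of a safer approach, assuming the AJAX call delivers the XML as a string, is to parse it explicitly and render only what is needed:

        function HandleResponse(response) {
            // Parse the reply as XML instead of letting the HTML parser guess at the markup.
            var doc = new DOMParser().parseFromString(response, "application/xml");
            var photos = doc.getElementsByTagName("photo");
            var titles = [];
            for (var i = 0; i < photos.length; i++) {
                titles.push(photos[i].getAttribute("title"));
            }
            // Build the display markup yourself rather than injecting the raw XML.
            document.getElementById("ResponseDiv").textContent = titles.join(", ");
        }

    If the request is made with XMLHttpRequest and the server sends an XML content type, the responseXML property gives the same parsed document without the explicit DOMParser step.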

    Read the article

  • Search row with highest number of cells in a row with colspan

    - by user593029
    What is an efficient way to find the row with the highest number of cells in a big table containing numerous colspan attributes (merged cells - the colspan should be ignored, so in the example below the highest number of cells is 4, in the first row)? Is it JS/jQuery with a regular expression, or just a loop with sorting? I found the link below explaining the use of a regex - is that the ideal way? Can someone suggest pseudo code for this (a sketch follows the markup)? High cpu consumption due to a jquery regex patch

        <table width="156" height="84" border="0">
          <tbody>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px" colspan="2"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
          </tbody>
        </table>
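    A regex should not be needed here; counting the td elements in each row and keeping the maximum is a single linear pass over the table. A minimal sketch, assuming the table can be reached with document.querySelector('table') and that colspan values are simply ignored (each merged cell counts once):

        // Count the cells in each row, ignoring colspan, and report the maximum.
        function maxCellsPerRow(table) {
            var max = 0;
            var rows = table.rows;                  // all <tr> elements, including those inside <tbody>
            for (var i = 0; i < rows.length; i++) {
                var count = rows[i].cells.length;   // <td>/<th> children; colspan is not expanded
                if (count > max) {
                    max = count;
                }
            }
            return max;
        }

        console.log(maxCellsPerRow(document.querySelector("table")));  // 4 for the markup above

    If merged cells should instead count as multiple columns, sum each cell's colspan attribute (defaulting to 1) rather than using cells.length.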

    Read the article

  • Form Search Onkeyup event

    - by Aryan
    I have a form that should automatically search when I finish entering the 10th character in the text field, but the code below searches on each and every character I enter. I only want the result after the 10th character is typed, not for every keystroke. I have used the onkeyup event and set a length check of 10, but it still searches on every character. Please help (a possible fix is sketched after the markup).

        <body OnKeyPress="return disableKeyPress(event)">
        <section id="content" class="container_12 clearfix" data-sort=true>
        <center><table class='dynamic styled with-prev-next' data-table-tools='{'display':true}' align=center>
        <script>
        function disableEnterKey(e) {
            var key;
            if (window.event)
                key = window.event.keyCode; //IE
            else
                key = e.which; //firefox
            return (key != 13);
        }

        function showUser(str) {
            if (str == "") {
                document.getElementById("txtHint").innerHTML = "";
                return;
            }
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById("txtHint").innerHTML = xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET", "resdb.php?id=" + str, true);
            xmlhttp.send();
        }
        </script>
        <script type='text/javascript'>
        //<![CDATA[
        $(window).load(function(){
            $('#id').keyup(function(){
                if(this.value.length ==10)
            });
        });//]]>
        </script>
        <form id="form" method="post" name="form">
        <tr><td><p align="center"><font size="3"><b>JNTUH - B.Tech IV Year II Semester (R07) Advance Supplementary Results - July 2012</b></font></p></td></tr>
        <td><p align="center"><b>Last Date for RC/RV : 8th August 2012</b></p></td>
        <tr><td><p align="center"></b>
        <input type="text" onkeyup="showUser(this.value)" onKeyPress="return disableEnterKey(event)" data-type="autocomplete" data-source="extras/autocomplete1.php" name="id" id="id" maxlength="10" placeholder=" Hall-Ticket Number">&emsp;</p></td></tr>
        </table>
        </center>
        </form>
        <center>
        <div id="txtHint"><b>Results will be displayed here</b></div>
        </center>
        </body>
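    One reading of the problem is that showUser is wired directly to the input's onkeyup attribute, so the length check in the separate $('#id').keyup handler (which has an empty body) never gates it. A minimal sketch, assuming the existing showUser function and the #id field, is to do the length check inside a single keyup handler and only then fire the AJAX lookup:

        // Only query the server once the hall-ticket number is complete (10 characters).
        $(function () {
            $('#id').on('keyup', function () {
                if (this.value.length === 10) {
                    showUser(this.value);   // existing AJAX helper from the question
                } else {
                    document.getElementById('txtHint').innerHTML = '<b>Results will be displayed here</b>';
                }
            });
        });

    The inline onkeyup="showUser(this.value)" attribute on the input would then be removed so the field is handled by this one listener only.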

    Read the article

  • Search Engine Marketing Ardan Michael Blum, Would You Like to Learn SEO Like a Pro?

    Learning search engine optimization (SEO) techniques along with keyword research is key to marketing yourself, your business, your website or your blog on the internet. Ardan Michael Blum is a Yahoo SEO expert out of Switzerland. He has over 9 years of experience as an SEO expert and guarantees his clients first-page rankings in the search engines. For a business, first-page rankings can mean success! But outsourcing to someone like Ardan Michael Blum is going to be costly.

    Read the article

  • Best algorithm/practice when creating a search mechanism for your database?

    - by Alex Hope O'Connor
    I have been designing a database where it is very important to provide users with a good search mechanism. So I was wondering: what are some of the best practices for using keywords to search over multiple database tables and return the relevant records? Some other things I am curious about:

    - The user's location, if they provide an address
    - The speed of the algorithm

    Additional information: I am using C# and LINQ to SQL.

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips - at the speed of thought, if possible. When running search operations in the WebCenter Content web interface, every second or fraction of a second of improvement matters. While doing some analysis of the systemdatabase tracing on a customer environment, we came across some SQL queries that were being triggered unnecessarily. These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used as part of the information displayed in the user interface.

    Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables.

    When executing a quick search by keyword we were getting 100 out of 2280 entries on the first page of the result set. With the 'FolderPathInSearchResults' configuration parameter set to 'true', the following queries appear in the systemdatabase tracing:

    100 executions of a query on the FolderFiles table, one for each of the documents displayed on the first page:

        >systemdatabase/6   12.13 11:17:48.188   IdcServer-199   1.45 ms. SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0 [Executed. Returned row(s): true]

    382 executions of a query on the folders tables - most of the documents that match the keyword criteria are at a folder depth level of three or four:

        >systemdatabase/6   12.13 11:17:48.114   IdcServer-199   2.57 ms. SELECT FolderFolders.*,FolderMetaDefaults.* FROM FolderFolders,FolderMetaDefaults WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+) AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A')) [Executed. Returned row(s): true]

    By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information.

    Now, let's consider a practical scenario:

    - Search result set page size: 100 documents
    - Average folder depth per document in the search result set: 5

    The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would have about 2000 ms (2 seconds) of server time spent gathering this information. The overall performance impact goes beyond server execution time, as this information also needs to travel from the server to the browser. If the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not being displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.
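    As a rough illustration of that estimate (a sketch only - the per-query cost and folder depth will vary by environment), the extra work grows linearly with both the page size and the average folder depth:

        // Rough estimate of the extra folder-path work per search results page.
        function folderPathOverhead(pageSize, avgFolderDepth, msPerQuery) {
            var folderFilesQueries = pageSize;                  // one FolderFiles lookup per document
            var folderTreeQueries = pageSize * avgFolderDepth;  // one folders query per ancestor folder
            var totalQueries = folderFilesQueries + folderTreeQueries;
            return { queries: totalQueries, millis: totalQueries * msPerQuery };
        }

        console.log(folderPathOverhead(100, 5, 3.3));  // { queries: 600, millis: 1980 } - roughly 2 seconds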

    Read the article

  • Why Does On-Page Search Engine Optimization Work So Well?

    On-page search engine optimization has been around for an inordinately long time - it is probably the first kind of SEO that marketers began to use - but it is only lately that people have begun to understand how effective it is at bolstering the prospects of any website. The term describes all the methods you use on the pages of the website itself in order to enhance its prospects with the search engines.

    Read the article

  • What Are the Top 4 Search Engine Optimization Techniques of Today?

    Today, with the overwhelming competition among webmasters for customers, sales, and profit, finding ways to optimize a website is not really very difficult. There are whole ranges of SEO (search engine optimization) techniques to be found on the internet today, and the majority of them are user friendly enough that even a novice webmaster with limited HTML knowledge will not find it difficult to get the most out of them, as long as they have the will to learn and to improve their search ranking.

    Read the article

  • How Does a Landing Page Affect Search Engine Optimization?

    Search engine optimization, or SEO for short, is one of the many things those new to online commerce have to deal with when setting up their web presence. One SEO-related question that seems to come up frequently is "Does a landing page affect search engine optimization, and if so, to what degree?" The short answer is: yes, it does.

    Read the article

  • Natural Search Engine Optimization - Don't "Game the System" Or You Will Get Banned!

    When focusing on natural search engine optimization, it is important that you keep the process "white hat." You see, when it comes to SEO, there are basically three schools of thought: White hat, Gray hat, and Black hat. As you can probably infer, white hat is following the rules, gray hat is a little in between, and black hat is going against parameters that Google and other major search engines have set for ethical SEO practices.

    Read the article

  • Moose and error messages, the sun and the moon [closed]

    - by xxxxxxx
    So again, using Moose, I write a role like this:

        package My::Role;
        use Moose::Role;
        use Some::Class::Consuming::My::Role;

    with the note that Some::Class::Consuming::My::Role consumes the role My::Role. And what do I get? I get an error message like this:

        A role generator is required to generate roles at /usr/local/share/perl/5.10.0/MooseX/Role/Parameterized/Meta/Role/Parameterizable.pm line 79
        MooseX::Role::Parameterized::Meta::Role::Parameterizable::generate_role('MooseX::Role::Parameterized::Meta::Role::Parameterizable=HASH...', 'consumer', 'Moose::Meta::Class=HASH(0x894e540)', 'parameters', 'HASH(0x86fc1e0)') called at /usr/local/share/perl/5.10.0/MooseX/Role/Parameterized/Meta/Role/Parameterizable.pm line 116
        MooseX::Role::Parameterized::Meta::Role::Parameterizable::apply('MooseX::Role::Parameterized::Meta::Role::Parameterizable=HASH...', 'Moose::Meta::Class=HASH(0x894e540)', 'element_type', 'Tuple') called at /usr/local/lib/perl/5.10.0/Moose/Util.pm line 132
        Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0x894e540)', undef, 'Stuff', 'HASH(0x894e1d0)') called at /usr/local/lib/perl/5.10.0/Moose/Util.pm line 86
        Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0x894e540)', 'Stuff', 'HASH(0x894e1d0)') called at /usr/local/lib/perl/5.10.0/Moose.pm line 57
        Moose::with('Moose::Meta::Class=HASH(0x894e540)', 'Group', 'HASH(0x894e1d0)') called at /usr/local/lib/perl/5.10.0/Moose/Exporter.pm line 293
        Moose::with('Group', 'HASH(0x894e1d0)') called at Some_path_on_disk line 6
        require Some_other_path_on_disk called at Some_path_on_disk line 9
        Group::BEGIN() called at Yet_another_path_on_disk line 0
        eval {...} called at Yet_another_path_on_disk line 0
        Compilation failed in require at some_path_on_disk line 9.
        BEGIN failed--compilation aborted at some_path_on_disk line 9.

    What am I to make of this? As Dijkstra would concisely put it, this looks like "just a meaningless concatenation of words" (which is exactly what it is). Would a more appropriate error message be "You cannot use a class consuming the role that you are currently defining"? What does the error message try to convey? Can the author make the error message meaningful? Will he ever make it so? Maybe this can be planned for version 3.14159265358979323846? In actuality I get one and a half pages of errors which are completely unreadable and devoid of any logic or sense of respect for the user of Moose (in terms of intuitive error messages), just like the one above. What's to be done in this case? I mean, I get error messages on my screen that are sometimes completely unrelated to the problem I'm having (which I can only assess after solving the problems that probably caused them - I say probably because I have no idea where these error messages came from; they look like they fell from the sky, as they have no relation to the actual situation). Is this:

    - the inexplicable dramatic destiny of the Perl programmer using Moose?
    - someone being extremely lazy and sloppy at writing error messages? maybe on heavy drugs?
    - me not understanding basic English?

    Gentlemen, when writing software, please, please, please take care of the poor programmer who will use it and respect him by writing relevant error messages. (Except for the error messages, Moose is a pretty good piece of software.)

    Read the article

  • Using django-haystack, how do I perform a search with only partial terms?

    - by Sri Raghavan
    I've got a Haystack/Xapian search index for django.contrib.auth.models.User. The template is simply {{object.get_full_name}}, as I intend for a user to type in a name and be able to search for it. My issue is this: if I search, say, Sri (my full first name), I come up with a result for the user object pertaining to my name. However, if I search Sri Ragh - that is, my full first name and part of my last name - I get no results. How can I set Haystack up so that I get the appropriate results for partial queries? (I essentially want it to search *Sri Ragh*, but I don't know if wildcards would actually do the trick, or how to implement them.) This is my search query:

        results = SearchQuerySet().filter(content='Sri Ragh')

    Read the article

  • The SharePoint Search Dropdown shows only This site as the scope.

    - by Mac
    Hi all, I have a team site in which the search scope dropdown only shows This Site; it doesn't show All Sites and People. Does anyone know why? I have not enabled custom scopes in the Search Settings for this site because I don't have a Search Center created. If I create a Search Center and specify it as a custom scope for this site, I can see all three scopes in the dropdown. I am just trying to understand whether I need to have a Search Center created in order for these shared scopes to be displayed in the search scope dropdown.

    Read the article

  • depth first search graph by using linked list

    - by programmerwannabe
    I'm using a MacBook and I cannot read the text file using this code. Moreover, can you please add functions to check whether the graph is connected and whether it is a tree? inputA.txt consists of:

        1 2
        1 6
        1 5
        2 3
        2 6
        3 4
        3 6
        4 5
        4 6
        5 6

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX 10
        #define TRUE 1
        #define FALSE 0

        /* adjacency-list node */
        typedef struct Graph {
            int vertex;
            struct Graph* link;
        } g_node;

        typedef struct graphType {
            int x;                   /* number of vertices + 1 */
            int visited[MAX];
            g_node* adjList_H[MAX];  /* adjacency-list heads, 1-based */
        } graphType;

        /* stack node for the iterative DFS */
        typedef struct stack {
            int data;
            struct stack* link;
        } s_node;

        s_node* top;

        void push(int item) {
            s_node* n = (s_node*)malloc(sizeof(s_node));
            n->data = item;
            n->link = top;
            top = n;
        }

        int pop() {
            int item;
            s_node* n = top;
            if (top == NULL) {
                puts("\nstack is empty!\n");
                return 0;
            } else {
                item = n->data;
                top = n->link;
                free(n);
                return item;
            }
        }

        void createGraph(graphType* g) {
            int v;
            g->x = 1;
            for (v = 1; v < MAX; v++) {
                g->visited[v] = FALSE;
                g->adjList_H[v] = NULL;
            }
        }

        void insertVertex(graphType* g, int v) {
            if (g->x > MAX) {
                puts("\n the number of vertices has been exceeded\n");
                return;
            }
            g->x++;
        }

        void insertEdge(graphType* g, int u, int v) {
            g_node* node;
            if (u >= g->x || v >= g->x) {
                puts("\n no such vertex in the graph\n");
                return;
            }
            node = (g_node*)malloc(sizeof(g_node));
            node->vertex = v;
            node->link = g->adjList_H[u];
            g->adjList_H[u] = node;
        }

        void print_adjList(graphType* g) {
            int i;
            g_node* p;
            for (i = 1; i < g->x; i++) {
                printf("\n\t\t vertex %d adjacency list ", i);
                p = g->adjList_H[i];
                while (p) {
                    printf("-> %d", p->vertex);
                    p = p->link;
                }
            }
        }

        /* iterative depth-first search using the explicit stack */
        void DFS_adjList(graphType* g, int v) {
            g_node* w;
            top = NULL;
            push(v);
            g->visited[v] = TRUE;
            printf(" %d", v);
            while (top != NULL) {
                w = g->adjList_H[v];
                while (w) {
                    if (!g->visited[w->vertex]) {
                        push(w->vertex);
                        g->visited[w->vertex] = TRUE;
                        printf(" %d", w->vertex);
                        v = w->vertex;
                        w = g->adjList_H[v];
                    } else {
                        w = w->link;
                    }
                }
                v = pop();
            }
        }

        int main(int argc, const char* argv[]) {
            FILE* fp;
            int mychar;              /* fgetc() returns int, so EOF can be detected reliably */
            char arr[10][2] = {0};   /* one row per edge read from the file */
            int j, k;
            int i;
            graphType* G9;

            G9 = (graphType*)malloc(sizeof(graphType));
            createGraph(G9);
            for (i = 1; i < 7; i++)
                insertVertex(G9, i);

            fp = fopen("inputD.txt", "r");   /* must match the actual file name on disk (the question mentions inputA.txt) */
            if (fp == NULL) {
                perror("fopen");
                return 1;
            }
            for (j = 0; j < 10; j++) {
                for (k = 0; k < 2; k++) {
                    mychar = fgetc(fp);
                    if (mychar == EOF) {                    /* compare with ==, not = */
                        j = 10;
                        break;
                    } else if (mychar == ' ' || mychar == '\n') {
                        k--;                                /* skip whitespace without using up a slot */
                        continue;
                    } else if (mychar >= '1' && mychar <= '9') {
                        arr[j][k] = (char)mychar;
                        printf("%c", arr[j][k]);
                    }
                }
            }
            fclose(fp);

            insertEdge(G9, 1, 2);
            insertEdge(G9, 1, 6);
            insertEdge(G9, 1, 5);
            insertEdge(G9, 2, 3);
            insertEdge(G9, 2, 6);
            insertEdge(G9, 3, 4);
            insertEdge(G9, 3, 6);
            insertEdge(G9, 4, 5);
            insertEdge(G9, 4, 6);
            insertEdge(G9, 5, 6);
            insertEdge(G9, 6, 5);
            insertEdge(G9, 6, 4);
            insertEdge(G9, 5, 4);
            insertEdge(G9, 6, 3);
            insertEdge(G9, 4, 3);
            insertEdge(G9, 6, 2);
            insertEdge(G9, 3, 2);
            insertEdge(G9, 5, 1);
            insertEdge(G9, 6, 1);
            insertEdge(G9, 2, 1);

            printf("\n graph adjacency list ");
            print_adjList(G9);
            printf("\n \n//////////////////////////////////////////////\n\n depth first search >> ");
            DFS_adjList(G9, 1);
            return 0;
        }

    Read the article

  • Does anyone have a valid and working example of OpenLDAP meta backend?

    - by QXT
    I have been Googling my fingers off and simply cannot find a working example of how to merge/proxy an OpenLDAP server and a Windows AD server. Has anyone worked with this before? Any suggestions would be appreciated. The idea is simple:

        openldap.mydomain.local ---- Linux LDAP server
        winad.mydomain.local    ---- Windows AD server

    Some users are on Linux and some on WinAD. OpenLDAP should search both on login. A working example would be appreciated.

    Read the article

  • Facebook like button issue.

    - by Ross Hale
    Hello community, we're having some trouble getting our Like button to work. It seemed to work last week but it has suddenly stopped working. Basically, when clicking "Like", we get an error saying:

        You failed to provide a valid list of administrators. You need to supply the administrators using either a "fb:app_id" meta tag, or using a "fb:admins" meta tag to specify a comma-delimited list of Facebook users.

    Our head section looks like this:

        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:og="http://opengraphprotocol.org/schema/"
              xmlns:fb="http://www.facebook.com/2008/fbml"
              xml:lang="en" lang="en">
        <head>
          <meta property="fb:app_id" content="number"/>
          <meta property="fb:admins" content="number"/>
          <meta property="og:title" content="title"/>
          <meta property="og:type" content="website"/>
          <meta property="og:url" content="url with trailing slash"/>
          <meta property="og:image" content="url to image"/>
          <meta property="og:site_name" content="Site Name"/>
        </head>

    Help?

    Read the article

  • Sub routing in a SPA site

    - by Anders
    I have a SPA site that I'm working on, and I have a requirement that a page view model can have sub-routes. I'm currently using this 'pattern' for the site:

        MyApp.FooViewModel = MyApp.define({
            meta: {
                query: MyApp.Core.Contracts.Queries.FooQuery,
                title: "Foo"
            },
            init: function (queryResult) {
            },
            prototype: {
            }
        });

    In the master view model I have a route table:

        this.navigation(new MyApp.RoutesViewModel({
            Home: {
                model: MyApp.HomeViewModel,
                route: String.empty
            },
            Foo: {
                model: MyApp.FooViewModel
            }
        }));

    The meta object defines which query should populate the top-level view model when it is invoked through Sammy.js. This is all fine, but it does not support sub-routing. My plan is to change the meta object so that it can (optionally, of course) look like this:

        meta: {
            query: MyApp.Core.Contracts.Queries.FooQuery,
            title: "Foo",
            route: {
                barId: MyApp.BarViewModel
            }
        }

    When Sammy.js detects a barId in the query string, the BarViewModel will be executed and populated through its own meta object. Is this a good design?
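    For comparison, Sammy.js can also express the parent/child relationship directly as nested route patterns rather than through the meta object. A minimal sketch, where loadViewModel and loadSubViewModel are hypothetical helpers standing in for whatever populates a view model from its meta query:

        var app = Sammy('#content', function () {
            // Top-level page route: populate FooViewModel from its meta query.
            this.get('#/foo', function () {
                loadViewModel(MyApp.FooViewModel);
            });

            // Sub-route: the same page plus a child view model keyed by barId.
            this.get('#/foo/:barId', function () {
                loadViewModel(MyApp.FooViewModel);
                loadSubViewModel(MyApp.BarViewModel, this.params['barId']);
            });
        });

        app.run('#/');

    Whether that is cleaner than extending the meta-driven route table is mostly a question of where you want the routing knowledge to live.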

    Read the article

  • SEO for a list of products with filters

    - by dana
    I am wondering if there is a recommended "best practice" for product search SEO. I know to create a dynamic sitemap file that lists links to all products on the site. However, I want to implement a bookmarkable "advanced search". Should I let search engines index any of the results? Take the following parameters for a search on a make-believe used car website:

    - minprice (minimum price in dollars)
    - maxprice (maximum price in dollars)
    - make (honda, audi, volvo)
    - model (accord, A4, S40)
    - minyear (minimum model year)
    - maxyear (maximum model year)
    - minmileage (minimum mileage)
    - maxmileage (maximum mileage)

    Given these parameters, there could be an infinite number of search combinations:

    - Price between $10,000 and $20,000: /search?minprice=10000&maxprice=20000
    - Audis with less than 50k miles: /search?model=audi&maxmileage=50000
    - More than 100,000 miles and less than $5,000: /search?minmileage=100000&maxprice=5000
    - etc.

    Over time, there may be inbound links to a variety of these types of searches, yet they are all slices of the same data. Should I allow all of these searches to be indexed? (One way to reason about it is sketched below.)
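    One common pattern (an assumption here, not something stated in the question) is to let crawlers index only a small whitelist of filter combinations - say, single-facet pages such as make or model - and to mark everything else noindex,follow while pointing a canonical URL at the unfiltered listing. A rough sketch of that decision, using the parameter names from the question:

        // Decide whether a given set of search filters deserves to be indexed.
        // Whitelisted single facets get their own indexable landing pages;
        // arbitrary combinations stay crawlable but are kept out of the index.
        var INDEXABLE_FACETS = ['make', 'model'];

        function robotsMetaFor(params) {
            var keys = Object.keys(params).filter(function (k) { return params[k] !== undefined; });
            var indexable = keys.length === 1 && INDEXABLE_FACETS.indexOf(keys[0]) !== -1;
            return indexable ? 'index,follow' : 'noindex,follow';
        }

        robotsMetaFor({ make: 'audi' });                       // 'index,follow'
        robotsMetaFor({ minmileage: 100000, maxprice: 5000 }); // 'noindex,follow'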

    Read the article
