Search Results

Search found 59194 results on 2368 pages for 'depth first search'.

Page 52/2368 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database with a few nested relations, and a feature which depicts the history of a large chain of relations - essentially a data analysis feature. The issue is that in order to search, a large object graph must be loaded, and the loading time for this object graph is not quick enough to be viable. The problem is that without loading the whole graph, searching from a single string is nearly impossible; as it stands, explicit fields must be specified and the search data supplied for each one. Is there a design pattern for exposing the data in a way that facilitates a single string search instead of having to explicitly define parameters?
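
    One common pattern (offered here only as a hedged sketch, since the question leaves the schema open) is to maintain a denormalized "search document" per root entity, so a single string can be matched without loading the object graph. The table and column names below are illustrative, and the LIKE match stands in for a real full-text index.

        import sqlite3

        # Illustrative denormalized search table; in practice it would be kept
        # in sync with the relational data it summarizes.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE search_documents (
                entity_id   INTEGER PRIMARY KEY,
                search_text TEXT NOT NULL  -- concatenated fields from the related rows
            );
            INSERT INTO search_documents VALUES
                (1, 'order 1042 acme widgets shipped 2012-03-01'),
                (2, 'order 1043 globex gadgets pending');
        """)

        def search(term):
            # One string in, matching root entities out; only the hits need
            # their full object graphs loaded afterwards.
            rows = conn.execute(
                "SELECT entity_id FROM search_documents WHERE search_text LIKE ?",
                ("%" + term.lower() + "%",),
            )
            return [row[0] for row in rows]

        print(search("acme"))  # -> [1]

    The same idea maps onto the database's own full-text facilities (or an external index) once the flattened text exists.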

    Read the article

  • Spam link text when searching for company directors' name

    - by Alex
    It was brought to my attention that if you search for the name of one of our directors (with the intent of finding their profile page on our site), they come up as the first link in most search engines, as you would expect, but the link text is just pure spam. The search strings I have tested on Google, Bing, Ask, and Yahoo have all returned similar results. Here is a list of the search strings: "Paolo rossi futex", "Mark rossi futex", "Marco rossi futex", "Dan Goldberg futex". Any idea what might be causing this? I have searched through as much of the site's code as I can and can't find anything wrong with it.

    Read the article

  • Can anyone explain step-by-step how the as3isolib depth-sorts isometric objects?

    - by Rob Evans
    The library manages to depth-sort correctly, even when using items of non-1x1 sizes. I took a look through the code, but it's a big project to go through line by line! Some questions about the process: How are the x, y, z values of each object defined? Are they the center points of the objects or something else? I noticed that IBounds defines the bounds of the object. If you were to visualise a cuboid of 40, 40, 90 in size, where would each of the IBounds metrics be? I would like to know how as3isolib achieves this, although I would also be happy with a generalised pseudo-code version. At present I have a system that works 90% of the time, but in cases of objects that are along the same horizontal line, the depth is calculated as the same value. The depth calculation currently works like this:
    x = object horizontal center point
    y = object vertical center point
    originX and originY = the origin point relative to the object, so if you want the origin to be the center, the values would be originX = 0.5, originY = 0.5; if you wanted the origin to be the vertical center, horizontal far right of the object, it would be originX = 1.0, originY = 0.5. The origin adjusts the position that the object is transformed from.
    AABB_width = the bounding box width
    AABB_height = the bounding box height
    depth = x + (AABB_width * originX) + y + (AABB_height * originY) - z;
    This generates the same depth for all objects along the same horizontal x.
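
    For what it's worth, here is a minimal sketch (not as3isolib's actual code) of that same formula used as a sort key, with the y and x coordinates added as tie-breakers so that objects on the same horizontal row no longer collapse to one depth value; the attribute names are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class IsoObject:
            x: float
            y: float
            z: float
            aabb_width: float
            aabb_height: float
            origin_x: float = 0.5
            origin_y: float = 0.5

        def depth_key(o: IsoObject):
            # The formula quoted in the question, plus (y, x) tie-breakers.
            depth = (o.x + o.aabb_width * o.origin_x
                     + o.y + o.aabb_height * o.origin_y
                     - o.z)
            return (depth, o.y, o.x)

        scene = [IsoObject(40, 0, 0, 40, 40), IsoObject(0, 0, 0, 40, 40)]
        scene.sort(key=depth_key)  # back-to-front draw order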

    Read the article

  • My parked domain was de-indexed by Google - what to do?

    - by Programmer Joe
    I have a question about how to handle my domain. In a nutshell, I bought a domain last year from Go Daddy. My intention was to launch a real site with this domain, and I have spent the last year working on my site. For the last year, I have been using the default Go Daddy parked page for an up-and-coming site. When I first bought this domain, it was indexed by Google - you could search for "alphabanter" and my site would show up on Google's results page. Several months ago, Google seems to have de-indexed my domain, and if you type "alphabanter", my domain no longer shows up in the list of search results. However, if you search for "www.alphabanter.com", that's the only way it shows up in Google's results. Anyway, I am about to launch my site for real, but I don't quite know if I can get my site back into Google's index. I have a few questions:
    1) Was my domain permanently penalized by Google and removed from their index just because it was a parked domain? I don't believe I have done anything abusive other than using the Go Daddy default page for almost a year because my site was not ready.
    2) Should I just launch my site, put a few backlinks to my site, and hope that Google indexes my site again?
    3) Should I submit my site to Google at Google submit your content?
    I assume getting Google to reconsider my site is the last option if none of the above works.

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main()
        {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog running the popular Thesis theme with almost no other customizations. Some time ago it was penalized by Google search - it simply stopped appearing in search results, even for search terms it used to be the top result for, like my name - Pawel Brodzinski - which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days; since Dec 27 it is out again. I know Google's guidelines, and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked diagnostics in Webmaster Tools: no malware was detected and no strange search terms popped up. I read related threads on the Google Webmasters forum but found none of the solutions working for me. I posted a thread there (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to remove duplicate content from a couple of sites which were republishing my feed, but this doesn't change the situation - the site was penalized again. The problem is I have no idea what can be wrong with the website or how to find out. To make the problem worse, I'm no webmaster; I just run a wordpress blog, which is supposed to be easy.

    Read the article

  • Creating Your First Website

    If you are looking into website creation as a business or just as a hobby and an outlet for your freedom of expression, check out how creating your first website can be a simple task as long as you follow these simple tips and believe in yourself. The first thing you want to do when you are looking to create your first website is run a search in a search engine for guides that can help you through the process. There are hundreds of guides available online that walk you through the process of website creation....

    Read the article

  • What is the good side of PageRank?

    - by SharkTheDark
    I am doing research about backlinks/PR/SEO/search result position, and all I read is that PageRank is not important - that it mattered before, but now it's not important at all. The only useful thing I found about it is that it can "change search result position", but ONLY if there are two sites with the same keywords and the same text content value; then the search engine will check which site has the higher PR and place that site above the lower one. Supposedly Google counts PR as about 20% of its ranking, and for Yahoo! it's more like 3%... Correct me if I am wrong... Is there any other good thing about it?

    Read the article

  • Oracle E-Business Products New Search Helpers for Guided Resolution of Customer Issues

    - by user793044
    Oracle E-Business Proactive Support has created many new guided resolution documents that you may find helpful in resolving issues in your EBS applications. These new documents are called "Search Helpers" and they guide you through your issue to a solution. They are meant to be an easy and fast method of finding a relevant, complete solution. Hundreds of notes and service requests were reviewed and the best solutions to these known issues were selected. For some issues, notes were updated to better clarify the solution; in other cases, if a note with a solution did not already exist, one was created. You start the process by selecting the scenario you have encountered. You may have received an error message, or there may be a particular area of the application in which you have encountered an issue. Based on your selection of the issue, the Search Helper will present one or more additional possible symptoms. When you have selected from both of these two sections, you are then presented with one or more articles known to have fully solved this issue in the past. Several EBS products have produced Search Helper documents; take a look at Doc ID 1501724.1 for an index of the current EBS Search Helpers. Here is an example of a Search Helper from the Receivables Transactions area: after selecting the Functional Area of "Entering / Updating Transactions", a list of Known Symptoms is presented, and when "Transaction numbers are not in sequence" is selected, a solution link is provided for Document ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. The EBS applications that currently have published Search Helpers are: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, and Value Chain Planning.

    Read the article

  • Do first-class methods exist?

    - by gdhoward
    Okay, I know first-class functions are cool, closures even better, etc. But is there any language with first-class methods? In my mind, I see a first-class method as an "object" that has both a function pointer and a pointer to a specific instance of the class/object, but the implementation doesn't matter. I just want to know if there is any language that uses them. And as a bonus, how were they implemented?
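
    As one concrete illustration (not an exhaustive answer): Python's bound methods behave exactly like the "object" described above - the method value carries both the underlying function and the instance it was taken from.

        class Counter:
            def __init__(self):
                self.n = 0

            def bump(self):
                self.n += 1
                return self.n

        c = Counter()
        m = c.bump                 # a bound method: pairs the function with the instance c
        print(m.__self__ is c)     # True - the captured instance
        print(m.__func__)          # the plain function Counter.bump
        print(m(), m())            # 1 2 - callable like any other first-class value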

    Read the article

  • 3 Simple Steps to Get to the First Page on Google

    When you get to the first page on Google you will get a lot of exposure for yourself and/or your business. Some SEO companies charge their clients thousands of dollars just to get to the first page on Google. Well you can save your money because this article will teach you 3 simple steps to the first page on Google.

    Read the article

  • How do I search nodes of XML document for text? Or convert to SQL tables?

    - by netefficacy
    Hi, I have an XML file and would like to run a search on the nodes for text that matches user input. My options are: (1) convert the XML file to a SQL table and run the search against the table records, or (2) search the XML nodes themselves. The problem is that I cannot find an open source conversion utility, nor can I figure out how to search the XML nodes. I can use PHP, Ruby, or Python for the search code. Any pointers on how I can do 1 or 2? Thanks
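
    Since Python is one of the languages on the table, here is a minimal sketch of option 2 using only the standard library; the file name and search term are placeholders.

        import xml.etree.ElementTree as ET

        def search_nodes(path, term):
            term = term.lower()
            tree = ET.parse(path)
            matches = []
            for elem in tree.iter():                # walk every element in the document
                text = (elem.text or "").strip()
                if term in text.lower():
                    matches.append((elem.tag, text))
            return matches

        print(search_nodes("data.xml", "some user input"))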

    Read the article

  • Dates search in Drupal (greater than, less than) using CCK / views / facelet?

    - by guillefar
    I'm working on a site that manages events (like parties). Each event can have several fields, including a date, that the user can add thanks to the CCK module. Now, the problem is when I have to search using those fields. I could not find how to search for events between a range of dates. I discovered the facelet module, which is pretty good and very useful for some kinds of search, but as far as I can see it is not possible to search within a range. I also did some testing using Views, but again with no results. I cannot find how to search for a date "greater than" and "less than". I will really appreciate any help.

    Read the article

  • Python recursion: Sierpinski triangle with color at each depth

    - by ???? ???
        import turtle

        w = turtle.Screen()

        def Tri(t, order, size):
            if order == 0:
                t.forward(size)
                t.left(120)
                t.forward(size)
                t.left(120)
                t.forward(size)
                t.left(120)
            else:
                t.pencolor('red')
                Tri(t, order-1, size/2)
                t.fd(size/2)
                t.pencolor('blue')
                Tri(t, order-1, size/2)
                t.fd(size/2)
                t.lt(120)
                t.fd(size)
                t.lt(120)
                t.fd(size/2)
                t.lt(120)
                t.pencolor('green')
                Tri(t, order-1, size/2)
                t.rt(120)
                t.fd(size/2)
                t.lt(120)

    Can anyone help with this problem? I want to draw a Sierpinski triangle that has a different color at each depth, like this: http://openbookproject.net/thinkcs/python/english3e/_images/sierpinski_color.png - I don't know how to make the triangle's color change at a specific depth.
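
    One way to get that effect (offered as a sketch, not the canonical solution) is to pick the pen color from the current order instead of hard-coding red/blue/green, so each level of the recursion draws with its own color; the color list itself is arbitrary.

        import turtle

        COLORS = ['blue', 'red', 'green', 'white', 'yellow', 'violet', 'orange']

        def tri_colored(t, order, size):
            t.pencolor(COLORS[order % len(COLORS)])  # color keyed to the recursion depth
            if order == 0:
                for _ in range(3):
                    t.forward(size)
                    t.left(120)
            else:
                tri_colored(t, order - 1, size / 2)
                t.fd(size / 2)
                tri_colored(t, order - 1, size / 2)
                t.fd(size / 2)
                t.lt(120)
                t.fd(size)
                t.lt(120)
                t.fd(size / 2)
                t.lt(120)
                tri_colored(t, order - 1, size / 2)
                t.rt(120)
                t.fd(size / 2)
                t.lt(120)

        screen = turtle.Screen()
        tri_colored(turtle.Turtle(), 3, 200)
        screen.mainloop()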

    Read the article

  • XML Document Depth?

    - by CrazyNick
    How do I find the depth of an XML file using PowerShell/XPath? Consider the XML below:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <bookstore>
          <book>
            <title>Harry Potter</title>
            <price>25.99</price>
          </book>
          <book>
            <title>Learning XML</title>
            <price>49.95</price>
          </book>
        </bookstore>

    The depth of the above XML document is 3 (bookstore - book - title/price).
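
    The question asks for PowerShell/XPath, but as a language-agnostic sketch of the recursion involved (a node's depth is one more than the deepest of its children), here is the same calculation in Python; the file name is a placeholder.

        import xml.etree.ElementTree as ET

        def depth(elem):
            children = list(elem)
            if not children:
                return 1
            return 1 + max(depth(child) for child in children)

        root = ET.parse("bookstore.xml").getroot()
        print(depth(root))  # 3 for the bookstore example above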

    Read the article

  • What data structure should I use for hash lookup as well as binary search?

    - by zebraman
    I am working on a school homework. I have a list of names. I want to be able to perform binary search on these names (find all names between a lower and upper bound) by first name as well as last name, and to perform keyword searches as well (this will be accomplished using hashing). For example, if I have the names Garfield Cat, Snoopy Dog, Captain Crunch, and Fat Cat, then a binary search of first names (C,H) will return Captain Crunch, Fat Cat, and Garfield Cat. A binary search of last names (Cr,D) will return Captain Crunch. A keyword search of 'cat' will return Fat Cat and Garfield Cat. I understand binary search will only work on a sorted list, but since I am planning on searching by two different criteria, I would have to sort the list by last name or first name depending on what I'm searching for. I feel it will be too inefficient to have to re-sort the list each time I want to perform a new binary search. Would it just be better for me to set up and maintain two sorted lists (one sorted by first name, one sorted by last name)? Also, for hashing, will I have to set up a different table of names for that as well? I understand each keyword will hash to some value determined by a hash function, and this value (or key) is a table address where the corresponding names are stored. So I just want to know what would be the best way to solve this problem: maintaining separate structures, or is there a way to efficiently do everything I want with just one data structure?
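
    For illustration only, here is a minimal sketch of the "maintain several structures" idea in Python: two lists kept in sorted order (one keyed by first name, one by last name) for the range queries, plus a hash table from keyword to names. It uses the example names from the question.

        import bisect
        from collections import defaultdict

        names = ["Garfield Cat", "Snoopy Dog", "Captain Crunch", "Fat Cat"]

        def first_key(n):
            return n.lower()                  # whole name, starting with the first name

        def last_key(n):
            return n.split()[-1].lower()      # last word of the name

        by_first = sorted(names, key=first_key)
        by_last = sorted(names, key=last_key)

        keyword_index = defaultdict(list)     # hash table: keyword -> names
        for name in names:
            for word in name.lower().split():
                keyword_index[word].append(name)

        def range_search(sorted_names, key, lo, hi):
            keys = [key(n) for n in sorted_names]   # a real version would cache this
            return sorted_names[bisect.bisect_left(keys, lo.lower()):
                                bisect.bisect_left(keys, hi.lower())]

        print(range_search(by_first, first_key, "C", "H"))   # Captain Crunch, Fat Cat, Garfield Cat
        print(range_search(by_last, last_key, "Cr", "D"))    # Captain Crunch
        print(keyword_index["cat"])                          # Garfield Cat, Fat Cat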

    Read the article

  • Visualize the depth buffer

    - by Thanatos
    I'm attempting to visualize the depth buffer for debugging purposes, by drawing it on top of the actual rendering when a key is pressed. It's mostly working, but the resulting image appears to be zoomed in. (It's not just the original image in an odd grayscale.) Why is it not the same size as the color buffer? This is what I'm using to view the depth buffer:

        void get_gl_size(int &width, int &height)
        {
            int iv[4];
            glGetIntegerv(GL_VIEWPORT, iv);
            width = iv[2];
            height = iv[3];
        }

        void visualize_depth_buffer()
        {
            int width, height;
            get_gl_size(width, height);

            float *data = new float[width * height];
            glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);
            glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);
            delete [] data;
        }

    Read the article

  • SharePoint 2010 search error gives me a correlation ID; event viewer is empty and there is no log

    - by saber tabatabaee yazdi
    We have a SharePoint 2010 farm that had been working properly. We configured and started the search service last week and it also worked: when we tested it with some criteria, results appeared, so we were very happy. But after a few days, searching started producing an error. After that we deleted the search service application and reconfigured it, and now opening any document library produces an error, though lists open correctly without any error: "An unexpected error has occurred. Troubleshoot issues with Microsoft SharePoint Foundation. Correlation ID: 5becf903-d13e-4490-a23c-d7e4f68ca769". Please help us.

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:
    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support
    The Question: How would you set up Lucene search so that each client can only search within its own database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial - not sure yet.)
    Possible Solutions:
    1. Make an index for each client (database). Pro: search is faster (than the one-index-for-all method), and indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.
    2. Have a single, gigantic index with a database_name field, and always include database_name as a filter (a sketch of this filter approach follows below). Pro: not sure; maybe good for tech support or the billing dept to search all databases for info. Con: search is slower (than the index-per-client method), and security is flawed if the query filter is removed.
    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. His situation is quite similar to mine, although he didn't elaborate on the setup (particularly indices); hence the need for this question. One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
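
    Purely as a hedged sketch of option 2 (not a recommendation), this is what scoping every query with a per-client filter looks like against Solr's HTTP select handler; the URL, core name, and field names are placeholders invented for the example.

        import requests

        SOLR_SELECT = "http://localhost:8983/solr/clients/select"

        def search_for_client(database_name, user_query):
            params = {
                "q": user_query,
                "fq": 'database_name:"%s"' % database_name,  # filter query scopes the search to one client
                "wt": "json",
                "rows": 20,
            }
            resp = requests.get(SOLR_SELECT, params=params)
            resp.raise_for_status()
            return resp.json()["response"]["docs"]

        # docs = search_for_client("client_0042", "overdue invoices")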

    Read the article

  • Image bit depth values.

    - by pencilslate
    REPHRASED QUESTION: I am coming up with a list of possible image bit depth values that could be used as a predefined reference list in my application. I could think of 8, 16, 24 and 32 bit depths. The image formats considered are BMP, JPEG, PNG and GIF. I understand the bit depth decides the quality, and thereby the storage requirements, of the image. The application is used to store user-uploaded images (non-medical, non-DICOM). Are there bit depths other than the ones mentioned above that I should be including in my list? Are there any stats on how commonly each bit depth is used? Appreciate your response!
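
    One low-effort way to ground such a reference list is to inspect what actually gets uploaded; below is a small sketch using Pillow (an assumption - the question doesn't name a library) that maps an image's mode to an overall bit depth for the common BMP/JPEG/PNG/GIF modes.

        from PIL import Image

        BITS_PER_MODE = {
            "1": 1,       # bilevel
            "P": 8,       # 8-bit palette (typical GIF / PNG-8)
            "L": 8,       # 8-bit grayscale
            "RGB": 24,    # 8 bits per channel (JPEG, PNG-24, BMP)
            "RGBA": 32,   # 8 bits per channel plus alpha (PNG-32)
            "I;16": 16,   # 16-bit grayscale
        }

        def bit_depth(path):
            with Image.open(path) as img:
                return BITS_PER_MODE.get(img.mode)

        print(bit_depth("upload.png"))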

    Read the article

  • Using the Search API with Sharepoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results to return using the search API in SharePoint 2010 Foundation. Here are the steps I have taken so far:
    - The service "SharePoint Foundation Search v4" is running and logged in as Local Service.
    - Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled.
    - Running the PowerShell cmdlet Get-SPSearchServiceInstance returns:

        TypeName      : SharePoint Foundation Search
        Description   : Search index file on the search server
        Id            : 91e01ce1-016e-44e0-a938-035d37613b70
        Server        : SPServer Name=V-SP2010
        Service       : SPSearchService Name=SPSearch4
        IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
        ProxyType     : Default
        Status        : Online

    When I do a search using the search textbox on the team site, I get results as I would expect. Now, when I try to duplicate the search results using the Search API, I either receive an error or 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')", searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };
            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];
            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites, I receive an error about the search scope not being available. Other searches just return 0 results. Any ideas about what I am doing wrong?

    Read the article
