Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 35/45

  • Grails searchable plugin

    - by Don
    Hi, in my Grails app I'm using the Searchable plugin for searching and indexing. I want to write a Compass/Lucene query that involves multiple domain classes. Within that query, when I want to refer to the id of a class, I can't simply use 'id' because all classes have an 'id' property. Currently, I work around this problem by adding the following property to a class Foo:

        public Long getFooId() { return id }
        static transients = ['fooId']

    Then, when I want to refer to the id of Foo within a query, I use 'fooId'. Is there a way I can provide an alias for a property in the searchable mapping, rather than adding a property to the class?

    Read the article

  • Creating a SQL Server index on an nvarchar column

    - by Jahan
    When I run this SQL statement: CREATE UNIQUE INDEX WordsIndex ON Words (Word ASC); I get the following exception message: The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.Words' and the index name 'WordsIndex'. The duplicate key value is (ass). The statement has been terminated. The 'Word' column has a datatype of nvarchar(100). There are two items in the 'Word' column that SQL Server interprets as the same: 'aß' and 'ass', which causes the indexing failure. Why would SQL Server interpret those two different words as the same word?

    Read the article

  • Email notification pattern for exceptions in a JEE application

    - by Build Monkey
    We have a Spring 3.0.x based application in which we use SimpleMappingExceptionResolver to send emails when an exception happens within the DispatcherServlet. This gives us the following flexibility: the subject can include who the logged-in user is, so that we can send a personalized email to the user; the subject also includes the server on which the error occurred; and the request params, request URL, and headers are included, which helped us find some problems when search engines were indexing the site. However, lately exceptions have been occurring in the filters, and since these do not go through the resolver, we don't get any emails. We don't like the log4j email appender solution, and writing yet another filter just to send emails doesn't seem right. Is there an accepted pattern to resolve this issue?
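
    If you did go the filter route anyway, here is a minimal sketch of the idea, assuming the servlet API and Spring's MailSender; the filter class, the recipient address, and the message layout are placeholders, not something from the original question:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;
        import org.springframework.mail.MailSender;
        import org.springframework.mail.SimpleMailMessage;

        // Catches exceptions thrown later in the chain (i.e. before the
        // DispatcherServlet's resolver can see them) and mails a report.
        public class ExceptionNotifyingFilter implements Filter {

            private final MailSender mailSender;

            public ExceptionNotifyingFilter(MailSender mailSender) {
                this.mailSender = mailSender;
            }

            @Override public void init(FilterConfig config) { }
            @Override public void destroy() { }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                try {
                    chain.doFilter(req, res);
                } catch (IOException | ServletException | RuntimeException ex) {
                    notifyByMail((HttpServletRequest) req, ex);
                    throw ex; // rethrow so normal container error handling still applies
                }
            }

            private void notifyByMail(HttpServletRequest req, Exception ex) {
                SimpleMailMessage msg = new SimpleMailMessage();
                msg.setTo("alerts@example.com");            // placeholder alert address
                msg.setSubject("[" + req.getServerName() + "] " + ex.getClass().getSimpleName()
                        + " for user " + req.getRemoteUser());
                msg.setText("URL: " + req.getRequestURL()
                        + "\nQuery: " + req.getQueryString()
                        + "\n\n" + ex);
                mailSender.send(msg);
            }
        }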

    Read the article

  • MS SQL full-text search vs. LIKE expression

    - by Marks
    Hi. I'm currently looking for a way to search a big database (500 MB - 10 GB or more across 10 tables) with a lot of different fields (nvarchars and bigints). Many of the fields that should be searched are not in the same table. An example: a search for '5124 Peter' should return all items that have an ID with 5124 in it, have 'Peter' in the title or description, have an item type id with 5124 in it, were created by a user named 'peter' or a user whose id has 5124 in it, or were created by a user with '5124' or 'peter' in his street address. How should I do the search? I read that the full-text search of MS SQL is a lot more performant than a query with the LIKE keyword, and I think the syntax is clearer, but I believe it can't search bigint (id) values, and I read it has performance problems with indexing and therefore slows down inserts to the DB. In my project there will be more inserting than reading, so this could matter. Thanks in advance, Marks

    Read the article

  • How to improve IntelliJ code editor speed?

    - by Hoàng Long
    I have been using IntelliJ (Community Edition) for several months, and at first I was pleased with its speed and simplicity. But now, after upgrading to version 10, it's extremely slow. Sometimes I click a file and it takes 5 - 15 seconds to open (it freezes for that time). I don't know if I have done anything that caused this: I have installed 2 plugins (regex, sql), and I had 2 versions of IntelliJ on my machine (version 9 is now removed, only version 10 remains). Are there any tips to improve the speed of the code editor, in general or specifically for IntelliJ? Some things I have learned while using IntelliJ: open IntelliJ a while before working, because it needs time for indexing; don't open too many code tabs; and open as few other programs as possible. I'm using a 2 GB RAM WinXP machine, and that just seems barely enough for Java, IntelliJ and Chrome at the same time.

    Read the article

  • Viewing FIRST_ROWS before the query completes (revisited)

    - by Frank Developer
    OK, so say I have a table with 500K rows, and I run an ad-hoc query that no index supports, so it requires a full table scan. I would like to immediately view the first rows returned while the full table scan continues, and then scroll through the next results. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe with something like spawn/exec, declare scroll cursor, open, fetch, etc.?
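
    Not from the original question, but as an illustration of the client side of this: a JDBC sketch that sets a small fetch size so rows can be shown (and a running count displayed) while the scan is still in progress. The connection URL, table and column names are placeholders, and whether rows really stream incrementally depends on the driver:

        import java.sql.*;

        public class StreamingScan {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:your-db://host/db", "user", "pass");
                     Statement st = con.createStatement()) {
                    st.setFetchSize(100);   // hint to the driver to return rows in small batches
                    long shown = 0;
                    try (ResultSet rs = st.executeQuery(
                            "SELECT * FROM big_table WHERE some_col LIKE '%peter%'")) {
                        while (rs.next()) {
                            shown++;
                            System.out.println(rs.getString(1));
                            if (shown % 100 == 0) {
                                System.out.println("SEARCHING.. FOUND " + shown + " ROWS SO FAR");
                            }
                        }
                    }
                    System.out.println("Query complete: " + shown + " rows.");
                }
            }
        }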

    Read the article

  • Obfuscate strings in Python

    - by Caedis
    I have a password string that must be passed to a method. Everything works fine, but I don't feel comfortable storing the password in clear text. Is there a way to obfuscate the string or to truly encrypt it? I'm aware that obfuscation can be reverse engineered, but I think I should at least try to cover up the password a bit. At the very least it won't be visible to an indexing program, or to a stray eye giving a quick look at my code. I am aware of pyobfuscate, but I don't want the whole program obfuscated, just one string and possibly the whole line where the variable is defined. The target platform is generic GNU/Linux (if that makes a difference).

    Read the article

  • Am I reindexing this Sphinx index correctly?

    - by Ethan
    According to the Thinking Sphinx docs... Turning on delta indexing does not remove the need for regularly running a full re-index ... So I set up this cron job... 50 10 * * * cd /var/www/my_app/current && /opt/ruby/bin/rake thinking_sphinx:index RAILS_ENV=production >> /var/www/my_app/current/log/reindexing.log 2>&1 Is that a reasonable way to do it? Should I be doing something different?

    Read the article

  • Best build dir location to use in Xcode

    - by neoneye
    I'm consolidating my Xcode/TextMate setup and am interested in where you put your build dir. Some years ago I started out having the build dir in the same dir as my xcodeproj file. However, it became a mess when my project grew into a multi-project setup with applications, frameworks and tests, so I started using ../build as the build dir, so that all the sub-projects used the same dir. However, Spotlight indexes this build dir, and TextMate's global find is unusable when there is a build dir in the project. I'm thinking of using either ~/.build or /build as Xcode's build dir. What build dir do you use and why?

    Read the article

  • Map a domain to an MVC area

    - by Simon_Weaver
    Has anybody got any experience mapping a domain to an MVC area? Here's our situation. Old system (still active but will soon redirect to the new store): www.example.com - our main site, where we send traffic; store.example.com - our store site, a completely separate site that is indexed in Google. New system: www.example.com - same site as before; www.example.com/store - the new store site, built as an ASP.NET MVC area. Because the store is a separate domain, Google gives it a separate entry in the search results. I'd like to keep this benefit in the future, but I'm wondering whether there is a good way to map a domain (store.example.com) to the MVC area, or if it's just going to be more trouble than it's worth. PS: I'm not trying to keep the existing indexing - it's a completely separate store, so that's not possible. I just want to redirect to the corresponding page in the new store. I'm just trying not to lose the benefit of two domains for SEO purposes.

    Read the article

  • Per-query relevance elevation for Solr?

    - by plusplus
    I want to tune the relevance of solr search results on a per user basis - based on the number of times the user has clicked through a result before. Frequently hit items FOR THAT USER should rise to the top of their search results. Is there a way to provide custom boost/elevation for particular document ids on the query? I'm thinking in the order of ~100s of particular documents to elevate. The elevation should have no effect if the rest of the query doesn't find those documents. Alternatively, if this isn't possible, what is a sane way for setting up an alternative indexing approach that would make this possible? Could I add a field per user in the index to store their scores? I'm thinking in the order of 1000 users. The major drawback of that approach is the number of times a document would need to be reindexed (i.e. each time it was used by the user).
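
    A minimal sketch of one way to do this, assuming the edismax query parser and SolrJ (the Solr URL, core name, query and field names are assumptions): the per-user boost string is assembled from that user's stored click counts and passed as a boost query, so it only affects documents the main query already matches.

        import org.apache.solr.client.solrj.SolrClient;
        import org.apache.solr.client.solrj.SolrQuery;
        import org.apache.solr.client.solrj.impl.HttpSolrClient;
        import org.apache.solr.client.solrj.response.QueryResponse;
        import org.apache.solr.common.SolrDocument;

        public class PerUserBoostSearch {
            public static void main(String[] args) throws Exception {
                try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/items").build()) {
                    SolrQuery q = new SolrQuery("laptop bag");      // the user's actual search terms
                    q.set("defType", "edismax");
                    // Boost documents this user has clicked before (built per request from
                    // stored click counts); docs the main query doesn't match are unaffected.
                    q.set("bq", "id:(1234^8 5678^3 9012^2)");
                    QueryResponse rsp = solr.query(q);
                    for (SolrDocument doc : rsp.getResults()) {
                        System.out.println(doc.getFieldValue("id"));
                    }
                }
            }
        }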

    Read the article

  • Unsigned versus signed numbers as indexes

    - by simendsjo
    What's the rationale for using signed numbers as indexes in .NET? In Python, you can index from the end of an array by passing negative numbers, but this is not the case in .NET. It would not be easy for .NET to add such a feature later, as it could break other code that perhaps uses special rules (yeah, a bad idea, but I guess it happens) on indexing. Not that I have ever needed to index arrays over 2,147,483,647 in size, but I really cannot understand why they chose signed numbers. Can it be because it's more normal to use signed numbers in code?

    Read the article

  • Why shouldn't I use Flash?

    - by acidzombie24
    I have heard many times that I should avoid Flash for my website, yet no one has told me a good reason. I searched for reasons and I see many that are not true (such as text in Flash not being indexable by search engines) or may not necessarily be true or significant enough (using more bandwidth - would a JS equivalent be bigger or smaller?). My site uses Flash to play back sound (m4a). I don't have to worry about indexing, the back button not working, etc., but I have a feeling there may be other reasons. I'll note one myself: the fact that the iPhone/iPod touch and other mobile devices don't support it - not a big deal for most sites, and obvious. What are the reasons to avoid Flash on my site?

    Read the article

  • Using Lucene to query file properties in Windows

    - by sneha
    Hi all, I am planning to use Apache Lucene in one of my projects. I want to index files based on the file properties (I won't be indexing the data), and I want Lucene to query the index so that I can quickly find the list of files matching those properties. E.g.: give me all the files with an access time greater than 10/10/2005 and less than 10/04/2010, created by james. Can I use Lucene for this kind of project, or am I better off using Windows Search (its footprint is very heavy, almost 5 MB, and having to bundle it as part of my application seems tough)? Can you suggest any better alternatives?
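
    For what it's worth, a rough sketch of that idea with a recent Lucene release (the folder, field names and date values are assumptions; older Lucene versions spell the numeric field and query classes differently):

        import java.nio.file.*;
        import java.nio.file.attribute.BasicFileAttributes;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.*;
        import org.apache.lucene.index.*;
        import org.apache.lucene.search.*;
        import org.apache.lucene.store.FSDirectory;

        public class FilePropertyIndex {
            public static void main(String[] args) throws Exception {
                FSDirectory dir = FSDirectory.open(Paths.get("file-index"));

                // Index only file properties (path, owner, last access time), not the contents.
                try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
                     DirectoryStream<Path> files = Files.newDirectoryStream(Paths.get("C:/some/folder"))) {
                    for (Path p : files) {
                        BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);
                        Document doc = new Document();
                        doc.add(new StringField("path", p.toString(), Field.Store.YES));
                        doc.add(new StringField("owner", Files.getOwner(p).getName(), Field.Store.NO));
                        doc.add(new LongPoint("accessTime", attrs.lastAccessTime().toMillis()));
                        writer.addDocument(doc);
                    }
                }

                // "access time between 2005-10-10 and 2010-04-10 AND created by james"
                Query q = new BooleanQuery.Builder()
                        .add(LongPoint.newRangeQuery("accessTime", 1128902400000L, 1270857600000L),
                                BooleanClause.Occur.MUST)
                        .add(new TermQuery(new Term("owner", "james")), BooleanClause.Occur.MUST)
                        .build();
                try (IndexReader reader = DirectoryReader.open(dir)) {
                    IndexSearcher searcher = new IndexSearcher(reader);
                    for (ScoreDoc hit : searcher.search(q, 10).scoreDocs) {
                        System.out.println(searcher.doc(hit.doc).get("path"));
                    }
                }
            }
        }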

    Read the article

  • Java: Build an XML document using XPath expressions

    - by snoe
    I know this isn't really what XPath is for, but if I have a HashMap of XPath expressions to values, how would I go about building an XML document? I've found dom4j's DocumentHelper.makeElement(branch, xpath), except it is incapable of creating attributes or indexing. Surely a library exists that can do this?

        Map xMap = new HashMap();
        xMap.put("root/entity/@att", "fooattrib");
        xMap.put("root/array[0]/ele/@att", "barattrib");
        xMap.put("root/array[0]/ele", "barelement");
        xMap.put("root/array[1]/ele", "zoobelement");

    would result in:

        <root>
          <entity att="fooattrib"/>
          <array><ele att="barattrib">barelement</ele></array>
          <array><ele>zoobelement</ele></array>
        </root>
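
    I'm not aware of a library for this either, but as an illustration, here is a hand-rolled dom4j sketch (my own assumption, not an existing API) that handles just the three constructs above - plain element steps, a trailing @attribute, and a simple [n] index:

        import java.util.LinkedHashMap;
        import java.util.Map;
        import org.dom4j.Document;
        import org.dom4j.DocumentHelper;
        import org.dom4j.Element;

        public class PathMapToXml {

            // Builds a document from simple path/value pairs. Anything beyond
            // element steps, @attribute steps and [n] indexes is out of scope.
            static Document build(Map<String, String> paths) {
                Document doc = DocumentHelper.createDocument();
                for (Map.Entry<String, String> e : paths.entrySet()) {
                    String[] steps = e.getKey().split("/");
                    Element current = null;
                    for (int i = 0; i < steps.length; i++) {
                        String step = steps[i];
                        boolean last = (i == steps.length - 1);
                        if (step.startsWith("@")) {                  // attribute step
                            current.addAttribute(step.substring(1), e.getValue());
                            break;
                        }
                        String name = step;
                        int index = 0;
                        int bracket = step.indexOf('[');
                        if (bracket >= 0) {                          // indexed step, e.g. array[1]
                            name = step.substring(0, bracket);
                            index = Integer.parseInt(step.substring(bracket + 1, step.length() - 1));
                        }
                        if (current == null) {                       // root element
                            if (doc.getRootElement() == null) doc.addElement(name);
                            current = doc.getRootElement();
                        } else {
                            while (current.elements(name).size() <= index) {
                                current.addElement(name);            // create missing siblings
                            }
                            current = (Element) current.elements(name).get(index);
                        }
                        if (last) current.setText(e.getValue());
                    }
                }
                return doc;
            }

            public static void main(String[] args) {
                Map<String, String> xMap = new LinkedHashMap<>();
                xMap.put("root/entity/@att", "fooattrib");
                xMap.put("root/array[0]/ele/@att", "barattrib");
                xMap.put("root/array[0]/ele", "barelement");
                xMap.put("root/array[1]/ele", "zoobelement");
                System.out.println(build(xMap).asXML());
            }
        }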

    Read the article

  • What programming language is used to build Google's search algorithm?

    - by AKN
    It is known that Google has the best search and indexing algorithms. They also have good relevancy, and they are quick to pick up the latest results. All that's fine. What programming languages (C, C++, Java, etc.) and databases (Oracle, MySQL, etc.) have they used to achieve this, given that they have to manipulate huge volumes of data quickly and effectively? Though I'm not looking for their in-depth architecture (in case that violates their company policies), an overview of all such things would be useful. Could anybody please add your valuable suggestions and insight on this?

    Read the article

  • Foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros and cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index. Primary key as foreign key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )
        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )
        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • Question about MySQL indexes on low-to-medium cardinality columns

    - by Kevin J
    I have a general question about the way that database indexing works, particularly in mysql. Let's say I have a table with a million rows with a column "ClientID" that is distributed relatively equally among 30 values. Thus, this column is very low cardinality (30) relative to the primary key (1 million). Now, I understand that you shouldn't create indexes on low cardinality fields. However, in this case, queries are only ever done with one of the 30 clientIDs. Thus, wouldn't creating an index on ClientID be helpful, as the search space is automatically reduced to 1/30th what it normally would be? Or is my understanding of how the index works flawed? Thanks

    Read the article

  • Choosing portal/CMS software for developing multi-brand websites?

    - by hbagchi
    We are in the early stage of overhauling a multi-brand website built using a custom-developed Java MVC framework, to enable Web 2.0 features. Built-in features we are looking at are: i18n, SSO, content search and indexing, personalization, mashup support, Ajax support, rich media content storage and management support, search engine optimization friendliness, bookmarkable URLs, support for social networking sites, and support for page composition and decoration using templates. Combinations of these features are supported by many portal and CMS products. Any insights into using a portal/CMS combination to address these requirements would be very helpful! This is a follow-up on this post, focusing on the portal/CMS angle.

    Read the article

  • What's the fastest way to scrape a lot of pages in PHP?

    - by Yegor
    I have a data aggregator that relies on scraping several sites and indexing their information in a way that is searchable to the user. I need to be able to scrape a vast number of pages daily, and I have run into problems using simple curl requests, which are fairly slow when executed in rapid sequence for a long time (the scraper runs 24/7, basically). Running a multi-curl request in a simple while loop is fairly slow. I sped it up by doing individual curl requests in a background process, which works faster, but sooner or later the slower requests start piling up, which ends up crashing the server. Are there more efficient ways of scraping data? Perhaps command-line curl?

    Read the article

  • MS SQL Server 2000 tables

    - by klork
    We currently have an MS SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: 1) Does anyone have any experience with an MS SQL Server (2000/2005) installation with millions of tables? 2) What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? 3) What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Implications of full-text search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway). I'm implementing SQL Server full-text search, and I'm wondering whether I should limit the search ability to just the primary fields (primary address and names, for example), or whether I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with full-text indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?

    Read the article

  • Detect movie being played (Windows)

    - by modosansreves
    Watching a movie is quite a different kind of user activity. The user touches neither mouse nor keyboard, yet he 'actively' uses the computer. Thus, the screensaver shouldn't run, indexing should be performed with care, etc. On the other hand, playing video requires either writing directly to video memory, or DirectShow, or some other API. This may be the key to the answer. What is the dead simple way to determine that a video is being played?

    Read the article

  • 302 vs 301 redirect in this specific case

    - by Binder
    We have a website that displays information in a location-based manner, i.e. it detects the IP of the visiting user and redirects him/her to an appropriate landing page; e.g. a user coming from Egypt will be redirected to http://www.mysite.com/egypt/cairo and a user visiting from Dubai will be redirected to http://www.mysite.com/uae/dubai, and so on, as we cater to multiple locations in the Middle East. Now, we have been advised by our SEO consultant that we should put a 301 (permanent redirect) on http://www.mysite.com to point to http://www.mysite.com/ksa/riyadh. I would like to know the negative implications this would have on Google indexing or otherwise, as I fundamentally disagree with this suggestion and believe that in a situation like this a 302 redirect would be more appropriate.
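
    For reference, a minimal servlet sketch of how the two redirect types differ at the HTTP level (the servlet and target path are placeholders, and the SEO comments reflect the usual interpretation of 301 vs. 302 rather than anything from the question):

        import java.io.IOException;
        import javax.servlet.http.*;

        public class GeoRedirectServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String target = "/ksa/riyadh";   // would normally come from an IP-based geo lookup

                // 302 (temporary): search engines generally keep the original URL indexed.
                resp.sendRedirect(target);

                // 301 (permanent): search engines treat the target as the canonical URL
                // and transfer the original URL's ranking to it.
                // resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);
                // resp.setHeader("Location", target);
            }
        }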

    Read the article
