Search Results

Search found 991 results on 40 pages for 'indexed'.

  • Java object caching: which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at the point where I need to decide what to do when object caching reaches the configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them back from the file (file I/O) when required, or keep them in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Adding some more information: I am asking this to determine whether I can switch to distributed caching. The remote server that would hold the cache has more memory and a better disk, and would be used only for caching. One reason we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we are using a 32-bit JVM).

    Update: thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • noindex, follow on list views?

    - by Fabrizio
    On one of our client's websites we have lots of list views with links to detail views. (Imagine a blog with the post overview and the single post pages.) The detail views don't change, but the list views change whenever new items come up, and the pages displaying the list views don't contain any other valuable content. So my question is: does it make sense to set meta "noindex, follow" on the list view pages (and of course "index, follow" on the detail views) to prevent search engines from pointing to a list view when the keyword is found in the title or teaser of a list item? By the time the visitor clicks on such a search result the list may have changed and the content may no longer be visible, whereas if he goes directly to the single view he will definitely find what he was searching for. Related question: the start page also contains mainly a list view. Is it a bad idea to have the start page not indexed? Any SEO gurus here? :) Thanks, Fabrizio.

    Read the article

  • Unique keys for Sphinx along three vectors instead of two

    - by Brendon Muir
    I'm trying to implement thinking-sphinx across multiple 'sites' hosted under a single Rails application. I'm working with the developer of thinking-sphinx to sort through the finer details and am making good progress, but I need help with a maths problem. Usually the formula for making a unique ID in a thinking-sphinx search index is to take the id, multiply it by the total number of searchable models, and add the number of the currently indexed model: id * total_models + current_model. This works well, but now I also throw an entity_id into the mix, so there are three vectors for making this ID unique. Could someone help me figure out the equation that guarantees the IDs will never collide using these three variables: id, total_models, total_entities? The entity ID is an integer. I thought of id * (total_models + total_entities) + (current_model + current_entity), but that results in collisions. Any help would be greatly appreciated :)
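
    One way the two-variable formula is usually generalised (a sketch, not from the original thread) is mixed-radix encoding: treat the model number as the lowest "digit" and the entity as the next one, assuming 0 <= current_model < total_models and 0 <= current_entity < total_entities.

        unique_id = (id * total_entities + current_entity) * total_models + current_model

    Each (id, current_entity, current_model) triple then maps to a distinct integer, for the same reason the original id * total_models + current_model never collides; adding the two ranges instead of multiplying them is what produces the collisions in the attempt above.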

    Read the article

  • Solr dataimport skips entities in my data-config.xml

    - by lerhaupt
    My data-config.xml defines 3 different entities under the document tag (let's call them foo, bar, and baz). When I issue a basic full import via localhost:8983/solr/dataimport?command=full-import, only 2 of the 3 entities get indexed (foo and bar are in my index, but baz never makes it). However, if I then issue a command to import just baz via localhost:8983/solr/dataimport?command=full-import&entity=baz&clean=false, it adds the baz documents just fine and the index then has all 3 types. Does anyone have any thoughts on why one entity gets skipped in the general data import but still works when I specifically call it out? Is there an error/warning log I can check? Nothing bad shows up in /solr/logs/, but those appear to be only request logs.

    Read the article

  • Different execution plan for similar queries

    - by Graham Clements
    I am running two very similar update queries, but for a reason unknown to me they are using completely different execution plans. Normally this wouldn't be a problem, but they both update exactly the same number of rows, and one uses an execution plan that is far inferior to the other: 4 seconds vs. 2 minutes, which scaled up causes me a massive problem. The only difference between the two queries is that one uses the column CLI and the other DLI. These columns have exactly the same datatype and are indexed exactly the same, yet for the DLI query's execution plan the index is not used. Any help as to why this is happening is much appreciated.

        -- Query 1
        UPDATE a
        SET DestKey = (
            SELECT TOP 1 b.PrefixKey
            FROM refPrefixDetail AS b
            WHERE a.DLI LIKE b.Prefix + '%'
            ORDER BY len(b.Prefix) DESC
        )
        FROM CallData AS a

        -- Query 2
        UPDATE a
        SET DestKey = (
            SELECT TOP 1 b.PrefixKey
            FROM refPrefixDetail b
            WHERE a.CLI LIKE b.Prefix + '%'
            ORDER BY len(b.Prefix) DESC
        )
        FROM CallData AS a
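
    Stale statistics are one common reason two otherwise identical columns get different plans; a minimal first check (a suggestion, not part of the original question, and assuming a full scan on these tables is affordable) is to refresh the statistics and compare the plans again:

        -- refresh optimizer statistics on both tables involved in the update
        UPDATE STATISTICS dbo.CallData WITH FULLSCAN;
        UPDATE STATISTICS dbo.refPrefixDetail WITH FULLSCAN;

    If the DLI plan still ignores the index afterwards, comparing the estimated versus actual row counts in the two plans usually shows where the optimizer's assumptions about CLI and DLI diverge.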

    Read the article

  • Lucene.NET - Find documents that do not contain a specified field

    - by Brandon
    Let's say I have 2 instances of a class called 'Animal'. Animal has 3 fields: Name, Age, and Type. The Name field is nullable, so before I insert an instance of Animal as a Lucene indexed document, I check whether Animal.Name == null, and if it is, I do not add a Name field to the document. If I were to retrieve all animals, I would see that the Name field does not exist and could set its value to null. However, there may be situations where I want to say "Get me all animals that do not have a name specified yet." In this situation I want to retrieve all Lucene.NET documents from my animal index that do not contain the Name field. Is there an easy way to do this with Lucene.NET? I want to stay away from having to perform some sort of hack, such as checking whether my Name field holds the literal value 'null'.

    Read the article

  • Map a domain to an MVC area

    - by Simon_Weaver
    Has anybody got any experience mapping a domain to an MVC area? Here's our situation.

    Old system (still active but will soon redirect to the new store):

        www.example.com - our main site, where we send traffic
        store.example.com - our store site, a completely separate site that is indexed in Google

    New system:

        www.example.com - same site as before
        www.example.com/store - new store site, built as an ASP.NET MVC area

    Because the store is a separate domain, Google gives it a separate entry in the search results. I'd like to keep this benefit in the future, but I'm wondering whether there is a good way to map a domain (store.example.com) to the MVC area, or if it's just going to be more trouble than it's worth. PS. I'm not trying to keep the existing indexing - it's a completely separate store, so that's not possible. I just want to redirect to the corresponding page in the new store. I'm just trying not to lose the benefit of two domains for SEO purposes.

    Read the article

  • Concatenating databases with Squeryl

    - by Pengin
    I'm trying to use Squeryl to take the contents of a table from one database, and append it to the equivalent table in another database. The primary key will have to be reassigned in the process, but I'm getting the error NULL not allowed for column "SIMID". Why is this?

        object Concatenator {
          def main(args: Array[String]) {
            Class.forName("org.h2.Driver");

            val seshA = Session.create(
              java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsA", "sa", "password"),
              new H2Adapter
            )

            val seshB = Session.create(
              java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsB", "sa", "password"),
              new H2Adapter
            )

            using(seshA){
              import Library._
              from(sims){s => select(s)}.foreach{item =>
                using(seshB){
                  sims.insert(item);
                }
              }
            }
          }

          case class Simulation(
            @Column("SIMID") var id: Long,
            val date: Date
          ) extends KeyedEntity[Long]

          object Library extends Schema {
            val sims = table[Simulation]

            on(sims)(s => declare(
              s.id is(unique, indexed, autoIncremented)
            ))
          }
        }

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow. I haven't been able to even wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days... Oh, and everything is indexed. Is SQL just not designed to handle such large joins?
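
    A minimal sketch of how such a join is usually diagnosed, using hypothetical table and column names (mail, location_postal_codes, postal_code, state) since the question does not give the real schema; EXPLAIN shows whether the postal-code indexes are actually used:

        -- hypothetical schema: location_postal_codes(state, postal_code), mail(postal_code, ...)
        EXPLAIN
        SELECT m.*
        FROM mail AS m
        JOIN location_postal_codes AS lpc ON lpc.postal_code = m.postal_code
        WHERE lpc.state = 'New York';

    If EXPLAIN reports a full scan on the 5-million-row table, the usual culprits are mismatched column types or character sets on the join columns, which prevent index use; a join of this size is well within MySQL's reach when it can use an index on both sides.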

    Read the article

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during upgrade on a MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX' - but may also be on a database without this filegroup. In this case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE WHEN EXISTS (SELECT groupname FROM sysfilegroups WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC )
        ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails as it appears a filegroup cannot be specified by variable. Is this possible?
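
    The ON clause only accepts a literal filegroup name, so the usual workaround (a sketch, not from the original post) is to build the statement as a string and run it with sp_executesql, reusing the @fgName computed above; here the filegroup name comes from QUOTENAME over a fixed CASE expression, not from user input:

        -- build the ALTER TABLE statement with the chosen filegroup spliced in
        DECLARE @sql NVARCHAR(MAX)

        SET @sql = N'ALTER TABLE [dbo].[mytable] '
                 + N'ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC ) '
                 + N'ON ' + @fgName

        EXEC sp_executesql @sql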

    Read the article

  • How do I index documents in SOLR?

    - by Shane
    Hi there, I'm running Solr 1.4 on Ubuntu 10.04 (installed via apt-get solr-tomcat) and it seems to be working fine. I'm having some difficulty finding any coherent info on how to index documents, though. I'm new to Solr, so bear with me! I have a folder (/mnt/folder) that is a mounted Windows share containing Word and PDF files that I would like indexed. What's the easiest way to get Solr to index the entire folder? The documentation for Solr is pretty poor; it's impossible to find any decent tutorials on getting things done with it, so any help is greatly appreciated! S

    Read the article

  • Adding search for a private website

    - by Vitor Py
    I have a login-protected website. It's an internal application that is not available to the general public, and hence it's not indexed by any search engine. The application runs on Google App Engine. I would like to add a search engine to it, but obviously without indexing it publicly. Is there any solution available from Google/Bing/others for a situation like this? Have you done this before? What solution did you choose, and what were your results?

    Read the article

  • How can I make sure the Sphinx daemon runs?

    - by Ethan
    I'm working on setting up a production server using CentOS 5.3, Apache, and Phusion Passenger (mod_rails). I have an app that uses the Sphinx search engine and the Thinking Sphinx gem. According to the Thinking Sphinx docs, "If you actually want to search against the indexed data, then you'll need Sphinx's searchd daemon to be running. This can be controlled using the following tasks:"

        rake thinking_sphinx:start
        rake ts:start
        rake thinking_sphinx:stop
        rake ts:stop

    What would be the best way to ensure that this takes place in production? I can deploy my app, then manually run rake thinking_sphinx:start, but I like to set things up so that if I have to bounce the server, everything will come back up. Should I put a call to that Rake task in an initializer? Or something in rc.local?

    Read the article

  • Using Full-Text Search in SQL Server 2005 across multiple tables, columns

    - by crisgomez
    Hi, I have a problem. I created a full-text search query that should return the record(s) in which the parameter I supplied matches one of the full-text indexed fields of multiple tables. The problem is that when Users.Id is equal to Certification.AId, the query returns records even though they do not satisfy the supplied parameter. For this example I supplied the value 'xandrick', which should return Id 184, but the query returns two Ids, 184 and 154. What is the best way to return only the Id(s) that satisfy the given value?

        User table

        Id  | Firstname | Lastname | Middlename | Email              | AlternativeEmail
        154 | Gregorio  | Honasan  | Pimentel   | [email protected]  | [email protected]
        156 | Qwerty    | Qazggf   | fgfgf      | [email protected]. | [email protected]
        184 | Xandrick  | Flores   | NULL       | [email protected]  | null

        Certification table

        Id | AID | Certification             | School
        12 | 184 | sdssd                     | AMA
        13 | 43  | web-based and framework 2 | Asian development foundation college
        16 | 184 | hjhjhj                    | STI
        17 | 184 | rrrer                     | PUP
        18 | 154 | vbvbv                     | AMA

        SELECT DISTINCT Users.Id
        FROM Users
        INNER JOIN Certification ON Users.Id = Certification.aid
        LEFT JOIN FREETEXTTABLE(Users, (Firstname, Middlename, Lastname, Email, AlternativeEmail), 'xandrick') AS ftUsr
            ON Users.Id = ftUsr.[KEY]
        LEFT JOIN FREETEXTTABLE(Certification, (Certification, School), 'xandrick') AS ftCert
            ON Certification.Id = ftCert.[KEY]
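
    A hedged sketch of one likely fix: because both FREETEXTTABLE joins are LEFT JOINs and nothing filters on their results, every user that has a certification row is returned whether or not anything matched 'xandrick' (which is why 154 appears). Requiring at least one of the free-text joins to have produced a match, with the rest of the query unchanged:

        SELECT DISTINCT Users.Id
        FROM Users
        INNER JOIN Certification ON Users.Id = Certification.aid
        LEFT JOIN FREETEXTTABLE(Users, (Firstname, Middlename, Lastname, Email, AlternativeEmail), 'xandrick') AS ftUsr
            ON Users.Id = ftUsr.[KEY]
        LEFT JOIN FREETEXTTABLE(Certification, (Certification, School), 'xandrick') AS ftCert
            ON Certification.Id = ftCert.[KEY]
        -- keep only rows where at least one free-text join actually matched
        WHERE ftUsr.[KEY] IS NOT NULL
           OR ftCert.[KEY] IS NOT NULL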

    Read the article

  • Perfect hash in Scala.

    - by Lukasz Lew
    I have a class C:

        class C (...) {
          ...
        }

    I want to use it to index an efficient map, and the most efficient map is an Array. So I add a "global" "static" counter in the companion object to give each object a unique id:

        object C {
          var id_counter = 0
        }

    In the primary constructor of C, each newly created instance should remember the current counter value and increment the counter. Question 1: how do I do that? Now I can use the id of a C object as a perfect hash to index an array. But an Array does not preserve the type information a Map would - namely, that this particular array is indexed by C's ids. Question 2: is it possible to have that with type safety?

    Read the article

  • GQL, Aggregation and Order By

    - by Koran
    Hi, how can GQL support ORDER BY when it does not support aggregation? The question is: if, say, the result of the query contains more than 1000 items, does ORDER BY return the fully ordered list, or only the first 1000 items, which are then ordered? To put it another way: is MIN() conceptually the same as query.orderby('asc').fetch(1)? If it is properly ordering the whole list, then how can it not provide COUNT(), since to order the list properly GQL presumably has to parse through the whole list - in which case COUNT() is not an issue at all? Or is each item indexed and kept in some kind of tree, so that the whole list never needs to be parsed?

    Read the article

  • Filtering data in an array.

    - by user276424
    Hi all, I have an array that holds 30 date objects, indexed from the minimum date value to the maximum date value. What I would like to do is retrieve only 7 dates from the array. Of the 7, the first should be the minDate and the last should be the maxDate, with 5 dates in the middle, spaced evenly between minDate and maxDate. How would I accomplish this? Hope I was clear. Thanks, Tonih

    Read the article

  • Showing all rows for keys with more than one row

    - by Leif Neland
    Table kal:

        id    integer, primary key
        init  char(4), indexed
        job   char(4)

        id | init | job
        ---+------+-----
         1 | aa   | job1
         2 | aa   | job2
         3 | bb   | job1
         4 | cc   | job3
         5 | cc   | job5

    I want to show all rows where init has more than one row:

        id | init | job
        ---+------+-----
         1 | aa   | job1
         2 | aa   | job2
         4 | cc   | job3
         5 | cc   | job5

    I tried:

        select * from kal where init in (select init from kal group by init having count(init) >= 2);

    Actually, the table has 60000 rows and the real query used count(init) < 40, but it takes a humongous amount of time; phpMyAdmin and my patience run out. Both

        select init from kal group by init having count(init) >= 2

    and

        select * from kal where init in ('aa','bb','cc')

    run in "no time", less than 0.02 seconds. I've tried different subqueries, but they all take "infinite" time, more than a few minutes; I've actually never let them finish. Leif
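
    A hedged sketch of the usual faster rewrite, using only the table and column names from the question: materialise the list of duplicated init values once in a derived table and join against it, instead of IN (subquery), which older MySQL versions re-evaluate per outer row:

        -- derived table holds each init that appears at least twice
        SELECT k.*
        FROM kal AS k
        JOIN (
            SELECT init
            FROM kal
            GROUP BY init
            HAVING COUNT(*) >= 2
        ) AS dup ON dup.init = k.init;

    With init indexed, both the grouping and the final join can use the index, which is typically what brings the 60000-row case back down to sub-second times.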

    Read the article

  • JS dynamic img change and SEO

    - by Gusepo
    Hi all, I've built a website using jQuery to make nice transitions between content. The code works this way: there are 2 imgs (body and footer); when I click on a link, instead of going to another page I fade out the 2 imgs and change their src attributes, and when the new imgs are loaded I fade them back in. I'm using SWFaddress to allow users to go directly to internal content. Now I'd like to make my content indexable by Google and other search engines. All the text content is inside the imgs, so I've put the text in the ALT attributes. My question is: if I dynamically change the imgs' ALT attributes using JS, will spiders be able to read them properly? Consider that I'm using SWFaddress to create a sitemap. Thanks

    Read the article

  • Solr spellcheck configuration

    - by Bogdan Gusiev
    I am trying to build the spellcheck index with IndexBasedSpellChecker:

        <lst name="spellchecker">
          <str name="name">default</str>
          <str name="field">text</str>
          <str name="spellcheckIndexDir">./spellchecker</str>
        </lst>

    And I want to specify the dynamic field "*_text" as the field option:

        <dynamicField name="*_text" stored="false" type="text" multiValued="true" indexed="true"/>

    How can this be done?

    Read the article

  • Sphinx + tokyo Tyrant + mysql

    - by stunti
    I'm looking at creating a full-text search engine for one of my projects. We have MySQL, Tokyo Tyrant, and file documents that need to be indexed. I'm looking at Sphinx right now, but I can't figure out whether I can use it to index every document. I know it's possible to have Sphinx use MySQL, but I'm looking for a way to let Sphinx index and query Tokyo Tyrant as well as index the file documents. It could be Sphinx or Xapian or another one, but no Java (Lucene is out) - something that can be used with PHP and runs on Linux. Any idea of a search engine that can accept more than MySQL as the source? Thanks

    Read the article

  • What mail storage should I choose for our web application: IMAP, key-value store, RDBMS, ...?

    - by tvrtko
    I have to store e-mail messages for use with our application. I keep "metadata" for all messages in a relational database, but I don't feel comfortable keeping the message content (gigabytes and terabytes of email data) inside a database. I'm currently using IMAP as the storage, but I have my doubts about whether I chose correctly. First of all, there is the problem of UIDVALIDITY and how to keep a permanent reference to a message inside IMAP. Second, I'm not sure this is the most robust solution in terms of backup/restore strategies, corruption of the store, replication, and so on. The positive side is that I can query IMAP using the headers, because that data is mostly indexed. I don't know whether key-value stores (Cassandra, Tokyo Cabinet, Redis) would be a better approach. How do they handle storing 1 KB versus 50 MB of data? How do they prevent corruption, and when corruption or a device failure happens, how can I repair the store?

    Read the article

  • 2 sites each in a different country with 1 set of content (cloaking)

    - by Greg
    Hi, I have a question re: cloaking. I have a friend who has a business in Canada and the UK. Currently the .ca site is hosted on GoDaddy. The .co.uk domain is registered (with a UK IP address) with Domainmonster and uses a cloaked/framed redirect to the .ca site. As a result (my assumption) the .ca site is indexed fine by Google, while the .co.uk is not. The content is generic for both sites. How do I point the .co.uk site directly at the content independently (preferably without duplicating the content hosting in the UK), so that, for instance, if the .ca domain were taken away altogether, the .co.uk domain would remain an entity in itself from Google's point of view? Does Google index a generic set of content and then associate different country domains with that content? I hope I have explained this ok. Thanks, Greg

    Read the article

  • Removed URL from Google Webmaster Tools, now Google doesn't show my website in search

    - by vicky
    I developed the site, and I removed the URL, which was like "http://example.com\", from Google Webmaster Tools. I did this because Google was showing it in search with the old "under construction" title from my previous page. When I completed the website, I removed the URL from there and added a sitemap etc. to get a fresh copy of the site indexed. Now I see in Webmaster Tools that all pages are indexed, but still no success. Yahoo and Bing do show my page when searched. Help me to fix this problem. Regards, vicky

    Read the article
