Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 19/45

  • How to ignore noiseXXX.txt files for a specific column in SQL Server 2005?

    - by John MacIntyre
    I have a product table where the description column is full-text indexed. The problem is that users frequently search for a single word which happens to be in the noiseXXX.txt files. We'd like to keep the noise-word functionality enabled, but is there any way to turn it off just for this one column? I think you can do this in 2008 with SET STOPLIST=OFF, but I can't seem to find similar functionality in SQL Server 2005.
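
    For reference, a minimal sketch of the SQL Server 2008 syntax the question alludes to; the table, column and key index names below are made up:

        -- SQL Server 2008+ only: create the full-text index with stopwords (noise words) disabled.
        -- dbo.Product, Description and PK_Product are hypothetical names.
        CREATE FULLTEXT INDEX ON dbo.Product(Description)
            KEY INDEX PK_Product
            WITH STOPLIST = OFF;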

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting all possible confidential info. I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at the database level (triggers, stored procedures, etc.). It runs on a Windows 2000 server with MSSQL 2000 Enterprise, and no immediate plans for replacing or updating the system are being considered :(

    The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine, but there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL Server (execution is pretty regular - ranging from 0 to 60 times a minute, depending on which items the program instance is responsible for).

    Performance of the system has dropped considerably with 10 years of data growth; the reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered:

    - In its execution, sp_ProcessTrans selects from the Employee table 7 times (don't ask).
    - The select is done on a field that is NOT the primary key.
    - No index exists on this field, so a table scan is performed - 7 times, per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes) and saw immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know whether I must schedule downtime.

    Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while the transactions are still taking place? (I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a long time with 10 years of data.
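
    As a point of reference only - a sketch of the kind of statement involved, with a hypothetical column name rather than the asker's actual schema:

        -- SQL Server 2000 has no ONLINE option, so building a clustered index takes an
        -- exclusive lock on the table while the data is physically reordered into the
        -- new clustered structure; readers and writers are blocked for the duration.
        -- EmpNum is a placeholder for the nearly-unique NUM(7) field described above.
        CREATE CLUSTERED INDEX IX_Employee_EmpNum
            ON dbo.Employee (EmpNum);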

    Read the article

  • Should I rebuild table indexes after a SQL Server 2000 to 2005 database migration

    - by Joe T
    I'm tasked with doing a SQL Server 2000 to 2005 migration. I will be doing a side-by-side migration. After restoring from a backup I plan to do the following:

        ALTER DATABASE <database_name> SET COMPATIBILITY_LEVEL = 90;
        DBCC CHECKDB(<database_name>) WITH NO_INFOMSGS
        DBCC UPDATEUSAGE(<database_name>) WITH NO_INFOMSGS
        EXEC sp_updatestats 'resample'

    Should I rebuild table indexes before using DBCC UPDATEUSAGE and sp_updatestats? Have I missed anything obvious that should be executed after a migration? All help would be warmly up-voted. Thanks
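
    If a full index rebuild is wanted as part of this, one commonly seen sketch is below; sp_MSforeachtable is an undocumented helper, so treat it as an assumption rather than a guaranteed API (a cursor over sys.tables does the same job):

        -- Rebuild every index on every user table before updating usage and statistics.
        EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';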

    Read the article

  • Loading index in MemoryIndex instance

    - by Javi
    Hello, is there any way to load an existing index into an instance of MemoryIndex? I have an application which uses Hibernate Search, so I can use index() on a FullTextEntityManager instance to index an object. I'd like to read the created index back and load it into a MemoryIndex instance in order to execute several queries over it. Is that possible? Thanks.

    Read the article

  • Is it safe to modify CCK tables by hand?

    - by LanguaFlash
    I'm not intimately familiar with CCK, but I have a one-time custom setup and know that I could get some performance gains if I created indexes and changed the type and length of some of the fields in my CCK table. Is it safe to modify this table at all, or will I end up destroying something in the process? Thanks

    Read the article

  • A question about matrix manipulation

    - by appi
    Given a 1*N matrix or an array, how do I find the first 4 elements which have the same value, and then store the indices of those elements? PS: I'm just curious - what if we want to find the first 4 elements whose value differences are within a certain range, say below 2? For example, with M=[10,15,14.5,9,15.1,8.5,15.5,9.5], the elements I'm looking for would be 15, 14.5, 15.1, 15.5 and the indices would be 2, 3, 5, 7.
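
    A minimal sketch of the first part (the first value that occurs at least 4 times), assuming the M=[...] syntax above is MATLAB; the tolerance variant would replace the equality test with abs(M - M(k)) < 2:

        % Sketch only: returns the indices of the first 4 occurrences of the
        % first value in M that appears at least 4 times (empty if none does).
        idx = [];
        for k = 1:numel(M)
            hits = find(M == M(k), 4);   % indices of up to 4 exact matches
            if numel(hits) == 4
                idx = hits;
                break
            end
        end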

    Read the article

  • Lucene Search Returning Extra, Undesired Records

    - by Brandon
    I have a Lucene index that contains a field called 'Name'. I escape all special characters before inserting a value into my index using QueryParser.Escape(value). In my example I have 2 documents with the following names, respectively: 'Test' and 'Test (Test)'. They get inserted into my index as the terms [test], [test] and [\(test\)] (I can confirm this using Luke). I insert these values as TOKENIZED and using the StandardAnalyzer. When I perform a search, I use QueryParser.Escape(searchString) against my search string input to escape special characters, and then use the QueryParser with my 'Name' field and the StandardAnalyzer to perform my search. When I perform a search for 'Test', I get back both documents in my index (as expected). However, when I perform a search for 'Test (Test)', I am still getting back both documents. I realize that in both examples it matches on the 'test' term in the index, but I am confused about why, in my 2nd example, it would not just pull back the document with the value 'Test (Test)', because my search should create two terms: [test] and [\(test\)]. I would imagine it would apply some sort of boolean operator such that BOTH terms must match in that situation, so I would get back just one record. Is there something I am missing, or a trick to make the search behave as desired?
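
    The behavior described is consistent with QueryParser's default operator being OR. A sketch of forcing AND between terms - the member names follow the Java Lucene API (org.apache.lucene.queryParser.QueryParser) and are assumed to have close equivalents in Lucene.Net, so treat the exact spelling as an assumption:

        // Sketch: make the parser require all terms instead of any term.
        QueryParser parser = new QueryParser("Name", new StandardAnalyzer());
        parser.setDefaultOperator(QueryParser.AND_OPERATOR);
        Query q = parser.parse(QueryParser.escape(searchString));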

    Read the article

  • How and when to update a MySQL index?

    - by fayer
    I'm using this SQL query to create an index: $query = "CREATE INDEX id_index2 ON countries(geoname_id, name)"; but how do I update the index when new entries are added? Should I run a PHP script with the update query in cron every night? Is this best practice for automated index updating?

    Read the article

  • User activity vs. System activity on the Index Usage Statistics report

    - by Zachary G Jensen
    I recently decided to crawl over the indexes on one of our most heavily used databases to see which were suboptimal. I generated the built-in Index Usage Statistics report from SSMS, and it's showing me a great deal of information that I'm unsure how to interpret. I found an article at Carpe Datum about the report, but it doesn't tell me much more than I could assume from the column titles. In particular, the report differentiates between user activity and system activity, and I'm unsure what qualifies as each type of activity. I assume that any query that uses a given index increases the '# of user X' columns. But what increases the system columns? Building statistics? Is there anything that depends on the user or role(s) of the user that's running the query?
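
    For what it's worth, the report draws on the sys.dm_db_index_usage_stats DMV, which exposes the user/system split directly; a sketch of querying it (the DMV and these columns are standard in SQL Server 2005+):

        -- user_* columns count seeks/scans/lookups/updates driven by user queries;
        -- system_* columns count the same operations performed by internal work
        -- such as statistics maintenance and other system processes.
        SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups, s.user_updates,
               s.system_seeks, s.system_scans, s.system_lookups, s.system_updates
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID();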

    Read the article

  • Significance of the index name when creating an index (MySQL)

    - by Will
    I've done something like this in order to use ON DUPLICATE KEY UPDATE: CREATE UNIQUE INDEX blah ON mytable(my_col_to_make_an_index); and it's worked just fine. I'm just not sure what the purpose of the index name is -- in this case 'blah'. The stuff I've read says to use one, but I can't fathom why. It doesn't seem to be used in queries, although I can see it if I export the schema. So ... what purpose does the index name serve? If it helps, the line in the CREATE TABLE ends up looking like: UNIQUE KEY `clothID` (`clothID`)
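
    A couple of places MySQL does use the name, as a sketch (mytable, blah and my_col_to_make_an_index are the question's own placeholders):

        -- The name identifies the index when you inspect or drop it later.
        SHOW INDEX FROM mytable;                -- the Key_name column shows 'blah'
        ALTER TABLE mytable DROP INDEX blah;
        -- Index hints also refer to indexes by name:
        SELECT * FROM mytable USE INDEX (blah) WHERE my_col_to_make_an_index = 1;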

    Read the article

  • Can I tell sitecrawlers to visit a certain page?

    - by Ace
    Hi there! I have this Drupal website that revolves around a document database. By design you can only find these documents by searching the site, but I want all of them to be indexed by Googlebot and other crawlers. So I was thinking: what if I make a page that lists all the documents, and then tell the robots to visit that page to index all my documents? Is this possible, or is there a better way to do it?
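
    The standardized form of "a page that lists everything for crawlers" is an XML sitemap; a minimal sketch, with example.com URLs as placeholders (crawlers can be pointed at it via a Sitemap: line in robots.txt or search-engine webmaster tools, and Drupal has contributed modules such as xmlsitemap that generate one automatically):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://example.com/documents/1</loc></url>
          <url><loc>http://example.com/documents/2</loc></url>
        </urlset>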

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background

    I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process and hope to move them to a shared Redis.

    I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int,Foo> _cacheFooById;
        Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.

    What I did already

    The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and JSON value:

        SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    that translates into:

        HSET ("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize( HGET ("ServiceCache:FooIndex", theFoo.OwnerId) );
        myFoos.Add(theFoo.OwnerId);
        HSET ("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways. Two concurrent operations can read/modify/write at the same time; the latter "wins" the final HSET and the former's index update is lost. And another operation could read the index in between the first and second lines, so it would miss a Foo that it should find.

    So how do I index properly?

    I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem like it does what I want. Am I right that I can't really do MULTI; HGET; {update}; HSET; EXEC, since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys, so it's back to SET/GET instead of HSET/HGET, and now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while(!succeeded)
        {
            WATCH( "ServiceCache:Foo:" + theFoo.FooId );
            WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId );
            WATCH( "ServiceCache:FooIndexAll" );
            MULTI();
            SET ("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD ("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD ("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            //TODO somehow set succeeded properly
        }

    Finally I'd have to translate this pseudocode into real code depending on how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly?

    Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** this is stale data that's clobbering C's update.

    Note that C could just be a really fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
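
    On the "somehow set succeeded properly" point, Redis itself signals the outcome: EXEC returns the queued replies on success and a null reply if any WATCHed key was modified after the WATCH, so the retry test is simply whether the EXEC reply was null. A raw-command sketch using the asker's key names (the 42/7 ids are placeholders):

        WATCH ServiceCache:Foo:42
        MULTI
        SET ServiceCache:Foo:42 "{...json...}"
        SADD ServiceCache:FooIndexByOwner:7 42
        EXEC
        # EXEC -> array of replies  => succeeded = true
        # EXEC -> (nil)             => a watched key changed; loop and retry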

    Read the article

  • How do I return the indices of a multidimensional array element in C?

    - by Eddy
    Say I have a 2D array of random boolean ones and zeroes called 'lattice', and I have a 1D array called 'list' which holds the addresses of all the zeroes in the 2D array. This is how the arrays are defined:

        #define n 100
        bool lattice[n][n];
        bool *list[n*n];

    After filling the lattice with ones and zeroes, I store the addresses of the zeroes in list:

        for(j = 0; j < n; j++)
        {
            for(i = 0; i < n; i++)
            {
                if(!lattice[i][j])                     // if element = 0
                {
                    list[site_num] = &lattice[i][j];   // store address of zero
                    site_num++;
                }
            }
        }

    How do I extract the x,y coordinates of each zero in the array? In other words, is there a way to recover the indices of an array element from its address?
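
    A minimal sketch of the pointer-arithmetic approach, using the names defined in the question (k is a hypothetical loop index over list; this relies on the usual contiguous row-major layout of a true 2D array):

        /* The offset from the start of the lattice encodes both indices:
           offset = i*n + j.  ptrdiff_t comes from <stddef.h>. */
        ptrdiff_t offset = list[k] - &lattice[0][0];
        int i = offset / n;   /* first index  (row)    */
        int j = offset % n;   /* second index (column) */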

    Read the article

  • Get index name within a specific table

    - by AmRoSH
    I need to check whether this index exists on a specific table, not in all tables, because the SELECT in this statement matches the index name across every table:

        IF NOT EXISTS (SELECT name FROM sysindexes WHERE name = 'IDX_InsuranceID')
        CREATE NONCLUSTERED INDEX [IDX_InsuranceID] ON [dbo].[QuoteInsurances]
        (
            [InsuranceID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
        GO

    Thanks,
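
    One way to scope the existence check to the table, as a sketch using the same object names as above (sys.indexes is the SQL Server 2005+ counterpart of sysindexes):

        IF NOT EXISTS (SELECT 1 FROM sys.indexes
                       WHERE name = 'IDX_InsuranceID'
                         AND object_id = OBJECT_ID('dbo.QuoteInsurances'))
            CREATE NONCLUSTERED INDEX [IDX_InsuranceID]
                ON [dbo].[QuoteInsurances] ([InsuranceID] ASC);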

    Read the article

  • Is it possible to have a composite index with a list property and a sort order?

    - by npdoty
    And if not, why not? The following index always fails for me, even though I had thought I could have a sort order with a list property as long as the index didn't sort or match against any other properties.

        - kind: Foo
          properties:
          - name: location_geocells
          - name: time
            direction: desc

    If such a composite index is allowed, are there any reasons that this might be failing for me? Does the existence of other indices on the same model increase the likelihood of this failure? Does the combination of a sort order with a list property require more than N entries, where N is the number of values in the list property? (If so, how many does it require?)

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and saw that an update instruction we run periodically (every second) was quite slow - around 400ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable(
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:

        * Index1 - (LrbProcessed, LrbCount)
        * Index2 - (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2, and this time the query was very fast. It seems to me that Index2 should be faster to update; the order of the data shouldn't change, since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!
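
    One way to observe per-index write cost directly, sketched against the table above (sys.dm_db_index_operational_stats is a standard SQL Server 2005+ DMV):

        -- leaf_update_count shows how often each index's leaf rows are touched by
        -- updates; the latch-wait columns hint at contention on those pages.
        SELECT i.name AS index_name,
               s.leaf_insert_count, s.leaf_update_count, s.leaf_delete_count,
               s.page_latch_wait_count, s.page_latch_wait_in_ms
        FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('BufferTable'), NULL, NULL) AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id;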

    Read the article

  • Problem with PHP array index.

    - by user632027
    The following code:

        <?php
        $test_array = array();
        $test_array['string_index'] = "data in string index";
        $test_array[] = "data in index 0";
        $test_array[] = "data in index 1";
        $test_array[] = "data in index 2";

        foreach($test_array as $key => $val) {
            if($key != 'string_index') {
                echo $val."<br>";
            }
        }
        ?>

    gives this result:

        data in index 1
        data in index 2

    The question is - where is "data in index 0"? How do I get the elements at numeric indices 0-n? Also, if I change 'string_index' to something else which doesn't exist, it echoes everything except [0]. Please explain this to me. Thanks in advance.
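
    A likely explanation (worth verifying on your PHP version): under PHP's loose comparison rules before PHP 8, the integer key 0 compared against a non-numeric string is evaluated as 0 == 0, so 0 != 'string_index' is false and index 0 is skipped. A sketch of the same loop using strict comparison:

        <?php
        // !== compares type as well as value, so the integer key 0 is
        // no longer considered equal to the string 'string_index'.
        foreach($test_array as $key => $val) {
            if($key !== 'string_index') {
                echo $val."<br>";
            }
        }
        ?>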

    Read the article
