Search Results

Search found 1650 results on 66 pages for 'indexes'.

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it gathers a list of batches and recursively calls itself in order to iterate over the batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me: SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds. I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But the recursive calls should share the same spid, right?) In any case, my process is just sitting there, dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait hours while my server sits dead pretending to finish a supposed rollback. Is there some way to tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?
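
    For what it's worth, the rollback's progress can be watched from another connection; a minimal sketch, assuming SQL Server 2005+ DMVs and taking spid 75 from the error message above:

        -- Re-issuing KILL with STATUSONLY only reports progress; it has no other effect
        KILL 75 WITH STATUSONLY;

        -- The same figures are exposed through a DMV
        SELECT session_id, command, status,
               percent_complete,                              -- rollback progress
               estimated_completion_time / 1000.0 AS seconds_remaining
        FROM sys.dm_exec_requests
        WHERE session_id = 75;

    A session that sits at 0% for a long time is often waiting on something external (a linked server, an OLE DB provider, or the client) rather than actually rolling back, in which case only a service restart clears it.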

  • Building a case for Solr

    - by Midhat
    Our product consists of multiple applications, all using Lucene. Two of the applications I am involved with have Lucene indexes of about 3 GB and 12 GB. Another team is building an application for which they estimate the Lucene index size to be close to 1 terabyte. New documents are added to the indexes approximately every 15 days. We do not have any apparent performance issues with the current applications. So my questions are: should we be using Solr now? When should one stop using Lucene and graduate to Solr? Are there any disadvantages/problems to using Solr? The client applications are written in ASP.NET, but I assume they will be able to use a Solr server via SolrNet.

  • Strange: Planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:

      - PGSQL 8.4.2, Linux
      - I make use of table inheritance
      - Each table contains 3 million rows
      - Indexes on the joining columns are set
      - Table statistics (ANALYZE, VACUUM ANALYZE) are up to date
      - The only table used is "node", with various partitioned sub-tables
      - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        ---------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        ---------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help with this confusing situation? thx, R.
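
    One observation that is visible in both plans: the recursive CTE's row estimate is off by six orders of magnitude (rows=6000701 estimated vs. rows=4 actual), since the planner has no statistics for a CTE's work table, so it prices the join for millions of outer rows, and the hash plan wins on paper. Rather than disabling hash joins globally, one usual workaround is nudging the cost model; a sketch with illustrative values, to be tuned per machine:

        -- Make index access look cheaper relative to sequential I/O
        SET random_page_cost = 2.0;
        -- Tell the planner how much of the database is likely cached
        SET effective_cache_size = '4GB';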

  • SQL Server 2008 Table Maintenance - Rebuild, Reorganize, Update Stats, Check Integrity etc HELP!

    - by Albert
    I'm migrating a ~15GB database from SQL Server 2005 to a new server running SQL Server 2008, and along with that I need to create all the new Maintenance Plans. I can take care of all the backup stuff, but the table maintenance baffles me a bit. Does anyone have any input on how often I should perform (or how often you perform) the following tasks?

      - Check Database Integrity
      - Rebuild Indexes
      - Reorganize Indexes
      - Update Statistics
      - Shrink Database?

    Am I missing anything? Again, if you can share how often you run these tasks, that would be great, and/or share any general information about your approach to table maintenance. Lastly, does it matter what order I run these tasks in (when setting up a job)?
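
    One common fragmentation-driven approach, sketched under the usual 5%/30% rules of thumb (the index and table names here are placeholders):

        -- List fragmented indexes; REORGANIZE above ~5%, REBUILD above ~30%
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 5;

        ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;  -- light fragmentation
        ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;     -- heavy fragmentation

    Two interactions worth knowing when ordering the tasks: REBUILD already updates the rebuilt index's statistics, so updating statistics right before a rebuild is wasted work, and Shrink Database refragments indexes, which is why shrinking is usually left out of scheduled plans.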

  • JLayeredPane versus Container layering

    - by Gili
    JLayeredPane allows one to stack multiple Components on top of one another using JLayeredPane.add(Component, Integer). Components in higher "layers" display on top of Components in lower "layers". Container.add(Component, int) provides a similar mechanism, whereby Components with lower indexes display on top of Components with higher indexes. Please note that the first mechanism uses Integer and the second uses int. Also, one renders high values on top of low ones, and the other does the opposite. Do not mix the two :) My question is: what's the point of using JLayeredPane when Container already provides the same mechanism? Does one layer components better than the other? UPDATE: There is also Container.setComponentZOrder(Component, int) to consider.

  • How to check if a Statistics is auto-created in a SQL Server 2000 DB using T-SQL?

    - by The Shaper
    Hi all. A while back I had to come up with a way to clean up all indexes and user-created statistics on some tables in a SQL Server 2005 database. After a few attempts it worked, but now I've got to have it working on SQL Server 2000 databases as well. For SQL Server 2005, I used

        SELECT Name FROM sys.stats
        WHERE object_id = OBJECT_ID(@tableName) AND auto_created = 0

    to fetch the statistics that were user-created. However, SQL Server 2000 doesn't have a sys.stats view. I managed to fetch the indexes and statistics in a distinguishable way from the sysindexes table, but I just couldn't figure out what the SQL 2000 equivalent of sys.stats.auto_created is. Any pointers? BTW: T-SQL please.
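
    A sketch of one possible SQL 2000 equivalent, assuming the INDEXPROPERTY function's documented 'IsStatistics' and 'IsAutoStatistics' properties (auto-created statistics also conventionally carry names starting with _WA_Sys_):

        -- In SQL 2000, statistics live in sysindexes alongside real indexes
        SELECT name
        FROM sysindexes
        WHERE id = OBJECT_ID(@tableName)
          AND INDEXPROPERTY(id, name, 'IsStatistics') = 1      -- a statistics set, not an index
          AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 0  -- not created by auto create statistics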

  • SQL Server indexed view

    - by Jose
    OK, I'm confused about SQL Server indexed views (using 2008). I've got an indexed view called AssignmentDetail. When I look at the execution plan for select * from AssignmentDetail, it shows the execution plan of all the underlying indexes of all the other tables that the indexed view is supposed to abstract away. I would think that the execution plan would simply be a clustered index scan of PK_AssignmentDetail (the name of the clustered index on my view), but it isn't. There seems to be no performance gain with this indexed view. What am I supposed to do? Should I also create a non-clustered index with all of the columns so that it doesn't have to hit all the other indexes? Any insight would be greatly appreciated.
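
    One known SQL Server behavior worth ruling out here: outside Enterprise edition the optimizer never matches an indexed view automatically, and even on Enterprise it may decide to expand the view into its base tables. The NOEXPAND hint forces the view's own index to be used:

        -- Read the view's clustered index directly instead of
        -- expanding the view to its base tables
        SELECT *
        FROM dbo.AssignmentDetail WITH (NOEXPAND);

    If the plan turns into the expected clustered index scan with the hint, the view and its index are fine and it is purely a plan-selection issue.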

  • SSIS - Parallel Execution of Tasks - How efficient is it?

    - by Randy Minder
    I am building an SSIS package that will contain dozens of Sequence tasks. Each Sequence task will contain three tasks: one to truncate a destination table and remove its indexes, another to import data from a source table, and a third to add the indexes back to the destination table. My question is this: I currently have nine of these Sequence tasks built, and none is dependent on any of the others. When I execute the package, SSIS seems to do a pretty good job of determining which tasks in which Sequence to execute, which, by the way, appears to be quite random. As I continue adding more Sequences, should I attempt to be smarter about how SSIS should execute them, or is SSIS smart enough to do it itself? Thanks.

  • What is an index in MySQL?

    - by Eric
    http://i.imgur.com/JdsUK.jpg I created a table like the picture above. What are the "Indexes"? Primary key? Unique? It works well without setting indexes. What do they do? Why do I need them? Also, I set all string fields to TEXT because I didn't know how many characters I would need. Is this a good idea? I don't see any difference. Thanks!
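
    In short, an index is a lookup structure that lets MySQL find matching rows without scanning the whole table; on a small table you won't notice it, which is why everything "works well" without one. A minimal sketch with made-up table and column names:

        -- Hypothetical table for illustration
        CREATE TABLE users (
            id    INT AUTO_INCREMENT PRIMARY KEY,  -- PRIMARY: unique and NOT NULL, one per table
            email VARCHAR(255),                    -- UNIQUE would additionally forbid duplicates
            bio   TEXT
        );

        -- A plain (non-unique) index to speed up lookups by email
        CREATE INDEX idx_users_email ON users (email);

        -- EXPLAIN shows whether the index is used (type=ref)
        -- or the whole table is scanned (type=ALL)
        EXPLAIN SELECT * FROM users WHERE email = 'eric@example.com';

    On the TEXT question: for short strings VARCHAR(n) is usually the better choice, since TEXT columns cannot have a DEFAULT value and can only be indexed with an explicit prefix length.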

  • Dynamic mass hosting using mod_wsgi

    - by Virgil Balibanu
    Hi, I am trying to configure an Apache server using mod_wsgi for dynamic mass hosting. Each user will have his own instance of a Python application located in /mnt/data/www/domains/[user_name], and there will be a vhost.map telling me which domain maps to which user's directory (the directory will have the same name as the user). What I do not know is how to write the WSGIScriptAliasMatch line so that it also takes the path from the vhost.map file. What I want to do is something like this: I can have different domains on my server, like www.virgilbalibanu.com or virgil.balibanu.com and flaviu.balibanu.com, where each domain belongs to a different user, the user name having no necessary connection to the domain name. I want to do this because when a user makes an account he receives something like virgil.mydomain.com, but if he has his own domain he can later change it to that, for example www.virgilbalibanu.ro, and this way I would only need to change the line in the vhost.map file. So far I have something like this:

        # all media is taken from here
        Alias /media/ /mnt/data/www/iitcms/media/

        RewriteEngine on
        RewriteMap lowercase int:tolower
        # define the map file
        RewriteMap vhost txt:/mnt/data/www/domains/vhost.map

        # this does not work either, can't say why
        RewriteCond %{REQUEST_URI} ^/uploads/
        RewriteCond ${lowercase:%{SERVER_NAME}} ^(.+)$
        RewriteCond ${vhost:%1} ^(/.*)$
        RewriteRule ^/(.*)$ %1/media/uploads/$1

        # ---> this I have no idea how I could do
        WSGIScriptAliasMatch ^([^/]+) /mnt/data/www/domains/$1/apache/django.wsgi

        <Directory "/mnt/data/www/domains">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/apache>
            AllowOverride None
            Options FollowSymLinks ExecCGI
            Order deny,allow
            Allow from all
        </DirectoryMatch>

        <Directory /mnt/data/www/iitcms/media>
            AllowOverride None
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

        <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/media/uploads>
            AllowOverride None
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    I know the part I did with mod_rewrite doesn't work (I couldn't really say why not), but that's not as important so far; I am curious how I could write the WSGIScriptAliasMatch line so as to accomplish my objective. I would be very grateful for any help, or any other ideas on how I can deal with this. It would also be great if I managed to get each site to run in wsgi daemon mode, though that is not as important. Thanks, Virgil

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for the primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes, and the queries use those non-primary-key columns in their WHERE conditions. So I don't really see any benefit to having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables that had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables without a primary key.
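
    A few Oracle-specific points for reference: foreign keys can only reference a primary key or unique constraint, and primary-key-based mechanisms such as materialized-view replication identify rows by PK by default, much like the SQL Server replication case mentioned above. A constraint can also be added later without recreating the table (names here are illustrative):

        -- Promote the existing sequence-populated column to a primary key
        ALTER TABLE accounts ADD CONSTRAINT accounts_pk PRIMARY KEY (id);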

  • Connection to DB2 in Python

    - by Mestika
    Hi, I'm trying to create a database connection in a Python script to my DB2 database. When the connection is made I have to run some different SQL statements. I googled the problem and have read the ibm_db API (http://code.google.com/p/ibm-db/wiki/APIs) but just can't seem to get it right. Here is what I have got so far:

        import sys
        import getopt
        import timeit
        import multiprocessing
        import random
        import os
        import re
        import ibm_db
        import time
        from string import maketrans

        query_str = None
        conn = ibm_db.pconnect("dsn=write", "usrname", "secret")
        query_stmt = ibm_db.prepare(conn, query_str)
        ibm_db.execute(query_stmt, "SELECT COUNT(*) FROM accounts")
        result = ibm_db.fetch_assoc()
        print result
        status = ibm_db.close(conn)

    but I get an error. I really have tried everything (or, not everything, but pretty damn close) and I can't get it to work. I just need to make an automated test Python script that can test different queries with different indexes and so on, and for that I need to create and remove indexes along the way. I hope someone has a solution or knows about some example code out there that I can download and study. Thanks Mestika

  • How do I add/remove items to a ListView in virtual mode?

    - by Eric
    If I'm using a ListView in virtual mode then, as I understand it, the list view only keeps track of a small number of items in the list; as the user scrolls, it dynamically retrieves the items it needs to show from the virtual list. But what if an item is added to or removed from the master list? If an item is added/removed outside the range of indexes being shown by the list view, then I would assume the list view will show the added/missing items when the user scrolls to that index. Is this correct? But what if an item is added/removed within the range of indexes the user is currently viewing? How do I trigger the list view to refresh the items it is currently viewing so it shows the new/missing items?

  • Performance - User defined query / filter to search data

    - by Cagatay Kalan
    What is the best way to design a system where users can create their own criteria to search data? By "design" I mean data storage, the data access layer, and the search structure. We will actually refactor an existing application which is written in C# and ASP.NET, and we don't want to change the infrastructure. Our main issue is performance; we use MSSQL and DevExpress to build queries. Some queries run for 4-5 minutes, even though all the columns included in the queries have indexes. When I check the queries, I see that DevExpress builds too many "exists" clauses, and I'm not happy with that, because I suspect that some of these queries skip some indexes. What are the alternatives to DevExpress? NHibernate or Entity Framework? Can we build a dynamic criteria system and store the criteria in the database with either of them? And also, do we need any alternative storage, like a Lucene index or an OLAP database?

  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch data:

        SELECT id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated
        FROM emails

    If I include Body and HTML in the SELECT, the time is even worse. The indexes are:

      - id: unique, clustered
      - From: non-unique, non-clustered
      - To: non-unique, non-clustered

    The table currently has 180,000+ records, and there might be 100,000 new records each month, so this will become even slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?
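
    A sketch of the vertical split the question ends with (the new table and constraint names are illustrative): the goal is to keep the wide nvarchar(max) columns out of the frequently scanned table, so listing queries read far fewer pages.

        -- Move the wide LOB columns to a side table keyed by the email id
        CREATE TABLE dbo.EmailBodies (
            emailid INT NOT NULL PRIMARY KEY
                REFERENCES dbo.Emails(id),
            Subject NVARCHAR(MAX) NULL,
            Body    NVARCHAR(MAX) NULL,
            HTML    NVARCHAR(MAX) NULL
        );

        -- The listing query then touches only narrow rows
        SELECT id, datecreated, UID, [From], [To], AttachmentCount, Dated
        FROM dbo.Emails;

    Note that the REFERENCES clause assumes a primary key or unique index on Emails.id, which the posted script does not declare; the "unique clustered" index mentioned above would need to exist for the foreign key to work.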

  • PowerShell function that creates an array from input

    - by user2971548
    I'm quite new to PowerShell and am working on a little project with functions. What I'm trying to do is create a function that takes 2 arguments. The first argument ($Item1) decides the size of the array; the second argument ($Item2) decides the value of the indexes. So if I write addToArray 10 5, I need the function to create an array with 10 indexes and the value 5 in each of them. The second argument would also have to accept "text" as a value. This is my code so far:

        $testArray = @();
        $indexSize = 0;

        function addToArray($Item1, $Item2) {
            while ($indexSize -ne $Item1) {
                $indexSize++;
            }
            Write-Host "###";
            while ($Item2 -ne $indexSize) {
                $script:testArray += $Item2;
                $Item2++;
            }
        }

    Any help is appreciated. Kind regards, Dennis Berntsson

  • Install CakePHP on Mac OS X: Apache problems

    - by ed209
    First-time Cake user, and I'm having real Apache problems. For some reason the .htaccess is trying to find File does not exist: /Library/WebServer/Documents/Users but there is no such directory as Users. I have also tried setting up the following:

    /etc/apache2/extra/httpd-vhosts.conf:

        <VirtualHost *:80>
            DocumentRoot "/Users/username/Sites/mysite/app/webroot"
            ServerName mysite.dev
            ServerAlias www.mysite.dev mysite.dev *.mysite.dev
            <Directory "/Users/username/Sites/mysite/app/webroot">
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    /etc/hosts:

        127.0.0.1 mysite.dev

    /etc/apache2/users/username.conf:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymlinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    That also hasn't worked, but fails with a different error: Failed opening required 'cake/libs/cache/file.php'. Although, I'd rather not use virtual hosts and just run it off localhost.

  • Java: Selected rows' indexes do not change when I sort them!

    - by adrian7
    Hello, I have a JTable on which I called the method table1.setAutoCreateRowSorter(true);, and that works well. But I also have a method in my JFrame class which is fired when I push a button. It gets the selected rows' indexes using this code: int selectedRows[] = this.table1.getSelectedRows(); and displays an edit window for the first row in the selected interval. The problem is that if I don't click on the column headers (I mean, I don't sort them at all), my method works perfectly. But when I sort the rows, the indexes of the rows don't seem to change at all, resulting in an edit window for the old row which was initially in that position before any sorting. I am using JDK 6. Could anyone give me a tip?

  • Query takes a long time comparing non-numeric data of two tables; how to optimize it?

    - by Muhammad Kashif Nadeem
    I have two DBs. The first DB has a CallsRecords table and the second DB has a Contacts table; both are on SQL Server 2005. Below is a sample of the two tables. The Contact table has 150,000 records; CallsRecords has 75,000 records.

    Indexes on CallsRecords:

      - CallFrom
      - CallTo
      - PickUp

    Index on Contacts:

      - PhoneNumber

    I am using this query to find matches, but it takes more than 7 minutes:

        SELECT *
        FROM CallsRecords r
        INNER JOIN Contact c ON r.CallFrom = c.PhoneNumber
                             OR r.CallTo = c.PhoneNumber
                             OR r.PickUp = c.PhoneNumber

    In the estimated execution plan, the inner join costs 95%. Any help to optimize it?
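
    A commonly suggested rewrite worth trying (a sketch, not a guaranteed fix for this schema): OR conditions in a join predicate usually prevent index seeks, whereas splitting the join into one seekable join per column lets each use its own index; UNION then removes rows that matched on more than one column:

        SELECT r.*
        FROM CallsRecords r
        INNER JOIN Contact c ON r.CallFrom = c.PhoneNumber
        UNION
        SELECT r.*
        FROM CallsRecords r
        INNER JOIN Contact c ON r.CallTo = c.PhoneNumber
        UNION
        SELECT r.*
        FROM CallsRecords r
        INNER JOIN Contact c ON r.PickUp = c.PhoneNumber;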

  • Is the time cost constant when bulk inserting data into an indexed table?

    - by SiLent SoNG
    I have created an archive table which will store data for selecting only. Daily, a program will transfer a batch of records into the archive table. Several columns are indexed, while others are not. I am concerned with the time cost per batch insertion:

      - 1st batch insertion: N1
      - 2nd batch insertion: N2
      - 3rd batch insertion: N3

    The question is: will N1, N2, and N3 be roughly the same, or will N3 > N2 > N1? That is, will the time cost be constant or incremental, given the existence of several indexes? All indexes are non-clustered. The archive table structure is this:

        create table document (
            doc_id int unsigned primary key,
            owner_id int,          -- indexed
            title smalltext,
            country char(2),
            year year(4),
            time datetime,
            key ix_owner (owner_id)
        )
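
    To the question itself: maintaining a B-tree index costs O(log n) per inserted row, so the per-batch time does creep upward (N3 >= N2 >= N1), though slowly; on large tables the dominant effect is usually cache misses while updating index pages. A sketch of the usual MySQL mitigation for batch loads, assuming a MyISAM table (DISABLE KEYS is a MyISAM feature and only affects non-unique indexes; for InnoDB, the common trick is inserting the batch in primary-key order):

        ALTER TABLE document DISABLE KEYS;  -- stop maintaining non-unique indexes
        -- ... bulk INSERTs of the daily batch ...
        ALTER TABLE document ENABLE KEYS;   -- rebuild those indexes in one pass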

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then put into an AND condition. Suppose the user provided non-empty criteria for column1, column4, and column8; the query would be:

        select *
        from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns? Ten indexes with 1 column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4, and 8 in the case above)? If my understanding is correct, a single index of 10 columns would only work effectively if all 10 columns are in the condition. Open to any suggestions here; additionally, the rowcount of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
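
    One caveat before choosing an index layout (a general Postgres fact, plus a version-dependent sketch): with a leading wildcard, LIKE '%...%' cannot use ordinary btree indexes at all, whether single-column or composite, so all of the layouts above may end up as sequential scans regardless. Trigram indexes from the pg_trgm contrib module are the usual answer for contains-style search; GIN support for LIKE only exists in newer releases, so check your version:

        CREATE EXTENSION pg_trgm;  -- on older releases, run the contrib SQL script instead

        -- One trigram index per searchable column
        CREATE INDEX the_table_column1_trgm ON the_table USING gin (column1 gin_trgm_ops);
        CREATE INDEX the_table_column4_trgm ON the_table USING gin (column4 gin_trgm_ops);

    That said, at 20-30K rows a sequential scan is often fast enough; EXPLAIN ANALYZE with realistic data is the cheapest way to decide.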

  • Fastest way to get an Excel Range of Rows

    - by gayan
    In a VSTO C# project I want to get a range of rows from a set of row indexes. The row indexes can be, for example, "7,8,9,12,14". Then I want the range of rows "7:9,12,14". I now do this:

        Range rng1 = sheet.get_Range("A7:A9,A12,A14", Type.Missing);
        rng1 = rng1.EntireRow;

    But it's a bit inefficient due to the string handling in the range specification. sheet.Rows["7:9"] works, but I can't write sheet.Rows["7:9,12,14"] // fails

  • SQL Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if I can find the tools to get to my answer. OK, we have a SQL Server 2008 database that, for the last 4 days, has had moments where it becomes unresponsive for specific queries for 5-20 minutes. E.g., the following queries, run simultaneously in different query windows, have the following results:

        SELECT * FROM Assignment  -- hangs indefinitely
        SELECT * FROM Invoice     -- works fine

    Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know:

      1) The same query will either hang indefinitely or run normally.
      2) In Activity Monitor's processes tab there are normally around 80-100 processes running.

    I think that what's happening is:

      1) A user updates a table.
      2) This causes one or more indexes to get updated.
      3) Another user issues a SELECT while the index is updating.

    Is there a way I can figure out why, at a specific moment in time, SQL Server is unresponsive for a specific query?
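
    A starting point for catching the blocker in the act, assuming standard SQL Server 2008 DMVs (run it from a separate connection while a query is hanging):

        -- Who is blocked, what they wait on, and which session holds them up
        SELECT r.session_id,
               r.blocking_session_id,   -- nonzero means blocked by that session
               r.wait_type,
               r.wait_time,
               t.text AS running_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;

    If this confirms writers blocking readers, enabling READ_COMMITTED_SNAPSHOT is one common remedy, since readers then see row versions instead of waiting on the writers' locks.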
