Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 13/45

  • Indexing in a Crystal Report

    - by Arpan
    Hi, I am new to ASP.NET programming. I want to have an index page in my Crystal Report. Can somebody provide me with an example of how to create an index page in a Crystal Report? I will really appreciate your help.

    Read the article

  • Spatial Index for Rectangles With Fast Insert

    - by TheCloudlessSky
    Hello, I'm looking for a data structure that provides indexing for rectangles. I need the insert algorithm to be as fast as possible, since the rectangles will be moving around the screen (think of dragging a rectangle with your mouse to a new position). I've looked into R-trees, R+-trees, kd-trees, quadtrees and B-trees, but from my understanding inserts are usually slow. I'd prefer inserts with sub-linear time complexity, so maybe someone can prove me wrong about one of the listed data structures. I should be able to query the data structure for which rectangles are at point (x, y) or which rectangles intersect rectangle (x, y, width, height). EDIT: The reason I want insert to be so fast is that if you think of a rectangle being moved around the screen, it will have to be removed and then re-inserted. Thanks!
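
    A structure that often fits this access pattern is a uniform grid (spatial hash): inserting and removing a rectangle only touches the cells it overlaps, which is effectively constant time for screen-scale rectangles. Below is a minimal Python sketch of the idea; the RectGrid class and its method names are made up for illustration, not taken from any library.

        from collections import defaultdict

        class RectGrid:
            """Uniform-grid spatial hash for axis-aligned rectangles."""

            def __init__(self, cell_size=64):
                self.cell_size = cell_size
                self.cells = defaultdict(set)  # (cx, cy) -> ids of rects overlapping that cell
                self.rects = {}                # id -> (x, y, w, h)

            def _cells_for(self, x, y, w, h):
                c = self.cell_size
                for cx in range(int(x // c), int((x + w) // c) + 1):
                    for cy in range(int(y // c), int((y + h) // c) + 1):
                        yield cx, cy

            def insert(self, rect_id, x, y, w, h):
                self.rects[rect_id] = (x, y, w, h)
                for cell in self._cells_for(x, y, w, h):
                    self.cells[cell].add(rect_id)

            def remove(self, rect_id):
                x, y, w, h = self.rects.pop(rect_id)
                for cell in self._cells_for(x, y, w, h):
                    self.cells[cell].discard(rect_id)

            def move(self, rect_id, x, y, w, h):
                # Dragging a rectangle = remove + re-insert, both cheap.
                self.remove(rect_id)
                self.insert(rect_id, x, y, w, h)

            def query_point(self, px, py):
                cell = (int(px // self.cell_size), int(py // self.cell_size))
                return [r for r in self.cells.get(cell, ())
                        if self._contains(self.rects[r], px, py)]

            def query_rect(self, qx, qy, qw, qh):
                hits = set()
                for cell in self._cells_for(qx, qy, qw, qh):
                    for r in self.cells.get(cell, ()):
                        x, y, w, h = self.rects[r]
                        if x < qx + qw and qx < x + w and y < qy + qh and qy < y + h:
                            hits.add(r)
                return hits

            @staticmethod
            def _contains(rect, px, py):
                x, y, w, h = rect
                return x <= px <= x + w and y <= py <= y + h

    The main tuning knob is cell_size: pick it on the order of a typical rectangle so each one overlaps only a handful of cells; point and rectangle queries then inspect only those cells plus a final exact-overlap check.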

    Read the article

  • SQL Server: One large persisted computed column for Fulltext Indexing

    - by Alex
    It appears to me to be the easiest, most straightforward solution, but please correct me if I'm wrong: instead of having a fulltext index on all the individual columns of a table, isn't it better to just generate one single wide computed column and run the fulltext index against that only? It seems to me that this gets rid of all the issues of having multiple columns, including that I can't search for "x AND y", as this will not match a row with "x" present in column 1 and "y" present in column 2. Any counterarguments?

    Read the article

  • Re-indexing table; update with from

    - by David Thorisson
    The query says it all; I can't find the right syntax without using a for..next loop:

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree w2
            WHERE w2.ParentID = @ParentID
        )
        WHERE Webtree.BranchID = w2.BranchID

    Read the article

  • Newbie question - MySQL index size

    - by Tommy
    I've just started investigating how I should optimize my database. Indexing seems to be a good idea, so I want to index a VARCHAR column; the engine is MyISAM. From what I've read, I understand that an index is limited to a size of 1000 bytes, and a VARCHAR character is 3 bytes in size. Does this mean that if I want to index a VARCHAR column with 50 rows, I need an index prefix of 6 characters? I came to that number by dividing 1000 by the row count of 50, then by the byte size per character, which is 3: 1000/50/3 = 6.66. It seems a little complicated, so I'm just wondering if I'm thinking about this right? It seems weird to me that you'd only be able to index 333 rows in a VARCHAR column using a prefix of 1 character.

    Read the article

  • How to best deal with photos passed to IFilter?

    - by sharptooth
    I'm implementing an IFilter for indexing image formats. One problem is photos: many users have tons of photos, photos are huge, and loading them and searching them for text is time-consuming. Yes, sometimes people use cameras instead of scanners for digitizing documents, but the potential problems IMO far outweigh the possibility of encountering a document digitized with a photo camera. So my implementation will not extract text from photos at all. What should the IFilter do once it detects that a given file is a photo: indicate an error or return empty text?

    Read the article

  • MATLAB noninteger step indexing

    - by rlbond
    So, I have a vector:

        k = 1:100;

    And I want to take 19 elements from it, which are roughly equally spaced. So I write this:

        m = k(1:(99/18):end);

    This works great, except for a tiny problem:

        Warning: Integer operands are required for colon operator when used as index
        m = 1 7 12 18 23 29 34 40 45 51 56 62 67 73 78 84 89 95 100

    Now, I understand why this comes up, but I'd like to get rid of that warning. Is there a "right" way to do this without a warning?
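
    The usual cure for the warning is to round the fractional positions to integers before using them as indices. A sketch of the same idea in Python/numpy, purely to illustrate the rounding step:

        import numpy as np

        k = np.arange(1, 101)  # analogue of k = 1:100

        # 19 roughly equally spaced positions, rounded to valid integer indices.
        idx = np.round(np.linspace(0, len(k) - 1, 19)).astype(int)
        m = k[idx]  # no warning: the indices are integers

        print(m)  # 19 values from 1 to 100, roughly evenly spaced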

    Read the article

  • C array assignment and indexing with the same variable

    - by Todd R.
    Hello! I apologize if this has been posted before. Compiling under two separate compilers, BCC 5.5 and LCC, yields 0 and 1, respectively.

        #include <stdio.h>

        int main(void)
        {
            int i = 0, array[2] = {0, 0};
            array[i] = ++i;
            printf("%d\n", array[1]);
        }

    Am I to assume that not all compilers evaluate the expressions in an array assignment from right to left?

    Read the article

  • Resuming MySQL indexing

    - by gmemon
    Hello all, I have been building an index on a 200-million-row table for almost 14 hours. Due to resource over-consumption on the machine (because of a separate incident), the machine crashed. Clearly, I want to avoid another 14 hours to reconstruct the index. Is there a way that I can resume the construction of the index from the point (or slightly before it) where the machine crashed? I can see the temporary files that were created. Thanks

    Read the article

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during upgrade on a MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX', but it may also be on a database without this filegroup, in which case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE WHEN EXISTS (SELECT groupname FROM sysfilegroups
                                        WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC )
        ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails, as it appears a filegroup cannot be specified by variable. Is this possible?

    Read the article

  • Indexing SET field

    - by Dienow
    I have two entities, A and B, related by a many-to-many relation. Entity A can be related to up to 100 B entities; entity B can be related to up to 10,000 A entities. I need a quick way to select, for example, 30 A entities that have a relation with specified B entities, filtered and sorted by different attributes. Here is how I see the ideal solution: I put all the information I know about the A entities, including their relations with B entities, into a single row (a special table with a SET field), then add all the necessary indexes. The problem is that you can't use an index while querying by a SET field. What should I do? I can replace the database with something different if that will help.

    Read the article

  • Indexing large DBs with Lucene/PHP

    - by thebluefox
    Afternoon chaps, I'm trying to index a 1.7-million-row table with the Zend port of Lucene. On small tests of a few thousand rows it worked perfectly, but as soon as I try to up the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million. I've also tried making the script run a few thousand, refresh, and then run the next few thousand, but doing this clears the index each time. Any ideas, guys? Thanks :)

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like The man went to the store, the annotations might look like:

        Word    POS   Chunk   NER
        ====    ===   =====   ========
        The     DT    NP      Person
        man     NN    NP      Person
        went    VBD   VP      -
        to      TO    PP      -
        the     DT    NP      Location
        store   NN    NP      Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words arrived at, followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
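
    One common approach, sketched here rather than prescribed, is to index every annotation layer as a separate term at the same token position (a position increment of zero, in Lucene terms), so ordinary phrase queries can mix layers. A minimal Python illustration of the term stream such an analyzer might emit; the Word=/POS=/NER= term syntax is borrowed from the question, and the function name is made up:

        # Emit (position, term) pairs: each word contributes one term per
        # annotation layer, all sharing the same position, so a phrase
        # query can match across layers.
        def layered_terms(rows):
            for position, (word, pos_tag, chunk, ner) in enumerate(rows):
                yield position, "Word=" + word
                yield position, "POS=" + pos_tag
                yield position, "Chunk=" + chunk
                if ner != "-":
                    yield position, "NER=" + ner

        sentence = [
            ("The", "DT", "NP", "Person"),
            ("man", "NN", "NP", "Person"),
            ("went", "VBD", "VP", "-"),
            ("to", "TO", "PP", "-"),
            ("the", "DT", "NP", "Location"),
            ("store", "NN", "NP", "Location"),
        ]

        for position, term in layered_terms(sentence):
            print(position, term)

    With terms laid out this way, the example query "NER=Person Word=arrived Word=at NER=Location" becomes an ordinary phrase query over four consecutive positions.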

    Read the article

  • Compact MATLAB matrix indexing notation

    - by AnnaR
    I've got an n×k sized matrix, containing k numbers per row. I want to use these k numbers as indexes into a k-dimensional matrix. Is there any compact way of doing so in MATLAB, or must I use a for loop? This is what I want to do (in MATLAB pseudo-code), but in a more MATLAB-ish way:

        for row = 1:1:n
            finalTable(row) = kDimensionalMatrix(indexmatrix(row, 1),...
                indexmatrix(row, 2), ..., indexmatrix(row, k))
        end
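
    For comparison, the same row-of-indices lookup is a one-liner in Python/numpy (an illustrative analogue, not MATLAB): each column of the index matrix becomes the index vector for one axis.

        import numpy as np

        n, k = 4, 3
        kdim = np.arange(5 ** k).reshape(5, 5, 5)           # a k-dimensional array
        indexmatrix = np.random.randint(0, 5, size=(n, k))  # one k-tuple of indices per row

        # tuple(indexmatrix.T) yields k index vectors, one per axis.
        final_table = kdim[tuple(indexmatrix.T)]
        print(final_table.shape)  # (n,)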

    Read the article

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that a database B-tree is used for append-only operations and that a database is stored in a single B-tree (besides the per-view B-trees). So I guess the data items that are appended to the database file are revisions of documents, not the whole documents:

        +---------|### ... |
        |
        +------|###|------+ ... ---+
        |      |          |        |
        +------+ +------+ +------+     +------+
        | doc1 | | doc2 | | doc1 | ... | doc1 |
        | rev1 | | rev1 | | rev2 |     | rev7 |
        +------+ +------+ +------+     +------+

    Is this true? If so, how is the current revision of a document determined based on such a B-tree? Doesn't it mean that CouchDB needs a separate "view" database for indexing the current revisions of documents to preserve O(log n) access? And wouldn't that lead to race conditions while building such an index? (As far as I know, CouchDB uses no write locks.)

    Read the article

  • Why do Lua arrays (tables) start at 1 instead of 0?

    - by AraK
    Hi, I don't understand the rationale behind this part of Lua's design: why does indexing start at 1? I have read (as many others did) this great paper. It seems to me a strange corner of a language that is otherwise very pleasant to learn and program in. Don't get me wrong, Lua is just great, but there has to be an explanation somewhere. Most of what I found (on the web) just says the index starts at 1. Full stop. It would be very interesting to read what its designers said about the subject. Note that I am a "very" beginner in Lua; I hope I am not missing something obvious about tables.

    Read the article

  • Always-indexed MySQL indexing/searching replacements for InnoDB?

    - by Chad Johnson
    I am using InnoDB for a MySQL table, and obviously queries using LIKE and RLIKE/REGEXP can take a lot of time. I've tried Sphinx, and it works great, except that I have to re-index content at intervals. I can re-index every minute, but I am wondering if there is either 1) a setting in Sphinx to keep records always indexed, or 2) other software besides Sphinx that will keep records always indexed. I want it so that immediately upon inserting or updating a record, the index is updated.

    Read the article

  • Facebook user_id as MongoDB BSON ObjectId?

    - by MattDiPasquale
    I'm rebuilding Lovers on Facebook with Sinatra & Redis. I like Redis because it doesn't have the long (12-byte) BSON ObjectIds, and I am storing sets of Facebook user_ids for each user. The sets are requests_sent, requests_received, & relationships, and they all contain Facebook user ids. I'm thinking of switching to MongoDB because I want to use its geospatial indexing. If I do, I'd want to use the Facebook user_ids as the _id field, because I want the sets to be small and I want the JSON responses to be small. But is the BSON ObjectId better (more efficient for MongoDB) to use than just an integer (the Facebook user_id)?
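
    For what it's worth, MongoDB accepts any unique value as _id, integers included, so the scheme itself is valid. A small pymongo sketch (the database, collection, ids and coordinates are made up, and a local mongod is assumed):

        from pymongo import MongoClient, GEO2D

        users = MongoClient().testdb.users

        # Use the integer Facebook user id directly as the primary key.
        users.insert_one({"_id": 503456789, "loc": [-73.97, 40.77]})

        # Geospatial indexing works alongside a custom _id.
        users.create_index([("loc", GEO2D)])
        nearby = users.find({"loc": {"$near": [-73.98, 40.76]}}).limit(10)
        print([u["_id"] for u in nearby])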

    Read the article

  • Integrate Lucene or any other search product with SQL Server 2005

    - by HBACHARYA
    Hi, I need to use full-text search with SQL Server 2005, and I have explored its built-in approach (SQL Server full-text indexing), but it seems less powerful. I have also looked at the features of Lucene. Now my questions: Is it possible to integrate Lucene and SQL Server in any way? 1. Can my T-SQL queries use a Lucene index for returning results (maybe using a CLR-based function internally)? 2. How do I update the Lucene index while the data in the tables is being updated? 3. What could the overall architecture look like? 4. Are there any commercial products available which provide this kind of support? Thanks, HB

    Read the article

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on Google, or by searching here. So, I'm asking you: how should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry.

    Details: To clarify what I'm talking about, I'll give a simple example: say I want to implement a Heaviside step function, i.e. a function that acts on the real axis, is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0.

    Implementation

    Masking: The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector.

        from numpy import *

        def step_mask(x, limit=+1):
            """Heaviside step-function.

            y = 0 if x < 0
            y = 1 if x > 0
            See below for x == 0.

            Arguments:
            x      Evaluate the function at these points.
            limit  Which limit at x == 0?
                   limit > 0:  y = 1
                   limit == 0: y = 0.5
                   limit < 0:  y = 0

            Return:
            The values corresponding to x.
            """
            b = broadcast(x, limit)
            out = zeros(b.shape)
            out[x > 0] = 1
            mask = (limit > 0) & (x == 0)
            out[mask] = 1
            mask = (limit == 0) & (x == 0)
            out[mask] = 0.5
            mask = (limit < 0) & (x == 0)
            out[mask] = 0
            return out

    List Comprehension: The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions.

        def step_comprehension(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            out.flat = [ ( 1   if x_ > 0 else
                         ( 0   if x_ < 0 else
                         ( 1   if l_ > 0 else
                         ( 0.5 if l_ == 0 else
                         ( 0 )))))
                         for x_, l_ in b ]
            return out

    For Loop: And finally, the most naive way is a for loop. It's probably the most readable option, but Python for loops are anything but fast, and hence a really bad idea in numerics.

        def step_for(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            for i, (x_, l_) in enumerate(b):
                if x_ > 0:
                    out[i] = 1
                elif x_ < 0:
                    out[i] = 0
                elif l_ > 0:
                    out[i] = 1
                elif l_ < 0:
                    out[i] = 0
                else:
                    out[i] = 0.5
            return out

    Test: First of all, a brief test to see if the output is correct.

        >>> x = array([-1, -0.1, 0, 0.1, 1])
        >>> step_mask(x, +1)
        array([ 0.,  0.,  1.,  1.,  1.])
        >>> step_mask(x, 0)
        array([ 0. ,  0. ,  0.5,  1. ,  1. ])
        >>> step_mask(x, -1)
        array([ 0.,  0.,  0.,  1.,  1.])

    It is correct, and the other two functions give the same output.

    Performance: How about efficiency? These are the timings:

        In [45]: xl = linspace(-2, 2, 500001)

        In [46]: %timeit step_mask(xl)
        10 loops, best of 3: 19.5 ms per loop

        In [47]: %timeit step_comprehension(xl)
        1 loops, best of 3: 1.17 s per loop

        In [48]: %timeit step_for(xl)
        1 loops, best of 3: 1.15 s per loop

    The masked version performs best, as expected. However, I'm surprised that the comprehension is on the same level as the for loop.

    Zero Rank Arrays: But 0-rank arrays pose a problem. Sometimes you want to call a function with scalar input, and preferably not have to worry about wrapping all scalars in at-least-1-D arrays.

        >>> step_mask(1)
        Traceback (most recent call last):
          File "<ipython-input-50-91c06aa4487b>", line 1, in <module>
            step_mask(1)
          File "script.py", line 22, in step_mask
            out[x>0] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_for(1)
        Traceback (most recent call last):
          File "<ipython-input-51-4e0de4fcb197>", line 1, in <module>
            step_for(1)
          File "script.py", line 55, in step_for
            out[i] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_comprehension(1)
        array(1.0)

    Only the list comprehension can handle 0-rank arrays; the other two versions would need special-case handling for them. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on input as arbitrary as possible; who knows which parameters I'll want to iterate over at some point.

    Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar, then ..." special cases? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & Co., or implementing directly in C. However, the performance of the masked version above is sufficient for the moment, and for the moment I would like to keep things as simple as possible.
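
    For what it's worth, one well-known way to sidestep the 0-d indexing problem is to build the result with np.where, which broadcasts over all of its arguments and happily returns a 0-d array for scalar input. A minimal sketch of that variant (matching the tests above, though not necessarily as fast as the masked version):

        import numpy as np

        def step_where(x, limit=+1):
            # np.where broadcasts x and limit together and never indexes,
            # so 0-d (scalar) input works the same as arrays.
            at_zero = np.where(np.greater(limit, 0), 1.0,
                               np.where(np.equal(limit, 0), 0.5, 0.0))
            x = np.asarray(x)
            return np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))

        print(step_where(1))                        # 1.0, as a 0-d array
        print(step_where(np.array([-1, 0, 1]), 0))  # [ 0.   0.5  1. ]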

    Read the article
