Search Results

Search found 991 results on 40 pages for 'indexed'.

  • SQL: Is it quicker to insert sorted data into a table?

    - by AngryWhenHungry
    A table in Sybase has a unique varchar(32) column, and a few other columns. It is indexed on this column too. At regular intervals, I need to truncate it, and repopulate it with fresh data from other tables. insert into MyTable select list_of_columns from OtherTable where some_simple_conditions order by MyUniqueId If we are dealing with a few thousand rows, would it help speed up the insert if we have the order by clause for the select? If so, would this gain in time compensate for the extra time needed to order the select query? I could try this out, but currently my data set is small and the results don't say much.
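
    A minimal way to answer this empirically is to time both variants on a realistic data volume; a sketch below, assuming Sybase ASE where the set statistics switches are available (the placeholders are the ones already used in the question):

        set statistics time on
        set statistics io on

        -- Variant A: rows arrive already in index order
        truncate table MyTable
        insert into MyTable
        select list_of_columns from OtherTable
        where some_simple_conditions
        order by MyUniqueId

        -- Variant B: rows arrive unsorted
        truncate table MyTable
        insert into MyTable
        select list_of_columns from OtherTable
        where some_simple_conditions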

  • PHP - MySQL - Select runs indefinitely

    - by John
    I have three tables: listings (id, pid, beds, baths, etc., db), locations (id, pid, zip, lat, lon, etc., db) and images (id, pid, height, width, raw, etc., db). id, pid and db are indexed. db just references the MLS provider a particular item came from. In images, the raw column holds raw image data. There are about 15k rows in listings/locations and about 120k rows in images, so there are multiple rows with the same pid. When I do "select pid from listings" or "select pid from locations" the query completes successfully in about 100ms. When I do "select pid from images" it just hangs in SQLyog and never completes. I was thinking that since the raw column contains a lot of data it might be trying to select that too, but my query doesn't select that column, so I can't imagine why it's taking so long. Any idea why this is happening?
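
    A first diagnostic step might look like the sketch below (assuming the index on images.pid is literally named pid - adjust to whatever SHOW INDEX reports):

        -- What does the optimizer plan to do, and which indexes can it see?
        EXPLAIN SELECT pid FROM images;
        SHOW INDEX FROM images;

        -- If a pid index exists, forcing it keeps the scan on the narrow index pages
        -- instead of the very wide rows that carry the raw image data
        SELECT pid FROM images FORCE INDEX (pid);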

  • Send 404 when requesting index.php through .htaccess?

    - by Daniel
    I've recently refactored an existing CodeIgniter application to use URL segments instead of query strings, and I'm using a RewriteRule in .htaccess to rewrite everything to index.php: RewriteRule ^(.*)$ /index.php/$1 [L] My problem right now is that a lot of this website's pages are indexed by Google with a link to index.php. Since I made the change to use URL segments, I don't care about these Google results anymore and I want to send a 404 (no need for a 301 Moved Permanently; there have been enough changes, it'll just have to recrawl everything). To get to the point: how do I redirect requests to /index.php?whatever to a 404 page? I was thinking of rewriting to a non-existent file, which would cause Apache to send a 404. Would this be an acceptable solution? What would the RewriteRule for that look like? Edit: currently, existing Google results just cause the following error: An Error Was Encountered - The URI you submitted has disallowed characters.
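
    One possible shape for this, as a sketch: answer the old index.php?query URLs with a 404 before the main rewrite runs. The R=404 flag needs a reasonably recent Apache; on older versions, rewriting to a path that does not exist (as suggested above) has the same net effect.

        # Old-style index.php requests that still carry a query string get a 404
        RewriteCond %{QUERY_STRING} .
        RewriteRule ^index\.php$ - [R=404,L]

        # Existing rule: everything else is handled as a URL segment
        RewriteRule ^(.*)$ /index.php/$1 [L]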

  • How to create a better table structure?

    - by user160820
    For my website I have the tables Category :: id | name and Product :: id | name | categoryid. Now each category may have different sizes, and for that I have also created a table Size :: id | name | categoryid | price. The problem is that each category also has different ingredients that the customer can choose to add to his purchased product, and these ingredients have different prices for different sizes. For that I also have a table like Ingredient :: id | name | sizeid | categoryid | price. I am not sure if this structure is really normalized. Can someone please help me optimize this structure, and which indexes do I need for it?
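
    One way to normalize the size-dependent ingredient prices is to move them into a junction table keyed by (ingredient, size); a sketch under that assumption (names and types are illustrative):

        CREATE TABLE category   (id INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE product    (id INT PRIMARY KEY, name VARCHAR(100), category_id INT,
                                 FOREIGN KEY (category_id) REFERENCES category(id));
        CREATE TABLE size       (id INT PRIMARY KEY, name VARCHAR(100), category_id INT,
                                 price DECIMAL(8,2),
                                 FOREIGN KEY (category_id) REFERENCES category(id));
        CREATE TABLE ingredient (id INT PRIMARY KEY, name VARCHAR(100), category_id INT,
                                 FOREIGN KEY (category_id) REFERENCES category(id));

        -- One row per (ingredient, size): each price is stored exactly once
        CREATE TABLE ingredient_price (
            ingredient_id INT,
            size_id       INT,
            price         DECIMAL(8,2),
            PRIMARY KEY (ingredient_id, size_id),
            FOREIGN KEY (ingredient_id) REFERENCES ingredient(id),
            FOREIGN KEY (size_id)       REFERENCES size(id)
        );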

  • deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows in the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking etc.). I know in the LUW world we have ALTER TABLE NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
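
    Short of unlogged operations, deleting in committed chunks keeps log and lock pressure manageable. The sketch below uses the DELETE-from-fullselect idiom from DB2 LUW; on DB2 v8 for z/OS the usual equivalent is a cursor declared WITH HOLD plus periodic COMMITs (table and column names are placeholders):

        -- Delete at most 10,000 qualifying rows per pass, then commit;
        -- repeat until no rows qualify (SQLCODE +100)
        DELETE FROM (
            SELECT 1 FROM MYTABLE
            WHERE TS_COL < CURRENT TIMESTAMP - 90 DAYS
            FETCH FIRST 10000 ROWS ONLY
        );
        COMMIT;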

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host, and it indexes all the records in a database table - the index is later used both for the front end of the site and for backend operations. After the operation, the index is about 3-4 MB. The problem is it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below: first there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to balance the current number of items being indexed and not iterate over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, and then starts from the beginning after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.
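
    One knob worth checking, as a sketch (assuming Zend_Search_Lucene from Zend Framework 1; the path and field names are placeholders, not from the original script): buffer more documents per flush and optimize the index once at the end instead of per page.

        <?php
        require_once 'Zend/Search/Lucene.php';

        $index = Zend_Search_Lucene::create('/path/to/index');
        $index->setMaxBufferedDocs(500);   // flush every 500 docs instead of the default 10
        $index->setMergeFactor(10);

        foreach ($paginator as $row) {     // $paginator: the Zend_Paginator instance
            $doc = new Zend_Search_Lucene_Document();
            $doc->addField(Zend_Search_Lucene_Field::keyword('id', $row->id));
            $doc->addField(Zend_Search_Lucene_Field::unStored('body', $row->body));
            $index->addDocument($doc);
        }

        $index->optimize();                // single merge pass at the very end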

  • SQL Database dilemma : Optimize for Querying or Writing?

    - by Harry
    I'm working on a personal project (a search engine) and have a bit of a dilemma. At the moment it is optimized for writing data to the search index and significantly slow for search queries. The DTA (Database Engine Tuning Advisor) recommends adding a couple of indexed views in order to speed up search queries, but this is to the detriment of writing new data to the DB. It seems I can't have one without the other! This is obviously not a new problem. What is a good strategy for this issue?

  • How to add indexes to MySQL tables?

    - by Michael
    I've got a very large MySQL table with about 150,000 rows of data. Currently, when I try and run SELECT * FROM table WHERE id = '1'; the code runs fine, as the ID field is the primary index. However, recently, for a development in the project, I have to search the database by another field, for example SELECT * FROM table WHERE product_id = '1'; This field was not previously indexed; however, I've added an index, but when I try to run the above query the result is very slow. An EXPLAIN query reveals that there is no index for the product_id field even though I've already added one, and as a result the query takes anywhere from 20 minutes to 30 minutes to return a single row. EDIT: My full EXPLAIN results are:
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
    | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
    |  1 | SIMPLE      | table | ALL  | NULL          | NULL | NULL    | NULL | 157211 | Using where |
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
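
    A short checklist for this situation, as a sketch (table is the name used above; the index name is illustrative): confirm the index really exists, refresh the statistics, and re-run EXPLAIN.

        SHOW INDEX FROM `table`;                                      -- is product_id actually listed?

        ALTER TABLE `table` ADD INDEX idx_product_id (product_id);    -- (re)create it if not
        ANALYZE TABLE `table`;                                        -- refresh optimizer statistics

        EXPLAIN SELECT * FROM `table` WHERE product_id = 1;           -- should now report key = idx_product_id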

  • Lucene.Net - How to treat a space-separated phrase as a single token?

    - by Gareth D
    I've implemented a search facility using Lucene.Net. The index includes UK academic qualifications, including "A Level". I'd like users to be able to search using the phrase "A Level", but using the StandardAnalyzer the "A" is stripped out as a stop word and therefore only "Level" is indexed/searched. What's my best option to work around this? I'm guessing I need to somehow tokenise "A Level" to "A-Level" or similar by creating a custom analyzer. Is this the best approach? Note that I don't want the whole search to be a phrase query, i.e. in my search box I want the user to be able to enter "A Level" AND English Maths Physics, and this would return anything with "A Level" and any of English, Maths or Physics. Question updated to reflect this.
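
    One option is to index (and search) the qualification field with an analyzer whose stop-word set is empty, so the "A" survives; a sketch, assuming Lucene.Net 3.x where StandardAnalyzer accepts a stop-word set and PerFieldAnalyzerWrapper is available (the field name is illustrative):

        using System.Collections.Generic;
        using Lucene.Net.Analysis;
        using Lucene.Net.Analysis.Standard;
        using Version = Lucene.Net.Util.Version;

        // Empty stop-word set: "a" in "A Level" is kept as a searchable token
        var qualificationAnalyzer = new StandardAnalyzer(Version.LUCENE_30, new HashSet<string>());

        // Apply it only to the qualification field; everything else keeps default analysis
        var analyzer = new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_30));
        analyzer.AddAnalyzer("qualification", qualificationAnalyzer);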

  • Data storage advice needed: Best way to store location + time data?

    - by sobedai
    I have a project in mind that will require the majority of queries to be keyed off of lat/long as well as date + time. Initially, I was thinking of a standard RDBMS where lat, long and the datetime field are properly indexed. Then I began thinking of a document-based system where the document is essentially a timestamp and each document has lat/long within it. Each document could have n objects associated with it. I'm looking for advice on what the best type of storage engine for this sort of thing would be - which of the above ideas would be better, or whether there is something else entirely that is the ideal solution. Thanks

  • Removing items from a nested list Python

    - by johntfoster
    I'm trying to remove items from a nested list in Python. I have a nested list as follows: families = [[0, 1, 2],[0, 1, 2, 3],[0, 1, 2, 3, 4],[1, 2, 3, 4, 5],[2, 3, 4, 5, 6]] I want to remove the entries in each sublist that correspond to the indexed position of the sublist in the master list. So, for example, I need to remove 0 from the first sublist, 1 from the second sublist, etc. I am trying to use a list comprehension to do this. This is what I have tried: familiesNew = [ [ families[i][j] for j in families[i] if i !=j ] for i in range(len(families)) ] This works for range(len(families)) up to 3; however, beyond that I get IndexError: list index out of range. I'm not sure why. Can somebody give me an idea of how to do this? Preferably a one-liner (list comprehension). Thanks.
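
    The comprehension indexes each sublist with its own values, which is why it runs off the end. If the intent is to drop the value equal to the sublist's position, a sketch using enumerate avoids the indexing entirely:

        families = [[0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4],
                    [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]

        # Keep every value except the one matching the sublist's position in the master list
        familiesNew = [[value for value in sublist if value != i]
                       for i, sublist in enumerate(families)]

        print(familiesNew)
        # [[1, 2], [0, 2, 3], [0, 1, 3, 4], [1, 2, 4, 5], [2, 3, 5, 6]]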

  • Insert data into table efficiently, PostgreSQL

    - by Rowan_Gaffney
    I am new to PostgreSQL (and databases in general) and was hoping to get some pointers on improving the efficiency of the following statement. I am inserting data from one table into another, and do not want to insert duplicate values. I have a rid (unique identifier in each table) that is indexed and is the primary key. I am currently using the following statement: INSERT INTO table1 SELECT * FROM table2 WHERE rid NOT IN (SELECT rid FROM table1). As of now table1 is 200,000 records and table2 is 20,000 records. table1 is going to keep growing (probably to around 2,000,000) and table2 will stay around 20,000 records. As of now the statement takes about 15 minutes to run. I am concerned that as table1 grows this is going to take way too long. Any suggestions?
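
    A sketch of the anti-join form, which tends to hold up better than NOT IN as table1 grows, because the planner can use the rid indexes on both sides:

        INSERT INTO table1
        SELECT t2.*
        FROM   table2 t2
        WHERE  NOT EXISTS (
            SELECT 1 FROM table1 t1 WHERE t1.rid = t2.rid
        );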

  • Indexed View Index Creation Failing

    - by aBetterGamer
    I'm trying to create an index on a view and it keeps failing. I'm pretty sure it's because I'm using an alias for the column, but I'm not sure how or if I can do it this way. Below is a simplified scenario. CREATE VIEW v_contracts WITH SCHEMABINDING AS SELECT t1.contractid as 'Contract.ContractID', t2.name as 'Customer.Name' FROM contract t1 JOIN customer t2 ON t1.contractid = t2.contractid GO CREATE UNIQUE CLUSTERED INDEX v_contracts_idx ON v_contracts(t1.contractid) GO --------------------------- Incorrect syntax near '.'. CREATE UNIQUE CLUSTERED INDEX v_contracts_idx ON v_contracts(contractid) GO --------------------------- Column name 'contractid' does not exist in the target table or view. CREATE UNIQUE CLUSTERED INDEX v_contracts_idx ON v_contracts(Contract.ContractID) GO --------------------------- Incorrect syntax near '.'. If anyone knows how to create an indexed view using aliased columns, please let me know.
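
    A sketch of a version that does compile (SQL Server assumed): the dotted alias has to be bracket-quoted both in the view and in the index, schema-bound views need two-part table names, and the index references the view's output column rather than t1:

        CREATE VIEW v_contracts WITH SCHEMABINDING AS
        SELECT t1.contractid AS [Contract.ContractID],
               t2.name       AS [Customer.Name]
        FROM dbo.contract t1
        JOIN dbo.customer t2 ON t1.contractid = t2.contractid
        GO

        CREATE UNIQUE CLUSTERED INDEX v_contracts_idx
            ON v_contracts ([Contract.ContractID])
        GO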

  • StringListProperty limited to 500 char strings (Google App Engine / Python)

    - by MarcoB
    It seems that StringListProperty can only contain strings up to 500 chars each, just like StringProperty... Is there a way to store longer strings than that? I don't need them to be indexed or anything. What I would need would be something like a "TextListProperty", where each string in the list can be any length and not limited to 500 chars. Can I create a property like that? Or can you experts suggest a different approach? Perhaps I should use a plain list and pickle/unpickle it in a Blob field, or something like that? I'm a bit new to Python and GAE and I would greatly appreciate some pointers instead of spending days on trial and error...thanks!
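
    One possibility, as a sketch (classic db API assumed; the model and field names are illustrative): a ListProperty of db.Text items, which is not indexed and is not subject to the 500-character limit.

        from google.appengine.ext import db

        class Article(db.Model):
            # Each item is a db.Text, so it can be longer than 500 characters;
            # Text properties are never indexed, which matches the requirement.
            long_strings = db.ListProperty(db.Text)

        article = Article(long_strings=[db.Text(u'x' * 2000), db.Text(u'another long string')])
        article.put()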

  • mysql stored routine vs. mysql-alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) total. Our searches on the 'names' field are done through an autocomplete function in PHP. We have the table indexed but still feel that the searching is a bit sluggish (a few full seconds, versus something like Google Finance with near-instant response). We came up with two possibilities, but wanted to get more insight: Can we create a bunch (many thousands or more) of stored procedures to speed up searches, or will creating that many stored procedures bog down the DB? Is there a faster alternative to MySQL for SELECT statements (speed of inserting and updating rows isn't too important, so we can sacrifice that if necessary)? I've vaguely heard of BigTable and others that don't support JOIN statements - we need JOIN statements for some of our other queries. Thanks
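
    Before reaching for stored procedures, it may be worth confirming that the autocomplete query is an anchored prefix match, which can use an ordinary BTREE index; a sketch with illustrative table and column names:

        ALTER TABLE people ADD INDEX idx_name (name(20));   -- prefix index keeps it small

        -- 'smi%' (anchored) can use idx_name; '%smi%' cannot
        SELECT name
        FROM   people
        WHERE  name LIKE 'smi%'
        ORDER  BY name
        LIMIT  10;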

  • JMS message received at only one server

    - by BJH
    I'm having a problem with a JEE6 application running in a clustered environment using WebSphere Application Server 8. A search index is used for quick search in the UI (using Lucene), which must be re-indexed after new data arrives in the corresponding DB layer. To achieve this we're sending a JMS message to the application, and then the search index is refreshed. The problem is that the message only arrives at one of the cluster members, so only there is the search index up to date; at the other servers it remains outdated. How can I achieve that the search index gets updated on all cluster members? Can I receive the message somehow on all servers? Or is there a better way to do this?
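
    The usual fix is publish/subscribe: send the re-index notification to a Topic, where every cluster member's subscriber receives its own copy, rather than a Queue, where only one consumer gets the message. A sketch, assuming JMS 1.1 and placeholder JNDI names:

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class ReindexNotifier {
            public void notifyAllMembers() throws Exception {
                InitialContext ctx = new InitialContext();
                TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup("jms/ReindexTCF");
                Topic topic = (Topic) ctx.lookup("jms/ReindexTopic");

                TopicConnection con = tcf.createTopicConnection();
                try {
                    TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                    // Each cluster member registers its own subscriber (e.g. an MDB bound to the topic)
                    session.createPublisher(topic).publish(session.createTextMessage("reindex"));
                } finally {
                    con.close();
                }
            }
        }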

  • Combining Content Data in Google Analytics

    - by David Csonka
    When I first started one of my WordPress blogs, I had the permanent URL for each post include the date of posting. The slug format looked like this: /blog/2010/01/25/this-is-my-article/ Later on, I changed it so that the date was not included in the permanent URL, like this: /blog/this-is-my-article/ and set up a redirect plugin to make sure that users would get to the page they wanted until the site was re-indexed. In Google Analytics, when I review the stats for content, I now have multiple records for what is essentially the same page, i.e. Top Content list: 45 Pageviews - /blog/this-is-my-article/ 24 Pageviews - /blog/2010/01/25/this-is-my-article/ 33 Pageviews - /blog/some-other-article/ Is there any way to combine those records somehow?

  • Tag suggestion (not tag autocomplete)

    - by takeshin
    AJAX autocomplete is fairly simple to implement. However, I wonder how to handle smart tag suggestion like this on SO. To clarify the difference between autocomplete and suggestion: autocomplete: foo [foobar, foobaz]; suggestion: foo [barfoo, foobar, foobaz], or even better, with a 'did you mean' feature: [barfoo, foobar, foobaz, fobar, fobaz]. I suppose I need some full-text search in tags (all letters indexed, not just words). There would be no problem doing it with regex or other patterns for a limited number of tags (even client side), but how do I implement this feature for a big number of tags? Is there any particular reason (besides the URL) that the tags on SO are dash separated? What about Unicode characters in tags? I store the tags in a table with the following columns: id, tagname. My SQL query returns objects with the following fields: id, tagname, count

  • Passing Session[] and Request[] to Methods in C#

    - by NYARROW
    In C#, how do you pass the Session and Request objects to a method? I would like to use a method to parse out Session and Request parameters for an .aspx page to reduce the size of my Page_Load method. I am passing quite a few variables, and need to support both POST and GET methods. For most calls, not all variables are present, so I have to test every variable multiple ways, and the code gets long... This is what I am trying to do, but I can't seem to properly identify the Session and Request parameters (this code will not compile, because the arrays are indexed by number): static string getParam( System.Web.SessionState.HttpSessionState[] Session, System.Web.HttpRequest[] Request, string id) { string rslt = ""; try { rslt = Session[id].ToString(); } catch { try { rslt = Request[id].ToString(); } catch { } } return rslt; } From Page_Load, I want to call this method as follows to retrieve the "MODE" parameter: string rslt; rslt = getParam(Session, Request, "MODE"); Thanks!
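
    Session and Request are single objects, not arrays, so they can be passed as-is; a sketch that also swaps the nested try/catch for null checks:

        using System.Web;
        using System.Web.SessionState;

        static string GetParam(HttpSessionState session, HttpRequest request, string id)
        {
            // Prefer the session value if it is present
            if (session != null && session[id] != null)
                return session[id].ToString();

            // Request[id] looks at query string, form fields, cookies and server variables,
            // so it covers both GET and POST
            return request[id] ?? "";
        }

        // From Page_Load:
        // string rslt = GetParam(Session, Request, "MODE");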

  • Loop through non-integer rows using SQL

    - by Jesse
    I know how to accomplish my task with .NET, but I wanted to do this just in SQL. I need to loop through all of the rows where the primary key is somewhat arbitrary. It can be a number or a series of letters, and probably any number of unusual things. I know I could do something like this... DECLARE @numRows INT SET @numRows = (SELECT COUNT(pkField) FROM myTable) DECLARE @I INT SET @I = 1 WHILE (@I <= @numRows) BEGIN --Do what I need to here SET @I = @I + 1 END ...if my rows were indexed in a contiguous fashion, but I don't know enough about SQL to do that if they're not. I keep coming across the use of "cursors," but I come across just as much reading about avoiding cursors. I found this SO solution but I'm not sure if that's what I'm needing? I appreciate any ideas.
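
    If a row-by-row loop really is needed, a cursor works regardless of what the primary key looks like (T-SQL assumed); a set-based statement is usually faster when one fits the task:

        DECLARE @pk VARCHAR(100);

        DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT pkField FROM myTable;

        OPEN row_cursor;
        FETCH NEXT FROM row_cursor INTO @pk;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Do what I need to here, using @pk
            FETCH NEXT FROM row_cursor INTO @pk;
        END

        CLOSE row_cursor;
        DEALLOCATE row_cursor;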

  • return only one document for each filter defined in the query

    - by Garytxo
    Hi all, in one of my latest projects I use Solr 1.4 for searching products. However, I have run into a slight problem, and I am not sure it is possible to solve using Solr. All products are indexed by "country" and "category", and the "id", "class" and "description" are stored values. I have now been asked to extract a sample list of the products that we have for a given "category", returning ONLY ONE product for each country where the product is available. In my current implementation, I have a dismax query to get a list of all the countries that correspond to the category, then I call Solr again to extract all products for each country, limiting the number of rows by the size of the country list found in the previous query. The problem I have with this implementation is that I cannot be certain that I have one product for each country in the list. Therefore, would anyone know if it is possible to tell Solr that you want only one product per country in the query? Any guidance would be useful.
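
    Result grouping does exactly this - one query, at most one document per country. It is built into Solr from 3.3 onwards and is available for 1.4 only via the SOLR-236 field-collapsing patch; a sketch of the request, with the field names from the question and a placeholder category value:

        http://localhost:8983/solr/select
            ?q=category:YOUR_CATEGORY
            &group=true
            &group.field=country
            &group.limit=1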

  • Joomla 2.5 disable and remove Smart Cache

    - by WooDzu
    I am maintaining a Joomla 2.5 based magazine website with 3-4 new, long articles every day. Smart Search was enabled by default, and now I've got a few "finder" tables full of indexed phrases and terms. I wonder if there are any disadvantages if I: disable the Smart Search plugin and remove these 'finder' tables completely. We're using a search field, which works fine, but I'm not sure what's going to happen if I disable the plugin and remove these tables. Will it then search for phrases in the Joomla content tables, or simply break without the 'finder' tables? Has anyone tried this before?

  • Sphinx without using an auto_increment id

    - by squeeks
    I am currently planning to create a big database (2+ million rows) with a variety of data from separate sources. I would like to avoid structuring the database around auto_increment ids to help prevent sync issues with replication, and also because each item inserted will have an alphanumeric product code that is guaranteed to be unique - it seems to make more sense to use that instead. I am looking at a search engine to index this database, with Sphinx looking rather appealing due to its design around indexing relational databases. However, various tutorials and documentation seem to show database designs depending on an auto_increment field in one form or another, and there is a rather bold statement in the documentation saying that document ids must be 32/64-bit integers only or things break. Is there a way to have a database indexed by Sphinx without auto_increment fields as the id?

  • SQL Server - how to determine if indexes aren't being used?

    - by rwmnau
    I have a high-demand transactional database that I think is over-indexed. Originally, it didn't have any indexes at all, so adding some for common processes made a huge difference. However, over time, we've created indexes to speed up individual queries, and some of the most popular tables have 10-15 different indexes on them, and in some cases, the indexes are only slightly different from each other, or are the same columns in a different order. Is there a straightforward way to watch database activity and tell if any indexes are not hit anymore, or what their usage percentage is? I'm concerned that indexes were created to speed up either a single daily/weekly query, or even a query that's not being run anymore, but the index still has to be kept up to date every time the data changes. In the case of the high-traffic tables, that's a dozen times/second, and I want to eliminate indexes that are weighing down data updates while providing only marginal improvement.
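
    SQL Server keeps exactly these counters in sys.dm_db_index_usage_stats (reset at every service restart); a sketch that lists each index's read activity against the update cost of maintaining it:

        SELECT o.name AS table_name,
               i.name AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups,    -- reads the index served
               s.user_updates                                  -- times it had to be maintained
        FROM   sys.dm_db_index_usage_stats s
        JOIN   sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
        JOIN   sys.objects o ON o.object_id = i.object_id
        WHERE  s.database_id = DB_ID()
        ORDER  BY s.user_seeks + s.user_scans + s.user_lookups;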

  • With Google's #! mess, what effect would a redirect on the converted URL have?

    - by Ne0nx3r0
    So Google takes: http://www.mysite.com/mypage/#!pageState and converts it to: http://www.mysite.com/mypage/?_escaped_fragment_=pageState ...So... would it be fair game to redirect that with a 301 status to something like: http://www.mysite.com/mypage/pagestate/ and then return an HTML snapshot? My thought is that if you have an existing HTML structure, and you just want to add AJAX as a progressive enhancement, this would be a fair way to do it, provided Google just skipped over _escaped_fragment_ and indexed the redirected URL. Then your AJAX links are configured by JavaScript, and underneath them are the regular links that go to your regular site structure. So when a user comes in on a static URL (i.e. http://www.mysite.com/mypage/pagestate/ ), the first link he clicks takes him to the AJAX interface if he has JavaScript, and then it's all AJAX. On a side note, does anyone know if Yahoo/MSN are on board with this 'spec' (loosely used)? I can't seem to find anything that says for sure.
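
    A sketch of what that 301 could look like in mod_rewrite, assuming each page state maps one-to-one onto a path segment (the pattern and target are illustrative):

        # /mypage/?_escaped_fragment_=pageState  ->  301  ->  /mypage/pageState/
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
        RewriteRule ^mypage/$ /mypage/%1/? [R=301,L]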
