Search Results

Search found 1650 results on 66 pages for 'indexes'.

Page 27/66 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on tuning individual queries. In Oracle, for instance, I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations for bulk-loading tables. Do any such guides exist, so I can focus on tuning the most-run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries perform relative to other databases, so I'd have a better idea of what sorts of things to avoid.

    Update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days ... so I know about EXPLAIN, but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (E.g., are 'where var=1 or var=2' and 'where var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of gotchas might I run into when moving from MySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping not to have to try them all and find the one that works best by trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will ... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained the sorts of queries to avoid, how best to rewrite them, or how to get the planner to handle them better.

    Update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, MySQL, Oracle and Informix, and explains whether, how, and with what gotchas you can do various things. Its references section links to Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference, which covers whatever people contribute (currently some DB2, SQLite, MySQL, PostgreSQL, Firebird, Virtuoso, Oracle, MS-SQL, Ingres, and Linter).
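
    As a concrete starting point, below is a minimal sketch of the per-query tuning loop PostgreSQL does support; the table and column names are hypothetical:

        -- EXPLAIN ANALYZE shows the chosen plan plus actual row counts and timings
        EXPLAIN ANALYZE
        SELECT id, total
        FROM orders
        WHERE customer_id IN (1, 2)
        ORDER BY created_at
        LIMIT 50;

        -- planner behavior can be probed per session, e.g. to check whether
        -- an index scan would beat the sequential scan the planner picked
        SET enable_seqscan = off;
        EXPLAIN ANALYZE
        SELECT id, total FROM orders WHERE customer_id IN (1, 2);
        RESET enable_seqscan;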

    Read the article

  • Problem with displaying and downloading email attachments using the zend framework

    - by Ali
    Hi guys, I'm trying to display the emails in my inbox and their respective attachments. At the same time I also wish to be able to download the attachments, but I'm kinda stuck here. Here is the code I use to display the attachments:

        $one_message = $mail->getMessage($i);
        $one_message->id = $i;
        $one_message->UID = $mail->getUniqueId($i);
        // get the contact of this message
        $email = array_pop(extract_emails_from($one_message->from));
        $one_message->contactFrom = $contact_manager->get_person_from_email($email);
        $one_message->parts = array();
        foreach (new RecursiveIteratorIterator($mail->getMessage($i)) as $ii => $part) {
            try {
                $tpart = $part;
                // store the part of the message, indexed with what I assume is its position
                $one_message->parts[$ii] = $tpart;
                if (strtok($part->contentType, ';') == 'text/html') {
                    $b = $part->getContent();
                    $h2t->set_html($b);
                    $one_message->body = $h2t->get_text();
                }
                if (strtok($part->contentType, ';') == 'text/plain') {
                    $b = $part->getContent();
                    $one_message->body = $part->getContent();
                }
            } catch (Zend_Mail_Exception $e) {
                // ignore
            }
        }

    And I display the attachments using the following link:

        ?action=download&element=email-attachment&box=inbox&uid=<?php echo $one_message->UID;?>&id=<?php echo $one_message->id;?>&part=<?php echo $ii;?>

    The part variable is the index taken from the loop above. However, when I try to download using the exact same code as above, modified slightly:

        $id = $_GET['id'];
        $pid = $_GET['part'];
        $one_message = $mail->getMessage($id);
        foreach (new RecursiveIteratorIterator($mail->getMessage($id)) as $ii => $part) {
            if ($pid == $ii):
                $content = base64_decode($part->getContent());
                $cnt_typ = explode(";", $part->contentType);
                $name = explode("=", $cnt_typ[1]);
                $filename = $name[1]; // the file name of the attachment in the browser
                // strip quotes from the file name when sent from Yahoo mail
                $filename = str_replace('"', " ", $filename);
                $filename = trim($filename);
                header("Cache-Control: public");
                header("Content-Type: " . $cnt_typ[0]);
                header("Content-Disposition: attachment; filename=\"" . $filename . "\"");
                header("Content-Transfer-Encoding: binary");
                echo $content;
                exit;
            endif;
        }

    For some reason it doesn't work at all. I did a var_dump and noticed that in the loop with the RecursiveIteratorIterator, the indexes $ii returned are 1 - 2 - 2!! What's going on here? How can the indexes be repeated, and how can I tell one part from the others?
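
    One way to get a stable part reference, rather than the keys RecursiveIteratorIterator yields, is to address parts by position through the Zend_Mail_Part API; a minimal sketch, assuming ZF1's countParts()/getPart() methods (part numbers start at 1):

        <?php
        $message = $mail->getMessage($id);
        // enumerate parts by their 1-based position; $num stays stable,
        // so it can safely go into the download link and be looked up later
        for ($num = 1; $num <= $message->countParts(); $num++) {
            $part = $message->getPart($num);
            echo $num . ': ' . strtok($part->contentType, ';') . "\n";
        }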

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of hundreds of thousands of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id = textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for table view queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
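
    For reference, the words-entity fetch described above would look something like this minimal Core Data sketch (the "Word" entity, its "text" attribute, and the context/prefix variables are hypothetical names mirroring the SQL prototype):

        // assumes a Word entity with a string attribute "text"
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        request.entity = [NSEntityDescription entityForName:@"Word"
                                     inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:@"text BEGINSWITH %@", prefix];
        request.fetchLimit = 50; // mirrors the LIMIT 50 in the SQL prototype
        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];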

    Read the article

  • When to address integer overflow in C

    - by Yktula
    Related question: http://stackoverflow.com/questions/199333/best-way-to-detect-integer-overflow-in-c-c In C code, should integer overflow be addressed whenever integers are added? It seems like at least pointers and array indexes should be checked. When should integer overflow be checked for? When numbers are added in C without an explicit type, or printed with printf, when will overflow occur? Is there a way to automatically detect when integer arithmetic overflows?
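
    Since signed overflow is undefined behavior in C, the usual technique is to test the operands before adding; a minimal sketch:

        #include <limits.h>

        /* Returns 1 and stores a + b in *res if the sum fits in an int;
           returns 0 if the addition would overflow. */
        int add_checked(int a, int b, int *res)
        {
            if ((b > 0 && a > INT_MAX - b) ||
                (b < 0 && a < INT_MIN - b))
                return 0;
            *res = a + b;
            return 1;
        }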

    Read the article

  • Any other ways to install heroku besides gem install?

    - by pierr
    Hi, the command gem install heroku failed with the following message, and I have tried the solution here, but that failed also. So, is there any other way I can install heroku?

        WARNING: RubyGems 1.2+ index not found for: http://gems.rubyforge.org/
        RubyGems will revert to legacy indexes degrading performance.
        ERROR: could not find gem heroku locally or in a repository
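
    One hedged workaround from that era is to update RubyGems itself and add an alternative gem source before retrying (this assumes the gemcutter index is reachable from your machine):

        gem update --system
        gem sources -a http://gemcutter.org
        gem install heroku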

    Read the article

  • Why does PostgreSQL query performance drop over time, but recover when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts, and over time (a few days) we see significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuums and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information. Table and index:

        CREATE TABLE icl_contacts
        (
          id bigint NOT NULL,
          campaignfqname character varying(255) NOT NULL,
          currentstate character(16) NOT NULL,
          xmlscheduledtime character(23) NOT NULL,
          ... 25 or so other fields, most of them fixed or varying character fields ...
          CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx ON icl_contacts
          USING btree (xmlscheduledtime, currentstate, campaignfqname);

    EXPLAIN ANALYZE output:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          ->  Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
                Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table, and some of these options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
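
    If the cause turns out to be index bloat from the constant update/delete churn, one way to rebuild without REINDEX's exclusive lock is a concurrent build followed by a swap; a minimal sketch (CREATE INDEX CONCURRENTLY is available from PostgreSQL 8.2 onward):

        -- build a fresh copy of the index without blocking writes
        CREATE INDEX CONCURRENTLY icl_contacts_idx_new
            ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);

        -- swap it in; these two statements take only brief locks
        DROP INDEX icl_contacts_idx;
        ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;

        -- and keep dead tuples in check between rebuilds
        VACUUM ANALYZE icl_contacts;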

    Read the article

  • using Multi Probe LSH with LSHKIT

    - by Yijinsei
    Hi guys, I have read through the source code for mplsh, but I am still unsure how to use the indexes generated by lshkit to speed up comparing feature vectors by Euclidean distance. Do you guys have any experience with this?

    Read the article

  • Continuation - Viewing FIRST_ROWS before query completes.

    - by Frank Developer
    I have identified the query constructs my users normally use. Would it make sense for me to create composite indexes to support those constructs and provide FIRST_ROWS capability? If I migrate from SE to IDS, I will lose the ability to write low-level functions with C-ISAM calls, but gain FIRST_ROWS along with other goodies like set-reads for index scans (onconfig USE_[KO]BATCHEDREAD), optimizer directives, parallel queries, etc.?
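
    For what it's worth, here is a minimal sketch of the IDS side; the table and columns are hypothetical, and SET OPTIMIZATION FIRST_ROWS is the documented IDS statement for biasing the optimizer toward returning initial rows quickly:

        -- a composite index matching the common filter and ordering
        CREATE INDEX ix_orders_cust_date ON orders (customer_id, order_date);

        -- ask the optimizer to favor returning the first rows quickly
        SET OPTIMIZATION FIRST_ROWS;

        SELECT * FROM orders
        WHERE customer_id = 42
        ORDER BY order_date;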

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search that allows duplicates, and I have to get all the index values that match my target. I've thought about doing it this way if a duplicate is found to be in the middle. Target = G. Say there is the following sorted array:

        B, D, E, F, G, G, G, G, G, G, Q, R, S, S, Z

    I get the mid, which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get them all would be to check whether mid + 1 is the same value; if it is, keep moving mid to the right until it isn't. So it would turn out like this:

        B, D, E, F, G, G, G, G, G, G (MID), Q, R, S, S, Z

    Then I would scan from 0 to mid, count up the target matches, store their indexes into an array, and return it. That was how I was thinking of doing it if the mid was a match the first time and the duplicates happened to lie on both sides of it.

    Now, what if it isn't a match the first time? For example:

        B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z

    Then, as normal, it would grab the mid, then call binary search from first to mid - 1:

        B, D, E, F, G, G, J

    Since G is greater than F, call binary search from mid + 1 to last:

        G, G, J

    The mid is a match. Since it is a match, search from mid + 1 to last with a for loop, count up the number of matches, store the match indexes into an array, and return.

    Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm, and share hints/suggestions if you have any. The only problem I see is that if all the elements matched my target, I would basically be searching the whole array, but then again, if that were the case I would still need to get all the duplicates. Thank you.

    BTW, my instructor said we cannot use Vectors, Hash or anything else. He wants us to stay at the array level and get used to using and manipulating arrays.
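
    An alternative that avoids scanning linearly across the duplicates is to run two binary searches, one biased left for the first match and one biased right for the last; a minimal Java sketch (array and target names are made up):

        // first index with a[i] == target, or -1 if absent
        static int firstIndex(char[] a, char target) {
            int lo = 0, hi = a.length - 1, found = -1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] == target) { found = mid; hi = mid - 1; } // keep looking left
                else if (a[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return found;
        }

        // last index with a[i] == target, or -1 if absent
        static int lastIndex(char[] a, char target) {
            int lo = 0, hi = a.length - 1, found = -1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] == target) { found = mid; lo = mid + 1; } // keep looking right
                else if (a[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return found;
        }

    Every index between the two results is a match, so the answer is firstIndex..lastIndex inclusive, found in O(log n) regardless of how many duplicates there are.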

    Read the article

  • Does @JoinTable have a "table" property or not?

    - by Kent Chen
    The following is copied from Hibernate's documentation (http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#d0e2770):

        @CollectionOfElements
        @JoinTable(
            table=@Table(name="BoyFavoriteNumbers"),
            joinColumns = @JoinColumn(name="BoyId")
        )
        @Column(name="favoriteNumber", nullable=false)

    However, when I put this into practice, I found that @JoinTable has no "table" property; instead it has a "name" property to specify the table name. But I need the "table" property to specify indexes. What's going on here? I'm almost driven crazy!

    Read the article

  • STL: writing "where" operator for a vector.

    - by Arman
    Hello, I need to find the indexes in a vector that satisfy several boolean predicates. For example:

        vector<float> v;
        vector<int> idx;
        idx = where( bool_func1(v), bool_func2(v), ... );

    What is the way to declare the where function, so that it can apply several user-defined boolean functions over the vector? Thanks, Arman.
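
    One possible shape for this, sketched with std::function (C++11) so the predicates can be free functions, functors or lambdas: where takes the vector plus a list of per-element predicates and returns the indexes at which all of them hold.

        #include <vector>
        #include <functional>

        // indexes i where every predicate returns true for v[i]
        std::vector<int> where(const std::vector<float>& v,
                               const std::vector<std::function<bool(float)>>& preds)
        {
            std::vector<int> idx;
            for (int i = 0; i < static_cast<int>(v.size()); ++i) {
                bool keep = true;
                for (const auto& p : preds)
                    if (!p(v[i])) { keep = false; break; }
                if (keep) idx.push_back(i);
            }
            return idx;
        }

        // usage:
        // idx = where(v, { [](float x) { return x > 0.0f; },
        //                  [](float x) { return x < 1.0f; } });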

    Read the article

  • Can Spotlight index a MacFUSE filesystem?

    - by tdavies
    Spotlight indexes at the file level, so a file containing a complicated data structure may need to be split into a set of files for Spotlight to index it in a useful way. Can you use MacFUSE to achieve this more dynamically? Will Spotlight index a MacFUSE volume? Can MacFUSE handle the necessary per-file metadata? Can a MacFUSE process notify Spotlight when attributes of a file change?

    Read the article

  • Database Schema Management Framework/Library

    - by Karol Kolenda
    I'm looking for a database framework/library for .NET that acts as a unified layer between an application and databases. Please note that I'm not interested in querying/updating data (there are plenty of DALs for that) but rather in a framework that allows me to manage table schemas and indexes in a managed fashion (without using database-specific SQL). I'm particularly interested in a library that supports Oracle, SQL Server and PostgreSQL.

    Read the article

  • Index a set of files to search their text quickly?

    - by Ricket
    I have a unique need: I am frequently searching a large set of text files for a keyword. Right now, I open up Notepad++ and use the "Find in files" feature. It works just fine, but with the number of files involved, each search takes several minutes to complete. Is there a good program better suited for this purpose, perhaps one that indexes a set of files and then lets you search the set repeatedly and very quickly? It would greatly speed up my workflow.

    Read the article

  • Smarty html_options

    - by SeanJA
    For Smarty's html_options function, is there a way to avoid having to do this (other than not using Smarty, that is)?

        {if $smarty.post}
            {html_options name=option_1 options=$options selected=$smarty.post.option_1}
        {else}
            {html_options name=option_1 options=$options}
        {/if}

    I realize that it won't show up in the template, but it seems like bad practice to leave something undefined in the template (it also fills up my error logs with noise about undefined indexes).
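
    One way to collapse the branch, sketched with Smarty's default modifier (which substitutes a fallback when the variable is unset):

        {html_options name=option_1 options=$options selected=$smarty.post.option_1|default:''}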

    Read the article

  • once more: .htaccess - RewriteRules working, but browser address bar displaying full (unfriendly) URL

    - by audio.zoom
    Hi, I can't seem to find my problem with this really simple .htaccess file:

        Options -Indexes
        Options +FollowSymLinks
        RewriteEngine on
        RewriteBase /d/
        # drupal style url rewriting
        RewriteRule ^([^.]*)$ index.php?path=$1 [L]

    It works as intended for everything except the root URL with no slash. It rewrites and passes the hidden GET query string to my pages, so localhost/d/x/y/z becomes ?path=x/y/z and localhost/d/ becomes ?path= (blank), but... localhost/d works, yet now appears in the address bar as this ugly monster:

        http://localhost/d/?path=/Users/audiozoom/Documents/webroot/d

    What could be the problem?
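
    For comparison, Drupal's stock .htaccess guards the rewrite so requests for real files and directories pass through untouched; a sketch of that variant (untested against this exact setup):

        RewriteEngine on
        RewriteBase /d/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?path=$1 [L,QSA]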

    Read the article

  • How to create a new vector in a data frame that takes the correlation of existing vectors

    - by Milktrader
    I have a time series of two indexes, with each row representing the closing price on the same day. I'd like to go to row 30, look back over the last 30 'days', and calculate the Pearson correlation, then store that value in a new vector. Then I'd repeat the calculation for the entire time series. It is a trivial task in Excel, so I'm convinced it can be done in R; I just don't know the method to use.
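
    A minimal base-R sketch of the rolling window, assuming the two closing-price series are columns x and y of a data frame df:

        n <- nrow(df)
        df$roll_cor <- NA
        for (i in 30:n) {
          win <- (i - 29):i                      # trailing 30-row window
          df$roll_cor[i] <- cor(df$x[win], df$y[win])
        }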

    Read the article

  • bash: assign grep regex results to array

    - by Ryan
    Hello everyone, I am trying to assign regular expression results to an array inside a bash script, but I am unsure whether that's possible, or if I'm doing it entirely wrong. The below is what I want to happen, though I know my syntax is incorrect:

        indexes[4]=$(echo b5f1e7bfc2439c621353d1ce0629fb8b | grep -o '[a-f0-9]\{8\}')

    such that:

        index[1]=b5f1e7bf
        index[2]=c2439c62
        index[3]=1353d1ce
        index[4]=0629fb8b

    Any links, or advice, would be wonderful :)
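
    For what it's worth, this does work in bash: command substitution inside array parentheses lets word splitting populate the array from grep's output. A minimal sketch (note that bash arrays are zero-based):

        #!/bin/bash
        indexes=( $(echo b5f1e7bfc2439c621353d1ce0629fb8b | grep -o '[a-f0-9]\{8\}') )
        echo "${indexes[0]}"   # b5f1e7bf
        echo "${indexes[3]}"   # 0629fb8b
        echo "${#indexes[@]}"  # 4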

    Read the article

  • Any good SQL Anywhere database schema comparison tools?

    - by Lurker Indeed
    Are there any good database schema comparison tools out there that support Sybase SQL Anywhere version 10? I've seen a litany of them for SQL Server, a few for MySQL and Oracle, but nothing that supports SQL Anywhere correctly. I tried using DB Solo, but it turned all my non-unique indexes into unique ones, and I didn't see any options to change that.

    Read the article

  • Multiple or single index in Lucene?

    - by Bruno Reis
    I have to index different kinds of data (text documents, forum messages, user profile data, etc.) that should be searched together (i.e., a single search would return results across the different kinds of data). What are the advantages and disadvantages of having multiple indexes, one for each type of data? And the advantages and disadvantages of having a single index for all kinds of data? Thank you.

    Read the article

  • do's and don'ts for writing MySQL queries

    - by nik
    One thing I always wonder while writing a query is whether I'm writing the most optimized query or not. I know certain things, like:

        1) using SELECT field1, field2 instead of SELECT *
        2) giving proper indexes to the tables

    but I am sure there are more things that should be kept in mind when writing queries, since most databases only grow larger and an optimal query helps a great deal with execution time. Can you share some tips and tricks on writing queries?
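
    One habit worth adding to that list is checking the plan with EXPLAIN before shipping a query; a small sketch against a hypothetical table:

        EXPLAIN SELECT field1, field2
        FROM orders
        WHERE customer_id = 42
        ORDER BY created_at;
        -- the "key" column shows which index (if any) is used, and
        -- "rows" estimates how many rows will be examined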

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:

    - There are multiple users (distinct usr_ids).
    - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    - trans_id is unique only for very small time ranges: over time it repeats.
    - lives_remaining (for a given user) can both increase and decrease over time.

    Example:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          07:00   |       1       |   1  |    1
          09:00   |       4       |   2  |    2
          10:00   |       2       |   3  |    3
          10:00   |       1       |   2  |    4
          11:00   |       4       |   1  |    5
          11:00   |       3       |   1  |    6
          13:00   |       3       |   3  |    1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          11:00   |       3       |   1  |    6
          10:00   |       1       |   2  |    4
          13:00   |       3       |   3  |    1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes; I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        CREATE TABLE lives (time_stamp timestamp, lives_remaining integer,
                            usr_id integer, trans_id integer);
        INSERT INTO lives VALUES ('2000-01-01 07:00', 1, 1, 1);
        INSERT INTO lives VALUES ('2000-01-01 09:00', 4, 2, 2);
        INSERT INTO lives VALUES ('2000-01-01 10:00', 2, 3, 3);
        INSERT INTO lives VALUES ('2000-01-01 10:00', 1, 2, 4);
        INSERT INTO lives VALUES ('2000-01-01 11:00', 4, 1, 5);
        INSERT INTO lives VALUES ('2000-01-01 11:00', 3, 1, 6);
        INSERT INTO lives VALUES ('2000-01-01 13:00', 3, 3, 1);
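
    For what it's worth, PostgreSQL has an idiomatic construct for exactly this latest-row-per-group pattern; a sketch using DISTINCT ON (PostgreSQL-specific, not portable SQL):

        -- one row per usr_id: the one with the greatest (time_stamp, trans_id)
        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;

        -- an index matching the sort lets the planner skip a full sort
        CREATE INDEX lives_usr_ts_trans_idx
            ON lives (usr_id, time_stamp DESC, trans_id DESC);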

    Read the article

  • Determine if an Index has been used as a hint

    - by Joe Bloggs
    In SQL Server, there is the option to use query hints, e.g.

        SELECT c.ContactID
        FROM Person.Contact c WITH (INDEX(AK_Contact_rowguid))

    I am in the process of getting rid of unused indexes and was wondering how I could go about determining whether an index has been used as a query hint. Does anyone have suggestions on how I could do this? Cheers, Joe
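
    One hedged approach is to search the plan cache for statements that mention the index by name; a sketch using the dynamic management views (this only sees plans currently in cache, so long-evicted hints won't show up):

        SELECT st.text
        FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
        WHERE st.text LIKE '%AK_Contact_rowguid%';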

    Read the article
