Search Results

Search found 40870 results on 1635 pages for 'database design'.


  • mysql subquery strangely slow

    - by aviv
     I have a query that selects from a sub-query. Although the two queries below look almost the same, the second one runs much slower:

         SELECT user.id, user.first_name -- user.*
         FROM user
         WHERE user.id IN
             (SELECT ref_id FROM education
              WHERE ref_type = 'user'
                AND education.institute_id = '58'
                AND education.institute_type = '1');

     This query takes 1.2s. EXPLAIN on this query gives:

         row 1: select_type=PRIMARY, table=user, type=index, key=first_name, key_len=152, rows=141192, Extra=Using where; Using index
         row 2: select_type=DEPENDENT SUBQUERY, table=education, type=index_subquery, possible_keys=ref_type,ref_id,institute_id,institute_type,ref_type_2, key=ref_id, key_len=4, ref=func, rows=1, Extra=Using where

     The second query:

         SELECT -- user.id
                -- user.first_name
                user.*
         FROM user
         WHERE user.id IN
             (SELECT ref_id FROM education
              WHERE ref_type = 'user'
                AND education.institute_id = '58'
                AND education.institute_type = '1');

     takes 45s to run, with this EXPLAIN:

         row 1: select_type=PRIMARY, table=user, type=ALL, rows=141192, Extra=Using where
         row 2: select_type=DEPENDENT SUBQUERY, table=education, type=index_subquery, possible_keys=ref_type,ref_id,institute_id,institute_type,ref_type_2, key=ref_id, key_len=4, ref=func, rows=1, Extra=Using where

     Why is it slower when I select only indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
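
     In both plans the subquery is DEPENDENT, i.e. re-evaluated once per user row, which is a known weakness of IN (SELECT ...) in older MySQL versions. A common workaround is to rewrite the lookup as a join against a derived table; this is only a sketch using the table and column names from the question:

         SELECT u.id, u.first_name
         FROM user AS u
         JOIN (SELECT DISTINCT ref_id
               FROM education
               WHERE ref_type = 'user'
                 AND institute_id = '58'
                 AND institute_type = '1') AS e
           ON e.ref_id = u.id;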

    Read the article

  • Cursor returns zero rows from query to table

    - by brockoli
     I've created an SQLiteDatabase in my app and populated it with some data. I can connect to my AVD with a terminal, and when I issue select * from articles; I get a list of all the rows in my table and everything looks fine. However, when I query the table in code, the cursor I get back has my table's columns but zero rows of data. Here is my code:

         mDbHelper.open();
         Cursor articles = mDbHelper.fetchAllArticles();
         startManagingCursor(articles);
         Cursor feeds = mDbHelper.fetchAllFeeds();
         startManagingCursor(feeds);
         mDbHelper.close();

         int titleColumn = articles.getColumnIndex("title");
         int feedIdColumn = articles.getColumnIndex("feed_id");
         int feedTitleColumn = feeds.getColumnIndex("title");

         /* Check if our result was valid. */
         if (articles != null) {
             int count = articles.getCount();
             /* Check if at least one result was returned. */
             if (articles.moveToFirst()) {

     In the code above, the Cursor articles comes back with my 4 columns, but getCount() returns zero, even though I can see hundreds of rows of data in that table from the command line. Any idea what I might be doing wrong here? Also, here is my code for fetchAllArticles:

         public Cursor fetchAllArticles() {
             return mDb.query(ARTICLES_TABLE, new String[] {ARTICLE_KEY_ROWID,
                     ARTICLE_KEY_FEED_ID, ARTICLE_KEY_TITLE, ARTICLE_KEY_URL},
                     null, null, null, null, null);
         }

     Rob W.
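
     For reference, that query() call builds the equivalent of the following SQL; the exact table and column names depend on the constants ARTICLES_TABLE, ARTICLE_KEY_ROWID, etc., so this mapping is an assumption:

         -- assuming ARTICLES_TABLE = 'articles' and the usual Android column names
         SELECT _id, feed_id, title, url FROM articles;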

    Read the article

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
     Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

         interface JDBC {
             public long getId();
             public void setId(long id);
             public void retrieve();
             public void setDataSource(DataSource ds);
         }

     When retrieve() is called, the object populates itself by issuing hand-written SQL queries against the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in that case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually. The reason for that was performance. We have very strict requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL. The problems we are facing now that make us think about an ORM are: most of the paths in this code are well-tested and stable, but some rarely-used code is prone to result set and connection leaks that are very hard to detect; we are currently squeezing out some additional performance by adding caching to our persistence layer, and maintaining the cached objects manually in this setup is a huge pain; and supporting this code when the DB schema changes is a big problem. I am looking for advice on what the best alternative for us could be. As far as I know, ORMs have advanced in the last 5 years, so there may now be one that offers acceptable performance. As I see it, we need to address these points: find some way to reuse at least some of the written SQL to express mappings; be able to issue native SQL queries without having to decompose their results manually (i.e. avoid manual rs.getInt(42) calls, as they are very sensitive to schema changes); add a non-intrusive caching layer; and keep the performance figures. Is there any ORM framework you could recommend with regard to that?

    Read the article

  • Django User "per project" group assignation

    - by Ben G
     Hi, here's my problem: my site has users, who can create projects and access other users' projects. Each project can assign different rights to users. So I could have, in project A, user "John" in group "manager", and in project B, user "John" in group "worker". How could I use the Django User authentication model to do that? From an SQL point of view, what I would like is to be able to add "project_id" to the primary key of the "auth_user_groups" table. I don't think the profile is of any help here. Any advice? UPDATE: "worker" and "manager" are just two examples of the permission groups (or "roles") that my application defines. There will be more in the future; e.g. I will probably also have "admin", "reporting", etc.
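
     At the database level, one way to express this is a dedicated membership table keyed by user, project, and group; this is only a sketch with assumed table and column names, not Django's actual schema:

         CREATE TABLE project_membership (
             user_id    INTEGER NOT NULL REFERENCES auth_user(id),
             project_id INTEGER NOT NULL REFERENCES project(id),
             group_id   INTEGER NOT NULL REFERENCES auth_group(id),
             PRIMARY KEY (user_id, project_id, group_id)
         );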

    Read the article

  • What is faster in MySQL? WHERE subquery = 0 or NOT IN list

    - by Nicolas Manzini
     Hello, I was wondering which is better in MySQL. I have a SELECT query that excludes every entry associated with a banned user ID. Currently I have a subquery in the WHERE clause that goes like:

         AND (SELECT COUNT(*)
              FROM TheBlackListTable
              WHERE userID = userList.ID
                AND blackListedID = :userID2) = 0

     which accepts every userID not present in TheBlackListTable. Would it be faster to first retrieve all banned IDs in a previous request and replace the clause above with:

         AND creatorID NOT IN listOfBannedID

     Thank you!
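
     A third option is to let the database do the exclusion with NOT EXISTS, which avoids both the per-row COUNT(*) and the extra round trip; a sketch using the names from the question:

         AND NOT EXISTS (SELECT 1
                         FROM TheBlackListTable b
                         WHERE b.userID = userList.ID
                           AND b.blackListedID = :userID2)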

    Read the article

  • SQL Server: pad a string column value to 5 chars with leading zeros

    - by mrp
     Scenario: I have a table1 (col1 char(5)). A value in col1 may be '001', '01', or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5 characters long, padded with leading '0's. The technique I applied:

         select right(('00000' + col1), 5) from table1;

     I can't see any reason why this shouldn't work, but it doesn't. Can anyone help me achieve the desired result?
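
     Because col1 is declared char(5), short values are stored padded with trailing spaces, so '00000' + col1 ends in those spaces and RIGHT(..., 5) just returns them. Trimming before concatenating should give the intended result; a sketch:

         select right('00000' + rtrim(col1), 5) from table1;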

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
     In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

         WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

     and

         WHERE FieldA = @P1 AND FieldB = @P2

     P1 and P2 are parameters entered in the UI or coming from external data sources. FieldA is an int and highly non-unique, meaning there are only two, three, or four different values in a table of, say, 20,000 rows. FieldB is a varchar(20) and "almost" unique; there will be only very few rows where FieldB has the same value. FieldC is a varchar(15) and also highly distinct, but not as much as FieldB. FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index). I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index on (FieldB, FieldC, FieldA) (or with FieldC before FieldB?), or better two indexes: (FieldB, FieldA) and (FieldC, FieldA)? Or are there even other and better options? What's the best way, and why? Thank you for suggestions in advance!
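
     For reference, the two-index option would be written like this (a sketch; the table and index names are made up):

         CREATE NONCLUSTERED INDEX IX_MyTable_B_A ON dbo.MyTable (FieldB, FieldA);
         CREATE NONCLUSTERED INDEX IX_MyTable_C_A ON dbo.MyTable (FieldC, FieldA);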

    Read the article

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
     Hi, I have a text file of a few gigabytes in this format:

         {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

     Each entry is one line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example; but even here I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to an SQL DB (probably SQLite, because it will be a desktop app), but I'd like to ask whether better solutions exist.
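
     If the file is imported into SQLite, a schema along these lines would support both lookups; this is a sketch with assumed column names, keeping the raw JSON alongside the extracted fields:

         CREATE TABLE actions (
             user_ip     TEXT NOT NULL,
             action_type TEXT NOT NULL,
             some_key    TEXT,          -- extracted from action_data at import time
             action_data TEXT           -- the raw JSON line, kept for everything else
         );
         CREATE INDEX idx_actions_ip ON actions (user_ip);
         CREATE INDEX idx_actions_ip_type_key ON actions (user_ip, action_type, some_key);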

    Read the article

  • What is the most efficient procedure for implementing a sortable ajax list on the backend?

    - by HenryL
     The most common method is to assign a sequential order field to each item in the list and run an update that maintains the sequence on every Ajax sort operation. Unfortunately, this requires an update to every item in the list each time someone sorts. That is fine for small lists, but what's the best way to implement sorting for larger lists that are constantly updated? I am looking for something that minimizes DB I/O.
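
     One technique that reduces each move to a single-row write is to space the sort keys out and give a moved item the midpoint of its new neighbors; a sketch with assumed table and column names:

         -- items are created with sort keys 100, 200, 300, ...
         -- moving item 42 between neighbors whose keys are 200 and 300:
         UPDATE list_items SET sort_key = 250 WHERE id = 42;
         -- only when no integer gap is left between two neighbors does the
         -- list (or a local stretch of it) need renumbering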

    Read the article

  • Date range intersection in SQL

    - by Will
     I have a table where each row has a start and a stop date-time. These can be arbitrarily short or long spans. I want to query the total duration of the intersection of all rows with a given start and stop date-time pair. How can you do this in MySQL? Or do you have to select the rows that intersect the query's start and stop times, then calculate the actual overlap of each row client-side and sum it there?
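
     The per-row overlap can be computed server-side with LEAST and GREATEST, so the sum never has to leave MySQL; a sketch assuming columns start_at and stop_at and query bounds in @q_start and @q_stop:

         SELECT SUM(TIMESTAMPDIFF(SECOND,
                                  GREATEST(start_at, @q_start),
                                  LEAST(stop_at, @q_stop))) AS overlap_seconds
         FROM spans
         WHERE start_at < @q_stop
           AND stop_at > @q_start;  -- only rows that actually intersect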

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
     I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

         A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

     As data this could look like:

         A[aPK, SomeFields]:
         1, Foo
         2, Bar

         B[bPK, aFK, SomeFields]:
         1, 1, FooData1
         2, 1, FooData2
         1, 2, BarData1
         2, 2, BarData2

         C[cPK, bFK, aFK, SomeFields]:
         1, 1, 1, FooData1More
         2, 1, 1, FooData1More
         1, 2, 1, FooData2More
         2, 2, 1, FooData2More
         1, 1, 2, BarData1More
         2, 1, 2, BarData1More
         1, 2, 2, BarData2More
         2, 2, 2, BarData2More

     I have this running in an MS SQL DBMS, and I'm looking for the best way to increment the left-most column in each table when a new tuple is added. I can't use the auto-increment identity specification option, as it has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field) + 1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
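
     For what it's worth, the usual way to make the MAX + 1 approach safe under concurrency in SQL Server is to take key-range locks inside the inserting transaction itself; a sketch against table C from the question (the parameter names are assumed):

         BEGIN TRANSACTION;
         INSERT INTO C (cPK, bFK, aFK, SomeFields)
         SELECT COALESCE(MAX(cPK), 0) + 1, @bFK, @aFK, @SomeFields
         FROM C WITH (UPDLOCK, HOLDLOCK)   -- blocks competing inserts for this (bFK, aFK)
         WHERE bFK = @bFK AND aFK = @aFK;
         COMMIT;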

    Read the article

  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
     OK, here's a query that I am running right now on a table that has 45,000 records and is 65 MB in size... and is just about to get bigger and bigger (so I've got to think about future performance here as well):

         SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
         FROM payments p
         WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
           AND completed > 0
           AND tm_completed IS NOT NULL
           AND member_id NOT IN
               (SELECT p2.member_id FROM payments p2
                WHERE p2.completed = 1
                  AND p2.tm_completed < '2009-05-01'
                  AND p2.tm_completed IS NOT NULL
                GROUP BY p2.member_id)

     And as you might or might not imagine, it chokes the MySQL server to a standstill. What it does is simply pull the number of new users who signed up and have at least one "completed" payment, where tm_completed is not empty (as it is only populated for completed payments), and (via the embedded SELECT) where the member has never had a "completed" payment before, meaning he's a new member (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use fewer resources and stop bringing my MySQL server to its knees? Am I missing any info needed to clarify this further? Let me know... EDIT: Here are the indexes already on that table:

         PRIMARY       PRIMARY  46757  payment_id
         member_id     INDEX    23378  member_id
         payer_id      INDEX    11689  payer_id
         coupon_id     INDEX    1      coupon_id
         tm_added      INDEX    46757  tm_added, product_id
         tm_completed  INDEX    46757  tm_completed, product_id
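
     In older MySQL versions a NOT IN subquery like this is often executed once per outer row; rewriting the exclusion as an anti-join usually behaves better. A sketch of the same logic with LEFT JOIN ... IS NULL, using the columns from the question:

         SELECT COUNT(p.payment_id) AS signup_count, SUM(p.amount) AS signup_amount
         FROM payments p
         LEFT JOIN payments prior
                ON prior.member_id = p.member_id
               AND prior.completed = 1
               AND prior.tm_completed < '2009-05-01'
         WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
           AND p.completed > 0
           AND prior.member_id IS NULL;  -- keep only members with no earlier completed payment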

    Read the article

  • Deploying and hosting scala in the cloud?

    - by TiansHUo
     I am starting a web app with scalability as one of my top priorities. What would be the benefits of a Cassandra + Scala + Lift stack versus the traditional LAMP stack on the cloud? From what I've read (please correct me if I'm wrong), the cloud itself is scalable. I have never seen anyone deploy Scala on the cloud before. Is it worth the effort to learn the platform? Is it ready for production use?

    Read the article

  • SQL query duration is longer for smaller dataset?

    - by entens
     I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results. The quick query (averages 4 seconds to return):

         SELECT * FROM Payroll
         WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010'
         ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

     The long query (averages 1 minute 20 seconds to return):

         SELECT * FROM Payroll
         WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
         ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

     I could simply increase the timeout on the SqlCommand, but that doesn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?
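
     Without seeing the plans, one thing worth checking is whether Payroll has an index on LINEDATE at all; if both queries scan the table, the timings are at the mercy of cached pages and stale statistics. A sketch of a supporting index (the index name is made up):

         CREATE NONCLUSTERED INDEX IX_Payroll_LINEDATE ON Payroll (LINEDATE);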

    Read the article

  • T-SQL MERGE - finding out which action it took

    - by IanC
     I need to know if a MERGE statement performed an INSERT. In my scenario, the insert affects either 0 or 1 rows. Test code:

         DECLARE @t table (C1 int, C2 int)
         DECLARE @C1 INT, @C2 INT
         set @c1 = 1
         set @c2 = 1
         MERGE @t as tgt
         USING (SELECT @C1, @C2) AS src (C1, C2)
         ON (tgt.C1 = src.C1)
         WHEN MATCHED AND tgt.C2 != src.C2 THEN
             UPDATE SET tgt.C2 = src.C2
         WHEN NOT MATCHED BY TARGET THEN
             INSERT VALUES (src.C1, src.C2)
         OUTPUT deleted.*, $action, inserted.*;
         SELECT inserted.*

     The last line doesn't compile (there is no scope, unlike in a trigger). I can't get access to $action or the output. Actually, I don't want any output metadata. How can I do this?
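
     One way to find out which action the MERGE took, without returning a metadata result set to the client, is to OUTPUT $action into a table variable and inspect it afterwards; a sketch based on the test code above:

         DECLARE @changes TABLE (change nvarchar(10));
         MERGE @t AS tgt
         USING (SELECT @C1, @C2) AS src (C1, C2)
         ON (tgt.C1 = src.C1)
         WHEN MATCHED AND tgt.C2 != src.C2 THEN UPDATE SET tgt.C2 = src.C2
         WHEN NOT MATCHED BY TARGET THEN INSERT VALUES (src.C1, src.C2)
         OUTPUT $action INTO @changes;
         IF EXISTS (SELECT 1 FROM @changes WHERE change = 'INSERT')
             PRINT 'the MERGE performed an insert';  -- react to the insert here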

    Read the article

  • Decimal data type displays the scale part as zeros

    - by Wael Dalloul
     I have a decimal field in a SQL Server 2005 table: Price decimal(18, 4). If I write 12 it is converted to 12.0000; if I write 12.33 it is converted to 12.3300. It always pads with zeros to the right of the decimal point up to the scale (4). In SQL Server 2000 it did not behave like this: if I put in 12.5 it was stored as 12.5, not as 12.5000 the way SQL Server 2005 stores it. My question is: how do I stop SQL Server 2005 from putting zeros to the right of the decimal point?
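
     A decimal(18, 4) column always carries exactly four decimal digits, so the trailing zeros are a presentation issue rather than a storage one; they have to be stripped when the value is read. One sketch is to cast for display only (with the usual caveat that float is approximate):

         SELECT CAST(Price AS float) AS PriceForDisplay
         FROM MyTable;  -- the table name here is assumed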

    Read the article

  • In SQL Server, how to convert a varchar that has the format (nnn:nn:nn)

    - by user1688917
     I have a varchar column holding a time accumulation in this format, and I want to convert it to an integer so I can SUM it and get the total time for a group. The first part, which may be 1, 2, 3, 4, or even five digits, is the accumulated hours, followed by a colon; then comes the accumulated minutes, and last the accumulated seconds (2 digits each). How can I convert this to an integer in one query, if possible?
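
     One way to split the value inside a single query is PARSENAME, after turning the colons into dots; a sketch that converts the value to total seconds (the column and table names are assumed):

         SELECT CAST(PARSENAME(REPLACE(acc_time, ':', '.'), 3) AS int) * 3600
              + CAST(PARSENAME(REPLACE(acc_time, ':', '.'), 2) AS int) * 60
              + CAST(PARSENAME(REPLACE(acc_time, ':', '.'), 1) AS int) AS total_seconds
         FROM MyTable;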

    Read the article

  • Retrieving all objects in code upfront for performance reasons

    - by ming yeow
     How do you folks retrieve all objects in code up front? I figure you can increase performance if you bundle the model calls together. This matters even more if your DB cannot keep everything in memory:

         def hitDBSeparately {
             get X users ...code
             get Y users ...code
             get Z users ...code
         }

     versus:

         def hitDBInSingleCall {
             get X+Y+Z users
             code for X
             code for Y...
         }
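
     At the SQL level, the difference is one round trip instead of three; a sketch with an assumed users table and type column:

         -- three separate calls
         SELECT * FROM users WHERE type = 'X';
         SELECT * FROM users WHERE type = 'Y';
         SELECT * FROM users WHERE type = 'Z';

         -- one call; the X/Y/Z groups are then split apart in application code
         SELECT * FROM users WHERE type IN ('X', 'Y', 'Z');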

    Read the article

  • Two radically different queries against 4 million records execute in the same time - one uses brute force

    - by IanC
     I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

         [execution plan image: brute-force query]
         [execution plan image: index-based exception query]

     My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.
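
     For readers unfamiliar with the second approach, the general shape of an EXCEPT-based exclusion is sketched below; the table and column names are invented, since the question doesn't show the actual queries:

         SELECT match_id FROM likely_matches
         EXCEPT SELECT match_id FROM low_mismatches
         EXCEPT SELECT match_id FROM high_mismatches;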

    Read the article

  • Update query with conditional?

    - by dmontain
     I'm not sure if this is possible; if not, let me know. I have a PDO MySQL statement that updates 3 fields:

         $update = $mypdo->prepare("UPDATE tablename
             SET field1=:field1, field2=:field2, field3=:field3
             WHERE key=:key");

     But I want field3 to be updated only when $update3 = true (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

         //run this query to update only fields 1 and 2
         $update_part1 = $mypdo->prepare("UPDATE tablename
             SET field1=:field1, field2=:field2
             WHERE key=:key");

         //if field3 should be updated, run a separate query to update it separately
         if ($update3){
             $update_part2 = $mypdo->prepare("UPDATE tablename
                 SET field3=:field3
                 WHERE key=:key");
         }

     But hopefully there is a way to accomplish this in 1 query?
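
     On the SQL side, MySQL's IF() function can keep this to one statement by assigning field3 back to itself when a flag parameter is 0; a sketch (the extra :update3 parameter is an assumption, bound as 0 or 1):

         UPDATE tablename
         SET field1 = :field1,
             field2 = :field2,
             field3 = IF(:update3 = 1, :field3, field3)
         WHERE `key` = :key;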

    Read the article
