Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here where I have products of different type, each type with its own attributes. I report a short version for convenience product_type ============ product_type_id INT product_type_name VARCHAR product ======= product_id INT product_name VARCHAR product_type_id INT -> Foreign key to product_type.product_type_id ... (common attributes to all product) magazine ======== magazine_id INT title VARCHAR product_id INT -> Foreign key to product.product_id ... (magazine-specific attributes) web_site ======== web_site_id INT name VARCHAR product_id INT -> Foreign key to product.product_id ... (web-site specific attributes) This way I do not need to make a huge table with a column for each attribute of different product types (most of which will then be NULL) How do I SELECT a product by product.product_id and see all its attributes? Do I have to make a query first to know what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (if, when I retrieve the information about a product_id there are a lot of NULL, it would be fine at this point). Thank you
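
    A minimal sketch of one way to fetch a product together with its subtype attributes in a single query, assuming the tables above (only the columns of the matching subtype come back non-NULL; 42 is a placeholder id):

        SELECT p.*,
               m.magazine_id, m.title,          -- magazine-specific attributes
               w.web_site_id, w.name            -- web-site-specific attributes
        FROM product p
        LEFT JOIN magazine m ON m.product_id = p.product_id
        LEFT JOIN web_site w ON w.product_id = p.product_id
        WHERE p.product_id = 42;

    One LEFT JOIN per subtype table avoids the preliminary type-lookup query; product_type_id then tells the application which group of columns to read.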

    Read the article

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey guys, I have a big problem that I can't figure out. I have a few forms that upload files to the database; all of them work fine except one. I use the same query in all of them (a simple insert). I think it has something to do with the files I am trying to upload, but I am not sure. Here is the code: if ($_POST['action'] == 'hndlDocs') { $ref = $_POST['reference']; // Numeric value of $doc = file_get_contents($_FILES['doc']['tmp_name']); $link = mysqli_connect('localhost','XXXXX','XXXXX','documents'); mysqli_query($link,"SET autocommit = 0"); $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;"; mysqli_query($link,$query); if (mysqli_error($link)) { var_dump(mysqli_error($link)); mysqli_rollback($link); } else { print("<script> window.history.back(); </script>"); mysqli_commit($link); } } The database has only these fields: DATABASE documents ( reference INT(5) NOT NULL, //it is unsigned zerofill doc LONGBLOB NOT NULL, //this should contain the binary data mime_type TEXT NOT NULL // the mime type of the file, PHP allows only application/pdf and image/jpeg ); And the error I get is: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '00001, '????' at line 1. I would appreciate any help. Cheers!
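
    For what it's worth, the INSERT above has no column list, so MySQL tries to parse the zero-filled reference and the raw file bytes where column names should go, and the unescaped binary data breaks the quoting as well. A minimal sketch of the same insert as a prepared statement, assuming the documents table described above and the existing $link connection:

        $stmt = mysqli_prepare(
            $link,
            "INSERT INTO documents (reference, doc, mime_type) VALUES (?, ?, ?)"
        );
        // 'iss': integer reference, string-bound blob, string mime type.
        // Binding the blob as a string is fine as long as the file fits in max_allowed_packet.
        mysqli_stmt_bind_param($stmt, 'iss', $ref, $doc, $_FILES['doc']['type']);
        mysqli_stmt_execute($stmt);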

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be a TAB- or CSV-delimited file, which will then be imported into Maximizer. THE PROBLEM: It seems the query is too complex for Access, which crashes any time I run it. ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunity to learn something new. Export each table to CSV and import into SQLite, then write a query there to do what Access fails to do (merge the 64 tables). Export each table to CSV and write a script that reads each file and merges them into a single CSV. Somehow connect to the MS Access DB (via an API) and write a script to pull data from each table and merge it into a CSV file. QUESTION: What do you recommend?
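
    If the SQLite route is taken, the merge itself is just a stack of UNION ALL selects; a sketch, assuming the 64 exported CSVs have been imported as tables t01 through t64 with identical column layouts (the table names are illustrative):

        CREATE TABLE merged AS
        SELECT * FROM t01
        UNION ALL
        SELECT * FROM t02
        UNION ALL
        -- ... repeat for t03 through t63 ...
        SELECT * FROM t64;

    The sqlite3 command-line shell can then write the result back out with .mode csv and .output merged.csv followed by SELECT * FROM merged;.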

    Read the article

  • Is it possible to create multi-tiered WHERE statements in MySQL?

    - by Brendan
    I'm currently developing a program that will generate reports based on lead data. My issue is that I'm running several queries for something I would like to handle with a single query. For instance, I want to gather data for leads generated in the past day (submission_date > NOW() - INTERVAL 1 DAY) and find out how many total leads there were and how many of them were sold (sold = 1) versus unsold (sold = 0) in that timeframe. Currently this is done with two queries, one with WHERE sold = 1 and one with WHERE sold = 0. That works, but when I want to generate this data for the past day, week, month, year, and all time, I will have to run 10 queries to obtain it. I feel like there HAS to be a more efficient way of doing this. I know I can create a MySQL function for this, but I don't see how that would solve the problem. Thanks!!
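
    One way to collapse all of those counts into a single query is conditional aggregation: in MySQL a boolean expression evaluates to 1 or 0, so SUM() acts as a conditional COUNT. A sketch, assuming the rows live in a table called leads (the table name is illustrative):

        SELECT
            SUM(submission_date > NOW() - INTERVAL 1 DAY)                AS day_total,
            SUM(submission_date > NOW() - INTERVAL 1 DAY   AND sold = 1) AS day_sold,
            SUM(submission_date > NOW() - INTERVAL 1 WEEK)               AS week_total,
            SUM(submission_date > NOW() - INTERVAL 1 WEEK  AND sold = 1) AS week_sold,
            SUM(submission_date > NOW() - INTERVAL 1 MONTH)              AS month_total,
            SUM(submission_date > NOW() - INTERVAL 1 MONTH AND sold = 1) AS month_sold,
            SUM(submission_date > NOW() - INTERVAL 1 YEAR)               AS year_total,
            SUM(submission_date > NOW() - INTERVAL 1 YEAR  AND sold = 1) AS year_sold,
            COUNT(*)                                                     AS all_total,
            SUM(sold = 1)                                                AS all_sold
        FROM leads;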

    Read the article

  • C# - Fast and simple multi-dimensional data structures?

    - by Jeremy Rudd
    I need to store multi-dimensional numeric data in a manner that's easy to work with. I'm capturing data in real time, and once it is processed I would destroy and GC the older data. This data structure must be fast so it doesn't hurt my overall app performance; the faster the better. What are my choices in terms of platform-supported data structures? I'm using VS 2010 and .NET 4.
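
    For purely numeric data with known dimensions, the plain CLR arrays are usually the fastest starting point; a minimal sketch contrasting the two built-in shapes (the sizes and names are illustrative):

        int channels = 8, samplesPerChannel = 4096;

        // Rectangular array: one contiguous block, constant-time indexing.
        double[,] grid = new double[channels, samplesPerChannel];
        grid[2, 1000] = 3.14;

        // Jagged array: each row is its own array, so a stale row can be replaced
        // and left for the GC independently - convenient for a rolling real-time buffer.
        double[][] frames = new double[channels][];
        for (int c = 0; c < channels; c++)
            frames[c] = new double[samplesPerChannel];
        frames[2][1000] = 3.14;

    If the dimensions are not known up front, List<double[]> or a Dictionary keyed on a composite index are the usual fallbacks, at some cost per access.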

    Read the article

  • App Engine - Query using a class member as parameter

    - by Zach
    I have a simple class, relevant details below: @PersistenceCapable(identityType = IdentityType.APPLICATION) public class SimpleCategory implements Serializable{ ... public static enum type{ Course, Category, Cuisine } @Persistent public type t; ... } I am attempting to query all SimpleCategory objects of the same type. public SimpleCategory[] getCategories(SimpleCategory.type type) { PersistenceManager pm = PMF.get().getPersistenceManager(); try{ Query q = pm.newQuery(SimpleCategory.class); q.setFilter("t == categoryType"); q.declareParameters("SimpleCategory.type categoryType"); List<SimpleCategory> cats = (List<SimpleCategory>) q.execute(type); ... } This results in a ClassNotResolvedException for SimpleCategory.type. The google hits I've found so far recommended to: Use query.declareImports to specify the class i.e. q.declareImports("com.test.zach.SimpleCategory.type"); Specify the fully qualified name of SimpleCategory in declareParameters Neither of these suggestions has worked. By removing .type and recompiling, I can verify that declareParameters can see SimpleCategory just fine, it simply cannot see the SimpleCategory.type, despite the fact that the remainder of the method has full visibility to it. What am I missing?
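
    One workaround worth trying is to drop declareParameters/declareImports entirely and use a JDOQL implicit parameter (the ":name" form), so the nested enum type never has to be spelled out in a string at all; a sketch under that assumption, slotting into getCategories above:

        Query q = pm.newQuery(SimpleCategory.class);
        q.setFilter("t == :categoryType");   // implicit parameter, typed from the value passed to execute()
        @SuppressWarnings("unchecked")
        List<SimpleCategory> cats = (List<SimpleCategory>) q.execute(type);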

    Read the article

  • Is mono fast enough for Mac OS X?

    - by prosseek
    I have to use .NET/C# for the next company project. Since I develop on a Mac, I looked into Mono as the development environment/tool. Is Mono for Mac OS X fast enough? I mean, how does the performance of running an assembly compare to running the same code on .NET on a Windows machine? In practical terms, do I have to buy a PC laptop for C#/.NET development?

    Read the article

  • Converted PHP query to PDO and now getting no results

    - by jaw
    I had a query working just fine, but after converting it to PDO I am getting no results when I do a var_dump($row). But there are no error messages. Can anyone see what I might be doing wrong? //Here is the original query that worked fine and returned results global $wpdb; $results = $wpdb->get_results("SELECT stories.story_name, stories.category, stories.SID, wp_users.ID, wp_users.display_name FROM stories LEFT JOIN wp_users ON stories.ID=wp_users.ID where stories.active = 1"); //Here is the query in PDO form which returns no results $results = $dbh->prepare("select wp_users.ID, wp_users.display_name, stories.SID, stories.story_name, stories.category, FROM stories LEFT JOIN wp_users ON stories.ID=wp_users.ID WHERE stories.active=1"); $results->bindParam(':wp_users.ID', $user_ID, PDO::PARAM_INT); $results->bindParam(':display_name', $display_name, PDO::PARAM_STR); $results->bindParam(':stories.SID', $SID, PDO::PARAM_INT); $results->bindParam(':story_name', $story_name, PDO::PARAM_STR); $results->bindParam(':category', $category, PDO::PARAM_STR); $results->execute(); $row = $results->fetchAll(PDO::FETCH_ASSOC); //returns 0 results but should return 8 as original code above did echo var_dump($row);
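
    Two things stand out in the PDO version: the select list ends with a trailing comma before FROM (a syntax error that may only surface on execute, or not at all if exceptions are off), and bindParam() is being called for named placeholders that do not appear in the SQL. A minimal sketch of a corrected version, assuming $dbh is the connected PDO handle:

        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // make errors visible

        $results = $dbh->prepare(
            "SELECT wp_users.ID, wp_users.display_name,
                    stories.SID, stories.story_name, stories.category
             FROM stories
             LEFT JOIN wp_users ON stories.ID = wp_users.ID
             WHERE stories.active = 1"
        );
        $results->execute();                       // no placeholders, so nothing to bind
        $rows = $results->fetchAll(PDO::FETCH_ASSOC);
        var_dump($rows);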

    Read the article

  • Mod_rewrite on all website images

    - by Esteve Camps
    I'm designing an image repository. I want to decouple the filename from the image's HTML link. For instance, the image in the filesystem is called images/items/12543.jpg while the HTML is <img src="images/car.jpg" />. Does anyone strongly discourage rewriting all image requests through PHP, so that when images/car.jpg is requested, Apache actually replies with the content of images/items/12543.jpg? I don't know whether I would run into performance problems.
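
    A sketch of that setup, with an illustrative image.php resolver (the file name and pattern are assumptions). The rewrite itself is cheap, but every image then costs a PHP process plus a name-to-file lookup, so sending long Cache-Control/Expires headers from the resolver, or putting a reverse-proxy cache in front, is worth considering:

        # .htaccess - route friendly image names through a PHP resolver
        RewriteEngine On
        RewriteRule ^images/([A-Za-z0-9_-]+)\.jpg$ image.php?name=$1 [L]

    image.php would then map the requested name to the stored file (e.g. images/items/12543.jpg), emit the Content-Type header, and stream it with readfile().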

    Read the article

  • Why does this trigger query fail in SQLite with Qt?

    - by dexterous_stranger
    I am a beginner in the SQL. I am using SQLite, Qt - on embedded systems. I want to put a trigger here. The trigger is that whenever the primary key Id is greater than 32145, then channelNum=101 should be set. I want to set the attrib name - text also, but I got the compilation issue. I believe that the setting of trigger is the part of DDL - Data definition language. Please let me know that if I am wrong here. Here is my create db code. I get the sql query error. Also please do suggest how to set the text attrib = "COmedy". /** associate db with query **/ QSqlQuery query ( m_demo_db ); /** Foreign keys are disabled by default in sqlite **/ /** Here is the pragma to turn them on first **/ query.exec("PRAGMA foreign_keys = ON;"); if ( false == query.exec()) { qDebug()<<"Pragma failed"; } /** Create Table for storing user preference LCN for DTT **/ qDebug()<<"Create Table postcode.db"; query.prepare(" CREATE TABLE dttServiceList (Id INTEGER PRIMARY KEY, attrib varchar(20), channelNum integer )" ); if ( false == query.exec()) { qDebug()<<"Create dttServiceList table failed"; } /** Try placing trigger here **/ triggerQuery = "CREATE TRIGGER upd_check BEFORE INSERT ON dttServiceList \ FOR EACH ROW \ BEGIN \ IF Id > 32145 THEN SET channelNum=101; \ END IF; \ END; "; query.prepare(triggerQuery); if ( false == query.exec()) { qDebug()<<"Trigger failed !!"; qDebug() << query.lastError(); } Also, how to set the text name in the trigger - I want to SET attrib = "Comedy".
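
    SQLite's trigger body does not accept IF ... THEN ... END IF; the condition goes in a WHEN clause and the body is a plain UPDATE (or INSERT/DELETE/SELECT RAISE(...)). A sketch of the trigger this code seems to be aiming for, including the attrib text, assuming the dttServiceList table created above - pass the whole string to a single query.exec() call, since it is one CREATE TRIGGER statement:

        CREATE TRIGGER upd_check
        AFTER INSERT ON dttServiceList
        FOR EACH ROW
        WHEN NEW.Id > 32145
        BEGIN
            UPDATE dttServiceList
            SET channelNum = 101,
                attrib     = 'Comedy'
            WHERE Id = NEW.Id;
        END;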

    Read the article

  • Select multiple records by one query

    - by kofto4ka
    Hello there. Please give me advice on how to construct a select query. I have a table named table with fields type and obj_id. I want to select all records matching the following array: $arr = array( 0 => array('type' => 1, 'obj_id' => 5), 1 => array('type' => 3, 'obj_id' => 15), 2 => array('type' => 4, 'obj_id' => 14), 3 => array('type' => 12, 'obj_id' => 17), ); I want to select the needed rows with one query; is that possible? Something like select * from `table` where type in (1,3,4,12) and obj_id in (5,15,14,17), but this query also returns records such as type = 3 with obj_id = 14, or type = 1 with obj_id = 17. P.S. Moderators, please fix my title; I don't know how to describe my question.
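
    MySQL accepts row constructors inside IN, so the two columns can be matched as pairs instead of independently; a sketch using the values from the array above:

        SELECT *
        FROM `table`
        WHERE (type, obj_id) IN ((1, 5), (3, 15), (4, 14), (12, 17));

    On older MySQL versions the row-constructor form may not use an index well; the equivalent (type = 1 AND obj_id = 5) OR (type = 3 AND obj_id = 15) OR ... spelling is easy to generate from the PHP array if that matters.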

    Read the article

  • MySQL Query Where Column Like Column

    - by shmeeps
    I'm working on a small project that involves grabbing a list of contacts which are stored for each group. Essentially, the database is set up so that each group has a primary and secondary contact stored as, unsurprisingly, Group.Primary and Group.Secondary. The objective is to pull every Primary and Secondary contact for each Group and display them in a sortable table. I have the sortable table all worked out, but I have come across a small problem. Each primary and secondary field can have more than one contact separated by a comma. For instance, if Primary contained 123,256 , it would need to pull both Contacts with IDs 123 and 256. I had intended to use a query formatted like this: SELECT * FROM Group G, Contacts C WHERE G.Primary LIKE %C.ID% OR G.Secondary LIKE %C.ID% so that I could just skip the comma part, but I can't seem to find a working query for this. My question to you is, am I just overlooking something here? Is there a simple query that would let me do this? Or am I better off getting the groups and contacts separately, and combine the two later. I think the former is a little easier to understand when read, which is a plus as this is a shared project, but if that is not possible I will do the latter. This code is simplified, but it gets the point across.
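
    MySQL's FIND_IN_SET() matches one value against a comma-separated list exactly, avoiding the false hits that LIKE '%123%' would give (for example matching 1234); a sketch against the tables described above, with the reserved words backquoted:

        SELECT C.*, G.*
        FROM `Group` G
        JOIN Contacts C
          ON FIND_IN_SET(C.ID, G.`Primary`) > 0
          OR FIND_IN_SET(C.ID, G.`Secondary`) > 0;

    This cannot use an index on the contact ID, so if the data grows, a junction table (GroupID, ContactID, Role) is the cleaner long-term shape - but the query above keeps the intent readable for a shared project.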

    Read the article

  • MySQLDB query not returning all rows

    - by RBK
    I am trying to do a simple fetch using MySQLDB in Python. I have 2 tables(Accounts & Products). I have to look up Accounts table, get acc_id from it & query the Products table using it. The Products tables has more than 10 rows. But when I run this code it randomly returns between 0 & 6 rows each time I run it. Here's the code snippet: # Set up connection con = mdb.connect('db.xxxxx.com', 'user', 'password', 'mydb') # Create cursor cur = con.cursor() # Execute query cur.execute("SELECT acc_id FROM Accounts WHERE ext_acc = '%s'" % account_num ) # account_num is alpha-numberic and is got from preceding part of the program # A tuple is returned, so get the 0th item from it acc_id = cur.fetchone()[0] print "account_id = ", acc_id # Close the cursor - I was not sure if I can reuse it cur.close() # Reopen the cursor cur = con.cursor() # Second query cur.execute("SELECT * FROM Products WHERE account_id = %d" % acc_id) keys = cur.fetchall() print cur.rowcount # This prints incorrect row count for key in keys: # Does not print all rows. Tried to directly print keys instead of iterating - same result :( print key # Closing the cursor & connection cur.close() con.close() The weird part is, I tried to step through the code using a debugger(PyDev on Eclipse) and it correctly gets all rows(both the value stored in the variable 'keys' as well as console output are correct). I am sure my DB has correct data since I ran the same SQL on MySQL console & got the correct result. Just to be sure I was not improperly closing the connection, I tried using with con instead of manually closing the connection and it's the same result. I did RTFM but I couldn't find much in it to help me with this issue. Where am I going wrong? Thank you. EDIT: I noticed another weird thing now. In the line cur.execute("SELECT * FROM Products WHERE account_id = %d" % acc_id), I hard-coded the acc_id value, i.e made it cur.execute("SELECT * FROM Products WHERE account_id = %d" % 322) and it returns all rows
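
    One thing worth ruling out first is the manual string interpolation: passing parameters to execute() lets the driver handle quoting and type conversion, which removes any mismatch between the formatted value and the column type (note that MySQLdb uses %s as the placeholder even for numeric columns). A sketch, reusing the con connection from above:

        cur = con.cursor()
        cur.execute("SELECT acc_id FROM Accounts WHERE ext_acc = %s", (account_num,))
        acc_id = cur.fetchone()[0]

        cur.execute("SELECT * FROM Products WHERE account_id = %s", (acc_id,))
        rows = cur.fetchall()
        print len(rows)          # row count after fetchall()
        for row in rows:
            print row
        cur.close()

    If the count still varies between runs, it may be that the Products rows are being inserted by another connection inside an uncommitted transaction; with InnoDB's default REPEATABLE READ isolation, a snapshot taken earlier will not see them.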

    Read the article

  • Servlet and JSP: sending query results following the MVC pattern

    - by kawtousse
    Hi everyone, in order to separate Java code and HTML code and be more faithful to the MVC pattern, I am coding like this. In the servlet I put the following: net.sf.hibernate.Session s = null; net.sf.hibernate.Transaction tx; try { s= HibernateUtil.currentSession(); tx=s.beginTransaction(); Query query = s.createQuery("select opcemployees.Nom,opcemployees.Prenom,dailytimesheet.TrackingDate,dailytimesheet.Activity," + "dailytimesheet.ProjectCode,dailytimesheet.WAName,dailytimesheet.TaskCode," + "dailytimesheet.TimeSpent,dailytimesheet.PercentTaskComplete from Opcemployees opcemployees,Dailytimesheet dailytimesheet " + "where opcemployees.Matricule=dailytimesheet.Matricule and dailytimesheet.Etat=3 " + "group by opcemployees.Nom,opcemployees.Prenom" ); for(Iterator it=query.iterate();it.hasNext();) { if(it.hasNext()){ Object[] row = (Object[]) it.next(); request.setAttribute("items", row); }} } catch (HibernateException e){ e.printStackTrace(); } request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response); and in the JSP I start like this: <table> <c:forEach items="${items}" var="item"> <tr> <td>? </td> <td>?</td> </tr> </c:forEach> In this case, what exactly should I put to obtain my result: a table filled with the right values from the request?
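
    One problem above is that request.setAttribute("items", row) runs once per row, so by the time the JSP renders, only the last Object[] is left rather than a collection. Collecting the rows into a List and setting it once matches the c:forEach in the JSP; a sketch:

        List<Object[]> items = new ArrayList<Object[]>();   // java.util imports assumed
        for (Iterator it = query.iterate(); it.hasNext();) {
            items.add((Object[]) it.next());
        }
        request.setAttribute("items", items);
        request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response);

    In the JSP, each ${item} is then an Object[] in select-list order, so the cells can be written as ${item[0]}, ${item[1]}, ${item[2]}, and so on (or each row can be mapped to a small bean and referenced as ${item.nom}).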

    Read the article

  • What's the fastest lookup algorithm for a key, pair data structure (i.e, a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 – 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }
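
    For a key space this small and dense ('a'..'z'), a direct array lookup beats both the linear scan and std::map: it is one subtraction and one index, with no comparisons or pointer chasing. A minimal sketch:

        #include <iostream>

        int main() {
            int table[26];
            for (int i = 0; i < 26; ++i)
                table[i] = i;                    // value for key 'a' + i

            const char key = 'z';
            int value = table[key - 'a'];        // O(1), no comparisons, cache-friendly
            std::cout << "value: " << value << "\n";
            return 0;
        }

    For larger or sparse key sets, a hash table (std::tr1::unordered_map with g++ 4.4, std::unordered_map in C++0x mode) or a sorted std::vector searched with std::lower_bound are the usual next steps.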

    Read the article

  • What would be better, (1 database + 4 tables) or (2 databases + 2 tables each) ?

    - by griseldas
    Hi there, I would like to be advised on what would be better (with regard to performance): A) 1 database with 4 tables, or B) 2 databases (on the same server), each with 2 tables. The tables' sizes and usage are more or less similar, so the 2 tables in database 1 would have similar usage/size to the 2 tables in database 2. The tables could have 500,000+ records, and the 2 tables in each database are not related (no join queries etc. between them). Thanks in advance for your comments.

    Read the article

  • Maximum capabilities of MySQL

    - by cdated
    How do I know when a project is just too big for MySQL and I should use something with a better reputation for scalability? Is there a maximum database size for MySQL before performance degrades? What factors make MySQL no longer a viable option compared to a commercial DBMS like Oracle or SQL Server?

    Read the article

  • What does "performant" software actually mean?

    - by Roddy
    I see it used a lot, but haven't seen a definition that makes complete sense. Wiktionary says "characterized by an adequate or excellent level of performance or efficiency", which isn't much help. Initially I thought performant just meant "fast", but others seem to think it's also about stability, code quality, memory use/footprint, or some combination of all of those. I think this is a "real" question - but if enough people reckon it is subjective, that's an answer in itself.

    Read the article

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will provide a new table of all values that would return results for an arbitrary query. Say my table has a schema like: id name age city What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y and Z"? My naive approach for this would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} as well and basically using that table as an argument list to the query, but that quickly becomes an impossibility for queries on many columns. For simple conjunctive equality queries, i.e. "name=X and age=Y", this is much simpler, as I can do something like SELECT name, age, count(*) AS count FROM main GROUP BY name, age HAVING count > 0 But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.
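
    The same GROUP BY idea extends to the arbitrary predicate: apply it in the WHERE clause and group by the columns of interest, and the result is exactly the set of value combinations that would return rows (a sketch with illustrative literals standing in for X, Y and Z):

        SELECT city, age, COUNT(*) AS matches
        FROM main
        WHERE NOT city = 'Boston'
          AND age BETWEEN 18 AND 30
        GROUP BY city, age;

    No HAVING matches > 0 is needed here, because GROUP BY only emits groups that survived the WHERE clause.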

    Read the article

  • Displaying Query Results Horizontally

    - by AndyD273
    I am wondering if it is possible to take the results of a query and return them as a CSV string instead of as a column of cells. Basically, we have a table called Customers, and we have a table called CustomerTypeLines, and each Customer can have multiple CustomerTypeLines. When I run a query against it, I run into problems when I want to check multiple types, for instance: Select * from Customers a Inner Join CustomerTypeLines b on a.CustomerID = b.CustomerID where b.CustomerTypeID = 14 and b.CustomerTypeID = 66 ...returns nothing because a customer can't have both on the same line, obviously. In order to make it work, I had to add a field to Customers called CustomerTypes that looks like ,14,66,67, so I can do a Where a.CustomerTypes like '%,14,%' and a.CustomerTypes like '%,66,%' which returns 85 rows. Of course this is a pain because I have to make my program rebuild this field for that Customer each time the CustomerTypeLines table is changed. It would be nice if I could do a sub query in my where that would do the work for me, so instead of returning the results like: 14 66 67 it would return them like ,14,66,67, Is this possible?
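
    If this is MySQL, GROUP_CONCAT() builds that comma-separated string on the fly, so the hand-maintained CustomerTypes column can go away; the has-both-types test can also be expressed directly. A sketch of both, assuming the Customers/CustomerTypeLines tables as described (on SQL Server the concatenation would need FOR XML PATH, or STRING_AGG on 2017+):

        -- One row per customer with a CSV of its type IDs
        SELECT a.CustomerID,
               GROUP_CONCAT(b.CustomerTypeID ORDER BY b.CustomerTypeID) AS CustomerTypes
        FROM Customers a
        JOIN CustomerTypeLines b ON b.CustomerID = a.CustomerID
        GROUP BY a.CustomerID;

        -- Customers that have BOTH type 14 and type 66
        SELECT a.CustomerID
        FROM Customers a
        JOIN CustomerTypeLines b ON b.CustomerID = a.CustomerID
        WHERE b.CustomerTypeID IN (14, 66)
        GROUP BY a.CustomerID
        HAVING COUNT(DISTINCT b.CustomerTypeID) = 2;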

    Read the article

  • MySQL - Exclude rows from Select based on duplication of two columns

    - by Carson C.
    I am attempting to narrow results of an existing complex query based on conditional matches on multiple columns within the returned data set. I'll attempt to simplify the data as much as possible here. Assume that the following table structure represents the data that my existing complex query has already selected (here ordered by date): +----+-----------+------+------------+ | id | remote_id | type | date | +----+-----------+------+------------+ | 1 | 1 | A | 2011-01-01 | | 3 | 1 | A | 2011-01-07 | | 5 | 1 | B | 2011-01-07 | | 4 | 1 | A | 2011-05-01 | +----+-----------+------+------------+ I need to select from that data set based on the following criteria: If the pairing of remote_id and type is unique to the set, return the row always If the pairing of remote_id and type is not unique to the set, take the following action: Of the sets of rows for which the pairing of remote_id and type are not unique, return only the single row for which date is greatest and still less than or equal to now. So, if today is 2010-01-10, I'd like the data set returned to be: +----+-----------+------+------------+ | id | remote_id | type | date | +----+-----------+------+------------+ | 3 | 1 | A | 2011-01-07 | | 5 | 1 | B | 2011-01-07 | +----+-----------+------+------------+ For some reason I'm having no luck wrapping my head around this one. I suspect the answer lies in good application of group_by, but I just can't grasp it. Any help is greatly appreciated!
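
    A sketch of one way to express this in a single pass, assuming the rows above live in a table (or derived table) called t - the name is illustrative: compute, per (remote_id, type) group, the row count and the greatest date that is not in the future, then keep rows that either belong to a singleton group or carry that date.

        SELECT t.*
        FROM t
        JOIN (
            SELECT remote_id, type,
                   COUNT(*) AS cnt,
                   MAX(CASE WHEN date <= CURDATE() THEN date END) AS pick_date
            FROM t
            GROUP BY remote_id, type
        ) g ON g.remote_id = t.remote_id
           AND g.type      = t.type
           AND (g.cnt = 1 OR t.date = g.pick_date);

    If two rows in a non-unique group share the same pick_date, both come back; add a tiebreak on id if that matters.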

    Read the article

  • Regex vs. string:find() for simple word boundary

    - by user576267
    Say I only need to find out whether a line read from a file contains a word from a finite set of words. One way of doing this is to use a regex like this: .*\y(good|better|best)\y.* Another way of accomplishing this is using a pseudo code like this: if ( (readLine.find("good") != string::npos) || (readLine.find("better") != string::npos) || (readLine.find("best") != string::npos) ) { // line contains a word from a finite set of words. } Which way will have better performance? (i.e. speed and CPU utilization)

    Read the article
