Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.


  • Manual LINQ to SQL / Entity Framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and e.g. NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know whether any of those methods are required by the entity framework to keep track of changes to the mapped types, in order for things to work when updating the database. Thanks!

    Read the article

  • postgres subquery w/ derived column

    - by Wells
    The following query won't work, but it should be clear what I'm trying to do: split the value of t on space and use the last element of that array in the subquery (as it will match tl). Any ideas how to do this? Thanks!

        SELECT t, y, "type",
               regexp_split_to_array(t, ' ') AS t_array,
               sum(dr),
               (SELECT uz FROM f.tfa WHERE tl = t_array[-1]) AS uz,
               sc
        FROM padres.yd_fld
        WHERE y = 2010 AND pos <> 0
        GROUP BY t, y, "type", sc;
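    A possible rework, sketched below (not part of the original question): PostgreSQL arrays are 1-based and do not accept negative subscripts, so the last element has to be addressed as arr[array_upper(arr, 1)], and a derived table makes the computed column visible to the subquery. Table and column names are the asker's own.

        -- Sketch only: arr[array_upper(arr, 1)] picks the last element.
        SELECT q.t, q.y, q."type", sum(q.dr),
               (SELECT uz FROM f.tfa WHERE tl = q.last_el) AS uz,
               q.sc
        FROM (
            SELECT t, y, "type", dr, sc,
                   (regexp_split_to_array(t, ' '))[array_upper(regexp_split_to_array(t, ' '), 1)] AS last_el
            FROM padres.yd_fld
            WHERE y = 2010 AND pos <> 0
        ) AS q
        GROUP BY q.t, q.y, q."type", q.sc, q.last_el;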

    Read the article

  • Has anyone ever successfully made index merge work for MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'd be glad to see success stories here :)
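    A sketch of how index_merge usually becomes visible (an assumption about the optimizer, not a guaranteed fix): on a three-row table a full scan is cheaper than any index plan, so the merge rarely appears on toy data; refreshing statistics helps on larger tables, and a manual UNION is the classic workaround when the merge never shows up.

        -- Sketch: refresh statistics, then re-check; on a table of real size
        -- the plan often becomes type=index_merge, Using union(i_t_a,i_t_b).
        ANALYZE TABLE t;
        EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 4;

        -- Manual rewrite that uses both single-column indexes explicitly:
        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;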

    Read the article

  • Update table with an IF statement in PL/SQL

    - by Matt
    I am trying to do something like this but am having trouble putting it into Oracle code:

        BEGIN
          IF ((SELECT complete_date FROM task_table WHERE task_id = 1) IS NULL) THEN
            UPDATE task_table SET complete_date = //somedate WHERE task_id = 1;
          ELSE
            UPDATE task_table SET complete_date = NULL;
          END IF;
        END;

    But this does not work. I also tried IF EXISTS(SELECT complete_date FROM task_table WHERE task_id = 1), with no luck.
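    One way this is usually written, sketched below: PL/SQL cannot evaluate a SELECT inside an IF condition, so the value is fetched into a variable first. SYSDATE stands in for the elided //somedate, and the task_id filter on the ELSE branch is an assumption (without it, every row would be updated).

        -- Sketch only: fetch first, then branch.
        DECLARE
          v_complete_date task_table.complete_date%TYPE;
        BEGIN
          SELECT complete_date INTO v_complete_date
            FROM task_table WHERE task_id = 1;
          IF v_complete_date IS NULL THEN
            UPDATE task_table SET complete_date = SYSDATE WHERE task_id = 1;
          ELSE
            UPDATE task_table SET complete_date = NULL WHERE task_id = 1;
          END IF;
        END;

    The whole toggle can also be a single statement: UPDATE task_table SET complete_date = CASE WHEN complete_date IS NULL THEN SYSDATE END WHERE task_id = 1;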

    Read the article

  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on a LAMP stack. I have a MySQL table containing some records; these are queried by a simple SELECT in the Joomla article, as pasted below. This particular Joomla website has caching turned on in Joomla's Global Configuration. I need to randomize the order in which I display the result set each time the page is loaded. Regular PHP/MySQL would offer me two approaches for this:

    1. Use ORDER BY RAND() (or any of a number of methods) to have the SELECT query return reasonably random results.
    2. Once PHP gets the result of the SELECT into an array, shuffle the array to get a reasonably random order of items.

    However, as this Joomla instance has caching turned ON in its Global Configuration, either approach fails. The first time I load the page the order is randomized, but further reloads do not change the order, as the page is delivered from cache. The instant the cache is disabled, both approaches (shuffle / ORDER BY RAND) work perfectly. What am I missing? How do I override the global cache for this specific article? A very simple requirement, met by both PHP and MySQL reasonably well, is blocked by the Joomla cache that I cannot turn off. The PHP that returns results from the database:

        $db = JFactory::getDBO();
        $select = "SELECT id FROM jos_mytable;"; // ORDER BY RAND()
        $db->setQuery($select);
        echo $db->getQuery(); // Show me the query!
        $rows = $db->loadObjectList();
        // shuffle($rows);
        foreach ($rows as $row) {
            echo "$row->id";
        }

    Read the article

  • How to get a common field from ten tables with different field names

    - by Fero
    Hi all, I have a common field in ten tables, each with a different field name. Example:

        table1:
        t1_id   t1_location
        1       india
        2       china
        3       america

        table2:
        t2_id   t2_location
        4       london
        5       australia
        6       america

    Now my output should be:

        location
        india
        china
        america
        london
        australia

    How should I get that using a MySQL query? Thanks in advance.
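    The standard answer, sketched below (table names beyond the two shown are placeholders): UNION without ALL merges the differently named columns under one heading and removes duplicates such as america.

        -- Sketch: extend the same pattern over the remaining eight tables.
        SELECT t1_location AS location FROM table1
        UNION
        SELECT t2_location FROM table2;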

    Read the article

  • change postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s, that is, with no time zone and showing tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the timezone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format; all times come out EST.

    Additional info: if I write "select somedate from sometable" I get the "no timezone" format, but if I write "select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz')" then timezones work as I would expect. This really sounds to me like something is implicitly applying "to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m')" to all timestamps. But I can't find anything in the documentation about how I would do this if I wanted to, nor can I find anything in the code that appears to do this. Though as I don't know what to look for, that doesn't prove much.
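    A few session-level checks, sketched below, may narrow this down; the role-level override is an assumption about what fits here, and myuser is a placeholder.

        -- Sketch: inspect what the suspect client's session actually runs under.
        SHOW datestyle;
        SHOW timezone;
        -- Override for the current session:
        SET timezone = 'America/New_York';
        -- Or persist per login role, so every session from it inherits the value:
        ALTER ROLE myuser SET timezone = 'UTC';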

    Read the article

  • Filter on count(*) in oracle

    - by chris
    I have a grouped query and would like to filter it based on count(*). Can I do this without a subquery? This is what I have currently:

        select *
        from (select ID, count(*) cnt from name group by ID)
        where cnt > 1;
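    The usual subquery-free form is HAVING, which filters on the aggregate directly; a sketch using the asker's table:

        SELECT id, COUNT(*) AS cnt
        FROM name
        GROUP BY id
        HAVING COUNT(*) > 1;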

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access seems unable to perform a query this complex, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

    1. Export each table to CSV, import into SQLite, and run a query there to do what Access fails to do (merge 64 tables).
    2. Export each table to CSV and write a script to merge the CSVs into a single CSV.
    3. Connect to the MS Access DB through an API and write a script to pull data from each table and merge it into a CSV file.

    QUESTION: What do you recommend?
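    A fourth option, sketched in Access SQL under the assumption that the 64 tables share compatible columns (all names below are placeholders): append the tables into one staging table in small steps, which Access often survives where a single 64-way UNION does not.

        -- Sketch: seed a staging table from the first source table ...
        SELECT * INTO merged FROM table01;
        -- ... then append the rest one at a time (repeat through table64):
        INSERT INTO merged SELECT * FROM table02;
        INSERT INTO merged SELECT * FROM table03;
        -- Finally export "merged" as TAB/CSV for the Maximizer import.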

    Read the article

  • SimpleDB as Denormalized DB

    - by Max
    In an environment where a relational database handles all business transactions, is it a good idea to use SimpleDB for all data queries, to get faster and more lightweight search? The master data store would be the relational DB, which is "replicated"/"transformed" into SimpleDB to provide very fast read-only queries, since no JOINs or complicated subselects are needed.

    Read the article

  • Tree-structured resource authorization

    - by user323883
    I have a portfolio table with portfolio_id and parent_portfolio_id, and I have a user table. Some users may have access to all portfolios, some to selected portfolios, and some, depending on group, to everything under a portfolio subtree. Can someone suggest a good schema, or any existing framework?
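    One possible schema, sketched below; every name is illustrative, and the recursive query assumes a database with CTE support (PostgreSQL, MySQL 8+, SQL Server).

        -- Sketch: a grant row names a single portfolio, or a subtree root.
        -- ("Access to all portfolios" is simplest as a flag on the user table.)
        CREATE TABLE portfolio_grants (
            user_id      INTEGER NOT NULL,
            portfolio_id INTEGER NOT NULL,
            is_subtree   BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (user_id, portfolio_id)
        );

        -- Expanding one user's subtree grants into concrete portfolio ids:
        WITH RECURSIVE reachable AS (
            SELECT portfolio_id FROM portfolio_grants
            WHERE user_id = 42 AND is_subtree
            UNION ALL
            SELECT p.portfolio_id
            FROM portfolio AS p
            JOIN reachable AS r ON p.parent_portfolio_id = r.portfolio_id
        )
        SELECT portfolio_id FROM reachable;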

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
          id SERIAL PRIMARY KEY,
          name VARCHAR(255),
          filetype VARCHAR(255),
          ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
          id SERIAL PRIMARY KEY,
          file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
          size INTEGER,
          ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow: around 70 seconds for that query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN ANALYZE output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?
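    One detail in the plan worth a hedged note: the sequential scan over files takes 62 of the 74 seconds and is estimated at 1.14M rows against 805k actual, which suggests table bloat; the sketch below is the cheap first experiment rather than a certain fix.

        -- Sketch: reclaim dead space and refresh planner statistics,
        -- then re-run EXPLAIN ANALYZE and compare the files scan time.
        VACUUM FULL ANALYZE files;
        VACUUM ANALYZE properties;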

    Read the article

  • What's the best way to access a MS Access database using PHP?

    - by Jack Roscoe
    Hi, I need to access some data from an MS Access database and retrieve it using PHP. I've looked around the web and found the following line, which seems to connect to the database correctly:

        $conn->Open("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\wamp\www\data\MYDB.mdb");

    However, I then tried to retrieve some data in the following way:

        $query = "SELECT pageid FROM pages_table";
        $result = mysqli_query($conn, $query);
        $amount_of_pages = 0;
        if (mysqli_num_rows($result) <= 0)
            echo "No results found.";
        else
            while ($row = mysqli_fetch_array($result, MYSQL_ASSOC))
                $amount_of_pages++;

    and was presented with the following errors:

        Warning: mysqli_query() expects parameter 1 to be mysqli, object given in C:\wamp\www\data\index.php on line 19
        Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, null given in C:\wamp\www\data\index.php on line 23
        No results found.

    I don't really understand the connection to the Access database; is there something I should be doing differently? Thanks in advance for any help.

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is to start a transaction with serializable isolation level and run:

        INSERT INTO dest_table
          SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
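    On PostgreSQL 9.1 or later there is a third option, sketched here, that evaluates the condition exactly once and moves the rows in a single statement (the <condition> placeholder is the asker's own):

        WITH moved AS (
            DELETE FROM orig_table
            WHERE <condition>           -- evaluated once, here only
            RETURNING *
        )
        INSERT INTO dest_table
        SELECT * FROM moved;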

    Read the article

  • SQLAlchemy duplicated commit

    - by PythonWolf
    Good morning, I'm currently facing a problem in my CherryPy application, in my own custom session module. When performing session.add(), the exact same object gets updated twice:

        cherrypy.request.SessionManager.user_data = user
        try:
            db_session.add(cherrypy.request.SessionManager)
            db_session.commit()

    will log:

        2011-06-21 09:16:48,991 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL SELECT ..... FROM "Clients_Users" WHERE "Clients_Users".username = %(username_1)s AND "Clients_Users".password = %(password_1)s LIMIT 1 OFFSET 0
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL {'password_1': '123', 'username_1': u'1'}
        2011-06-21 09:16:49,047 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,067 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,071 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT
        2011-06-21 09:16:49,093 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,108 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT

    Has anyone seen this before? P.S. This doesn't happen in the rest of the modules I have made.

    Read the article

  • mysql: inserting data and autoincrement

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? And if I do not transfer the AutoNumber data from Access, how do I ensure that the column auto-increments properly?
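    A sketch of how MySQL behaves here (mytable and its columns are placeholders): explicit values are accepted in an AUTO_INCREMENT column, so carrying over the Access AutoNumber values keeps any existing references intact, and the counter then continues past the largest inserted id on its own.

        INSERT INTO mytable (id, name) VALUES (17, 'carried over');  -- explicit id is allowed
        INSERT INTO mytable (name) VALUES ('new row');               -- id becomes 18 (or higher)
        -- If needed, the counter can also be set by hand:
        ALTER TABLE mytable AUTO_INCREMENT = 1000;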

    Read the article

  • Get smallest date for each element in access query

    - by skerit
    So I have a table containing different elements and dates. It basically looks like this:

        actieElement   beginDatum
        1              1/01/2010
        1              1/01/2010
        1              10/01/2010
        2              1/02/2010
        2              3/02/2010

    What I now need is the smallest date for every actieElement. I've found a solution using a simple GROUP BY statement, but that way the query loses its scope and you can't change anything anymore. Without the GROUP BY statement I get multiple dates for every actieElement, because certain dates are the same. I thought of something like this, but it does not work either, as it would give the subquery more than one record:

        SELECT s1.actieElement, s1.begindatum
        FROM tblActieElementLink AS s1
        WHERE s1.actieElement = (
            SELECT TOP 1 s2.actieElement
            FROM tblActieElementLink s2
            WHERE s1.actieElement = s2.actieElement
            ORDER BY s2.begindatum ASC);
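    The correlation is usually done on the date rather than the key, sketched below in Access SQL; note that ties on the minimum date will still return more than one row per actieElement.

        SELECT s1.actieElement, s1.beginDatum
        FROM tblActieElementLink AS s1
        WHERE s1.beginDatum = (
            SELECT MIN(s2.beginDatum)
            FROM tblActieElementLink AS s2
            WHERE s2.actieElement = s1.actieElement);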

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one record in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one select only one record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching after one record is found? Or am I dreaming this? Thanks
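    A hedged correction of the premise, with a sketch: changing the select list to 1 alters what is returned per row, not how many rows are scanned; LIMIT 1 is what lets the scan stop at the first match, and an index removes the scan entirely. Backticks are added below because table is a reserved word, and 123 stands in for $classified_id.

        SELECT * FROM `table` WHERE classified_id = 123 LIMIT 1;
        -- And, if it does not exist yet:
        CREATE INDEX idx_classified_id ON `table` (classified_id);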

    Read the article

  • Regular Expression to parse SQL Structure

    - by user351429
    I am trying to parse the MySQL data types returned by DESCRIBE [TABLE]. It returns strings like:

        int(11)
        float
        varchar(200)
        int(11) unsigned
        float(6,2)

    I've tried to do the job using regular expressions, but it's not working. PHP code:

        $string = "int(11) numeric";
        $regex = '/(\w+)\s*(\w+)/';
        var_dump(preg_split($regex, $string));

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables:

        USERS
        GROUPS
        GROUP_USERS

    The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It has only two columns:

        USER_ID
        GROUP_ID

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem, since it's all just numbers anyway, but it feels quite wasteful to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there another solution I should be aware of?

    PS. I don't want to use an array column, even though they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
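    For reference, a sketch of the conventional shape (column names assumed from the description): the composite primary key is the uniqueness constraint, and its index already serves user-to-groups lookups, so the overhead worried about here is the normal cost of a junction table rather than waste.

        CREATE TABLE group_users (
            user_id  INTEGER NOT NULL REFERENCES users(id),
            group_id INTEGER NOT NULL REFERENCES groups(id),
            PRIMARY KEY (user_id, group_id)   -- doubles as the UNIQUE constraint
        );
        -- Reverse lookups (all users of a group) get their own index:
        CREATE INDEX idx_group_users_group ON group_users (group_id);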

    Read the article

  • How can I add an extra select to this query?

    - by BulgedSnowy
    I have three related tables:

        images: id | filename | filesize | ...
        nodes:  image_id | tag_id
        tags:   id | name

    And I'm using this query to search for images carrying x tags:

        SELECT images.*
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
        GROUP BY images.id
        HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags attached to each retrieved image, and I need this in the same query. This is the actual query which retrieves all tags attached to an image:

        SELECT tag
        FROM nodes
        JOIN tags ON nodes.tag_id = tags.id
        WHERE image_id = images.id AND nodes.private = images.private
        ORDER BY tag

    How can I mix the two so as to have only one query?
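    One way to fold both into a single statement, sketched under the assumption that MySQL's GROUP_CONCAT is acceptable (the question's schema lists the tag column as name, though its queries also call it tag):

        SELECT images.*,
               GROUP_CONCAT(DISTINCT all_tags.name ORDER BY all_tags.name) AS all_image_tags
        FROM images
        JOIN nodes ON images.id = nodes.image_id
        JOIN tags  ON nodes.tag_id = tags.id
        JOIN nodes AS all_nodes ON images.id = all_nodes.image_id
        JOIN tags  AS all_tags  ON all_nodes.tag_id = all_tags.id
        WHERE tags.name IN ('tag1', 'tag2')
        GROUP BY images.id
        HAVING COUNT(DISTINCT tags.name) = 2;  -- both search tags must match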

    Read the article

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application: a WCF service with a console host, plus the client application that will connect to it, both on a single USB device. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy, and this works well from USB. But getting the service to connect to the database isn't. I've found this site, suggesting that I should modify machine.config, but that stops the XCopy deployment. This project cannot change any setting of the PC, so this suggestion is bad. I cannot create a deployment setup either. The whole thing just needs to run from a USB disk. So, how do I get it to run? (The service just selects a list of names from the database and returns them to the client. If this POC works, it will do far more complex things!)

    Read the article
