Search Results

Search found 28930 results on 1158 pages for 'sql ce'.


  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but still it seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow/how to make it faster?
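    One detail worth noting in the plan: the sequential scan on files returns 805k rows but is estimated at 1.14M and by itself accounts for over 60 of the 74 seconds, which often points at table bloat or stale statistics rather than the join. A minimal diagnostic sketch, assuming maintenance access to the database (the work_mem value is only an illustrative assumption, not a recommendation):

        -- Refresh planner statistics and reclaim dead space in the large table
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;

        -- Re-check the plan afterwards
        EXPLAIN ANALYZE
        SELECT f.filetype, avg(pr.size), stddev(pr.size)
        FROM files f
        JOIN properties pr ON pr.file_id = f.id
        GROUP BY f.filetype;

        -- If the hash join still spills to disk, more memory for the session may help
        SET work_mem = '64MB';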

    Read the article

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id   OriginalCol      ColRSSIz
        110001366         Frederick Road   -108
        110001366         Steel Street
        110001366         Fifth Ave.
        110001508         Steel Street     -104

    What I want to do is put the top 3 OriginalCol/ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance.

    Additional info: by "top 3 OriginalCol, ColRSSIz combinations" I mean the first 3 combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 to 6 rows of information, so at most I'm only wanting the top 3. If there are fewer than 3 rows for the readings_miu_id then the other columns need to be blank. Query that generates the table "analyzed":

        strSql4 = " SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, " & _
                  "First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, " & _
                  "First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, " & _
                  "First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, " & _
                  "First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, First(Col2z) as Col2, " & _
                  "First(Col2RSSIz) as Col2RSSI, First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, " & _
                  "First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, First(FreqCorrz) as FreqCorr, " & _
                  "First(Activez) as Active, First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, " & _
                  "First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, " & _
                  "First(PremIDz) as PremID, First(prem_group1z) as prem_group1, First(prem_group2z) as prem_group2, " & _
                  "First(ReadIDz) as ReadID, First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "
        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True
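    One way to get the top 3 rows per readings_miu_id without a ranking function (Access SQL has no ROW_NUMBER) is a correlated count; a sketch, under the assumption that ColRSSIz values within one readings_miu_id are distinct:

        SELECT a.readings_miu_id, a.OriginalCol, a.ColRSSIz
        FROM analyzedCopy3 AS a
        WHERE (SELECT COUNT(*)
               FROM analyzedCopy3 AS b
               WHERE b.readings_miu_id = a.readings_miu_id
                 AND b.ColRSSIz > a.ColRSSIz) < 3
        ORDER BY a.readings_miu_id, a.ColRSSIz DESC;

    The up-to-three rows this returns per readings_miu_id would still need to be pivoted into the Col1/Col2/Col3 style columns of analyzed, for example with conditional aggregation or a crosstab query.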

    Read the article

  • postgres subquery w/ derived column

    - by Wells
    The following query won't work, but it should be clear what I'm trying to do: split the value of t on space and use the last element in that array in the subquery (as it will match tl). Any ideas how to do this? Thanks!

        SELECT t, y, "type",
               regexp_split_to_array(t, ' ') AS t_array,
               sum(dr),
               (SELECT uz FROM f.tfa WHERE tl = t_array[-1]) AS uz,
               sc
        FROM padres.yd_fld
        WHERE y = 2010 AND pos <> 0
        GROUP BY t, y, "type", sc;
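    Two things usually trip this up: a SELECT-list alias such as t_array cannot be referenced elsewhere in the same SELECT, and Postgres arrays are 1-based with no negative subscripts, so the last element has to be reached through array_upper. A sketch along those lines (the yd table alias is an assumption):

        SELECT t, y, "type",
               sum(dr),
               (SELECT uz
                FROM f.tfa
                WHERE tl = (regexp_split_to_array(yd.t, ' '))
                           [array_upper(regexp_split_to_array(yd.t, ' '), 1)]) AS uz,
               sc
        FROM padres.yd_fld AS yd
        WHERE y = 2010 AND pos <> 0
        GROUP BY t, y, "type", sc;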

    Read the article

  • Using LEFT JOIN to only select one joined row

    - by Alex
    I'm trying to LEFT JOIN two tables, to get a list of all rows from TABLE_1 and ONE related row from TABLE_2. I have tried LEFT JOIN and GROUP BY c_id, however I want the related row from TABLE_2 to be sorted by isHeadOffice DESC. Here are some sample tables:

        TABLE 1
        c_id   Name
        ----------------
        1      USA
        2      Canada
        3      England
        4      France
        5      Spain

        TABLE2
        o_id   c_id   Office       isHeadOffice
        ------------------------------------------------
        1      1      New York     1
        2      1      Washington   0
        3      1      Boston       0
        4      2      Toronto      0
        5      3      London       0
        6      3      Manchester   1
        7      4      Paris        1
        8      4      Lyon         0

    So what I am trying to get from this would be something like:

        RESULTS
        c_id   Name      Office
        ----------------------------
        1      USA       New York
        2      Canada    Toronto
        3      England   Manchester
        4      France    Paris
        5      Spain     NULL

    I'm using PHP & MySQL. Any ideas?
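    Since the wanted office is just "the head office if one exists, otherwise whatever office there is", a correlated subquery in the select list is one straightforward option in MySQL; a sketch (assuming the tables really are named TABLE_1 and TABLE_2):

        SELECT t1.c_id,
               t1.Name,
               (SELECT t2.Office
                FROM TABLE_2 t2
                WHERE t2.c_id = t1.c_id
                ORDER BY t2.isHeadOffice DESC
                LIMIT 1) AS Office
        FROM TABLE_1 t1;

    Spain has no offices, so its subquery returns no row and the column comes back NULL, which matches the expected results.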

    Read the article

  • Find all those columns which have only null values, in a MySQL table

    - by Robin v. G.
    The situation is as follows: I have a substantial number of tables, each with a substantial number of columns. I need to deal with this old and to-be-deprecated database for a new system, and I'm looking for a way to eliminate all columns that have - apparently - never been in use. I wanna do this by filtering out all columns that have a value on any given row, leaving me with a set of columns where the value is NULL in all rows. Of course I could manually sort every column descending, but that'd take too long as I'm dealing with loads of tables and columns. I estimate it to be 400 tables with up to 50 (!) columns per table. Is there any way I can get this information from the information_schema?

    EDIT: Here's an example:

        column_a   column_b   column_c   column_d
        NULL       NULL       NULL       1
        NULL       1          NULL       1
        NULL       1          NULL       NULL
        NULL       NULL       NULL       NULL

    The output should be 'column_a' and 'column_c', for being the only columns without any filled in values.
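    information_schema only knows whether a column is nullable, not whether every stored value is NULL, so the usual trick is to let it generate one check statement per column and then run that generated script. A sketch (the schema name is a placeholder):

        SELECT CONCAT('SELECT ''', table_name, '.', column_name, ''' AS empty_column ',
                      'FROM `', table_name, '` ',
                      'HAVING COUNT(`', column_name, '`) = 0;') AS check_stmt
        FROM information_schema.columns
        WHERE table_schema = 'your_schema'
          AND is_nullable = 'YES';

    COUNT(col) ignores NULLs, so each generated statement returns a row only when that column contains nothing but NULLs.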

    Read the article

  • Monitoring Log Shipped Databases

    - by Registered User
    I need a consistent way to monitor databases that are read-only log shipped copies of production databases. In the past I have relied on the following methods:

        - Set the job that restores logs to the database to kick off another job as its last step.
        - Set the job that restores logs to the database to insert a record in a control table as its last step.
        - Query the msdb database to check the status of the job that restores logs to the database.
        - Query a control table inside the database itself that gets a value immediately before transaction logs are backed up.
        - Query MAX values from tables inside the database to see if it has recent changes.

    Although the above methods work, they can't be implemented for every log shipped database that I query, for various reasons. What is the best method for monitoring the "data as of" date for a log shipped database?
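    If the secondaries use the built-in log shipping, msdb on the secondary server already tracks the restore point; a sketch of the kind of query this usually means (table names as in SQL Server 2005+, but treat this as a starting point rather than a guaranteed schema):

        -- On the secondary server: last restored backup and its latency per database
        SELECT secondary_database,
               last_restored_date,
               last_restored_latency
        FROM msdb.dbo.log_shipping_monitor_secondary;

        -- Alternative that works for any restored database, log shipped or not
        SELECT rh.destination_database_name,
               MAX(rh.restore_date) AS last_restore_date
        FROM msdb.dbo.restorehistory AS rh
        GROUP BY rh.destination_database_name;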

    Read the article

  • How do I go about linking web content in a database with a nested set model?

    - by wb
    My nested set table is as follows:

        create table depts (
            id int identity(0, 1) primary key
            , lft int
            , rgt int
            , name nvarchar(60)
            , abbrv nvarchar(20)
        );

    Test departments:

        insert into depts (lft, rgt, name, abbrv) values (1, 14, 'root', 'r');
        insert into depts (lft, rgt, name, abbrv) values (2, 3, 'department 1', 'd1');
        insert into depts (lft, rgt, name, abbrv) values (4, 5, 'department 2', 'd2');
        insert into depts (lft, rgt, name, abbrv) values (6, 13, 'department 3', 'd3');
        insert into depts (lft, rgt, name, abbrv) values (7, 8, 'sub department 3.1', 'd3.1');
        insert into depts (lft, rgt, name, abbrv) values (9, 12, 'sub department 3.2', 'd3.2');
        insert into depts (lft, rgt, name, abbrv) values (10, 11, 'sub sub department 3.2.1', 'd3.2.1');

    My web content table is as follows:

        create table content (
            id int identity(0, 1)
            , dept_id int
            , page_name nvarchar(60)
            , content ntext
        );

    Test content:

        insert into content (dept_id, page_name, content) values (3, 'index', '<h2>welcome to department 3!</h2>');
        insert into content (dept_id, page_name, content) values (4, 'index', '<h2>welcome to department 3.1!</h2>');
        insert into content (dept_id, page_name, content) values (6, 'index', '<h2>welcome to department 3.2.1!</h2>');
        insert into content (dept_id, page_name, content) values (2, 'what-doing', '<h2>what is department 2 doing?/h2>');

    I'm trying to query the correct page content (from the content table) based on the URL given. I can easily accomplish this task with a root department. However, querying a department with multiple depths is proving to be a little harder. For example:

        http://localhost/departments.asp?d3/            (should return <h2>welcome to department 3!</h2>)
        http://localhost/departments.asp?d2/what-doing  (should return <h2>what is department 2 doing?</h2>)

    I'm not sure if this can be done in one query or if there will need to be a recursive function of some sort. Also, if there is nothing after the last / then assume we want the index page. How can this be accomplished? Thank you.
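    If the URL is parsed so that the last department segment and the page name arrive as parameters, the lookup itself needs no recursion; the nested set columns only come into play if the rest of the URL has to be validated as that department's ancestor chain. A sketch of both, assuming departments.asp supplies @abbrv and @page (parameter names are made up here):

        -- Content for the department whose abbreviation is the last URL segment
        SELECT c.content
        FROM depts AS d
        JOIN content AS c ON c.dept_id = d.id
        WHERE d.abbrv = @abbrv
          AND c.page_name = @page;   -- pass 'index' when the URL ends with /

        -- Optional: list the department's ancestors to check the full URL path
        SELECT p.abbrv
        FROM depts AS d
        JOIN depts AS p ON d.lft BETWEEN p.lft AND p.rgt
        WHERE d.abbrv = @abbrv
        ORDER BY p.lft;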

    Read the article

  • MySQL COUNT() multiple columns

    - by liam
    Hello, I'm trying to fetch the most popular tags from all videos in my database (ignoring blank tags). I also need the 'flv' for each tag. I have this working as I want if each video has one tag:

        SELECT tag_1, flv, COUNT(tag_1) AS tagcount
        FROM videos
        WHERE NOT tag_1=''
        GROUP BY tag_1
        ORDER BY tagcount DESC
        LIMIT 0, 10

    However in my database, each video is allowed three tags - tag_1, tag_2 and tag_3. Is there a way to get the most popular tags reading from multiple columns? The record structure is:

        +-------+--------------+------+-----+---------+----------------+
        | Field | Type         | Null | Key | Default | Extra          |
        +-------+--------------+------+-----+---------+----------------+
        | id    | int(11)      | NO   | PRI | NULL    | auto_increment |
        | flv   | varchar(150) | YES  |     | NULL    |                |
        | tag_1 | varchar(75)  | YES  |     | NULL    |                |
        | tag_2 | varchar(75)  | YES  |     | NULL    |                |
        | tag_3 | varchar(75)  | YES  |     | NULL    |                |
        +-------+--------------+------+-----+---------+----------------+
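    The usual approach is to unpivot the three tag columns with UNION ALL and aggregate over the result; a sketch (the count is per tag, so each tag keeps one representative flv via MIN purely as an example of how to carry that column along):

        SELECT tag, COUNT(*) AS tagcount, MIN(flv) AS sample_flv
        FROM (
            SELECT flv, tag_1 AS tag FROM videos
            UNION ALL
            SELECT flv, tag_2 FROM videos
            UNION ALL
            SELECT flv, tag_3 FROM videos
        ) AS all_tags
        WHERE tag IS NOT NULL AND tag <> ''
        GROUP BY tag
        ORDER BY tagcount DESC
        LIMIT 0, 10;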

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to OR mappers that are largely manual (homegrown and e.g. NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapped types in order for things to work when updating the database. Thanks!

    Read the article

  • Should I Use GUID or IDENTITY as Thread Number?

    - by user311509
    offerID is the thread # which represents the thread posted. I see in forums that posts are represented by random numbers. Is this achieved by IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.

        CREATE TABLE Offer (
            offerID int IDENTITY (4382,15) PRIMARY KEY,
            memberID int NOT NULL REFERENCES Member(memberID),
            title nvarchar(200) NOT NULL,
            thread nvarchar(max) NOT NULL,
            .
            .
            .
        );
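    For a public thread number either approach can work; the trade-off is roughly the following, sketched with throwaway table names:

        -- Option 1: IDENTITY gives compact, roughly sequential integers;
        -- the seed/increment only disguise the pattern, they don't make it random
        CREATE TABLE OfferA (
            offerID int IDENTITY(4382,15) PRIMARY KEY,
            title   nvarchar(200) NOT NULL
        );

        -- Option 2: GUIDs look random but make wider keys and indexes;
        -- NEWSEQUENTIALID() as a default reduces index fragmentation vs NEWID()
        CREATE TABLE OfferB (
            offerID uniqueidentifier NOT NULL
                    CONSTRAINT DF_OfferB_ID DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
            title   nvarchar(200) NOT NULL
        );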

    Read the article

  • What's the best way to access a MS Access database using PHP?

    - by Jack Roscoe
    Hi, I need to access some data from an MS Access database and retrieve some data from it using PHP. I've looked around the web, and found the following line which seems to correctly connect to the database:

        $conn->Open("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\wamp\www\data\MYDB.mdb");

    However, I have tried to retrieve some data in the following way:

        $query = "SELECT pageid FROM pages_table";
        $result = mysqli_query($conn, $query);
        $amount_of_pages = 0;
        if(mysqli_num_rows($result) <= 0)
            echo "No results found.";
        else
            while($row = mysqli_fetch_array($result, MYSQL_ASSOC))
                $amount_of_pages++;

    And was presented with the following errors:

        Warning: mysqli_query() expects parameter 1 to be mysqli, object given in C:\wamp\www\data\index.php on line 19
        Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, null given in C:\wamp\www\data\index.php on line 23
        No results found.

    I don't really understand the connection to the Access database, is there something I should be doing differently? Thanks in advance for any help.

    Read the article

  • How to join two queries in SQL (Oracle)

    - by MAHESH A SONI
    How can I join these queries?

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE='CA' AND RCTAMOUNT>0
        GROUP BY RCTDT

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE='CQ' AND RCTAMOUNT>0
        GROUP BY RCTDT
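    Since the two queries differ only in RCTTYPE, they can be collapsed into one pass over the table; a sketch using conditional aggregation so each type gets its own columns per date:

        SELECT RCTDT,
               SUM(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT ELSE 0 END) AS ca_amount,
               COUNT(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT END)      AS ca_count,
               SUM(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT ELSE 0 END) AS cq_amount,
               COUNT(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT END)      AS cq_count
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE IN ('CA', 'CQ')
          AND RCTAMOUNT > 0
        GROUP BY RCTDT;

    If separate rows per type are acceptable, simply adding RCTTYPE to the SELECT list and GROUP BY (with RCTTYPE IN ('CA','CQ') in the WHERE clause) does the same job.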

    Read the article

  • How do I get every nth row in a table, or how do I break up a subset of a table into sets or rows of

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to be able to perform a query that returns me a set of ranges for a given type broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back representing values I can use in a BETWEEN clause in subsequent queries to query the DB 1000 records at a time. My initial attempt was something like this:

        SELECT id, rownum
        FROM CONTENT_TABLE
        WHERE type_id = ?
          AND mod(rownum, ?) = 0

    But this doesn't work.
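    One reason the mod(rownum, n) filter fails is that rownum is assigned only to rows that survive the WHERE clause: the first candidate row gets rownum 1, is rejected, rownum never advances, and the query returns nothing. Numbering the rows first and then taking MIN/MAX per page gives the BETWEEN boundaries directly; a sketch in Oracle-style SQL, with type 1 and a page size of 1000 as the example values:

        SELECT CEIL(rn / 1000)  AS page,
               MIN(id)          AS range_start,
               MAX(id)          AS range_end
        FROM (
            SELECT id,
                   ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM CONTENT_TABLE
            WHERE type_id = 1
        )
        GROUP BY CEIL(rn / 1000)
        ORDER BY page;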

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How to obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PGSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} into the database level. I can split a value into an array of words with the Postgres string_to_array or regexp_split_to_array functions, but then how do I get its last element?
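    Postgres arrays are 1-based and do not support negative subscripts, so the last element is reached through array_upper on the same expression; a sketch of pushing sort_by {|x| x.split[-1]} into the ORDER BY (table and column names are placeholders):

        SELECT *
        FROM some_table
        ORDER BY (string_to_array(some_column, ' '))
                 [array_upper(string_to_array(some_column, ' '), 1)];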

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in a M-D rela

    - by Simon
    My application has one table called 'events', and each event has approx 30 standard fields, but also user-defined fields that could be any name or type, in an 'eventdata' table. Users can define these event data tables by specifying x number of fields (either text/double/datetime/boolean) and the names of these fields. This 'eventdata' (table) can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in a M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB - I cannot simply join my master 'events' table with different 'eventdata' tables for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database considering I have no idea how many user-defined fields there may be and what type they will be?
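    A join target cannot be chosen per row in standard SQL, so the usual alternative is a single entity-attribute-value style detail table that holds every user-defined field for every event; a design sketch (all table and column names here, including events.id, are invented for illustration):

        -- Which user-defined fields a given event has
        CREATE TABLE event_field_defs (
            field_id   INT PRIMARY KEY,
            event_id   INT NOT NULL,           -- or an event_type_id, depending on the model
            field_name VARCHAR(100) NOT NULL,
            field_type VARCHAR(10)  NOT NULL   -- 'text' | 'double' | 'datetime' | 'boolean'
        );

        -- The values: one row per event/field pair, with a typed column per supported type
        CREATE TABLE event_field_values (
            event_id   INT NOT NULL,
            field_id   INT NOT NULL,
            text_val   VARCHAR(4000) NULL,
            double_val FLOAT NULL,
            date_val   DATETIME NULL,
            bool_val   BIT NULL,
            PRIMARY KEY (event_id, field_id)
        );

        -- One round trip: events plus whatever custom values they have
        SELECT e.*, d.field_name, d.field_type,
               v.text_val, v.double_val, v.date_val, v.bool_val
        FROM events e
        LEFT JOIN event_field_values v ON v.event_id = e.id
        LEFT JOIN event_field_defs  d ON d.field_id = v.field_id;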

    Read the article

  • Regular Expression to parse SQL Structure

    - by user351429
    I am trying to parse the MySQL data types returned by "DESCRIBE [TABLE]". It returns strings like:

        int(11)
        float
        varchar(200)
        int(11) unsigned
        float(6,2)

    I've tried to do the job using regular expressions but it's not working. PHP code:

        $string = "int(11) numeric";
        $regex = '/(\w+)\s*(\w+)/';
        var_dump( preg_split($regex, $string) );

    Read the article

  • Why might one app connect to SQL backend OK and a second app fail if they share the same connectionstring?

    - by hawbsl
    Trying to figure out a SQL connection error 26 in our app. We've got two closely related apps, Foo and FooAddIn. Foo is a WinForms app built in VS2010; it runs fine and connects fine to our SQLExpress back end. FooAddIn is an Outlook add-in which references Foo.exe and connects to the same SQL Express instance. Or rather, it doesn't connect, instead reporting:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Now, both apps share the same connection string and we've verified they really do share the same connection string. At this stage we're just testing from within the same developer machine, so the apps are on the same machine, going via the same VS2010 IDE. So a lot of the advice online for this error doesn't apply, because the fact that Foo connects through to SQL Express tells us the database is there and available and can be reached. What else is there to check? One thing is that Foo and FooAddIn are running different runtime versions of System.Data (v2.0.50727 and v4.0.30319). Could that be a factor?

    Read the article

  • Date type in oracle does not include time values

    - by Matt
    I have a PHP application using an Oracle XE database. Whenever I add a date, the hours, minutes, and seconds seem to get left out. Is there some special format or type I should use to be able to store this? I have tried using to_date and specifying the format I am using. Many thanks for any suggestions from this confused MySQL developer.
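    Oracle's DATE type actually stores the time of day down to the second; what usually hides it is the session's default display format. A sketch of checking and round-tripping the time portion (table and column names are placeholders):

        -- Show the full value rather than the default date-only format
        SELECT TO_CHAR(created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_at
        FROM my_table;

        -- Insert with an explicit format mask so the time is kept
        INSERT INTO my_table (created_at)
        VALUES (TO_DATE('2010-05-14 13:45:30', 'YYYY-MM-DD HH24:MI:SS'));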

    Read the article

  • Is this use of PreparedStatements in a Thread in JAVA correct?

    - by Gormcito
    I'm still an undergrad just working part time, and so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread of the program would spawn "task" threads (for each db "task" record) which would perform some operations and then update the record to say that it has finished. Therefore I needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects. This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions:

        Thread A: stmt.setInt();
        Thread B: stmt.setInt();
        Thread A: stmt.execute();
        Thread B: stmt.execute();

    A's version never gets executed. Is this thread safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?

        public class ThreadedTask implements Runnable {
            private final PreparedStatement taskCompleteStmt;

            public ThreadedTask() {
                //...
                taskCompleteStmt = Main.db.prepareStatement(...);
            }

            public void run() {
                //...
                taskCompleteStmt.executeUpdate();
            }
        }

        public class Main {
            public static final Connection db = DriverManager.getConnection(...);
        }

    Read the article

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B; both have columns Field1 and Field2. However, Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A where Field1 of table A matches Field1 of table B. Something like:

        update tableB
        set Field2 = tableA.Field2
        where tableA.Field1 = tableB.Field1

    The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
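    The missing piece is that the UPDATE needs its own join to tableA; the exact syntax depends on the engine, so here is a sketch of the two most common dialects (pick whichever matches the database in use):

        -- SQL Server style
        UPDATE b
        SET b.Field2 = a.Field2
        FROM tableB AS b
        JOIN tableA AS a ON a.Field1 = b.Field1;

        -- MySQL style
        UPDATE tableB AS b
        JOIN tableA AS a ON a.Field1 = b.Field1
        SET b.Field2 = a.Field2;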

    Read the article
