Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

Page 694/1278 | < Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and only show the relevant columns for that type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can you find that out with queries? Is there something like SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]? I know I COMPLETELY made that syntax up... I'm just trying to get the idea across. Thanks in advance!
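
    One way to check this relies on the fact that COUNT(column) ignores NULLs. The sketch below uses hypothetical table and column names, so adapt it to the real schema:

        -- Hedged sketch: COUNT(col) counts only non-NULL values, so a result of 0
        -- means the column holds nothing but NULLs for that type.
        SELECT COUNT(col_a) AS col_a_non_nulls,
               COUNT(col_b) AS col_b_non_nulls,
               COUNT(col_c) AS col_c_non_nulls
        FROM   my_table
        WHERE  type_col = 'SOME_TYPE';

    Any column that comes back as 0 would be left out of that type's view.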

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select one record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching after the first match? Or am I dreaming this? Thanks
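
    For what it's worth, SELECT 1 only changes which columns are returned, not how many rows are examined. A rough sketch of the usual fixes, with assumed table and column names:

        -- Hedged sketch: an index lets MySQL locate the row without a full scan,
        -- and LIMIT 1 tells it to stop after the first matching row.
        CREATE INDEX idx_classified_id ON classifieds (classified_id);

        SELECT *
        FROM   classifieds
        WHERE  classified_id = 123
        LIMIT  1;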

    Read the article

  • Getting "too many rows" error inside a "for" cursor loop

    - by Will
    I have a trigger that contains two cursor loops, one nested inside the other, like this:

        FOR outer_rec IN outer_cursor LOOP
          FOR inner_rec IN inner_cursor LOOP
            -- Do some calculations
          END LOOP;
        END LOOP;

    Somewhere in this it is throwing the following error:

        ORA-01422: exact fetch returns more than requested number of rows

    I've been trying to determine where it's coming from for an hour or so, but shouldn't this error never happen here? Also, I am assuming the inner loop automatically closes and opens itself again every time the outer loop moves to the next record; I hope this is correct.
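
    For reference, ORA-01422 is raised by a SELECT ... INTO that returns more than one row, not by a cursor FOR loop, so the culprit is most likely a lookup inside the loop body. A hedged illustration with hypothetical names:

        DECLARE
          v_rate  my_table.rate%TYPE;
        BEGIN
          SELECT rate
          INTO   v_rate
          FROM   my_table
          WHERE  status = 'OPEN';   -- raises ORA-01422 if more than one row matches
        EXCEPTION
          WHEN TOO_MANY_ROWS THEN
            v_rate := NULL;         -- handle or re-raise as appropriate
        END;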

    Read the article

  • Writing an SQL query

    - by Praveen Prasad
    I have 2 tables.

    Table 1, Items (this table holds all items I have):

        itemId
        ---------
        Item1
        Item2
        Item3
        Item4
        Item5

    Table 2, users_item relation:

        UserId || ItemId
        1      || Item1
        1      || Item2

    User 1 has stored 2 items, Item1 and Item2. Now I want to write a query on table 1 (the Items table) so that it displays all items which user 1 has NOT chosen.
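
    A sketch of the usual anti-join pattern, assuming the relation table is named users_item_relation (the real name may differ):

        SELECT i.itemId
        FROM   Items i
        WHERE  NOT EXISTS (
                 SELECT 1
                 FROM   users_item_relation r
                 WHERE  r.ItemId = i.itemId
                 AND    r.UserId = 1
               );

    A LEFT JOIN with WHERE r.ItemId IS NULL is an equivalent formulation.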

    Read the article

  • Extract time part from TimeStamp column in ORACLE

    - by RRUZ
    Currently I am using MyTimeStampField - TRUNC(MyTimeStampField) to extract the time part from a timestamp column in Oracle.

        SELECT CURRENT_TIMESTAMP - TRUNC(CURRENT_TIMESTAMP) FROM DUAL

    This returns +00 13:12:07.100729, which works fine for extracting the time part from a timestamp field, but I'm wondering if there is a better way (maybe using a built-in Oracle function) to do this. Thanks.
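
    If a string result is acceptable, Oracle's TO_CHAR with a fractional-seconds format is the usual built-in alternative (a hedged example):

        -- Returns just the time portion as text, e.g. 13:12:07.100729
        SELECT TO_CHAR(CURRENT_TIMESTAMP, 'HH24:MI:SS.FF6') FROM DUAL;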

    Read the article

  • Linq2SQL or EntityFramework and databinding

    - by rene marxis
    Is there some way to do databinding with LINQ to SQL or Entity Framework using "typed links" to the bound property?

        Public Class Form1
            Dim db As New MESDBEntities 'datacontext/ObjectContext
            Dim bs As New BindingSource

            Private Sub Form1_Load(sender As System.Object, e As System.EventArgs) Handles MyBase.Load
                bs.DataSource = (From m In db.PROB_GROUP Select m)
                grid.DataSource = bs
                TextBox1.DataBindings.Add("Text", bs, "PGR_NAME")
                TextBox1.DataBindings.Add("Text", bs, db.PROB_GROUP) '<--- Something like this
            End Sub
        End Class

    I'd like to get compile-time type checking when the model changes.

    Read the article

  • How to join two queries in SQL (Oracle)

    - by MAHESH A SONI
    How can I join these two queries?

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009' AND RCTTYPE='CA' AND RCTAMOUNT>0
        GROUP BY RCTDT

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009' AND RCTTYPE='CQ' AND RCTAMOUNT>0
        GROUP BY RCTDT
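
    Since the two statements differ only in the RCTTYPE value, one hedged way to combine them is a single grouped query over both types:

        SELECT RCTDT,
               RCTTYPE,
               SUM(RCTAMOUNT)   AS total_amount,
               COUNT(RCTAMOUNT) AS receipt_count
        FROM   RECEIPTS4
        WHERE  RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
        AND    RCTTYPE IN ('CA', 'CQ')
        AND    RCTAMOUNT > 0
        GROUP  BY RCTDT, RCTTYPE;

    If the two result sets really need to stay separate, a UNION ALL of the original queries (each selecting a literal type column) also works.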

    Read the article

  • Query for multiple joins

    - by Shailaja
    I have 3 tables named dataset, dataelem and transformdataelem, with column names as below:

        main.Dataset
        ------------
        datasetID (PK)
        applicationID

        main.Dataelem
        -------------
        dataelemID (PK)
        datasetID (FK)
        dataelemname
        biztermID

        main.Transformdataelem
        ----------------------
        OutputdataelemID
        InputdataelemID

    My requirement is: all tables are referenced. Extract all the dataelemID rows from the dataelem table where the applicationID of the dataset table is equal to 1044 and biztermID is null. Then the resulting dataelemIDs should be matched with OutputdataelemID of the Transformdataelem table to get the respective InputdataelemIDs. Finally, with these matched InputdataelemIDs, we should get the dataelemnames from the dataelem table.
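
    A sketch that follows those three steps in one statement (column names taken from the question; verify them against the real schema):

        SELECT de_in.dataelemID,
               de_in.dataelemname
        FROM   main.Dataelem          de_out
        JOIN   main.Dataset           ds  ON ds.datasetID = de_out.datasetID
        JOIN   main.Transformdataelem t   ON t.OutputdataelemID = de_out.dataelemID
        JOIN   main.Dataelem          de_in ON de_in.dataelemID = t.InputdataelemID
        WHERE  ds.applicationID = 1044
        AND    de_out.biztermID IS NULL;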

    Read the article

  • Join condition in SQL Server

    - by Pallavi
    After applying a join condition on two tables, I want the records which are the maximum among the records of the left table. My query:

        SELECT a1.*, t.*,
               ( a1.trnratefrom - t.trnratefrom ) AS minrate,
               ( a1.trnrateto - t.trnrateto )     AS maxrate
        FROM (SELECT a.srno, trndate, b.trnsrno,
                     Upper(Rtrim(Ltrim(b.trnstate)))   AS trnstate,
                     Upper(Rtrim(Ltrim(b.trnarea)))    AS trnarea,
                     Upper(Rtrim(Ltrim(b.trnquality))) AS trnquality,
                     Upper(Rtrim(Ltrim(b.trnlength)))  AS trnlength,
                     Upper(Rtrim(Ltrim(b.trnunit)))    AS trnunit,
                     b.trnratefrom, b.trnrateto, a.remark, entdate
              FROM mstprodrates a
                   INNER JOIN trnprodrates b ON a.srno = b.srno) a1
             INNER JOIN (SELECT c.srno, trndate, d.trnsrno,
                                Upper(Rtrim(Ltrim(d.trnstate)))   AS trnstate,
                                Upper(Rtrim(Ltrim(d.trnarea)))    AS trnarea,
                                Upper(Rtrim(Ltrim(d.trnquality))) AS trnquality,
                                Upper(Rtrim(Ltrim(d.trnlength)))  AS trnlength,
                                Upper(Rtrim(Ltrim(d.trnunit)))    AS trnunit,
                                d.trnratefrom, d.trnrateto, c.remark, entdate
                         FROM mstprodrates c
                              INNER JOIN trnprodrates d ON c.srno = d.srno) AS t
               ON a1.trnstate = t.trnstate
              AND a1.trnquality = t.trnquality
              AND a1.trnunit = t.trnunit
              AND a1.trnlength = t.trnlength
              AND a1.trnarea = t.trnarea
              AND a1.remark = t.remark
        WHERE t.srno = (SELECT MAX(srno) FROM a1 WHERE srno < a1.srno)
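
    For "keep only the row with the highest srno per group", the usual SQL Server pattern is a windowed ROW_NUMBER rather than a correlated MAX subquery. A hedged sketch, with column names mirroring the question but not verified against the real schema:

        WITH ranked AS (
            SELECT a.srno, b.trnstate, b.trnarea, b.trnquality, b.trnlength, b.trnunit,
                   b.trnratefrom, b.trnrateto,
                   ROW_NUMBER() OVER (PARTITION BY b.trnstate, b.trnarea, b.trnquality,
                                                   b.trnlength, b.trnunit
                                      ORDER BY a.srno DESC) AS rn
            FROM   mstprodrates a
                   INNER JOIN trnprodrates b ON a.srno = b.srno
        )
        SELECT *
        FROM   ranked
        WHERE  rn = 1;

    Note also that a derived table alias such as a1 cannot be referenced in a later subquery's FROM clause, which is likely why the final WHERE in the original query fails.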

    Read the article

  • How can I add an extra select in this query?

    - by BulgedSnowy
    I have three related tables:

        images: id | filename | filesize | ...
        nodes:  image_id | tag_id
        tags:   id | name

    And I'm using this query to search for images containing x tags:

        SELECT images.*
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
        GROUP BY images.id
        HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags contained by each retrieved image, and I need this in the same query. This is the actual query which retrieves all tags contained by an image:

        SELECT tag
        FROM nodes
        JOIN tags ON nodes.tag_id = tags.id
        WHERE image_id = images.id AND nodes.private = images.private
        ORDER BY tag

    How can I mix these two to have only one query?
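
    One hedged way to fold the second query in is to aggregate the tag names per image inside the same grouped statement (column names guessed from the schema above, where the tag text column appears to be tags.name):

        -- images.id is the primary key, so grouping by it while selecting images.*
        -- is accepted by MySQL.
        SELECT images.*,
               GROUP_CONCAT(tags.name ORDER BY tags.name) AS tag_list
        FROM   images
        INNER JOIN nodes ON images.id = nodes.image_id
        INNER JOIN tags  ON tags.id = nodes.tag_id
        GROUP  BY images.id
        HAVING SUM(tags.name IN ('tag1', 'tag2')) = 2;

    The HAVING clause keeps only images carrying both search tags, while GROUP_CONCAT lists every tag attached to each of those images.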

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete and Insert. So far I can think of three different places where you can do the history table work:

        1. Triggers on the main table for update, insert and delete. (Database)
        2. Stored procedures. (Database)
        3. Application layer. (Application)

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
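
    For concreteness, a hedged sketch of what the trigger option might look like in SQL Server, with made-up table and column names:

        -- Copies affected rows into a history table together with the action and date.
        CREATE TRIGGER trg_orders_history
        ON orders
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            INSERT INTO orders_history (order_id, amount, action, modified_date)
            SELECT i.order_id, i.amount,
                   CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'Update' ELSE 'Insert' END,
                   GETDATE()
            FROM inserted i;

            INSERT INTO orders_history (order_id, amount, action, modified_date)
            SELECT d.order_id, d.amount, 'Delete', GETDATE()
            FROM deleted d
            WHERE NOT EXISTS (SELECT 1 FROM inserted);
        END;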

    Read the article

  • Concurrent usage of table causing issues

    - by Sven
    Hello. In our current project we are interfacing with a third-party data provider. They need to insert data into a table of ours. This inserting can be frequent: every 1 min, every 5 min, every 30 min, depending on the amount of new data they need to provide. They use the isolation level read committed. On our end we have an application, a Windows service, that calls a web service every 2 minutes to see if there is new data in this table. Our isolation level is repeatable read. We retrieve the records and update a column on these rows. Now the problem is that sometimes this third-party provider needs to insert a lot of data, let's say 5000 records. They do this per transaction (5 rows per transaction), but they don't close the connection. They do one transaction and then the next until all records are inserted. This causes issues for our process: we receive a timeout. If this goes on for a long time the database gets completely unstable. For instance, they may have stopped, but the table somehow still stays unavailable. When I try to do a select on the table, I get several records, but at a certain moment I don't get any response anymore. It just says retrieving data, but nothing comes back until I get a timeout exception. The only solution is to restart the database, and then I see the other records. How can we solve this? What is the ideal isolation level setting in this scenario?
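
    The symptoms sound like readers blocked behind the writers' locks. If this is SQL Server, row-versioning isolation is the usual suggestion, so readers see the last committed version instead of waiting (hedged sketch, hypothetical database name):

        -- Needs a moment with no other active connections on the database.
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

        -- Or per reading session, provided ALLOW_SNAPSHOT_ISOLATION is enabled:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    Shorter, properly committed write transactions on the provider's side would also reduce the blocking.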

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there has been no use for it). This script should be run at most once a day, for full rebuilds of the search index.
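
    On PostgreSQL 9.5 or later, one hedged option is to add a unique index on (content_type_id, object_id_int) and replace the DELETE plus INSERT pair inside the function with an upsert, so existing rows are updated in place rather than rewritten:

        -- One-time setup (assumes the pair really is unique, as stated above):
        -- CREATE UNIQUE INDEX watson_searchentry_ct_obj
        --     ON watson_searchentry (content_type_id, object_id_int);

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
               u.email || ' ' || m.normalized_name || ' ' || c.name, '', '{}'
        FROM   web_member m
        JOIN   web_user    u ON m.user_id = u.id
        JOIN   web_country c ON m.country_id = c.id
        WHERE  u.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int)
        DO UPDATE SET title   = EXCLUDED.title,
                      content = EXCLUDED.content;

    Rows for members that no longer exist would still need a separate DELETE, but the bulk of the table is no longer rewritten on every run.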

    Read the article

  • MySQL "NULL" questions

    - by Camran
    I have a table with several columns. Sometimes some of these column fields may be empty (i.e. I won't use them in some cases). My questions:

        1. Would it be smart to set them to NULL in phpMyAdmin?
        2. What does the "NULL" property actually do?
        3. Would I gain anything at all by setting them to NULL?
        4. Is it possible to use a nullable field the same way even though it is set to NULL?
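
    For the last point, a small hedged illustration of how a nullable column behaves (made-up table):

        -- A nullable column stores "no value" as NULL; such rows are matched with
        -- IS NULL rather than = NULL.
        CREATE TABLE example_items (
            id    INT UNSIGNED PRIMARY KEY,
            notes VARCHAR(255) NULL DEFAULT NULL
        );

        INSERT INTO example_items (id, notes) VALUES (1, 'has a note'), (2, NULL);

        SELECT * FROM example_items WHERE notes IS NULL;   -- returns row 2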

    Read the article

  • MySql Query to return number of photos in each album

    - by GivenPie
    My table is set up like this: all I need to do is run a query against my Photos table. I have PhotoID as the primary key and GalleryID as the foreign key to Gallery. How can I count the number of unique PhotoIDs for each of multiple GalleryIDs? So to speak, there are many duplicate GalleryIDs because there are many photos in a gallery. So I just need to count the number of unique PhotoIDs associated with each GalleryID. Can it be done in one query?
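
    A hedged one-query sketch using the names mentioned in the question:

        SELECT GalleryID,
               COUNT(DISTINCT PhotoID) AS photo_count
        FROM   Photos
        GROUP  BY GalleryID;

    Since PhotoID is the primary key it cannot repeat, so a plain COUNT(*) would give the same result; COUNT(DISTINCT ...) just makes the intent explicit.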

    Read the article

  • How to create unique user key

    - by Grayson Mitchell
    Scenario: I have a fairly generic table (Data) that has an identity column. The data in this table is grouped (let's say by city). The users need an identifier for printing on paper forms, etc. The users can only access their city's data, so if they use the identity column for this purpose they will see odd numbers (e.g. a 'New York' user might see 1, 37, 2028... as the listed keys). Ideally they would see 1, 2, 3... (or something similar). The problem of course is concurrency; this being a web application, you can't just have something like:

        UserId = Select Count(*)+1 from Data Where City='New York'

    Has anyone come up with any cunning ways around this problem?
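
    One commonly used workaround (hedged, SQL Server syntax, hypothetical counter table) is to keep a per-city counter and claim the next number in a single atomic statement rather than reading and then inserting:

        -- Increments the counter and returns the new value in one statement,
        -- so two concurrent requests cannot get the same number.
        UPDATE CityCounters
        SET    LastNumber = LastNumber + 1
        OUTPUT inserted.LastNumber AS NewUserKey
        WHERE  City = 'New York';

    If the number is only ever needed for display, ROW_NUMBER() OVER (ORDER BY Id) computed at query time avoids storing anything at all, at the cost of the numbers shifting when rows are deleted.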

    Read the article

  • FOSS version of SQLCompare or something similar?

    - by Scott
    Actually, free is good enough; it doesn't have to be open source :) I'm currently using the Schema Compare utility of VS2008, but it doesn't have a command-line interface and has some other weaknesses as well. I'm wondering what free tools others are using for command-line schema comparisons and synchronizations? Thanks.

    Read the article

  • Has anyone ever successfully made index merge work for MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'll be glad to see success stories here :)
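
    For what it's worth, with only a handful of rows the optimizer will almost always judge a full scan cheaper than an index_merge union, so a tiny test table cannot really demonstrate it. When the plan still refuses index_merge on real data, a hedged manual rewrite is the usual fallback:

        -- Each branch can use its own single-column index; UNION removes rows
        -- that match both predicates.
        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;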

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and, e.g., NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know if any of those methods are required by the Entity Framework to keep track of changes to the mapped types, in order for things to work when updating the database. Thanks!

    Read the article

  • Getting the count of matches in a query on a large table is very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35000000 records, so it's very large.

        seid  tiid
        -----------
        1     1
        2     2
        2     3
        2     4
        3     4
        4     1
        4     2

    The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values and I would like to get the seid with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result; they both have 2 matches on the tiid values. My query so far:

        SELECT COUNT(*) as c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) as c, seid
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC
                    LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
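
    On MySQL 8.0+ one hedged rewrite is to compute the per-seid counts once in a CTE and then keep the rows tied for the maximum, instead of running the grouped scan twice; a composite index on (tiid, seid) would let the whole query be resolved from the index:

        WITH counts AS (
            SELECT seid, COUNT(*) AS c
            FROM   items
            WHERE  tiid IN (1, 2, 3)
            GROUP  BY seid
        )
        SELECT seid, c
        FROM   counts
        WHERE  c = (SELECT MAX(c) FROM counts);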

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and I am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except that the colour scheme and site name are different, plus it's in PHP, so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different).

    The database is called tvguide. The programmes are House M.D., Medium, Police Stop! and American Dad!, and their programme tables are named housemdonair, mediumonair, policestopair and americandad1. The corresponding episode-guide tables are housemdepidata, mediumepidata, policestopepidata and americandad1epidata. All of them have the following columns: id (not an auto-increment, since I wish to dynamically generate a page from this), episodename, seriesnumber, episodenumber and episodesynopsis (the four after id do exactly as stated).

    I have a pagination script that works; it displays 20 records per page as I want it to. It is called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get URLs filled in like this (OK, the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!): http://library.digiguide.com/lib/episode/741168 and http://library.digiguide.com/lib/episode/714829, with the episode detail and data. My site works, but it's fairly basic, and it won't go online until my bugs are fixed. Mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456, http://mytvguide.com/programme/123456 or http://mytvguide.com/WorldInAction/123456/Documentary/. I've tried looking on Google, but I am not sure how to get this TV guide script to work at its best; I think .tpl plus PHP/MySQL is the way to go. Any advice on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining it! P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article
