Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • how to write this query using joins?

    - by aquero
    Hi, I have a table campaign which holds the details of campaign mails sent:

        campaign:
        campaign_id  campaign_name  flag
        1            test1          1
        2            test2          1
        3            test3          0

    and another table, campaign_activity, which holds the details of campaign activities:

        campaign_activity:
        campaign_id  is_clicked  is_opened
        1            0           1
        1            1           0
        2            0           1
        2            1           0

    I want to get, in a single query, all campaigns with flag value 1 together with the number of rows where is_clicked is 1 and the number of rows where is_opened is 1, i.e.:

        campaign_id  campaign_name  numberofclicks  numberofopens
        1            test1          1               1
        2            test2          1               1

    I did this using sub-queries:

        SELECT c.campaign_id, c.campaign_name,
               (SELECT COUNT(campaign_id) FROM campaign_activity
                 WHERE campaign_id = c.campaign_id AND is_clicked = 1) AS numberofclicks,
               (SELECT COUNT(campaign_id) FROM campaign_activity
                 WHERE campaign_id = c.campaign_id AND is_opened = 1) AS numberofopens
          FROM campaign c
         WHERE c.flag = 1

    But people say that sub-queries are not good coding convention and that joins should be used instead. I don't know how to get the same result using a join, and the colleagues I consulted say it is not possible in this situation. Is it possible to get the same result using joins? If yes, please tell me how.
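
    A minimal sketch of a join-based rewrite, assuming MySQL and the table and column names above; the conditional sums replace the two correlated sub-queries:

        SELECT c.campaign_id,
               c.campaign_name,
               SUM(CASE WHEN ca.is_clicked = 1 THEN 1 ELSE 0 END) AS numberofclicks,
               SUM(CASE WHEN ca.is_opened  = 1 THEN 1 ELSE 0 END) AS numberofopens
          FROM campaign c
          LEFT JOIN campaign_activity ca ON ca.campaign_id = c.campaign_id
         WHERE c.flag = 1
         GROUP BY c.campaign_id, c.campaign_name;

    The LEFT JOIN keeps campaigns that have no activity rows at all; they simply report zero clicks and zero opens.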

    Read the article

  • Simple SELECT query problem...

    - by Povylas
    Hi, I have this MySQL SELECT query which seems to have a bug, but I am quite "green" so I simply cannot see it; maybe you could help? Here is the query:

        SELECT node_id FROM rate
        WHERE node_id='".$cat_node_id_string."'
        LIMIT ".$node_count_star.",".$node_count_end."
        ORDER BY SUM(amount)
        GROUP BY node_id

    Thanks for help in advance...
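
    For reference, MySQL expects the clauses in the order WHERE, GROUP BY, ORDER BY, LIMIT, and SUM(amount) only makes sense together with a GROUP BY. A sketch of the reordered statement, with placeholder literals standing in for the PHP variables (the literal values are assumptions):

        SELECT node_id, SUM(amount) AS total_amount
          FROM rate
         WHERE node_id = 'some_node_id'   -- value of $cat_node_id_string
         GROUP BY node_id
         ORDER BY SUM(amount)
         LIMIT 0, 10;                     -- values of $node_count_star and $node_count_end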

    Read the article

  • How do I create a multiple-table check constraint?

    - by Zack Peterson
    Please imagine this small database...

    Tables:

        Volunteer      Event          Shift          EventVolunteer
        =========      =====          =====          ==============
        Id             Id             Id             EventId
        Name           Name           EventId        VolunteerId
        Email          Location       VolunteerId
        Phone          Day            Description
        Comment        Description    Start
                                      End

    Associations: Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.

    Check constraints: Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
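
    A plain CHECK constraint only sees the row being checked, so the common SQL Server workaround is to wrap the cross-table test in a scalar function and call it from the constraint (triggers are the other usual route). A minimal sketch for the first rule, assuming SQL Server and the table names above; note it is only evaluated when Shift rows are inserted or updated, not when EventVolunteer rows are removed later:

        CREATE FUNCTION dbo.VolunteerSignedUp (@EventId int, @VolunteerId int)
        RETURNS bit
        AS
        BEGIN
            -- 1 when the volunteer is signed up for the event, 0 otherwise
            RETURN CASE WHEN EXISTS (
                       SELECT 1 FROM EventVolunteer
                        WHERE EventId = @EventId AND VolunteerId = @VolunteerId)
                   THEN 1 ELSE 0 END;
        END;
        GO

        ALTER TABLE Shift
            ADD CONSTRAINT CK_Shift_VolunteerSignedUp
            CHECK (dbo.VolunteerSignedUp(EventId, VolunteerId) = 1);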

    Read the article

  • Question about Transact SQL syntax

    - by Yousui
    Hi guys, the following code works like a charm:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK;
            DECLARE @ErrorMessage NVARCHAR(4000), @ErrorSeverity int;
            SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY();
            RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
        END CATCH

    But this code gives an error:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK;
            RAISERROR(ERROR_MESSAGE(), ERROR_SEVERITY(), 1);
        END CATCH

    Why?

    Read the article

  • Migrating from mssql to firebird: pros and cons

    - by user193655
    I am considering the migration for these reasons: 1) SQL Server installation is a nightmare, especially for single-user software. The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier. 2) SQL Server runs on Windows Server only. 3) My customers all have the Express edition. 4) I am not using any advanced feature. I am now starting to use FILESTREAM, but the main reason for that is the Express edition's 4/10 GB database size limit. So these are all pros of moving to Firebird. Which are the cons? I could also plan to support both platforms, but I fear this would backfire.

    Read the article

  • What are the rules governing how a bind variable can be used in Postgres and where is this defined?

    - by Craig Miles
    I can have a table and function defined as:

        CREATE TABLE mytable (mycol integer);
        INSERT INTO mytable VALUES (1);

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol = l_myvar;
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    In this case l_myvar acts as a bind variable for the value passed when I call:

        SELECT * FROM myfunction(1);

    and returns the row where mycol = 1. If I redefine the function as:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (l_myvar);
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

        SELECT * FROM myfunction(1);

    it still returns the row where mycol = 1. However, if I now change the function definition to allow me to pass an integer array and try to use this array in the IN clause, I get an error:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (array_to_string(l_myvar, ','));
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    Analysis reveals that although:

        SELECT array_to_string(ARRAY[1, 2], ',');

    returns 1,2 as expected,

        SELECT * FROM myfunction(ARRAY[1, 2]);

    returns the error "operator does not exist: integer = text" at the line:

        WHERE mycol IN (array_to_string(l_myvar, ','));

    If I execute:

        SELECT * FROM mytable WHERE mycol IN (1,2);

    I get the expected result. Given that array_to_string(l_myvar, ',') evaluates to 1,2 as shown, why aren't these statements equivalent? From the error message it is something to do with data types, but doesn't the IN (variable) construct appear to be behaving differently from the = variable construct? What are the rules here? I know that I could build a statement to EXECUTE, treating everything as a string, to achieve what I want to do, so I am not looking for that as a solution. I do want to understand what is going on in this example. Is there a modification to this approach to make it work, the particular example being to pass in an array of values to build a dynamic IN clause without resorting to EXECUTE? Thanks in advance, Craig
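
    A minimal sketch of the usual array-friendly form, assuming the definitions above: array_to_string yields the single text value '1,2', which is why the comparison degrades to integer = text, whereas mycol = ANY(l_myvar) compares the integer column against the array elements directly, with no string conversion involved:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            -- = ANY(array) tests membership against the array elements themselves,
            -- keeping the comparison integer-to-integer
            SELECT * INTO l_myrow FROM mytable WHERE mycol = ANY (l_myvar);
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

        SELECT * FROM myfunction(ARRAY[1, 2]);

    As in the original, RETURNS mytable (rather than SETOF mytable) means only the first matching row comes back.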

    Read the article

  • How efficient is a details table?

    - by Jeffrey Lott
    At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table will have about 8-10 detail rows. My question is: what kind of performance impact does this have compared to adding these details as additional nullable columns on the main table?

    Read the article

  • clearing an entire column in access

    - by I__
    Is there a way to clear an entire column in a datasheet in Access? I can right-click on it and delete it, but that will affect the structure; I just need to clear all the records. How do I do this? Perhaps the question I should be asking is: how do I clear the entire contents of a datasheet in Access?
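
    A sketch of the usual approach with an update query (the table and column names are placeholders):

        -- blank out one column for every record, keeping the column itself
        UPDATE MyTable SET MyColumn = Null;

        -- or remove every record, which clears the whole datasheet but keeps the structure
        DELETE FROM MyTable;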

    Read the article

  • Getting counts of 0 from a query with a double group by

    - by Maltiriel
    I'm trying to write a query that gets the counts for a table (call it item) categorized by two different things, call them type and code. What I'm hoping for as output is the following:

        Type    Code    Count
        1       A       3
        1       B       0
        1       C       10
        2       A       0
        2       B       13
        2       C       2

    and so forth. Both type and code are found in lookup tables, and each item can have just one type but more than one code, so there's also a pivot (aka junction or join) table for the codes. I have a query that can get this result:

        Type    Code    Count
        1       A       3
        1       C       10
        2       B       13
        2       C       2

    and it looks like this (with join conditions omitted):

        SELECT typelookup.name, codelookup.name, COUNT(item.id)
          FROM typelookup
          LEFT OUTER JOIN item
          JOIN itemcodepivot
          RIGHT OUTER JOIN codelookup
         GROUP BY typelookup.name, codelookup.name

    Is there any way to alter this query to get the results I'm looking for? This is in MySQL, if that matters. I'm not actually sure this is possible all in one query, but if it is I'd really like to know how. Thanks for any ideas.
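
    A sketch of the usual pattern: cross join the two lookup tables to build every type/code combination first, then left join the item data onto it so missing combinations count as 0. The join columns are assumptions, since the question omits the join conditions:

        SELECT tl.name AS type,
               cl.name AS code,
               COUNT(icp.item_id) AS item_count     -- counts only matched rows, so empty combinations give 0
          FROM typelookup tl
          CROSS JOIN codelookup cl
          LEFT JOIN item i
                 ON i.type_id = tl.id               -- assumed link between item and typelookup
          LEFT JOIN itemcodepivot icp
                 ON icp.item_id = i.id
                AND icp.code_id = cl.id             -- assumed link between pivot and codelookup
         GROUP BY tl.name, cl.name
         ORDER BY tl.name, cl.name;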

    Read the article

  • Should I use DATE and TIME as fields rather than DATETIME

    - by whitstone86
    I have an ongoing project for a TV guide, called mytvguide in the database - I use phpMyAdmin. This is the structure of one table, which is called tvshow1:

        Field        Type          Null
        channel      varchar(255)
        date         date          No
        airdate      time          No
        expiration   time          No
        episode      varchar(255)
        setreminder  varchar(255)

    but I am not sure how to get DATE and TIME to work with the pagination script (below is the script, which works for the version with DATETIME): http://pastebin.com/6S1ejAFJ

    However, although the DATETIME one works - it shows programmes that air on the day itself like this:

        Programme 1 showing on Channel 1
        2:35pm "Episode 2" Set Reminder

        Programme 1 showing on Channel 1
        May 26th - 12:50pm "Episode 3" Set Reminder

        Programme 1 showing on Channel 1
        May 26th - 5:55pm "Episode 3" Set Reminder

    I'm not quite sure how to replicate that for the fields that use the separate DATE and TIME types as seen above. Any advice on this is appreciated, thanks!
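
    A sketch of how split DATE and TIME columns can still be filtered and ordered as one value in MySQL, using TIMESTAMP() to combine them; the column names follow the structure above, and the "upcoming airings" logic is an assumption about what the pagination script needs:

        -- upcoming airings: combine the date column with the airdate time column
        SELECT channel, `date`, airdate, episode
          FROM tvshow1
         WHERE TIMESTAMP(`date`, airdate) >= NOW()
         ORDER BY TIMESTAMP(`date`, airdate)
         LIMIT 0, 10;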

    Read the article

  • How to write my own global lock / unlock functions for PostgreSQL

    - by rafalmag
    I have a PostgreSQL (plperlu) function getTravelTime(integer, timestamp), which tries to select data for the specified ID and timestamp. If there are no data, or if the data are old, it downloads them from an external server (download time ~300 ms). Multiple processes use this database and this function. There is an error when two processes both fail to find data, both download them, and both try to insert into the travel_time table (the id and timestamp pair has to be unique). I thought about locks. Locking the whole table would block all processes and allow only one to proceed. I need to lock only on id and timestamp. pg_advisory_lock seems to lock only in the "current session", but my processes use their own sessions. I tried to write my own lock/unlock functions. Am I doing it right? I use active waiting; how can I avoid this? Maybe there is a way to use pg_advisory_lock() as a global lock? My code:

        CREATE TABLE travel_time_locks (
            id_key integer NOT NULL,
            time_key timestamp without time zone NOT NULL,
            UNIQUE (id_key, time_key)
        );

        -- Function: mylock(integer, timestamp)
        DROP FUNCTION IF EXISTS mylock(integer, timestamp) CASCADE;
        -- Usage: SELECT mylock(1, '2010-03-28T19:45');
        -- function tries to do a global lock similar to pg_advisory_lock(key, key)
        CREATE OR REPLACE FUNCTION mylock(id_input integer, time_input timestamp)
        RETURNS void AS
        $BODY$
        DECLARE
            rows int;
        BEGIN
            LOOP
                BEGIN
                    -- active waiting here !!!! :(
                    INSERT INTO travel_time_locks (id_key, time_key)
                    VALUES (id_input, time_input);
                EXCEPTION WHEN unique_violation THEN
                    CONTINUE;
                END;
                EXIT;
            END LOOP;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 1;

        -- Function: myunlock(integer, timestamp)
        DROP FUNCTION IF EXISTS myunlock(integer, timestamp) CASCADE;
        -- Usage: SELECT myunlock(1, '2010-03-28T19:45');
        -- function tries to do a global unlock similar to pg_advisory_unlock(key, key)
        CREATE OR REPLACE FUNCTION myunlock(id_input integer, time_input timestamp)
        RETURNS integer AS
        $BODY$
        BEGIN
            DELETE FROM ONLY travel_time_locks
             WHERE id_key = id_input AND time_key = time_input;
            RETURN 1;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 1;
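
    For reference, advisory locks are visible across sessions: a second session calling pg_advisory_lock on the same key pair blocks until the first session releases it, so they can serve as the global lock here without any polling. A minimal sketch that maps the timestamp onto the second int4 key; the epoch cast is an assumption and only works while the value fits in a 32-bit integer:

        -- take the lock: blocks any other session requesting the same key pair
        SELECT pg_advisory_lock(1, extract(epoch FROM timestamp '2010-03-28T19:45')::integer);

        -- ... check the table, download and insert the missing row if it is still absent ...

        -- release the lock when done
        SELECT pg_advisory_unlock(1, extract(epoch FROM timestamp '2010-03-28T19:45')::integer);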

    Read the article

  • How to use SQL trigger to record the affected column's row number

    - by Freeman
    I want to have an 'updateinfo' table in order to record every update/insert/delete operation on another table. In Oracle I've written this:

        CREATE TABLE updateinfo (
            rnumber NUMBER(10),
            tablename VARCHAR2(100 BYTE),
            action VARCHAR2(100 BYTE),
            update_date DATE
        );

        DROP TRIGGER TRI_TABLE;

        CREATE OR REPLACE TRIGGER TRI_TABLE
        AFTER DELETE OR INSERT OR UPDATE ON demo
        REFERENCING NEW AS NEW OLD AS OLD
        FOR EACH ROW
        BEGIN
            if inserting then
                insert into updateinfo(rnumber,tablename,action,update_date)
                values(rownum,'demo','insert',sysdate);
            elsif updating then
                insert into updateinfo(rnumber,tablename,action,update_date)
                values(rownum,'demo','update',sysdate);
            elsif deleting then
                insert into updateinfo(rnumber,tablename,action,update_date)
                values(rownum,'demo','delete',sysdate);
            end if;
            -- EXCEPTION
            --   WHEN OTHERS THEN
            --     Consider logging the error and then re-raise
            --     RAISE;
        END TRI_TABLE;

    But when checking updateinfo, the rnumber column is always zero. Is there any way to retrieve the correct row number?
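
    ROWNUM is a query pseudo-column, so inside a trigger body it has no meaningful value; a common alternative is to number the audit rows from a sequence. A minimal sketch, assuming the intent is simply a running number for updateinfo:

        CREATE SEQUENCE updateinfo_seq START WITH 1 INCREMENT BY 1;

        CREATE OR REPLACE TRIGGER TRI_TABLE
        AFTER DELETE OR INSERT OR UPDATE ON demo
        FOR EACH ROW
        DECLARE
            l_action updateinfo.action%TYPE;
        BEGIN
            -- work out which DML statement fired the trigger
            IF inserting THEN
                l_action := 'insert';
            ELSIF updating THEN
                l_action := 'update';
            ELSE
                l_action := 'delete';
            END IF;

            -- the sequence supplies a monotonically increasing audit number
            INSERT INTO updateinfo (rnumber, tablename, action, update_date)
            VALUES (updateinfo_seq.NEXTVAL, 'demo', l_action, SYSDATE);
        END;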

    Read the article

  • LINQ Left Join And Right Join

    - by raja
    Hi, I need some help. I have two DataTables called A and B; I need all rows from A and the matching row of B. Example:

        A:                        B:
        User | age | Data         ID | age | Growth
        1    | 2   | 43.5         1  | 2   | 46.5
        2    | 3   | 44.5         1  | 5   | 49.5
        3    | 4   | 45.6         1  | 6   | 48.5

    I need this output:

        User | age | Data | Growth
        --------------------------
        1    | 2   | 43.5 | 46.5
        2    | 3   | 44.5 |
        3    | 4   | 45.6 |

    Read the article

  • How do I retrieve the last primary Id from an mdb table?

    - by William
    I have a table with the following columns: Id, Name, Age, Class. I am trying to insert a new row into the database like this:

        INSERT INTO MyTable (Name, Age, Class) VALUES (@name, @age, @class)

    and I get an exception: "Index or primary key cannot contain a Null value." The question is how to add a new row without knowing the next primary Id, or maybe there is a way to get this Id from the table with the help of another query?
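
    A sketch of the two usual options for an Access (.mdb) table: either define Id as an AutoNumber column so the database assigns it, or read the current maximum yourself (the MAX approach is racy if several clients insert at the same time). With AutoNumber, SELECT @@IDENTITY on the same connection returns the value that was just assigned:

        -- Option 1: with Id defined as AutoNumber, simply omit it from the insert
        INSERT INTO MyTable (Name, Age, Class) VALUES (@name, @age, @class);
        SELECT @@IDENTITY;   -- Id of the row just inserted, same connection

        -- Option 2: compute the next Id explicitly (assumes a single writer)
        SELECT MAX(Id) + 1 AS NextId FROM MyTable;
        -- @nextId is a hypothetical parameter holding NextId from the previous query
        INSERT INTO MyTable (Id, Name, Age, Class) VALUES (@nextId, @name, @age, @class);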

    Read the article

  • How do I drop a foreign key in mssql?

    - by mmattax
    I have created a foreign key (in MSSQL / T-SQL) by:

        alter table company add CountryID varchar(3);
        alter table company add constraint Company_CountryID_FK foreign key(CountryID) references Country;

    I then run this query:

        alter table company drop column CountryID;

    and I get this error:

        Msg 5074, Level 16, State 4, Line 2
        The object 'Company_CountryID_FK' is dependent on column 'CountryID'.
        Msg 4922, Level 16, State 9, Line 2
        ALTER TABLE DROP COLUMN CountryID failed because one or more objects access this column.

    I have tried this, yet it does not seem to work:

        alter table company drop foreign key Company_CountryID_FK;
        alter table company drop column CountryID;

    What do I need to do to drop the CountryID column? Thanks.
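
    For reference, SQL Server drops a foreign key with DROP CONSTRAINT (DROP FOREIGN KEY is MySQL syntax); a sketch using the names from the question:

        ALTER TABLE company DROP CONSTRAINT Company_CountryID_FK;  -- remove the constraint first
        ALTER TABLE company DROP COLUMN CountryID;                 -- then the column can be dropped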

    Read the article

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query

        SELECT DISTINCT p.*
        FROM points p, areas a, contacts c
        WHERE (p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
           AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
          AND p.resource_type = 'Contact'
          AND c.user_id = 6

    is extremely slow. The points table has fewer than 2000 records, but it takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clauses concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10) -- I need the precision for some calculations. There are many, many other queries in this project where points are compared, but no execution time problems. What's going on?
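
    One thing worth noting: the FROM list names areas and contacts, but the WHERE clause never relates them to points, so the planner builds a cross product of all three tables before DISTINCT collapses it. A sketch of the shape the query likely wants, with the unused areas table dropped; the join column is an assumption, since the question does not show how points and contacts are linked:

        SELECT DISTINCT p.*
          FROM points p
          JOIN contacts c ON c.id = p.resource_id   -- assumed link; 'Contact' in resource_type suggests a polymorphic reference
         WHERE p.latitude  > 43.6511659465 AND p.latitude  < 43.6711659465
           AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
           AND p.resource_type = 'Contact'
           AND c.user_id = 6;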

    Read the article

  • Will MySql caching cause performance problems?

    - by Camran
    I am about to upload my website onto a VPS. It is a classifieds website, where all data is stored in MySQL and Solr. I wonder whether using MySQL's query cache will slow the server down. That is, if somebody performs a search for the first time and MySQL caches the query, will the caching make the server slower than if it did not cache anything? I know that once the caching is done, performance improves... but I would like to know whether I should even use the cache or not. What do you think? Thanks
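
    A small sketch of how to inspect the cache before deciding, assuming a MySQL 5.x server (the query cache no longer exists in MySQL 8.0):

        -- current query cache configuration
        SHOW VARIABLES LIKE 'query_cache%';

        -- hit and insert counters, useful for judging whether the cache helps or hurts
        SHOW STATUS LIKE 'Qcache%';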

    Read the article

  • Date/time query from Access table ( last month)

    - by chupeman
    Hello, I am using the query builder from Visual Studio 2008 to extract data from an Access mdb (2003), but I can't make it work with a date/time field. When I run it with a third-party query app it works fine, but when I try to implement it in Visual Studio I can't. What is the correct way to extract last month's data? This is what I have:

        SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[ZIP/Postal Code], [Datos].[Store],
               [Datos].[date], [Datos].[gender], [Datos].[age]
        FROM [Datos]
        WHERE ([Datos].[date] =<|Last month|>)

    Any help is appreciated. Thank you
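
    A sketch of an Access/Jet expression for "the previous calendar month", using DateSerial so the boundary handles January correctly (a month value of 0 rolls back to December of the prior year):

        SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[date]
        FROM [Datos]
        WHERE [Datos].[date] >= DateSerial(Year(Date()), Month(Date()) - 1, 1)
          AND [Datos].[date] <  DateSerial(Year(Date()), Month(Date()), 1);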

    Read the article

  • Getting highest results in a JOIN

    - by Keithamus
    I've got three tables: Auctions, Auction Bids and Users. The table structure looks something like this:

        Auctions:
        id  title
        --  ---------
        1   Auction 1
        2   Auction 2

        Auction Bids:
        id  user_id  auction_id  bid_amt
        --  -------  ----------  -------
        1   1        1           200.00
        2   2        1           202.00
        3   1        2           100.00

    Users is just a standard table, with id and username. My aim is to join these tables so I can get the highest values of these bids, as well as the usernames related to those bids, so I have a result set like so:

        auction_id  auction_title  auctionbid_amt  user_username
        ----------  -------------  --------------  -------------
        1           Auction 1      202.00          Bidder2
        2           Auction 2      100.00          Bidder1

    So far my query is as follows:

        SELECT a.id, a.title, ab.bid_points, u.display_name
        FROM auction a
        LEFT JOIN auctionbid ab ON a.id = ab.auction_id
        LEFT JOIN users u ON u.id = ab.user_id
        GROUP BY a.id

    This gets the single rows I am after, but it seems to display the lowest bid_amt, not the highest.
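
    A sketch of the usual greatest-row-per-group pattern: find each auction's maximum bid in a derived table, then join back to pick up the bidder. The column names follow the query in the question (bid_points, display_name), which differ slightly from the sample tables, so treat them as assumptions:

        SELECT a.id           AS auction_id,
               a.title        AS auction_title,
               ab.bid_points  AS auctionbid_amt,
               u.display_name AS user_username
          FROM auction a
          LEFT JOIN (SELECT auction_id, MAX(bid_points) AS max_bid
                       FROM auctionbid
                      GROUP BY auction_id) top
                 ON top.auction_id = a.id
          LEFT JOIN auctionbid ab
                 ON ab.auction_id = top.auction_id
                AND ab.bid_points = top.max_bid
          LEFT JOIN users u ON u.id = ab.user_id;

    If two bids tie for the maximum, both rows come back; adding a tie-breaker on the bid id is one way to keep a single row per auction.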

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I noticed that it combines tmp_ids with an index on A I created for H_id using a nested loop. However, it hashes all of P before doing a hash join with the result of the first merge. P is quite large and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.
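
    One well-known gotcha with temporary tables is that they have no statistics until ANALYZE has been run on them, so the planner can badly misestimate how many rows the first join will produce and decide that hashing all of P is cheapest. A small sketch of something worth trying before anything else:

        -- give the planner row counts and value distributions for the temp table
        ANALYZE tmp_ids;

        -- then re-check the plan
        EXPLAIN ANALYZE
        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id;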

    Read the article

  • Select *, max(date) works in phpMyAdmin but not in my code

    - by kdobrev
    OK, my statement executes well in phpMyAdmin, but not how I expect it in my PHP page. This is my statement:

        SELECT egid, group_name, limit, MAX(date)
        FROM employee_groups
        GROUP BY egid
        ORDER BY egid DESC;

    This is my table:

        CREATE TABLE employee_groups (
            egid int(10) unsigned NOT NULL,
            date date NOT NULL,
            group_name varchar(50) NOT NULL,
            limit smallint(5) unsigned NOT NULL,
            PRIMARY KEY (egid, date)
        ) ENGINE=MyISAM DEFAULT CHARSET=cp1251;

    I want to extract the most recent list of groups, e.g. if a group has been changed I want to have only the last change. And I need it as a list (all groups).
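
    Two things worth checking: limit is a reserved word in MySQL and needs backticks when used as a column name, and selecting non-aggregated columns alongside MAX(date) with GROUP BY returns group_name and limit from an arbitrary row of the group, not from the row holding the maximum date. A sketch of joining back to the latest date per group, assuming the table definition above:

        SELECT eg.egid, eg.group_name, eg.`limit`, eg.`date`
          FROM employee_groups eg
          JOIN (SELECT egid, MAX(`date`) AS max_date
                  FROM employee_groups
                 GROUP BY egid) latest
            ON latest.egid = eg.egid
           AND latest.max_date = eg.`date`
         ORDER BY eg.egid DESC;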

    Read the article

  • Group and count in Rails

    - by alamodey
    I have this bit of code and I get an empty object:

        @results = PollRoles.find(
          :all,
          :select => 'option_id, count(*) count',
          :group => 'option_id',
          :conditions => ["poll_id = ?", @poll.id])

    Is this the correct way of writing the query? I want a collection of records that have an option id and the number of times that option id is found in the PollRoles model.

    EDIT: This is how I'm iterating through the results:

        <% @results.each do |result| %>
          <% @option = Option.find_by_id(result.option_id) %>
          <%= @option.question %>
          <%= result.count %>
        <% end %>

    Read the article
