Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • What does 'extra' mean in this Django code?

    - by zjm1126
    TOPIC_COUNT_SQL = """ SELECT COUNT(*) FROM topics_topic WHERE topics_topic.object_id = maps_map.id AND topics_topic.content_type_id = %s """ MEMBER_COUNT_SQL = """ SELECT COUNT(*) FROM maps_map_members WHERE maps_map_members.map_id = maps_map.id """ maps = maps.extra(select=SortedDict([ ('member_count', MEMBER_COUNT_SQL), ('topic_count', TOPIC_COUNT_SQL), ]), select_params=(content_type.id,)) i don't know this mean, thanks

    Read the article

  • MySQL: How to copy rows, but change a few fields?

    - by Andrew
    I have a large number of rows that I would like to copy, but I need to change one field. I can select the rows that I want to copy:

        SELECT * FROM Table WHERE Event_ID = "120"

    Now I want to copy all those rows and create new rows while setting the Event_ID to 155. How can I accomplish this?
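
    A sketch of the usual approach (column names other than Event_ID are hypothetical): INSERT ... SELECT copies the rows in one statement, substituting the new value in the select list.

        -- List every column except the auto-increment key,
        -- replacing Event_ID with the new value 155
        INSERT INTO Table (Event_ID, col1, col2)
        SELECT 155, col1, col2
        FROM Table
        WHERE Event_ID = "120";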

    Read the article

  • How to generate a script for changing a column of varchar to xml type with data being converted?

    - by user1323981
    Initially I have a column (partner_email) of type varchar. A recent change requires it to be converted to the XML type, with the previous records preserved in the new column. I wrote the script below to accomplish this:

        /***********************************************************************
         Purpose: To change the partner_email column from varchar type to XML
                  type and convert the existing records from varchar to XML.
         Programmer's Notes:
         1. Create a new column named partner_email_temp of type XML in the
            Partner table.
         2. Copy the email contents from partner_email to the
            partner_email_temp column after proper conversion.
            N.B.~ The format will be
            <PartnerEmails>
                <Email>[email protected]</Email>
                <Email />
                <Email />
            </PartnerEmails>
         3. Drop the existing partner_email.
         4. Rename the partner_email_temp column to partner_email.
        ***********************************************************************/
        USE [Test]
        GO

        --===== Create a partner_email_temp column of type XML in the Partner table
        IF NOT EXISTS (SELECT *
                       FROM INFORMATION_SCHEMA.columns
                       WHERE table_name = 'Partner'
                         AND column_name = 'partner_email_temp')
        BEGIN
            ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL
        END
        GO

        --===== Copy the email contents from partner_email to partner_email_temp
        --      after proper conversion to the XML type
        UPDATE [dbo].[Partner]
        SET partner_email_temp = CAST('<PartnerEmails><Email>'
            + REPLACE(partner_email, '&', '&amp;')
            + '</Email><Email></Email><Email></Email></PartnerEmails>' AS XML)
        GO

        --===== Drop the existing partner_email
        ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email
        GO

        --===== Rename the partner_email_temp column to partner_email
        EXEC sp_RENAME 'Partner.partner_email_temp', 'partner_email', 'COLUMN'
        GO

    It works fine the first time I run it. If I run it a second time, I get an error:

        Msg 8116, Level 16, State 1, Line 4
        Argument data type xml is invalid for argument 1 of replace function.
        Caution: Changing any part of an object name could break scripts and stored procedures.

    The intention is that if the partner_email column is varchar, the script will change it to the XML type and convert all the data to XML format; if I run it a second time, it should skip those statements. How do I achieve this? I am trying a different way:

        DECLARE @columnDataType VARCHAR(50)
        SELECT @columnDataType = DATA_TYPE
        FROM INFORMATION_SCHEMA.columns
        WHERE table_name = 'Partner'
          AND column_name = 'partner_email'
        PRINT @columnDataType

        IF (@columnDataType = 'varchar')
        BEGIN
            --===== Create a partner_email_temp column of type XML in the Partner table
            IF NOT EXISTS (SELECT *
                           FROM INFORMATION_SCHEMA.columns
                           WHERE table_name = 'Partner'
                             AND column_name = 'partner_email_temp')
            BEGIN
                ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL

                --===== Copy the email contents from partner_email to
                --      partner_email_temp after proper conversion to XML
                UPDATE [dbo].[Partner]
                SET partner_email_temp = CAST('<PartnerEmails><Email>'
                    + REPLACE(partner_email, '&', '&amp;')
                    + '</Email><Email></Email><Email></Email></PartnerEmails>' AS XML)

                --===== Drop the existing partner_email
                ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email

                --===== Rename the partner_email_temp column to partner_email
                EXEC sp_RENAME 'Partner.partner_email_temp','partner_email','COLUMN'
            END
        END

    but I am getting the error:

        Msg 207, Level 16, State 1, Line 29
        Invalid column name 'partner_email_temp'.

    Help needed.
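
    A note on the likely cause: the whole batch is compiled before any statement runs, so the UPDATE referencing partner_email_temp fails name resolution at compile time (and on a re-run of the first script, REPLACE is compiled against a partner_email that is already XML). A common workaround is to guard on the column's current type and defer the dependent statements with dynamic SQL; a sketch:

        IF EXISTS (SELECT *
                   FROM INFORMATION_SCHEMA.columns
                   WHERE table_name = 'Partner'
                     AND column_name = 'partner_email'
                     AND DATA_TYPE = 'varchar')
        BEGIN
            ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL

            -- EXEC defers compilation until the new column actually exists
            EXEC('UPDATE [dbo].[Partner]
                  SET partner_email_temp = CAST(''<PartnerEmails><Email>''
                      + REPLACE(partner_email, ''&'', ''&amp;'')
                      + ''</Email><Email></Email><Email></Email></PartnerEmails>'' AS XML)')

            EXEC('ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email')

            EXEC sp_RENAME 'Partner.partner_email_temp', 'partner_email', 'COLUMN'
        END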

    Read the article

  • Greatest not null column

    - by Álvaro G. Vicario
    I need to update a row with a formula based on the largest value of two DATETIME columns. I would normally do this:

        GREATEST(date_one, date_two)

    However, both columns are allowed to be NULL. I need the greatest date even when the other is NULL (of course, I expect NULL when both are NULL), and GREATEST() returns NULL when one of the columns is NULL. This seems to work:

        GREATEST(COALESCE(date_one, date_two), COALESCE(date_two, date_one))

    But I wonder... am I missing a more straightforward method?
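
    One equally compact alternative (a sketch; it relies on MySQL's behaviour of GREATEST returning NULL if any argument is NULL):

        -- If both dates are set, GREATEST wins; if one is NULL, GREATEST
        -- yields NULL and COALESCE falls back to whichever date exists;
        -- if both are NULL the whole expression is NULL.
        COALESCE(GREATEST(date_one, date_two), date_one, date_two)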

    Read the article

  • Any idea why this query always returns duplicate items?

    - by Kardo
    I want to get all Images not used by the current ItemID. This is the subquery, but it always returns duplicate Images:

        SELECT Images.ImageID, Images.ItemStatus, Images.UserName, Images.Url,
               Image_Item.ItemID, Image_Item.ItemID
        FROM Images
        LEFT JOIN (SELECT ImageID, ItemID, MAX(DateCreated) x
                   FROM Image_Item
                   WHERE ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
                   GROUP BY ImageID, ItemID
                   HAVING COUNT(*) = 1) image_item
               ON Images.imageid = image_item.imageid
        WHERE ItemID IS NOT NULL

    I guess the problem is with the subquery, where I can't avoid duplicate rows:

        SELECT ImageID, ItemID, MAX(DateCreated) x
        FROM Image_Item
        WHERE ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
        GROUP BY ImageID, ItemID
        HAVING COUNT(*) = 1

    Result:

        F2EECBDC-963D-42A7-90B1-4F82F89A64C7    0578AC61-3C32-4A1D-812C-60A09A661E71
        F2EECBDC-963D-42A7-90B1-4F82F89A64C7    9A4EC913-5AD6-4F9E-AF6D-CF4455D81C10
        42BC8B1A-7430-4915-9CDA-C907CBC76D6A    CB298EB9-A105-4797-985E-A370013B684F
        16371C34-B861-477C-9A7C-DEB27C8F333D    44E6349B-7EBF-4C7E-B3B0-1C6E2F19992C

    Table: Images

        ImageID      uniqueidentifier
        UserName     nvarchar(100)
        DateCreated  smalldatetime
        Url          nvarchar(250)
        ItemStatus   char(1)

    Table: Image_Item

        ImageID      uniqueidentifier
        ItemID       uniqueidentifier
        UserName     nvarchar(100)
        ItemStatus   char(1)
        DateCreated  smalldatetime

    Any kind help is highly appreciated.
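
    A sketch of one fix: because GROUP BY ImageID, ItemID yields one row per image/item pair, an image linked to several other items comes back several times. If the goal is simply every image that is not linked to the current ItemID, a NOT EXISTS test returns each image once:

        SELECT i.ImageID, i.ItemStatus, i.UserName, i.Url
        FROM Images i
        WHERE NOT EXISTS (SELECT 1
                          FROM Image_Item ii
                          WHERE ii.ImageID = i.ImageID
                            AND ii.ItemID = '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e');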

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .NET SqlBulkCopy, which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID to all tables, then partitioning on it. Would this give me the functionality I require?
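
    A sketch of what that could look like in SQL Server (object names and boundary values below are hypothetical): a partition function and scheme on ClientID, plus partition-level lock escalation so a bulk load into one client's partition does not escalate to a table lock. Note that partition-level escalation requires SQL Server 2008 or later.

        CREATE PARTITION FUNCTION pfClient (int)
            AS RANGE RIGHT FOR VALUES (100, 200, 300);

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.ClientData (
            ClientID int NOT NULL,
            RowGuid  uniqueidentifier NOT NULL,
            Payload  nvarchar(200) NULL
        ) ON psClient (ClientID);

        -- Escalate locks to the partition instead of the whole table (2008+)
        ALTER TABLE dbo.ClientData SET (LOCK_ESCALATION = AUTO);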

    Read the article

  • Joining tables, if percentage is above certain value

    - by CluelessGerman
    My question is similar to this one: Compare rows and get percentage. However, it is a little different; I have adapted my question to the other post. I have two tables.

    First table:

        user_id | post_id
        1       | 1
        1       | 2
        1       | 3
        2       | 12
        2       | 15

    Second table:

        post_id | rating
        1       | 1
        1       | 2
        1       | 3
        2       | 1
        2       | 5
        3       | null
        3       | 1
        3       | 4
        12      | 4
        15      | 1

    Now I would like to count the ratings for each post in the second table. If a post has more than, let's say, 50% positive ratings, I want to take its post_id, join it to the post_id in the first table, and credit one helpful post to the user. In the end it would return each user_id with its number of positive posts. The result for the table above would be:

        user_id | helpfulPosts
        1       | 2
        2       | 1

    The posts with post_id 1 and 3 are positive, because more than 50% of their ratings are 1-3. The post with id = 2 is not positive, because the ratio is exactly 50%. How would I achieve this? For clarification: it's a MySQL RDBMS, and a positive post is one where the number of rating values 1, 2 and 3 is more than half of all ratings. Basically the same thing as in the thread I linked above.
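
    A sketch of one MySQL approach (table names table1 and table2 are taken from the post; this treats a NULL rating as counting toward the total but not toward the positives, so adjust if NULLs should be ignored):

        SELECT t1.user_id, COUNT(*) AS helpfulPosts
        FROM table1 t1
        JOIN (SELECT post_id
              FROM table2
              GROUP BY post_id
              -- a boolean sums as 0/1 in MySQL; NULL ratings add nothing
              HAVING SUM(rating BETWEEN 1 AND 3) > COUNT(*) / 2) positive
          ON positive.post_id = t1.post_id
        GROUP BY t1.user_id;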

    Read the article

  • Difference between "and" and "where" in joins

    - by Midhat
    What's the difference between

        SELECT DISTINCT field1
        FROM table1 cd
        JOIN table2 ON cd.Company = table2.Name
                   AND table2.Id IN (2728)

    and

        SELECT DISTINCT field1
        FROM table1 cd
        JOIN table2 ON cd.Company = table2.Name
        WHERE table2.Id IN (2728)

    Both return the same result and both have the same EXPLAIN output.
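
    For context: with an inner join the two forms are logically equivalent, which matches the identical EXPLAIN output; with an outer join the placement changes the meaning. A sketch using the same tables:

        -- In the ON clause the filter is applied before NULL-extension,
        -- so every table1 row survives:
        SELECT DISTINCT field1
        FROM table1 cd
        LEFT JOIN table2 ON cd.Company = table2.Name
                        AND table2.Id IN (2728);

        -- In the WHERE clause the filter runs after the join, discarding
        -- rows whose table2 side is NULL, effectively an inner join:
        SELECT DISTINCT field1
        FROM table1 cd
        LEFT JOIN table2 ON cd.Company = table2.Name
        WHERE table2.Id IN (2728);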

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

    1. Automatically generate a CREATE TABLE definition from the class definition
    2. Automatically generate a query to persist an object to the database, given an instance of the object
    3. Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example:

        // Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    or something similar.

    Read the article

  • Oracle - truncating a global temporary table

    - by superdario
    I am processing large amounts of data in iterations; each iteration processes around 10,000-50,000 records. Because of the large number of records, I am inserting them into a global temporary table first, and then processing them. Usually, each iteration takes 5-10 seconds. Would it be wise to truncate the global temporary table after each iteration so that each iteration can start off with an empty table? There are around 5,000 iterations.
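
    A sketch for context (table and column names are hypothetical): if the temporary table is declared ON COMMIT DELETE ROWS, a commit at the end of each iteration empties it automatically; otherwise TRUNCATE is the usual choice, since on a GTT it affects only the current session's rows and is cheaper than a DELETE.

        CREATE GLOBAL TEMPORARY TABLE staging_rows (
            id      NUMBER,
            payload VARCHAR2(200)
        ) ON COMMIT DELETE ROWS;

        -- load and process one iteration's records, then:
        COMMIT;                          -- clears the GTT automatically

        -- or, if declared ON COMMIT PRESERVE ROWS:
        -- TRUNCATE TABLE staging_rows;  -- session-private, low cost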

    Read the article

  • Database table schema design - varchar(n). Suitable choice of N

    - by morpheous
    Coming from a C background, I may be getting too anal about this and worrying unnecessarily about bits and bytes here. Still, I can't help thinking about how the data is actually stored, and that if I choose an N which is easily factorizable into a power of 2, the database will be more efficient in how it packs data, etc. Using this "logic", I have a string field in a table which is a variable length up to 21 chars. I am tempted to use 32 instead of 21, for the reason given above. However, now I am thinking that I am wasting disk space, because there will be space allocated for 11 extra chars that are guaranteed never to be used. Since I envisage storing several tens of thousands of rows a day, it all adds up. Question: mindful of all of the above, should I declare varchar(21) or varchar(32), and why?

    Read the article

  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team) and entries (athlete). (A schema diagram of the three tables accompanied the original question.) In each team there might be two or more athletes. I'm trying to create a query to produce the most frequent pairs, meaning people who play together in teams of two. I came up with the following query:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM person p1, athlete a1, person p2, athlete a2
        WHERE p1.id = a1.person_id
          AND p2.id = a2.person_id
          AND a1.team_id = a2.team_id
          AND a1.team_id IN (SELECT id
                             FROM team, athlete
                             WHERE team.id = athlete.team_id
                             GROUP BY team.id
                             HAVING COUNT(*) = 2)
        GROUP BY p1.id
        ORDER BY freq DESC

    Obviously this is a resource-consuming query. Is there a way to improve it?
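
    A sketch of a common rewrite (untested against the real schema): filter the two-athlete teams first, join each team's two entries with an ordering condition so every pair is counted once and self-pairs disappear, and group on both people:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM (SELECT team_id
              FROM athlete
              GROUP BY team_id
              HAVING COUNT(*) = 2) duos
        JOIN athlete a1 ON a1.team_id = duos.team_id
        JOIN athlete a2 ON a2.team_id = duos.team_id
                       AND a1.person_id < a2.person_id  -- each pair once
        JOIN person p1 ON p1.id = a1.person_id
        JOIN person p2 ON p2.id = a2.person_id
        GROUP BY p1.id, p2.id
        ORDER BY freq DESC;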

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[Product](
            [ProductID] [int] IDENTITY(1,1) NOT NULL,
            [ProductName] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                  IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                  ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF
        GO

    Create script for StateLog:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[StateLog](
            [StateLogID] [int] IDENTITY(1,1) NOT NULL,
            [ProductID] [int] NOT NULL,
            [Status] [bit] NOT NULL,
            [TimeStamp] [datetime] NOT NULL,
            CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED ([StateLogID] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                  IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                  ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[StateLog] WITH CHECK
            ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID])
            REFERENCES [dbo].[Product] ([ProductID])
        GO
        ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products]
        GO

    I have this, and it's not enough:

        SELECT TOP 5 [ProductName], [TimeStamp]
        FROM [Product]
        INNER JOIN StateLog ON [Product].ProductID = [StateLog].ProductID
        WHERE [Status] = 0
        ORDER BY TimeStamp DESC;

    (My query gives the five latest TimeStamps where Status is 0, i.e. false.) But I need one thing more: where there is a run of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for product X I have:

        TimeStamp1 (status = 0)
        TimeStamp2 (status = 1)
        TimeStamp3 (status = 0)
        TimeStamp4 (status = 0)
        TimeStamp5 (status = 1)
        TimeStamp6 (status = 0)
        TimeStamp7 (status = 0)
        TimeStamp8 (status = 0)

    The correct answer would then be TimeStamp6, because it's the first of the latest run of zero-status timestamps.
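
    A sketch of one reading of this (it assumes a product's current streak of zeros is everything after its last 1-status row): per product, take the minimum 0-status timestamp that is later than the product's latest 1-status timestamp.

        SELECT TOP 5 p.[ProductName], MIN(s.[TimeStamp]) AS DownSince
        FROM [Product] p
        INNER JOIN StateLog s ON p.ProductID = s.ProductID
        WHERE s.[Status] = 0
          -- 0 casts to 1900-01-01 for products that never had status 1
          AND s.[TimeStamp] > ISNULL((SELECT MAX(s1.[TimeStamp])
                                      FROM StateLog s1
                                      WHERE s1.ProductID = s.ProductID
                                        AND s1.[Status] = 1), 0)
        GROUP BY p.ProductID, p.[ProductName]
        ORDER BY DownSince DESC;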

    Read the article

  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (the one with the highest time value) of each combination of user and sender, where user and sender hold user IDs. Here is the table structure:

        CREATE TABLE IF NOT EXISTS `mail` (
            `user` int(11) NOT NULL,
            `sender` int(11) NOT NULL,
            `body` text NOT NULL,
            `new` enum('0','1') NOT NULL default '1',
            `time` int(11) NOT NULL,
            KEY `user` (`user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    So, with a table containing the following data:

        user    sender    new    time
        1       0         0      5
        1       0         0      6
        2       1         0      7
        1       0         1      8
        1       2         0      9
        1       0         1      11
        1       2         1      12

    I want to select the following, WHERE user OR sender = X (in this case, 1):

        user    sender    new    time
        2       1         0      7
        1       2         0      9
        1       0         1      11

    How would I go about doing something like this?
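
    A sketch of the usual groupwise-maximum pattern, treating each unordered (user, sender) pair as one conversation (the sample's expected rows don't quite match this reading, so treat it as a starting point):

        SELECT m.*
        FROM mail m
        JOIN (SELECT LEAST(`user`, sender)    AS low_id,
                     GREATEST(`user`, sender) AS high_id,
                     MAX(`time`)              AS latest
              FROM mail
              WHERE `user` = 1 OR sender = 1
              GROUP BY low_id, high_id) last
          ON LEAST(m.`user`, m.sender) = last.low_id
         AND GREATEST(m.`user`, m.sender) = last.high_id
         AND m.`time` = last.latest;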

    Read the article

  • SQLite: Simple DELETE statement did not work

    - by user186446
    I have a table MRU that has 3 columns (VALUE varchar(255), TYPE varchar(20), DT_ADD datetime). It is simply a table storing an entry and recording the date and time it was recorded. What I want to do is delete the oldest entry whenever I add a new entry that exceeds a certain count. Here is my query:

        DELETE FROM MRU WHERE type = 'FILENAME' ORDER BY DT_ADD LIMIT 1;

    The query returns an error. Thanks.
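
    For context: ORDER BY/LIMIT on DELETE is only accepted by SQLite builds compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT, which most distributions are not. A sketch of a portable rewrite that targets the oldest row through a subquery:

        -- Removes the oldest FILENAME row (and any rows tied for oldest)
        DELETE FROM MRU
        WHERE type = 'FILENAME'
          AND DT_ADD = (SELECT MIN(DT_ADD)
                        FROM MRU
                        WHERE type = 'FILENAME');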

    Read the article

  • How to count number of occurences for all different values in database column?

    - by drasto
    I have a Postgres database with a table that has, say, 10 columns. The fifth column is called column5. There are 100 rows in the table, and the possible values of column5 are c5value1, c5value2, c5value3, ..., c5value29, c5value30. I would like to print out a table that shows how many times each value occurs, so it would look like this:

        value (of column5)    number of occurrences of the value
        c5value1              1
        c5value2              5
        c5value3              3
        c5value4              9
        c5value5              1
        c5value6              1
        ...

    What is the command that does that? Thanks for the help.
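
    A sketch of the standard aggregation (the table name mytable is hypothetical, since the post doesn't give one):

        SELECT column5, COUNT(*) AS occurrences
        FROM mytable
        GROUP BY column5
        ORDER BY column5;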

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search for something specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results which have a relation to the selected area.

    The problems I see with both are:

    1. Holding this data in the DB will mean having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
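
    One more option worth a sketch (the Areas table with AreaID/ParentID columns is an assumption): SQL Server 2005 supports recursive common table expressions, which push the recursion into a single set-based query instead of round-tripping from C#:

        DECLARE @selectedArea int
        SET @selectedArea = 1                  -- e.g. Scotland

        ;WITH Descendants AS (
            SELECT AreaID
            FROM Areas
            WHERE AreaID = @selectedArea
            UNION ALL
            SELECT a.AreaID
            FROM Areas a
            INNER JOIN Descendants d ON a.ParentID = d.AreaID
        )
        SELECT j.*
        FROM Jobs j
        INNER JOIN Descendants d ON j.Category = d.AreaID;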

    Read the article

  • Adding Related Entities without using navigation properties

    - by Barisa Puter
    I have the following classes, set up for testing:

        public class Company
        {
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Employee
        {
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int Id { get; set; }
            public string Name { get; set; }
            public int CompanyId { get; set; }
            public virtual Company Company { get; set; }
        }

        public class EFTestDbContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
            public DbSet<Company> Companies { get; set; }
        }

    For the sake of testing, I wanted to insert one company and one employee for that company with a single SaveChanges call, like this:

        Company company = new Company
        {
            Name = "Sample company"
        };
        context.Companies.Add(company);

        // ** UNCOMMENTED FOR TEST 2
        //Company company2 = new Company
        //{
        //    Name = "Some other company"
        //};
        //context.Companies.Add(company2);

        Employee employee = new Employee
        {
            Name = "Hans",
            CompanyId = company.Id
        };
        context.Employees.Add(employee);
        context.SaveChanges();

    Even though I am not using navigation properties, but instead have made the relation over the Id, this somehow mysteriously worked: the employee was saved with the proper foreign key to the company, and the key got updated from 0 to the real value, which made me go ?!?! Some hidden C# feature? Then I decided to add more code, which is commented out in the snippet above, making it insert 2 x Company entity and 1 x Employee entity, and then I got an exception:

        Unable to determine the principal end of the
        'CodeLab.EFTest.Employee_Company' relationship. Multiple added
        entities may have the same primary key.

    Does this mean that in cases where the foreign key is 0, and there is a single matching entity being inserted in the same SaveChanges transaction, Entity Framework will assume that the foreign key refers to that matching entity? In the second test, when there are two entities matching the relation type, Entity Framework throws an exception, as it is not able to figure out which of the Companies the Employee should be related to.

    Read the article

  • Getting records from a table based on a filter field and BETWEEN, but also having the OR logic for multiple rows

    - by Pentium10
    I have this table, where I store multiple ids and an age range (def1, def2):

        CREATE TABLE "template_requirements" (
            "_id" INTEGER NOT NULL,
            "templateid" INTEGER,
            "def1" VARCHAR(255),
            "def2" VARCHAR(255),
            PRIMARY KEY("_id")
        )

    It holds values such as:

        templateid | def1 | def2
        100        | 7    | 25
        200        | 40   | 90
        300        | 7    | 25
        300        | 40   | 60

    As you see, for templateid 300 we have an OR logic: age between 7 and 25, or age between 40 and 60. I want to get all the template ids that are not for a certain age, like 25. What's the problem? If I run a query like this one:

        SELECT group_concat(templateid)
        FROM template_requirements
        WHERE '25' NOT BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)

    it returns 200, 300, which is wrong: 300 matched on its 40-to-60 row, but it shouldn't be included in the result, because there is another row with the same templateid, 7 to 25, that fails the NOT BETWEEN test. What would be the correct query in SQLite? I would like to keep the group_concat stuff.
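
    A sketch of one way to express "no row of the template matches the age" in SQLite while keeping group_concat: aggregate per templateid first and keep only templates whose match count is zero.

        -- For age 25 this returns only 200: templates 100 and 300 each
        -- have a row whose range covers 25, so they are excluded.
        SELECT group_concat(templateid)
        FROM (SELECT templateid
              FROM template_requirements
              GROUP BY templateid
              HAVING SUM(CASE WHEN 25 BETWEEN CAST(def1 AS INTEGER)
                                          AND CAST(def2 AS INTEGER)
                              THEN 1 ELSE 0 END) = 0);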

    Read the article
