Search Results

Search found 1805 results on 73 pages for 'varchar'.


  • Select the latest record for each category linked to an object

    - by Simpleton
    I have a tblMachineReports with the columns Status (varchar), LogDate (datetime), Category (varchar), and MachineID (int). I want to retrieve the latest status update from each category for every machine, in effect getting a snapshot of the latest statuses of all the machines, unique to their MachineID. The table data looks like:

        Category - Status - MachineID - LogDate
        cata - status1 - 001 - date1
        cata - status2 - 002 - date2
        catb - status3 - 001 - date2
        catc - status2 - 002 - date4
        cata - status3 - 001 - date5
        catc - status1 - 001 - date6
        catb - status2 - 001 - date7
        cata - status2 - 002 - date8
        catb - status2 - 002 - date9
        catc - status2 - 001 - date10

    Restated: I have multiple machines reporting on multiple statuses in tblMachineReports. All the rows are created through inserts, so there will obviously be duplicate entries for machines as new statuses come in. None of the column values can be predicted, so I can't do any ='some hard coded string' comparisons in any part of the select statement. For the sample table above, the desired results would be:

        Category - Status - MachineID - LogDate
        catc - status2 - 002 - date4
        cata - status3 - 001 - date5
        catb - status2 - 001 - date7
        cata - status2 - 002 - date8
        catb - status2 - 002 - date9
        catc - status2 - 001 - date10

    What would the select statement look like to achieve this, getting the latest status for each category on each machine, using MS SQL Server 2008? I have tried various combinations of subqueries with aggregate MAX(LogDate), along with joins, group-bys, distincts, and what-not, but have yet to find a working solution.
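
    One common approach on SQL Server 2008 is ROW_NUMBER() partitioned by machine and category, so the newest row in each (MachineID, Category) pair ranks first. A sketch using the column names from the question:

        ;WITH Ranked AS (
            SELECT Category, Status, MachineID, LogDate,
                   ROW_NUMBER() OVER (PARTITION BY MachineID, Category
                                      ORDER BY LogDate DESC) AS rn
            FROM tblMachineReports
        )
        SELECT Category, Status, MachineID, LogDate
        FROM Ranked
        WHERE rn = 1        -- rn = 1 is the latest row within each partition
        ORDER BY LogDate;

    No hard-coded string comparisons are needed; the window function ranks purely on LogDate.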


  • How to convert SQLite usage to plain SQL Server commands in C#

    - by Nasser Hajloo
    I want to get started with the DayPilot control. I do not use SQLite, and this control is documented based on SQLite. I want to use SQL Server instead of SQLite, so if you can, please do this for me. The main site with samples: http://www.daypilot.org/calendar-tutorial.html

    The database contains a single table with the following structure:

        CREATE TABLE event (
            id VARCHAR(50),
            name VARCHAR(50),
            eventstart DATETIME,
            eventend DATETIME);

    Loading events:

        private DataTable dbGetEvents(DateTime start, int days)
        {
            SQLiteDataAdapter da = new SQLiteDataAdapter("SELECT [id], [name], [eventstart], [eventend] FROM [event] WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))", ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("start", start);
            da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

    Update:

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SQLiteConnection con = new SQLiteConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SQLiteCommand cmd = new SQLiteCommand("UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.AddWithValue("start", start);
                cmd.Parameters.AddWithValue("end", end);
                cmd.ExecuteNonQuery();
            }
        }


  • Cross join (pivot) with n-n table containing values

    - by Styx31
    I have 3 tables:

        TABLE MyColumn (
            ColumnId INT NOT NULL,
            Label VARCHAR(80) NOT NULL,
            PRIMARY KEY (ColumnId)
        )

        TABLE MyPeriod (
            PeriodId CHAR(6) NOT NULL, -- format yyyyMM
            Label VARCHAR(80) NOT NULL,
            PRIMARY KEY (PeriodId)
        )

        TABLE MyValue (
            ColumnId INT NOT NULL,
            PeriodId CHAR(6) NOT NULL,
            Amount DECIMAL(8, 4) NOT NULL,
            PRIMARY KEY (ColumnId, PeriodId),
            FOREIGN KEY (ColumnId) REFERENCES MyColumn(ColumnId),
            FOREIGN KEY (PeriodId) REFERENCES MyPeriod(PeriodId)
        )

    MyValue rows are only created when a real value is provided. I want my results in tabular form, as:

        Column   | Month 1 | Month 2 | Month 4 | Month 5 |
        Potatoes | 25.00   | 5.00    | 1.60    | NULL    |
        Apples   | 2.00    | 1.50    | NULL    | NULL    |

    I have successfully created a cross join:

        SELECT MyColumn.Label AS [Column], MyPeriod.Label AS [Period], ISNULL(MyValue.Amount, 0) AS [Value]
        FROM MyColumn
        CROSS JOIN MyPeriod
        LEFT OUTER JOIN MyValue ON (MyValue.ColumnId = MyColumn.ColumnId AND MyValue.PeriodId = MyPeriod.PeriodId)

    Or, in LINQ:

        from p in MyPeriods
        from c in MyColumns
        join v in MyValues on new { c.ColumnId, p.PeriodId } equals new { v.ColumnId, v.PeriodId } into values
        from nv in values.DefaultIfEmpty()
        select new { Column = c.Label, Period = p.Label, Value = nv.Amount }

    And I have seen how to create a pivot in LINQ (here or here), assuming MyDatas is a view with the result of the previous query:

        from c in MyDatas
        group c by c.Column into line
        select new
        {
            Column = line.Key,
            Month1 = line.Where(l => l.Period == "Month 1").Sum(l => l.Value),
            Month2 = line.Where(l => l.Period == "Month 2").Sum(l => l.Value),
            Month3 = line.Where(l => l.Period == "Month 3").Sum(l => l.Value),
            Month4 = line.Where(l => l.Period == "Month 4").Sum(l => l.Value)
        }

    But I want to find a way to create a resultset where, if possible, the Month1, ... properties are dynamic. Note: a solution which results in an n+1 query:

        from c in MyDatas
        group c by c.Column into line
        select new
        {
            Column = line.Key,
            Months = from l in line
                     group l by l.Period into period
                     select new { Period = period.Key, Amount = period.Sum(l => l.Value) }
        }
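
    On the SQL side, the usual way to get a dynamic column list is to build the PIVOT statement at runtime; LINQ's anonymous types are fixed at compile time, so the dynamic part has to happen in SQL or in post-processing. A sketch assuming the three tables above and period labels like "Month 1":

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

        SELECT @cols = STUFF((
            SELECT ',' + QUOTENAME(Label)
            FROM MyPeriod
            ORDER BY PeriodId
            FOR XML PATH('')), 1, 1, '');   -- e.g. [Month 1],[Month 2],...

        SET @sql = N'SELECT [Column], ' + @cols + N'
        FROM (SELECT c.Label AS [Column], p.Label AS Period, v.Amount
              FROM MyValue v
              JOIN MyColumn c ON c.ColumnId = v.ColumnId
              JOIN MyPeriod p ON p.PeriodId = v.PeriodId) src
        PIVOT (SUM(Amount) FOR Period IN (' + @cols + N')) pvt;';

        EXEC sp_executesql @sql;

    Missing periods come back as NULL, matching the sample output above.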


  • Storing statistics of multiple data types in SQL Server 2008

    - by Mike
    I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc.). Currently I am using a single table to store these values as type varchar, with an extra field to denote the data type it should be. When I display a value, I use that data-type field to format it. I use sprocs to calculate the data for reporting, and the data-type field to convert to the appropriate type for the appropriate calculations. This approach works, but I don't like storing all kinds of data in a varchar field. The only alternative I can see is to have a separate table for each data type I want to store, and save each record to the appropriate table based on its data type. To retrieve, I run a case statement to join the appropriate table and get the data. That would work, but it seems like a lot of effort for ... what gain? I'm wondering if I'm missing something here. Is there a better way to do this? Thanks in advance!
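
    One alternative worth weighing before splitting into per-type tables is sql_variant, which keeps the base type with each value instead of flattening everything to varchar. A minimal sketch (table and column names here are made up for illustration):

        CREATE TABLE StatValue (
            StatId INT NOT NULL,
            Value  sql_variant NULL   -- stores int, decimal, datetime, ... along with type info
        );

        INSERT INTO StatValue VALUES (1, CONVERT(datetime, '2010-01-01'));
        INSERT INTO StatValue VALUES (2, CONVERT(decimal(10,4), 12.5));

        SELECT StatId,
               SQL_VARIANT_PROPERTY(Value, 'BaseType') AS BaseType,
               Value
        FROM StatValue;

    Sorting, indexing, and client-driver behavior with sql_variant have their own quirks, so it is a trade-off rather than a free win.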


  • Replacing XML reserved characters in SQL Server 2005

    - by Barn
    I'm working on a system that takes relational data from a SQL Server DB and uses SSIS to produce an XML extract using SQL Server 2005's FOR XML PATH command and a schema. The problem lies with replacing the XML reserved characters: FOR XML PATH only replaces <, >, and &, not ' and ", so I need a way of replacing these myself. I've tried pre-processing the fields in the database to replace XML reserved characters with their entitized equivalents (e.g. & becomes &amp;), but once these fields are used to construct XML using FOR XML, the leading & is escaped again, so I end up with &amp;amp; where I should have &amp;. What I've tried so far is altering the element's contents after the XML has been constructed, using XQuery inside SQL Server like so:

        DECLARE @data VARCHAR(MAX)
        SET @data = CONVERT(VARCHAR(MAX), [my xml column].query('data(/root/node_i_want)'))
        SELECT @data = [function to replace quotes etc](@data)
        SET [my xml column].modify('replace value of (/root/node_i_want)[1] with sql:variable("@data")')

    but I get the same problem. Essentially: is there something wrong with what I'm doing above, or is there a way to tell FOR XML to entitize other characters, or something like that? Basically anything short of having to write a program to change the XML after it has been assembled in large batches and saved to files!
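
    One hedged workaround, if the consumer really does require &quot; and &apos;, is to let FOR XML produce its (already valid) output and entitize the quotes in the serialized string afterwards, rather than pre-processing the source fields. A sketch, safe only if the generated XML contains no attributes whose delimiting quotes would also get replaced:

        DECLARE @out NVARCHAR(MAX);
        SET @out = CONVERT(NVARCHAR(MAX), @xml);     -- @xml holds the FOR XML result
        SET @out = REPLACE(@out, '"', '&quot;');
        SET @out = REPLACE(@out, '''', '&apos;');    -- '''' is an escaped single quote

    Strictly speaking, quotes do not need escaping in element content, which is why FOR XML leaves them alone; this post-processing only matters if a downstream parser insists on the entities.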


  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as primary key, especially an IDENTITY column, offers better performance than using a GUID or a char/varchar column as primary key. I try to use an IDENTITY key as primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a partitioned view, so the tables could not have an IDENTITY column, since that would make the partitioned view non-updatable. One workaround is to create a dummy 'keygenerator' table with an identity column to generate IDs for the primary key, but this would mean having a 'keygenerator' table for each partitioned view. My next thought was to use float as a primary key, based on the following key algorithm that I devised:

        DECLARE @KEY FLOAT
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here's how it works. CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since internally SQL Server represents a datetime as a float value. Dividing by 100000.0 pushes all digits to the right of the decimal point. Adding @EMP_ID, the employee ID, which is an integer, then gives each key a unique integer part. The logic is that the employee ID is guaranteed to be unique across sessions, since an employee cannot connect to the application more than once at the same time, and for the same employee each time a key is generated, the current datetime will be unique. In all, a key that is unique across all employee sessions and across time. So for Emp IDs 11 and 12, I have key values like 12.40046693321566357 and 11.40046693542361111. But my concern is whether the float data type as primary key offers any benefit compared to choosing GUID or char/varchar as primary key. Also important: because of the partitioning, the float column is going to be part of a composite key.
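
    One risk worth testing before committing to this scheme: datetime has roughly 3 ms granularity and float carries only about 15 significant digits, so two keys generated for the same employee within the same datetime tick collide. A quick demonstration (on most runs both calls land in the same tick):

        DECLARE @k1 FLOAT, @k2 FLOAT;
        SET @k1 = 11 + CONVERT(FLOAT, GETDATE()) / 100000.0;
        SET @k2 = 11 + CONVERT(FLOAT, GETDATE()) / 100000.0;
        SELECT @k1 AS key1, @k2 AS key2,
               CASE WHEN @k1 = @k2 THEN 'collision' ELSE 'distinct' END AS outcome;

    The 20-digit sample values shown above also exceed float precision, so the stored keys will not round-trip exactly as printed.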


  • Specifying ASP.NET MVC attributes for auto-generated data models

    - by Lyubomyr Shaydariv
    Hello to everyone. I'm very new to ASP.NET MVC (as well as ASP.NET in general) and am trying to gain some knowledge of this technology, so I'm sorry if I ask some trivial questions. I have installed ASP.NET MVC 3 RC1 and I'm trying to do the following. Let's say I have a model that's completely auto-generated from a table using the "LINQ to SQL Classes" template in VS2010. The template generates 3 files (two .cs files and one .layout file), and the generated partial class is expected to be used as an MVC model. A single DB column that's mapped into the model may look like this:

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name
        {
            get { return this._Name; }
            set
            {
                if ((this._Name != value))
                {
                    // ... generated stuff goes here
                }
            }
        }

    The ASP.NET MVC engine also provides a beautiful declarative way to specify additional metadata, like RequiredAttribute, DisplayNameAttribute and other nice attributes. But since the mapped model is purely auto-generated, I've realized that I should not change the model manually and specify the fields like:

        [Required]
        [DisplayName("Project name")]
        [StringLength(128)]
        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name { ...

    though this approach works perfectly... until I change the model in the DBML designer, which removes the ASP.NET MVC attributes automatically. So, how do I specify ASP.NET MVC attributes for DBML models and their fields safely? Thanks in advance, and Merry Christmas.


  • How to use a Common Table Expression and avoid duplicates in SQL Server

    - by vodkhang
    I have a table that references itself. User table: id, username, managerid, where managerid links back to id. Now I want to get all the managers of a user: the direct manager, the manager of the direct manager, and so on. The problem is that I do not want a recursive SQL that never stops, so I want to check whether an id is already in a list, and if it is, not include it again. Here is my SQL for that:

        with all_managers (id, username, managerid, idlist) as (
            select u1.id, u1.username, u1.managerid, ' '
            from users u1, users u2
            where u1.id = u2.managerid and u2.id = 6
            UNION ALL
            select u.id, u.username, u.managerid, idlist + ' ' + u.id
            from all_managers a, users u
            where a.managerid = u.id
              and charindex(cast(u.id as nvarchar(5)), idlist) != 0
        )
        select id, username from all_managers;

    The problem is in this line:

        select u1.id, u1.username, u1.managerid, ' '

    SQL Server complains that I cannot put ' ' as the initializer for idlist, and nvarchar(40) does not work either. I do not know how to declare it inside a common table expression like this one; usually, in DB2, I can just put varchar(40). My sample data:

        ID  UserName  ManagerID
        1   admin     1
        2   a         1
        3   b         1
        4   c         2

    What I want to do is find all managers of the user c. The result should be: admin, a, b. A user can be his own manager (like admin), because ManagerID does not allow NULL and some users have no direct manager. With a common table expression this can lead to infinite recursion, so I am trying to avoid that situation by never including the same id twice: for example, in the first iteration we already have id 1, so in the second iteration and later on, 1 should never be allowed again. I also want to ask whether my current approach is good, and about other solutions, because with a big database and a deep hierarchy I will have to initialize a big varchar to track the ids, and that consumes memory, right?
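
    A sketch of one way to type the accumulator column and keep the cycle check, assuming the users table above: CAST the anchor's string so the column gets a definite type, and keep only ids not already in the list (CHARINDEX ... = 0, with delimiters so id 1 does not match inside 11):

        WITH all_managers (id, username, managerid, idlist) AS (
            SELECT u.id, u.username, u.managerid,
                   CAST(',' + CONVERT(VARCHAR(10), u.id) + ',' AS VARCHAR(MAX))
            FROM users u
            JOIN users e ON u.id = e.managerid
            WHERE e.id = 6
            UNION ALL
            SELECT u.id, u.username, u.managerid,
                   a.idlist + CONVERT(VARCHAR(10), u.id) + ','
            FROM all_managers a
            JOIN users u ON a.managerid = u.id
            WHERE CHARINDEX(',' + CONVERT(VARCHAR(10), u.id) + ',', a.idlist) = 0
        )
        SELECT DISTINCT id, username FROM all_managers;

    The anchor's CAST is what fixes the "cannot initialize idlist" complaint: in a recursive CTE the column types come from the anchor member, so the accumulator must be given an explicit type there.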


  • sqlite3 no insert is done after --> insert into --> SQLITE_DONE

    - by Fra
    Hi all, I'm trying to insert some data into a table using sqlite3 on an iPhone. All the error codes I get from the various steps indicate that the operation should have been successful, but in fact the table remains empty... I think I'm missing something. Here is the code:

        sqlite3 *database = nil;
        NSString *dbPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"placemarks.sql"];
        if (sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK) {
            sqlite3_stmt *insert_statement = nil;
            // where int pk, varchar name, varchar description, blob picture
            static char *sql = "INSERT INTO placemarks (pk, name, description, picture) VALUES(99,'nnnooooo','dddooooo', '');";
            if (sqlite3_prepare_v2(database, sql, -1, &insert_statement, NULL) != SQLITE_OK) {
                NSAssert1(0, @"Error: failed to prepare statement with message '%s'.", sqlite3_errmsg(database));
            }
            int success = sqlite3_step(insert_statement);
            int finalized = sqlite3_finalize(insert_statement);
            NSLog(@"success: %i finalized: %i", success, finalized);
            NSAssert1(101, @"Error: failed to insert into the database with message '%s'.", sqlite3_errmsg(database));
        }

    sqlite3_step returns 101, i.e. SQLITE_DONE, which should be OK. If I execute the SQL statement on the command line it works properly. Anyone have an idea? Could it be that there's a problem writing placemarks.sql because it's in the resources folder? Rgds, Fra


  • Setting the comment of a column to that of another column in Postgresql

    - by dland
    Suppose I create a table in PostgreSQL with a comment on a column:

        create table t1 (
            c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate them with the new column:

        select comment_text from (what?) where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searched on the web I haven't learnt its name. Ideally I'd like to be able to write:

        comment on column t1.c1 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas.

    Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the data type of a PostgreSQL column. You can read about that on my blog.
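
    The catalog in question is pg_description, most easily reached through the built-in col_description() function. A sketch of the lookup, where pg_attribute resolves the column name to its position:

        SELECT col_description(
            't1'::regclass,
            (SELECT attnum FROM pg_attribute
             WHERE attrelid = 't1'::regclass AND attname = 'c1')
        ) AS comment_text;

    COMMENT ON cannot take a subquery, so copying the text onto another column still needs dynamic SQL (for example a DO block on newer PostgreSQL versions) or a small external script, which matches what the update to this question describes.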


  • Comparing values from a string in a MySQL query

    - by bellesebastien
    I'm having some trouble comparing values found in VARCHAR fields. I have a table with products, and each product has a volume. I store the volume in a VARCHAR field and it's usually a single number (30, 40, 200, ...), but there are products that have multiple volumes, stored separated by semicolons, like so: 30;60;80. I know that storing multiple volumes like that is not recommended, but I have to work with it as it is. I'm trying to implement a search-by-volume function for the products, and I also want to display the products that have a volume bigger than or equal to the one searched. This is not a problem for products with a single volume, but it is for multiple-volume products. Maybe an example will make things clearer: let's say a product has 30;40;70;80 in its volume field. If someone searches for a volume, let's say 50, I want that product to be displayed. To do this I was thinking of writing my own custom MySQL function (I've never done this before), but maybe someone can offer a different solution. I apologize for my poor English, but I hope I made my question clear. Thanks.
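
    If the semicolon-separated volumes are stored in ascending order, a custom function may be unnecessary: comparing against the last element is enough, since it is the largest. A sketch using SUBSTRING_INDEX, with table and column names assumed from the question:

        SELECT *
        FROM products
        WHERE CAST(SUBSTRING_INDEX(volume, ';', -1) AS UNSIGNED) >= 50;

    SUBSTRING_INDEX(volume, ';', -1) returns the text after the last semicolon, or the whole string when there is no semicolon, so single-volume rows work too. If the ordering cannot be trusted, the values would need to be split out, or normalized into a child table, which is the cleaner long-term fix.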


  • Inserting "null" (literally) in to a stored procedure parameter.

    - by Nazadus
    I'm trying to insert the word "Null" (literally) into a parameter for a stored procedure. For some reason SQL Server seems to think I mean NULL and not 'Null'. If I do a check for

        IF @LastName IS NULL -- Test: Do stuff

    then it bypasses that, because the parameter isn't null. But when I do:

        INSERT INTO Person (<params>) VALUES (<stuff here>, @LastName, <more stuff here>); -- LastName is 'Null'

    it bombs out saying that LastName doesn't accept nulls. I would seriously hate to have this last name, but someone does... and it's bombing the application. We're using SubSonic 2.0 (yeah, it's fairly old, but upgrading is painful) as our DAL, and stepping through it, I see it does create the parameters properly, from what I can tell. I've tried creating a temp table to see if I could replicate the problem manually, but it seems to work just fine. Here is the example I created:

        DECLARE @myval VARCHAR(50)
        SET @myval = 'Null'
        CREATE TABLE #mytable (name VARCHAR(50))
        INSERT INTO #mytable VALUES (@myval)
        SELECT * FROM #mytable
        DROP TABLE #mytable

    Any thoughts on how I can fix this?


  • How to put a foreign key constraint on a computed field in SQL Server?

    - by Asaf R
    Table A has a computed field called Computed1. It's persisted and not null, and it always computes to an expression which is char(50). It's also unique, with a unique key constraint on it. Table B has a field RefersToComputed1, which should refer to a valid Computed1 value. Trying to create a foreign key constraint on B's RefersToComputed1 that references A's Computed1 leads to the following error:

        Error SQL01268: .Net SqlClient Data Provider: Msg 1753, Level 16, State 0, Line 1
        Column 'B.RefersToComputed1' is not the same length or scale as referencing column
        'A.Computed1' in foreign key 'FK_B_A'. Columns participating in a foreign key
        relationship must be defined with the same length and scale.

    Q: Why does this error occur? Are there special measures needed for foreign keys on computed columns, and if so, what are they?

    Summary: the specific problem arises from char-based computed fields being typed as varchar. Hence Computed1 is varchar(50), not char(50). It's best to surround a computed field's expression with a cast to force it to a specific type. Credit goes to Cade Roux for this tip.
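
    To restate the fix as code: wrap the computed expression in an explicit CAST so the column's declared type matches the referencing column. A minimal sketch, with a hypothetical expression standing in for the real one:

        CREATE TABLE A (
            SomeCol   VARCHAR(100) NOT NULL,
            Computed1 AS CAST(LEFT(SomeCol, 50) AS CHAR(50)) PERSISTED NOT NULL,
            CONSTRAINT UQ_A_Computed1 UNIQUE (Computed1)
        );

        CREATE TABLE B (
            RefersToComputed1 CHAR(50) NOT NULL,
            CONSTRAINT FK_B_A FOREIGN KEY (RefersToComputed1)
                REFERENCES A (Computed1)
        );

    Without the CAST, string expressions tend to be inferred as varchar, which is exactly the length/scale mismatch the error message complains about.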


  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole thing, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NOCOUNT ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active = 1
            order by addedBy desc

        OPEN creditCards
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard)
            values (@cardId, @decryptedCardData)

            set @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
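
    Before reaching for parallelism, note that the nested cursors themselves force serial row-by-row work. A set-based sketch of the same logic, using the FOR XML PATH('') trick to concatenate each card's decrypted segments in valueOrder (same tables and certificate as the proc above):

        SELECT cc.cardId,
               (SELECT CONVERT(NVARCHAR(MAX),
                        DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
                FROM CreditCardData d
                WHERE d.cardid = cc.cardId
                ORDER BY d.valueOrder
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)') AS DecryptedCard
        FROM CreditCards cc
        WHERE cc.companyId = @companyId
          AND cc.Active = 1;

    A single set-based statement at least gives the optimizer the option of a parallel plan, which the WHILE loops never do; whether it actually parallelizes around DecryptByCert would need checking against the real data.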


  • Field-specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server, and I would like errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTyped table) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

        "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY

    These error messages need to be stored in some way such that, later in the process, an automated process can create reports with them, each message referencing a specific row and field (someone will need to go back, correct the data in the source system, and resubmit the Excel file). So ideally it would be inserted into a Failures table of some sort, containing the primary key of the failed row, the column name, and the error message.

    Question: can this be accomplished with SSIS, or some open-source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? A couple of approaches I've thought of using SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches, possibly even C#):

    1. Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table; then use a single update statement for each column, such that if that update fails I can insert a very specific error message for that column into the error messages table.

    2. Insert all the data as-is into the DataTyped table, but with duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update per column that sets SubmissionDate based on SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
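
    For the SQL-proc route, the per-column validation can also be done set-based rather than with a cursor: one INSERT ... SELECT per column that writes a row into the failures table for every value that will not convert. A sketch with hypothetical table and column names:

        INSERT INTO EtlFailure (SourceRowId, ColumnName, ErrorMessage)
        SELECT i.RowId,
               'SubmissionDate',
               '"' + i.SubmissionDate
                   + '" is not a valid date format for the submission date. Please use MM/DD/YYYY'
        FROM InitialTable i
        WHERE i.SubmissionDate IS NOT NULL
          AND ISDATE(i.SubmissionDate) = 0;   -- value cannot convert to datetime

    Rows with no failure logged then move into the DataTyped table with ordinary CONVERTs. The same pattern (ISNUMERIC, a LIKE pattern, a length check) repeats per column, which also makes it easy to generate the statements from a metadata table.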


  • Optimizing MySql query to avoid using "Using filesort"

    - by usef_ksa
    I need your help optimizing a query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is:

        select title
        from tag, article
        where tag = 'Riyad'
          and tag.article_id = article.id
        order by tag.article_id

    The table structures are the following.

    Tag table:

        CREATE TABLE `tag` (
            `tag` VARCHAR(30) NOT NULL,
            `article_id` INT NOT NULL,
            INDEX (`tag`)
        ) ENGINE = MYISAM;

    Article table:

        CREATE TABLE `article` (
            `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            `title` VARCHAR(60) NOT NULL
        ) ENGINE = MYISAM;

    Sample data:

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');

        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
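
    The filesort here typically comes from MySQL being unable to use one index both to filter on tag and to return the rows already ordered by article_id. Since the tag filter is a constant equality, a composite index can supply the order for free, which is a first thing to try:

        ALTER TABLE tag ADD INDEX idx_tag_article (tag, article_id);

        EXPLAIN SELECT a.title
        FROM tag t
        JOIN article a ON a.id = t.article_id
        WHERE t.tag = 'Riyad'
        ORDER BY t.article_id;

    With tag fixed by the WHERE clause, an index on (tag, article_id) is already sorted by article_id within that tag, so EXPLAIN should stop reporting "Using filesort".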


  • SQL Server 2008: CASE vs IF-ELSE-IF vs GOTO

    - by Saharsh Shah
    I have some rules in my application, and I have written the business logic of those rules in my procedure. While creating the procedure I came to know that a CASE statement won't work in my scenario, so I have tried two ways to perform the same operations (using IF-ELSE-IF or GOTO), shown below.

    Method 1, using IF-ELSE-IF conditions:

        DECLARE @V_RuleId SMALLINT;

        IF (@V_RuleId = 1)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 2)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 3)
        BEGIN
            /* My business logic */
        END
        /* ... */
        ELSE IF (@V_RuleId = 19)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 20)
        BEGIN
            /* My business logic */
        END

    Method 2, using a GOTO statement:

        DECLARE @V_RuleId SMALLINT, @V_Temp VARCHAR(100);
        SET @V_Temp = 'GOTO RULE' + CONVERT(VARCHAR, @V_RuleId);
        EXECUTE sp_executesql @V_Temp;

        RULE1:
        BEGIN
            /* My business logic */
        END
        RULE2:
        BEGIN
            /* My business logic */
        END
        RULE3:
        BEGIN
            /* My business logic */
        END
        /* ... */
        RULE19:
        BEGIN
            /* My business logic */
        END
        RULE20:
        BEGIN
            /* My business logic */
        END

    Today I have 20 rules; the number can grow to anything in the future. If I could use a CASE statement I would not have any performance problem, but I can't, so I am worried about the performance of my procedure. Note also that this procedure will be executed very frequently by the application. My questions are: Is there any way to use a CASE statement in my procedure? If not, which method is better for performance? Thanks in advance...
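
    A third pattern that keeps each rule's logic separate without a 20-branch IF ladder is one stored procedure per rule plus a tiny dispatcher; EXEC accepts a procedure name held in a variable. A sketch assuming a hypothetical usp_Rule<N> naming convention:

        DECLARE @V_ProcName SYSNAME;
        SET @V_ProcName = N'dbo.usp_Rule' + CONVERT(NVARCHAR(10), @V_RuleId);

        IF OBJECT_ID(@V_ProcName, 'P') IS NOT NULL
            EXEC @V_ProcName;   -- direct dispatch, no string-built batch needed
        ELSE
            RAISERROR('Unknown rule id %d', 16, 1, @V_RuleId);

    Each rule proc gets its own compiled plan, new rules are added without touching the dispatcher, and the per-call overhead is one metadata lookup, so the cost of scanning an ever-longer IF chain stops growing with the rule count.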


  • Is this an example of LINQ-to-SQL?

    - by Edward Tanguay
    I made a little WPF application with a SQL CE database. I built the following code with LINQ to get data out of the database, which was surprisingly easy, so I thought "this must be LINQ-to-SQL". Then I did "Add Item", added a "LINQ-to-SQL Classes" .dbml file, and dragged my table onto the Object Relational Designer, but it said: "The selected object uses an unsupported data provider." So then I questioned whether or not the following code actually is LINQ-to-SQL, since it does let me access data from my SQL CE database file, yet officially "LINQ-to-SQL" seems to be unsupported for SQL CE. So, is the following "LINQ-to-SQL" or not?

        using System.Linq;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Windows;

        namespace TestLinq22
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                    MainDB db = new MainDB(@"Data Source=App_Data\Main.sdf");
                    var customers = from c in db.Customers
                                    select new { c.FirstName, c.LastName };
                    TheListBox.ItemsSource = customers;
                }
            }

            [Database(Name = "MainDB")]
            public class MainDB : DataContext
            {
                public MainDB(string connection) : base(connection) { }
                public Table<Customers> Customers;
            }

            [Table(Name = "Customers")]
            public class Customers
            {
                [Column(DbType = "varchar(100)")]
                public string FirstName;
                [Column(DbType = "varchar(100)")]
                public string LastName;
            }
        }


  • Conditional update of records in a MySQL query

    - by Shakti Singh
    Hi, is there any single MySQL query which can update customer DOBs? I want to update the DOB of those customers whose DOB is greater than the current date. Example: if a customer has a DOB in 2034, update it to 1934; if 2068, update it to 1968. There was a bug in my system: if you entered a date earlier than 1970, it was stored 100 years in the future (so 1934 became 2034). The bug is solved now, but the customers with a wrong DOB still need fixing. All customers are stored in the customer_entity table, and entity_id is the customer id. Details as follows:

        desc customer_entity;

        Field             Type                  Null  Key  Default              Extra
        entity_id         int(10) unsigned      NO    PRI  NULL                 auto_increment
        entity_type_id    smallint(8) unsigned  NO    MUL  0
        attribute_set_id  smallint(5) unsigned  NO         0
        website_id        smallint(5) unsigned  YES   MUL  NULL
        email             varchar(255)          NO    MUL
        group_id          smallint(3) unsigned  NO         0
        increment_id      varchar(50)           NO
        store_id          smallint(5) unsigned  YES   MUL  0
        created_at        datetime              NO         0000-00-00 00:00:00
        updated_at        datetime              NO         0000-00-00 00:00:00
        is_active         tinyint(1) unsigned   NO         1

    The DOB is stored in the customer_entity_datetime table; the value column contains the DOB. This table also stores the values of all the other attributes (fname, lname, etc.); the attribute with attribute_id 11 is the DOB attribute.

        desc customer_entity_datetime;

        Field           Type                  Null  Key  Default              Extra
        value_id        int(11)               NO    PRI  NULL                 auto_increment
        entity_type_id  smallint(8) unsigned  NO    MUL  0
        attribute_id    smallint(5) unsigned  NO    MUL  0
        entity_id       int(10) unsigned      NO    MUL  0
        value           datetime              NO         0000-00-00 00:00:00

    Thanks.
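
    Assuming the affected attribute really is attribute_id 11 in customer_entity_datetime, a single UPDATE that pulls every future-dated birthday back a century could look like this (worth running inside a transaction, or against a backup first):

        UPDATE customer_entity_datetime
        SET value = DATE_SUB(value, INTERVAL 100 YEAR)
        WHERE attribute_id = 11
          AND value > NOW();   -- only DOBs in the future are wrong

    The WHERE clause matches exactly the rows described (DOB greater than the current date), so 2034 becomes 1934 and 2068 becomes 1968 while correct rows are untouched.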


  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just decided that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone into SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what: when I try to save it, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
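
    Since every affected column follows the same pattern, the N statements can be generated from the catalog instead of typed by hand, and the generated script can then be run against each database. A sketch using sys.columns (SQL Server 2005+):

        SELECT 'UPDATE MyTable SET ' + c.name + ' = 0 WHERE ' + c.name + ' IS NULL;'
             + ' ALTER TABLE MyTable ALTER COLUMN ' + c.name + ' BIT NOT NULL;'
             + ' ALTER TABLE MyTable ADD CONSTRAINT DF_' + c.name
             + ' DEFAULT 0 FOR ' + c.name + ';'
        FROM sys.columns c
        JOIN sys.types t ON t.user_type_id = c.user_type_id
        WHERE c.object_id = OBJECT_ID('MyTable')
          AND t.name = 'bit'
          AND c.is_nullable = 1;

    Because the UPDATE runs before the ALTER, the change happens in place and avoids Management Studio's drop-and-recreate script (and its attempt to reinsert the NULLs).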


  • Please help debug this ASP.NET [VB] code: trying to write to a text file from a SQL Server DB

    - by NJTechGuy
    I am a PHP programmer. I have no .NET coding experience (last saw it 4 years ago). I'm not interested in the code-behind model, since this is a quick temporary hack. What I am trying to do is generate an output.txt file whenever the user submits new data; an output.txt file, if it exists, should be replaced with the new one. I want to write data in this format:

        123|Java Programmer|2010-01-01|2010-02-03
        124|VB Programmer|2010-01-01|2010-02-03
        125|.Net Programmer|2010-01-01|2010-02-03

    I don't know VB, so I'm not sure about string manipulation. I hope a kind soul can help me with this. I will be grateful to you. Thank you :)

        <%@ Import Namespace="System.IO" %>
        <%@ Import Namespace="System.Data" %>
        <%@ Import Namespace="System.Data.SqlClient" %>

        <script language="vb" runat="server">
        sub Page_Load(sender as Object, e as EventArgs)
            Dim sqlConn As New SqlConnection("Data Source=winsqlus04.1and1.com;Initial Catalog=db28765269;User Id=dbo2765469;Password=ByhgstfH;")
            Dim myCommand As SqlCommand
            Dim dr As SqlDataReader
            Dim FILENAME as String = Server.MapPath("Output4.txt")
            Dim objStreamWriter as StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)
            Try
                sqlConn.Open() 'opening the connection
                myCommand = New SqlCommand("SELECT id, title, CONVERT(varchar(10), expirydate, 120) AS [expirydate], CONVERT(varchar(10), creationdate, 120) AS [createdate] from tblContact where flag = 0 AND ACTIVE = 1", sqlConn)
                'executing the command and assigning it to connection
                dr = myCommand.ExecuteReader()
                While dr.Read()
                    objStreamWriter.WriteLine("JobID: " & dr(0).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(2).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(3).ToString())
                End While
                dr.Close()
                sqlConn.Close()
            Catch x As Exception
            End Try
            objStreamWriter.Close()

            Dim objStreamReader as StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents as String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        end sub
        </script>

        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />
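
    Since the target format is one pipe-delimited line per record, one possible simplification is to build the line in the query itself, so the VB loop only has to write column 0. A sketch mirroring the SELECT already in the code (the actual column types in tblContact are assumed):

        SELECT CONVERT(varchar(10), id) + '|' + title + '|'
             + CONVERT(varchar(10), creationdate, 120) + '|'
             + CONVERT(varchar(10), expirydate, 120) AS line
        FROM tblContact
        WHERE flag = 0 AND ACTIVE = 1;

    The While loop then becomes a single objStreamWriter.WriteLine(dr(0).ToString()) per row, and using File.CreateText instead of File.AppendText gives the replace-on-each-run behavior described.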


  • Help with multiple table relationships in Zend Framework

    - by Zogi
    Hi guys, I have been doing some DB mapping to link two tables, to no avail. Every time I run the code I get the following error:

        Message: File "Role.php" does not exist or class "Role" was not found in the file
        Stack trace:
        #0 C:\wamp\www\zend\library\Zend\Db\Table\Row\Abstract.php(867): Zend_Db_Table_Row_Abstract->_getTableFromString('Role')
        #1 C:\wamp\www\uw\application\models\admin\User.php(56): Zend_Db_Table_Row_Abstract->findDependentRowset('Role')
        #2 C:\wamp\www\uw\application\controllers\AdminController.php(110): Application_Model_Admin_User->getUsers()
        #3 C:\wamp\www\zend\library\Zend\Controller\Action.php(513): AdminController->usersAction()
        #4 C:\wamp\www\zend\library\Zend\Controller\Dispatcher\Standard.php(289): Zend_Controller_Action->dispatch('usersAction')
        #5 C:\wamp\www\zend\library\Zend\Controller\Front.php(954): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #6 C:\wamp\www\zend\library\Zend\Application\Bootstrap\Bootstrap.php(97): Zend_Controller_Front->dispatch()
        #7 C:\wamp\www\zend\library\Zend\Application.php(366): Zend_Application_Bootstrap_Bootstrap->run()
        #8 C:\wamp\www\uwi\public\index.php(26): Zend_Application->run()
        #9 {main}

    Code and DB below.

    application/models/admin/User.php:

        class Application_Model_Admin_User extends Zend_Db_Table_Abstract
        {
            protected $_name = 'user';
            protected $_dependentTables = array('Role');

            public function getUsers()
            {
                $rows = $this->fetchAll($this->select()->where('active = ?', 1));
                $rows1 = $rows->current();
                $rows2 = $rows1->findDependentRowset('Role');
                return $rows2;
            }
        }

    application/models/admin/Role.php:

        class Application_Model_Admin_Role extends Zend_Db_Table_Abstract
        {
            protected $_name = 'role';
            protected $_referenceMap = array(
                'Role' => array(
                    'columns'       => array('id'),
                    'refTableClass' => 'User',
                    'refColumns'    => array('role_id')
                )
            );
        }

    DB tables:

        CREATE TABLE role (
            id integer auto_increment NOT NULL,
            name varchar(120),
            PRIMARY KEY(id)
        );

        CREATE TABLE user (
            id integer auto_increment NOT NULL,
            username varchar(120),
            PRIMARY KEY(id),
            FOREIGN KEY(role_id) REFERENCES role(id)
        );


  • T-SQL: most efficient rows-to-columns? Crosstab, FOR XML PATH, PIVOT

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema below, but the concept is similar) in both fixed-width and delimited formats. The below FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while:

        select orderid,
               REPLACE((
                   SELECT ' ' + CAST(ProductId as varchar)
                   FROM _details d
                   WHERE d.OrderId = o.OrderId
                   ORDER BY d.OrderId, d.DetailId
                   FOR XML PATH('')
               ), '&#x20;', '') as Products
        from _orders o

    I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with column names either, since the output of the child rows will be either a fixed-width or a delimited string. For example, given the following tables:

        OrderId  CustomerId
        1        1
        2        2
        3        3

        DetailId  OrderId  ProductId
        1         1        100
        2         1        158
        3         1        234
        4         2        125
        5         3        101
        6         3        105
        7         3        212
        8         3        250

    for an order I need to output:

        orderid  Products
        1        100 158 234
        2        125
        3        101 105 212 250

    or:

        orderid  Products
        1        100|158|234
        2        125
        3        101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2k5. Example setup:

        create table _orders (
            OrderId int identity(1,1) primary key nonclustered,
            CustomerId int
        )
        create table _details (
            DetailId int identity(1,1) primary key nonclustered,
            OrderId int,
            ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union select 2 union select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union select 1,158 union select 1,234
        union select 2,125 union select 3,105 union select 3,101
        union select 3,212 union select 3,250

    The FOR XML PATH query above outputs what I want, but it is very slow for large amounts of data: one of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.
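
    Two things tend to help this exact shape of query. First, a covering index, so each correlated subquery becomes a seek plus an ordered range scan instead of repeated scans of the 2-million-row table. Second, the FOR XML PATH(''), TYPE ... .value() form with STUFF, which handles delimiters and entity un-escaping without the REPLACE pass. A sketch against the sample schema above:

        CREATE NONCLUSTERED INDEX IX_details_OrderId
        ON _details (OrderId, DetailId)
        INCLUDE (ProductId);

        SELECT o.orderid,
               STUFF((SELECT '|' + CAST(d.ProductId AS varchar(10))
                      FROM _details d
                      WHERE d.OrderId = o.OrderId
                      ORDER BY d.DetailId
                      FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
                     1, 1, '') AS Products       -- STUFF drops the leading delimiter
        FROM _orders o;

    Swapping '|' for ' ' (with padding via REPLICATE) covers the fixed-width variant, and the .value() round trip also avoids surprises if the child data ever contains XML-reserved characters.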

