Search Results

Search found 10481 results on 420 pages for 'identity insert'.

Page 154/420 | < Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >

  • ValidateRequest="false" doesn't work when posting HTML values

    - by Ivan90
    I am developing a personal blog in ASP.NET MVC 1.0. The application has views such as "Insert Post" and "Edit Post", and I need to post a string containing HTML (entered in a textarea) back to the appropriate controller method. I've read that it's necessary to disable request validation, either directly on the page with the ValidateRequest="false" attribute or in the web.config file. Even so, whenever I submit HTML in the textarea I still get the "potentially dangerous Request.Form value" error. How can I use ValidateRequest to allow the form element containing HTML values to be posted?

    Read the article

  • Can anyone recommend a good BSS/OSS platform for a voip provider?

    - by john unkas
    We are a VoIP startup and want to launch a VoIP service. We already have the call control platform (BroadWorks), and we are wondering which BSS/OSS platform to use. Our options are to buy a turnkey solution (if one exists) or to glue together open-source and commercial offerings into a complete solution. The BSS components we're looking for are identity management, billing, rating, product catalogue, subscription management, reporting, etc.

    Read the article

  • The process finishes before all inserts complete

    - by Meysam Javadi
    Suppose this scenario: I have an XML file that is uploaded, and I read it into a DataTable with the ReadXml method. Now I want to insert that DataTable into the database, so for each row I build a T-SQL command that calls a stored procedure, for example:

        var command = string.Format("exec SyncToDB '{0}','{1}'",
                                    odatarow.ItemArray.GetValue(0),
                                    odatarow.ItemArray.GetValue(1));

    Up to here everything is fine, but when I run this code not all of the rows make it into the database; I think a timeout occurs during execution. Note: the number of rows is large (about 25,000). How can I solve this?
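
    One set-based alternative, as a sketch only - the procedure name, XML element layout, and target columns below are assumptions, since the real schema isn't shown: pass the whole XML document to SQL Server once and insert every row in a single statement, instead of issuing 25,000 separate exec calls, which is usually what causes the timeout.

        -- Hypothetical bulk procedure: insert all rows from an XML parameter
        -- shaped like <rows><row col0="..." col1="..."/></rows> (assumed layout).
        CREATE PROCEDURE dbo.SyncToDB_Bulk
            @data xml
        AS
        INSERT INTO dbo.TargetTable (Col0, Col1)        -- hypothetical target table/columns
        SELECT r.value('@col0', 'varchar(100)'),
               r.value('@col1', 'varchar(100)')
        FROM @data.nodes('/rows/row') AS t(r);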

    Read the article

  • Dynamic upsert in postgresql

    - by Daniel
    I have this upsert function that allows me to modify the fill_rate column of a row:

        CREATE FUNCTION upsert_fillrate_alarming(integer, boolean) RETURNS VOID AS '
        DECLARE
            num ALIAS FOR $1;
            dat ALIAS FOR $2;
        BEGIN
            LOOP
                -- First try to update.
                UPDATE alarming SET fill_rate = dat WHERE equipid = num;
                IF FOUND THEN
                    RETURN;
                END IF;
                -- Since it is not there, we try to insert the key.
                -- Notice that if we had a concurrent key insertion we would error.
                BEGIN
                    INSERT INTO alarming (equipid, fill_rate) VALUES (num, dat);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- Loop and try the update again.
                END;
            END LOOP;
        END;
        ' LANGUAGE 'plpgsql';

    Is it possible to modify this function to take a column argument as well? Extra bonus points if there is a way to modify the function to take a column and a table.
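
    A sketch of one way to make the column (and table) dynamic, assuming the key column is still equipid, the dynamic column holds a boolean, and PostgreSQL is 8.4 or later (for EXECUTE ... USING). Note that EXECUTE does not set FOUND, so the row count has to be read with GET DIAGNOSTICS:

        CREATE OR REPLACE FUNCTION upsert_dynamic(tbl text, col text, num integer, dat boolean)
        RETURNS void AS $$
        DECLARE
            rows_affected integer;
        BEGIN
            LOOP
                -- Try the update first, building the statement dynamically.
                EXECUTE 'UPDATE ' || quote_ident(tbl)
                     || ' SET ' || quote_ident(col) || ' = $2 WHERE equipid = $1'
                USING num, dat;
                GET DIAGNOSTICS rows_affected = ROW_COUNT;
                IF rows_affected > 0 THEN
                    RETURN;
                END IF;
                -- The row is not there yet, so try to insert it.
                BEGIN
                    EXECUTE 'INSERT INTO ' || quote_ident(tbl)
                         || ' (equipid, ' || quote_ident(col) || ') VALUES ($1, $2)'
                    USING num, dat;
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- A concurrent insert beat us; loop and retry the update.
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;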

    Read the article

  • Problems inserting file data into sqlite database using python

    - by tylerc230
    I'm trying to open an image file in Python and add that data to an SQLite table. I created the table using:

        CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "description" VARCHAR, "image" BLOB);

    I am trying to add the image to the db using:

        imageFile = open(imageName, 'rb')
        b = sqlite3.Binary(imageFile.read())
        targetCursor.execute("INSERT INTO images (image) values(?)", (b,))
        targetCursor.execute("SELECT id from images")
        for id in targetCursor:
            imageid = id[0]
        targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid))

    When I print the value of 'b' it looks like binary data, but when I call 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong?

    Read the article

  • NHibernate - Saving simple parent-child relationship generates unnecessary selects with assigned id

    - by Alice
    Entities:

        public class Parent
        {
            virtual public long Id { get; set; }
            virtual public string Description { get; set; }
            virtual public ICollection<Child> Children { get; set; }
        }

        public class Child
        {
            virtual public long Id { get; set; }
            virtual public string Description { get; set; }
            virtual public Parent Parent { get; set; }
        }

    Mappings:

        public class ParentMap : ClassMap<Parent>
        {
            public ParentMap()
            {
                Id(x => x.Id).GeneratedBy.Assigned();
                Map(x => x.Description);
                HasMany(x => x.Children)
                    .AsSet()
                    .Inverse()
                    .Cascade.AllDeleteOrphan();
            }
        }

        public class ChildMap : ClassMap<Child>
        {
            public ChildMap()
            {
                Id(x => x.Id).GeneratedBy.Assigned();
                Map(x => x.Description);
                References(x => x.Parent)
                    .Not.Nullable()
                    .Cascade.All();
            }
        }

    and usage:

        using (var session = sessionFactory.OpenSession())
        using (var transaction = session.BeginTransaction())
        {
            var parent = new Parent { Id = 1 };
            parent.Children = new HashSet<Child>();
            var child1 = new Child { Id = 2, Parent = parent };
            var child2 = new Child { Id = 3, Parent = parent };
            parent.Children.Add(child1);
            parent.Children.Add(child2);
            session.Save(parent);
            transaction.Commit();
        }

    This code generates the following SQL:

        NHibernate: SELECT child_.Id, child_.Description as Descript2_0_, child_.Parent_id as Parent3_0_ FROM [Child] child_ WHERE child_.Id=@p0;@p0 = 2 [Type: Int64 (0)]
        NHibernate: SELECT child_.Id, child_.Description as Descript2_0_, child_.Parent_id as Parent3_0_ FROM [Child] child_ WHERE child_.Id=@p0;@p0 = 3 [Type: Int64 (0)]
        NHibernate: INSERT INTO [Parent] (Description, Id) VALUES (@p0, @p1);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)]
        NHibernate: INSERT INTO [Child] (Description, Parent_id, Id) VALUES (@p0, @p1, @p2);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)], @p2 = 2 [Type: Int64 (0)]
        NHibernate: INSERT INTO [Child] (Description, Parent_id, Id) VALUES (@p0, @p1, @p2);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)], @p2 = 3 [Type: Int64 (0)]

    Why are these two SELECTs generated, and how can I remove them?

    Read the article

  • Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)

    - by Nick
    The stored procedure is failing at the location below. Thanks for all your help.

        --Insert MSOrg Information
        DECLARE @PersonnelNumber int, @MSOrg varchar(255)

        DECLARE csr CURSOR FAST_FORWARD FOR
            SELECT PersonnelNumber FROM Person
        OPEN csr
        FETCH NEXT FROM csr INTO @PersonnelNumber
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC GetMSOrg @PersonnelNumber, @MSOrg out

            INSERT INTO PersonSubject (PersonnelNumber, SubjectID, SubjectValue, Created, Updated)
            SELECT @PersonnelNumber, SubjectID, @MSOrg, getDate(), getDate()
            FROM Subject
            WHERE DisplayName = 'MS Org'

            FETCH NEXT FROM csr INTO @PersonnelNumber
        END
        CLOSE csr
        DEALLOCATE csr

    Below is the definition of the stored procedure GetMSOrg; it fails in the third condition:

        CREATE PROCEDURE [dbo].[GetMSOrg]
        (
            @PersonnelNumber int,
            @OrgTerm varchar(200) out
        )
        AS
        DECLARE @MDRTermID int, @ReportsToPersonnelNbr int

        --Check to see if we have reached the top of the chart
        SELECT @ReportsToPersonnelNbr = ReportsToPersonnelNbr
        FROM ReportsTo
        WHERE PersonnelNumber = @PersonnelNumber

        IF (@ReportsToPersonnelNbr IS NULL) --Reached the top of the org ladder
        BEGIN
            SET @OrgTerm = 'Non-standard rollup'
        END
        ELSE IF (@PersonnelNumber IN (SELECT PersonnelNumber FROM OrgTermMap))
        BEGIN
            SELECT @OrgTerm = s.Term
            FROM OrgTermMap tm
            JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
            WHERE tm.PersonnelNumber = @PersonnelNumber
        END
        ELSE
        BEGIN
            SELECT @MDRTermID = tm.OrgTermID
            FROM ReportsTo r
            JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
            WHERE r.PersonnelNumber = @PersonnelNumber

            IF (@MDRTermID IS NULL)
            BEGIN
                EXEC GetMSOrg @ReportsToPersonnelNbr, @OrgTerm out
            END
            ELSE
            BEGIN
                SELECT @OrgTerm = Term
                FROM Taxonomy..StaticHierarchy
                WHERE VocabID = 118 AND TermID = @MDRTermID
            END
        END
        GO
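
    The nesting error comes from GetMSOrg calling itself once per level of the reporting chain, so a hierarchy deeper than 32 levels (or a cycle in ReportsTo) hits SQL Server's limit. Below is a sketch of an iterative rewrite that walks up the chain in a WHILE loop instead of recursing; the procedure name GetMSOrg_Loop and the depth cap are assumptions, not part of the original schema.

        CREATE PROCEDURE dbo.GetMSOrg_Loop          -- hypothetical name
        (
            @PersonnelNumber int,
            @OrgTerm varchar(200) out
        )
        AS
        DECLARE @Current int, @ReportsToPersonnelNbr int, @MDRTermID int, @Depth int
        SET @Current = @PersonnelNumber
        SET @Depth = 0

        WHILE @Depth < 100                          -- safety cap in case ReportsTo contains a cycle
        BEGIN
            SET @Depth = @Depth + 1
            SET @ReportsToPersonnelNbr = NULL
            SET @MDRTermID = NULL

            SELECT @ReportsToPersonnelNbr = ReportsToPersonnelNbr
            FROM ReportsTo
            WHERE PersonnelNumber = @Current

            IF (@ReportsToPersonnelNbr IS NULL)     -- reached the top of the org ladder
            BEGIN
                SET @OrgTerm = 'Non-standard rollup'
                RETURN
            END

            IF (@Current IN (SELECT PersonnelNumber FROM OrgTermMap))
            BEGIN
                SELECT @OrgTerm = s.Term
                FROM OrgTermMap tm
                JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
                WHERE tm.PersonnelNumber = @Current
                RETURN
            END

            SELECT @MDRTermID = tm.OrgTermID
            FROM ReportsTo r
            JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
            WHERE r.PersonnelNumber = @Current

            IF (@MDRTermID IS NOT NULL)
            BEGIN
                SELECT @OrgTerm = Term
                FROM Taxonomy..StaticHierarchy
                WHERE VocabID = 118 AND TermID = @MDRTermID
                RETURN
            END

            -- No term at this level: move one step up the chain instead of recursing.
            SET @Current = @ReportsToPersonnelNbr
        END

        SET @OrgTerm = 'Non-standard rollup'        -- gave up after 100 levels (likely a cycle)
        GO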

    Read the article

  • Importing HTML into TinyMCE using ColdFusion

    - by knawlejj
    Hey everyone, I would appreciate a pointer in the right direction with the problem I'm having. In short, I'm working on an application that will create PDFs using TinyMCE and ColdFusion 8. I have the ability to create a PDF by just entering text, pictures, etc. However, I want to be able to import an HTML template and insert it into the TinyMCE editor. Basically, I have a file-directory code snippet that lets me browse through my 'HTMLTemplates' folder and select an HTML document. Now, I want to be able to take all the code from that selected HTML document and insert it into my TinyMCE box. Any tips on how I might do this? Thanks!

    Read the article

  • sql server procedure optimization

    - by stackoverflow
    SQL Server 2005.

    Option 1:

        CREATE TABLE #test (customerid INT, orderdate DATETIME, field1 INT, field2 INT, field3 INT)

        CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid)
        CREATE INDEX Idx2 ON #test(field1 DESC)
        CREATE INDEX Idx3 ON #test(field2 DESC)
        CREATE INDEX Idx4 ON #test(field3 DESC)

        INSERT INTO #test (customerid, orderdate, field1, field2, field3)
        SELECT customerid, orderdate, field1, field2, field3
        FROM ATABLERETURNING4000000ROWS

    compared to Option 2:

        CREATE TABLE #test (customerid INT, orderdate DATETIME, field1 INT, field2 INT, field3 INT)

        INSERT INTO #test (customerid, orderdate, field1, field2, field3)
        SELECT customerid, orderdate, field1, field2, field3
        FROM ATABLERETURNING4000000ROWS

        CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid)
        CREATE INDEX Idx2 ON #test(field1 DESC)
        CREATE INDEX Idx3 ON #test(field2 DESC)
        CREATE INDEX Idx4 ON #test(field3 DESC)

    When we use the second option it runs close to 50% faster. Why is this?

    Read the article

  • Which workaround to use for the following SQL deadlock?

    - by Marko
    I found a SQL deadlock scenario in my application during concurrency. I believe the two statements that cause the deadlock are (note: I'm using LINQ2SQL and DataContext.ExecuteCommand(), which is where this.studioId.ToString() comes into play):

        exec sp_executesql N'INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId])
        SELECT @p0, [t0].[Id]
        FROM [dbo].[UpdatedRows] AS [t0]
        WHERE NOT (EXISTS(
            SELECT NULL AS [EMPTY]
            FROM [dbo].[ReceivedUpdatedRows] AS [t1]
            WHERE ([t1].[StudioId] = @p0) AND ([t1].[UpdatedRowId] = [t0].[Id])
            ))',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "';

    and

        exec sp_executesql N'INSERT INTO HQ.dbo.ReceivedUpdatedRows ([UpdatedRowId], [StudioId], [ReceiveDateTime])
        SELECT [t0].[UpdatedRowId], @p0, GETDATE()
        FROM [dbo].[SynchronizingRows] AS [t0]
        WHERE ([t0].[StudioId] = @p0)',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "';

    The basic logic of my (client-server) application is this:

    1. Every time someone inserts or updates a row on the server side, I also insert a row into the table UpdatedRows, specifying the RowId of the modified row.
    2. When a client tries to synchronize data, it first copies all of the rows in the UpdatedRows table that don't have a reference row for that specific client in the table ReceivedUpdatedRows into the table SynchronizingRows (the first statement taking part in the deadlock).
    3. During the synchronization I look for modified rows via lookups against the SynchronizingRows table. This step is required; otherwise, if someone inserts new rows or modifies rows on the server side during synchronization, I will miss them and won't get them during the next synchronization (the full explanation is too long to write here...).
    4. Once synchronization is complete, I insert rows into the ReceivedUpdatedRows table specifying that this client has received the UpdatedRows contained in the SynchronizingRows table (the second statement taking part in the deadlock).
    5. Finally, I delete all rows from the SynchronizingRows table that belong to the current client.

    The way I see it, the deadlock is occurring on the tables SynchronizingRows (abbreviation SR) and ReceivedUpdatedRows (abbreviation RUR) during steps 2 and 4: one client is in step 2, inserting into SR and selecting from RUR, while another client is in step 4, inserting into RUR and selecting from SR. I googled a bit about SQL deadlocks and came to the conclusion that I have three options. In order to make a decision I need more input about each option/workaround.

    Workaround 1: The first advice given on the web about SQL deadlocks - restructure the tables/queries so that deadlocks don't happen in the first place. The only problem with this is that I don't see a way to do the synchronization logic any differently. If someone wishes to delve deeper into my current synchronization logic - how and why it is set up the way it is - I'll post a link to the explanation. Perhaps, with the help of someone smarter than me, it's possible to create a logic that is deadlock free.

    Workaround 2: The second most common advice seems to be the use of the WITH(NOLOCK) hint. The problem with this is that NOLOCK might miss or duplicate some rows. Duplication is not a problem, but missing rows is catastrophic! Another option is the WITH(READPAST) hint. On the face of it, this seems to be a perfect solution. I really don't care about rows that other clients are inserting/modifying, because each row belongs only to a specific client, so I may very well skip locked rows. But the MSDN documentation makes me a bit worried: "When READPAST is specified, both row-level and page-level locks are skipped". As I said, row-level locks would not be a problem, but page-level locks may very well be, since a page might contain rows that belong to multiple clients (including the current one). While there are lots of blog posts specifically mentioning that NOLOCK might miss rows, there seem to be none about READPAST (never) missing rows. This makes me skeptical and nervous about implementing it, since there is no easy way to test it (implementing it would be a piece of cake - just pop WITH(READPAST) into both statements' SELECT clauses and the job is done). Can someone confirm whether the READPAST hint can miss rows?

    Workaround 3: The final option is to use ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT. This seems to be the only option that works 100% - at least I can't find any information that contradicts it. But it is a little bit trickier to set up (I don't care much about the performance hit), because I'm using LINQ. Off the top of my head, I probably need to manually open a SQL connection and pass it to the LINQ2SQL DataContext, etc... I haven't looked into the specifics very deeply.

    Mostly I would prefer option 2 if someone could reassure me that READPAST will never miss rows belonging to the current client (as I said before, each client has and only ever deals with its own set of rows). Otherwise I'll likely have to implement option 3, since option 1 is probably impossible...

    I'll post the table definitions for the three tables as well, just in case:

        CREATE TABLE [dbo].[UpdatedRows](
            [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
            [RowId] [uniqueidentifier] NOT NULL,
            [UpdateDateTime] [datetime] NOT NULL,
        ) ON [PRIMARY]
        GO
        CREATE NONCLUSTERED INDEX IX_RowId ON dbo.UpdatedRows ([RowId] ASC)
            WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

        CREATE TABLE [dbo].[ReceivedUpdatedRows](
            [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY NONCLUSTERED,
            [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]),
            [StudioId] [uniqueidentifier] NOT NULL REFERENCES,
            [ReceiveDateTime] [datetime] NOT NULL,
        ) ON [PRIMARY]
        GO
        CREATE CLUSTERED INDEX IX_Studios ON dbo.ReceivedUpdatedRows ([StudioId] ASC)
            WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

        CREATE TABLE [dbo].[SynchronizingRows](
            [StudioId] [uniqueidentifier] NOT NULL,
            [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]),
            PRIMARY KEY CLUSTERED ([StudioId], [UpdatedRowId])
        ) ON [PRIMARY]
        GO

    PS! Studio = Client.
    PS2! I just noticed that the index definitions have ALLOW_PAGE_LOCKS = ON. If I turned it off, would that make any difference to READPAST? Are there any downsides to turning it off?
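
    For workaround 3, a minimal sketch of the database-level switch, assuming the database really is named HQ as in the statements above. Note that the second ALTER needs exclusive access, so it has to run when no other connections are active (or with a termination option, as below):

        -- Allow explicit SNAPSHOT isolation and make plain READ COMMITTED use row versioning,
        -- so the two INSERT ... SELECT statements read a consistent snapshot instead of blocking.
        ALTER DATABASE HQ SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE HQ SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;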

    Read the article

  • converting date data type into varchar

    - by Sheetal Inani
    I have some date fields in a table. These columns contain dates in a format like this:

        31/12/2010 00:00:00:0000

    I need to import these values into a table that stores the month name as varchar and the year as numeric:

        MonthName varchar
        Year numeric(4,0)

    Currently I'm using:

        INSERT INTO [School].[dbo].[TeacherAttendenceDet] ([TeacherCode], [MonthName], [Year])
        (SELECT MAX(employeecode),
                Datename(MONTH, dateofjoining) AS MONTH,
                Datepart(YEAR, dateofjoining) AS DATE
         FROM employeedet
         GROUP BY dateofjoining)

    but Datename() gives the result in a date format, and I have to save it in varchar format. How can I do this?

    This is the employeemast table:

        EmployeeCode numeric(5, 0)
        PayScaleCode numeric(7, 0)
        DesignationCode varchar(50)
        CityCode numeric(5, 0)
        EmployeeName varchar(50)
        FatherName varchar(50)
        BirthDate varchar(50)
        DateOfJoining varchar(50)
        Address varchar(150)

    This is the TeacherAttendenceDet table:

        TeacherCode numeric(5, 0)
        Year numeric(4, 0)
        MonthName varchar(12)

    I have to insert into the TeacherAttendenceDet table the month name and year taken from employeemast.
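
    A sketch of one way to do the conversion, assuming DateOfJoining is stored as varchar with the day first (dd/mm/yyyy ...), as in the example above: style 103 tells CONVERT to read the value day-first, and DATENAME then returns the month name as a character string that fits a varchar(12) column.

        INSERT INTO [School].[dbo].[TeacherAttendenceDet] ([TeacherCode], [MonthName], [Year])
        SELECT MAX(EmployeeCode),
               DATENAME(MONTH, CONVERT(datetime, LEFT(DateOfJoining, 10), 103)),   -- month name, e.g. 'December'
               DATEPART(YEAR,  CONVERT(datetime, LEFT(DateOfJoining, 10), 103))    -- numeric year, e.g. 2010
        FROM employeedet
        GROUP BY DateOfJoining;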

    Read the article

  • changing order of items in tkinter listbox

    - by user1104854
    Is there an easier way to change the order of items in a tkinter listbox than deleting the values for a specific key and then re-entering new info? For example, I want to be able to re-arrange items in a listbox. If I want to swap the positions of two items, this is what I've done. It works, but I just want to see if there's a quicker way to do it.

        def moveup(self, selection):
            value1 = int(selection[0]) - 1                 # value to be moved down one position
            value2 = selection                             # value to be moved up one position
            nameAbove = self.fileListSorted.get(value1)    # name to be moved down
            nameBelow = self.fileListSorted.get(value2)    # name to be moved up
            self.fileListSorted.delete(value1, value1)
            self.fileListSorted.insert(value1, nameBelow)
            self.fileListSorted.delete(value2, value2)
            self.fileListSorted.insert(value2, nameAbove)

    Read the article

  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little too trigger-happy on when running INSERT scripts to set it up. The schema looks like this:

        ID UNIQUEIDENTIFIER
        TYPE_INT SMALLINT
        SYSTEM_VALUE SMALLINT
        NAME VARCHAR
        MAPPED_VALUE VARCHAR

    It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and ID is not. Now, I could probably make a script to clean this up that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, truncates the original table and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
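
    A sketch of the simpler approach (SQL Server 2005 or later; the table name lookup_values below is hypothetical, since the real name isn't given): number the rows within each group of identical values and delete everything past the first row of each group.

        ;WITH numbered AS (
            SELECT ROW_NUMBER() OVER (
                       PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                       ORDER BY ID) AS rn
            FROM dbo.lookup_values        -- hypothetical table name
        )
        DELETE FROM numbered
        WHERE rn > 1;                     -- keep one row per distinct combination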

    Read the article

  • How do I make my wordpress post appear in a mouse hover preview?

    - by Dan
    Mouse hover previews usually only show the title and image. Instead, I want the entire WordPress post to show. The code that calls the preview is this:

        adTitle = jQuery(this).find('img').attr('alt');
        jQuery('body').append("<div id='preview'><a href='"+ this.href +"' class='colorbox-thumb'><img src='"+ this.href +"' alt='' /></a><p>"+ adTitle +"</p></div>");
        jQuery('#preview')
            .fadeIn('fast')

    I want to insert a WordPress post instead. I have tried to insert the PHP code after append("

    Read the article

  • How to hide the image? How can I do it?

    - by user309381
        function Psend() {
            new Ajax.Request('Handler.ashx', {
                method: 'get',
                onSuccess: function(transport) {
                    var response = transport.responseText || "no response text";
                    //alert("Success! \n\n" + response);
                    var obj = response.evalJSON(true);
                    for (i = 0; i < 4; i++) {
                        DeCheBX = $('MyDiv').insert(new Element('input', {
                            'type': 'checkbox',
                            'id': "img" + obj[i].Nam,
                            'value': obj[i].IM,
                            'onClick': 'SayHi(this)'
                        }));
                        DeImg = $('MyDiv').insert(new Element('img', {
                            'id': "img" + obj[i].Nam,
                            'src': obj[i].IM,
                            'style': 'display = inline',
                            'onClick': 'Say(this)'
                        }));
                        document.body.appendChild(DeCheBX);
                        document.body.appendChild(DeImg);
                    }
                },
                onFailure: function() { alert('Something went wrong...') }
            });

            SayHi = function(x) {
                if ($(x).checked == true) {
                    // $('id').hide();
                    $('img' + i).style.visibility = "hidden";  // doesn't work
                }
            };

    Read the article

  • Would this rollback/stop all records from inserting?

    - by chobo2
    Hi, I've been going through this tutorial http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx and made an SP like this:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        exec sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO TBL_TEST_TEST(NAME)
        SELECT XMLProdTable.NAME
        FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
             WITH (
                 ID Int,
                 NAME varchar(100)
             ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    Now my requirements call for a mass insert and a mass update, one after another. So first I am wondering: can I merge those into one SP? I am not sure how it works with OPENXML, but I would think it is just a matter of making sure the XPath is right.

    Next, what happens if something goes wrong while this combined SP is running? Would it roll back all the records, or would it just stop, leaving the records inserted before the crash in place?
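
    A sketch of how the combined procedure body could be made all-or-nothing (the mass-update part is only indicated by a comment, since its statement isn't shown): without an explicit transaction each individual statement is atomic on its own, so rows inserted by an earlier statement would remain if a later one failed; wrapping both in TRY/CATCH with one transaction rolls everything back together.

        BEGIN TRY
            BEGIN TRANSACTION;

            EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData;

            -- mass insert
            INSERT INTO TBL_TEST_TEST (NAME)
            SELECT XMLProdTable.NAME
            FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
                 WITH (ID Int, NAME varchar(100)) XMLProdTable;

            -- mass update against the same document would go here

            EXEC sp_xml_removedocument @hDoc;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;

            -- re-raise so the caller still sees the failure
            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END CATCH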

    Read the article

  • jquery: add html after a group of divs

    - by Omu
    I have something like this:

        ...
        <div id="d120" >content</div>
        <div id="d123" >content</div>
        <div id="d112" >content</div>
        <div id="d145" >content</div>
        <div id="d134" >content</div>
        //Insert here hello world
        <div id="bla" >asd</div>
        <div id="footer" >asd</div>

    Does anybody know how to insert HTML after all the divs that have an id like d + number?

    Read the article

  • Merging tables in MySQL - sum up columns

    - by Alan Williamson
    I have an interesting problem that I am sure has a simple answer, but I can't seem to find it in the docs. I have two separate database tables, on different servers. They have identical schemas with the same primary keys. I want to merge the tables together on one server, but if a row in Server1.Table1 also exists in Server2.Table2, I want to sum up the totals in the columns I specify.

        Table1{ column_pk, counter };
        "test1", 3
        "test2", 4

        Table2{ column_pk, counter };
        "test1", 5
        "test2", 6

    So after the merge I want:

        "test1", 8
        "test2", 10

    Basically I need to do a mysqldump, but instead of it kicking out raw INSERT statements, I need it to produce INSERT .. ON DUPLICATE KEY UPDATE statements. What are my options? I appreciate any input, thank you.
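
    One possible sketch, assuming both tables have first been brought onto the same server (for example by restoring Server2's dump into a separate schema named server2_copy - that name is made up here): the summing can then be done with a single INSERT ... SELECT.

        -- For every row coming from the copy of Server2's table, either insert it
        -- or, if the primary key already exists, add the counters together.
        INSERT INTO Table1 (column_pk, counter)
        SELECT column_pk, counter
        FROM server2_copy.Table2
        ON DUPLICATE KEY UPDATE
            counter = counter + VALUES(counter);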

    Read the article

  • Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?

    - by Chad Sellers
    I'm in the process of creating unique customer IDs to serve as alternative IDs for external use. After adding a new column "cust_uid" with datatype INT for my unique IDs, when I do an INSERT into this new column:

        INSERT INTO Customers(cust_uid)
        SELECT ABS(CHECKSUM(NEWID()))

    I get an error:

        Could not create an acceptable cursor.
        OLE DB provider "SQLNCLI" for linked server "SHQ2IIS1" returned message
        "Multiple-step OLE DB operation generated errors. Check each OLE DB status value,
        if available. No work was done.

    I've checked all data types on both tables, and the only thing that has changed is the new column in both tables. The update is being done on one big @$$ table... and for reasons above my pay grade, we would like the new UIDs to be different from the ones we currently have, "so users don't know how many accounts we actually have." Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?
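
    For what it's worth, CHECKSUM() returns int, so ABS(CHECKSUM(NEWID())) is itself an int expression and fits an INT column (the one theoretical edge case is ABS() overflowing if CHECKSUM ever returned exactly -2147483648). A throwaway check like the sketch below shows the base type; it does not address the linked-server cursor error itself:

        SELECT ABS(CHECKSUM(NEWID())) AS uid_candidate,
               SQL_VARIANT_PROPERTY(ABS(CHECKSUM(NEWID())), 'BaseType') AS base_type;  -- returns 'int'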

    Read the article

  • Can't update date in aspx to a MS-ACCESS table

    - by Bjork
    Hello, I'm having a problem with updating dates. I insert the date in the C# part like this:

        string strSQL = "INSERT into Frettir (CreatedBy,CreatedOn,Title,Description,Starts,Ends,CatId,SectionId,ArticleExt,Myndatexti,MyndUrAlbumi,NrMyndar) values(?,?,?,?,?,?,?,?,?,?,?,?)";
        cmd.Parameters.Add("@Starts", OleDbType.Date).Value = dstartdate;

    but I update in the aspx part like this:

        UpdateCommand="UPDATE [Frettir] SET [Title]=@Title,[Description]=@Description,[CreatedBy]=@notandaID,[ArticleExt]=@ArticleExt, [Myndatexti]=@Myndatexti,[Starts]=@Starts WHERE [ArticleID]=@id2 "

    I get an error: "Data type mismatch in criteria expression". It seems there is some type difference between the value supplied in the C# part and the one in the aspx part. Can anyone help me with this?

    Read the article

  • SQLite doesn't write a column

    - by user1904675
    I do this:

        DatabaseHelper dbHelper = new DatabaseHelper(context);
        dbHelper.getWritableDatabase();
        String sql = "insert into " + getTableName()
                   + " (" + DatabaseHelper.PRODUCT_MARK + "," + DatabaseHelper.PRODUCT_NAME + ")"
                   + " VALUES ('" + input.getMark() + "','" + input.getName() + "')";
        System.out.println(sql);
        getDatabase().execSQL(sql);
        dbHelper.close();

    The system prints:

        12-14 16:53:33.857: I/System.out(1350): insert into product (pMark,name) VALUES ('aaaaa ','zz')

    But when I read from the db, the mark property is not populated... Where is my mistake?

    Read the article

  • Oracle ORA-01401 error

    - by Sachin Chourasiya
    I have a stored procedure in Oracle 9i that inserts records into a table. The table has a primary key built to ensure duplicate rows do not exist. I insert a record by calling this stored procedure and it works properly the first time. I then try to insert a duplicate record, expecting a unique constraint violation error, but instead I get:

        ORA-01401 inserted value too large for column

    I know what the error means, but my question is: if the inserted value is really too large, how did the first attempt succeed?

    Read the article
