Search Results

Search found 8267 results on 331 pages for 'insert into'.

Page 236/331

  • Why does a single SQL DELETE statement cause a deadlock?

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. I am wondering why even a single delete statement in this stored procedure causes a deadlock when it is executed by multiple threads at the same time. For the delete statement: Param1 is a column of table FooTable and a foreign key referencing the clustered primary key column of another table. There is no index on Param1 itself in FooTable; FooTable's clustered primary key is a different column.

        create PROCEDURE [dbo].[FooProc]
        (
            @Param1 int,
            @Param2 int,
            @Param3 int
        )
        AS
        DELETE FooTable WHERE Param1 = @Param1

        INSERT INTO FooTable (Param1, Param2, Param3)
        VALUES (@Param1, @Param2, @Param3)

        DECLARE @ID bigint
        SET @ID = ISNULL(@@Identity, -1)
        IF @ID > 0
        BEGIN
            SELECT IdentityStr FROM FooTable WHERE ID = @ID
        END

    Thanks in advance, George
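    A possible mitigation, sketched below purely as an assumption about the cause: without an index on Param1 the DELETE has to scan FooTable, so concurrent executions lock rows they do not end up deleting and can deadlock against each other. An index lets each call seek only its own rows (the index name is illustrative).

        -- Hypothetical index; lets the DELETE seek instead of scanning FooTable,
        -- narrowing the locks each concurrent execution has to take.
        CREATE NONCLUSTERED INDEX IX_FooTable_Param1
            ON dbo.FooTable (Param1);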

    Read the article

  • Linq to SQL Stored Procedure Problem (it can't figure out the return type)

    - by chobo2
    Hi, I have this SP:

        USE [Test]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE [dbo].[UsersInsert] (@UpdatedProdData XML)
        AS
        INSERT INTO dbo.UserTable (UserId, UserName, LicenseId, Password, PasswordSalt, Email, IsApproved, IsLockedOut,
                                   CreateDate, LastLoginDate, LastLockOutDate, FailedPasswordAttempts, RoleId)
        SELECT
            @UpdatedProdData.value('(/ArrayOfUsers/Users/UserId)[1]', 'uniqueidentifier'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/UserName)[1]', 'varchar(20)'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/LicenseId)[1]', 'varchar(50)'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/Password)[1]', 'varchar(128)'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/PasswordSalt)[1]', 'varchar(128)'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/Email)[1]', 'varchar(50)'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/IsApproved)[1]', 'bit'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/IsLockedOut)[1]', 'bit'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/CreateDate)[1]', 'datetime'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/LastLoginDate)[1]', 'datetime'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/LastLockOutDate)[1]', 'datetime'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/FailedPasswordAttempts)[1]', 'int'),
            @UpdatedProdData.value('(/ArrayOfUsers/Users/RoleId)[1]', 'int')

    This SP creates just fine. The problem comes when I go to VS2010 and try to drag the SP onto the method panel of my LINQ to SQL file in design view: it tells me that it can't figure out the return type. I tried the properties window, but it does not offer "none" as a choice and I can't type it in. The return type should be "none", so how do I set it to "none"?

    Read the article

  • Array of pointers in Objective-C using NSArray

    - by Amir
    Hello, I am writing a program for my iPhone and have a question. Let's say I have a class named my_obj:

        class my_obj {
            NSString *name;
            NSInteger *id;
            NSInteger *foo;
            NSString *boo;
        }

    Now I allocate 100 objects of type my_obj and insert them into an array of type NSArray. Then I want to sort the array in two different ways: one by the name and the second by the id. I want to allocate another two arrays of type NSArray, *arraySortByName and *arraySortById. What do I need to do if I just want the sorted arrays to reference the objects in the original array, so that I get two sorted arrays pointing into the original array (which does not change)? In other words, I don't want to allocate another 100 objects for each sorted array.

    Read the article

  • Images with Razor ASP.NET MVC inside JS

    - by sarsnake
    I need to dynamically insert an image in my JS code. In my Razor template I have:

        @section Includes {
            <script type="text/javascript">
                var imgPath = "@Url.Content("~/Content/img/")";
                alert(imgPath);
            </script>
        }

    Then in my JS I have:

        insertImg = "";
        if (response[i].someFlag == 'Y') {
            insertImg = "<img src=\"" + imgPath + "/imgToInsert.gif\" width=\"6px\" height=\"10px\" />";
        }

    But it doesn't work - it will not find the image. The image is stored in the /Content/img folder... What am I doing wrong?

    Read the article

  • How to very efficiently assign lat/long to a city boundary described by a shapefile?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension, and I am fine with a program that runs for 2 days to generate all the INSERT statements.
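    A minimal sketch of the lookup-table idea, assuming a plain SQL table keyed by tile coordinates (all names are illustrative); the application converts lat/long to tile x/y at the chosen zoom level, and the point-in-polygon question becomes a single indexed read:

        -- One row per (tile, polygon) pair, precomputed from the shapefile.
        CREATE TABLE tile_polygon (
            tilex      INTEGER NOT NULL,
            tiley      INTEGER NOT NULL,
            polygon_id INTEGER NOT NULL
        );
        CREATE INDEX idx_tile_polygon_xy ON tile_polygon (tilex, tiley);

        -- Lookup for a point, after converting lat/long to tile coordinates.
        SELECT polygon_id FROM tile_polygon WHERE tilex = ? AND tiley = ?;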

    Read the article

  • How do I wrap a very long line of text in a GWT label?

    - by user323295
    This is an extract of my code at the moment:

        VerticalPanel mainPanel = new VerticalPanel();
        RootPanel.get("messages").add(mainPanel);

        HorizontalPanel tempPanel = new HorizontalPanel();
        tempPanel.setSize("100px", "200px");

        Label content = new Label("AAAveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryverylongtextZZZ");
        content.setWidth("50px");
        content.setWordWrap(true);

        tempPanel.add(content);
        mainPanel.add(tempPanel);

    The label displays but it does not wrap. If I insert a space it seems that word wrap works, but I guess I want character wrap. Any ideas? I do not want a horizontal scrollbar.

    Read the article

  • Crash in the handler that moves a TreeNode up in a TreeView (C#)

    - by voodoomsr
    I have an event handler that moves the selected tree node up. I don't know why it crashes at the commented line. treeViewDocXml is a TreeView object from System.Windows.Forms.

        treeViewDocXml.BeginUpdate();
        TreeNode sourceNode = treeViewDocXml.SelectedNode;
        if (sourceNode.Parent == null)
        {
            return;
        }
        if (sourceNode.Index > 0)
        {
            sourceNode.Parent.Nodes.Remove(sourceNode);
            sourceNode.Parent.Nodes.Insert(sourceNode.Index - 1, sourceNode); // HERE CRASH
        }
        treeViewDocXml.EndUpdate();

    Read the article

  • Help with a query

    - by stackoverflowuser
    Hi. Based on the following table:

        ID  Effort  Name
        --  ------  ----
        1   1       A
        2   1       A
        3   8       A
        4   10      B
        5   4       B
        6   1       B
        7   10      C
        8   3       C
        9   30      C

    I want to check whether the total effort against a name is less than 40; if so, add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, truncate the data in one of the rows to make the total 40. So after applying the logic above, the table will be:

        ID  Effort  Name
        --  ------  ----
        1   1       A
        2   1       A
        3   8       A
        10  30      A
        4   10      B
        5   4       B
        6   1       B
        11  25      B
        7   10      C
        8   3       C
        9   27      C

    I was thinking of opening a cursor, keeping a counter of the total effort, and based on the logic inserting existing and new rows into another temporary table. I am not sure if this is an efficient way to deal with this. I would like to learn if there is a better way.
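    A set-based sketch of one possible alternative to the cursor, written as an assumption-laden example for SQL Server with a table named Efforts(ID, Effort, Name) where ID is an identity column; trimming the single largest row in the over-40 case is also an assumption:

        -- Add a filler row for every name whose total effort is under 40.
        INSERT INTO Efforts (Effort, Name)
        SELECT 40 - SUM(Effort), Name
        FROM Efforts
        GROUP BY Name
        HAVING SUM(Effort) < 40;

        -- Trim the largest row for names whose total exceeds 40
        -- (assumes a unique maximum per name).
        UPDATE e
        SET Effort = e.Effort - (t.Total - 40)
        FROM Efforts e
        JOIN (SELECT Name, SUM(Effort) AS Total, MAX(Effort) AS MaxEffort
              FROM Efforts
              GROUP BY Name
              HAVING SUM(Effort) > 40) t
          ON t.Name = e.Name AND e.Effort = t.MaxEffort;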

    Read the article

  • Is it so bad to have heaps of elements in your DOM?

    - by alex
    I am making a non-interactive real estate display for their shop window. I have kicked jCarousel into doing what I want:

        - Add panels via AJAX
        - Towards the end of the current set, go and AJAX some new panels and insert them

    This works fine, but it appears that calling jQuery's remove() on the prior elements causes an ugly bump. I'm not sure if calling hide() will free up any resources, as the element will still exist (and the element will be off screen anyway). I've seen this, and tried carousel.reset() from within a callback; it just clears out all the elements. This will be running in Google Chrome on Windows XP, and will solely be displayed on LCD televisions. I am wondering: if I can't find a reasonable solution to remove the extra DOM elements, will it bring my application to a crawl, or will Chrome do some clever garbage collecting? Or, how would you solve this problem? Thanks

    Read the article

  • Problem storing string containing quotes

    - by Jack
    I have the following table:

        $sql = "CREATE TABLE received_queries
        (
            sender_screen_name varchar(50),
            text varchar(150)
        )";

    I use the following SQL statement to store values in the table:

        $sql = "INSERT INTO received_queries VALUES ('$sender_screen_name', '$text')";

    Now I am trying to store the following string as 'text':

        One more #haiku: Cotton wool in mind; feeling like a sleep won't cure; I need some coffee.

    and I get the following error message:

        Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 't cure; I need some coffee.')' at line 1

    I think this must be a pretty common problem. How do I solve it?
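    The apostrophe in "won't" ends the SQL string literal early. As a minimal SQL-level illustration (values are placeholders), the quote has to be doubled inside the literal; in practice the escaping is usually delegated to mysql_real_escape_string or, better, to a prepared statement with bound parameters rather than done by hand:

        -- The embedded single quote is doubled so the literal survives intact.
        INSERT INTO received_queries
        VALUES ('some_user', 'One more #haiku: Cotton wool in mind; feeling like a sleep won''t cure; I need some coffee.');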

    Read the article

  • Any good open-source SharePoint components that abstract you from the inner SharePoint plumbing?

    - by JL
    I am looking for a good reusable set of components that can be used to communicate with SharePoint via web services, preferably open source. I want some abstraction from CAML, WebDAV and the SharePoint web services that could help me speed up my development time. Ideally I want to select, insert, update and delete from lists, manage attachments on list items, download items from SharePoint, and retrieve user metadata from owner info - that sort of thing. Does any such abstraction exist for SharePoint that uses SharePoint's web service model? Obviously the use of the MOSS component API is out of the question, because it will only run on the hosted MOSS server and I am writing an SOA app. Thank you

    Read the article

  • How can I move a table to another filegroup?

    - by denisioru
    Hello, I have MSSQL 2008 Enterprise and an OLTP database with two big tables. How can I move these tables to another filegroup without interrupting service? Right now, about 100-130 records are inserted and 30-50 records are updated each second in these tables. Each table has about 100M records and six fields (including one geography field). I have looked for a solution via Google, but all the solutions amount to "create a second table, insert the rows from the first table, drop the first table, bla bla bla". Can I use partitioning functions to solve this problem? Thank you.
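    One approach sometimes used instead of the copy-and-drop pattern, shown here only as a sketch under assumptions (the table has a clustered index, Enterprise Edition's online rebuild is available, and the geography column does not block an online rebuild on 2008; object names are illustrative): recreating the clustered index onto the target filegroup moves the data pages while the table stays readable and writable.

        -- Recreate the existing clustered index on the new filegroup.
        -- ONLINE = ON keeps the table available during the move.
        CREATE UNIQUE CLUSTERED INDEX PK_BigTable
            ON dbo.BigTable (ID)
            WITH (DROP_EXISTING = ON, ONLINE = ON)
            ON [SecondaryFileGroup];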

    Read the article

  • Preloaded database on iPhone?

    - by Julian
    Hi, I have recently developed an app using Core Data as the storage DB. That app allowed the user to read from and write to the DB. I am now developing a new app in which the user doesn't need to write anything to the DB; instead the app just needs to read the data. The data has relationships etc., so I cannot just use a plist or something similar. My question is: should I use Core Data for such a requirement, and if so, how would I go about entering the data and then releasing the app? Would I have to code the data entry which would populate the DB and then remove all this code (as I don't want the database to repopulate every time the user opens the app)? Is there a way to create a Core Data model using SQL commands as with SQLite, i.e. insert into... etc.? Any ideas/thoughts would be very helpful. Many thanks, Jules

    Read the article

  • If I use a larger datatype than needed, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I take a larger datatype where I know a smaller datatype would have been sufficient for the possible values I will insert into a table, will it affect performance in SQL Server in terms of speed or in any other way? E.g. IsActive can be (0, 1, 2, 3), never more than 3 in any case. I know I should use tinyint, but for certain reasons consider it a compulsion: I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to help me try to overcome that compulsion. I need some solid analysis that can really make someone rethink before choosing a datatype.
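    A quick way to make the storage difference concrete (a minimal sketch; the wider the row, the fewer rows fit on an 8 KB page, so scans and the buffer cache both pay for the oversized types):

        -- The same small value costs 1 byte as tinyint and 8 bytes as bigint;
        -- a 3-character string costs 3 bytes as varchar and 6 bytes as nvarchar.
        SELECT DATALENGTH(CAST(3 AS tinyint))           AS tinyint_bytes,
               DATALENGTH(CAST(3 AS bigint))            AS bigint_bytes,
               DATALENGTH(CAST('abc' AS varchar(10)))   AS varchar_bytes,
               DATALENGTH(CAST('abc' AS nvarchar(10)))  AS nvarchar_bytes;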

    Read the article

  • Sorting 1000-2000 elements with many cache misses

    - by Soylent Graham
    I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted, and obviously I want to do this as quickly as possible. They are sorted by a member and not allocated contiguously, so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access, the inner loop of my quicksort is slow. I'm doing tests and trying things now (to see what the actual bottleneck is), but can anyone recommend a good alternative to speed this up? Should I do an insertion sort instead of quicksorting on-demand, or should I try to change my model to make the elements contiguous and reduce cache misses? Or is there a sort algorithm I've not come across which is good for data that is going to cache miss?

    Read the article

  • Processing an XML file with huge data

    - by Manish Dhanotiya
    Hi, I am working on an application which has the requirements below:

        1. Download a ZIP file from a server.
        2. Uncompress the ZIP file and get the content (which is in XML format) from this file into a String.
        3. Pass this content into another method for parsing and further processing.

    Now, my concern is that the XML file may be huge, say 100MB, while my JVM has only 512 MB of memory. How can I get this content in chunks, pass it for parsing, and then insert the data into PL/SQL tables? Since there can be multiple requests running at the same time, and considering the 512 MB of memory, what would be the best possible way to process this? How can I get the data in chunks and pass it as a stream for XML parsing? I googled this but didn't find any implementation. :( Thanks,

    Read the article

  • What to do with the Twitter OAuth token once retrieved?

    - by mcintyre321
    I'm writing a web app that will use Twitter as its primary log-on method. I've written code which gets the OAuth token back from Twitter. My plan is now to:

        - Find the entry in my Users table for the Twitter username retrieved using the token, or create the entry if necessary
        - Update the Users.TwitterOAuthToken column with the new OAuth token
        - Create a permanent cookie with a random GUID on the site and insert a record into my UserCookies table matching cookie to user
        - When a request comes in, look for the browser cookie ID in the UserCookies table, then use that to figure out the user and make Twitter requests on their behalf
        - Write the OAuth token into some pages as a JS variable so that JavaScript can make requests on behalf of the user
        - If the user clears his/her cookies, the user will have to log in to Twitter again

    Is this the correct process? Have I created any massive security holes? Thanks!
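    For concreteness, a sketch of the two tables the plan above implies; everything beyond the table and column names already mentioned (types, keys) is an assumption:

        -- One row per application user, keyed by the Twitter account.
        CREATE TABLE Users (
            UserId            INT PRIMARY KEY,
            TwitterUsername   VARCHAR(50) NOT NULL,
            TwitterOAuthToken VARCHAR(255)
        );

        -- Maps the random cookie GUID handed to the browser back to a user.
        CREATE TABLE UserCookies (
            CookieGuid CHAR(36) PRIMARY KEY,
            UserId     INT NOT NULL REFERENCES Users (UserId)
        );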

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to implement searching on both entire numbers and on only the last part of a number, so a typical query will be:

        SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'

    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which would give queries like:

        SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'

    However, that would require all users of the database to always reverse the string. It might be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
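    One common refinement of the reverse-and-store idea above, sketched under the assumption that the database is SQL Server (column and index names are illustrative): a persisted computed column keeps the reversed value in sync without a hand-written trigger, and an index on it turns the suffix match into an index-friendly prefix match.

        -- Computed column holding the reversed number, maintained by the engine.
        ALTER TABLE PhoneNumbers
            ADD ReverseNumber AS REVERSE(Number) PERSISTED;

        CREATE INDEX IX_PhoneNumbers_ReverseNumber
            ON PhoneNumbers (ReverseNumber);

        -- Suffix search expressed as a prefix search over the reversed column.
        SELECT *
        FROM PhoneNumbers
        WHERE ReverseNumber LIKE REVERSE('1234') + '%';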

    Read the article

  • Random select is not always returning a single row.

    - by Lieven
    The intention of the following (simplified) code fragment is to return one random row. Unfortunately, when we run this fragment in the query analyzer, it returns between zero and three results. As our input table consists of exactly 5 rows with unique IDs, and as we perform a select on this table where ID equals a random number, we are stumped that there would ever be more than one row returned. Note: among other things, we already tried casting the checksum result to an integer, to no avail.

        DECLARE @Table TABLE
        (
            ID INTEGER IDENTITY (1, 1),
            FK1 INTEGER
        )

        INSERT INTO @Table
        SELECT 1 UNION ALL
        SELECT 2 UNION ALL
        SELECT 3 UNION ALL
        SELECT 4 UNION ALL
        SELECT 5

        SELECT *
        FROM @Table
        WHERE ID = ABS(CHECKSUM(NEWID())) % 5 + 1
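    The usual explanation for this behavior is that NEWID() is evaluated once per row, so the predicate compares each of the five IDs against a different random number and can therefore match zero or several rows. A minimal sketch of two alternatives (the first assumes the IDs are dense 1..5 as in the sample):

        -- Evaluate the random pick once, before the query.
        DECLARE @RandomID int
        SET @RandomID = ABS(CHECKSUM(NEWID())) % 5 + 1

        SELECT * FROM @Table WHERE ID = @RandomID

        -- Alternative that does not depend on dense IDs.
        SELECT TOP 1 * FROM @Table ORDER BY NEWID()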

    Read the article

  • How do I detect that a transaction has already been started?

    - by xelurg
    I am using Zend_Db to insert some data inside a transaction. My function starts a transaction and then calls another method that also attempts to start a transaction and, of course, fails (I am using MySQL 5). So the question is: how do I detect that a transaction has already been started? Here is a sample bit of code:

        try {
            Zend_Registry::get('database')->beginTransaction();

            $totals = self::calculateTotals($Cart);

            $PaymentInstrument = new PaymentInstrument;
            $PaymentInstrument->create();
            $PaymentInstrument->validate();
            $PaymentInstrument->save();

            Zend_Registry::get('database')->commit();
            return true;
        } catch(Zend_Exception $e) {
            Bootstrap::$Log->err($e->getMessage());
            Zend_Registry::get('database')->rollBack();
            return false;
        }

    Inside PaymentInstrument::create there is another beginTransaction statement, which produces the exception saying that a transaction has already been started.

    Read the article

  • jQuery POST to PHP

    - by RussP
    Why is it that I can never get jQuery serialize to work properly? I guess I must be missing something. I can serialize form data and it shows in an alert:

        var forminfo = $j('#frmuserinfo').serialize();
        alert(forminfo);

    I then post to my PHP page thus:

        $j.ajax({
            type: "POST",
            url: "cv-user-process.php",
            data: "forminfo=" + forminfo,
            cache: false,
            complete: function(data) { }
        });

    But whenever (not just the first time) I try to insert/update the data in the DB, I only ever get one variable passed. Here is my PHP script:

        $testit = mysql_query("UPDATE cv_usersmeta SET inputtest='".$_POST['forminfo']."' WHERE user='X'");

    The data passed only ever contains the first variable. Why? I think it is more the way I deal with the PHP, but it drives me nuts and always takes me far too long to find where I am going wrong.

    Read the article

  • Duplicate / Copy records in the same MySQL table

    - by Digits
    Hello, I have been looking for a while now, but I cannot find an easy solution for my problem. I would like to duplicate a record in a table, but of course the unique primary key needs to be updated. I have this query:

        INSERT INTO invoices
            SELECT * FROM invoices AS iv WHERE iv.ID=XXXXX
        ON DUPLICATE KEY UPDATE ID = (SELECT MAX(ID)+1 FROM invoices)

    The problem is that this just changes the ID of the row instead of copying the row. Does anybody know how to fix this? Thank you very much, Digits // Edit: I would like to do this without typing all the field names, because the field names can change over time.
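    One workaround that is often suggested for this and avoids listing columns, shown here as a sketch under assumptions (ID is AUTO_INCREMENT, the NO_AUTO_VALUE_ON_ZERO SQL mode is not set, and 12345 stands in for the real invoice ID):

        -- Copy the source row into a scratch table (keys are not copied).
        CREATE TEMPORARY TABLE tmp_invoice
            SELECT * FROM invoices WHERE ID = 12345;

        -- Zero the key so AUTO_INCREMENT assigns a fresh one on re-insert.
        UPDATE tmp_invoice SET ID = 0;
        INSERT INTO invoices SELECT * FROM tmp_invoice;

        DROP TEMPORARY TABLE tmp_invoice;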

    Read the article

  • How to change the size of an STL container in C++

    - by Jaime Pardos
    I have a piece of performance-critical code written with pointers and dynamic memory. I would like to rewrite it with STL containers, but I'm a bit concerned about performance. Is there a way to increase the size of a container without initializing the data? For example, instead of doing

        ptr = new BYTE[x];

    I want to do something like

        vec.insert(vec.begin(), x, 0);

    However, this initializes every byte to 0. Isn't there a way to just make the vector grow? I know about reserve(), but it just allocates memory; it doesn't change the size of the vector and doesn't allow me to access the elements until I have inserted valid data. Thank you everyone.

    Read the article

  • Spreadsheet::WriteExcel create chart

    - by yaohung
    Hi, I used csv2xls.pl to convert a text log into an .xls file, and then applied the create-chart function as follows:

        my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 );

        # Configure the series.
        $chart3->add_series(
            categories => '=Sheet1!$B$2:$B$64',
            values     => '=Sheet1!$C$2:$C$64',
            name       => 'Test data series 1',
        );

        # Add some labels.
        $chart3->set_title( name => 'Bridge Rate Analysis' );
        $chart3->set_x_axis( name => 'Packet Size' );
        $chart3->set_y_axis( name => 'BVI Rate' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'G2', $chart3 );

    I can see the chart in the .xls file; however, all the data is in text format, not numbers, so the chart looks wrong. Can you tell me how to convert the text into numbers before applying this create-chart function? One other thing: any idea how to apply sorting to the .xls file before creating the chart? Thanks, Yaohung

    Read the article

  • How to convert a C++ std::list element to a multimap iterator

    - by user63898
    Hello all, I have:

        std::list<multimap<std::string, std::string>::iterator>

    Now I have a new element:

        multimap<std::string, std::string>::value_type aNewMmapValue("foo1", "test");

    I want to avoid having to set up a temporary multimap and insert the new element into it just to get its iterator back, so that I can push the iterator onto the

        std::list<multimap<std::string, std::string>::iterator>

    Can I somehow avoid creating this temporary multimap? Thanks

    Read the article
