Search Results

Search found 27339 results on 1094 pages for 'sql dmv'.


  • Linq ChangeConflictException occurs when submitting DataContext changes

    - by Alex
    System.Data.Linq.ChangeConflictException: 2 of X updates failed. at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n) This is what I get very often. This action is done thousands of times a day, and I get this exception about once every 5 seconds. From what I understand it happens when something changes in the database in the period between creating DataContext and updating it. Am I right? How can I fix it? Update I just debugged the error and found the following: Table name: dbo.Stats current value: 9852039 original value: 9852038 database value: 9852039 The Stats table is updated constantly. So how can I still make LINQ save the changes. With "classical" SQL Server access through SqlDataCommand I never had problems like that.
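
    One way to sidestep the conflict in this particular pattern, offered as a hedged sketch rather than the fix: let the database do the increment atomically instead of reading the counter into the DataContext and writing it back. Assuming a hypothetical Hits counter in dbo.Stats keyed by a StatID column (names are illustrative), the statement could be issued from LINQ to SQL via DataContext.ExecuteCommand:

        UPDATE dbo.Stats
        SET Hits = Hits + 1        -- increment happens atomically on the server
        WHERE StatID = @StatID;    -- @StatID identifies the counter row (assumed key)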

    Read the article

  • Auto-generated values for columns in database

    - by Jamal
    Is it good practice to initialize columns whose values the database can determine itself? For example, key columns of type uniqueidentifier can have a default value (NEWID()), and columns that record the creation date can have a default value (GETDATE()). Should I go through all my tables and do this wherever I am sure that I won't need to assign the value manually and the auto-generated value is correct? I am also thinking about using LINQ to SQL classes and setting the "Auto Generated Value" property of these columns to true. Maybe this is what everybody already knows, or maybe I am asking a question about a fundamental issue; if so, please tell me.
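
    For reference, a minimal T-SQL sketch of the kind of defaults described above, assuming a hypothetical dbo.Orders table; the table, column and constraint names are illustrative only:

        ALTER TABLE dbo.Orders
            ADD CONSTRAINT DF_Orders_OrderId DEFAULT NEWID() FOR OrderId;            -- surrogate key generated by the server

        ALTER TABLE dbo.Orders
            ADD CONSTRAINT DF_Orders_CreatedDate DEFAULT GETDATE() FOR CreatedDate;  -- record creation timestamp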

    Read the article

  • Adding a new column to Table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records and I would like to add 2 new columns for data migration purposes. There are indexes on the table and some of them are large. By adding the 2 new columns, will I run the risk of slowing down the database while it attempts to add them, and maybe time out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?
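
    For what it's worth, a hedged sketch of the usual low-impact way to do this: adding nullable columns with no default is a metadata-only change in SQL Server, so the 60 million existing rows are not rewritten (table and column names below are placeholders):

        ALTER TABLE dbo.BigTable
            ADD MigrationFlag bit      NULL,   -- nullable, no default: metadata-only change
                MigrationDate datetime NULL;   -- existing rows and indexes are not touched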

    Read the article

  • Inline Conditional Statement in Stored Procedure

    - by Jason
    Here is the pseudo-code for my inline query in my code: select columnOne from myTable where columnOne = '#variableOne#' if len(variableTwo) gt 0 and columnTwo = '#variableTwo#' end I would like to move this into a stored procedure but am having trouble building the query correctly. I assume it would be something like select columnOne from myTable where columnOne = @variableOne CASE WHEN len(@variableTwo) <> 0 THEN and columnTwo = @variableTwo END This is giving me a syntax error. Could someone tell me what I've got wrong. Also, I would like to keep it to only one query and not just have one if statement. Also, I do not want to build the sql in the stored procedure and run Exec() on it.
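
    The usual pattern for an optional filter is plain boolean logic rather than a CASE in the WHERE clause. A minimal sketch, keeping the original names and treating a NULL or empty @variableTwo as "no filter":

        SELECT columnOne
        FROM myTable
        WHERE columnOne = @variableOne
          AND (LEN(ISNULL(@variableTwo, '')) = 0   -- skip the extra filter when @variableTwo is empty
               OR columnTwo = @variableTwo);       -- otherwise apply it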

    Read the article

  • Intime and OutTime for the Modified date

    - by Jash
    I already posted this question on June 4 but have still not received a proper answer, so here it is again. Table structure: T_Person (Table 1) has a single column, CARDNO: 168, 471, 488, 247, 519, 518, 331, 240, 518, 386, 441, 331. T_Cardevent (Table 2) has the columns CARDEVENTDATE and CARDEVENTTIME, with rows: 20090225 163932; 20090225 164630; 20090225 165027; 20090225 165137; 20090225 165147; 20090225 165715; 20090225 165749; 20090303 162059; 20090303 162723; 20090303 155029; 20090303 155707; 20090303 162824. Query: SELECT CARDNO, CARDEVENTDATE, (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS CardEventDateAdjusted, CARDEVENTTIME FROM T_CARDEVENT WHERE (CARDEVENTDATE > 20090601) GROUP BY CARDNO, CARDEVENTDATE, CARDEVENTTIME, (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 ORDER BY CARDNO, CARDEVENTDATEADJUSTED The query above adjusts the date correctly, treating a day as running from 03:00:01 to 03:00:00 the next morning. How can I get MIN(time) and MAX(time) for each adjusted date? I need the SQL query for this. Please help, it is urgent.
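
    A hedged sketch of how the MIN/MAX times could be layered on top of the adjustment above (untested against the real data): group on the adjusted date and pull the time portion back out of the earliest and latest combined stamp.

        SELECT CARDNO,
               CardEventDateAdjusted,
               MIN(FullStamp) % 1000000 AS InTime,    -- time part of the earliest stamp in the adjusted day
               MAX(FullStamp) % 1000000 AS OutTime    -- time part of the latest stamp in the adjusted day
        FROM (SELECT CARDNO,
                     1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) AS FullStamp,
                     (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS CardEventDateAdjusted
              FROM T_CARDEVENT
              WHERE CARDEVENTDATE > 20090601) AS x
        GROUP BY CARDNO, CardEventDateAdjusted
        ORDER BY CARDNO, CardEventDateAdjusted;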

    Read the article

  • Losing DateTimeOffset precision when using C#

    - by Darvis Lombardo
    I have a SQL Server table with a CreatedDate field of type DateTimeOffset(2). A sample value which is in the table is 2010-03-01 15:18:58.57 -05:00 As an example, from within C# I retrieve this value like so: var cmd = new SqlCommand("SELECT CreatedDate FROM Entities WHERE EntityID = 2", cn); var da = new SqlDataAdapter(cmd); DataTable dt =new DataTable(); da.Fill(dt); And I look at the value: MessageBox.Show(dt.Rows[0][0].ToString()); The result is 2010-03-01 15:18:58 -05:00, which is missing the .57 that is stored in the database. If I look at dt.Rows[0][0] in the Watch window, I also do not see the .57, so it appears it has been truncated. Can someone shed some light on this? I need to use the date to match up with other records in the database and the .57 is needed. Thanks! Darvis

    Read the article

  • Excel Spreadsheet - Best way to perform an Oracle Query on a cell

    - by Jamie
    Hi there, I have an Excel spreadsheet. There is a cell containing a concatenated name and surname (don't ask why), for example cell A2 contains BLOGGSJOE. Using this cell, I would like to run the following SQL and output the result to cells A3, A4 and A5: SELECT i.id, i.forename, i.surname FROM individual i WHERE UPPER(REPLACE('" & A2 & "', ' ', '')) = UPPER(REPLACE(i.surname || i.forename, ' ', '')) AND NVL(i.ind_efface, 'N') = 'N' Any idea how I could perform an Oracle query on each cell and return the result? I have enabled an Oracle data source connection in Excel, just not sure what to do now. Is this a stupid approach, and can you recommend a better, more proficient way? Thanks muchly! I lack the necessary experience in this type of thing! :-) EDIT: I am aware that I could just write a simple Ruby/PHP/Python/whatever script to loop through the Excel spreadsheet (or CSV file) and then perform the query, but I thought there might be a quick way in Excel itself.

    Read the article

  • How to approach this SQL query

    - by Kim
    I have data related as follows: A table of Houses A table of Boxes (with an FK back into Houses) A table of Things_in_boxes (with an FK back to Boxes) A table of Owners (with an FK back into Houses) In a nutshell, a House has many Boxes, and each Box has many Things in it. In addition, each House has many Owners. If I know two Owners (say Peter and Paul), how can I list all the Things that are in the Boxes that are in the Houses owned by these guys? Also, I'd like to master this SQL stuff. Can anyone recommend a good book/resource? (I'm using MySQL). Thanks!
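
    A hedged sketch of one way to write it in MySQL, assuming hypothetical table and column names that follow the description (things_in_boxes.box_id, boxes.house_id, owners.house_id, owners.name):

        SELECT DISTINCT t.*
        FROM things_in_boxes t
        JOIN boxes  b ON b.id       = t.box_id     -- thing -> box
        JOIN houses h ON h.id       = b.house_id   -- box -> house
        JOIN owners o ON o.house_id = h.id         -- house -> owner
        WHERE o.name IN ('Peter', 'Paul');         -- houses owned by either of them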

    Read the article

  • How to retrieve all errors and messages from a query using ADO

    - by Johan Levin
    When a SQL batch returns more than one message from e.g. print statements, then I can only retrieve the first one using the ADO connection's Errors collection. How do I get the rest of the messages? If I run this script: Option Explicit Dim conn Set conn = CreateObject("ADODB.Connection") conn.Provider = "SQLOLEDB" conn.ConnectionString = "Data Source=(local);Integrated Security=SSPI;Initial Catalog=Master" conn.Open conn.Execute("print 'Foo'" & vbCrLf & "print 'Bar'" & vbCrLf & "raiserror ('xyz', 10, 127)") Dim error For Each error in conn.Errors MsgBox error.Description Next Then I only get "Foo" back, never "Bar" or "xyz". Is there a way to get the remaining messages?

    Read the article

  • Optimising (My)SQL Query

    - by Simon
    I usually use ORM instead of SQL and I am slightly out of touch on the different JOINs... SELECT `order_invoice`.*, `client`.*, `order_product`.*, SUM(product.cost) as net FROM `order_invoice` LEFT JOIN `client` ON order_invoice.client_id = client.client_id LEFT JOIN `order_product` ON order_invoice.invoice_id = order_product.invoice_id LEFT JOIN `product` ON order_product.product_id = product.product_id WHERE (order_invoice.date_created >= '2009-01-01') AND (order_invoice.date_created <= '2009-02-01') GROUP BY `order_invoice`.`invoice_id` The tables/columns are logically named... it's a shop-type application... the query works... it's just very, very slow... I use the Zend Framework and would usually use Zend_Db_Table_Row::find(Parent|Dependent)Row(set)('TableClass'), but I have to make lots of joins and I thought it would improve performance to do it all in one query instead of hundreds... Can I improve the above query by using more appropriate JOINs or a different implementation? Many thanks.
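
    Two things that usually help with a query like this, sketched below and hedged as guesses without seeing the schema: make sure the joined foreign keys and the date filter are indexed, and check the plan to see which join still scans.

        -- indexes on the join and filter columns (skip any that already exist)
        CREATE INDEX idx_invoice_client  ON order_invoice (client_id);
        CREATE INDEX idx_invoice_created ON order_invoice (date_created);
        CREATE INDEX idx_op_invoice      ON order_product (invoice_id);
        CREATE INDEX idx_op_product      ON order_product (product_id);
        -- then prefix the original SELECT with EXPLAIN to see which table is still scanned in full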

    Read the article

  • LINQ to Entity, joining on NOT IN tables

    - by SlackerCoder
    My brain seems to be mush right now! I am using LINQ to Entity, and I need to get some data from one table that does NOT exist in another table. For example: I need the groupID, groupname and groupnumber from TABLE A where they do not exist in TABLE B. The groupID will exist in TABLE B, along with other relevant information. The tables do not have any relationship. In SQL it would be quite simple (there is a more elegant and efficient solution, but I want to paint a picture of what I need): SELECT GroupID, GroupName, GroupNumber FROM TableA WHERE GroupID NOT IN (SELECT GroupID FROM TableB) Is there an easy/elegant way to do this? Right now I have a bunch of queries hitting the db, then comparing, etc. It's pretty messy. Thanks.

    Read the article

  • How to set a filter for an estimated maximum price

    - by David
    I cannot figure out how to set an estimated maximum price for a collection of records. What I want to avoid is simply using SQL MAX, because there may be records with exorbitant prices. For example, in the "computers-hardware" category of OLX (http://www.olx.com/computers-hardware-cat-240) the filter for maximum price is apparently set to about $1400, yet when sorting by price the first items are above $10000. Maybe they calculated the average and then estimated some maximum price... what do you think? And what about the stepping? How would you calculate it?
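
    One common approach, offered as a hedged sketch rather than what OLX actually does: cut the filter off at a high percentile of the observed prices instead of the absolute MAX, then round the result up to a "nice" step. Assuming a hypothetical products(price, category_id) table and a database whose SQL supports PERCENTILE_CONT as an ordered-set aggregate (e.g. PostgreSQL or Oracle):

        SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY price) AS suggested_max
        FROM products
        WHERE category_id = 240;   -- 95% of the listings fall at or below this price

    The stepping can then be derived from that result, for example by rounding the suggested maximum up to a round figure and dividing it into 10-20 equal steps.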

    Read the article

  • Combining DROP USER and DROP DATABASE with SELECT .. WHERE query?

    - by zsero
    I'd like to do a very simple thing: replicate the functionality of MySQL's interactive mysql_secure_installation script. My question is: is there a simple, built-in way in MySQL to combine the output of a SELECT query with the input of a DROP USER or DROP DATABASE statement? For example, suppose I'd like to drop all users with empty passwords. How could I do that with the DROP USER statement? An obvious solution would be to run everything from, say, a Python script: run a query with mysql -Bse "select...", parse the output with some program, construct the DROP query, and run it. Is there an easy way to do it in a simple SQL query? I've seen an example here, but I wouldn't call it simple: http://stackoverflow.com/a/12097567/518169 Would you recommend making a combined query, or just parsing the output using, for example, Python or bash scripts/sed?
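
    There is no built-in way to pipe a SELECT straight into DROP USER, but the generate-and-execute pattern can stay inside MySQL. A hedged sketch for older MySQL versions where the hash lives in mysql.user.Password (newer versions use authentication_string instead):

        -- build a DROP USER statement for every account with an empty password
        SELECT CONCAT('DROP USER ''', User, '''@''', Host, ''';') AS drop_stmt
        FROM mysql.user
        WHERE Password = '';
        -- paste the generated statements back into the client and run them,
        -- or execute them one by one with PREPARE/EXECUTE inside a stored procedure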

    Read the article

  • T-SQL Better way to determine max of date (accounting for nulls)

    - by Josh
    I am comparing two dates and trying to determine the max of the two dates. A null date would be considered less than a valid date. I am using the following case statement, which works - but feels very inefficient and clunky. Is there a better way? update @TEMP_EARNED set nextearn = case when lastoccurrence is null and lastearned is null then null when lastoccurrence is null then lastearned when lastearned is null then lastoccurrence when lastoccurrence > lastearned then lastoccurrence else lastearned end; (This is in MS SQL 2000, FYI.)
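
    One variant that still works on SQL Server 2000, as a hedged sketch: substitute the minimum datetime for NULL only inside the comparison, so a NULL loses to any real date but the stored value stays untouched.

        UPDATE @TEMP_EARNED
        SET nextearn = CASE
                           WHEN ISNULL(lastoccurrence, '17530101') >= ISNULL(lastearned, '17530101')
                               THEN lastoccurrence   -- also covers the both-NULL case (result stays NULL)
                           ELSE lastearned
                       END;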

    Read the article

  • Storing rich text documents

    - by David Veeneman
    This is a follow-up to another question I asked earlier today. I am creating a desktop app that stores rich text documents created in WPF (in a RichTextBox control). The app uses SQL Compact, and up until now, I had planned to store each document in a binary column in the database. I am rethinking that approach. Would it be better practice to store each rich text document in the file system, rather than saving it to the database? I figure I could put the documents in the same folder with the database, then store a relative path to each document in its database record, along with other information about the document (tags and so on). I'd like to know some pros and cons of that approach, along with ideas of what is generally considered best practice for this sort of thing. Thanks for your help.

    Read the article

  • Speed up multiple JDBC SQL querys?

    - by paddydub
    I'm working on a shortest-path A* algorithm in Java with a MySQL database. I'm executing the following SQL query approx. 300 times in the program to find route connections from a database of 10,000 bus connections. It takes approx. 6-7 seconds to execute the query 300 times. Any suggestions on how I can speed this up, or any ideas on a different method I can use? Thanks. ResultSet rs = stmt.executeQuery("select * from connections" + " where Connections.From_Station_stopID ="+StopID+";"); while (rs.next()) { int id = rs.getInt("To_Station_id"); String routeID = rs.getString("To_Station_routeID"); Double lat = rs.getDouble("To_Station_lat"); Double lng = rs.getDouble("To_Station_lng"); int time = rs.getInt("Time"); }
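
    The two usual fixes here, sketched in SQL and hedged as guesses without seeing the schema: index the lookup column so each probe is a seek, and/or collapse the ~300 round trips into a single set-based query with IN, issued from Java as one PreparedStatement.

        -- make the per-stop lookup an index seek instead of a table scan
        CREATE INDEX idx_connections_from_stop ON connections (From_Station_stopID);

        -- fetch all stops of interest in one round trip (placeholder stop ids)
        SELECT *
        FROM connections
        WHERE From_Station_stopID IN (1017, 1042, 1350);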

    Read the article

  • Updating a table with the max date of another table

    - by moleboy
    In Oracle 10g, I need to update Table A with data from Table B. Table A has LOCATION, TRANDATE, and STATUS. Table B has LOCATION, STATUSDATE, and STATUS. I need to update the STATUS column in Table A with the STATUS column from Table B where the STATUSDATE is the max date up to and including the TRANDATE for that LOCATION (basically, I'm getting the status of the location at the time of a particular transaction). I have a PL/SQL procedure that will do this, but I KNOW there must be a way to get it to work using an analytic function, and I've been banging my head too long. Thanks!
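
    A hedged Oracle sketch of the analytic-style approach (not run against the real schema; table names follow the description): KEEP (DENSE_RANK LAST ORDER BY ...) picks the STATUS that belongs to the latest qualifying STATUSDATE.

        UPDATE table_a a
        SET a.status = (SELECT MAX(b.status) KEEP (DENSE_RANK LAST ORDER BY b.statusdate)
                        FROM table_b b
                        WHERE b.location    = a.location
                          AND b.statusdate <= a.trandate);   -- latest status up to and including the transaction date

    In practice a WHERE EXISTS mirror of the subquery is usually added to the UPDATE so that rows with no matching Table B entry keep their current STATUS instead of being set to NULL.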

    Read the article

  • Get the highest odds from the last update

    - by Frankie Yale
    I have these tables in a PostgreSQL database:

        bookmakers
        | id | name   |
        |  1 | Unibet |
        |  2 | 888    |

        odds
        | id | odds_type | odds_index | bookmaker_id | created_at       |
        |  1 | 1         | 1.55       | 1            | 2012-06-02 10:30 |
        |  2 | 2         | 3.22       | 2            | 2012-06-02 10:30 |
        |  3 | X         | 3.00       | 1            | 2012-06-02 10:30 |
        |  4 | 2         | 1.25       | 1            | 2012-05-27 09:30 |
        |  5 | 1         | 2.30       | 2            | 2012-05-27 09:30 |
        |  6 | X         | 2.00       | 2            | 2012-05-27 09:30 |

    What I am trying to query is the following: Give me the 1/X/2 odds from the latest update (created_at) from ALL bookmakers and, from that last update, give me the highest odds for each odds_type ('1', '2', 'X'). On my website I display them as:

        Best odds right now:
        1    | X    | 2
        2.30 | 3.00 | 3.22

    I have to first get the latest, because the odds from the update from yesterday are no longer valid. Then from that last update, I have - in this case - 2 odds from 2 different bookmakers, so I need to get the best one for type '1', '2', 'X'. Pseudo SQL would be something like: SELECT MAX(odds_index) WHERE odds_type = '1' ORDER BY created_at DESC, odds_index DESC But that doesn't work, because I would always get the latest odds (and not the highest/best from those latest). I hope I'm making sense.
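
    A hedged PostgreSQL sketch (column names as above, not tested against the real data): restrict to the rows of the latest created_at first, then take the best index per odds_type.

        SELECT odds_type,
               MAX(odds_index) AS best_odds                      -- best price per outcome
        FROM odds
        WHERE created_at = (SELECT MAX(created_at) FROM odds)    -- latest update only
        GROUP BY odds_type
        ORDER BY odds_type;

    If the bookmakers are not all updated at the same instant, the inner MAX(created_at) can be computed per bookmaker_id instead and joined back before taking the per-type maximum.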

    Read the article

  • Problem with testing a Windows service

    - by prateeksaluja20
    I want to make a Windows service that will access my database. My database is SQL Server 2005. Actually I am working on a website and my database is on our server. I need to access my database every second and update the records. For that purpose I need to make a Windows service that will be installed on our server and perform the task. I have been accessing the database from my local machine and then running the service, but the problem is I'm not sure how I can test this service. I tried to install it on my local machine. It installed, and then I ran the service, but it did not perform the task; I think the service is not able to connect to the database. There is no problem with the service or its installer. The only issue is how to test my Windows service.

    Read the article

  • Design Solution For Storing-Fetching Images

    - by Chaitanya
    This is a design doubt I am facing. I have a collection of 1500 images which are to be displayed on an ASP.NET page; the images to be displayed differ from one page to another, and the count of these images will increase in the time to come. a) Is it a good idea to keep the images in the database, given that the round-trip time to fetch them from the database might be high? b) Is it better to keep all the images in a directory, have a virtual file system over it, and let the application access the images from the directory? Is there any particular design strategy for fetching images from a traditional database with the least round-trip time, and does any solution other than a traditional database exist? PS: I use SQL Server to store these images.

    Read the article

  • vs2010 Cache SQL data incorrect fields

    - by mickartz
    OK, I found a walkthrough on MSDN for what I was after (offline database cache). However, when I let the wizard create a local database from my online SQL Server, the timespan fields are converted to a string?? Now I know the suggestion was to create my own local database and then use the MS Sync Framework... however... this claims to do it "out of the box". Now I have a dataset which I've no idea how to use, and a newly formed database (for the synced cache) that I will have to use LINQ to Entities with (??), and meanwhile I have this weird timespan-to-string conversion. Should I give up now or push on? Can I overwrite the .designer.cs, changing typeof(string) to typeof(TimeSpan)? Damn wizards!!

    Read the article

  • Data in column not changed

    - by shanks
    I have SQL Server 2005, and when I run the query below, data from the RealTimeLog table is transferred to History; but when new data comes into RealTimeLog, the old data is not replaced, i.e. the OutTime in History is not updated with the new data from RealTimeLog. insert into History (UserID,UserName,LogDate, [InTime], [OutTime]) SELECT UserID,UserName,[LogDate],CONVERT(nvarchar,MIN(CONVERT(datetime, [LogTime], 108)), 108), CONVERT(nvarchar, MAX(CONVERT(datetime, [LogTime], 108)), 108) From RealTimeLog where not Exists (select * from History H Where H.UserID = RealTimeLog.UserID AND H.UserName=RealTimeLog.UserName AND H.LogDate=RealTimeLog.LogDate) GROUP BY UserID,UserName,[LogDate] ORDER BY UserID,[LogDate] For example, given the History row 1 Shanks 02/05/2010 9:00 10:00, if a new max time of, say, 11:00 appears in RealTimeLog, it is not applied to the History table and the output remains the same as above.
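
    The INSERT above only adds rows that do not yet exist in History, so rows that are already there never get a new OutTime. A hedged sketch of a companion UPDATE (same column names as above, untested) that moves OutTime forward for existing rows:

        UPDATE H
        SET OutTime = X.NewOutTime
        FROM History H
             JOIN (SELECT UserID, UserName, LogDate,
                          CONVERT(nvarchar, MAX(CONVERT(datetime, LogTime, 108)), 108) AS NewOutTime
                   FROM RealTimeLog
                   GROUP BY UserID, UserName, LogDate) X
               ON  X.UserID   = H.UserID
               AND X.UserName = H.UserName
               AND X.LogDate  = H.LogDate
        WHERE X.NewOutTime > H.OutTime;   -- only push OutTime later, never earlier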

    Read the article

  • using operators and functions for sql report charts (visual studio 2010)

    - by user1682566
    I want to create some charts using SQL Server Reporting Services, but I am unable to use a lot of functions and operators in combination with my data fields. The following work (the Stroke data type is decimal): > =Fields!Stroke.Value > =Sum(Fields!Stroke.Value) > =First(Fields!Stroke.Value) > =Last(Fields!Stroke.Value) > =2+2394.12 The following don't work: > =Fields!Stroke.Value + 2 > =CStr(Fields!Stroke.Value) > =CDbl(Fields!Stroke.Value) > =Fields!Stroke.Value / Fields!Stroke.Value > =Sum(Fields!Stroke.Value) * 2 All other operators and functions (using Fields!Stroke.Value) don't work either.

    Read the article

  • ms-access: missing operator in query expression

    - by every_answer_gets_a_point
    I have this SQL statement in Access: SELECT * FROM (SELECT [Occurrence Number], [1 0 Preanalytical (Before Testing)], NULL, NULL,NULL FROM [Lab Occurrence Form] WHERE NOT ([1 0 Preanalytical (Before Testing)] IS NULL) UNION SELECT [Occurrence Number], NULL, [2 0 Analytical (Testing Phase)], NULL,NULL FROM [Lab Occurrence Form] WHERE NOT ([2 0 Analytical (Testing Phase)] IS NULL) UNION SELECT [Occurrence Number], NULL, NULL, [3 0 Postanalytical ( After Testing)],NULL FROM [Lab Occurrence Form] WHERE NOT ([3 0 Postanalytical ( After Testing)] IS NULL) UNION SELECT [Occurrence Number], NULL, NULL,NULL [4 0 Other] FROM [Lab Occurrence Form] WHERE NOT ([4 0 Other] IS NULL) ) AS mySubQuery ORDER BY mySubQuery.[Occurrence Number]; Everything was fine until I added the last branch: SELECT [Occurrence Number], NULL, NULL,NULL [4 0 Other] FROM [Lab Occurrence Form] WHERE NOT ([4 0 Other] IS NULL) I get this error: syntax error (missing operator) in query expression 'NULL [4 0 Other]'. Does anyone have any clue why I am getting this error?
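
    The error message points at the likely culprit: in that last branch the comma between the third NULL and [4 0 Other] is missing, so Access parses 'NULL [4 0 Other]' as a single expression. A sketch of the branch with the comma restored:

        SELECT [Occurrence Number], NULL, NULL, NULL, [4 0 Other]   -- comma added before [4 0 Other]
        FROM [Lab Occurrence Form]
        WHERE NOT ([4 0 Other] IS NULL)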

    Read the article

  • Automatically CONCATENATE text on data entry

    - by Bill T
    I am a newbie and need help. I have a table called "Employees". It has 2 fields, [number] and [encode]. I want to automatically take whatever number is entered into [number] and store it in [encode], preceded by the appropriate amount of 0's to always make 12 digits. Example: the user enters '123' into [number] and '000000000123' is automatically stored in [encode]; the user enters '123456789' into [number] and '000123456789' is automatically stored in [encode]. I think I want to write a trigger to accomplish this, so that it happens at the time of data entry. Is that right? The main idea would be something like this: variable1 = LENGTH [number], variable2 = REPEAT (0, 12 - variable1), variable3 = CONCATENATE (variable2, [number]), [encode] = variable3. I just don't know enough to make this happen. ANY help would be FANTASTIC. I have SQL Server 2005 and both fields are text.
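
    A hedged SQL Server 2005 sketch of the trigger approach described above; it assumes [number] uniquely identifies a row, and the names are taken straight from the question:

        CREATE TRIGGER trg_Employees_Encode
        ON Employees
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE e
            SET e.encode = RIGHT(REPLICATE('0', 12) + i.[number], 12)   -- pad to 12 digits: '123' -> '000000000123'
            FROM Employees e
                 JOIN inserted i ON i.[number] = e.[number];            -- assumes [number] is unique per row
        END;

    If [encode] never needs to be set independently of [number], a computed column defined as RIGHT(REPLICATE('0', 12) + [number], 12) would achieve the same without a trigger.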

    Read the article
