Search Results

Search found 87901 results on 3517 pages for 'exchange server'.


  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data.

    I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using SQL Server 2008 Management Studio to connect to a SQL Server 2005 database.
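
    One possible approach, sketched below, is to build the INSERT statements with a SELECT that concatenates the column values. The table and column names here are hypothetical stand-ins for the real schema:

        -- Minimal sketch: emit one INSERT per row of a hypothetical dbo.Customers table.
        -- REPLACE doubles any embedded apostrophes, and ISNULL keeps NULLs literal.
        SELECT 'INSERT INTO dbo.Customers (CustomerId, CustomerName) VALUES ('
             + CAST(CustomerId AS varchar(12)) + ', '
             + ISNULL('N''' + REPLACE(CustomerName, '''', '''''') + '''', 'NULL')
             + ');'
        FROM dbo.Customers;

    Running this in Management Studio with results-to-text produces a script that can be saved and replayed as the setup step.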

    Read the article

  • Unable to change the system time zone setting on Windows Server 2008 R2.

    - by Ganesh
    Hi All, I have an MFC application that tries to change the system time zone setting on Windows Server 2008 R2. I am using the SetTimeZoneInformation() API, which fails with the error code 1314, i.e. "A required privilege is not held by the client." Please refer to the sample code below:

        TIME_ZONE_INFORMATION l_TimeZoneInfo;
        DWORD l_dwRetVal = 0;
        ZeroMemory(&l_TimeZoneInfo, sizeof(TIME_ZONE_INFORMATION));
        l_TimeZoneInfo.Bias = -330;
        l_TimeZoneInfo.StandardBias = 0;
        l_TimeZoneInfo.StandardDate.wDay = 0;
        l_TimeZoneInfo.StandardDate.wDayOfWeek = 0;
        l_TimeZoneInfo.StandardDate.wHour = 0;
        l_TimeZoneInfo.StandardDate.wMilliseconds = 0;
        l_TimeZoneInfo.StandardDate.wMinute = 0;
        l_TimeZoneInfo.StandardDate.wMonth = 0;
        l_TimeZoneInfo.StandardDate.wSecond = 0;
        l_TimeZoneInfo.StandardDate.wYear = 0;
        CString l_csDaylightName = _T("India Daylight Time");
        CString l_csStdName = _T("India Standard Time");
        wcscpy(l_TimeZoneInfo.DaylightName, l_csDaylightName.GetBuffer(l_csDaylightName.GetLength()));
        wcscpy(l_TimeZoneInfo.StandardName, l_csStdName.GetBuffer(l_csStdName.GetLength()));
        ::SetLastError(0);
        if (0 == ::SetTimeZoneInformation(&l_TimeZoneInfo))
        {
            l_dwRetVal = ::GetLastError();
            CString l_csErr = _T("");
            l_csErr.Format(_T("%d"), l_dwRetVal);
        }

    The MFC application was developed using Visual Studio 2008 and is UAC aware, i.e. the application has UAC enabled in its manifest file with the UAC execution level set to "HighestAvailable". I have administrator privileges, and when I run the application it still fails to change the system time zone setting.

    Thanks in advance,
    Ganesh

    Read the article

  • Database design for a media server containing movies, music, tv and everything in between?

    - by user364114
    In the near future I am attempting to design a media server as a personal project. My first consideration to get the project underway is architecture. It will certainly be web based, but more specifically than that I am looking for suggestions on the database design. So far I am considering something like the following, where I am using [] to represent a table; the first text is the table name, to give an idea of purpose, and the items within {} would be fields of the table. Also note: fid is a functional id referencing some other table.

        [Item {id, value/name, description, link, type}] - this could be any entity: a single song or a whole music album, a game, a movie. I almost see this as a recursive relation, i.e. a song is an item, but the album that song is part of is also an item; or, for example, a TV season is an item, with multiple items being TV episodes.
        [Type {id, fid, mime type, etc}] - file-type-specific information - could identify how code handles streaming/sending this item to a user
        [Location {id, fid, path to file?}]
        [Users {id, username, email, password, ...?}] - user account information
        [UAC {id, fid, access level}] - I almost feel it's more flexible to separate access control permissions from the user accounts themselves
        [ItemLog {id, fid, fid2, timestamp}] - fid for user id, and fid2 for item id - this way we know which user accessed what, and when
        [UserLog {id, fid, timestamp}] - both are logs for access, whether login or last item access
        [Quota {id, fid, cap}] - some sort of way to throttle users from queuing up the entire site and downloading it all

    Suggestions or comments are welcome, as the hope is that this project will be an open source project once some code is laid out.
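
    As a rough illustration of how the recursive Item relation might look in SQL Server, here is a minimal sketch; all names are hypothetical and taken loosely from the outline above:

        -- Sketch only: an Item table whose ParentItemId makes the "album contains
        -- songs" relation recursive, plus a Type lookup as described in the outline.
        CREATE TABLE dbo.ItemType (
            Id       int IDENTITY(1,1) PRIMARY KEY,
            MimeType varchar(100) NOT NULL
        );

        CREATE TABLE dbo.Item (
            Id           int IDENTITY(1,1) PRIMARY KEY,
            Name         nvarchar(200) NOT NULL,
            Description  nvarchar(1000) NULL,
            TypeId       int NULL REFERENCES dbo.ItemType(Id),
            ParentItemId int NULL REFERENCES dbo.Item(Id)  -- a song points at its album
        );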

    Read the article

  • Images from SQL Server JPG/PNG Image Column not being Type Converted to Bitmap in HttpHandlers (cons

    - by kanchirk
    Our Silverlight 3.0 client consumes images stored/retrieved on the file system through ASP.NET HttpHandlers successfully. We are trying to store and read back images using a SQL Server 2008 database. Please find the stripped-down code pasted below, which throws the exception "Bitmap is not valid".

        // Store document to the database
        private void SaveImageToDatabaseKK(HttpContext context, string pImageFileName)
        {
            try
            {
                // ADO.NET Entity Framework
                ImageTable documentDB = new ImageTable();
                int intLength = Convert.ToInt32(context.Request.InputStream.Length);
                // Move the file contents into the byte array
                Byte[] arrContent = new Byte[intLength];
                context.Request.InputStream.Read(arrContent, 0, intLength);
                // Insert record into the Document table
                documentDB.InsertDocument(pImageFileName, arrContent, intLength);
            }
            catch { }
        }

    The method to read back the row from the table and send it back is below:

        private void RetrieveImageFromDatabaseTableKK(HttpContext context, string pImageName)
        {
            try
            {
                ImageTable documentDB = new ImageTable();
                var docRow = documentDB.GetDocument(pImageName); // based on image name, which is unique
                // DocData column in the table is of type image
                if (docRow != null && docRow.DocData != null && docRow.DocData.Length > 0)
                {
                    Byte[] bytImage = docRow.DocData;
                    if (bytImage != null && bytImage.Length > 0)
                    {
                        Bitmap newBmp = ConvertToBitmap(bytImage);
                        if (newBmp != null)
                        {
                            newBmp.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                            newBmp.Dispose();
                        }
                    }
                }
            }
            catch (Exception exRI) { }
        }

        // Convert byte array to Bitmap (byte[] to Bitmap)
        protected Bitmap ConvertToBitmap(byte[] bmp)
        {
            if (bmp != null)
            {
                try
                {
                    TypeConverter tc = TypeDescriptor.GetConverter(typeof(Bitmap));
                    Bitmap b = (Bitmap)tc.ConvertFrom(bmp); // This is where the exception occurs.
                    return b;
                }
                catch (Exception) { }
            }
            return null;
        }

    Read the article

  • SQL Server 2008: need something just like a crosstab query on an XML column?

    - by user1332896
        <abc id="abc1">
          <def id="def1">
            <ghi att='ghi1'>
              <mn id="0742d2ea" name="RF" dt="0" df="3" ty="0" />
              <mn id="64d9a11b" name="CJ" dt="0" df="3" ty="0" />
              <mn id="db72d154" name="FJ" dt="2" df="4" ty="0" />
              <mn id="39af9fa1" name="BS" dt="0" df="2" ty="0" />
            </ghi>
            <jkl att='jkl1'>
              <mn id="0742d2ea" name="RF" dt="1" gl="19" />
              <mn id="64d9a11b" name="CJ" dt="0" gl="6" />
              <mn id="db72d154" name="FJ" dt="0" gl="0" />
              <mn id="39af9fa1" name="BS" dt="0" gl="12" />
              <mn id="ac4f566f" name="DJ" dt="0" gl="9" />
              <mn id="4bf3ba2f" name="RP" dt="0" gl="16" />
              <mn id="db1af021" name="SC" dt="1" gl="10" />
              <mn id="c4c93a2d" name="DN" dt="1" gl="15" />
            </jkl>
          </def>
        </abc>

    I need this output. Is this possible in SQL Server 2008?

        id        name  ghiDT  ghiDF  ghiTY  jklDT  jklGL
        0742d2ea  RF    0      3      0      1      19
        64d9a11b  CJ    0      3      0      0      6
        db72d154  FJ    2      4      0      0      0
        39af9fa1  BS    0      2      0      0      12
        ac4f566f  DJ    0      0      0      0      9
        4bf3ba2f  RP    0      0      0      0      16
        db1af021  SC    0      0      0      1      10
        c4c93a2d  DN    0      0      0      1      15
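
    One possible shape for this, as a hedged, untested outline rather than a verified solution: shred each branch with nodes()/value(), then join the two row sets on id. The @x variable is assumed to hold the document above:

        DECLARE @x xml;
        SET @x = N'...';  -- paste the XML document shown above here

        WITH ghi AS (
            SELECT m.value('@id',   'varchar(20)') AS id,
                   m.value('@name', 'varchar(10)') AS name,
                   m.value('@dt',   'int')         AS ghiDT,
                   m.value('@df',   'int')         AS ghiDF,
                   m.value('@ty',   'int')         AS ghiTY
            FROM @x.nodes('/abc/def/ghi/mn') AS t(m)
        ),
        jkl AS (
            SELECT m.value('@id',   'varchar(20)') AS id,
                   m.value('@name', 'varchar(10)') AS name,
                   m.value('@dt',   'int')         AS jklDT,
                   m.value('@gl',   'int')         AS jklGL
            FROM @x.nodes('/abc/def/jkl/mn') AS t(m)
        )
        SELECT COALESCE(g.id, j.id)     AS id,
               COALESCE(g.name, j.name) AS name,
               ISNULL(g.ghiDT, 0) AS ghiDT,
               ISNULL(g.ghiDF, 0) AS ghiDF,
               ISNULL(g.ghiTY, 0) AS ghiTY,
               ISNULL(j.jklDT, 0) AS jklDT,
               ISNULL(j.jklGL, 0) AS jklGL
        FROM ghi g
        FULL OUTER JOIN jkl j ON j.id = g.id;

    The FULL OUTER JOIN plus ISNULL gives the zero-filled columns for ids that appear in only one branch, matching the desired output.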

    Read the article

  • How to upload an image to a remote server from an iPhone?

    - by Atulkumar V. Jain
    Hi everybody, I am trying to upload an image which I am capturing with the camera. I am using the following code to upload the image to the remote server:

        - (void)searchAction:(UIImage *)theImage
        {
            UIDevice *dev = [UIDevice currentDevice];
            NSString *uniqueId = dev.uniqueIdentifier;
            NSData *imageData = UIImagePNGRepresentation(theImage);
            NSString *postLength = [NSString stringWithFormat:@"%d", [imageData length]];
            NSString *urlString = [@"http://www.amolconsultants.com/im.jsp?" stringByAppendingString:@"imagedata=iPhoneV0&mcid="];
            urlString = [urlString stringByAppendingString:uniqueId];
            urlString = [urlString stringByAppendingString:@"&lang=en_US.UTF-8"];
            NSLog(@"The URL of the image is: %@", urlString);
            NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"POST"];
            [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
            [request setHTTPBody:imageData];
            NSURLConnection *conn = [[NSURLConnection alloc] initWithRequest:request delegate:self];
            if (conn == nil) {
                NSLog(@"Failed to create the connection");
            }
        }

    But nothing is getting posted, and nothing comes in the console window either. I am calling this method from the action sheet: when the user clicks on the first button of the action sheet, this method is called to post the image. Can anyone help me with this? Any code would be very helpful. Thanks in advance.

    Read the article

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' running Windows Server 2008 R2, set up as web servers. I have a number of Umbraco CMS installs on them, and they have been running fine for over a year.

    On Saturday, on BOTH servers, a very strange thing happened: as soon as I log in to the CMS/Umbraco admin, I am logged out again within about 5 seconds. It's as if my session expires the moment I log in. I have checked everything I can (I'm not really a server admin), and everything seems to be exactly as it was last week. Like I say, this happened at EXACTLY the same time (Saturday) on TWO different servers. I'm just looking for ideas of what I should be looking for. Also, the front end of the sites seems fine; it's only the back end when I log in.

    I have gone to 1&1 about this, and as usual they have washed their hands of it, saying it's nothing to do with them, when I am certain it is. How can this happen on two different servers and affect the same sites in exactly the same way? Any help, tips, or things to try would be greatly appreciated.

    Read the article

  • Help! How do I get the total number of rows from my SQL Server paging procedure?

    - by The_AlienCoder
    OK, I have a table in my SQL Server database that stores comments. My desire is to be able to page through the records using [Back], [Next], page numbers, and [Last] buttons in my data list. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with:

        @PageIndex INT,
        @PageSize INT,
        @postid int
        AS
        SET NOCOUNT ON
        begin
            WITH tmp AS (
                SELECT comments.*,
                       ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
                FROM comments
                WHERE (comments.postid = @postid))
            SELECT tmp.*
            FROM tmp
            WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex * @PageSize
        end
        RETURN

    Now everything works fine, and I have been able to implement [Next] and [Back] buttons in my data list pager. Now I need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words, I want to return the total number of rows matched by my first select statement, i.e.

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE (comments.postid = @postid))
        set @TotalRows = @@rowcount

    @@rowcount doesn't work and raises an error. I also can't get count(*) to work either. Is there another way to get the total number of rows, or is my approach doomed?
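
    One common fix, as a hedged sketch (not verified against the schema above): compute the total with a windowed COUNT(*) inside the same CTE, so every returned page row carries the overall count:

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*)     OVER ()                        AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;

    The empty OVER () clause makes the count run over the whole filtered set rather than per row, so no second query is needed.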

    Read the article

  • How Do I Escape Apostrophes in Field Values in SQL Server?

    - by Mikecancook
    I asked a question a couple of days ago about creating INSERTs by running a SELECT to move data to another server. That worked great until I ran into a table that has full-on HTML and apostrophes in it. What's the best way to deal with this? Luckily there aren't too many rows, so as a last resort it is feasible to copy and paste. But eventually I will need to do this again, and by that time the table will probably be way too big to copy and paste these HTML fields. This is what I have now:

        select 'Insert into userwidget ([Type],[UserName],[Title],[Description],[Data],[HtmlOutput],[DisplayOrder],[RealTime],[SubDisplayOrder]) VALUES ('
            + ISNULL('N''' + Convert(varchar(8000), Type) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Username) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Title) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Description) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Data) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), HTMLOutput) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), DisplayOrder) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), RealTime) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), SubDisplayOrder) + '''', 'NULL') + ')'
        from userwidget

    This works fine except for those pesky apostrophes in the HTMLOutput field. Can I escape them by having the query double up on the apostrophes, or is there a way of encoding the field result so it won't matter?
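
    Doubling the apostrophes is the standard escape in T-SQL string literals; a hedged sketch of how any of the columns above could be wrapped (shown for HTMLOutput only):

        -- Sketch: REPLACE doubles every embedded apostrophe before the value is
        -- wrapped in quotes, so the generated INSERT stays syntactically valid.
        SELECT 'Insert into userwidget ([HtmlOutput]) VALUES ('
             + ISNULL('N''' + REPLACE(CONVERT(varchar(8000), HTMLOutput), '''', '''''') + '''', 'NULL')
             + ')'
        FROM userwidget;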

    Read the article

  • Creating a function in SQL Server that accepts a phone number as a parameter and returns a random number

    - by Emer
    Hi guys, I am hoping someone can help me here, as Google is not being as forthcoming as I would have liked. I am relatively new to SQL Server, and so this is the first function I have set myself to write.

    The outline of the function is that it has a phone number varchar(15) as a parameter. It checks that this number is a proper number, i.e. it is 8 digits long and contains only numbers. The main character I am trying to avoid is '+'. Good number = 12345678; bad number = +12345678. Once the number is checked, I would like to produce a random number for each phone number that is passed in.

    I have looked at substrings, the LIKE operator, RAND(), LEFT(), and RIGHT() in order to search through the number and then produce a random number. I understand that RAND() will produce the same random number unless alterations are done to it, but right now it is about actually getting some working code. Any hints on this would be great, or even pointers towards some more documentation. I have read Books Online and it hasn't helped me; maybe I am not looking in the right places. Here is a snippet of code I was working on with RAND():

        declare @Phone Varchar (15)
        declare @Counter Varchar (1)
        declare @NewNumber Varchar(15)

        set @Phone = '12345678'
        set @Counter = len(@Phone)

        while @Counter > 0
        begin
            select case
                when @Phone like '%[0-9]%' then cast(rand() * 100000000 as int)
                else 'Bad Number'
            end
            set @counter = @counter - 1
        end
        return

    Thanks for the help in advance,
    Emer
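
    As a hedged sketch of the validation part: a single LIKE with a negated character class can reject anything that is not exactly 8 digits, with no loop required (names here are illustrative):

        -- Sketch only: validate that @Phone is exactly 8 characters, all digits.
        -- '%[^0-9]%' matches any string containing a non-digit, so NOT LIKE rejects '+'.
        DECLARE @Phone varchar(15)
        SET @Phone = '12345678'

        IF LEN(@Phone) = 8 AND @Phone NOT LIKE '%[^0-9]%'
            SELECT CAST(RAND(CHECKSUM(NEWID())) * 100000000 AS int) AS Result
        ELSE
            SELECT 'Bad Number' AS Result

    One caveat: SQL Server does not allow RAND() or NEWID() to be called directly inside a user-defined function, so the random part usually has to be passed in as a parameter or pulled from a view that selects NEWID().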

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        WHERE FREETEXT(O.Name, 'query')

    i.e., all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away:

        SELECT V.ID, V.Name
        FROM dbo.Venue V
        WHERE FREETEXT(V.Name, 'query')

    i.e., all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search')
           OR FREETEXT(V.Name, 'search')

    Here is the execution plan: http://uploadpad.com/files/query.PNG

    From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full text that is causing the issue here. Can someone explain (i) why this is so slow and (ii) if this is even supported / if I am even understanding this correctly? Thanks in advance for your help.
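
    A common workaround, as a hedged sketch against the tables above (not a verified fix): split the OR into two single-table full-text probes and combine them with UNION, which often plans far better than OR-ed FREETEXT predicates across tables:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search')
        UNION
        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(V.Name, 'search');

    UNION (rather than UNION ALL) also removes the duplicates that rows matching both predicates would otherwise produce, keeping the result identical to the OR version.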

    Read the article

  • What nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes on SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    Which leads the execution plan to show me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    For the first attempt to improve performance, I've created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I've executed the select statement and the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    Now the second try: I've deleted this nonclustered index in order to create a new one.

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ([CategoryName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I've executed the select statement and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    Am I doing something wrong, or is this expected? Shall I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryId) including the second field (CategoryName)?

    Read the article

  • How can I send multiple images in a server push Perl CGI program?

    - by Jujjuru
    I am a beginner in Perl, CGI, etc. I was experimenting with the server-push concept with a piece of Perl code. It is supposed to send a JPEG image to the client every three seconds. Unfortunately nothing seems to work. Can somebody help identify the problem? Here is the code:

        use strict;

        # turn off io buffering
        $| = 1;

        print "Content-type: multipart/x-mixed-replace;";
        print "boundary=magicalboundarystring\n\n";
        print "--magicalboundarystring\n";

        # list the jpg images
        my(@file_list) = glob "*.jpg";
        my($file) = "";
        foreach $file (@file_list)
        {
            open FILE, ">", $file or die "Cannot open file $file: $!";
            print "Content-type: image/jpeg\n\n";
            while (<FILE>)
            {
                print "$_";
            }
            close FILE;
            print "\n--magicalboundarystring\n";
            sleep 3;
            next;
        }

    EDIT: added turn off I/O buffering, added "use strict", and "@file_list", "$file" are made local.

    Read the article

  • How do I insert an HTML file that has Hebrew text and read it back in SQL Server 2008 using the filest

    - by gadym
    Hello all, I am new to the FILESTREAM option in SQL Server 2008, but I have already understood how to enable this option and how to create a table that allows you to save files. Let's say my table contains: id, name, filecontent.

    I tried to insert an HTML file (that has Hebrew chars/text in it) into this table. I'm writing in ASP.NET (C#), using Visual Studio 2008. But when I tried to read back the content, each Hebrew char became '?'. The actions I took were:

    1. I read the file like this:

        // open the stream reader
        System.IO.StreamReader aFile = new StreamReader(FileName, System.Text.UTF8Encoding.UTF8);
        // read the file to the end
        stream = aFile.ReadToEnd();
        // close the file
        aFile.Close();
        return stream; // returns the stream

    2. I inserted the 'stream' into the filecontent column as binary data.

    I tried to do a SELECT on this column and the data did return (after I converted it back to a string), but the Hebrew chars became '?'. How do I solve this problem? What should I pay attention to?

    Thanks,
    gadym

    Read the article

  • How do I trace SQL Server Failure Audit events?

    - by Tim Perry
    I recently took over management of a Windows 2003 server. The application log is being filled up with messages like these:

        Event Type:     Failure Audit
        Event Source:   MSSQLSERVER
        Event Category: (4)
        Event ID:       18456
        Date:           3/5/2010
        Time:           4:00:30 PM
        User:           N/A
        Computer:       FAIROAKS1
        Description:    Login failed for user 'administrator'. [CLIENT: <local machine>]
        Data:
        0000: 18 48 00 00 0e 00 00 00   .H......
        0008: 0a 00 00 00 46 00 41 00   ....F.A.
        0010: 49 00 52 00 4f 00 41 00   I.R.O.A.
        0018: 4b 00 53 00 31 00 00 00   K.S.1...
        0020: 07 00 00 00 6d 00 61 00   ....m.a.
        0028: 73 00 74 00 65 00 72 00   s.t.e.r.
        0030: 00 00                     ..
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    I'd like to figure out what program is causing these. Is there a way to trace and find out which process is causing these errors?
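
    One hedged starting point, assuming SQL Server 2005 or later: these failures are also written to SQL Server's own error log, which can be searched from T-SQL with xp_readerrorlog (the parameters below - log number, log type, search string - are the commonly documented ones, so treat this as a sketch):

        -- Sketch: search the current SQL Server error log (log 0, type 1 = SQL Server)
        -- for failed-login messages, which usually include the client and a state code.
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed';

    From there, SQL Server Profiler's "Audit Login Failed" event class can capture the offending host name and application name in real time.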

    Read the article

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files.

    Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs. But I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have one "orders" table with the following 6 fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but different quantities within 20 units, and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
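
    For what it's worth, criterion 3 (the hardest) can be expressed as a self-join; the sketch below is untested and assumes the orders table exactly as described:

        -- Hedged sketch for criterion 3: pair up orders for the same customer and
        -- product whose quantities differ by at most 20 units and whose sale dates
        -- fall within five days of each other. o2.order_id > o1.order_id keeps each
        -- suspect pair from being reported twice.
        SELECT o1.order_id, o2.order_id AS suspect_order_id
        FROM orders o1
        INNER JOIN orders o2
                ON  o2.customer_id = o1.customer_id
                AND o2.product_id  = o1.product_id
                AND o2.order_id    > o1.order_id
                AND ABS(o2.quantity - o1.quantity) <= 20
                AND ABS(DATEDIFF(day, o1.sale_date, o2.sale_date)) <= 5;

    Criteria 1 and 2 follow the same shape with the equality and range conditions swapped; an index covering (customer_id, product_id, sale_date) would likely matter as much as the query shape.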

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city & state and geocodes that address to a lat/long location. Right now I am utilizing the Google Maps API, ColdFusion, and SQL Server. Basically the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are.

    This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
            <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
                geocoder.geocode( {<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        var marker = new google.maps.Marker({
                            map: map,
                            position: results[0].geometry.location,
                            title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                        });
                    } else {
                        alert("Geocode was not successful for the following reason: " + status);
                    }
                });
            }
            </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
            geocoder = new google.maps.Geocoder();
            var latlng = new google.maps.LatLng(42.4167, -90.4290);
            var myOptions = {
                zoom: 5,
                center: latlng,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            }
            var marker = new google.maps.Marker({
                position: latlng,
                map: map,
                title: "Test"
            });
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses lat/long values that were hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.
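
    On the last point (storing the lat/long once it has been geocoded), here is a hedged T-SQL sketch; the table and column names are hypothetical:

        -- Sketch: add coordinate columns to a hypothetical Locations table,
        -- then save a geocoded result for one city/state row.
        ALTER TABLE dbo.Locations
            ADD Latitude  decimal(9,6) NULL,
                Longitude decimal(9,6) NULL;

        UPDATE dbo.Locations
        SET Latitude  = 41.256537,   -- value returned by the geocoder for Omaha, NE
            Longitude = -95.934503
        WHERE hometown = 'Omaha' AND state = 'NE';

    Caching the geocoder's answer this way also avoids re-geocoding the same city/state on every page load, which Google's usage limits tend to punish.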

    Read the article

  • SQL Server: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

        SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

        SET NOCOUNT ON
        DECLARE @table_count INT
        DECLARE @db_cursor VARCHAR(100)
        DECLARE database_cursor CURSOR FOR
            SELECT name FROM sys.databases WHERE name <> N'master'
        OPEN database_cursor
        FETCH NEXT FROM database_cursor INTO @db_cursor

        WHILE @@Fetch_status = 0
        BEGIN
            PRINT @db_cursor
            SET @table_count = 0

            DECLARE @table_cursor VARCHAR(100)
            DECLARE tbl_cursor CURSOR FOR
                SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
            OPEN tbl_cursor
            FETCH NEXT FROM tbl_cursor INTO @table_cursor

            WHILE @@Fetch_status = 0
            BEGIN
                DECLARE @table_cmd NVARCHAR(255)
                SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ')
                    PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
                --PRINT @table_cmd --debug
                EXEC sp_executesql @table_cmd
                SET @table_count = @table_count + 1
                FETCH NEXT FROM tbl_cursor INTO @table_cursor
            END

            CLOSE tbl_cursor
            DEALLOCATE tbl_cursor

            PRINT @db_cursor + N' Total Tables : ' + CAST(@table_count AS varchar(2))
            PRINT N'' -- print another blank line
            SET @table_count = 0
            FETCH NEXT FROM database_cursor INTO @db_cursor
        END

        CLOSE database_cursor
        DEALLOCATE database_cursor
        SET NOCOUNT OFF
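
    The usual cause: an unqualified information_schema.tables always resolves in the database the batch is running in (here, master). A hedged sketch of one fix, building the inner table list with dynamic SQL inside the outer loop so the three-part name points at @db_cursor:

        -- Sketch: collect the current database's tables into a temp table,
        -- then drive tbl_cursor from #tables instead of information_schema directly.
        CREATE TABLE #tables (table_name sysname);

        DECLARE @sql nvarchar(400);
        SET @sql = N'SELECT table_name FROM ' + QUOTENAME(@db_cursor)
                 + N'.information_schema.tables WHERE table_type = ''BASE TABLE''';

        INSERT INTO #tables (table_name)
        EXEC sp_executesql @sql;

    The same qualification would be needed inside @table_cmd, since the EXISTS probe otherwise also runs against master.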

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table.

    Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...

    Read the article

  • SQL Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario
    I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation.

    Proposed Idea
    What I want to do is run both applications, which will roughly produce 35,000 rows containing 10 columns each - so, all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not.

    Current Thoughts
    Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column, and write out any failed comparisons to a text log file.

    Question
    Could anybody suggest an efficient way in which to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem.

    Extra Detail
    The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
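
    Since both tables share a schema and SQL Server 2005 supports set operators, the column-by-column comparison can be done in one pass per direction; a hedged sketch with hypothetical table names:

        -- Rows in ResultsAppOne with no exact column-for-column match in ResultsAppTwo:
        SELECT * FROM dbo.ResultsAppOne
        EXCEPT
        SELECT * FROM dbo.ResultsAppTwo;

        -- And the reverse direction:
        SELECT * FROM dbo.ResultsAppTwo
        EXCEPT
        SELECT * FROM dbo.ResultsAppOne;

    EXCEPT compares every column and, unlike a join on each field, treats NULL as equal to NULL, which is usually what a reconciliation wants; the mismatched rows can then be written out to the log file.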

    Read the article

  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors.

    Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table (to remove last month's data), e.g. DELETE FROM tempTable
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as their organization_id, their user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to be able to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
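
    One commonly suggested rewrite (a hedged sketch, untested against the schema described): replace the correlated subquery with SQL Server's UPDATE ... FROM join syntax, which lets the optimizer do one set-based join instead of a per-row lookup:

        UPDATE t
        SET t.user_id = u.user_id
        FROM tempTable AS t
        INNER JOIN myUsers AS u
                ON u.external_id = t.external_id;

    An index on myUsers.external_id (and ideally on tempTable.external_id) usually matters even more than the query shape; without one, either version effectively scans 500k times.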

    Read the article

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008.

    - by Hamish Grubijan
    OK, they are not unit tests but end-to-end tests. The setup is somewhat involved: the tests will use C# and an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx

        Msg 6104, Level 16, State 1, Line 1
        Cannot use KILL to kill your own process.

    However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as using COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that?

    Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
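
    A hedged sketch of the usual alternative to KILLing sessions one by one: connect to master from a separate maintenance connection and force everyone else off the test database before restoring (database and backup names here are hypothetical):

        -- Run from a dedicated connection whose database context is master,
        -- so this session itself is not using TestDb.
        USE master;

        ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        RESTORE DATABASE TestDb FROM DISK = N'C:\Backups\TestDb.bak' WITH REPLACE;

        ALTER DATABASE TestDb SET MULTI_USER;

    Existing clients will not re-connect on their own: ODBC connections broken by ROLLBACK IMMEDIATE must be reopened (and any connection pool cleared) before the next test class runs.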

    Read the article

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters and uses them to select a column of "detail" records, and returns them as a single varchar list of comma-separated values. Now that I get to thinking about it, I would like to take this table- and application-specific function and make it more generic. I am not well-versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?

    Instead of calling:

        SELECT ejc_concatFormDetails(formuid, categoryName)

    I would like to make it work like:

        SELECT concatColumnValues(SELECT someColumn FROM SomeTable)

    Here is my function definition:

        FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
        RETURNS VARCHAR(1000) AS
        BEGIN
            DECLARE @returnData VARCHAR(1000)
            DECLARE @currentData VARCHAR(75)
            DECLARE dataCursor CURSOR FAST_FORWARD FOR
                SELECT data FROM DNet.ejc_FormDetails
                WHERE formuid = @formuid AND category = @category

            SET @returnData = ''
            OPEN dataCursor
            FETCH NEXT FROM dataCursor INTO @currentData
            WHILE (@@FETCH_STATUS = 0)
            BEGIN
                SET @returnData = @returnData + ', ' + @currentData
                FETCH NEXT FROM dataCursor INTO @currentData
            END
            CLOSE dataCursor
            DEALLOCATE dataCursor
            RETURN SUBSTRING(@returnData, 3, 1000)
        END

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a single parameter that is a result set, and then access that result set with a cursor?
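
    SQL Server 2008 does allow something close to this via table-valued parameters; a hedged sketch follows (the type and function names are made up, and the caller has to INSERT into the TVP rather than pass a bare SELECT):

        -- A one-column table type acts as the "column of data" parameter.
        CREATE TYPE dbo.VarcharList AS TABLE (val varchar(75));
        GO

        CREATE FUNCTION dbo.concatColumnValues (@vals dbo.VarcharList READONLY)
        RETURNS varchar(1000)
        AS
        BEGIN
            DECLARE @result varchar(1000);
            -- FOR XML PATH('') concatenates the rows without a cursor.
            SELECT @result = STUFF((SELECT ', ' + val FROM @vals FOR XML PATH('')), 1, 2, '');
            RETURN @result;
        END
        GO

        -- Usage sketch:
        DECLARE @list dbo.VarcharList;
        INSERT INTO @list SELECT data FROM DNet.ejc_FormDetails WHERE formuid = 1;
        SELECT dbo.concatColumnValues(@list);

    One caveat with the FOR XML trick: characters such as &, <, and > get XML-entitized and may need un-escaping if they can appear in the data.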

    Read the article

  • Large product catalog with statistics - alternatives to SQL Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, full-text search, and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return.

    The issue is this. Let's say the user does a search by keyword. On the search results page I need to display/query for:

    1. The first 20 matching products (paged, sorted)
    2. Total count of matching products, for paging
    3. List of stores, only for matching products
    4. List of brands, only for matching products
    5. List of colors, only for matching products

    Each query takes about 0.5 to 1 second. Altogether that is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches:

    1. Optimize the queries even more. I have already spent a lot of time on this one, so I am not sure it can be pushed further.
    2. Load the products first, then load the rest of the information using AJAX. More like a workaround; it will need a revised UI.
    3. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields.

    I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, but they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white

    How do sites like Zappos and Amazon make their results, filters, and stats appear almost instantly?
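
    One pattern worth noting, as a hedged sketch over a hypothetical Products table and facet columns: materialize the keyword matches once into a temp table, then derive the count and every facet list from that small set, rather than re-running the expensive full-text search five times:

        DECLARE @keyword nvarchar(100);
        SET @keyword = N'white';

        -- Run the expensive full-text probe once.
        SELECT ProductId, StoreId, BrandId, Color
        INTO #matches
        FROM dbo.Products
        WHERE FREETEXT(Name, @keyword);

        SELECT COUNT(*) FROM #matches;                                    -- total, for paging
        SELECT DISTINCT StoreId FROM #matches;                            -- store facet
        SELECT BrandId, COUNT(*) AS hits FROM #matches GROUP BY BrandId;  -- brand facet with counts
        SELECT Color,   COUNT(*) AS hits FROM #matches GROUP BY Color;    -- color facet with counts

    Large retail sites typically go further and serve these facet counts from a dedicated search engine or precomputed cache rather than the transactional database, which is a big part of why they feel instant.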

    Read the article

  • Is there a way to create subdatabases as a kind of subfolder in SQL Server?

    - by user193655
    I am creating an application where there is a main DB and where other data is stored in secondary databases. The secondary databases follow a "plugin" approach. I use SQL Server. A simple installation of the application will just have the main DB, while as an option one can activate more "plug-ins", and for every plug-in there will be a new database.

    Now, why I made this choice: I have to work with an existing legacy system, and this is the smartest thing I could figure out to implement the plugin system. The main DB and plugin DBs have exactly the same schema (basically the plugin DBs have some "special content" - some important data that one can use as a kind of template, think of a letter template for example - in the application). Plugin DBs are thus used in read-only mode; they are "repositories of content". The "smart" thing is that the main application can also be used by "plugin writers": they just write a DB, inserting content, and by making a backup of that database they create a potential plugin (this is why all DBs have the same schema). These plugin DBs are downloaded from the internet whenever a content upgrade is available; every time, the full plugin DB is destroyed and a new one with the same name is created. This is for simplicity, and also because the size of these DBs is generally small.

    Now, this works. Anyway, I would prefer to organize the DBs in a kind of tree structure, so that I can force the plugin DBs to be "sub-DBs" of the main application DB. As a workaround I am thinking of using naming rules, like:

        ApplicationDB (for the main application DB)
        ApplicationDB_PlugIn_N (for the N-th plugin DB)

    When I search for plugin 1, I try to connect to ApplicationDB_PlugIn_1; if I don't find the DB, I raise an error. This situation can happen, for example, if some DBA renamed ApplicationDB_Plugin_1. Since those plugin DBs are really dependent on ApplicationDB only, I was trying to "do the subfolder trick". Can anyone suggest a way to do this? Can you comment on this self-made plugin approach I described above?
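
    SQL Server has no database hierarchy, so a naming convention is about as close as it gets; a hedged sketch of enumerating and validating the plugin DBs from the naming rule above:

        -- Sketch: list every database following the ApplicationDB_PlugIn_N convention,
        -- and check for one specific plugin before connecting to it.
        SELECT name
        FROM sys.databases
        WHERE name LIKE 'ApplicationDB\_PlugIn\_%' ESCAPE '\';

        IF DB_ID(N'ApplicationDB_PlugIn_1') IS NULL
            RAISERROR('Plugin database ApplicationDB_PlugIn_1 not found.', 16, 1);

    Enumerating from sys.databases at startup, rather than hard-coding plugin numbers, also makes a renamed or missing plugin DB detectable before any query runs against it.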

    Read the article
