Search Results

Search found 5101 results on 205 pages for 'ssrs 2005'.


  • NINE Great Reasons to Attend the GlassFish Community Event at JavaOne 2012

    - by Alexandra Huff
    Are you coming to the annual GlassFish Community Event at JavaOne this year? Here are nine great reasons not to miss it!

      1. Great company: Meet and mingle with community leaders and luminaries, the GlassFish engineering team, and Oracle executives!
      2. Learn from others: How are your peers using GlassFish in creative ways? A few community members will share their challenges and creative solutions.
      3. Ask tough questions: Meet Oracle GlassFish and Middleware executives; the panel discussion will be moderated by one of our stellar community leaders!
      4. Shirts! Be sure to get this year's GlassFish T-shirt, designed by and voted on by YOU, our community members! Don't miss it - they go fast.
      5. Share your story: Give us a two-minute update on why you love GlassFish and how you are using it! We will immortalize you in a very brief video and post it to our GlassFish Stories page!
      6. Find out about the new book, hot off the press, authored by our very own Arun Gupta: "Java EE 6 Pocket Guide: A Quick Reference for Simplified Enterprise Java Development".
      7. If you share your story, you will win a copy of Arun's new book as our thank-you gift!
      8. Suggest some ideas on how to make GlassFish even better!
      9. Have fun: Lively discussion, news and updates, excellent company -- this is THE place to be on Sunday at JavaOne!

    Convinced? Excellent! Then please register here! A JavaOne Pass is required to enter Moscone Center. All passes are accepted, including Discover, Exhibitor, Press, Blogger, etc.

    Agenda:
      11:00 - 11:05: Introduction
      11:05 - 11:30: Roadmap and Community Updates
      11:30 - 12:15: Q&A with Executive Speaker Panel from Oracle and the GlassFish Team
      12:15 - 01:00: Customer Testimonials

    Location: Moscone West, Room 2005. Add sessions UGF10359 and UGF10360 to Schedule Builder.

    Read the article

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild a huge old website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. The website currently uses:

      Ajax (client side with prototype.js)
      ASP (VBScript server side)
      SQL Server 2005
      IIS 7 as the web server

    The site uses hundreds of stored procedures, and every request is made by an Ajax call to a single ASP page that contains a huge select case. A short example:

    JavaScript + Prototype:

      var data = { action: 'NEWS',
                   callback: 'doNews',
                   param1: $('text_example').value,
                   ......: ..........};
      AjaxGet(data); // perform a call using another function + prototype

    Server-side ASP:

      <%
      ......
      select case request("Action")
        case "NEWS"
          With cmmDB
            .ActiveConnection = Conn
            .CommandText = "sp_NEWS_TO_CALL_for_example"
            .CommandType = adCmdStoredProc
            Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
            Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
            ' ........ can be more parameters
            .Parameters.Append par0DB
            .Parameters.Append par1DB
            par0DB.Value = request("Param1")
            par1DB.Value = request(".....")
            set rs = cmmDB.execute
            RecodsetToJSON rs, jsa ' create JSON response using a sub
          End With
      ....
      %>

    So as you can see, I have one ASP page with a lot of CASEs, and this page answers every Ajax request on the site. My questions are:

      1. Instead of having many CASEs, is it possible to write dynamic VB code that parses the Ajax request and builds the call to the desired SP on the fly (including the parameters passed by JS)?
      2. What is the best approach to handle situations like this while using the advantages of .NET plus Prototype or jQuery?
      3. How do big sites handle situations like this? Do they create one page per request?

    Thanks in advance for suggestions, directions and tips.
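    One way to retire the giant select case, sketched below as a C# ASP.NET generic handler: keep a whitelist that maps action names to stored procedures, then let ADO.NET discover each procedure's parameters and bind them from the request. This is a minimal sketch, not the original site's code; the handler name, whitelist contents, and connection string are assumptions.

      using System;
      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;
      using System.Web;

      public class ActionDispatcher : IHttpHandler
      {
          // Whitelist: only actions listed here can be invoked, so callers
          // cannot execute arbitrary stored procedures.
          private static readonly Dictionary<string, string> Actions =
              new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
              {
                  { "NEWS", "sp_NEWS_TO_CALL_for_example" }
                  // one entry per action instead of one CASE per action
              };

          public void ProcessRequest(HttpContext context)
          {
              string action = context.Request["Action"];
              string procName;
              if (action == null || !Actions.TryGetValue(action, out procName))
              {
                  context.Response.StatusCode = 400; // unknown action
                  return;
              }

              using (var conn = new SqlConnection("...connection string..."))
              using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
              {
                  conn.Open();
                  // Ask SQL Server for the procedure's parameter list, then bind
                  // each input parameter from the request field of the same name.
                  SqlCommandBuilder.DeriveParameters(cmd);
                  foreach (SqlParameter p in cmd.Parameters)
                  {
                      if (p.Direction != ParameterDirection.Input) continue;
                      string value = context.Request[p.ParameterName.TrimStart('@')];
                      p.Value = (object)value ?? DBNull.Value;
                  }
                  using (SqlDataReader rs = cmd.ExecuteReader())
                  {
                      context.Response.ContentType = "application/json";
                      // serialize the reader to JSON here (the equivalent of the
                      // original RecodsetToJSON sub)
                  }
              }
          }

          public bool IsReusable { get { return true; } }
      }

    Note that DeriveParameters costs an extra round trip to the server per call, so in practice you would cache the derived parameter set per procedure.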

    Read the article

  • Public Solaris/SPARC roadmap until 2015

    - by Karim Berrah
    It is now public and gives you a nice overview of what's going on and where Oracle is heading with Solaris and the SPARC processors. It's now available from here. What can we learn from this roadmap? Well, if you look carefully:

      Oracle is announcing Solaris 11 this year. The release date should be ... check OOW11.
      Solaris 10 updates should still be released in 2012 (remember, it was released in 2005). Check the Solaris lifecycle to understand how long it is to stay side by side with Solaris 11.
      In 2011, a great 3x single-strand improvement for the T-Series. Something great is under preparation, probably to be revealed at Oracle OpenWorld 2011. Good news for ISVs!
      In 2012, a great 6x throughput improvement for the M-Series! How can this be done? ....

    Nearly everything at the SPARC/Solaris level is said through the public roadmap, but as you know, the devil is in the details ;)

    Read the article

  • Goal => Microsoft Certification Exams for .NET 4

    - by Raghuraman Kanchi
    Microsoft Learning has announced the availability dates for the .NET 4.0 exams: 2nd July 2010 onwards. Being MCSD.NET for 2005, my friend and I decided to skip the 2008 certification exams (MCTS/MCPD) because we felt it was a mere layer on top of .NET 2.0. Not so for .NET 4.0: we see it as a major overhaul, with the best of the .NET releases in hand. The exams are listed below for direct reference.

      Exam 71-511, TS: Windows Applications Development with Microsoft .NET Framework 4
      Exam 71-515, TS: Web Applications Development with Microsoft .NET Framework 4
      Exam 71-513, TS: Windows Communication Foundation Development with Microsoft .NET Framework 4
      Exam 71-516, TS: Accessing Data with Microsoft .NET Framework 4
      Exam 71-518, Pro: Designing and Developing Windows Applications Using Microsoft .NET Framework 4
      Exam 71-519, Pro: Designing and Developing Web Applications Using Microsoft .NET Framework 4

    I am planning to prepare for each of these exams alongside my .NET 4.0 work. Ever since the release of VS 2010 Beta 1, I have been working mostly on .NET 4.0, including Silverlight and MEF. Now it is time to read documentation, prepare notes, understand the material, do labs, write programs and applications, and pass the exams.

    Read the article

  • Build vs Rebuild

    - by prash
    Build compiles and links only the source files that have changed since the last build, while Rebuild compiles and links all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync, and a rebuild is necessary to make the build successful. In practice, you never need to Clean.

    Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right-click the desired project name in the Solution Explorer tab and select Set as StartUp project. The project name then appears in bold.

    Compile just compiles the source file currently being edited. It is useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile.

    All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to do those steps explicitly every time. This allows for quick "trial and error" debugging.

    Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.NET ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).

      Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f
      Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en
      Visual Studio 2005 VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading: the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

      ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
      ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly highly recommend that you use solid state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
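    To verify the change took effect, both flags can be read back from sys.databases. A minimal C# check; the connection string and database name below are placeholders:

      using System;
      using System.Data.SqlClient;

      class SnapshotIsolationCheck
      {
          static void Main()
          {
              using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
              using (var cmd = new SqlCommand(
                  "SELECT snapshot_isolation_state_desc, is_read_committed_snapshot_on " +
                  "FROM sys.databases WHERE name = @db", conn))
              {
                  cmd.Parameters.AddWithValue("@db", "DB"); // target database name
                  conn.Open();
                  using (SqlDataReader r = cmd.ExecuteReader())
                  {
                      while (r.Read())
                      {
                          // Expect ON / True after running the two ALTER DATABASE commands.
                          Console.WriteLine("ALLOW_SNAPSHOT_ISOLATION: " + r.GetString(0));
                          Console.WriteLine("READ_COMMITTED_SNAPSHOT:  " + r.GetBoolean(1));
                      }
                  }
              }
          }
      }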

    Read the article

  • Calendar issue in simple java form using swing package?

    - by Rand Mate
    I have created a form using the Java Swing package. To add a Date of Birth field to the form, I used the following code to build the combo boxes and add them to the frame. Is there any alternative way to simplify this code?

      JLabel l2 = new JLabel("Date of Birth");
      l2.setBounds(200,160,100,20);

      String day[] = {"1","2","3","4","5","6","7","8","9","10",
                      "11","12","13","14","15","16","17","18","19","20",
                      "21","22","23","24","25","26","27","28","29","30","31"};
      String month[] = {"Jan","Feb","Mar","Apr","May","Jun",
                        "Jul","Aug","Sep","Oct","Nov","Dec"};
      String year[] = {"2012","2011","2010","2009","2008","2007","2006","2005",
                       "2004","2003","2002","2001","2000","1999","1998","1997",
                       "1996","1995","1994","1993","1992","1991","1990","1989",
                       "1988","1987","1986","1985","1984","1983","1982","1981",
                       "1980","1979","1978","1977","1976","1975"};

      JComboBox cb1 = new JComboBox(day);
      cb1.setBounds(450,160,50,20);
      JComboBox cb2 = new JComboBox(month);
      cb2.setBounds(500,160,50,20);
      JComboBox cb3 = new JComboBox(year);
      cb3.setBounds(550,160,60,20);

    Read the article

  • ReSharper 7.1 update

    - by TATWORTH
    JetBrains have announced ReSharper 7.1: a considerable update to the powerful .NET developer productivity tool for Visual Studio. They invite you to download ReSharper 7.1 and take it for a free 30-day trial. I urge you to try this excellent Visual Studio add-on. Here is their announcement:

    Following this update, ReSharper 7 brings even more value to all .NET developers, such as more ways to refactor, inspect, clean up, review and generate code. Feature highlights of ReSharper 7 now include:

      - Full integration with Visual Studio 2012 while maintaining support for Visual Studio 2005, 2008, and 2010.
      - Performance and bug fixes: since releasing version 7.0 this summer, we have fixed over 300 performance problems and bugs.
      - New code inspections and contract annotations for a more robust .NET code quality analysis. Sharing ReSharper code inspection results with teammates has been streamlined as well, for the purposes of code review.
      - Improved tooling for .NET code maintenance, including the top-requested Extract Class refactoring that helps decrease code complexity, as well as a way to remove unused assembly references across the entire solution.
      - Enhanced code formatter: we have implemented some of the most demanded code formatter improvements so far. For example, ReSharper 7.1 is able to format XML doc comments and chained method calls.
      - Additional code exploration features helping visualize hierarchies of polymorphic members and CSS styles.
      - An extended and fine-tuned code generation toolset.

    In terms of support for specific technologies and frameworks, ReSharper 7 is on the cutting edge as well, providing:

      - Support for VB.NET refined with the Extract Class refactoring, new quick-fixes and improved IntelliSense.
      - XAML support considerably enhanced in terms of code completion, typing assistance, naming style control, and code generation.
      - An extensive pack of functionality for developers looking to create Windows Store applications for Windows 8.
      - An INotifyPropertyChanged interface support pack to improve productivity of Windows Forms, WPF and Silverlight application developers.
      - An extended web development toolset, including improvements to JavaScript support, and initial support for ASP.NET 4.5 and ASP.NET MVC 4.
      - Addition of two previously unsupported Microsoft development technologies: LightSwitch and SharePoint.

    For details on features and improvements in ReSharper 7 and a 30-day free trial, please read What's New in ReSharper 7.

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live, we currently use the Visual Studio Database project to build a script for all our schema changes. The problem we have been having is needing to alter or add to that script to add or fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database projects have a way to store these scripts that only need to be run once, and somehow include them in the build? Do they know which scripts need to be run (for example, if we are inserting default data, we don't want to do that a second time)? Or is there a better way to manage these scripts?

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags. It also generates additional tags if you are commenting a method with parameters and a return type.

    I already knew that Intellisense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added.

    After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool, built around Microsoft's own Sandcastle engine, called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help file for your own projects! You can check it out here: http://shfb.codeplex.com/

    If you are curious how to set Visual Studio to generate the above-mentioned XML documentation files, simply go to your project's property page and, on the Build tab, enable the XML documentation file output (my paths are specific; you can leave yours at the default values).
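    For illustration, this is the kind of comment block Intellisense and Sandcastle consume; the method itself is a made-up example, not taken from any particular project:

      using System;

      public static class PriceMath
      {
          /// <summary>
          /// Calculates the gross price from a net price and a tax rate.
          /// </summary>
          /// <param name="netPrice">The price before tax.</param>
          /// <param name="taxRate">The tax rate as a fraction, e.g. 0.19 for 19%.</param>
          /// <returns>The net price including tax.</returns>
          /// <exception cref="ArgumentOutOfRangeException">
          /// Thrown when <paramref name="taxRate"/> is negative.
          /// </exception>
          public static decimal GrossPrice(decimal netPrice, decimal taxRate)
          {
              if (taxRate < 0)
                  throw new ArgumentOutOfRangeException("taxRate");
              return netPrice * (1 + taxRate);
          }
      }

    Building with the XML documentation option produces an .xml file next to the .dll; that pair of files is exactly what Sandcastle Help File Builder takes as input.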

    Read the article

  • OpenXDK Questions

    - by iamcreasy
    I was strolling around XBox development. Apart from buying a devkit from Microsoft, another thing that got my attention is OpenXDK, which stands for Open XBox Development Kit. From their main site it's pretty obvious that there hasn't been any update since 2005, but digging a little deeper I found that their project repository was still being updated; the last timestamp was 2009-02-15. A quick Google search suggested it is not really in a good state to poke around in: many, MANY features are absent. Being a hobby project, I perfectly understand. But those results are quite old. The question is: is there anybody who has any experience with OpenXDK? If so, is it possible to shed some light on its limitations? Is this a mature project? How is the latest version, and what is it capable of doing? Or should I just stay away from it?

    Read the article

  • EOFs in Solaris 11

    - by nospam(at)example.com (Joerg Moellenkamp)
    Well, from the comments here and elsewhere, the two worst things seemed to be the removal of 32-bit support and the removal of support for certain components. Just to set things in perspective: Solaris 10 was released in 2005, and the newest class of machines not supported by it was the Ultra1, released in 1995. The UltraSPARC systems not able to run Solaris 11 were released in 2001. Well, we have 2011 now. Regarding 32-bit support: I don't think "playing around with Solaris on old gear" is the problem. First of all, most people are playing around with virtual machines. But there is something different: 64-bit computing was introduced for x86 in 2003 (yes, it's really that old). I think this move hurts most the people using boards with the first-gen Intel Atom "Silverthorne" as small file servers. And then, Solaris 10 won't disappear with Solaris 11.

    Read the article

  • Dual displays not working - NVidia - Ubuntu 12.04 - Second Monitor - Two Screens

    - by user75105
    Graphics Card: NVidia 460 GTX. Driver: NVIDIA accelerated graphics driver (version current) I have one DVI monitor, an old Dell LCD from 2005, and one VGA monitor, an Asus ML238H from 2010 whose HDMI port broke. The Asus is plugged into my graphics card's primary monitor slot and is the better monitor even though it is VGA but my computer defaults to the Dell. This happens when I boot as well; the loading screens, the motherboard brand image, etc. are all displayed on the Dell monitor until Windows loads. Then both monitors work. The same thing happened when I booted up Ubuntu 12.04 but I did not see the second monitor when the log-in screen popped up, nor did I when I logged in. I went to System Settings/Displays and my Asus monitor is not an option. I clicked Detect Displays and the Asus is not detected. I looked at the other questions regarding NVIDIA drivers and recalled my problems with Ubuntu a few years ago and decided to check the driver. I went to Additional Drivers to install the proprietary driver and it looks like it's installed and active but I'm still having this problem. There is another driver option, the post-release NVIDIA driver, but that does not fix the problem either. Also, under System Details/Graphics the graphics device is listed as Unknown, which might indicate that it is using an open source generic driver and not the proprietary NVidia driver. But under Additional Drivers it says that I am using the NVidia driver. Any help is appreciated.

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new; just think of the once-popular 'Who'sWho'. What really changed is the price-value ratio: the curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing.). In addition, the different channels of digital information distribution significantly change the value of information; I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are then apparently no longer visible in today's most relevant channels of information consumption. But the more freemium content is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gatekeepers', which - to me - somehow does not match the spirit of the WWW and generation Y's perception of information consumption and exchange.

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server via a script URL:

      var src = "http://domain.zzz/log/method?value1=x&value2=x"
      var e = document.createElement('script');
      e.src = src;

    I guess the idea was that cross-domain requests didn't have to be enabled. It was also written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and it would ideally work in almost any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to know it was received before the next one is sent. I can't find anyone doing this online, or any metrics as to whether it is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things or still the best way to send data to the server when you are mostly sending data one way and need the best performance possible. So, in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XmlHttpRequests?

    Read the article

  • Don't speak to the DBA while he/she is doing the job?!

    - by Uri Dimant
    Our DBA was very busy today: he helped our developers write efficient code, explained to a programmer how to launch an old DTS package on SQL Server 2005, and so on. You know how it is: someone comes in and asks a question, and you answer it while you are busy doing something else. Today, while answering one of those questions, he deleted a very critical database by mistake. Fortunately we had zero data loss, thanks to our backup/recovery strategy. I think it is acceptable to say that you are busy right now and to please ask the question later; what do you think? How do you respond if someone asks you a question while you are doing something on the server or database?

    Read the article

  • How to fix legacy code that uses <string.h> unsafely?

    - by Snowbody
    We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled with Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Politics dictate that the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (sprintf(), strcpy(), etc.). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warnings recommend that we move to sprintf_s(), strcpy_s(), etc. However, these are Microsoft-created (and proprietary) functions and aren't available on (say) gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string library. I only want to go over the code once. I'd rather not switch to C++ if we can help it.

    Read the article

  • Bulk inserting best way to about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi. So I saw this post, read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c

    I still have some questions and want to know how things actually work. I found two tutorials:

      http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
      http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

    The first uses two ADO.NET 2.0 features, BulkInsert and BulkCopy; the second uses LINQ to SQL and OpenXML. The second sort of appeals to me as I am using LINQ to SQL already and prefer it over ADO.NET. However, as one person pointed out in the posts, it just works around the issue at the cost of performance (nothing wrong with that in my opinion).

    First I will talk about the two ways in the first tutorial. I am using VS2010 Express, .NET 4.0, MVC 2.0, SQL Server 2005.

      1. Is ADO.NET 2.0 the most current version?
      2. Based on the technology I am using, are there updates to what I am going to show that would improve it somehow?
      3. Is there anything these tutorials left out that I should know about?

    BulkInsert

    I am using this table for all the examples:

      CREATE TABLE [dbo].[TBL_TEST_TEST]
      (
          ID INT IDENTITY(1,1) PRIMARY KEY,
          [NAME] [varchar](50)
      )

    SP code:

      USE [Test]
      GO
      /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50))
      AS
      BEGIN
          INSERT INTO TBL_TEST_TEST VALUES (@Name);
      END

    C# code:

      /// <summary>
      /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
      /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to
      /// insert 500,000 records in one go.
      /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
      /// </summary>
      private static void BatchInsert()
      {
          // Get the DataTable with Rows State as RowState.Added
          DataTable dtInsertRows = GetDataTable();

          SqlConnection connection = new SqlConnection(connectionString);
          SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
          command.CommandType = CommandType.StoredProcedure;
          command.UpdatedRowSource = UpdateRowSource.None;

          // Set the Parameter with appropriate Source Column Name
          command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

          SqlDataAdapter adpt = new SqlDataAdapter();
          adpt.InsertCommand = command;
          // Specify the number of records to be Inserted/Updated in one go. Default is 1.
          adpt.UpdateBatchSize = 1000;

          connection.Open();
          int recordsInserted = adpt.Update(dtInsertRows);
          connection.Close();
      }

    So the first thing is the batch size. Why would you set the batch size to anything but the number of records you are sending? (I am sending 500,000 records, so I set a batch size of 500,000.) Next, why does it crash when I do this? If I set the batch size to 1000 it works just fine:

      System.Data.SqlClient.SqlException was unhandled
      Message="A transport-level error has occurred when sending the request to the server.
      (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
      Source=".Net SqlClient Data Provider"
      ErrorCode=-2146232060
      Class=20
      LineNumber=0
      Number=233
      Server=""
      State=0
      StackTrace:
           at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
           at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
           at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
           at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
           at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
           at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
           at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
      InnerException:

    Inserting 500,000 records with an insert batch size of 1000 took 2 minutes and 54 seconds. Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). So I find that kind of slow compared to all my other attempts (except the LINQ to SQL insert one), and I am not really sure why.

    Next I looked at BulkCopy:

      /// <summary>
      /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
      /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
      /// </summary>
      private static void BatchBulkCopy()
      {
          // Get the DataTable
          DataTable dtInsertRows = GetDataTable();

          using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
          {
              sbc.DestinationTableName = "TBL_TEST_TEST";

              // Number of records to be processed in one go
              sbc.BatchSize = 500000;

              // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table
              // sbc.ColumnMappings.Add("ID", "ID");
              sbc.ColumnMappings.Add("NAME", "NAME");

              // Number of records after which client has to be notified about its status
              sbc.NotifyAfter = dtInsertRows.Rows.Count;

              // Event that gets fired when NotifyAfter number of records are processed.
              sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

              // Finally write to server
              sbc.WriteToServer(dtInsertRows);
              sbc.Close();
          }
      }

    This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchBulkCopy had no problem with a 500,000 batch size, so again, why make it smaller than the number of records you want to send? With BatchBulkCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried a batch size of 1,000 and it took only 8 seconds. So much faster than the BulkInsert one above.

    Now I tried the other tutorial.

      USE [Test]
      GO
      /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
      AS
      DECLARE @hDoc int

      exec sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

      INSERT INTO TBL_TEST_TEST (NAME)
      SELECT XMLProdTable.NAME
      FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
      WITH (
          ID Int,
          NAME varchar(100)
      ) XMLProdTable

      EXEC sp_xml_removedocument @hDoc

    C# code:

      /// <summary>
      /// This uses LINQ to SQL to make the table objects.
      /// They are then serialized to an XML document and sent to a stored procedure
      /// that does a bulk insert (I think with OPENXML).
      /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
      /// </summary>
      private static void LinqInsertXMLBatch()
      {
          using (TestDataContext db = new TestDataContext())
          {
              TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
              for (int count = 0; count < 500000; count++)
              {
                  TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                  testRecord.NAME = "Name : " + count;
                  testRecords[count] = testRecord;
              }

              StringBuilder sBuilder = new StringBuilder();
              System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
              XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
              serializer.Serialize(sWriter, testRecords);
              db.insertTestData(sBuilder.ToString());
          }
      }

    So I like this because I get to use objects, even though it is kind of redundant. I don't get how the SP works, though; I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I don't even know how to take this example SP and change it to fit my tables, since, as I said, I don't know what is going on. I also don't know what would happen if the object has more tables in it. Say I have a ProductName table that has a relationship to a Product table, or something like that. In LINQ to SQL you could get the ProductName object and make changes to the Product table through that same object, so I am not sure how to take that into account; I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds.

    The last way, of course, was just using LINQ to do it all, and it was pretty bad:

      /// <summary>
      /// This uses LINQ to SQL to insert lots of records.
      /// This way is slow as it uses no mass insert.
      /// I only tried to insert 50,000 records, as I did not want to sit around
      /// until it did 500,000 records.
      /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
      /// </summary>
      private static void LinqInsertAll()
      {
          using (TestDataContext db = new TestDataContext())
          {
              db.CommandTimeout = 600;
              for (int count = 0; count < 50000; count++)
              {
                  TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                  testRecord.NAME = "Name : " + count;
                  db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
              }
              db.SubmitChanges();
          }
      }

    I did only 50,000 records, and that took over a minute. So I have really narrowed it down to the LINQ to SQL bulk insert way or bulk copy; I am just not sure how to do either when you have relationships. I am also not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying that yet. I don't think I will ever need to insert/update more than 50,000 records in one go, but at the same time I know I will have to validate records before inserting, so that will slow it down, and that sort of makes LINQ to SQL nicer, since you have objects, especially if you are first parsing data from an XML file before inserting into the database.

    Full C# code:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Xml.Serialization;
      using System.Data;
      using System.Data.SqlClient;

      namespace TestIQueryable
      {
          class Program
          {
              private static string connectionString = "";

              static void Main(string[] args)
              {
                  BatchInsert();
                  Console.WriteLine("done");
              }

              // LinqInsertAll(), LinqInsertXMLBatch() and BatchBulkCopy() exactly as
              // listed above.

              private static void BatchInsert()
              {
                  // Get the DataTable with Rows State as RowState.Added
                  DataTable dtInsertRows = GetDataTable();

                  SqlConnection connection = new SqlConnection(connectionString);
                  SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
                  command.CommandType = CommandType.StoredProcedure;
                  command.UpdatedRowSource = UpdateRowSource.None;

                  // Set the Parameter with appropriate Source Column Name
                  command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

                  SqlDataAdapter adpt = new SqlDataAdapter();
                  adpt.InsertCommand = command;
                  // Specify the number of records to be Inserted/Updated in one go. Default is 1.
                  adpt.UpdateBatchSize = 500000;

                  connection.Open();
                  int recordsInserted = adpt.Update(dtInsertRows);
                  connection.Close();
              }

              private static DataTable GetDataTable()
              {
                  // You first need a DataTable that holds all the insert values
                  DataTable dtInsertRows = new DataTable();
                  dtInsertRows.Columns.Add("NAME");

                  for (int i = 0; i < 500000; i++)
                  {
                      DataRow drInsertRow = dtInsertRows.NewRow();
                      string name = "Name : " + i;
                      drInsertRow["NAME"] = name;
                      dtInsertRows.Rows.Add(drInsertRow);
                  }
                  return dtInsertRows;
              }

              static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
              {
                  Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
              }
          }
      }
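    On the open batch-size question above: with SqlBulkCopy, BatchSize only controls how many rows are sent to the server per round trip, and the default of 0 sends everything as a single batch. One thing the tutorials skip: when SqlBulkCopy is constructed from a connection string, each batch commits on its own, so a failure part-way through can leave earlier batches committed. Below is a hedged variation of the post's BatchBulkCopy method (reusing its connectionString field, GetDataTable helper, and table) that runs the whole copy inside one explicit transaction, so a mid-copy failure rolls everything back:

      private static void BatchBulkCopyTransactional()
      {
          DataTable dtInsertRows = GetDataTable();

          using (SqlConnection connection = new SqlConnection(connectionString))
          {
              connection.Open();
              using (SqlTransaction tx = connection.BeginTransaction())
              using (SqlBulkCopy sbc = new SqlBulkCopy(connection, SqlBulkCopyOptions.KeepIdentity, tx))
              {
                  sbc.DestinationTableName = "TBL_TEST_TEST";
                  sbc.BulkCopyTimeout = 600; // seconds; the default of 30 can be tight for 500,000 rows
                  sbc.BatchSize = 10000;     // rows per round trip; all batches share the transaction tx
                  sbc.ColumnMappings.Add("NAME", "NAME");
                  try
                  {
                      sbc.WriteToServer(dtInsertRows);
                      tx.Commit();
                  }
                  catch
                  {
                      tx.Rollback();
                      throw;
                  }
              }
          }
      }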

    Read the article

  • De-normalization for the sake of reports - Good or Bad?

    - by Travis
    What are the pros and cons of de-normalizing an enterprise application database because it will make writing reports easier?

      Pro: designing reports in SSRS will probably be "easier", since no joins will be necessary.
      Con: developing/maintaining the app to handle de-normalized data will become more difficult, due to duplication of data and synchronization.

    Others?

    Read the article

  • SQL Server 2008 : Standard or SQL Express

    - by dr
    Which is the better choice on a development box if you primarily develop ASP.NET applications and SSRS reports? I have never had to use the Express editions, so I don't really know the pros or cons. The cons I have listed for the Standard+ editions are:

      - the toll they take on system resources
      - the pain of attaching databases for projects
      - the pain of detaching unused databases
      - $$$

    Pros:

      - you have everything you need
      - the Management Studio features
      - an easy move to production
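    One small point of comparison on the attach/detach pain: Express installs as a named instance by default and can attach a project-local .mdf straight from the connection string, while a Standard instance is usually pointed at an already-attached database. A minimal sketch; the instance name, database name, and file path are assumptions for illustration:

      using System;
      using System.Data.SqlClient;

      class ExpressVsStandard
      {
          static void Main()
          {
              // Express: default named instance, attaching an .mdf on open
              var express = new SqlConnectionStringBuilder
              {
                  DataSource = @".\SQLEXPRESS",
                  AttachDBFilename = @"C:\Projects\App\App_Data\App.mdf",
                  IntegratedSecurity = true
              };

              // Standard: typically the default instance, database attached server-side
              var standard = new SqlConnectionStringBuilder
              {
                  DataSource = ".",
                  InitialCatalog = "App",
                  IntegratedSecurity = true
              };

              Console.WriteLine(express.ConnectionString);
              Console.WriteLine(standard.ConnectionString);
          }
      }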

    Read the article

  • Controlling the print options of a pdf

    - by John Nolan
    I am producing a number of PDFs using either SSRS or an HTML-to-PDF component. What I want to do for each document is send the first page to tray 1 and the subsequent pages to a different tray. Is there any way to do this? System.Drawing.Printing and System.Printing seem like good candidates, but they don't seem to be useful for PDFs (I may be wrong). The Adobe SDK doesn't, at first glance, seem to have that level of granularity either.

    Read the article

  • Marking up table joins in the Microstrategy project metadata with the architect tool?

    - by ConcernedOfTunbridgeWells
    I am evaluating MicroStrategy 9.0.1 and attempting to build a prototype metadata layer using its Architect tool. The tool doesn't seem to have any specific means to mark up joins in the way that the editing tools for SSRS data source views or Business Objects universes do. How does this work in MicroStrategy? I have never used it before and may be making invalid assumptions based on other systems I have seen. If one does mark up joins in MicroStrategy, how is it done?

    Read the article
