Search Results

Search found 666 results on 27 pages for 'profiler'.

  • Connection Timeout on Linked Server

    - by Wade73
    I have created a linked server from one SQL Server 2005 instance to another. When I run an update query through SQL Server Management Studio (SSMS), it runs in under a second. If I run the same query through an ASP web page, it times out. I ran SQL Profiler as well as the Activity Monitor in SSMS to see if I could spot anything, and all I found was that a lock was being created (wait type LCK_M_U), but I can't find what is holding it. Any help would be appreciated. Wade
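
    A hedged diagnostic sketch (not part of the original question): while the ASP update is hanging, the SQL Server 2005 DMVs can show which session owns the conflicting lock. Something like this, run on the target server:

        -- Sketch: list blocked requests, who blocks them, and the statement
        -- the blocked session is running (SQL Server 2005 and later).
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,
               t.text AS current_statement
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0
           OR r.wait_type LIKE N'LCK%';

    The blocking_session_id column points at the offender; feeding it back into sys.dm_exec_requests (or sys.dm_exec_connections) shows what that session is doing.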

    Read the article

  • Entity Framework: Attached Entities not Saving

    - by blog
    Hello: I can't figure out why calling SaveChanges() on the following code results in no changes to the objects I attached: // delete existing user roles before re-attaching if (accountUser.AccountRoles.Count > 0) { foreach (AccountRole role in accountUser.AccountRoles.ToList()) { accountUser.AccountRoles.Remove(role); } } // get roles to add List<int> roleIDs = new List<int>(); foreach (UserRole r in this.AccountRoles) { roleIDs.Add(r.RoleID); } var roleEntities = from roles in db.AccountRoles where roleIDs.Contains(roles.id) select roles; accountUser.AccountRoles.Attach(roleEntities); db.SaveChanges(); In the debugger, I see that the correct roleEntities are being loaded, and that they are valid objects. However, if I use SQL Profiler I see no UPDATE or INSERT queries coming in, and as a result none of my attached objects are being saved.

    Read the article

  • Deadlock Problem because of an Update Lock

    - by Randy Minder
    We have a deadlock issue we're trying to track down. I have a deadlock graph (xdl) generated from Profiler. It shows the losing SQL statement as a simple Select statement, not an Update, Delete or Insert statement. The graph shows the losing Select statement as requesting a Shared lock on a resource **but also owning an Update lock on a resource**. This is what is baffling me. Why would a Select statement that is not part of an Insert, Update or Delete ever hold an Update lock on a resource? I should add that the Update lock it owns is on the table being selected against by the losing Select statement.
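
    For context, a plain Select can legitimately own an Update lock: the most common case is the UPDLOCK hint inside a read-then-write transaction. A minimal sketch (the table name is illustrative, not from the deadlock graph) that leaves a Select owning U locks:

        -- Hypothetical repro: UPDLOCK makes this reader take Update (U)
        -- locks instead of Shared (S) locks, held until the transaction ends.
        BEGIN TRANSACTION;

        SELECT OrderID
        FROM dbo.Orders WITH (UPDLOCK)
        WHERE CustomerID = 42;

        -- While the transaction is open, this shows U-mode locks owned by
        -- a session whose only statement so far was a SELECT.
        SELECT resource_type, request_mode, request_status
        FROM sys.dm_tran_locks
        WHERE request_session_id = @@SPID;

        COMMIT;

    If no hint appears in the statement text, check what else ran earlier in the same transaction - locks are owned by the transaction, not just by the individual statement the graph happens to show.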

    Read the article

  • What is so great about Visual Studio?

    - by Paperflyer
    In my admittedly somewhat short time as a programmer, I have used many development environments on many platforms, most notably Eclipse/Linux, XCode/OSX, CLI/editor/Linux, VisualDSP/Blackfin/Windows and MSVC/Windows (I used each one for several months). There are neat features in pretty much all of them, but somehow I just can't find any in MSVC. Then again, so many people really seem to like it, so I am probably missing something here. So please tell me: what is so great about Visual Studio? Things I like: refactoring tools in Eclipse; build error highlighting in XCode and Eclipse; Edit All in Scope in XCode; the profiler in XCode; the flexibility of Eclipse and CLI/editor; data plotting in VisualDSP. Things I don't like: build error display in MSVC (errors are not highlighted in the code). Honestly, this is not meant to be a rant. Of course I am a Mac-head and biased as hell, but I have to use MSVC on the job, so I really want to like it.

    Read the article

  • Common optimization rules

    - by mafutrct
    This is a dangerous question, so let me try to phrase it correctly. Premature optimization is the root of all evil, but if you know you need it, there is a basic set of rules that should be considered. This set is what I'm wondering about. For instance, imagine you have a list of a few thousand items. How do you look up an item with a specific, unique ID? Of course, you simply use a Dictionary to map the ID to the item. And if you know that there is a setting stored in a database that is required all the time, you simply cache it instead of issuing a database request a hundred times a second. I guess there are a few even more basic ideas. I am specifically not looking for "don't do it, for experts: don't do it yet" or "use a profiler" answers, but for really simple, general hints. If you feel this is an argumentative question, you probably misunderstood my intention.

    Read the article

  • exec sp_executesql error 'Incorrect syntax near 1' when using datetime parameter

    - by anne78
    I have an SSRS report which sends the following text to the database: EXEC ( 'DECLARE @TeamIds as TeamIdTableType ' + @Teams + ' EXEC rpt.DWTypeOfSicknessByCategoryReport @TeamIds , ' + @DateFrom + ', ' + @DateTo + ', ' + @InputRankGroups + ', ' + @SubCategories ) When I view this in Profiler, it shows up as: exec sp_executesql N'EXEC ( ''DECLARE @TeamIds as TeamIdTableType '' + @Teams + '' EXEC rpt.DWTypeOfSicknessByCategoryAndEmployeeDetailsReport @TeamIds, '' + @DateFrom + '', '' + @DateTo + '', '' + @InputRankGroups + '', '' + @SubCategories )',N'@Teams nvarchar(34),@DateFrom datetime,@DateTo datetime,@InputRankGroups varchar(1),@SubCategories bit',@Teams=N'INSERT INTO @TeamIds VALUES (5); ',@DateFrom='2010-02-01 00:00:00',@DateTo='2010-04-30 00:00:00',@InputRankGroups=N'1',@SubCategories=1 When this SQL runs, it errors on the dates. I have tried changing the format of the date, but it does not help. If I remove the dates it works fine. Any help would be appreciated.
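
    A hedged note on the likely cause: inside the EXEC ( ... ) string, the datetime parameters are concatenated into the batch text, so the date values end up embedded as unquoted tokens (or fail to convert cleanly) and the batch no longer parses. The usual fix is to stop splicing the dates into the string and let sp_executesql bind them as typed parameters. A sketch reusing the names from the post:

        -- Sketch: pass the dates as real datetime parameters instead of
        -- concatenating them into the dynamic SQL string.
        DECLARE @sql nvarchar(max);
        SET @sql = N'DECLARE @TeamIds AS TeamIdTableType;
                     INSERT INTO @TeamIds VALUES (5);
                     EXEC rpt.DWTypeOfSicknessByCategoryReport
                          @TeamIds, @DateFrom, @DateTo, @InputRankGroups, @SubCategories;';

        EXEC sp_executesql
             @sql,
             N'@DateFrom datetime, @DateTo datetime,
               @InputRankGroups varchar(1), @SubCategories bit',
             @DateFrom = '20100201',
             @DateTo = '20100430',
             @InputRankGroups = '1',
             @SubCategories = 1;

    The unseparated date literals ('20100201') are in the one format SQL Server parses unambiguously regardless of language settings.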

    Read the article

  • Getting a query to index seek (rather than scan)

    - by PaulB
    Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it doing 71 reads with a duration of 0. select top 1 id from table where name = '0010000546163' order by id desc Contrast that with the following, which uses an index scan with 8500 reads and a duration of about a second. declare @p varchar(20) select @p = '0010000546163' select top 1 id from table where name = @p order by id desc Why is the execution plan different? Is there a way to change the second method to seek? Thanks. EDIT: The table looks like CREATE TABLE [table] ( [Id] [int] IDENTITY (1, 1) NOT NULL , [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL). Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on id/name. There are other columns, left out for brevity.
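
    A hedged explanation: with a literal, the optimizer can use the statistics histogram for that exact value; with a local variable, the value is unknown at compile time, so the plan is costed from average density and a scan can look cheaper. Two common workarounds, sketched against the table from the post (the index name is illustrative):

        -- Option 1: make the value visible to the optimizer by passing it
        -- as a real parameter through sp_executesql.
        EXEC sp_executesql
             N'select top 1 id from [table] where name = @p order by id desc',
             N'@p varchar(20)',
             @p = '0010000546163';

        -- Option 2: force the access path with an index hint (replace
        -- IX_table_Name with the actual name of the index on Name).
        declare @p varchar(20)
        select @p = '0010000546163'
        select top 1 id
        from [table] with (index (IX_table_Name))
        where name = @p
        order by id desc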

    Read the article

  • Application Role and access to a second database

    - by lszk
    I have written a script that creates an audit trail for my database in a second database. So far I had no problems during tests on my dev machine from SQL Server Management Studio. Problems started to occur when I first tried to test my triggers from my application by modifying data in it. Using Profiler I found out that my audit trail db is not visible in sys.databases, so here lies the problem. The application uses an Application Role, which, as I found on MSDN, is why I can't get access to other databases on the server. I'm not a DBA and have no experience with properly setting up the security stuff, so please guide me: how can I configure the guest account (as MSDN suggests) to get access to this db? I need a record for this database in sys.databases, and I need to be able to insert data into all of its tables. I do not need select, update or delete.
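
    For reference, the MSDN guidance mentioned comes down to this: an application role can only reach another database as that database's guest user, so guest must be enabled there and granted just the rights required. A hedged sketch (the database name is a placeholder):

        -- Sketch: let an application role write audit rows to a second db.
        USE AuditDB;               -- placeholder name for the audit database
        GRANT CONNECT TO guest;    -- enables the guest user (SQL Server 2005)

        -- Insert-only access, matching the requirement in the post;
        -- granting at schema level covers every table in dbo at once.
        GRANT INSERT ON SCHEMA::dbo TO guest;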

    Read the article

  • How to determine page generation time with Struts 2?

    - by Samuel_xL
    I'm using Struts 2 and I'd like to determine the page generation time without an external profiler. I can easily profile the action's execute() method, but I don't know how to include the time spent before (in dispatchers, interceptors...) and after (the time taken by the servlet corresponding to the view, the "jsp time"). Is there a simple way to do this? And if there isn't, how could I at least profile the "jsp time" (maybe a tag I'm not aware of)? I think it would be accurate enough to just take into account action time + jsp time. Thanks.

    Read the article

  • .NET SQL Client Provider

    - by sameer
    I have come across a situation where a stored procedure executed in Query Analyzer runs in less than a second, but when the same stored procedure is executed using the .NET SQL Client Provider it takes 61 seconds. To troubleshoot this we went to SQL Profiler, where we found the request reaching SQL Server in less than a second but execution completing after 60 seconds. Can anybody suggest why we see such a deviation? The query is as simple as given below: SELECT distinct p1.productID, p1.description FROM Details V INNER JOIN Product P ON V.ProductID=P.ProductID INNER JOIN product p1 on p1.productID=p.parentID WHERE V.MarketID='1159' AND V.FinancialYear='1213' AND V.LEPeriodID= '75' AND p1.parentID=100024 AND p1.statusID = 1 ORDER BY description
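
    A hedged pointer: the classic cause of "fast in Query Analyzer, slow from SqlClient" is that the two connections compile under different SET options (ARITHABORT in particular: Query Analyzer turns it on, SqlClient leaves it off), so each gets its own cached plan, and the application's copy may have been compiled for unrepresentative parameter values. A sketch for checking (the procedure name is a placeholder):

        -- Sketch: reproduce the application's environment inside Query
        -- Analyzer; if the query suddenly takes ~60 seconds, the problem is
        -- a per-SET-options cached plan, not the provider itself.
        SET ARITHABORT OFF;
        EXEC dbo.TheSlowProcedure;

        -- Evict the suspect plan so it recompiles on the next call.
        EXEC sp_recompile 'dbo.TheSlowProcedure';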

    Read the article

  • Debug Mode for CodeIgniter?

    - by user350814
    Does CodeIgniter provide a debug mode, for example when accessing an invalid URL? Ruby on Rails shows debugging messages when an incorrect URL has been given and the controller is unable to resolve it using the routes map. How would I enable such debugging messages in CodeIgniter? The profiler ... $this->output->enable_profiler(TRUE); ... only affects single classes, not all routes. So debugging without an actual debug mode is a little... difficult. :-)

    Read the article

  • Tools and Utilities for the .NET Developer

    - by mbcrump
    Tweet this list! Add a link to my site to your bookmarks to quickly find this page again! Add me to twitter! This is a list of the tools/utilities that I use to do my job/hobby. I wanted this page to load fast and contain information that only you care about. If I have missed a tool that you like, feel free to contact me and I will add it to the list. Also, this list took a lot of time to complete. Please do not steal my work, if you like the page then please link back to my site. I will keep the links/information updated as new tools/utilities are created.  Windows/.NET Development – This is a list of tools that any Windows/.NET developer should have in his bag. I have used at some point in my career everything listed on this page and below is the tools worth keeping. Name Description License AnkhSVN Subversion support for Visual Studio. It also works with VS2010. Free Aurora XAML Designer One of the best XAML creation tools available. Has a ton of built in templates that you can copy/paste into VS2010. COST/Trial BeyondCompare Beyond Compare 3 is the ideal tool for comparing files and folders on your Windows or Linux system. Visualize changes in your code and carefully reconcile them. COST/Trial BuildIT Automated Task Tool Its main purpose is to automate tasks, whether it is the final packaging of a product, an automated daily build, maybe sending out a mailing list, even backing-up files. Free C Sharper for VB Convert VB to C#. COST CLRProfiler Analyze and improve the behavior of your .NET app. Free CodeRush Direct competitor to ReSharper, contains similar feature. This is one of those decide for yourself. COST/Trial Disk2VHD Disk2vhd is a utility that creates VHD (Virtual Hard Disk - Microsoft's Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). Free Eazfuscator.NET Is a free obfuscator for .NET. The main purpose is to protect intellectual property of software. Free EQATEC Profiler Make your .NET app run faster. No source code changes are needed. Just point the profiler to your app, run the modified code, and get a visual report. COST Expression Studio 3/4 Comes with Web, Blend, Sketch Flow and more. You can create websites, produce beautiful XAML and more. COST/Trial Expresso The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. Free Fiddler Fiddler is a web debugging proxy which logs all HTTP(s) traffic between your computer and the internet. Free Firebug Powerful Web development tool. If you build websites, you will need this. Free FxCop FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. Free GAC Browser and Remover Easy way to remove multiple assemblies from the GAC. Assemblies registered by programs like Install Shield can also be removed. Free GAC Util The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache. Free HelpScribble Help Scribble is a full-featured, easy-to-use help authoring tool for creating help files from start to finish. 
You can create Win Help (.hlp) files, HTML Help (.chm) files, a printed manual and online documentation (on a web site) all from the same Help Scribble project. COST/Trial IETester IETester is a free Web Browser that allows you to have the rendering and JavaScript engines of IE9 preview, IE8, IE7 IE 6 and IE5.5 on Windows 7, Vista and XP, as well as the installed IE in the same process. Free iTextSharp iText# (iTextSharp) is a port of the iText open source java library for PDF generation written entirely in C# for the .NET platform. Use the iText mailing list to get support. Free Kaxaml Kaxaml is a lightweight XAML editor that gives you a "split view" so you can see both your XAML and your rendered content. Free LINQPad LinqPad lets you interactively query databases in a LINQ. Free Linquer Many programmers are familiar with SQL and will need a help in the transition to LINQ. Sometimes there are complicated queries to be written and Linqer can help by converting SQL scripts to LINQ. COST/Trial LiquidXML Liquid XML Studio 2010 is an advanced XML developers toolkit and IDE, containing all the tools needed for designing and developing XML schema and applications. COST/Trial Log4Net log4net is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent log4j framework to the .NET runtime. We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime. For more information on log4net see the features document. Free Microsoft Web Platform Installer The Microsoft Web Platform Installer 2.0 (Web PI) is a free tool that makes getting the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Web Developer easy. Free Mono Development Don't have Visual Studio - no problem! This is an open Source C# and .NET development environment for Linux, Windows, and Mac OS X Free Net Mass Downloader While it’s great that Microsoft has released the .NET Reference Source Code, you can only get it one file at a time while you’re debugging. If you’d like to batch download it for reading or to populate the cache, you’d have to write a program that instantiated and called each method in the Framework Class Library. Fortunately, .NET Mass Downloader comes to the rescue! Free nMap Nmap ("Network Mapper") is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Free NoScript (Firefox add-in) The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other Mozilla-based browsers: this free, open source add-on allows JavaScript, Java and Flash and other plug-ins to be executed only by trusted web sites of your choice (e.g. your online bank), and provides the most powerful Anti-XSS protection available in a browser. Free NotePad 2 Notepad2, a fast and light-weight Notepad-like text editor with syntax highlighting. This program can be run out of the box without installation, and does not touch your system's registry. Free PageSpy PageSpy is a small add-on for Internet Explorer that allows you to select any element within a webpage, select an option in the context menu, and view detailed information about both the coding behind the page and the element you selected. 
Free Phrase Express PhraseExpress manages your frequently used text snippets in customizable categories for quick access. Free PowerGui PowerGui is a free community for PowerGUI, a graphical user interface and script editor for Microsoft Windows PowerShell! Free Powershell Comes with Win7, but you can automate tasks by using the .NET Framework. Great for network admins. Free Process Explorer Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded. Also, included in the SysInterals Suite. Free Process Monitor Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. Free Reflector Explore and analyze compiled .NET assemblies, viewing them in C#, Visual Basic, and IL. This is an Essential for any .NET developer. Free Regular Expression Library Stuck on a Regular Expression but you think someone has already figured it out? Chances are they have. Free Regulator Regulator makes Regular Expressions easy. This is a must have for a .NET Developer. Free RenameMaestro RenameMaestro is probably the easiest batch file renamer you'll find to instantly rename multiple files COST ReSharper The one program that I cannot live without. Supports VS2010 and offers simple refactoring, code analysis/assistance/cleanup/templates. One of the few applications that is worth the $$$. COST/Trial ScrewTurn Wiki ScrewTurn Wiki allows you to create, manage and share wikis. A wiki is a collaboratively-edited, information-centered website: the most famous is Wikipedia. Free SharpDevelop What is #develop? SharpDevelop is a free IDE for C# and VB.NET projects on Microsoft's .NET platform. Free Show Me The Template Show Me The Template is a tool for exploring the templates, be their data, control or items panel, that comes with the controls built into WPF for all 6 themes. Free SnippetCompiler Compiles code snippets without opening Visual Studio. It does not support .NET 4. Free SQL Prompt SQL Prompt is a plug-in that increases how fast you can work with SQL. It provides code-completion for SQL server, reformatting, db schema information and snippets. Awesome! COST/Trial SQLinForm SQLinForm is an automatic SQL code formatter for all major databases  including ORACLE, SQL Server, DB2, UDB, Sybase, Informix, PostgreSQL, Teradata, MySQL, MS Access etc. with over 70 formatting options. COST/OnlineFree SSMS Tools SSMS Tools Pack is an add-in for Microsoft SQL Server Management Studio (SSMS) including SSMS Express. Free Storm STORM is a free and open source tool for testing web services. Free Telerik Code Convertor Convert code from VB to C Sharp and Vice Versa. Free TurtoiseSVN TortoiseSVN is a really easy to use Revision control / version control / source control software for Windows.Since it's not an integration for a specific IDE you can use it with whatever development tools you like. Free UltraEdit UltraEdit is the ideal text, HTML and hex editor, and an advanced PHP, Perl, Java and JavaScript editor for programmers. UltraEdit is also an XML editor including a tree-style XML parser. An industry-award winner, UltraEdit supports disk-based 64-bit file handling (standard) on 32-bit Windows platforms (Windows 2000 and later). COST/Trial Virtual Windows XP Comes with some W7 version and allows you to run WinXP along side W7. Free VirtualBox Virtualization by Sun Microsystems. You can virtualize Windows, Linux and more. 
Free Visual Log Parser SQL queries against a variety of log files and other system data sources. Free WinMerge WinMerge is an Open Source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. Free Wireshark Wireshark is one of the best network protocol analyzer's for Unix and windows. This has been used several times to get me out of a bind. Free XML Notepad 07 Old, but still one of my favorite XML viewers. Free Productivity Tools – This is the list of tools that I use to save time or quickly navigate around Windows. Name Description License AutoHotKey Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder. Free CLCL CLCL is clipboard caching utility. Free Ditto Ditto is an extension to the standard windows clipboard. It saves each item placed on the clipboard allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard, text, images, html, custom formats, ..... Free Evernote Remember everything from notes to photos. It will synch between computers/devices. Free InfoRapid Inforapid is a search tool that will display all you search results in a html like browser. If you click on a word in that browser, it will start another search to the word you clicked on. Handy if you want to trackback something to it's true origin. The word you looked for will be highlighted in red. Clicking on the red word will open the containing file in a text based viewer. Clicking on any word in the opened document will start another search on that word. Free KatMouse The prime purpose of the KatMouse utility is to enhance the functionality of mice with a scroll wheel, offering 'universal' scrolling: moving the mouse wheel will scroll the window directly beneath the mouse cursor (not the one with the keyboard focus, which is default on Windows OSes). This is a major increase in the usefulness of the mouse wheel. Free ScreenR Instant Screencast with nothing to download. Works with Mac or PC and free. Free Start++ Start++ is an enhancement for the Start Menu in Windows Vista. It also extends the Run box and the command-line with customizable commands.  For example, typing "w Windows Vista" will take you to the Windows Vista page on Wikipedia! Free Synergy Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk since each system uses its own monitor(s). Free Texter Texter lets you define text substitution hot strings that, when triggered, will replace hotstring with a larger piece of text. By entering your most commonly-typed snippets of text into Texter, you can save countless keystrokes in the course of the day. Free Total Commander File handling, FTP, Archive handling and much more. Even works with Win3.11. COST/Trial Available Wizmouse WizMouse is a mouse enhancement utility that makes your mouse wheel work on the window currently under the mouse pointer, instead of the currently focused window. This means you no longer have to click on a window before being able to scroll it with the mouse wheel. This is a far more comfortable and practical way to make use of the mouse wheel. Free Xmarks Bookmark sync and search between computers. 
Free General Utilities – This is a list for power user users or anyone that wants more out of Windows. I usually install a majority of these whenever I get a new system. Name Description License µTorrent µTorrent is a lightweight and efficient BitTorrent client for Windows or Mac with many features. I use this for downloading LEGAL media. Free Audacity Audacity® is free, open source software for recording and editing sounds. It is available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems. Learn more about Audacity... Also check our Wiki and Forum for more information. Free AVast Free FREE Antivirus. Free CD Burner XP Pro CDBurnerXP is a free application to burn CDs and DVDs, including Blu-Ray and HD-DVDs. It also includes the feature to burn and create ISOs, as well as a multilanguage interface. Free CDEX You can extract digital audio CDs into mp3/wav. Free Combofix Combofix is a freeware (a legitimate spyware remover created by sUBs), Combofix was designed to scan a computer for known malware, spyware (SurfSideKick, QooLogic, and Look2Me as well as any other combination of the mentioned spyware applications) and remove them. Free Cpu-Z Provides information about some of the main devices of your system. Free Cropper Cropper is a screen capture utility written in C#. It makes it fast and easy to grab parts of your screen. Use it to easily crop out sections of vector graphic files such as Fireworks without having to flatten the files or open in a new editor. Use it to easily capture parts of a web site, including text and images. It's also great for writing documentation that needs images of your application or web site. Free DropBox Drag and Drop files to sync between computers. Free DVD-Fab Converts/Copies DVDs/Blu-Ray to different formats. (like mp4, mkv, avi) COST/Trial Available FastStone Capture FastStone Capture is a powerful, lightweight, yet full-featured screen capture tool that allows you to easily capture and annotate anything on the screen including windows, objects, menus, full screen, rectangular/freehand regions and even scrolling windows/web pages. Free ffdshow FFDShow is a DirectShow decoding filter for decompressing DivX, XviD, H.264, FLV1, WMV, MPEG-1 and MPEG-2, MPEG-4 movies. Free Filezilla FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface. You can also download a server version. Free FireFox Web Browser, do you really need an explanation? Free FireGestures A customizable mouse gestures extension which enables you to execute various commands and user scripts with five types of gestures. Free FoxIt Reader Light weight PDF viewer. You should install this with the advanced setting or it will install a toolbar and setup some shortcuts. Free gSynchIt Synch Gmail and Outlook. Even supports Outlook 2010 32/64 bit COST/Trial Available Hulu Desktop At home or in a hotel, this has replaced my cable/satellite subscription. Free ImgBurn ImgBurn is a lightweight CD / DVD / HD DVD / Blu-ray burning application that everyone should have in their toolkit! Free Infrarecorder InfraRecorder is a free CD/DVD burning solution for Microsoft Windows. It offers a wide range of powerful features; all through an easy to use application interface and Windows Explorer integration. Free KeePass KeePass is a free open source password manager, which helps you to manage your passwords in a secure way. 
Free LastPass Another password management, synchronize between browsers, automatic form filling and more. Free Live Essentials One download and lots of programs including Mail, Live Writer, Movie Maker and more! Free Monitores MonitorES is a small windows utility that helps you to turnoff monitor display when you lock down your machine.Also when you lock your machine, it will pause all your running media programs & set your IM status message to "Away" / Custom message(via options) and restore it back to normal when you back. Free mRemote mRemote is a full-featured, multi-tab remote connections manager. Free Open Office OpenOffice.org 3 is the leading open-source office software suite for word processing, spreadsheets, presentations, graphics, databases and more. It is available in many languages and works on all common computers. It stores all your data in an international open standard format and can also read and write files from other common office software packages. It can be downloaded and used completely free of charge for any purpose. Free Paint.NET Simple, intuitive, and innovative user interface for editing photos. Free Picasa Picasa is free photo editing software from Google that makes your pictures look great. Free Pidgin Pidgin is an easy to use and free chat client used by millions. Connect to AIM, MSN, Yahoo, and more chat networks all at once. Free PING PING is a live Linux ISO, based on the excellent Linux From Scratch (LFS) documentation. It can be burnt on a CD and booted, or integrated into a PXE / RIS environment. Free Putty PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. Free Revo Uninstaller Revo Uninstaller Pro helps you to uninstall software and remove unwanted programs installed on your computer easily! Even if you have problems uninstalling and cannot uninstall them from "Windows Add or Remove Programs" control panel applet.Revo Uninstaller is a much faster and more powerful alternative to "Windows Add or Remove Programs" applet! It has very powerful features to uninstall and remove programs. Free Security Essentials Microsoft Security Essentials is a new, free consumer anti-malware solution for your computer. Free SetupVirtualCloneDrive Virtual CloneDrive works and behaves just like a physical CD/DVD drive, however it exists only virtually. Point to the .ISO file and it appears in Windows Explorer as a Drive. Free Shark 007 Codec Pack Play just about any file format with this download. Also includes my W7 Media Playlist Generator. Free Snagit 9 Screen Capture on steroids. Add arrows, captions, etc to any screenshot. COST/Trial Available SysinternalsSuite Go ahead and download the entire sys internals suite. I have mentioned multiple programs in this suite already. Free TeraCopy TeraCopy is a compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features. Free for Home TrueCrypt Free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X, and Linux Free TweetDeck Fully featured Twitter client. Free UltraVNC UltraVNC is a powerful, easy to use and free software that can display the screen of another computer (via internet or network) on your own screen. The program allows you to use your mouse and keyboard to control the other PC remotely. It means that you can work on a remote computer, as if you were sitting in front of it, right from your current location. Free Unlocker Unlocks locked files. Pretty simple right? 
Free VLC Media Player VLC media player is a highly portable multimedia player and multimedia framework capable of reading most audio and video formats Free Windows 7 Media Playlist This program is special to my heart because I wrote it. It has been mentioned on podcast and various websites. It allows you to quickly create wvx video playlist for Windows Media Center. Free WinRAR WinRAR is a powerful archive manager. It can backup your data and reduce the size of email attachments, decompress RAR, ZIP and other files downloaded from Internet and create new archives in RAR and ZIP file format. COST/Trial Available Blogging – I use the following for my blog. Name Description License Insert Code for Windows Live Writer Insert Code for Windows Live Writer will format a snippet of text in a number of programming languages such as C#, HTML, MSH, JavaScript, Visual Basic and TSQL. Free LiveWriter Included in Live Essentials, but the ultimate in Windows Blogging Free PasteAsVSCode Plug-in for Windows Live Writer that pastes clipboard content as Visual Studio code. Preserves syntax highlighting, indentation and background color. Converts RTF, outputted by Visual Studio, into HTML. Free Desktop Management – The list below represent the best in Windows Desktop Management. Name Description License 7 Stacks Allows users to have "stacks" of icons in their taskbar. Free Executor Executor is a multi purpose launcher and a more advanced and customizable version of windows run. Free Fences Fences is a program that helps you organize your desktop and can hide your icons when they are not in use. Free RocketDock Rocket Dock is a smoothly animated, alpha blended application launcher. It provides a nice clean interface to drop shortcuts on for easy access and organization. With each item completely customizable there is no end to what you can add and launch from the dock. Free WindowsTab Tabbing is an essential feature of modern web browsers. Window Tabs brings the productivity of tabbed window management to all of your desktop applications. Free

    Read the article

  • NetBeans doesn't start (Java fatal error) unless run with sudo

    - by elect
    Fresh 13.10 64b Openjdk 6 is there, I just installed Netbeans 7.01 from the repo, but it doesn't work, I open then a console elect@elect-desktop:~$ netbeans # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007faebdf79325, pid=5251, tid=140388628424448 # # JRE version: 6.0_27-b27 # Java VM: OpenJDK 64-Bit Server VM (20.0-b12 mixed mode linux-amd64 compressed oops) # Derivative: IcedTea6 1.12.6 # Distribution: Ubuntu Saucy Salamander (development branch), package 6b27-1.12.6-1ubuntu2 # Problematic frame: # C [libgobject-2.0.so.0+0x14325] g_cclosure_marshal_BOOLEAN__BOXED_BOXEDv+0x985 # # An error report file with more information is saved as: # /home/elect/hs_err_pid5251.log [thread 140386948781824 also had an error] # # If you would like to submit a bug report, please include # instructions how to reproduce the bug and visit: # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/ # /usr/share/netbeans/7.0.1/bin/../platform/lib/nbexec: line 548: 5251 Aborted (core dumped) "/usr/lib/jvm/java-6-openjdk-amd64/bin/java" -Djdk.home="/usr/lib/jvm/java-6-openjdk-amd64" -Djava.library.path=/usr/lib/jni -classpath "/usr/share/netbeans/7.0.1/platform/lib/boot.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-modules.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-util.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-util-lookup.jar:/usr/lib/jvm/java-6-openjdk-amd64/lib/dt.jar:/usr/lib/jvm/java-6-openjdk-amd64/lib/tools.jar" -Dnetbeans.system_http_proxy="DIRECT" -Dnetbeans.system_http_non_proxy_hosts="" -Dnetbeans.dirs="/usr/share/netbeans/7.0.1/nb:/usr/share/netbeans/7.0.1/bin/../ergonomics:/usr/share/netbeans/7.0.1/ide:/usr/share/netbeans/7.0.1/java:/usr/share/netbeans/7.0.1/bin/../xml:/usr/share/netbeans/7.0.1/apisupport:/usr/share/netbeans/7.0.1/bin/../webcommon:/usr/share/netbeans/7.0.1/bin/../websvccommon:/usr/share/netbeans/7.0.1/bin/../enterprise:/usr/share/netbeans/7.0.1/bin/../mobility:/usr/share/netbeans/7.0.1/bin/../profiler:/usr/share/netbeans/7.0.1/bin/../ruby:/usr/share/netbeans/7.0.1/bin/../python:/usr/share/netbeans/7.0.1/bin/../php:/usr/share/netbeans/7.0.1/bin/../visualweb:/usr/share/netbeans/7.0.1/bin/../soa:/usr/share/netbeans/7.0.1/bin/../identity:/usr/share/netbeans/7.0.1/bin/../uml:/usr/share/netbeans/7.0.1/harness:/usr/share/netbeans/7.0.1/bin/../cnd:/usr/share/netbeans/7.0.1/bin/../dlight:/usr/share/netbeans/7.0.1/bin/../groovy:/usr/share/netbeans/7.0.1/bin/../extra:/usr/share/netbeans/7.0.1/bin/../javafx:/usr/share/netbeans/7.0.1/bin/../javacard:" -Dnetbeans.home="/usr/share/netbeans/7.0.1/platform" '-Dnetbeans.importclass=org.netbeans.upgrade.AutoUpgrade' '-Dnetbeans.accept_license_class=org.netbeans.license.AcceptLicense' '-XX:MaxPermSize=384m' '-Xmx768m' '-client' '-Xss2m' '-Xms32m' '-XX:PermSize=32m' '-Dapple.laf.useScreenMenuBar=true' '-Dapple.awt.graphics.UseQuartz=true' '-Dsun.java2d.noddraw=true' '-Dsun.java2d.pmoffscreen=false' -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/home/elect/.netbeans/7.0/var/log/heapdump.hprof" org.netbeans.Main --userdir "/home/elect/.netbeans/7.0" "--branding" "nb" 0<&0 Looking around, the second answer, here Vigintas Labakojis, points out something regarding permission, I just try sudo netbeans, it works.. Then I look for the ~/.cache/netbeans/ I dont have, I have instead ~/.netbeans/ Then I run his commands on those folder, it doesn't work.. It must be something else, do you have any idea? In any case, my log /home/elect/hs_err_pid5251.log is here

    Read the article

  • The .NET Rocks! Visual Studio 2010 Road Trip

    - by Laila
    Carl Franklin and Richard Campbell, the two .NET Rocks radio show hosts, have decided to set off on a tour of 15 cities in the US, between April 19th and May 7th, in their DotNetMobile (a 30 foot RV). What for, you ask? Well, to drive around the US, meet up with .NET developers, and show off the latest and greatest in Visual Studio 2010 and .NET 4.0! Each evening, they stop in a city and host a three-hour event in front of a crowd of 100 to 300 developers, where Carl shows off media features in Silverlight 4 and their road trip tracking application, whilst Richard demos the web performance testing features of VS2010 using his portable server rig. But before they take to the stage, they have a special guest brought in - a rock star from the Visual Studio world - whom they interview for an hour as a .NET Rocks episode. So far, they've had - amongst others - Phil Haack, a Program Manager with the ASP.NET team working on ASP.NET MVC; Dan Fernandez, an Evangelism Manager in the Developer and Platform Evangelism team at Microsoft; and Beth Massi, Senior Program Manager on the Visual Studio Community Team at Microsoft. I love the fact that the audience gets a chance to participate, ask questions and have a great laugh, as you can hear in the first episode! Along the way, the .NET Rocks guys are giving away great prizes (including .NET Reflector Pro and ANTS Memory Profiler licenses, and 40" LCD TVs!). Even more out of the ordinary, at each stop on the road trip, one lucky attendee (who entered the Ride Along competition) gets to jump in the RV with Carl and Richard and ride along with them to the next stop on the road trip. How cool is that! Richard told us: "Our first winner in Mountain View was Eric Ziko. I was looking for him to announce that he had won, when he found us and gave us a bottle of scotch he had brought just to say 'thanks for the great show'. We all had a toast from the bottle the next night when he headed back home." Cheeky! There's still space at a few of these events, so if you want to attend, register now, because it's first come, first served. We're grateful to Richard and Carl for giving us the opportunity to sponsor this major .NET event! A unique .NET adventure worth following, for sure. Cheers, Laila

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

    Table 'TestCHAR':   logical reads 835, lob logical reads 0
    Table 'TestMAX':    logical reads 835, lob logical reads 0
    Table 'TestTEXT':   logical reads 686, lob logical reads 597
    Table 'TestMAXOOR': logical reads 686, lob logical reads 448

With the new index, all four queries use the same query plan.

Performance Summary:
- 0.3 seconds elapsed time
- 6MB memory grant
- 0MB tempdb usage
- 1MB sort set
- 835 logical reads (CHAR, MAX)
- 686 logical reads (TEXT, MAXOOR)
- 597 LOB logical reads (TEXT)
- 448 LOB logical reads (MAXOOR)
- No sort warning

I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else:

Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

    Read the article

  • Inside Red Gate - Divisions

    - by Simon Cooper
    When I joined Red Gate back in 2007, there were around 80 people in the company. Now, around 3 years later, it's grown to more than 200. It's a constant battle against Dunbar's number (the maximum number of people you can keep track of in a social group) to try and maintain that 'small company' feel that attracted me and so many others to apply in the first place. There are several strategies the company's developed over the years to try and mitigate the effects of Dunbar's number. One of the main ones has been divisionalisation. Divisions The first division, .NET, appeared around the same time that I started in 2007. This combined the development, sales, marketing and management of the .NET tools (then, ANTS Profiler v3) into a separate section of the office. The idea was to increase the cohesion and communication between the different people involved in the entire lifecycle of the tools: from initial product development, through to marketing, then to customer support, who would feed back to the development team. This was such a success that the other development teams were re-worked around this model in 2009. Nowadays there are 4 divisions - SQL Tools, DBA, .NET, and New Business. Along the way there have been various tweaks to the details - the sales teams have been merged into the divisions, marketing and product support have been (mostly) centralised - but the same basic model remains. So, how has this helped? As Red Gate has continued to grow over the years, divisionalisation has turned Red Gate from a monolithic software company into what one person described as a 'federation of small businesses'. Each division is free to structure itself as it sees fit; it's free to decide what to concentrate development work on, to organise its own newsletters and webinars, and to decide its own release schedule. Each division is its own small business. In terms of numbers, the size of each division varies from 20 people (.NET) to 52 (SQL Tools); well below Dunbar's number. From a developer's perspective, this means the organisational structure is very flat and wide - there are only 2 layers between myself and the CEOs (not that it matters much; everyone can go and have a chat to Neil or Simon, or anyone else in between, whenever they want. Provided you can catch them at their desk!). As Red Gate grows, and expands into new areas, new divisions will be created as needed, and old ones merged or disbanded, but the division structure will help to maintain that small-company feel that keeps Red Gate working as it does.

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100% with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL2005 so there's no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor but it was confusing because some sessions have been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10, so they just show my big overnight archiving and data cleaning stuff. But Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.
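    A quick way to double-check that kind of fix is to query the plan cache before and after. This isn't from the original post; it's a minimal sketch using the sys.dm_exec_query_stats DMV that has been available since SQL Server 2005:

        -- Top 10 statements by total CPU currently in the plan cache.
        -- Re-run after adding the indexes to confirm the culprits have
        -- dropped out of the top ranks.
        SELECT TOP (10)
            qs.total_worker_time / 1000 AS total_cpu_ms,
            qs.execution_count,
            qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                ((CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;

    Bear in mind this only reflects plans still in cache, so it can miss work done by plans that have since been evicted or recompiled.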

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace® (aka the historical debugger)
    - NEW: Architecture Tools
      - New project type: Modeling Project
      - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity, Component
      - Layer Diagram (with Team Build integration for layer validation)
      - Architecture Explorer
      - Dependency visualization (DGML)
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools (read-only model viewer)
    - Development tools
      - Code Analysis: new rules like SQL injection detection; rule sets
      - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
      - Code Metrics
    - Test tools
      - Code Coverage
      - NEW: Test Impact Analysis
      - NEW: Coded UI Test
    - Database tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger: mixed-mode debugging for 64-bit applications; export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast Forward Testing

    Visual Studio Team Foundation Server 2010
    - Work item tracking and project management
      - New MSF templates for Agile and CMMI (v5.0)
      - Hierarchical work items
      - Custom work item link types
      - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
      - Convert a work item query to an Excel report
      - MS Excel integration: support for work item hierarchies; formatting is preserved after doing a 'Refresh'
      - MS Project integration: hierarchy and successor/predecessor info is now synchronized
      - NEW: Test Case Management
    - Version control
      - Public workspaces
      - Branch & merge visualization
      - Tracking of changesets & work items
      - Gated check-in
    - Team Build
      - Build controllers and agents
      - Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project portal & reporting
      - Dashboards (on SharePoint Portal)
      - Burndown chart
      - TFS web parts (to show data from TFS)
    - Administration & operations
      - Topology enhancements: application-tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
      - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
      - Administration: consolidation of command-line tools, user rename support
      - Project collections: archive/restore individual project collections, move team project collections, server consolidation, team project collection split, team project collection isolation, server request cancellation
    - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010):
    - Debug » Start With Application Verifier
    - Object Test Bench
    - IntelliSense for C++/CLI
    - Debugging support for SQL 2000

    Read the article

  • Certificate Revocation checking affecting system performance [migrated]

    - by Colm Clarke
    I have a .NET 3.5 desktop application that had been showing periodic slowdowns in functionality whenever the test machine it was on was out of the office. I managed to replicate the error on a machine in the office without an internet connection, but it was only when I used ANTS performance profiler that I got a clearer picture of what was going on. In ANTS I saw a "Waiting for synchronization" taking up to 16 seconds that corresponded to the delay I could see in the application when NHibernate tried to load the System.Data.SqlServerCE.dll assembly. If I tried the action again immediately it would work with no delay, but if I left it for 5 minutes then it would be slow to load again the next time I tried it. From my research so far it appears to be because the SqlServerCE dll is signed, and so the system is trying to connect to get the certificate revocation lists and timing out. Disabling the "Automatically detect settings" setting in the Internet Options LAN settings makes the problem go away, as does disabling "Check for publisher's certificate revocation". But the admins where this application will be deployed are not going to be happy with the idea of disabling certificate checking on a per-machine or per-user basis, so I really need to get the application-level disabling of the CRL check working. There is a well documented bug in .NET 2.0 which describes this behaviour and offers a possible fix with a config file element:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <runtime>
        <generatePublisherEvidence enabled="false"/>
      </runtime>
    </configuration>

    This is NOT working for me, however, even though I am using .NET 3.5. The SqlServerCE dll is being loaded dynamically by NHibernate, and I wonder if the fact that it's dynamic could somehow be why the setting isn't working, but I don't know how I could check that. Can anyone offer suggestions as to why the config setting might not work? Or is there another way I could disable the check at the application level, perhaps a CAS policy setting that I can use to set an exception for the application when it's installed? Or is there something I can change in the application to up the trust level, or something like that? I have also tried the following, to no advantage:

    ServicePointManager.CheckCertificateRevocationList = false;

    http://rusanu.com/2009/07/24/fix-slow-application-startup-due-to-code-sign-validation/

    I have also tried those registry settings out and unfortunately they didn't help. The dlls that appear to be the cause of the hold-up are native SQL Server CE dlls, and looking at the stack traces in ProcMon, mscorwks.dll doesn't appear to be involved even though the checks on crypto and cert registry keys are being done under the .NET application. It's definitely still something to do with publisher certificate checking, because unticking "Check for publisher's certificate revocation" still works, but something odd is going on.

    Read the article

  • Auto Mocking using JustMock

    - by mehfuzh
    Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. That is auto mocking defined in one, more or less formal, sentence. In a more informal way, auto mocking containers are nothing but a tool to keep your tests synced so that you don't have to go back and change tests every time you add a new dependency to your SUT, or System Under Test. As of Q3 2012, JustMock ships with a built-in auto mocking container. This will help developers to have all the existing fun they are having with JustMock, plus they can now mock objects with dependencies in a more elegant way and without needing to do the homework of managing the graph. If you are not familiar with auto mocking then I won't go ahead and educate you here, but rather ask you to do so from the content the community has already made available, as this is way beyond the scope of this post. Moving forward, getting started with JustMock auto mocking is pretty simple. First, I have to reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course), which it uses internally, and next I will write my tests with the mocking container. It's that simple! In this post I will first mock the target with dependencies using the current method, and then do the same with the auto mocking container. In short, the sample is all about a report builder that goes through all the existing reports, sends email, and logs any exception in that process. The Reporter class depends on the following interfaces:

    - IReportBuilder: used to create and get the available reports
    - IReportSender: used to send the reports
    - ILogger: used to log any exceptions

    Now, if I just write the test without using an auto mocking container, I have to create a mock for each dependency myself. That is sort of grunt work, and if you have an ever-changing list of dependencies then it becomes really hard to keep the tests in sync. The typical example is your ASP.NET MVC controller, where the number of service dependencies grows along with the project. If the same test is written with the auto mocking container instead (see the sketch below), a few things stand out:

    - I didn't create a mock for each dependency
    - There is no extra step creating the Reporter class and passing in the dependencies
    - Since ILogger is not required for the purpose of this test, I can be completely ignorant of it.

    How cool is that? Auto mocking in JustMock is just released, and we also want to extend it even further using the profiler, which will let you resolve not just interfaces but concrete classes as well. But that of course starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of Telerik's official channels. Hope that helps.
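    The screenshots showing the Reporter class and the two test versions did not survive here, so below is a minimal reconstruction of the idea. The class shape and names (Report, IReportBuilder, and so on) are assumptions based on the description above; the container calls follow JustMock's MockingContainer<T> auto-mocking API (Arrange, Instance, Assert):

        using System;
        using System.Collections.Generic;
        using Telerik.JustMock;
        using Telerik.JustMock.AutoMock;
        using Telerik.JustMock.Helpers;

        public class Report { }

        public interface IReportBuilder { IList<Report> GetReports(); }
        public interface IReportSender { void Send(Report report); }
        public interface ILogger { void Error(Exception ex); }

        // The SUT: builds the reports, sends each one, logs any failure.
        public class Reporter
        {
            private readonly IReportBuilder builder;
            private readonly IReportSender sender;
            private readonly ILogger logger;

            public Reporter(IReportBuilder builder, IReportSender sender, ILogger logger)
            {
                this.builder = builder;
                this.sender = sender;
                this.logger = logger;
            }

            public void SendReports()
            {
                try
                {
                    foreach (var report in builder.GetReports())
                        sender.Send(report);
                }
                catch (Exception ex)
                {
                    logger.Error(ex);
                }
            }
        }

        // The auto-mocked test: no hand-made mocks, and ILogger never appears.
        [TestMethod]
        public void ShouldSendEveryReportTheBuilderReturns()
        {
            var container = new MockingContainer<Reporter>();

            container.Arrange<IReportBuilder>(b => b.GetReports())
                     .Returns(new List<Report> { new Report(), new Report() });
            container.Arrange<IReportSender>(s => s.Send(Arg.IsAny<Report>()))
                     .MustBeCalled();

            container.Instance.SendReports();

            container.Assert(); // verifies the MustBeCalled expectation
        }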

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background

    Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space.

    Challenge

    To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects: as said before, there might be clusters of objects together and vast bits of empty space, and to make it even worse there is no boundary to space.

    Existing technologies

    I've looked at existing techniques like BSP-trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive.

    What I've tried

    I decided that I need a data structure that is geared more toward rapid insertion/update than toward returning the fewest possible hits for a given query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. I then use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well, since there is no memory lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity).

    Problem

    OK, so this works, but it still isn't very fast. Now I could try to micro-optimize the Java code, but I'm not expecting too much of that since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like:

    - The number one priority is fast updating/reconstructing of the entire data structure.
    - It's less important to finely divide the objects into equally sized bins; we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster.
    - Memory is not really important (PC game).
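    For reference, here is a minimal Java sketch of the implicit-grid approach described above; the names and the cell size are illustrative, not from the original post:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Implicit uniform grid: each object derives its cell from its position,
        // and cells are created lazily in a HashMap so empty space costs nothing.
        class SpatialHash<T> {
            private final float cellSize;
            private final Map<Long, List<T>> cells = new HashMap<>(4096);

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            // Pack the two 32-bit cell coordinates into one long key so no
            // separate key object has to be allocated per lookup.
            private long key(float x, float y) {
                int cx = (int) Math.floor(x / cellSize);
                int cy = (int) Math.floor(y / cellSize);
                return ((long) cx << 32) | (cy & 0xFFFFFFFFL);
            }

            void insert(T obj, float x, float y) {
                cells.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(obj);
            }

            // Instead of discarding the map each frame, empty the per-cell
            // lists and reuse them on the next rebuild.
            void beginFrame() {
                for (List<T> cell : cells.values()) cell.clear();
            }

            List<T> query(float x, float y) {
                List<T> cell = cells.get(key(x, y));
                return cell != null ? cell : Collections.emptyList();
            }
        }

    Reusing the lists between frames trades a little memory (stale empty cells stick around) for far fewer allocations, which matches the stated priorities: fast reconstruction first, memory last.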

    Read the article

  • 3rd Party Tools: dbForge Studio for SQL Server

    - by Greg Low
    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at dbForge Studio for SQL Server from the team at DevArt. Installation was smooth. I did find it odd that it defaults to SQL authentication, not to Windows, but either works fine. I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users. I was keen to see what the database diagram tools are like. I found that the layouts generated were quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me.) One thing I did like was that it doesn't get confused when you have role-playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version. It was pleasing to see a printing option in the diagramming tool. I found the database comparison tool worked quite well. There are a few UI things that surprised me (like when you add a new connection to a database, it doesn't select the one you just added by default) but generally it just worked as advertised, and the code that was generated looked OK. I used the SQL query editor and found the code formatting to be quite fast, and while I didn't mind the style that it used by default, it wasn't obvious to me how to change the format. In Tools/Options I found things that talked about Profiles but I wasn't sure if that's what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, it doesn't put you straight into editing the profile. At first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options were available. When entering SQL code, the code completion options are quick, but even though they are quite complete, one of the real challenges is in making them useful: in one case I noticed that while the options shown were all correct, none of them was actually helpful. The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never, but it's great to see third party options like this one and like SQL Sentry's Plan Explorer having this functionality. I didn't do much with the reporting options as I use SQL Server Reporting Services. Overall, I was quite impressed with this product, and given they have a free trial available, I think it's worth your time taking a look at it.

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set.

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.

    To explore how this works I took the data from my previous post and loaded it into a SQL database. Then I converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact, {"BU_Code"}, BU, {"BU_Code"}, "NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn",
                           {"BU_Key", "StartDate", "EndDate"},
                           {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate]))),
            RemovedColumns = Table.RemoveColumns(BetweenFilter, {"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database doing "SELECT * …" from both tables. However a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2]
                or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate]
        and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server.
    Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query it can run against the source systems.

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all these details; in one example, a single statement consumed 61.4% of the total logical reads on the system while we were tracing it.

    I also wanted to see the distribution of reads across statements.  On average, the highest ranked statement consumed just under 50% of the reads on the system.  When I tune a system, I’m usually starting in one of two modes: this “piece” is slow, or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them.  You can make that individual piece faster but you may not affect the whole system.

    When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries.

    Here are some of the things I’ve learned querying this data:

    - The highest ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries, but those rarely ran often enough to put a large load on the system.

    If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  That distribution is very similar to the one for reads, only flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity.

    The take-away from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.
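    The ranking itself is easy to reproduce against a raw trace file.  This isn’t TraceTune’s actual code, just a minimal sketch with a placeholder file path; real tools also normalize the statement text (stripping parameter values) before grouping:

        -- Share of total logical reads per statement text in a trace file.
        -- Event classes 10 and 12 are RPC:Completed and SQL:BatchCompleted.
        SELECT TOP (10)
            statement_text = CONVERT(VARCHAR(200), TextData),
            total_reads    = SUM(Reads),
            pct_of_total   = 100.0 * SUM(Reads) / SUM(SUM(Reads)) OVER ()
        FROM sys.fn_trace_gettable(N'C:\traces\workload.trc', DEFAULT)
        WHERE EventClass IN (10, 12)
        GROUP BY CONVERT(VARCHAR(200), TextData)
        ORDER BY SUM(Reads) DESC;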

    Read the article

  • JavaOne Session Report - Java ME SDK 3.2

    - by Janice J. Heiss
    Oracle Product Manager for Java ME SDK, Sungmoon Cho, presented a session, "Developing Java Mobile and Embedded Applications with Java ME SDK 3.2,” wherein he covered the basic new features of the Java ME Platform SDK 3.2, a state-of-the-art toolbox for developing mobile and embedded applications.

    The session began with a summary of the four main components of Java ME SDK. A device emulator allows developers to quickly run and test applications before commercialization; it supports CLDC/MIDP, CLDC/IMP.NG and CDC/AGUI. A development environment assists writing, running, debugging and deploying, and enables on-device debugging. Samples provide developers with useful code and frameworks. IDE plugins for NetBeans and Eclipse equip developers with a CPU Profiler, Memory Monitor, Network Monitor, and Device Selector, so manual integration is no longer necessary.

    He then talked about the Java ME SDK's on-device tooling architecture:
    - Java ME SDK provides an architecture ideal for on-device debugging.
    - Device Manager plays the central role by managing different devices, whether it is the emulator, a device that Oracle provides or recommends, or a third-party device, as long as the device has a Java runtime that supports the designated protocol.
    - The Emulator provides an accurate emulation, since it uses the same code base used in Oracle's Java ME runtime.
    - The Universal Emulator Interface (UEI) makes it easy for IDEs to detect the platform.

    He then focused on the Java ME SDK release highlights, which include:
    - Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime. A full emulation for the runtime is provided.
    - Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). This is a new profile for embedded devices.
    - A new Custom Device Skin Creator.
    - An Eclipse plugin for CLDC/MIDP.
    - Profiling, network monitoring, and memory monitoring are now integrated with the NetBeans profiling tools.
    - Java ME SDK Update Center.

    Cho summarized the main features:
    - IDE integration (NetBeans and Eclipse): write, run, profile, and debug applications in your favorite IDE.
    - CPU Profiler: more quickly detect the hot spots where CPU time is being used; double-click a method to jump directly into the source code.
    - Memory Monitor: monitor objects and memory usage in real time.
    - Debugger on the emulator and device: run applications step by step, and inspect the variables to pinpoint the problem. The debugging can take place either on the emulator or the device.
    - Embedded application development: IMP-NG, Device Access, Logging, and AMS API support are now available.
    - On-device tooling: connect your device to your computer, and run and debug the application right on your device.
    - Custom Device Skin Creator: define your own device and test in an environment that is as close as possible to your target device.

    The informative session concluded with a demo that showed more concretely how to apply the new features in Java ME SDK 3.2.

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >