Search Results

Search found 691 results on 28 pages for 'ants profiler'.

Page 15 of 28

  • CodePlex Daily Summary for Friday, May 25, 2012

    CodePlex Daily Summary for Friday, May 25, 2012

    Popular Releases

    - WardsEngine: WardsEngine Alpha Release 1: A tiny taste of the library is here!
    - Facebook PowerShell Module: FacebookPSModule Alpha 0.6.2: 2012-05-10: Release 0.6.2 adds Test-FBConnection and New-FBConnection -Connection -PageId, fixes a few minor bugs, and prepares for PowerShell 3.0. Soon I will declare this module to be in Beta. 0.6.0: This major new release adds support for Facebook Pages. New Facebook policies will push most medium-or-larger-sized organizations from users/groups to pages, so this support is critical to automate Facebook under the current policies. New-FBConnection -PageId <pageID> to get a connection spec...
    - THE NVL Maker: The NVL Maker Ver 3.5: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 | http://115.com/file/be9pgwiw#THE-NVL-Maker-ver3.5-sim.7z | http://www.mediafire.com/?314vut69otco7tc
    - totalem: version 2012.05.24.1: Beta version. Added functions to extract the content of CD/DVD discs: extract the content of a data disc to an ISO image, and extract the content of an audio CD to WAV and MP3 files.
    - People's Note: People's Note 0.41: Version 0.41 adds in-app playback of WAV file attachments. It also adds some more polish to the user interface. To install: copy the appropriate CAB file onto your WM device and run it.
    - CODE Framework: 4.0.20524.0: This release has quite a few enhancements for WPF applications and SOA features. See change logs for more details.
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 2: JayData is a unified data access library for JavaScript developers to query and update data from different sources like webSQL, indexedDB, OData, Facebook or YQL. See it in action in this 6-minute video: http://www.youtube.com/watch?v=LlJHgj1y0CU Overview: Knockout.js integration. Using the Knockout.js module, your UI can be automatically refreshed when the data model changes, so you can develop the front-end of your data manager app even faster. Querying 1:N relations in WebSQL & SQLite SQ...
    - ????SDK for .Net 4.X: 2012-05-25 release.
    - Christoc's DotNetNuke Module Development Template: 00.00.08 for DNN6: BEFORE USE you need to install the MSBuild Community Tasks available from http://msbuildtasks.tigris.org. For best results you should configure your development environment as described in this blog post, then read this latest blog post about customizing and using these custom templates. Installation is simple: to use this template, place the ZIP (not extracted) file in your My Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\Web, or for VB My Documents\Visual Studio 2010\Te...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.53: Fix issue #18106, where member operators on numeric literals caused the member part to be duplicated when not minifying numeric literals. ADD NEW FEATURE: ability to create source map files! The first map file format to be supported is the Script# format. Use the new -map filename switch to create map files when building your sources.
    - BlackJumboDog: Ver5.6.3: 2012.05.22 Ver5.6.3.
    - LogicCircuit: LogicCircuit 2.12.5.22: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the hierarchy of running circuits. Changes in this version: this release fixes a start-up issue.
    - Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read our release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.0): New features: view other users' predictions; hide/show background image (web part property). Installing SharePoint Euro 2012 Predictor: SharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365). Download the solution havivi.euro2012.wsp from the download page (Downloads), upload this solution to your Site Collection via the solutions area, then click on Activate to make the web parts in the solution available for use in the Site C...
    - Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.0.0): New Metro-style UI. New features: save and load settings to/from file; export only OptionSet attributes; use of GemBox Spreadsheet to generate Excel (makes the application lighter: 1.5 MB instead of 7 MB).
    - ExtAspNet: ExtAspNet v3.1.6: ExtAspNet is a professional ASP.NET 2.0 control library based on ExtJS that enables AJAX-style no-refresh pages without writing JavaScript or CSS and without using UpdatePanel, ViewState or WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Links: http://bbs.extasp.net/ http://demo.extasp.net/ http://doc.extasp.net/ http://extaspnet.codeplex.com/ http://sanshi.cnblogs.com/ Changes: +2012-05-20 v3.1.6 -??RowD...
    - Dynamics XRM Tools: Dynamics XRM Tools BETA 1.0: The Dynamics XRM Tools 1.0 BETA is now available. Separate downloads are available for On Premise and Online, as certain features are only available On Premise. This is a BETA build and may not resemble the final release. Many enhancements are in development and will be made available soon. Please provide feedback so that we may learn and discover how to make these tools better.
    - PHPExcel: PHPExcel 1.7.7: See Change Log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd-party PDF libraries (tcPDF, mPDF, and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded separately from the appropriate sites.
    - GhostBuster: GhostBuster Setup (91520): Added WMI-based RestorePoint support. Removed test code from program.cs. Improved counting. Changed color of ghosted but unfiltered devices. Changed HwEntries into an ObservableCollection. Added Properties form. Added Properties menu item to the context menu. Added Hide Unfiltered Devices to the context menu. If you like this tool, leave me a note, rate this project, write a review, or Donate to GhostBuster.
    - DataPie (an Excel import/export and batch-processing tool for MSSQL 2008, ORACLE and ACCESS 2007): DataPie_V3.2: V3.2, 2012-05-19.

    New Projects

    - ando: UltraPackage
    - Bildschirmpause: A small WPF / .NET 4.0 app that reminds you to give your eyes some rest.
    - DataJuggler.Web.Controls: DataJuggler.Web.Controls is a web control library with over a dozen useful controls that make web development faster and easier. The controls are free to use; I do accept donations, or you can help promote my web site www.daily-click.com, which focuses on saving people and animals. If you have any questions, suggestions, problems or words of praise, please send an email to support@daily-click.com. Daily-Click.com: using technology to help people and animals.
    - Eraile - My AI for Google Ants: My AI code (C#) for the Google Ants "competition".
    - Expression Tree Visualizer for VS 2010: The entire project is a "visualizer" that displays the expression tree nodes and node attributes. Only important attributes are selected to be appended to the visualization tree, in order to keep it simple and useful.
    - Fix soft Animation Maker: Animate everything in Windows Forms even more easily than in WPF. Creating ImageAnimations, ColorAnimations, DoubleAnimations and IntegerAnimations was never easier. With one line of code you can create fade-in animations, moves, resizes and more. This product is made for Windows Forms but it works on WPF too. It supports changes of values while an animation is playing, meaning the animation matches itself to the new value.
    - Invasive Cafe: Community website to help educate communities about local invasive species and organize and mobilize them to mitigate them. Includes Silverlight games to educate, entertain and attract users.
    - JiHelper: Helper--Ji
    - Phần mềm quản lý TT Ngoại Ngữ - Tin Học ĐH Tây Nguyên: Management software for the Foreign Languages and Informatics Center of Tay Nguyen University; a university graduation thesis.
    - QWeiBo: A microblogging client app for WP7.
    - SCWLib: This project is a C# version of the ShareChiWai Utility Lib (http://sharechiwailib.codeplex.com), which is a collection of code I use when I develop applications/websites. Instead of writing the same code in every project, I built it as a DLL so that we as developers can reuse it in future. Let's make it better =) It may have a different lib to ShareChiWai Utility Lib; however, in future it will have the same, and probably a few more, features than ShareChiWai Utility Lib. Hope you find i...
    - Silverlight CRM Lookup Control: SilverCrmLookup is a Silverlight user control for CRM 2011 to help improve the user experience for Silverlight web resources. Contains built-in properties that give single-lookup or multi-lookup options, item auto-complete options, filtering...
    - SkypeLogIPGrabber: The SkypeLogIPGrabber works with the patched Skype 5.5/5.9 clients to read the produced unencrypted log files and give the IP information that the Skype client logged.
    - SmartLogistic: SmartLogistic Supply Chain Management System
    - Sql Cloud: i will write later
    - Sym: Sym is a symbolic computation application by SymbolicComputation.com. Solve algebra problems, etc.
    - template Nas: template nas
    - testdd0524201201: dfs
    - testdd05242012git001: df
    - testddhg0524201201: adsf
    - testtfs0524201201: dsf
    - testtom05242012git01: dsfdsfdsf
    - testtom05242012git02: fdsfdsfdsf
    - TwitterOAuth: TwitterOAuth, an OAuth library for .NET apps.
    - Ulfi: Universal log filter (Ulfi) is a tool I've written for myself to ease watching through painful error logs.
    - VSTestManagerSandbox: This project was created solely for those people who need a sandbox to use VS Test Manager.
    - Windows Phone Cocos 2D like Game Engine: This project seeks to make an engine for XNA game development for Windows Phone, similar to the popular Cocos2D (http://www.cocos2d-iphone.org/). Due to the success of this engine on platforms like iPhone and Android, it would be easier for developers to port or create new games on this engine. This engine contains many similarities with the original cocos2d engine for iOS, including full compatibility with the tileset plist files created in many available tools. WP Coco uses an animation...
    - XNA TileViewer: Texture tiling preview program written in C#, XNA.
    - YuCMS: YuCMS, a CMS.
    - ?????????: ??????SDK for .Net 4.X: an API SDK targeting .NET 4.0 that supports WebForm, WinForm and WPF clients.

    Read the article

  • propertyregex removes return characters in multiline

    - by javydreamercsw
    I'm using the Ant propertyregex task to change a property and it works fine up to a point, but I'm losing the return characters. Here's what I'm trying to change (it's in a .properties file):

        cluster.path=\
            ${nbplatform.active.dir}/harness:\
            ${nbplatform.active.dir}/platform:\
            ${nbplatform.active.dir}/nb

    So I'm trying to change it like this:

        <propertyregex property="cluster.path" input="${cluster.path}" regexp="nbplatform.active.dir" replace="xplatform.base" global="true" override="true"/>

    The text is replaced, but what I get is:

        cluster.path= ${xplatform.base}/harness\: ${xplatform.base}/platform\: ${xplatform.base}/nb

    This breaks logic further down the line that is not controlled by me (NetBeans) and that uses ':' as a delimiter. Any idea?

    Read the article

  • NFOP performance problem

    - by Jay Stevens
    We're using NFOP in a project (C#, ASP.NET 2.0) to ultimately return PDF files to the user. The process currently goes like this:

        Stored Procedure -> XML
        XML -> XSLT -> XSL-FO
        XSL-FO -> NFOP -> PDF

    This works fine and the PDF is generated BEAUTIFULLY. The problem is that it takes 300+ seconds to do it. The ANTS profiler indicates that the time is being spent in the driver.run() method inside NFOP. It's not as if this is a gargantuan amount of data; the XSL-FO source going into the NFOP driver object is only ~980 KB. What's the most likely source and resolution of this problem? ANY hints, tips or answers are most appreciated; we were supposed to head to VA scan at 11 am. :|
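    A rough way to corroborate what the profiler reports is to time each stage of the pipeline independently. The sketch below is not from the question: LoadSourceXml and RenderWithNfop are hypothetical stand-ins for the stored-procedure call and the NFOP driver invocation, and the stylesheet name is made up.

        // Minimal timing sketch (not the poster's code): time each pipeline stage separately.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Xml;
        using System.Xml.Xsl;

        class PipelineTiming
        {
            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();
                XmlDocument source = LoadSourceXml();                      // stored procedure -> XML
                Console.WriteLine("Load XML: {0} ms", sw.ElapsedMilliseconds);

                sw.Reset(); sw.Start();
                XslCompiledTransform xslt = new XslCompiledTransform();
                xslt.Load("report-to-fo.xslt");                            // XML -> XSL-FO
                string fo;
                using (StringWriter writer = new StringWriter())
                {
                    xslt.Transform(source, null, writer);
                    fo = writer.ToString();
                }
                Console.WriteLine("XSLT:     {0} ms", sw.ElapsedMilliseconds);

                sw.Reset(); sw.Start();
                byte[] pdf = RenderWithNfop(fo);                           // XSL-FO -> PDF via NFOP
                Console.WriteLine("NFOP:     {0} ms", sw.ElapsedMilliseconds);
                File.WriteAllBytes("out.pdf", pdf);
            }

            static XmlDocument LoadSourceXml() { return new XmlDocument(); }   // placeholder
            static byte[] RenderWithNfop(string fo) { return new byte[0]; }    // placeholder
        }

    If the wall-clock numbers agree with ANTS that almost all of the time is inside the rendering stage, that at least rules out the data access and XSLT steps.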

    Read the article

  • What are some good algorithms for drawing lines between graph nodes?

    - by ApplePieIsGood
    What I'm specifically grappling with is not just the layout of a graph: when a user selects a graph node and starts to drag it around the screen area, the line has to be constantly redrawn to reflect what it would look like if the user were to release the node. I suppose this is part of the layout algorithm? Also, some applications get a bit fancy and don't simply draw the line in a nice curvy way, but also bend the line around the square-shaped node at almost right angles. See the attached image, and keep in mind that as a node is dragged, the line is drawn as marching ants and rearranged nicely, while retaining its curved style.
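    For the redraw-while-dragging part specifically, one common approach (a sketch, not taken from the question) is to recompute the edge geometry from the two node rectangles on every mouse move and let the paint handler draw a Bezier through the resulting points; the GetConnector helper and its anchor choice below are illustrative assumptions, not a full edge-routing algorithm.

        // Minimal sketch: recompute a curved connector between two node rectangles.
        using System.Drawing;

        static class EdgeRouting
        {
            public static PointF[] GetConnector(RectangleF from, RectangleF to)
            {
                // Anchor each end on the side of the node that faces the other node.
                PointF start = new PointF(
                    from.Right < to.Left ? from.Right : (from.Left > to.Right ? from.Left : from.Left + from.Width / 2),
                    from.Top + from.Height / 2);
                PointF end = new PointF(
                    to.Right < from.Left ? to.Right : (to.Left > from.Right ? to.Left : to.Left + to.Width / 2),
                    to.Top + to.Height / 2);

                // Horizontal control points give the familiar "S"-shaped curve.
                float dx = (end.X - start.X) / 2f;
                PointF c1 = new PointF(start.X + dx, start.Y);
                PointF c2 = new PointF(end.X - dx, end.Y);
                return new[] { start, c1, c2, end };
            }
        }

        // In the drag handler: recompute and invalidate; then, in OnPaint:
        //   g.DrawBezier(Pens.Black, pts[0], pts[1], pts[2], pts[3]);

    Bending the line around node rectangles at right angles is a separate problem (orthogonal edge routing), usually handled by routing the connector around the bounding boxes rather than through them.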

    Read the article

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service. I was trying to track down an OutOfMemoryException and discovered that my stack size is huge and growing, because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so when my app has 754 threads the stack size is around 772 MB. The problem for me is that I don't know where these threads come from. Initially my app has a very limited number of threads, and they keep growing over time. I have two suspicions: either these threads are created by WCF or by database connections. My application uses both WCF and datasets. Also, when I profile my app in ANTS and do a trace, I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel instances, and this number is increasing with time; I can see thousands of these objects created. So what I want to know is who is creating the threads (and which tools or profilers would show me), and whether it is WCF that is creating them.
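    One low-tech way to narrow this down (a sketch, not part of the question) is to log the process thread count on an interval and correlate the jumps with what the service is doing at the time; the 30-second interval and console output below are arbitrary choices.

        // Minimal sketch: periodically log the OS thread count of the current process.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class ThreadCountLogger : IDisposable
        {
            private readonly Timer _timer;

            public ThreadCountLogger()
            {
                _timer = new Timer(_ =>
                {
                    int osThreads = Process.GetCurrentProcess().Threads.Count;
                    Console.WriteLine("{0:u}  OS threads: {1}", DateTime.UtcNow, osThreads);
                }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
            }

            public void Dispose()
            {
                _timer.Dispose();
            }
        }

    If the count climbs in step with the ClientReliableDuplexSessionChannel instances, that points at the WCF channels rather than the data layer.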

    Read the article

  • Animation issue caused by C# parameters passed by reference rather than value, but where?

    - by Jordan Roher
    I'm having trouble with sprite animation in XNA that appears to be caused by a struct being passed by reference rather than by value, but I'm not using the ref keyword anywhere. I am, admittedly, a C# noob, so there may be some shallow bonehead error in here, but I can't see it. I'm creating 10 ants or bees and animating them as they move across the screen. I have an array of animation structs, and each time I create an ant or bee, I send it the animation array value it requires (just [0] or [1] at this time). Deep inside the animation struct is a timer that is used to change frames. The ant/bee class stores the animation struct as a private variable. What I'm seeing is that each ant or bee uses the same animation struct, the one I thought I was passing in and copying by value. So during Update(), when I advance the animation timer for each ant/bee, the next ant/bee has its animation timer advanced by that small amount. If there's 1 ant on screen, it animates properly. With 2 ants, it runs twice as fast, and so on. Obviously, not what I want. Here's an abridged version of the code. How is BerryPicking's ActorAnimationGroupData[] getting shared between the BerryCreatures?

        using System.Collections.Generic;
        using System.Xml;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class BerryPicking
        {
            private ActorAnimationGroupData[] animations;
            private BerryCreature[] creatures;
            private Dictionary<string, Texture2D> creatureTextures;
            private const int maxCreatures = 5;

            public BerryPicking()
            {
                this.creatures = new BerryCreature[maxCreatures];
                this.creatureTextures = new Dictionary<string, Texture2D>();
            }

            public void LoadContent()
            {
                // Returns data from an XML file
                Reader reader = new Reader();
                animations = reader.LoadAnimations();
                CreateCreatures();
            }

            // This is called from another function I'm not including because it's not relevant to the problem.
            // In it, I remove any creature that passes outside the viewport by setting its creatures[] spot to null.
            // Hence the if (creatures[i] == null) test is used to recreate "dead" creatures. Inelegant, I know.
            private void CreateCreatures()
            {
                for (int i = 0; i < creatures.Length; i++)
                {
                    if (creatures[i] == null)
                    {
                        // In reality, the name selection is randomized
                        creatures[i] = new BerryCreature("ant");

                        // Load content and texture (which I create elsewhere)
                        creatures[i].LoadContent(
                            FindAnimation(creatures[i].Name),
                            creatureTextures[creatures[i].Name]);
                    }
                }
            }

            private ActorAnimationGroupData FindAnimation(string animationName)
            {
                int yourAnimation = -1;
                for (int i = 0; i < animations.Length; i++)
                {
                    if (animations[i].name == animationName)
                    {
                        yourAnimation = i;
                        break;
                    }
                }
                return animations[yourAnimation];
            }

            public void Update(GameTime gameTime)
            {
                for (int i = 0; i < creatures.Length; i++)
                {
                    creatures[i].Update(gameTime);
                }
            }
        }

        class Reader
        {
            public ActorAnimationGroupData[] LoadAnimations()
            {
                ActorAnimationGroupData[] animationGroup;
                XmlReader file = new XmlTextReader(filename);
                // Do loading...
                // Then later
                file.Close();
                return animationGroup;
            }
        }

        class BerryCreature
        {
            private ActorAnimation animation;
            private string name;

            public BerryCreature(string name)
            {
                this.name = name;
            }

            public string Name
            {
                get { return name; }
            }

            public void LoadContent(ActorAnimationGroupData animationData, Texture2D sprite)
            {
                animation = new ActorAnimation(animationData);
                animation.LoadContent(sprite);
            }

            public void Update(GameTime gameTime)
            {
                animation.Update(gameTime);
            }
        }

        class ActorAnimation
        {
            private ActorAnimationGroupData animation;
            private Texture2D sprite;

            public ActorAnimation(ActorAnimationGroupData animation)
            {
                this.animation = animation;
            }

            public void LoadContent(Texture2D sprite)
            {
                this.sprite = sprite;
            }

            public void Update(GameTime gameTime)
            {
                animation.Update(gameTime);
            }
        }

        struct ActorAnimationGroupData
        {
            // There are lots of other members of this struct, but the timer is the only one I'm worried about.
            // TimerData is another struct
            private TimerData timer;
            public string name;

            public ActorAnimationGroupData()
            {
                timer = new TimerData(2);
            }

            public void Update(GameTime gameTime)
            {
                timer.Update(gameTime);
            }
        }

        struct TimerData
        {
            public float currentTime;
            public float maxTime;

            public TimerData(float maxTime)
            {
                this.currentTime = 0;
                this.maxTime = maxTime;
            }

            public void Update(GameTime gameTime)
            {
                currentTime += (float)gameTime.ElapsedGameTime.TotalSeconds;
                if (currentTime >= maxTime)
                {
                    currentTime = maxTime;
                }
            }
        }
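    Not part of the question, but a minimal stand-alone sketch of the C# semantics under discussion may help frame it: assigning a struct copies the whole value, so two copies have independent value-type fields, while any reference-type field inside the struct still points at one shared object after the copy. The types below are made up for illustration, not the poster's.

        // Minimal sketch of value-type copy semantics (not the poster's types).
        using System;

        struct Timer
        {
            public float Current;
        }

        struct AnimationData
        {
            public Timer Timer;     // value-type field: copied along with the struct
            public int[] Frames;    // reference-type field: the reference is copied, the array is shared
        }

        class Program
        {
            static void Main()
            {
                AnimationData a = new AnimationData { Timer = new Timer(), Frames = new[] { 0, 1, 2 } };
                AnimationData b = a;    // full copy of the struct value

                b.Timer.Current = 5f;   // does not touch a.Timer
                b.Frames[0] = 99;       // does touch a.Frames - both copies share the same array

                Console.WriteLine(a.Timer.Current);  // 0
                Console.WriteLine(a.Frames[0]);      // 99
            }
        }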

    Read the article

  • WeakReferences are not freed in embedded OS

    - by Carsten König
    I've got some strange behavior here: I get a massive memory leak in production running a WPF application on a DLOG terminal (Windows Embedded Standard SP1), yet the application behaves perfectly fine if I run it locally on a normal desktop (Win7 Professional). After many unsuccessful attempts to find the problem I put one of those terminals directly beside my monitor, installed the ANTS Memory Profiler and did a one-hour test run simulating user operations on both the terminal and my development PC. The result is that, for some strange reason, the embedded system piles up a huge number of WeakReference and EffectiveValueEntry[] objects. Here are some pictures: Development (PC): And the terminal: Just look at the class list... Has anyone seen something like this before, and are there known solutions to this? Where can I get help? (PS: the terminals were installed with images prepared for .NET 4.)
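    Not from the question, but when the same binary leaks on one machine and not on another, a cheap first check is to log what the garbage collector is actually doing on each box and compare; a sketch along these lines (the interval and the console sink are arbitrary choices):

        // Minimal sketch: log GC activity so the terminal and the desktop can be compared.
        using System;
        using System.Threading;

        class GcLogger
        {
            static void Main()
            {
                while (true)
                {
                    Console.WriteLine("{0:u}  gen0={1} gen1={2} gen2={3}  heap={4:N0} bytes",
                        DateTime.UtcNow,
                        GC.CollectionCount(0),
                        GC.CollectionCount(1),
                        GC.CollectionCount(2),
                        GC.GetTotalMemory(false));
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }

    If gen-2 collections simply never happen on the embedded box while the desktop collects regularly, the pile-up of WeakReference and EffectiveValueEntry[] objects may be a symptom of collection behavior rather than of the application code itself.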

    Read the article

  • How to profile a silverlight application?

    - by rudigrobler
    Are there any profilers that support Silverlight? I have tried ANTS (version 3.1) without any success. Does version 4 support it? Are there any other products I can try?

    Update: since the release of Silverlight 4, it is now possible to do full profiling of SL applications... check out this article on the topic:

    "At PDC, I announced that Silverlight 4 came with the new CoreCLR capability of being profile-able by the VS2010 profilers: this means that for the first time, we give you the power to profile the managed and native code (user or platform) used by a Silverlight application. woohoo. kudos to the CLR team. (Sidenote: from Silverlight 1-3, one could only use things like xperf (see XPerf: A CPU Sampler for Silverlight), which is very powerful for seeing the layout/text/media/gfx/etc. pipelines, but only gives the native call stack.)" From SilverLite (PDC video, TechEd Iceland, VS2010, profiling, Silverlight 4)

    Read the article

  • WPF Datatemplate + ItemsControl each item uses > 1 MB Memory?

    - by Matt H.
    Does that sound right to anyone???? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of:

    - a Border
    - 3 buttons
    - 5 textboxes
    - an ellipse
    - a bindable RichTextBox (a custom class that inherits from RichTextBox, so I could make Document a dependency property to support binding)
    - several grids and stackpanels for layout

    It uses styles (stored in a resource dictionary higher up the tree) that affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that also implements INotifyPropertyChanged, so the user can change display settings. That's it! So what gives?

    I've also noticed that when I empty and remove the ItemsControl, memory isn't freed: over 5000 instances of CommandBindingCollection and WeakReference are CREATED (according to ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives!!! :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.
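    For reference, the "Document as a dependency property" wrapper mentioned above usually looks something like the sketch below. This is the generic WPF pattern, not the poster's class, and the property is named BoundDocument because RichTextBox already defines a Document member of its own.

        // Minimal sketch (assumes WPF): expose a bindable document property on a RichTextBox subclass.
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;

        public class BindableRichTextBox : RichTextBox
        {
            public static readonly DependencyProperty BoundDocumentProperty =
                DependencyProperty.Register(
                    "BoundDocument",
                    typeof(FlowDocument),
                    typeof(BindableRichTextBox),
                    new FrameworkPropertyMetadata(null, OnBoundDocumentChanged));

            public FlowDocument BoundDocument
            {
                get { return (FlowDocument)GetValue(BoundDocumentProperty); }
                set { SetValue(BoundDocumentProperty, value); }
            }

            private static void OnBoundDocumentChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // Push the bound document into the underlying RichTextBox.
                var rtb = (BindableRichTextBox)d;
                rtb.Document = (FlowDocument)e.NewValue ?? new FlowDocument();
            }
        }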

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover:

    - SQLU part 1 - What and why of database testing
    - SQLU part 2 - What and why of database refactoring
    - SQLU part 3 - Tools of the trade

    With that out of the way, let us sharpen our pencils and get going.

    Why test a database

    The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as a waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to think about lower future costs in relation to a little more current work. It’s orders of magnitude easier to know about the current costs in relation to the current amount of work. That’s why programmers convince themselves testing is a waste of time.

    However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we’d know instantly what broke where.

    Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later that a certain monthly process has stopped working. What we don’t know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process, it’s giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database

    Now that we’ve decided we’ll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

    There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven’t caused the input/output behavior of the system to change. The most important things to test here are the edge conditions; that’s where most programs break. Having good edge-condition tests, we can be more confident that changes to the system won’t break it. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and black box tests should be complementary to each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user-defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing the schema we only care about the columns and the data types they’re returning; after all, the schema is the contract to the outside systems, and if it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because it doesn’t return any data to the client. (A short C# sketch of this kind of schema check appears below.) After we’ve validated the schema we have to test the returned data. There is no other way to do this than to have the expected data known before the test executes and to compare that data to the database routine's output.

    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well, but the biggest problem here is web apps: they usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

    Load testing assures us that our database can handle peak loads. One often-overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of the RML utilities (x86, x64) for SQL Server and can help determine whether our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases; we can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say, a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping, and so on until we’ve covered the whole database. Then we move on to the next one.
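    Picking up the SET FMTONLY ON idea from above, here is a rough C# sketch of a black-box schema test (not from the post): it runs a routine with FMTONLY ON so only metadata comes back, then compares the column names and types against an expected list. The connection string, routine name and expected columns are made-up placeholders.

        // Minimal sketch of a black-box schema test; the routine and expected columns are illustrative.
        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SchemaTest
        {
            static void Main()
            {
                string[] expected = { "CustomerID:Int32", "TerritoryID:Int32", "CustomerType:String" };

                using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("SET FMTONLY ON; EXEC dbo.GetCustomers; SET FMTONLY OFF;", conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        DataTable schema = reader.GetSchemaTable();
                        if (schema == null || schema.Rows.Count != expected.Length)
                        {
                            Console.WriteLine("FAIL: column count changed");
                            return;
                        }
                        for (int i = 0; i < expected.Length; i++)
                        {
                            string actual = schema.Rows[i]["ColumnName"] + ":" +
                                            ((Type)schema.Rows[i]["DataType"]).Name;
                            Console.WriteLine(actual == expected[i] ? "OK   " + actual : "FAIL " + actual);
                        }
                    }
                }
            }
        }

    The same shape works for views and functions; the data comparison part of the test then runs the routine for real and compares the rows against a known expected set.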
    Database testing is a topic that we can spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good reason about what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine, just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd –E –S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt) ‘s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggesting bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness). 
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley

    Read the article

  • The .NET Rocks! Visual Studio 2010 Road Trip

    - by Laila
    Carl Franklin and Richard Campbell, the two .NET Rocks radio show hosts, have decided to set off to 15 cities in the US, between April 19th and May 7th, in their DotNetMobile (a 30-foot RV). What for, you'll ask me? Well, to drive around the US, meet up with .NET developers, and show off the latest and greatest in Visual Studio 2010 and .NET 4.0!

    Each evening, they stop in a city and host a three-hour event in front of a crowd of 100 to 300 developers, where Carl shows off media features in Silverlight 4 and their road trip tracking application, whilst Richard demos the web performance testing features of VS2010 using his portable server rig. But before they take to the stage, they have a special guest brought in - a rock star from the Visual Studio world - whom they interview for an hour as a .NET Rocks episode. So far, they've had - amongst others - Phil Haack, a Program Manager with the ASP.NET team working on ASP.NET MVC, Dan Fernandez, an Evangelism Manager in the Developer and Platform Evangelism team at Microsoft, and Beth Massi, Senior Program Manager on the Visual Studio Community Team at Microsoft. I love the fact that the audience gets a chance to participate, ask questions and have a great laugh, as you can hear in the first episode!

    Along the way, the .NET Rocks guys are giving away great prizes (including .NET Reflector Pro and ANTS Memory Profiler licenses, and 40" LCD TVs!). Even more out of the ordinary, at each stop on the road trip, one lucky attendee (who entered the Ride Along competition) gets to jump in the RV with Carl and Richard and ride along with them to the next stop on the road trip. How cool is that!

    Richard told us: "Our first winner in Mountain View was Eric Ziko. I was looking for him to announce that he had won, when he found us and gave us a bottle of scotch he had brought just to say 'thanks for the great show'. We all had a toast from the bottle the next night when he headed back home." Cheeky!

    There's still space at a few of these events, so if you want to attend, register now, because it's first come, first served. We're grateful to Richard and Carl for giving us the opportunity to sponsor this major .NET event! A unique .NET adventure worth following, for sure.

    Cheers,
    Laila

    Read the article

  • Inside Red Gate - Divisions

    - by Simon Cooper
    When I joined Red Gate back in 2007, there were around 80 people in the company. Now, around 3 years later, it's grown to more than 200. It's a constant battle against Dunbar's number; the maximum number of people you can keep track of in a social group, to try and maintain that 'small company' feel that attracted myself and so many others to apply in the first place. There are several strategies the company's developed over the years to try and mitigate the effects of Dunbar's number. One of the main ones has been divisionalisation. Divisions The first division, .NET, appeared around the same time that I started in 2007. This combined the development, sales, marketing and management of the .NET tools (then, ANTS Profiler v3) into a separate section of the office. The idea was to increase the cohesion and communication between the different people involved in the entire lifecycle of the tools; from initial product development, through to marketing, then to customer support, who would feed back to the development team. This was such a success that the other development teams were re-worked around this model in 2009. Nowadays there are 4 divisions - SQL Tools, DBA, .NET, and New Business. Along the way there have been various tweaks to the details - the sales teams have been merged into the divisions, marketing and product support have been (mostly) centralised - but the same basic model remains. So, how has this helped? As Red Gate has continued to grow over the years, divisionalisation has turned Red Gate from a monolithic software company into what one person described as a 'federation of small businesses'. Each division is free to structure itself as it sees fit, it's free to decide what to concentrate development work on, organise its own newsletters and webinars, decide its own release schedule. Each division is its own small business. In terms of numbers, the size of each division varies from 20 people (.NET) to 52 (SQL Tools); well below Dunbar's number. From a developer's perspective, this means organisational structure is very flat & wide - there's only 2 layers between myself and the CEOs (not that it matters much; everyone can go and have a chat to Neil or Simon, or anyone else inbetween, whenever they want. Provided you can catch them at their desk!). As Red Gate grows, and expands into new areas, new divisions will be created as needed, old ones merged or disbanded, but the division structure will help to maintain that small-company feel that keeps Red Gate working as it does.

    Read the article

  • The Missing Post

    - by Joe Mayo
    It’s somewhat of a mystery how the writing process can conjure up results that weren’t initially intended. Case in point is the fact that another post was planned to be in place of this one, but it never saw the light of day. That particular post started off as an introduction to a technology I had just learned, used, and wanted to share the experience with others. The beginning was fun and demonstrated how easy it was to get started. One of the things I’ve been pondering over time is that the Web is filled with introductions to new technologies and quick first looks, so I set out to add more depth, share lessons learned, and generally help you avoid the problems I encountered along the way; problems being a key theme of why you aren’t reading that post at this very minute. Problems that curiously came from nowhere to thwart my good intentions.

    Success was sweet when using the tool for the prototypical demo scenario. The thing is, I intended the tool to accomplish a real task. Having embarked on the path toward getting the job done, glitches began creeping into the process. Realizing that this was all a bit new, I had patience and found a suitable work-around, but this was to be short-lived. Like marching ants to a freshly laid-out picnic, the problems kept coming until I had to get up and walk away. Not to be outdone, sheer will and brute-force manual intervention led to mission accomplishment. Though I kept a positive outlook and was pleased with the final result, the process of using the tool had somewhat soured.

    Regardless of a less than stellar experience with the tool, I have a great deal of respect for the company that produced it and the people who built it. Perhaps I empathize with what they might feel after reading a post that details such deficiencies in their product. Sure, if you’re in this business, you’ve got to have a thick skin; brush it off, fix the problem, and move on to greatness. But today I feel like they’re people and are probably already aware of any issues I would seemingly reveal. Anyone who builds a product or provides a service takes a lot of pride in what they do. Sometimes they screw up, and if they're worth a dime, they make up for it. I think that will happen in this case, and there’s no reason why I should post information that has the potential to sound more negative than helpful. While no one would ever notice or care either way, I’m posting something that won’t harm.

    Joe

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • SQL 2008 Database tuning advisor won’t start

    - by Andrew Hancox
    For some reason I can't get DTA to connect to my development machine. It connects to a remote DB just fine, but when I point it at my dev machine I get an error saying: Failed to initialize MSDB database for tuning (exit code: -1073741819). I'm pretty sure it's not a permissions issue, since I've used Profiler to capture what it's doing, and all of the commands it has run so far look fine and are being run under my account, which is associated with the sysadmin role; when I run them in SQL Management Studio they go through fine. I'm pretty convinced that the problem is related to creating the objects in MSDB that are used by DTA, but I tried creating these manually (I found scripts on the web) and it just seems to push the problem slightly further along the line. I'm going out of my mind - I've even tried reinstalling SQL, but that hasn't fixed it. I'm using SQL 2008 with SP1 (10.0.2531) on Windows Server 2008 (patched up to date). SAVE ME!!!!!

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more details, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (eg. a trace that you can see in sys.Traces) and converts it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events is kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData field and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An  EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur when ever the parent Event occurs. Actions can do a number of different things for example, there are Actions that collect additional data and, take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure: Column_name Type trace_event_id smallint package_name nvarchar xe_event_name nvarchar By joining this table sys.trace_events using trace_event_id and to the sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
    When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map.

    Event Mapping
    The table dbo.trace_xe_event_map exists in the master database with the following structure:

    Column_name      Type
    trace_event_id   smallint
    package_name     nvarchar
    xe_event_name    nvarchar

    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event:

    SELECT
        t.trace_event_id,
        t.name [event_class],
        e.package_name,
        e.xe_event_name
    FROM sys.trace_events t
    INNER JOIN dbo.trace_xe_event_map e
        ON t.trace_event_id = e.trace_event_id

    There are a couple of things you’ll notice as you peruse the output of this query:

    For the most part, the names of Events are fairly close to the original Event Classes; eg. SP:CacheMiss == sp_cache_miss, and so on.

    We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event: you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures.

    There are some Event Classes that did not make the cut and were not migrated. These fall into two categories. First, there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger: with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This was a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

    Action Mapping
    The table dbo.trace_xe_action_map exists in the master database with the following structure:

    Column_name      Type
    trace_column_id  smallint
    package_name     nvarchar
    xe_action_name   nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field:

    SELECT
        c.trace_column_id,
        c.name [column_name],
        a.package_name,
        a.xe_action_name
    FROM sys.trace_columns c
    INNER JOIN dbo.trace_xe_action_map a
        ON c.trace_column_id = a.trace_column_id

    If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.
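    To see which pieces of data already ride along as EventData fields for a particular Event, a minimal sketch against sys.dm_xe_object_columns (using sql_batch_completed as a stand-in; substitute any event name from the mapping query above):

    -- List the EventData fields that are always part of a given Event's payload.
    SELECT
        oc.name      AS [field_name],
        oc.type_name AS [field_type]
    FROM sys.dm_xe_object_columns AS oc
    WHERE oc.object_name = 'sql_batch_completed'
      AND oc.column_type = 'data'
    ORDER BY oc.column_id;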
    Putting it all together
    If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.

    USE MASTER;
    GO
    SELECT DISTINCT
       tb.trace_event_id,
       te.name AS 'Event Class',
       em.package_name AS 'Package',
       em.xe_event_name AS 'XEvent Name',
       tb.trace_column_id,
       tc.name AS 'SQL Trace Column',
       am.xe_action_name AS 'Extended Events action'
    FROM (sys.trace_events te
       LEFT OUTER JOIN dbo.trace_xe_event_map em
          ON te.trace_event_id = em.trace_event_id)
    LEFT OUTER JOIN sys.trace_event_bindings tb
       ON em.trace_event_id = tb.trace_event_id
    LEFT OUTER JOIN sys.trace_columns tc
       ON tb.trace_column_id = tc.trace_column_id
    LEFT OUTER JOIN dbo.trace_xe_action_map am
       ON tc.trace_column_id = am.trace_column_id
    ORDER BY te.name, tc.name

    As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

    USE MASTER;
    GO
    DECLARE @trace_id int
    SET @trace_id = 1
    SELECT DISTINCT
       el.eventid, em.package_name, em.xe_event_name AS 'event',
       el.columnid, ec.xe_action_name AS 'action'
    FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
       LEFT OUTER JOIN dbo.trace_xe_event_map AS em
          ON el.eventid = em.trace_event_id)
    LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
       ON el.columnid = ec.trace_column_id
    WHERE em.xe_event_name IS NOT NULL
    AND ec.xe_action_name IS NOT NULL

    You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

    But wait…there’s more!
    If this were an infomercial there’d be some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?” Needless to say, I’d blog back, in an overly excited way, “You bet I can, obnoxious blogger side-kick!” What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
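    To give a rough idea of what the generated output looks like, here is a minimal sketch of a CREATE EVENT SESSION statement of the kind such a conversion produces for a trace capturing batch completions; the session name and file path are placeholders, and the event, action and target names are the Denali/SQL Server 2012 ones:

    CREATE EVENT SESSION [converted_trace_sketch] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
    (
        -- Actions stand in for the SQL Trace Columns that are not EventData fields.
        ACTION (sqlserver.client_app_name,
                sqlserver.server_principal_name,
                sqlserver.session_id)
    )
    ADD TARGET package0.event_file
    (
        SET filename = N'C:\Temp\converted_trace_sketch.xel'
    )
    WITH (STARTUP_STATE = OFF);
    GO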
    Sample code: TraceToExtendedEventDDL

    Installing the procedure
    Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:

    -- Context to master
    USE master
    GO
    -- Create the assembly from a shared location.
    CREATE ASSEMBLY TraceToXESessionConverter
    FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
    WITH PERMISSION_SET = SAFE
    GO
    -- Create a stored procedure from the assembly.
    CREATE PROCEDURE CreateEventSessionFromTrace
        @trace_id int,
        @session_name nvarchar(max)
    AS
    EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
    GO

    Enjoy!
    -Mike
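    A minimal usage sketch, assuming the assembly and procedure were created as above (the session name is a placeholder; trace ID 1 is usually the Default Trace, but check sys.traces on your instance first):

    -- Generate (but do not create) the event session DDL for trace ID 1.
    -- The DDL comes back on the Messages tab; copy it into a new query window to run it.
    EXEC dbo.CreateEventSessionFromTrace
        @trace_id = 1,
        @session_name = N'migrated_default_trace';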

    Read the article

  • MacBookPro RAM: CL7 or CL9

    - by chris_l
    Hi, I want to increase the RAM in my MacBookPro 15", 2.53 GHz, Mid-2009. I currently have 4GB, and I want 6GB (and to upgrade to 8GB later - I think this should work with my model: http://forums.macrumors.com/showthread.php?t=573906). Now I see two offers: 1x4GB DDR3 1333 CL9 9-9 (149 EUR) and 1x4GB DDR3 1066 CL7 7-7 (192 EUR). Both are from the same brand (Transcend), and both are labeled "For MacBook/MacBookPro". Now I have no idea which of these would work better with the 2GB RAM module from my currently installed memory (running at 1067 MHz according to System Profiler - but with what CL setting?). Would the 1333 CL9 RAM be set to CL7 when running at 1066 MHz? Thanks, Chris. PS: Should I expect problems when I later upgrade to 8GB with a different module (in case the same module won't be produced anymore)?

    Read the article

  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the imap functions. But on starting php-fpm it gives the following error:

    Starting php_fpm Error in argument 1, char 1: no argument for option -
    Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>]
           php-cgi <file> [args...]
      -a               Run interactively
      -C               Do not chdir to the script's directory
      -c <path>|<file> Look for php.ini file in this directory
      -n               No php.ini file will be used
      -d foo[=bar]     Define INI entry foo with value 'bar'
      -e               Generate extended information for debugger/profiler
      -f <file>        Parse <file>. Implies `-q'
      -h               This help
      -i               PHP information
      -l               Syntax check only (lint)
      -m               Show compiled in modules
      -q               Quiet-mode. Suppress HTTP Header output.
      -s               Display colour syntax highlighted source.
      -v               Version number
      -w               Display source with stripped comments and whitespace.
      -z <file>        Load Zend extension <file>
    ................................... failed

    What could be the problem? Is it in php-fpm.conf or php.ini?

    Read the article

  • Java Spotlight Episode 113: John Ceccarelli on Netbeans @JCeccarelli1

    - by Roger Brinkley
    Interview with John Ceccarelli on Netbeans. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    JCP Star Spec Leads 2012 Nominations open now until 31 December
    Java EE 7 Survey Results
    JavaFX for Tablets Survey
    JavaFX Scene Builder - Developer Preview Release
    Oracle JDK 7u10 released with new security features
    jtreg update, December 2012
    Food For Tests: 7u12 Build b05, 8 b68 Preview Builds + Builds with Lambda & Type Annotation Support
    Developer Preview of Java SE 8 (with JavaFX) for ARM
    Project Nashorn: The Vote Is In

    Events
    Dec 20, 9:30am JCP Spec Lead Call December on Developing a TCK
    Jan 15-16, JCP EC Face to Face Meeting, West Coast USA
    Jan 14-17, IOUG, Redwood Shores
    Jan 29-31, Distributech, San Diego
    Feb 2-3 FOSDEM, Brussels
    Feb 4-6 Jfokus, Sweden

    Feature Interview
    John Jullion-Ceccarelli is the head of engineering for the NetBeans open source project and for the VisualVM Java profiler. John started with Sun Microsystems in 2001 as a technical writer and has since held a variety of positions including technical publications manager, engineering manager, and NetBeans IDE 6.9 Release Boss. He recently relocated to the San Francisco Bay Area after 13 years living in Prague, the Czech Republic.

    What’s Cool
    Glassfish is 3 years old
    Arduino/Raspberry-Pi/JavaFX mash-up by Jose Pereda
    Early Access of Drombler FX for building modular JavaFX applications with OSGi and Maven
    Eclipse Modeling Framework Support coming for e(fx)clipse
    8003562: Provide a command-line tool to find static dependencies
    Duke’s Choice Awards Winners LAD - includes JCP EC Member TOTVS
    London Java Community and SouJava jointly win JCP member of the year

    Read the article

  • How to find and diagnose poorly-performing Firefox addins?

    - by Scott Bilas
    I use a lot of addins in my Firefox. One of them is causing periodic freezes when watching video. Super annoying. Aside from doing a binary search of disabling addins to narrow down the problem (which would take a very long time due to frequency of the freezes), are there any other options? If the problem came from the native app, then I'd just load up a profiler and see where the time is going. But it's all in Javascript. Are there any tools that exist to help figure this out? Maybe some instrumentation I can throw on a few key source files in a local build to help diagnose the problem?

    Read the article

  • SQL Server replication - Log Reader Agent Read Latency Issue, Please help

    - by envykok
    Hi all, I am facing a transactional replication delay issue with the Log Reader Agent. The log reader output is:

    ********* STATISTICS SINCE AGENT STARTED **************
    02-28-2011 20:12:08
    Execution time (ms): 304141
    Work time (ms): 304016
    Distribute Repl Cmds Time(ms): 303764
    Fetch time(ms): 300813
    Repldone time(ms): 1826
    Write time(ms): 5319
    Num Trans: 15500
    Num Trans/Sec: 50.984159
    Num Cmds: 191639
    Num Cmds/Sec: 630.358271

    It looks like Log Reader reader-thread latency. I also ran 'sp_replcounters' and see more than 20,000 seconds of replication latency, and it keeps increasing. I used SQL Profiler to monitor sp_replcmds and found that its execution time was 11 to 15 seconds. Is there any way to optimize the Log Reader so that it reads faster from the transaction log? Other information: SQL Server 2008 (SP2) Standard 64-bit
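    One way to see the latency the agent itself reports over time is to query its history in the distribution database; a minimal monitoring sketch, assuming the distribution database is named distribution and you have access to its agent history tables:

    -- Recent Log Reader Agent runs: delivered commands and the latency (ms)
    -- from commit at the publisher to delivery into the distribution database.
    USE distribution;
    GO
    SELECT TOP (20)
        h.agent_id,
        h.time,
        h.delivered_transactions,
        h.delivered_commands,
        h.delivery_latency,
        h.comments
    FROM dbo.MSlogreader_history AS h
    ORDER BY h.time DESC;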

    Read the article

  • VS 2010 IDE Features in a nutshell

    - by Rajesh Pillai
    Going through the VS 2010 IDE features. We will explore each feature in subsequent posts. The posts are documented as being reviewed by me.

    Breakpoint Labeling
    Breakpoint Searching
    Breakpoint Import/Export
    Dynamic Data Tooling
    WPF Tree Visualizer
    Call Hierarchy
    Improved WPF Tooling
    Historical Debugging
    Mini-Dump Debugging
    Quick Search
    Better Multi-Monitor Support
    Highlight References
    Parallel Stacks Window
    Parallel Tasks Window
    Document Map Margin
    Generate from Usage
    Concurrency Profiler
    Inline Call Tree
    Extensible Test Runner
    MVC Tooling
    Web Deploy
    JQuery IntelliSense
    SharePoint Tooling
    HTML Snippets
    Web.config Transformation
    ClickOnce Enhancements for MS Office

    VS is an editor as well as a platform for development, and this is only more true with VS 2010. As an editor there is an improved focus on writing code, understanding code, navigating code and publishing code. The VS shell has been completely rewritten using WPF, which brings huge benefits. The start page has been rewritten using XAML, so it is easy to customize. There is new support for Silverlight, MFC, F# and Azure, and extended support for Office 2010 and SharePoint. It also has a good Extension Manager. Enjoy Coding !!!

    Read the article

  • Install OS X 10.5 on Intel 1GB Macbook

    - by Moshe
    I'm trying to install Mac OS X 10.5 on a MacBook that previously had 10.4 on it. I am getting an error after selecting the language: a box on the screen says "Mac OS X cannot be installed on this computer." What could be causing this? It's an Intel MacBook with 1GB of DDR2 memory and an 80GB hard drive. It has been formatted now, so there's no going back. According to the System Profiler on the 10.5 boot CD, the MacBook is version MacBook2,1. Thanks!

    Read the article

  • Is there a way to capture the "network user name" and the "host name" during a "login failed" alert

    - by Jorge Rivera
    I am a SQL Server 7 DBA and I would like to "improve" the default "Login failed" alert that ships with SQL 7. I need a way to know which "user" (network user name) was trying to connect to the server (not only the SQL user account), and I would like to know where the user was trying to connect from (the host name). Does someone know if this is possible? Using SQL Profiler is not the best way, since it is a resource-consuming task. Would there be a way to set up an alert for doing this? Thanks in advance.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >