Search Results

Search found 105 results on 5 pages for 'marching ants'.


  • How to profile a Silverlight application?

    - by rudigrobler
    Are there any profilers that support Silverlight? I have tried ANTS (version 3.1) without any success. Does version 4 support it? Any other products I can try? Update: since the release of Silverlight 4, it is now possible to do full profiling of SL applications... check out this article on the topic: At PDC, I announced that Silverlight 4 came with the new CoreCLR capability of being profile-able by the VS2010 profilers: this means that, for the first time, we give you the power to profile the managed and native code (user or platform) used by a Silverlight application. woohoo. kudos to the CLR team. Sidenote: from Silverlight 1-3, one could only use things like xperf (see XPerf: A CPU Sampler for Silverlight), which is very powerful for seeing the layout/text/media/gfx/etc. pipelines, but only gives the native call stack. From SilverLite (PDC video, TechEd Iceland, VS2010, profiling, Silverlight 4)

    Read the article

  • WPF DataTemplate + ItemsControl: each item uses > 1 MB of memory?

    - by Matt H.
    Does that sound right to anyone? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of:
    - a Border
    - 3 buttons
    - 5 textboxes
    - an ellipse
    - a bindable RichTextBox (a custom class that inherits from RichTextBox, so I could make Document a dependency property to support binding)
    - several grids and stackpanels for layout
    It uses styles (stored in a resource dictionary higher up the tree) that affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that implements INotifyPropertyChanged so the user can change display settings. That's it! So what gives? I've also noticed that when I empty and remove the ItemsControl, memory isn't freed. Over 5,000 instances of "CommandBindingCollection" and "WeakReference" are CREATED (according to ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives? :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.
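    For context, the "bindable RichTextBox" technique described above is usually implemented along these lines - a minimal sketch with hypothetical names, since the poster's actual class isn't shown:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;

        // Sketch only: WPF's RichTextBox.Document is not a dependency property,
        // so a derived class registers its own bindable one and forwards the value.
        public class BindableRichTextBox : RichTextBox
        {
            public static readonly DependencyProperty BoundDocumentProperty =
                DependencyProperty.Register("BoundDocument", typeof(FlowDocument),
                    typeof(BindableRichTextBox), new PropertyMetadata(null, OnBoundDocumentChanged));

            public FlowDocument BoundDocument
            {
                get { return (FlowDocument)GetValue(BoundDocumentProperty); }
                set { SetValue(BoundDocumentProperty, value); }
            }

            private static void OnBoundDocumentChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // Forward the bound value to the underlying (non-bindable) Document property,
                // which must never be null.
                ((BindableRichTextBox)d).Document = (FlowDocument)e.NewValue ?? new FlowDocument();
            }
        }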

    Read the article

  • The .NET Rocks! Visual Studio 2010 Road Trip

    - by Laila
    Carl Franklin and Richard Campbell, the two .NET Rocks! radio show hosts, have decided to set off to 15 cities in the US, between April 19th and May 7th, in their DotNetMobile (a 30-foot RV). What for, you ask? Well, to drive around the US, meet up with .NET developers, and show off the latest and greatest in Visual Studio 2010 and .NET 4.0! Each evening, they stop in a city and host a three-hour event in front of a crowd of 100 to 300 developers, where Carl shows off media features in Silverlight 4 and their road trip tracking application, whilst Richard demos the web performance testing features of VS2010 using his portable server rig. But before they take to the stage, they have a special guest brought in - a rock star from the Visual Studio world - whom they interview for an hour as a .NET Rocks! episode. So far they've had - amongst others - Phil Haack, a Program Manager with the ASP.NET team working on ASP.NET MVC; Dan Fernandez, an Evangelism Manager in the Developer and Platform Evangelism team at Microsoft; and Beth Massi, Senior Program Manager on the Visual Studio Community Team at Microsoft. I love the fact that the audience gets a chance to participate, ask questions and have a great laugh, as you can hear in the first episode! Along the way, the .NET Rocks guys are giving away great prizes (including .NET Reflector Pro, ANTS Memory Profiler licenses, and 40" LCD TVs!). Even more out of the ordinary, at each stop on the road trip, one lucky attendee (who entered the Ride Along competition) gets to jump in the RV with Carl and Richard and ride along with them to the next stop on the road trip. How cool is that! Richard told us: "Our first winner in Mountain View was Eric Ziko. I was looking for him to announce that he had won, when he found us and gave us a bottle of scotch he had brought just to say 'thanks for the great show'. We all had a toast from the bottle the next night when he headed back home." Cheeky! There's still space at a few of these events, so if you want to attend, register now, because it's first come, first served. We're grateful to Richard and Carl for giving us the opportunity to sponsor this major .NET event! A unique .NET adventure worth following, for sure. Cheers, Laila

    Read the article

  • Inside Red Gate - Divisions

    - by Simon Cooper
    When I joined Red Gate back in 2007, there were around 80 people in the company. Now, around 3 years later, it's grown to more than 200. It's a constant battle against Dunbar's number - the maximum number of people you can keep track of in a social group - to try and maintain that 'small company' feel that attracted me and so many others to apply in the first place. There are several strategies the company has developed over the years to try and mitigate the effects of Dunbar's number. One of the main ones has been divisionalisation. Divisions The first division, .NET, appeared around the same time that I started in 2007. This combined the development, sales, marketing and management of the .NET tools (then, ANTS Profiler v3) into a separate section of the office. The idea was to increase the cohesion and communication between the different people involved in the entire lifecycle of the tools: from initial product development, through to marketing, then to customer support, who would feed back to the development team. This was such a success that the other development teams were re-worked around this model in 2009. Nowadays there are four divisions - SQL Tools, DBA, .NET, and New Business. Along the way there have been various tweaks to the details - the sales teams have been merged into the divisions, marketing and product support have been (mostly) centralised - but the same basic model remains. So, how has this helped? As Red Gate has continued to grow over the years, divisionalisation has turned Red Gate from a monolithic software company into what one person described as a 'federation of small businesses'. Each division is free to structure itself as it sees fit; it's free to decide what to concentrate development work on, organise its own newsletters and webinars, and decide its own release schedule. Each division is its own small business. In terms of numbers, the size of each division varies from 20 people (.NET) to 52 (SQL Tools) - well below Dunbar's number. From a developer's perspective, this means the organisational structure is very flat and wide - there are only two layers between myself and the CEOs (not that it matters much; everyone can go and have a chat with Neil or Simon, or anyone else in between, whenever they want - provided you can catch them at their desk!). As Red Gate grows and expands into new areas, new divisions will be created as needed and old ones merged or disbanded, but the division structure will help to maintain that small-company feel that keeps Red Gate working as it does.

    Read the article

  • Using AppendBuffers in Unity for terrain generation

    - by Wardy
    Like many others I figured I would try to make the most of the monster processing power of the GPU, but I'm having trouble getting the basics in place.

    CPU code:

        using UnityEngine;
        using System.Collections;

        public class Test : MonoBehaviour
        {
            public ComputeShader Generator;
            public MeshTopology Topology;

            void OnEnable()
            {
                var computedMeshPoints = ComputeMesh();
                CreateMeshFrom(computedMeshPoints);
            }

            private Vector3[] ComputeMesh()
            {
                var size = (32*32) * 4; // 4 points added for each x,z pos
                var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
                Generator.SetBuffer(0, "vertexBuffer", buffer);
                Generator.Dispatch(0, 1, 1, 1);
                var results = new Vector3[size];
                buffer.GetData(results);
                buffer.Dispose();
                return results;
            }

            private void CreateMeshFrom(Vector3[] generatedPoints)
            {
                var filter = GetComponent<MeshFilter>();
                var renderer = GetComponent<MeshRenderer>();
                if (generatedPoints.Length > 0)
                {
                    var mesh = new Mesh { vertices = generatedPoints };
                    var colors = new Color[generatedPoints.Length];
                    var indices = new int[generatedPoints.Length];
                    //TODO: build this differently based on the topology of the mesh being generated
                    for (int i = 0; i < indices.Length; i++)
                    {
                        indices[i] = i;
                        colors[i] = Color.blue;
                    }
                    mesh.SetIndices(indices, Topology, 0);
                    mesh.colors = colors;
                    mesh.RecalculateNormals();
                    mesh.Optimize();
                    mesh.RecalculateBounds();
                    filter.sharedMesh = mesh;
                }
                else
                {
                    filter.sharedMesh = null;
                }
            }
        }

    GPU code:

        #pragma kernel Generate

        AppendStructuredBuffer<float3> vertexBuffer : register(u0);

        void genVertsAt(uint2 xzPos)
        {
            //TODO: put some height generation code here.
            // could even run marching cubes / dual contouring code.
            float3 corner1 = float3( xzPos[0],     0, xzPos[1]     );
            float3 corner2 = float3( xzPos[0] + 1, 0, xzPos[1]     );
            float3 corner3 = float3( xzPos[0],     0, xzPos[1] + 1 );
            float3 corner4 = float3( xzPos[0] + 1, 0, xzPos[1] + 1 );
            vertexBuffer.Append(corner1);
            vertexBuffer.Append(corner2);
            vertexBuffer.Append(corner3);
            vertexBuffer.Append(corner4);
        }

        [numthreads(32, 1, 32)]
        void Generate (uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID)
        {
            uint2 currentXZ = uint2( groupId.x * 32 + threadId.x, groupId.z * 32 + threadId.z );
            genVertsAt(currentXZ);
        }

    Can anyone explain why, when I call "buffer.GetData(results);" on the CPU after the compute dispatch call, my buffer is full of Vector3(0,0,0)? I'm not expecting any y values yet, but I would expect a bunch of thread indexes in the x,z values of the Vector3 array. I'm not getting any errors in any of this code, which suggests it's correct syntax-wise, but maybe the issue is a logical bug. Also: yes, I know I'm generating 4,096 Vector3's and then basically round-tripping them. However, the purpose of this code is purely to learn how round-tripping works between CPU and GPU in Unity.
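    As a hedged debugging aside (not from the original post): Unity can copy an append buffer's hidden element counter into a second buffer, which tells you whether the kernel appended anything at all. This sketch reuses the "buffer" variable from ComputeMesh above and would need to run before buffer.Dispose():

        // Sketch: read back how many elements the GPU actually appended.
        var countBuffer = new ComputeBuffer(1, sizeof(int), ComputeBufferType.Raw);
        ComputeBuffer.CopyCount(buffer, countBuffer, 0);  // copies the append counter
        int[] appended = new int[1];
        countBuffer.GetData(appended);
        Debug.Log("elements appended: " + appended[0]);   // 0 here means the kernel never wrote
        countBuffer.Dispose();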

    Read the article

  • Varying performance of MSVC release exe

    - by Andrew
    Hello everyone, I am curious what could be the reason for the highly varying performance of the same executable. Sometimes I run it and it takes 20 seconds, and sometimes it takes 110. The source is compiled with MSVC in Release mode with standard options. The code is here:

        vector<double> Un;
        vector<double> Ucur;
        double *pUn, *pUcur;
        ...
        // time marching
        for (old_time=time-logfreq, time+=dt; time <= end_time; time+=dt)
        {
            for (i=1, j=Un.size()-1, pUn=&Un[1], pUcur=&Ucur[1]; i < j; ++i, ++pUn, ++pUcur)
            {
                *pUcur = (*pUn)*(1.0-0.5*alpha*( *(pUn+1) - *(pUn-1) ));
            }
            Ucur[0] = (Un[0])*(1.0-0.5*alpha*( Un[1] - Un[j] ));
            Ucur[j] = (Un[j])*(1.0-0.5*alpha*( Un[0] - Un[j-1] ));
            Un = Ucur;
        }

    Read the article

  • Application Performance: The Best of the Web

    - by Michaela Murray
    Wisdom: "A deep understanding and realization [...] resulting in the ability to apply perceptions, judgements and actions. It is also the comprehension of what is true coupled with optimum judgment as to action." - Wikipedia

    We’re writing a book for ASP.NET developers, and we want you to be a part of it. We know that there’s a huge amount of web developer wisdom that never gets shared, and we want to find those golden nuggets of knowledge and experience, and make sure everyone can learn from them. Right now, we want to find out about your top tips, hard-won lessons, and sage advice for avoiding, finding, and fixing application performance problems. If you work with .NET and SQL, even better – a lot of application performance relies on the interaction with the database, so we want to hear from you! “How Do You Want Me To Be Involved?” Right! Details! We want you, our most excellent readers, to email us with the best advice you would give to other developers for getting the best performance out of their applications. It doesn’t matter if your advice is for newbies or veterans, .NET or SQL – so long as it’s about application performance, we want to hear from you. (And if you think that there’s developer wisdom out there that “everyone knows”, a) I’m willing to bet you could find someone who doesn’t know about it, and b) it probably bears repeating anyway!) “I’m Interested. What Can You Do For Me?” Excellent question. For starters, there’s a chance to win a Microsoft Surface (the tablet, not the table-top). Once all the ASP.NET wisdom has been collected, tallied, and labelled, it will then be weighed and measured by a team of expert judges (whose identities are still a closely-guarded secret). The top tip in each of the SQL and .NET categories will win its author their very own MS Surface. But that’s not all! We can also give you… immortality! More details? Ok. We’ll be collecting all of the tips sent in by our readers (and we can’t wait to learn from you all), and with the help of our Simple-Talk editors, we will publish and distribute your combined and documented knowledge as a free, community-created, professionally typeset eBook. You will naturally be credited by name / pseudonym / twitter handle / GitHub username / StackOverflow profile / whatever, as the clearly ingenious author of hot performance tips. The Not-Very-Fine Print Here’s the breakdown: We want to bring together the best application performance knowledge from ASP.NET developers. Closing date for submissions will be 9am GMT, December 4th. Submissions should be made by email – [email protected] Submissions will be judged by a panel of expert judges (who will be revealed soon). The top submission in each of the SQL and .NET categories will win a Microsoft Surface. ALL the tips which make it through the judging process will be polished by Simple-Talk editors, and turned into a professionally typeset eBook, which will be freely available, and promoted alongside the ANTS Performance Profiler tool. Anyone whose entry makes it into the book will be clearly and profusely credited in the method of their choice (or can remain anonymous). The really REALLY short version Share what you know about ASP.NET application performance for a chance to win a Microsoft Surface, and then get your name credited in a slick eBook with top-notch production values. For more details, see above. We can’t wait to learn from you!

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, which costs 614€ and is currently on sale for 306€ as a special offer. It includes one year of free upgrades. The uneven € numbers are calculated from the 799€ list price and the 50% discount. The UI is already Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace, like ANTS, is a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get, in the All Methods grid, a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code. But at least there is also a memory profiler included. For this I have to choose “Memory Profiler” as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really miss the thread stack window from YourKit. How am I supposed to get a clue about where to look when much memory is allocated and CPU consumption is high? The Snapshot summary gives a rough overview, which is ok for a first impression. Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft. By default mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked almost empty. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you. That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them - and you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • .NET "must-have" development tools

    - by nzpcmad
    James Avery wrote a classic article a while back entitled Ten Must-Have Tools Every Developer Should Download Now, which is a companion to Visual Studio Add-Ins Every Developer Should Download Now, and Scott Hanselman has an excellent list on his blog - but if you were on a desert island and were only allowed three .NET development tools, which ones would you pick? Update: assuming you already have an IDE like Visual Studio... Update (5): up to 08/01, the current state of play:
        Reflector: 13
        Resharper: 9
        NUnit + TestDriven.Net: 7
        Refactor Pro: 4
        Process Explorer (other Sysinternals): 3
        SnippetCompiler: 3
        CodeRush: 3
        MSDN Library: 2
        LinqPad: 2
        Cruisecontrol.net: 2
        VMWare: 2
        RhinoMocks: 2
        Fiddler: 2
        PowerShell: 2
        PowerCommands for VS 2008: 1
        Sandcastle: 1
        SQL Profiler: 1
        Redgate ANTS profiler: 11
        NCover: 1
        VisualSVN: 1
        Rubber Ducky: 1
        WinMerge: 1
        NAnt: 1
        ViEmu: 1
        AnkhSVN: 1
        dotTrace Profiler: 1
        BeyondCompare: 1
        DPack VS Plugin: 1
        WCF Trace Viewer (SDK): 1
        xUnit.net: 1
        SourceGear DiffMerge: 1
        Ghostdoc: 1
        Expression Studio: 1
        XAML Pad: 1
        KaXaml: 1
        Blender for 3D modeling: 1
        Snoop (a WPF tool): 1
        DiffMerge: 1
        DPack: 1
        NDepend: 1
        Kodos: 1
        WatiN: 1
        HTTPWatch Basic Edition: 1
        Paint.Net: 1
        Mole For VS: 1
    What I find particularly interesting about this is that "NUnit + TestDriven.Net" is right up there in third place, which shows the growing emphasis on testing as an integral part of the development process, rather than as an adjunct which is simply bolted on. And I'm somewhat perplexed that CodeSmith didn't receive a single vote.

    Read the article

  • Memory leak when using Workflow 4.0 SqlWorkflowInstanceStore and PersistableIdleAction.Unload

    - by Rohland
    Hi, this particular problem is driving me nuts. I wonder if anyone has experienced a similar problem. If I load up a workflow, then unload it and take a memory snapshot, the result is predictable: my workflow is no longer in memory. However, if I load up a workflow, set the PersistableIdle action to PersistableIdleAction.Unload and let the workflow idle, the workflow remains in memory even though the Unload action fires. I used ANTS Memory Profiler to debug this issue; its object retention graph shows that an internal object is hanging onto my workflow instance. Can anyone else verify this problem? My code amounts to the following:
    1. Create the SqlWorkflowInstanceStore and set up the lock owner handle (at this point I take a memory snapshot)
    2. Create an instance of Workflow1
    3. Set the PersistableIdle action
    4. Apply the instance store to Workflow1
    5. Set up action event handlers for Idle, Unload, UnhandledException etc.
    6. Persist the workflow instance
    7. Run the workflow instance
    8. Wait for the instance to idle (caused by a Delay activity)
    9. Ensure the Unload action is fired (at this point I take a second memory snapshot)
    From the retention graph, it is clear that the only object referencing Workflow1 is some internal event handler's result, which I have no ability to dispose of. Any clues?
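    A minimal sketch of that sequence, for readers who want to reproduce it (the connection string is a placeholder; the calls are the standard WF4 ones from System.Activities, System.Activities.DurableInstancing and System.Runtime.DurableInstancing):

        // Sketch of the setup described above, assuming .NET 4.0 WF4 assemblies.
        var store = new SqlWorkflowInstanceStore("Server=.;Database=WF;Integrated Security=True");
        var handle = store.CreateInstanceHandle();
        var ownerView = store.Execute(handle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
        store.DefaultInstanceOwner = ownerView.InstanceOwner;   // lock owner handle set up

        var app = new WorkflowApplication(new Workflow1());
        app.InstanceStore = store;
        app.PersistableIdle = e => PersistableIdleAction.Unload;      // unload when the Delay idles
        app.Idle = e => Console.WriteLine("Idle fired");
        app.Unloaded = e => Console.WriteLine("Unload fired");        // fires, yet the instance stays reachable
        app.Persist();
        app.Run();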

    Read the article

  • AntFarm anti-pattern -- strategies to avoid, antidotes to help heal from

    - by alchemical
    I'm working on a 10-page web site with a database back-end. There are 500+ objects in use, trying to implement the MVP pattern in ASP.Net. I'm tracing the code execution from a single page; my finger has been on F11 in Visual Studio for about 40 minutes and there seems to be no end - possibly 1000+ method calls for one web page! If it was just 50 objects that would be one thing; however, code execution snakes through all these objects just like millions of ants frantically working in their giant dirt-mound house, riddled with object tunnels. Hence, a new anti-pattern is born: AntFarm. AntFarm is also known as "OO-Madness", "OO-Fever", OO-ADD, or simply design-pattern junkie. This is not the first time I've seen this, nor have my associates at other companies. It seems that this style is being actively propagated, or in any case is a misunderstanding of the numerous OO/DP gospels going around... I'd like to introduce an anti-pattern to counter the anti-pattern: GSD or "Get Stuff Done", AKA "Get Sh** Done", AKA GRD (GetRDone). This pattern focuses on just what it says: getting stuff done, in a simple way. I may try to outline it more in a later post, or please share your ideas on this antidote pattern. Anyway, I'm in the midst of a great example of the AntFarm anti-pattern as I write (as a bonus, there is no documentation or comments). Please share your thoughts on how this anti-pattern has become so prevalent, how we can avoid it, and how one can undo or deal with this pattern in a live system one must work with!

    Read the article

  • Problem disposing a class in a Dictionary: it is still in heap memory although using GC.Collect

    - by Bahgat Mashaly
    Hello, I have a problem disposing a class stored in a Dictionary. This is my code:

        private Dictionary<string, MyProcessor> Processors = new Dictionary<string, MyProcessor>();

        private void button1_Click(object sender, EventArgs e)
        {
            if (!Processors.ContainsKey(textBox1.Text))
            {
                Processors.Add(textBox1.Text, new MyProcessor());
            }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            MyProcessor currnt_processor = Processors[textBox1.Text];
            Processors.Remove(textBox2.Text);
            currnt_processor.Dispose();
            currnt_processor = null;
            GC.Collect();
        }

        public class MyProcessor : IDisposable
        {
            private bool isDisposed = false;
            string x = "";

            public MyProcessor()
            {
                for (int i = 0; i < 20000; i++)
                {
                    // this line is only to increase the memory usage, to tell whether the class is disposed or not
                    x = x + "gggggggggggg";
                }
                this.Dispose();
                GC.SuppressFinalize(this);
            }

            public void Dispose()
            {
                this.Dispose(true);
                GC.SuppressFinalize(this);
            }

            public void Dispose(bool disposing)
            {
                if (!this.isDisposed)
                {
                    isDisposed = true;
                    this.Dispose();
                }
            }

            ~MyProcessor()
            {
                Dispose(false);
            }
        }

    I use ANTS Memory Profiler to monitor heap memory. The disposing only works when I remove all keys from the dictionary. How can I destroy the class in heap memory? Thanks in advance
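    For contrast, the canonical dispose pattern - a sketch, not the poster's code - never calls Dispose() from the constructor, and keep in mind that disposing does not remove an object from the heap; only an object with no remaining references can be collected:

        // Standard IDisposable pattern sketch.
        public class MyProcessor : IDisposable
        {
            private bool isDisposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);   // finalizer no longer needed
            }

            protected virtual void Dispose(bool disposing)
            {
                if (isDisposed) return;
                if (disposing)
                {
                    // release managed resources here
                }
                // release unmanaged resources here
                isDisposed = true;
            }

            ~MyProcessor()
            {
                Dispose(false);
            }
        }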

    Read the article

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we ran into the need to switch between .config files at runtime, but this is turning out to be a nightmare. Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files during runtime, but the problem is: I can programmatically edit the application's .config file numerous times, but the changes will only take effect once. I've been searching a long time for a way to address this problem, but I still wasn't successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance! --- UPDATE 01-06-10 --- There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure the performance of a remote application. --- End Update --- Here's what I'm doing:

        public void UpdateAppSettings(string key, string value)
        {
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            foreach (XmlElement item in xmlDoc.DocumentElement)
            {
                foreach (XmlNode node in item.ChildNodes)
                {
                    if (node.Name == key)
                    {
                        node.Attributes[0].Value = value;
                        break;
                    }
                }
            }
            xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            System.Configuration.ConfigurationManager.RefreshSection("section/subSection");
        }

    Read the article

  • High runtime for Dictionary.Add for a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:

        private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
        {
            Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);
            foreach (NodeConnection con in connections)
            {
                ...
                resultSet.Add(nodeIdPair, newEdge);
            }
            return resultSet;
        }

    In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary() (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms per item. Any idea how to make the Add operation faster? Thanks in advance, Frank
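    To isolate whether resizing is really the cost, a minimal sketch (using simple int keys rather than the IdPair type above) that times a pre-sized dictionary against a default-sized one:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        // Sketch: compare fill times with and without a capacity hint.
        var sw = Stopwatch.StartNew();
        var presized = new Dictionary<int, string>(300000);
        for (int i = 0; i < 300000; i++) presized.Add(i, "x");
        Console.WriteLine("pre-sized: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        var grown = new Dictionary<int, string>();          // starts small, resizes repeatedly
        for (int i = 0; i < 300000; i++) grown.Add(i, "x");
        Console.WriteLine("default:   " + sw.ElapsedMilliseconds + " ms");

    If the two versions differ little, the per-Add cost more likely sits in the key type's GetHashCode/Equals implementations (or in hash collisions) than in internal resizing.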

    Read the article

  • Iteration over a LINQ to SQL query is very slow

    - by devzero
    I have a view, AdvertView, in my database; this view is a simple join between some tables (advert, customer, properties). Then I have a simple LINQ query to fetch all adverts for a customer:

        public IEnumerable<AdvertView> GetAdvertForCustomerID(int customerID)
        {
            var advertList = from advert in _dbContext.AdvertViews
                             where advert.Customer_ID.Equals(customerID)
                             select advert;
            return advertList;
        }

    I then wish to map this to model items for my MVC application:

        public List<AdvertModelItem> GetAdvertsByCustomer(int customerId)
        {
            List<AdvertModelItem> lstAdverts = new List<AdvertModelItem>();
            List<AdvertView> adViews = _dataHandler.GetAdvertForCustomerID(customerId).ToList();
            foreach (AdvertView adView in adViews)
            {
                lstAdverts.Add(_advertMapper.MapToModelClass(adView));
            }
            return lstAdverts;
        }

    I was expecting to have some performance issues with the SQL, but the problem seems to be with the .ToList() call. I'm using ANTS Performance Profiler and it reports that the total runtime of the function is 1,400 ms, with 850 ms of that in ToList(). So my question is: why does the ToList call take such a long time here?
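    One detail worth noting, as a sketch rather than a verdict: LINQ to SQL defers execution, so the database round trip for the query above happens inside ToList(), and the profiler attributes that time to it. Assuming _dbContext is a System.Data.Linq.DataContext, logging makes this visible:

        // Sketch: the query definition issues no SQL; enumeration does.
        _dbContext.Log = Console.Out;                     // prints the generated SQL when it runs
        var advertList = from advert in _dbContext.AdvertViews
                         where advert.Customer_ID.Equals(customerID)
                         select advert;                   // nothing hits the database yet
        var list = advertList.ToList();                   // the SELECT (view join included) executes here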

    Read the article

  • How to detect a .NET WPF memory leak or long GC runs?

    - by Néstor Sánchez A.
    I have the following very strange situation and problem: a .NET 4.0 application for diagram editing (WPF). It runs fine on my PC: 8 GB RAM, 3.0 GHz, i7 quad-core. While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information), Task Manager shows, as expected, some memory usage "jumps" (up and down). These mem-usage "jumps" also keep occurring AFTER user interaction has ended. Maybe this is the GC cleaning/reorganizing memory? To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction. PROBLEM: it freezes/hangs after seconds or minutes of usage on some slow/weak laptops/netbooks of my beta testers (under 2 GHz of speed and under 2 GB of RAM). I was thinking of a memory leak, but... EDIT: there is also the case where the memory usage grows and grows until collapse (only on slow machines). In a Windows XP Mode machine (VM in Win 7) with only 512 MB of RAM assigned, it works fine without mem-usage "jumps" after user interaction (no GC cleaning?!). So I really have a big problem, because I cannot reproduce the error, I can only see this strange behaviour (mem jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox"). Any ideas on what's happening and how to solve it?
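    A low-tech sketch for watching memory on those weaker machines where a profiler can't run (names illustrative, not from the original post): periodically log the managed heap against the process's private bytes:

        // Sketch: separate managed-heap growth from whole-process growth.
        var proc = System.Diagnostics.Process.GetCurrentProcess();
        proc.Refresh();
        long managedBytes = GC.GetTotalMemory(false);   // managed heap only
        long privateBytes = proc.PrivateMemorySize64;   // whole process
        Console.WriteLine(string.Format("managed: {0} MB, private: {1} MB",
            managedBytes / (1024 * 1024), privateBytes / (1024 * 1024)));

    If the managed number stays flat while private bytes keep climbing, the growth is likely outside the managed heap (native allocations, handles, fragmentation) rather than leaked managed objects.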

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal — there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you’ll need to keep in mind when picking out the right SD card for your device. Speed Class In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you’re a professional photographer taking photos in rapid succession on a DSLR camera, saving them in high-resolution RAW format, you’ll want a fast SD card so your camera can save them as fast as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you’re just taking a few photos on a typical consumer camera, or you’re just using an SD card to store some media files on your smartphone, the speed isn’t as important. Manufacturers use “speed classes” to measure an SD card’s speed. The SD Association that defines the SD card standard doesn’t actually define the exact speeds associated with these classes, but they do provide guidelines. There are four different speed classes — 10, 6, 4, and 2. 10 is the fastest, while 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for “full HD video recording” and “HD still consecutive recording.” There are also two Ultra High Speed (UHS) speed classes, but they’re more expensive and are designed for professional use. UHS cards are designed for devices that support UHS. Each speed class has an associated logo that appears on the card and its packaging. You’ll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you’re shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras. Even a cheap smartphone can record HD video, after all. An SD card’s speed class is identified on the SD card itself. You’ll also see the speed class on the online store listing or on the card’s packaging when purchasing it. For example, in the photo that accompanied this article, the middle SD card is speed class 4, while the two other cards are speed class 6. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced. They may be slower than even a class 2 card. Physical Size Different devices use different sizes of SD cards. You’ll find standard-size SD cards, miniSD cards, and microSD cards. Standard SD cards are the largest, although they’re still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards. They have the standard “cut corner” design. miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have a smaller size. microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards. They’re also used in many other devices, such as tablets.
    SD cards will only fit into matching slots. You can’t plug a microSD card into a standard SD card slot — it won’t fit. However, you can purchase an adapter that allows you to plug a smaller SD card into a larger SD card’s form and fit it into the appropriate slot. Capacity Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don’t stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB in size — although 4 GB is non-standard. The SDHC standard was created later, and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size. You’ll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC. In fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common. When buying an SD card, you’ll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports and consider what speed and capacity you’ll actually need. Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about half way through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~ mid 2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose, and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take for them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. E.g., I'm confident I could switch fully back into C++ mode with a couple of weeks' work (I mostly use C, Python, and C# daily), but I don't list myself as being 'proficient' in C++ on my CV, or apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV, and I would like feedback on them:
    1. I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took an undergrad and a grad-level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively.
    2. I have a PhD, but it's not in CS... I've been debating whether I should remove it from my CV, and whether it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was).
    3. I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years, I can still take marching orders when needed? Do I just say so outright?
    Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in, and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself, you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector, .NET Reflector VS, and .NET Reflector VSPro. The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio, so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model. All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people find useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio, to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector.
    When this option is selected, the add-in decompiles the given assembly. Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information, so you can navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation; they are then decompiled, and you can debug them as if they were your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps, and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists, who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I reiterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7, and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • Profiling Startup Of VS2012 – YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ single-place license, 409€ with 1 year of support and upgrades) and feature set. You get a performance and a memory profiler in one package, for which you normally need to pay extra with the other vendors. As an interesting side note, the profiler UI is written in Java, because they also sell Java profilers with the same feature set. To get all methods of a VS startup, you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU times, which allows you to optimize CPU-hungry operations, but you will never see a Thread.Sleep(10000) blocking the UI in this mode. Like all the others, it can profile processes started from within the profiler, but it can also profile the next started process, or all of them. As usual, it can profile in sampling and tracing modes. But since it is a memory profiler as well, it does by default also record all object allocations > 1MB. With allocation recording enabled, VS2012 did crash, but without allocation recording there were no problems. The CPU tab contains the timeline of the application, and when you click in the graph you see the call stacks of all threads at that time. This is really a nice feature. When you select a time region, you see the CPU usage estimation for this time window. I have seen many applications consuming 100% CPU only because they created garbage like crazy. For this, the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab, only that the CPU graph (green) is missing. All relevant information except GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetimes. This is useful to see thread interactions, or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get, below it, a list of all called methods. There you can sort for methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which called you. Callees is the list of methods called directly or indirectly by your method, as a flat list. This is not a call stack, but it is still very useful to see which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stacked view of the previous view. This helps a lot in understanding who called each method at run time. You would get the same view with a debugger for one call invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatsoever) to get a better understanding of which objects were potentially allocating this stuff. YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single-license price makes it very attractive to buy outright. Although it has a Java UI, it is very responsive, and its memory consumption is considerably lower compared to dotTrace and ANTS profiler.
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject the profiler into newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations. Next: Telerik JustTrace

    Read the article

  • Asp.net MVC and MOSS 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to serverfault as well, because some MOSS admin may have some info for me too. A bit of explanation first (without Asp.net MVC): is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application that has to do with these technologies:
    - MOSS 2010
    - personal client certificates authentication (most probably on USB keys)
    - Active Directory Federation Services
    - a separate SQL DB that would serve application-specific data (separate as in not being part of the MOSS DB)
    How should it work?
    1. Users should authenticate into MOSS 2010 using personal certificates
    2. There would be a certain part of MOSS related to my custom application
    3. This application should only authorize certain users via AD FS - I guess these users should have a certain security claim attached to them
    4. This application should manage users (that have access to this app) with additional (app-specific) security claims related to this application (as additional application-level authorization rights for individual application parts)
    5. This application should use a custom SQL 2008 DB heavily, with its own data
    6. This application should have the possibility to integrate with external systems as well (Exchange, for instance, to inject calendar entries; ERP systems; etc.)
    7. This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them, so only users with sufficient rights would be able to view/open these files.
    Why Asp.net MVC then? I'm very well versed in Asp.net MVC (also with the latest version) and I haven't done anything on Sharepoint since version 2003 (which doesn't do me any good or prepare me for the latest version in any way, shape or form). This project will most probably be a death-march project, so I would rather write my application as a UI-rich Asp.net MVC application and somehow integrate it into MOSS. But not only via a link, because I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using Asp.net MVC I would at least have the possibility to finish on time, or be less death-marching. Is this at all possible? Questions:
    1. Is it possible to integrate Asp.net MVC into MOSS as described above?
    2. If integration is not possible, would it be possible to create a completely MOSS-based application that would work as described?
    3. Which parts of MOSS 2010 should I use to accomplish what I need?

    Read the article

  • WPF TreeView Virtualization

    - by Carlo
    I'm trying to figure out this virtualization feature. I'm not sure if I'm understanding it wrong or what's going on, but I'm using the ANTS memory profiler to check the number of items in a virtualized TreeView, and it just keeps increasing. I have a TreeView with 1,001 items (1 root, 1,000 sub-items), and I always end up with 1,001 TreeViewItems, 1,001 ToggleButtons and 1,001 TextBlocks. Isn't virtualization supposed to re-use the items? If so, why would I have 1,001 of each? Also, CleanUpVirtualizedItem never fires. Let me know if I'm understanding this wrong and if you have resources on how to use this; I've searched all over the internet but haven't found anything useful. EDIT: Even the memory used by the tree grows from approx. 4 MB to 12 MB when I expand and scroll through all the items. This is my code. XAML:

        <Window x:Class="RadTreeViewExpandedProblem.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <TreeView x:Name="treeView"
                          VirtualizingStackPanel.IsVirtualizing="True"
                          VirtualizingStackPanel.CleanUpVirtualizedItem="TreeView_CleanUpVirtualizedItem">
                    <TreeView.ItemsPanel>
                        <ItemsPanelTemplate>
                            <VirtualizingStackPanel />
                        </ItemsPanelTemplate>
                    </TreeView.ItemsPanel>
                </TreeView>
            </Grid>
        </Window>

    C#:

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();
                TreeViewItem rootItem = new TreeViewItem() { Header = "Item Level 0" };
                for (int i = 0; i < 1000; i++)
                {
                    TreeViewItem itemLevel1 = new TreeViewItem() { Header = "Item Level 1" };
                    itemLevel1.Items.Add(new TreeViewItem());
                    rootItem.Items.Add(itemLevel1);
                }
                treeView.Items.Add(rootItem);
            }

            private void TreeView_CleanUpVirtualizedItem(object sender, CleanUpVirtualizedItemEventArgs e)
            {
            }
        }

    Let me know. Thanks.
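    One point worth adding: UI virtualization can only recycle containers that the ItemsControl generates itself. The code above constructs 1,001 TreeViewItem objects explicitly, so all 1,001 exist in memory no matter what the panel does. A hedged sketch of the data-bound alternative (the Node class is hypothetical, and the XAML would also need a HierarchicalDataTemplate binding its ItemsSource to Children):

        // Sketch: bind plain data objects and let the TreeView generate
        // (and virtualize) its own TreeViewItem containers.
        public class Node
        {
            public string Header { get; set; }
            public List<Node> Children { get; private set; }
            public Node() { Children = new List<Node>(); }
        }

        // in the window constructor:
        var root = new Node { Header = "Item Level 0" };
        for (int i = 0; i < 1000; i++)
            root.Children.Add(new Node { Header = "Item Level 1" });
        treeView.ItemsSource = new List<Node> { root };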

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this, plus a dozen other posts/blogs. I have an ASP.NET app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n");
        }

        protected void Application_OnEnd(Object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n");
        }

        void Application_End(object sender, EventArgs e)
        {
            HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime",
                BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null);
            if (runtime == null)
                return;
            string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason;
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n",
                shutDownMessage, shutDownStack, shutdownReason));
        }

        void Application_Error(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nApplication_Error\r\n\r\n");
        }

    Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know they work, because there are entries for touching the web.config or /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException, which is caught in Application_Error. We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .NET, not just our app's pool, correct? We've also tried:

        var oPerfCounter = new PerformanceCounter();
        oPerfCounter.CategoryName = "Process";
        oPerfCounter.CounterName = "Virtual Bytes";
        oPerfCounter.InstanceName = "iisExpress";
        logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes");

    but we don't have permission in shared hosting. I've monitored the app on a dev server, with the same requests that caused the recycles in production and with ANTS Memory Profiler attached, and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these:
    1. How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle?
    2. Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called?
    3. How else can I determine what is causing these recycles?
    Thanks.

    Read the article

  • Asp.net MVC/Silverlight and Sharepoint 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to ServerFault as well, because some MOSS admin may have some info for me too.

    Additional note 1: I've found this document (Asp.net MVC 2 & Sharepoint integration); if anybody with sufficient experience is willing to comment on its content, can it be used in my described scenario or not?

    Additional note 2: I've discovered (later) that Silverlight is supported in Sharepoint 2010, so I'm considering it as well. So if anyone would comment on Silverlight integration too.

    A bit of explanation first (without Asp.net MVC/Silverlight): Is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application that involves these technologies:

    - MOSS 2010
    - Personal client certificate authentication (most probably on USB keys)
    - Active Directory Federation Services
    - A separate SQL DB that would serve application-specific data (separate as in not being part of the MOSS DB)

    How should it work?

    - Users should authenticate into MOSS 2010 using personal certificates.
    - There would be a certain part of MOSS that would be related to my custom application.
    - This application should only authorize certain users via AD FS - I guess these users should have a certain security claim attached to them.
    - This application should manage users (that have access to this app) with additional (app-specific) security claims related to this application (as additional application-level authorization rights for individual application parts); a hedged sketch of such a claim check follows below.
    - This application should use a custom SQL 2008 DB heavily, with its own data.
    - This application should have the possibility to integrate with external systems as well (Exchange, for instance, to inject calendar entries, ERP systems, etc.).
    - This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them, so only users with sufficient rights would be able to view/open these files.

    Why Asp.net MVC/Silverlight then? I'm very well versed in Asp.net MVC (also with the latest version), and I haven't done anything on Sharepoint since version 2003 (which doesn't prepare me for the latest version in any way, shape or form). This project will most probably be a death-march project, so I would rather write my application as a UI-rich Asp.net MVC application and somehow integrate it into MOSS - but not only via a link, because I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using Asp.net MVC I would at least have the possibility to finish on time, or be less death-marching. Is this at all possible?

    I haven't done any serious project using Silverlight, but I will sooner or later have to, so I'm also considering a jump into it at this moment, because it still might make this application's development easier than strict Sharepoint 2010.

    Questions:
    1. Is it possible to integrate Asp.net MVC/Silverlight into MOSS as described above?
    2. If integration is not possible, would it be possible to create a completely MOSS-based application that would work as described?
    3. Which parts of MOSS 2010 should I use to accomplish what I need?
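    On the AD FS part, a rough sketch of the per-application claim check mentioned in the list above, assuming the 2010-era Windows Identity Foundation API (Microsoft.IdentityModel); the attribute name and claim URI are placeholders I made up, so treat this as a sketch rather than a recipe:

        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.IdentityModel.Claims;

        // Gates an MVC controller or action on a claim issued by AD FS
        // (or added by the application itself for app-level rights).
        public class RequireClaimAttribute : AuthorizeAttribute
        {
            private readonly string claimType;
            private readonly string claimValue;

            public RequireClaimAttribute(string claimType, string claimValue)
            {
                this.claimType = claimType;
                this.claimValue = claimValue;
            }

            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                IClaimsIdentity identity = httpContext.User.Identity as IClaimsIdentity;
                if (identity == null)
                    return false;
                return identity.Claims.Any(
                    c => c.ClaimType == claimType && c.Value == claimValue);
            }
        }

        // Usage (the claim URI is illustrative):
        // [RequireClaim("http://schemas.example.com/claims/app-access", "true")]
        // public class ReportsController : Controller { ... }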

    Read the article

  • nhibernate configure and buildsessionfactory time

    - by davidsleeps
    Hi, I'm using NHibernate as the OR/M tool for an ASP.NET application, and the startup performance is really frustrating. Part of the problem is definitely my lack of understanding, but I've tried a fair bit (my understanding is definitely improving) and am still getting nowhere. Currently ANTS profiler shows Configure() taking 13-18 seconds and BuildSessionFactory() taking about 5 seconds. From what I've read, these times might actually be pretty good, but those reports were generally talking about hundreds upon hundreds of mapped entities... this project only has 10. I've combined all the mapping files into a single hbm mapping file, and this did improve things, but only down to the times mentioned above.

    I guess, are there any "traps for young players" that are regularly missed... obvious "I did this/have you enabled that/exclude file x/mark file y as z" etc.? I'll try the serialize-the-configuration thing to avoid the Configure() stage, but I feel that part shouldn't take that long for this number of entities, so it would essentially be hiding the current problem.

    I will post source code or configuration if necessary, but I'm not sure what to put in really... thanks heaps!

    edit (more info): I'll also add that once startup is completed, each page is extremely quick.

    Configuration code - hibernate.cfg.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" />
          </configSections>
          <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
            <session-factory>
              <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
              <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
              <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
              <property name="connection.connection_string_name">MyAppDEV</property>
              <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property>
              <property name="cache.use_second_level_cache">true</property>
              <property name="show_sql">false</property>
              <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
              <property name="current_session_context_class">managed_web</property>
              <mapping assembly="MyApp.Domain"/>
            </session-factory>
          </hibernate-configuration>
        </configuration>

    My SessionManager class, which is bound and unbound in an HttpModule for each request:

        Imports NHibernate
        Imports NHibernate.Cfg

        Public Class SessionManager
            Private ReadOnly _sessionFactory As ISessionFactory

            Public Shared ReadOnly Property SessionFactory() As ISessionFactory
                Get
                    Return Instance._sessionFactory
                End Get
            End Property

            Private Function GetSessionFactory() As ISessionFactory
                Return _sessionFactory
            End Function

            Public Shared ReadOnly Property Instance() As SessionManager
                Get
                    Return NestedSessionManager.theSessionManager
                End Get
            End Property

            Public Shared Function OpenSession() As ISession
                Return Instance.GetSessionFactory().OpenSession()
            End Function

            Public Shared ReadOnly Property CurrentSession() As ISession
                Get
                    Return Instance.GetSessionFactory().GetCurrentSession()
                End Get
            End Property

            Private Sub New()
                Dim configuration As Configuration = New Configuration().Configure()
                _sessionFactory = configuration.BuildSessionFactory()
            End Sub

            Private Class NestedSessionManager
                Friend Shared ReadOnly theSessionManager As New SessionManager()
            End Class
        End Class

    edit 2 (log4net results): I will post the bits that have a portion of time between them and cut out the rest...

        2010-03-30 23:29:40,898 [4] INFO  NHibernate.Cfg.Environment [(null)] - Using reflection optimizer
        2010-03-30 23:29:42,481 [4] DEBUG NHibernate.Cfg.Configuration [(null)] - dialect=NHibernate.Dialect.MsSql2005Dialect
        ...
        2010-03-30 23:29:42,501 [4] INFO  NHibernate.Cfg.Configuration [(null)] - Mapping resource: MyApp.Domain.Mappings.hbm.xml
        2010-03-30 23:29:43,342 [4] INFO  NHibernate.Dialect.Dialect [(null)] - Using dialect: NHibernate.Dialect.MsSql2005Dialect
        2010-03-30 23:29:50,462 [4] INFO  NHibernate.Cfg.XmlHbmBinding.Binder [(null)] - Mapping class: ...
        2010-03-30 23:29:51,353 [4] DEBUG NHibernate.Connection.DriverConnectionProvider [(null)] - Obtaining IDbConnection from Driver
        2010-03-30 23:29:53,136 [4] DEBUG NHibernate.Connection.ConnectionProvider [(null)] - Closing connection
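    On the serialize-the-configuration idea mentioned above, this is a minimal sketch in C# of the usual pattern (NHibernate's Configuration class is serializable); the cache path handling and the lack of an invalidation check are simplifications, so a real version should rebuild whenever the mapping assembly changes:

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using NHibernate.Cfg;

        public static class ConfigurationCache
        {
            // Deserializing a previously built Configuration skips the
            // costly XML parsing and validation that Configure() performs
            // on every startup; BuildSessionFactory() still runs as usual.
            public static Configuration Load(string cachePath)
            {
                if (File.Exists(cachePath))
                {
                    using (FileStream stream = File.OpenRead(cachePath))
                    {
                        return (Configuration)new BinaryFormatter().Deserialize(stream);
                    }
                }

                Configuration configuration = new Configuration().Configure(); // the slow step
                using (FileStream stream = File.Create(cachePath))
                {
                    new BinaryFormatter().Serialize(stream, configuration);
                }
                return configuration;
            }
        }

    In the SessionManager above, New Configuration().Configure() would then be replaced by a call to this loader (ported to VB, or referenced from a C# assembly).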

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >