Search Results

Search found 6511 results on 261 pages for 'everyday usage'.

Page 78 of 261

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she's certain to want something - but then is not happy when that's delivered. Nevertheless requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that.

    Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives?

    Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on "established practices". That's ok in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project.

    What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it's easier to check whether a decision falls in their scope. Sure, every project has its very own requirements. But all of them belong to just three different major categories, I think. Any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short I call those aspects:

    Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.

    Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users.

    Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect. That's because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair blower) you find that a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like) "unchangeability" (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, improved over an unforeseeable number of years so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise. That's because of the nature of changeability. It's different from performance or scalability. Also it's because a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability… that's a whole different thing. Nevertheless over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs - nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle...

    Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you see that for the target table SALES_TARGET we can use the UD5 property; when the mapping is assigned the code template (knowledge module) that has been modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly - the ODI 10g substitution references are also supported from within OWB's runtime.

    Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how the traditional mapping generation and preview works; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This then creates another tab with the prefix 'Code - <mapping name>' where the generated code is put. Scrolling down we find the last step with the indices being created - looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information); this gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code that was executed, plus any potential errors.

    Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable - getting under the hood of the code generation and tweaking bits and pieces. Fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (for demo purposes) we modify the create target table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing some great flexibility and extensibility for the data integration tools.

    Read the article

  • Cursors 1 Sets 0

    - by GrumpyOldDBA
    I had an interesting experience with a database I essentially know nothing about. On the server is a database which stores session state; Microsoft provides the code/database with .NET, so I'm told. Anyway, this database has sat happily on the production server for the past 4 years, I guess. We've finally made the upgrade to SQL 2008 and the ASPState database has also been upgraded. It seems most likely that the performance increase of our upgrade tipped the usage of this database into...(read more)

    Read the article

  • ASP.NET Asynchronous Pages and when to use them

    - by rajbk
    There have been several articles posted about using asynchronous pages in ASP.NET, but none of them go into detail as to when you should use them. I finally found a great post by Thomas Marquardt that explains the process in depth. He also addresses a key misconception:

    So, in your ASP.NET application, when should you perform work asynchronously instead of synchronously? Well, only 1 thread per CPU can execute at a time. Did you catch that? A lot of people seem to miss this point... only one thread executes at a time on a CPU. When you have more than this, you pay an expensive penalty - a context switch. However, if a thread is blocked waiting on work... then it makes sense to switch to another thread, one that can execute now. It also makes sense to switch threads if you want work to be done in parallel as opposed to in series, but up until a certain point it actually makes much more sense to execute work in series, again, because of the expensive context switch.

    Pop quiz: If you have a thread that is doing a lot of computational work and using the CPU heavily, and this takes a while, should you switch to another thread? No! The current thread is efficiently using the CPU, so switching will only incur the cost of a context switch. Ok, well, what if you have a thread that makes an HTTP or SOAP request to another server and takes a long time, should you switch threads? Yes! You can perform the HTTP or SOAP request asynchronously, so that once the "send" has occurred, you can unwind the current thread and not use any threads until there is an I/O completion for the "receive". Between the "send" and the "receive", the remote server is busy, so locally you don't need to be blocking on a thread, but instead make use of the asynchronous APIs provided in the .NET Framework so that you can unwind and be notified upon completion. Again, it only makes sense to switch threads if the benefit from doing so outweighs the cost of the switch.

    Read more about it in these posts:

    Performing Asynchronous Work, or Tasks, in ASP.NET Applications http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx

    ASP.NET Thread Usage on IIS 7.0 and 6.0 http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx

    PS: I generally do not write posts that simply link to other posts, but I think it is warranted in this case.
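    To make the advice concrete, here is a minimal sketch (an illustration, not code from either linked post; the page name and URL are made up) of an ASP.NET asynchronous page that unwinds the request thread while an HTTP call to a slow endpoint is in flight. It uses the Begin/End pattern that Page.RegisterAsyncTask expects and assumes Async="true" is set in the @Page directive:

    // Code-behind for a page whose @Page directive includes Async="true".
    using System;
    using System.Net;
    using System.Web.UI;

    public partial class ReportPage : Page
    {
        private HttpWebRequest _request;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Register the asynchronous work; ASP.NET calls Begin/End for us
            // and releases the worker thread between the two.
            RegisterAsyncTask(new PageAsyncTask(BeginGetData, EndGetData, TimeoutGetData, null));
        }

        private IAsyncResult BeginGetData(object sender, EventArgs e, AsyncCallback cb, object state)
        {
            // Hypothetical remote service URL - replace with the real endpoint.
            _request = (HttpWebRequest)WebRequest.Create("http://example.com/slow-service");
            return _request.BeginGetResponse(cb, state);
        }

        private void EndGetData(IAsyncResult ar)
        {
            // Runs on an I/O completion; no worker thread was blocked while waiting.
            using (WebResponse response = _request.EndGetResponse(ar))
            {
                // ... consume the response and bind it to the page ...
            }
        }

        private void TimeoutGetData(IAsyncResult ar)
        {
            // Called if the task exceeds the page's AsyncTimeout.
        }
    }

    Between BeginGetResponse and EndGetResponse no ASP.NET worker thread is tied up, which is exactly the "unwind and be notified upon completion" behaviour described in the quote.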

    Read the article

  • Start Developing a Multiplayer Online Client to host existing video game

    - by Rami.Shareef
    GameRanger, Garena, etc. I'm planning to start developing a small online client like the ones mentioned above (for friends' usage), where the player that hosts the game is the server himself. I was looking through the web for something to start with, but couldn't find any resources for this. I'm planning to do it with .NET technology, and I have decent development experience. Any good resources to start with? The game I'm aiming to support is WarCraft III: The Frozen Throne, as a start.
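    Not an answer to the resource question, but as a rough, hedged starting point: the "host is the server" model usually boils down to the hosting player opening a TCP listener that the other players' clients connect to before the actual game traffic takes over. A bare-bones sketch of that host side in .NET (the port number and greeting are made up) could look like this:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class LobbyHost
    {
        static void Main()
        {
            // The hosting player listens on an arbitrary port (9999 is just an example).
            var listener = new TcpListener(IPAddress.Any, 9999);
            listener.Start();
            Console.WriteLine("Hosting lobby, waiting for players...");

            while (true)
            {
                using (TcpClient player = listener.AcceptTcpClient())
                {
                    // In a real client you would exchange lobby/session information here
                    // and then hand the players over to the actual game.
                    byte[] hello = Encoding.UTF8.GetBytes("Welcome to the lobby\n");
                    player.GetStream().Write(hello, 0, hello.Length);
                }
            }
        }
    }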

    Read the article

  • How Assassin’s Creed Should Have Ended [Video]

    - by Asian Angel
    Altair is on the run yet again from Italy’s finest and keeps managing to hide in plain sight. But will his luck hold out or will his final attempt to escape end in tragedy? How It Should Have Ended: Video…: Assassin’s Creed [via Dorkly Bits]

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way you can easily find them, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model.

    This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor for the generated code.

    Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema that an element being reverse engineered is in as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us through the selection of the elements for which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them.

    After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. In the catalog explorer we select the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen, so we'll simply click Add to Project. This gives the following result (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: as you can see, it has relationships with entities from three groups other than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base; however, it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier.

    Now let's look at another situation, namely where we work with a single database while we want to have multiple models and, for each model, a separate code base.

    Situation two: grouping entities in separate models within the same project.

    To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and again select Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases.

    To see this variant of the grouping feature - seeing the groups as separate projects - in action, we'll generate code from the project with the two groups we just created: select from the main menu Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use and the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production. The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

    //...
    using System.Xml.Serialization;
    using AdventureWorks.Person;
    using AdventureWorks.Person.HelperClasses;
    using AdventureWorks.Person.FactoryClasses;
    using AdventureWorks.Person.RelationClasses;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    namespace AdventureWorks.Person.EntityClasses
    {
        //...
        /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
        [Serializable]
        public partial class AddressEntity : CommonEntityBase
        //...
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.
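    As a small illustration of the namespace point above - a sketch only, since the exact entity names depend on what the generator produced; AddressEntity is taken from the snippet shown earlier, while ProductEntity and the Production namespace are assumed - the two generated code bases can be consumed side by side in one project:

    using AdventureWorks.Person.EntityClasses;       // generated from the Person group
    using AdventureWorks.Production.EntityClasses;   // generated from the Production group (assumed namespace)

    class GroupedCodeBasesDemo
    {
        static void Demo()
        {
            // Each group lives in its own namespace, so entities from both
            // code bases can be used together without any clashes.
            var address = new AddressEntity();   // from the Person code base
            var product = new ProductEntity();   // hypothetical entity from the Production code base
        }
    }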

    Read the article

  • Week in Geek: Internet Service Providers to Implement New Anti-Piracy Monitoring in July

    - by Asian Angel
    Our latest edition of WIG is filled with news link goodness such as Google’s plans for a Metro version of Chrome, Microsoft’s seeking of a patent for TV-viewing tolls, Encyclopaedia Britannica’s switch to a digital only format, and more. Screenshot by Asian Angel.

    Read the article

  • T-SQL Tuesday #31: Paradox of the Sawtooth Log

    - by merrillaldrich
    Today’s T-SQL Tuesday, hosted by Aaron Nelson ( @sqlvariant | sqlvariant.com ) has the theme Logging . I was a little pressed for time today to pull this post together, so this will be short and sweet. For a long time, I wondered why and how a database in Full Recovery Mode, which you’d expect to have an ever-growing log -- as all changes are written to the log file -- could in fact have a log usage pattern that looks like this: This graph shows the Percent Log Used (bold, red) and the Log File(s)...(read more)

    Read the article

  • SQL SERVER – Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video

    - by pinaldave
    You might have heard many times that one should not use SELECT *, as there are many disadvantages to the usage of SELECT *. I also believe that there are rare occasions when we need every single column of the query. In most cases, we only need a few columns of the query and we should retrieve only those columns. SELECT * has many disadvantages. Let me list a few; the remaining ones you can add as a comment.

    - Retrieves unnecessary columns and increases network traffic
    - When new columns are added, views need to be refreshed manually
    - Leads to usage of a sub-optimal execution plan
    - Uses the clustered index in most cases instead of the optimal index
    - It is difficult to debug

    There are two quick tricks I have discussed in the video which explain how users can avoid using SELECT * and instead list the column names. 1) Drag the Columns folder from SQL Server Management Studio to the Query Editor. 2) Right-click on the table name >> Script Table As >> SELECT To… >> select the option. It is extremely easy to list the column names in the table. In today’s sixty seconds video, you will notice that I was able to demonstrate both methods very quickly. From now onwards there should be no excuse for not listing column names. Let me ask a question back – is there ever a reason to SELECT *? If yes, would you please share that as a comment.

    More on SELECT *: SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*) SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*) SQL SERVER – SELECT vs. SET Performance Comparison

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Google I/O 2011: Memory management for Android Apps

    Google I/O 2011: Memory management for Android Apps Patrick Dubroy Android apps have more memory available to them than ever before, but are you sure you're using it wisely? This talk will cover the memory management changes in Gingerbread and Honeycomb (concurrent GC, heap-allocated bitmaps, "largeHeap" option) and explore tools and techniques for profiling the memory usage of Android apps. From: GoogleDevelopers Views: 5698 45 ratings Time: 58:42 More in Science & Technology

    Read the article

  • BoundingBox created from mesh to origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create Bounding Boxes (for collision purposes) for each of the meshes, they are calculated from the origin to the far end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya and that it's interpreted wrongly... or something. Here's the code for when I create the boxes:

    private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh)
    {
        Matrix[] boneTransforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);
        BoundingBox result = new BoundingBox();
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
            if (meshPartBoundingBox != null)
                result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value);
        }
        result = new BoundingBox(result.Min, result.Max);
        return result;
    }

    private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform)
    {
        if (meshPart.VertexBuffer == null)
            return null;
        Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position);
        if (positions == null)
            return null;
        Vector3[] transformedPositions = new Vector3[positions.Length];
        Vector3.Transform(positions, ref transform, transformedPositions);
        for (int i = 0; i < transformedPositions.Length; i++)
        {
            Console.WriteLine(" " + transformedPositions[i]);
        }
        return BoundingBox.CreateFromPoints(transformedPositions);
    }

    public static class VertexElementExtractor
    {
        public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage)
        {
            VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration;
            VertexElement[] elements = vd.GetVertexElements();
            Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3;
            if (!elements.Any(elementPredicate))
                return null;
            VertexElement element = elements.First(elementPredicate);
            Vector3[] vertexData = new Vector3[meshPart.NumVertices];
            meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride);
            return vertexData;
        }
    }

    Here's a link to the picture of the mesh (the model holds six meshes, but I'm only rendering one and its bounding box to make it clearer): http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm referring to is the cube-like one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and triple-) checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?
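    One likely cause - offered as a guess, not a confirmed fix: new BoundingBox() is a box whose Min and Max both sit at the origin, so the very first BoundingBox.CreateMerged call folds (0,0,0) into the result, stretching every box from the origin out to the mesh. A minimal sketch that only merges boxes that actually came from mesh parts (reusing the GetBoundingBox helper above; variable names are mine) could look like this:

    private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh)
    {
        Matrix[] boneTransforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);

        BoundingBox? result = null; // start with "no box" instead of a box at the origin
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            BoundingBox? partBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
            if (partBox == null)
                continue;

            // Only merge once we actually have a first box.
            result = result == null
                ? partBox.Value
                : BoundingBox.CreateMerged(result.Value, partBox.Value);
        }
        return result ?? new BoundingBox();
    }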

    Read the article

  • Today’s Performance Tip: Views are for Convenience, Not Performance!

    - by Jonathan Kehayias
    I tweeted this last week on Twitter and got a lot of retweets, so I thought that I’d blog the story behind the tweet. Most vendor databases have views in them, and when people want to retrieve data from a database, it seems like the most common first stop they make is the vendor-supplied Views.  This post is in no way a bash against the usage or creation of Views in a SQL Server Database; I have created them before to simplify code and compartmentalize commonly required queries so that there...(read more)

    Read the article

  • Router in taskflow

    - by raghu.yadav
    A simple use case to demonstrate router usage in task flows with only jspx pages (no frags): a main page with 2 commandMenuItems, employees and departments. Upon clicking the employees menu item it should navigate to the employees page, and similarly clicking the departments menu item should navigate to the department page; all pages are dropped in their respective task flows. The files involved are emp.jspx, dep.jspx, emp_TF.xml, dep_TF.xml, mn_TF.xml (the main task flow calling the emp and dep TFs through a router) and adf-config.xml (the main page navigates to mn_TF.xml). Here are the screenshots..

    Read the article

  • Inside Red Gate - Ricky Leeks

    - by Simon Cooper
    So, one of our profilers has a problem. Red Gate produces two .NET profilers - ANTS Performance Profiler (APP) and ANTS Memory Profiler (AMP). Both products help .NET developers solve problems they are virtually guaranteed to encounter at some point in their careers - slow code, and high memory usage, respectively. Everyone understands slow code - the symptoms are very obvious (an operation takes 2 hours when it should take 10 seconds), you know when you've solved it (the same operation now takes 15 seconds), and everyone understands how you can use a profiler like APP to help solve your particular problem. High memory usage is a much more subtle and misunderstood concept. How can .NET have memory leaks? The garbage collector, and how the CLR uses and frees memory, is one of the most misunderstood concepts in .NET. There's hundreds of blog posts out there covering various aspects of the GC and .NET memory, some of them helpful, some of them confusing, and some of them are just plain wrong. There's a lot of misconceptions out there. And, if you have got an application that uses far too much memory, it can be hard to wade through all the contradictory information available to even get an idea as to what's going on, let alone trying to solve it. That's where a memory profiler, like AMP, comes into play. Unfortunately, that's not the end of the issue. .NET memory management is a large, complicated, and misunderstood problem. Even armed with a profiler, you need to understand what .NET is doing with your objects, how it processes them, and how it frees them, to be able to use the profiler effectively to solve your particular problem. And that's what's wrong with AMP - even with all the thought, designs, UX sessions, and research we've put into AMP itself, some users simply don't have the knowledge required to be able to understand what AMP is telling them about how their application uses memory, and so they have problems understanding & solving their memory problem. Ricky Leeks This is where Ricky Leeks comes in. Created by one of the many...colourful...people in Red Gate, he headlines and promotes several tutorials, pages, and articles all with information on how .NET memory management actually works, with the goal to help educate developers on .NET memory management. And educating us all on how far you can push various vegetable-based puns. This, in turn, not only helps them understand and solve any memory issues they may be having, but helps them proactively code against such memory issues in their existing code. Ricky's latest outing is an interview on .NET Rocks, providing information on the Top 5 .NET Memory Management Gotchas, along with information on a free ebook on .NET Memory Management. Don't worry, there's loads more vegetable-based jokes where those came from...
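    For a flavour of the kind of gotcha this material covers - the example below is an illustration of one classic pattern, not necessarily one of the top five from the interview - a long-lived static event quietly keeps "dead" subscribers alive:

    using System;

    static class AppEvents
    {
        // A static event: every subscriber is referenced for the lifetime of the AppDomain.
        public static event EventHandler SomethingHappened;

        public static void Raise()
        {
            EventHandler handler = SomethingHappened;
            if (handler != null)
                handler(null, EventArgs.Empty);
        }
    }

    class ReportView
    {
        private readonly byte[] _data = new byte[1024 * 1024]; // pretend this view holds something heavy

        public ReportView()
        {
            // Subscribing roots this instance via the static event's invocation list...
            AppEvents.SomethingHappened += OnSomethingHappened;
        }

        private void OnSomethingHappened(object sender, EventArgs e)
        {
            Console.WriteLine(_data.Length);
        }

        // ...so unless Detach is called when the view is closed, closed views pile up
        // on the managed heap even though nothing else references them.
        public void Detach()
        {
            AppEvents.SomethingHappened -= OnSomethingHappened;
        }
    }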

    Read the article

  • Single Instance of Child Forms in MDI Applications

    - by Akshay Deep Lamba
    In an MDI application we can have multiple forms and can work with multiple forms, i.e. MDI children, at a time, but while developing applications we don't pay attention to the minute details of memory management. Take this as an example: when we develop an application, say an MDI application, we have multiple child forms inside one parent form. On the MDI parent form we would like to have a menu strip and a tab strip which in turn call other forms which build the other parts of the application. This also makes our application look pretty and eye-catching (not much actually). Now, on a first go, when a user clicks a menu item or a button on a tab strip the application initializes a new instance of a form and shows it to the user inside the MDI parent; if the user clicks the same button again the application creates another new instance of the form and presents it to the user. This results in unnecessary usage of memory.

    Therefore, if you wish to prevent your application from generating new instances of the forms, use the method below, which will first check if the form is visible among the list of all the child forms and then compare their types; if the form type matches the form we are trying to initialize then the form will be activated, or we can say it will be brought to front, else it will be initialized and made visible to the user in the MDI parent window.

    The method we are using:

    private bool CheckForDuplicateForm(Form newForm)
    {
        bool bValue = false;
        foreach (Form frm in this.MdiChildren)
        {
            if (frm.GetType() == newForm.GetType())
            {
                frm.Activate();
                bValue = true;
            }
        }
        return bValue;
    }

    Usage: First we need to initialize the form using the new keyword:

    ReportForm Reportfrm = new ReportForm();

    We can now check if there is another form present in the MDI parent. Here, we will use the above method to check for the presence of the form and set the result in a bool variable, as our function returns a bool value:

    bool frmPresent = CheckForDuplicateForm(Reportfrm);

    Once the above check is done, then depending on the value received from the method we can set our form:

    if (frmPresent)
        return;
    else if (!frmPresent)
    {
        Reportfrm.MdiParent = this;
        Reportfrm.Show();
    }

    In the end this is the code you will have at your menu item or tab strip click:

    ReportForm Reportfrm = new ReportForm();
    bool frmPresent = CheckForDuplicateForm(Reportfrm);
    if (frmPresent)
        return;
    else if (!frmPresent)
    {
        Reportfrm.MdiParent = this;
        Reportfrm.Show();
    }

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.
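    The slide deck isn't reproduced here, but the get-or-add-with-a-TTL idea at the heart of any of the providers mentioned can be sketched in a few lines. This is a minimal illustration using System.Runtime.Caching, not the WCF behaviour from the talk; the key name, TTL and FetchRatesFromService call in the usage comment are made up:

    using System;
    using System.Runtime.Caching; // requires a reference to System.Runtime.Caching.dll

    static class ServiceCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Return the cached value for 'key' if present; otherwise compute it,
        // cache it for 'ttl', and return it. Staleness is bounded by the TTL.
        public static T GetOrAdd<T>(string key, Func<T> valueFactory, TimeSpan ttl)
        {
            object cached = Cache.Get(key);
            if (cached != null)
                return (T)cached;

            T value = valueFactory();
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
            return value;
        }
    }

    // Usage sketch: wrap an expensive (or paid-for) service call so repeat requests
    // within 30 seconds are served from the cache instead of hitting the backend.
    // var rates = ServiceCache.GetOrAdd("fx-rates", () => FetchRatesFromService(), TimeSpan.FromSeconds(30));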

    Read the article

  • Flash Analytics: The Tracking of the Flash Content

    The usage of Flash player games has increased with the passage of time. In fact, these days Flash games are available on social networking web sites as well. The number of people playing these... [Author: Abel Nickson - Computers and Internet - April 05, 2010]

    Read the article

  • How to Work with the Network from the Linux Terminal: 11 Commands You Need to Know

    - by Chris Hoffman
    Whether you want to download files, diagnose network problems, manage your network interfaces, or view network statistics, there’s a terminal command for that. This collection contains the tried and true tools and a few newer commands. You can do most of this from a graphical desktop, although even Linux users that rarely use the terminal often launch one to use ping and other network diagnostic tools.

    Read the article

  • ASP.NET MVC 2: Strongly Typed Html Helpers

    In this article, Scott examines the usage of Strongly Typed Html Helpers included with ASP.NET MVC 2. He begins with a short description of the existing HTML Helper method in ASP.NET MVC 1 and discusses the new methods, providing screenshots and a detailed listing of these new methods.
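    As a quick taste of the difference the article describes - a sketch with a made-up CustomerModel and Email property - the MVC 1-style helper takes the property name as a string, while the MVC 2 strongly typed helpers take a lambda against the view's model type, so mistakes surface at compile time rather than at runtime:

    <%-- View strongly typed as ViewPage<CustomerModel>; CustomerModel and Email are made up for this sketch. --%>

    <%-- ASP.NET MVC 1 style: the property name travels as a string, so typos only show up at runtime. --%>
    <%= Html.TextBox("Email") %>

    <%-- ASP.NET MVC 2 strongly typed helpers: the lambda is checked against the model type at compile time. --%>
    <%= Html.LabelFor(model => model.Email) %>
    <%= Html.TextBoxFor(model => model.Email) %>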

    Read the article

  • OAuth 2.0 for Google Drive and the Adsense API

    OAuth 2.0 for Google Drive and the Adsense API Google engineers Nicolas Garnier, Ali Afshar, and Sergio Gomes discuss the OAuth 2.0 playground and how to use it with the Google Drive and AdSense APIs. OAuth 2.0 and its inner workings are explained in detail, and usage of the OAuth 2.0 playground in the context of Google Drive and the AdSense API is demonstrated thoroughly. The session wraps up with some discussion of questions from live viewers. From: GoogleDevelopers Views: 9 0 ratings Time: 57:02 More in Science & Technology

    Read the article

  • Let the RAM improve performance

    - by user1717079
    I have a low-profile machine but with a lot of fast RAM, 4 GB, which is really an amount of memory that I will probably never use, not even half of it, since I just use this machine for coding and browsing the web. The HDD is really slow, and so the overall performance is bad when booting, caching or starting a new program. I'm wondering if Ubuntu can provide some setting or utility to solve this situation and let my system rely more on RAM.

    Read the article

  • A Perspective on Database Performance Tuning

    Fundamentally, database performance tuning is done for two basic reasons: to reduce response time and to reduce resource usage, both of which can apply in any given situation. Julian Stuhler looks at database performance tuning, and why it remains one of the most important topics for any DBA, developer or systems administrator.

    Read the article
