Search Results



  • Do you tend to write your own name or your company name in your code?

    - by Connell Watkins
    I've been working on various projects at home and at work, and over the years I've developed two main APIs that I use in almost all AJAX-based websites. I've compiled both of these into DLLs and called the namespaces Connell.Database and Connell.Json. My boss recently saw these namespaces in the software documentation for a company project and said I shouldn't be using my own name in the code. (But it's my code!) One thing to bear in mind is that we're not a software company. We're an IT support company, and I'm the only full-time software developer here, so there aren't really any procedures on how we should write software in the company. Another thing to bear in mind is that I do intend to one day release these DLLs as open-source projects. How do other developers group their namespaces within their company? Does anyone use the same class libraries in personal and in work projects? Also, does this work the other way round? If I write a class library entirely at work, who owns that code? If I've seen the library through from start to finish, designing and programming it myself, can I use it for another project at home? Thanks. Update: I've spoken to my boss about this issue and he agrees that they're my objects, and he's fine for me to open-source them. Before this conversation I had started changing the objects anyway, which was actually quite productive, and the code now suits this specific project more so than it did previously. But thank you to everyone involved for a very interesting debate. I hope all this text isn't wasted and someone learns from it. I certainly did. Cheers.

    Read the article

  • Wifi interface changes name seemingly at random

    - by ray_voelker
    I'm currently having some issues getting a wireless interface to work continuously under an install of Ubuntu 12.04.1 LTS. The issues I'm experiencing include:

    - The connection will drop out some time after it has initially worked.
    - The interface will have a different name after a reboot. For example, wlan0 will become wlan4 in the output of the ifconfig -a command.
    - Ubuntu will take a long time to boot, looking for network adapters.

    The purpose of this build is to function as a web kiosk in a library. The computer is supposed to boot up into a web browser and allow for browsing of the catalog. For some reason this interface does not appear to be working as it should. Are there any explanations for some of these problems I'm having, and perhaps some solutions? The wireless card appears as this after doing an lspci:

        Ralink corp. RT2561/RT61 802.11g PCI

    In the /etc/network/interfaces file I have the following configuration for the interface:

        auto wlan0
        iface wlan0 inet dhcp
        wireless-essid UDwireless
        wireless-mode Managed

    Thanks in advance for help on this.

    Read the article

  • determine an application's process name on linux (ubuntu)

    - by Jacob
    This is the situation: working on (the next version of) a Unity quicklist editor, I would like to add a reliable way of "restarting" launcher icons. To do so, I need to remove the icon (editing gsettings) and replace it in the same position. So far, no problem. However, if the application in question is running, the user could lose data, as the application will quit when its icon is removed from the launcher. What I need is a reliable way to find an application's process name, to let the editor check in the list of running processes whether the application is running, and send a warning message to the user that the icon cannot be restarted while the application is running. What I did so far is make the editor look into the desktop file to read the command, also read the command stripped of its directory section, and furthermore look into possible remote scripts the desktop file command might refer to, looking for strings starting with "./". Although the method seems to work well with all applications I tested it on, I have the feeling there must be an easier way to cover the problem in an "all in one" way... Is there? Also, suggestions to catch more exceptional situations are welcome!

    Read the article

  • SQL SERVER – SQL Server Misconceptions and Resolution – A Practical Perspective – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. On the very first day of this event, my presentation will be all about SQL Server Misconceptions and Resolution – A Practical Perspective. The dictionary tells us that a "misconception" means a view or opinion that is incorrect and is based on faulty thinking or understanding. In SQL Server, there are so many misconceptions. In fact, when I hear some of these misconceptions, I feel like fainting at that very moment! Seriously, at one time I came across a scenario where, instead of using INSERT INTO...SELECT, the developer used a CURSOR believing that the cursor is faster (duh!). Here is the link to the blog post related to this. [Photo: Pinal and Vinod in 2009] I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like in previous years, I decided to present something different. Here is the novelty of this year: I will be presenting this session with Vinod Kumar. Vinod Kumar and I have a great synergy when we work together. So far, we have written one SQL Server Interview Questions and Answers book and two video courses: (1) SQL Server Questions and Answers and (2) SQL Server Performance: Indexing Basics. [Photo: Pinal and Vinod in 2011] When we sat together and started building an outline for this course, we had many options in mind for this tango session. However, we have decided that we will make this session as lively as possible while keeping it natural at the same time. We know our flow and we know our conversation highlights, but we do not know what exactly each of us is going to present. We have decided to challenge each other on stage and push each other's knowledge to the edge. We promise that the session will be entertaining, with lots of SQL Server trivia, tips and tricks. Here are the challenges that I'll take on: I will puzzle Vinod with my difficult questions, and I will present a misconception that Vinod will have no resolution for. I need your help. Will you help me stump Vinod? If yes, come and attend our session and join me to prove that together we are superior (a friendly brain clash, but we must win!). SQL Server enthusiasts and SQL Server fans are going to have a gala time at #TechEdIn, as we have a very solid lineup of speakers and extremely interesting sessions at TechEdIn. Read the complete blog post by Vinod. Session Details – Title: SQL Server Misconceptions and Resolution – A Practical Perspective (Add to Calendar). Abstract: "Earth is flat!" – an ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session we will see various prevailing database misconceptions and their resolution with the aid of demos. In this unique session the audience will be part of the conversation and resolution. Date and Time: March 21, 2012, 15:15 to 16:15. Location: Hotel Lalit Ashok – Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now – the SQL Server Interview Questions and Answers book. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • OBIEE 11.1.1 - How to Enable Caching in Internet Information Services (IIS) 7.0+

    - by Ahmed A
    Follow these steps to configure static file caching and content expiration if you are using the IIS 7.0 web server with Oracle Business Intelligence. Tip: install IIS URL Rewrite, which enables web administrators to create powerful outbound rules. The steps to set up static file caching for an IIS 7.0+ web server are as follows. 1. In the web.config file for the OBIEE static files virtual directory (ORACLE_HOME/bifoundation/web/app), add the following outbound rule for caching:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <urlCompression doDynamicCompression="true" />
            <rewrite>
              <outboundRules>
                <rule name="header1" preCondition="FilesMatch" patternSyntax="Wildcard">
                  <match serverVariable="RESPONSE_CACHE_CONTROL" pattern="*" />
                  <action type="Rewrite" value="max-age=604800" />
                </rule>
                <preConditions>
                  <preCondition name="FilesMatch">
                    <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/css|^text/x-javascript|^text/javascript|^image/gif|^image/jpeg|^image/png" />
                  </preCondition>
                </preConditions>
              </outboundRules>
            </rewrite>
          </system.webServer>
        </configuration>

    2. Restart IIS.

    Read the article

  • More SharePoint 2010 Expression Builders

    - by Ricardo Peres
    Introduction

    Following my last post, I decided to publish the whole set of expression builders that I use with SharePoint. For all who don't know about expression builders: they allow us to employ a declarative approach, so that we don't have to write code for "gluing" things together, like getting a value from the query string, the page's underlying SPListItem or the current SPContext and assigning it to a control's property. These expression builders are for some quite common scenarios, I use them quite often, and I hope you find them useful as well.

    SPContextExpression

    This expression builder allows us to specify an expression to be processed on the SPContext.Current property object. For example:

        <asp:Literal runat="server" Text="<%$ SPContextExpression:Site.RootWeb.Lists[0].Author.LoginName %>"/>

    It is identical to having the following code:

        String authorName = SPContext.Current.Site.RootWeb.Lists[0].Author.LoginName;

    SPFarmProperty

    Returns a property stored on the farm level:

        <asp:Literal runat="server" Text="<%$ SPFarmProperty:SomeProperty %>"/>

    Identical to:

        Object someProperty = SPFarm.Local.Properties["SomeProperty"];

    SPField

    Returns the value of a selected page's list item field:

        <asp:Literal runat="server" Text="<%$ SPField:Title %>"/>

    Does the same as:

        String title = SPContext.Current.ListItem["Title"] as String;

    SPIsInAudience

    Checks if the current user belongs to an audience:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInAudience:SomeAudience %>"/>

    Equivalent to:

        AudienceManager audienceManager = new AudienceManager(SPServiceContext.Current);
        Audience audience = audienceManager.Audiences["SomeAudience"];
        Boolean isMember = audience.IsMember(SPContext.Current.Web.User.LoginName);

    SPIsInGroup

    Checks if the current user belongs to a group:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInGroup:SomeGroup %>"/>

    The equivalent C# code is:

        SPContext.Current.Web.CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, "SomeGroup", StringComparison.OrdinalIgnoreCase));

    SPProperty

    Returns the value of a user profile property for the current user:

        <asp:Literal runat="server" Text="<%$ SPProperty:LastName %>"/>

    Where the same code in C# would be:

        UserProfileManager upm = new UserProfileManager(SPServiceContext.Current);
        UserProfile u = upm.GetUserProfile(false);
        Object property = u["LastName"].Value;

    SPQueryString

    Returns a value passed on the query string:

        <asp:GridView runat="server" PageIndex="<%$ SPQueryString:PageIndex %>" />

    Is equivalent to (no SharePoint code this time):

        Int32 pageIndex = (Int32) Convert.ChangeType(HttpContext.Current.Request.QueryString["PageIndex"], typeof(Int32));

    SPWebProperty

    Returns the value of a property stored at the site level:

        <asp:Literal runat="server" Text="<%$ SPWebProperty:__ImagesListId %>"/>

    You can get the same result as:

        String imagesListId = SPContext.Current.Web.AllProperties["__ImagesListId"] as String;

    Code

    OK, let's move to the code. First, a common abstract base class, mainly for inheriting the conversion method:

        public abstract class SPBaseExpressionBuilder : ExpressionBuilder
        {
            #region Protected static methods
            protected static Object Convert(Object value, PropertyInfo propertyInfo)
            {
                if (value != null)
                {
                    if (propertyInfo.PropertyType.IsAssignableFrom(value.GetType()) == false)
                    {
                        if (propertyInfo.PropertyType.IsEnum == true)
                        {
                            value = Enum.Parse(propertyInfo.PropertyType, value.ToString(), true);
                        }
                        else if (propertyInfo.PropertyType == typeof(String))
                        {
                            value = value.ToString();
                        }
                        else if ((typeof(IConvertible).IsAssignableFrom(propertyInfo.PropertyType) == true) && (typeof(IConvertible).IsAssignableFrom(value.GetType()) == true))
                        {
                            value = System.Convert.ChangeType(value, propertyInfo.PropertyType);
                        }
                    }
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                if (String.IsNullOrEmpty(entry.Expression) == true)
                {
                    return (new CodePrimitiveExpression(String.Empty));
                }
                else
                {
                    return (new CodeMethodInvokeExpression(new CodeMethodReferenceExpression(new CodeTypeReferenceExpression(this.GetType()), "GetValue"), new CodePrimitiveExpression(entry.Expression.Trim()), new CodePropertyReferenceExpression(new CodeArgumentReferenceExpression("entry"), "PropertyInfo")));
                }
            }
            #endregion

            #region Public override properties
            public override Boolean SupportsEvaluate
            {
                get
                {
                    return (true);
                }
            }
            #endregion
        }

    Next, the code for each expression builder:

        [ExpressionPrefix("SPContext")]
        public class SPContextExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String expression, PropertyInfo propertyInfo)
            {
                SPContext context = SPContext.Current;
                Object expressionValue = DataBinder.Eval(context, expression.Trim().Replace('\'', '"'));

                expressionValue = Convert(expressionValue, propertyInfo);

                return (expressionValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPFarmProperty")]
        public class SPFarmPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPFarm.Local.Properties[propertyName];

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPField")]
        public class SPFieldExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String fieldName, PropertyInfo propertyInfo)
            {
                Object fieldValue = SPContext.Current.ListItem[fieldName];

                fieldValue = Convert(fieldValue, propertyInfo);

                return (fieldValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPIsInAudience")]
        public class SPIsInAudienceExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String audienceName, PropertyInfo info)
            {
                audienceName = audienceName.Trim();

                if ((audienceName.StartsWith("'") == true) && (audienceName.EndsWith("'") == true))
                {
                    audienceName = audienceName.Substring(1, audienceName.Length - 2);
                }

                AudienceManager manager = new AudienceManager();
                Object value = manager.IsMemberOfAudience(SPControl.GetContextWeb(HttpContext.Current).CurrentUser.LoginName, audienceName);

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPIsInGroup")]
        public class SPIsInGroupExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String groupName, PropertyInfo info)
            {
                groupName = groupName.Trim();

                if ((groupName.StartsWith("'") == true) && (groupName.EndsWith("'") == true))
                {
                    groupName = groupName.Substring(1, groupName.Length - 2);
                }

                Object value = SPControl.GetContextWeb(HttpContext.Current).CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, groupName, StringComparison.OrdinalIgnoreCase));

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPProperty")]
        public class SPPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, System.Reflection.PropertyInfo propertyInfo)
            {
                SPServiceContext serviceContext = SPServiceContext.GetContext(HttpContext.Current);
                UserProfileManager upm = new UserProfileManager(serviceContext);
                UserProfile up = upm.GetUserProfile(false);
                Object propertyValue = (up[propertyName] != null) ? up[propertyName].Value : null;

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPQueryString")]
        public class SPQueryStringExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String parameterName, PropertyInfo propertyInfo)
            {
                Object parameterValue = HttpContext.Current.Request.QueryString[parameterName];

                parameterValue = Convert(parameterValue, propertyInfo);

                return (parameterValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPWebProperty")]
        public class SPWebPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPContext.Current.Web.AllProperties[propertyName];

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

    Registration

    You probably know how to register them, but here it goes again: add the following snippet to your Web.config file, inside the configuration/system.web/compilation/expressionBuilders section:

        <add expressionPrefix="SPContext" type="MyNamespace.SPContextExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPFarmProperty" type="MyNamespace.SPFarmPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPField" type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInAudience" type="MyNamespace.SPIsInAudienceExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInGroup" type="MyNamespace.SPIsInGroupExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPProperty" type="MyNamespace.SPPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPQueryString" type="MyNamespace.SPQueryStringExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPWebProperty" type="MyNamespace.SPWebPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />

    I'll leave it up to you to figure out the best way to deploy this to your server!

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6:

    - Greatly enhanced user interface
    - Parallel processing
    - Bulk artifact upgrading

    The User Interface. First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows:

    - Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function from the icon.
    - Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions.
    - Floating or docking property panels provide descriptive tool tips for each property, as well as auto-resizing when adjusting the canvas, without having to search Help or scroll around to get access to the property.
    - Progress and status indicators let you know when an operation is working.
    - A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow.
    - The inline tool bar offers quick access to pan, zoom, fit and overview functions.
    - Selecting multiple artifacts with a right-click context allows you to manage your workspace more efficiently.

    Partitioning and Parallel Processing. Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure.

    Bulk Artifact Upgrading. Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select an artifact or group of artifacts to upgrade all at once. As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope that you try expressor and share your experience with me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Strategies for invoking subclass methods on generic objects

    - by Brad Patton
    I've run into this issue in a number of places and have solved it a bunch of different ways, but I'm looking for other solutions or opinions on how to address it. The scenario is when you have a collection of objects all based off of the same superclass, but you want to perform certain actions based only on instances of some of the subclasses. One contrived example of this might be an HTML document made up of elements. You could have a superclass named HTMLElement and subclasses for Headings, Paragraphs, Images, Comments, etc. To invoke a common action across all of the objects, you declare a virtual method in the superclass and specific implementations in all of the subclasses. So to render the document you could loop over all of the different objects in the document and call a common Render() method on each instance. The case I'm interested in is where, again using the same generic objects in the collection, I want to perform different actions for instances of a specific subclass (or set of subclasses). For example (and remember this is just an example), when iterating over the collection, elements with external links need to be downloaded (e.g. JS, CSS, images) and some might require additional parsing (JS, CSS). What's the best way to handle those special cases? Some of the strategies I've used or seen used include the following (a sketch contrasting the first and last of them follows this list):

    - Virtual methods in the base class. In the base class you have a virtual LoadExternalContent() method that does nothing, and then you override it in the specific subclasses that need to implement it. The benefit is that in the calling code there is no object testing: you send the same message to each object and let most of them ignore it. There are two downsides that I can think of. First, it can make the base class very cluttered with methods that have nothing to do with most of the hierarchy. Second, it assumes all of the work can be done in the called method and doesn't handle the case where there might be additional context-specific actions in the calling code (i.e. you want to do something in the UI and not the model).

    - Methods on the class to uniquely identify the objects. This could include methods like ClassName(), which returns a string with the class name, or other return values like enums or booleans (IsImage()). The benefit is that the calling code can use if or switch statements to filter objects and perform class-specific actions. The downside is that for every new class you need to implement these methods, which can look cluttered. Performance could also be worse than some of the other options.

    - Language features to identify objects. This includes reflection and language operators. For example, in C# there is the is operator, which returns true if the instance matches the specified class. The benefit is no additional code to implement in your object hierarchy. The only downsides seem to be the inability to use something like a switch statement and the fact that your calling code is a little more cluttered.

    Are there other strategies I am missing? Thoughts on best approaches?
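    Here is that contrast as a minimal C# sketch (class and member names are illustrative, not from the original post):

        using System;
        using System.Collections.Generic;

        abstract class HtmlElement
        {
            // Common action: every subclass supplies an implementation.
            public abstract void Render();

            // Strategy 1: a no-op virtual hook, overridden only where needed.
            public virtual void LoadExternalContent() { }
        }

        class Paragraph : HtmlElement
        {
            public override void Render() { Console.WriteLine("<p>...</p>"); }
        }

        class Image : HtmlElement
        {
            public override void Render() { Console.WriteLine("<img/>"); }
            public override void LoadExternalContent() { Console.WriteLine("downloading image data..."); }
        }

        class HtmlDocument
        {
            private readonly List<HtmlElement> elements = new List<HtmlElement>();

            public void Process()
            {
                foreach (HtmlElement element in elements)
                {
                    element.Render();               // same message to every object
                    element.LoadExternalContent();  // most instances silently ignore it

                    // Strategy 3: type testing in the calling code, for actions
                    // that belong to the caller's context rather than the model.
                    if (element is Image)
                    {
                        Console.WriteLine("caller-side handling for images");
                    }
                }
            }
        }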

    Read the article

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Most are about OOP in general. Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement. The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants), and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but approve 'allowable', which will actually approve the whole perm check. We have about 11 validators, too many to easily track at such minute detail. So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable', now it would instead say '"allowable" is ok', at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okayed. The implementation is just four trivial objects, and rewriting a 10-line function into a 15-line function:

        abstract class PermissionAtom {
            public function allow(); // maybe deny() as well
            public function wasAllowed();
        }

        class PermissionField extends PermissionAtom {
            public function getName();
            public function getValue();
        }

        class PermissionIdentifier extends PermissionAtom {
            public function getIdentifier();
        }

        class PermissionAction extends PermissionAtom {
            public function getType();
        }

    They say that this is 'not going to get us anything important', that it is 'too complex', and that it 'will be difficult for new developers to pick up'. I respectfully disagree, and there I end my context to begin the broader questions. So the question is about my OOP; are there any guidelines I should know:

    - Is this too complicated/too much OOP? Not that I expect to get more than 'it depends, I'd have to see if...'
    - When is OO abstraction too much? When is it too little?
    - How can I determine when I am overthinking a problem vs fixing one?
    - How can I determine when I am adding bad code to a bad project?
    - How can I pitch these APIs? I feel the other devs would rather just say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.

    Read the article

  • Why does my performance slow to a crawl when I move methods into a base class?

    - by Juliet
    I'm writing different implementations of immutable binary trees in C#, and I wanted my trees to inherit some common methods from a base class. I have lots of binary tree data structures to implement, and I wanted to move some common methods into a base binary tree class. Unfortunately, classes which derive from the base class are abysmally slow. Non-derived classes perform adequately. Here are two nearly identical implementations of an AVL tree to demonstrate:

    AvlTree: http://pastebin.com/V4WWUAyT
    DerivedAvlTree: http://pastebin.com/PussQDmN

    The two trees have the exact same code, but I've moved the DerivedAvlTree.Insert method into the base class. Here's a test app:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;
        using Juliet.Collections.Immutable;

        namespace ConsoleApplication1
        {
            class Program
            {
                const int VALUE_COUNT = 5000;

                static void Main(string[] args)
                {
                    var avlTreeTimes = TimeIt(TestAvlTree);
                    var derivedAvlTreeTimes = TimeIt(TestDerivedAvlTree);
                    Console.WriteLine("avlTreeTimes: {0}, derivedAvlTreeTimes: {1}", avlTreeTimes, derivedAvlTreeTimes);
                }

                static double TimeIt(Func<int, int> f)
                {
                    var seeds = new int[] { 314159265, 271828183, 231406926, 141421356, 161803399, 266514414, 15485867, 122949829, 198491329, 42 };
                    var times = new List<double>();
                    foreach (int seed in seeds)
                    {
                        var sw = Stopwatch.StartNew();
                        f(seed);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    // throwing away top and bottom results
                    times.Sort();
                    times.RemoveAt(0);
                    times.RemoveAt(times.Count - 1);
                    return times.Average();
                }

                static int TestAvlTree(int seed)
                {
                    var rnd = new System.Random(seed);
                    var avlTree = AvlTree<double>.Create((x, y) => x.CompareTo(y));
                    for (int i = 0; i < VALUE_COUNT; i++)
                    {
                        avlTree = avlTree.Insert(rnd.NextDouble());
                    }
                    return avlTree.Count;
                }

                static int TestDerivedAvlTree(int seed)
                {
                    var rnd = new System.Random(seed);
                    var avlTree2 = DerivedAvlTree<double>.Create((x, y) => x.CompareTo(y));
                    for (int i = 0; i < VALUE_COUNT; i++)
                    {
                        avlTree2 = avlTree2.Insert(rnd.NextDouble());
                    }
                    return avlTree2.Count;
                }
            }
        }

    AvlTree inserts 5000 items in 121 ms; DerivedAvlTree inserts 5000 items in 2182 ms. My profiler indicates that the program spends an inordinate amount of time in BaseBinaryTree.Insert. Anyone who's interested can see the EQATEC log file I've created with the code above (you'll need the EQATEC profiler to make sense of the file). I really want to use a common base class for all of my binary trees, but I can't do that if performance will suffer. What causes my DerivedAvlTree to perform so badly, and what can I do to fix it?

    Read the article

  • Delegate performance of Roslyn Sept 2012 CTP is impressive

    - by dotneteer
    I wanted to dynamically compile some delegates using Roslyn. I came across this article by Piotr Sowa. The article shows that the delegate compiled with the Roslyn CTP was not very fast. Since the article was written using the Roslyn June 2012 CTP, I decided to give the Sept 2012 CTP a try. There are significant changes in the Roslyn Sept 2012 CTP, in both the C# syntax supported and the API. I found Anoop Madhusudanan's article that has an example of the new API. With that, I was able to put together a comparison. In my test, the Roslyn-compiled delegate is as fast as the C# (VS 2012) compiled delegate. See the source code below and give it a try.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Diagnostics;
        using Roslyn.Compilers;
        using Roslyn.Scripting.CSharp;
        using Roslyn.Scripting;

        namespace RoslynTest
        {
            class Program
            {
                public Func<int, int, int> del;

                static void Main(string[] args)
                {
                    Stopwatch stopWatch = new Stopwatch();
                    Program p = new Program();
                    p.SetupDel();
                    //Comment out this line and uncomment the next line to compare
                    //p.SetupScript();
                    stopWatch.Start();
                    int result = DoWork(p.del);
                    stopWatch.Stop();
                    Console.WriteLine(result);
                    Console.WriteLine("Time elapsed {0}", stopWatch.ElapsedMilliseconds);
                    Console.Read();
                }

                private void SetupDel()
                {
                    del = (s, i) => ++s;
                }

                private void SetupScript()
                {
                    //Create the script engine
                    //Script engine constructor parameters got changed
                    var engine = new ScriptEngine();

                    //Let us use the engine's AddReference for adding the required assemblies
                    new[]
                    {
                        typeof(Console).Assembly,
                        typeof(Program).Assembly,
                        typeof(IEnumerable<>).Assembly,
                        typeof(IQueryable).Assembly
                    }.ToList().ForEach(asm => engine.AddReference(asm));

                    new[]
                    {
                        "System", "System.Linq", "System.Collections", "System.Collections.Generic"
                    }.ToList().ForEach(ns => engine.ImportNamespace(ns));

                    //Now, you need to create a session using the engine's CreateSession method,
                    //which can be seeded with a host object
                    var session = engine.CreateSession();
                    var submission = session.CompileSubmission<Func<int, int, int>>("new Func<int, int, int>((s, i) => ++s)");
                    del = submission.Execute();
                    //See more at: http://www.amazedsaint.com/2012/09/roslyn-september-ctp-2012-overview-api.html#sthash.1VutrWiW.dpuf
                }

                private static int DoWork(Func<int, int, int> del)
                {
                    int result = Enumerable.Range(1, 1000000).Aggregate(del);
                    return result;
                }
            }
        }

    Since the Roslyn Sept 2012 CTP is already over a year old, I cannot wait to see a new version coming out.

    Read the article

  • SSDT - What's in a name?

    - by jamiet
    SQL Server Data Tools (SSDT) recently got released as part of SQL Server 2012, and depending on who you believe it can be described as either: a suite of tools for building SQL Server database solutions, or a suite of tools for building SQL Server database, Integration Services, Analysis Services & Reporting Services solutions. Certainly the SQL Server 2012 installer seems to think it is the latter, because it describes SQL Server Data Tools as "the SQL Server development environment, including the tool formerly named Business Intelligence Development Studio. Also installs the business intelligence tools and references to the web installers for database development tools", as you can see here: [screenshot] Strange then that, seemingly, there is no consensus within Microsoft about what SSDT actually is. On yesterday's blog post First Release of SSDT Power Tools, reader Simon Lampen asked the quite legitimate question: "I understand (rightly or wrongly) that SSDT is the replacement for BIDS for SQL 2012 and have just installed this. If this is the case can you please point me to how I can edit rdl and rdlc files from within Visual Studio 2010 and import MS Access reports." To which came the following reply: "SSDT doesn't include any BIDs (sic) components. Following up with the appropriate team (Analysis Services, Reporting Services, Integration Services) via their forum or msdn page would be the best way to answer you questions about these kinds of services." That's from a Microsoft employee, by the way. Simon is even more confused by this and replies with: "I have done some more digging and am more confused than ever. This documentation (and many others): msdn.microsoft.com/.../ms156280.aspx expressly states that SSDT is where report editing tools are to be found." And on it goes... You can see where Simon's confusion stems from. He has official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember), yet someone from Microsoft tells him "SSDT doesn't include any BIDs components". I have been close to this for a long time (all the way through the CTPs), so I can kind of understand where the confusion stems from. To my understanding, SSDT was originally the name of the database dev stuff, but eventually that got expanded to include all of the dev tools; I guess not everyone in Microsoft got the memo. Does this sound familiar? Have we not been down this road before? The database dev tools have had umpteen names over the years (do any of datadude, TSData, VSTS for DB Pros, DBPro, or VS2010 Database Projects sound familiar?) and I was hoping that the SSDT moniker would put all confusion to bed; evidently it's as complicated now as it has ever been. Forgive me for whinging, but putting meaningful, descriptive, accurate, well-defined and easily-communicated names onto a product doesn't seem like a difficult thing to do. I guess I'm mistaken! Onwards and upwards... @Jamiet

    Read the article

  • Expando Object and dynamic property pattern

    - by Al.Net
    I have read about the 'dynamic property pattern' Martin Fowler describes on his site (filed under the year 1997), in which he uses dictionary-like structures to achieve the pattern. And I came across the ExpandoObject in C# very recently. When I look at its implementation, I can see that it implements IDictionary. So ExpandoObject uses a dictionary to store dynamic properties; is this what Martin Fowler already defined 15 years ago?
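    For anyone who hasn't used it, here is a minimal sketch of the dictionary-backed behaviour in question (plain System.Dynamic API; the property names are made up):

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        class ExpandoDemo
        {
            static void Main()
            {
                dynamic person = new ExpandoObject();
                person.Name = "Ada";   // properties spring into existence on first assignment
                person.Age = 36;

                // Because ExpandoObject implements IDictionary<string, object>,
                // the same members are reachable as key/value pairs:
                var bag = (IDictionary<string, object>)person;
                bag["Country"] = "UK"; // adding a key is the same as adding a property

                Console.WriteLine("{0}, {1}, {2}", person.Name, person.Age, person.Country);
            }
        }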

    Read the article

  • Throwing exception from a property when my object state is invalid

    - by Rumi P.
    Microsoft guidelines say: "Avoid throwing exceptions from property getters", and I normally follow that. But my application uses Linq2SQL, and there is the case where my object can be in an invalid state because somebody or something wrote nonsense into the database. Consider this toy example:

        [Table(Name="Rectangle")]
        public class Rectangle
        {
            [Column(Name="ID", IsPrimaryKey = true, IsDbGenerated = true)]
            public int ID { get; set; }

            [Column(Name="firstSide")]
            public double firstSide { get; set; }

            [Column(Name="secondSide")]
            public double secondSide { get; set; }

            public double sideRatio
            {
                get { return firstSide / secondSide; }
            }
        }

    Here, I could write code which ensures that my application never writes a Rectangle with a zero-length side into the database. But no matter how bulletproof I make my own code, somebody could open the database with a different application and create an invalid Rectangle, especially one with a 0 for secondSide. (For this example, please forget that it is possible to design the database in a way such that writing a side length of zero into the rectangle table is impossible; my domain model is very complex and there are constraints on model state which cannot be expressed in a relational database.) So, the solution I am gravitating to is to change the getter to:

        get
        {
            if (firstSide > 0 && secondSide > 0)
                return firstSide / secondSide;
            else
                throw new System.InvalidOperationException("All rectangle sides should have a positive length");
        }

    The reasoning behind not throwing exceptions from properties is that programmers should be able to use them without having to take precautions about catching and handling them. But in this case, I think that it is OK to continue to use this property without such precautions:

    - If the exception is thrown because my application wrote a zero-length rectangle side into the database, then this is a serious bug. It cannot and shouldn't be handled in the application, but there should be code which prevents it. It is good that the exception is visibly thrown, because that way the bug is caught.
    - If the exception is thrown because a different application changed the data in the database, then handling it is outside of the scope of my application. So I can't do anything about it if I catch it.

    Is this a good enough reasoning to get over the "avoid" part of the guideline and throw the exception? Or should I turn it into a method after all? Note that in the real code, the properties which can have an invalid state feel less like the result of a calculation, so they are "natural" properties, not methods.
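    If it does become a method after all, the conventional Try-pattern keeps the "no exceptions in normal flow" spirit of the guideline; a hedged sketch (the method name is invented for illustration):

        // Callers must acknowledge that the persisted state may be invalid.
        public bool TryGetSideRatio(out double ratio)
        {
            ratio = 0;
            if (firstSide <= 0 || secondSide <= 0)
                return false; // invalid row in the database; the caller decides what to do

            ratio = firstSide / secondSide;
            return true;
        }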

    Read the article

  • Check if an object is facing another based on angles

    - by Isaiah
    I already have something that calculates the bearing angle to get one object to face another: you give it the positions and it returns the angle for one to face the other. Now I need to figure out how to tell if one object is facing toward another object within a specified field, and I can't find any information about how to do this. The objects are obj1 and obj2. Their angles are at obj1.angle and obj2.angle. Their position vectors are at obj1.pos and obj2.pos, in the format [x, y]. The angle to have one face directly at another is found with direction(obj1.pos, obj2.pos). I want to set the function up like this: isfacing(obj1, obj2, area) {...}, returning true/false depending on whether the target is within the specified field area around the angle needed to directly see it. I've got a base like this:

        var isfacing = function (obj1, obj2, area) {
            var toface = direction(obj1.pos, obj2.pos);
            if (toface + area >= obj1.angle && obj1.angle >= toface - area) {
                return true;
            }
            return false;
        };

    But my problem is that the angles are in 360 degrees, never above 360 and never below 0. How can I account for that in this? If the first object's angle is, say, at 0 and I subtract a field area of 20 or so, it'll check if it's less than -20! If I fix the -20 it becomes 340, but x < 340 isn't what I want; I'd need x > 340 in that case. Is there someone out there with more sleep than I that can help a new dev pulling an all-nighter just to get enemies to know if they're attacking in the right direction? I hope I'm making this harder than it seems. I'd just make them always face the main char if the producer didn't want attacks from behind to work while blocking. In which case I'll need the function above anyway. I've tried to give as much info as I can think would help. Also, this is in 2D.
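    The usual way around the wraparound is to compare the smallest signed difference between the two angles rather than raw bounds. A sketch of the idea in C# (the question's code is JavaScript, so read this as portable pseudocode; area is assumed to be the half-width of the facing cone):

        // Normalize the difference between the desired bearing and the current
        // facing angle into [-180, 180), then compare its magnitude to the field.
        static bool IsFacing(double currentAngle, double bearingToTarget, double area)
        {
            double delta = (bearingToTarget - currentAngle) % 360;
            if (delta < -180) delta += 360; // e.g. -350 becomes  10
            if (delta >= 180) delta -= 360; // e.g.  350 becomes -10
            return Math.Abs(delta) <= area;
        }

    With this, an object at angle 0 and a target bearing of 350 gives a delta of -10, which passes a 20-degree field without any special cases at the 0/360 seam.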

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record.

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
          ...
          <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.
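    One detail worth checking when reproducing this: in NHibernate, IQuery.Enumerable (the analogue of Hibernate's iterate()) is intended to select only the identifiers up front and resolve each entity as it is iterated, which on a cold cache produces exactly the 1+n select pattern shown above, while IQuery.List hydrates everything in a single statement. A sketch of the difference, under that assumption:

        // Enumerable(): one select for the ids, then one select per entity on
        // iteration; useful mainly when the entities are already cached.
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }

        // List<T>(): a single select that hydrates all rows at once.
        IList<Foo> all = query.List<Foo>();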

    Read the article

  • Hiding System Objects in Object Explorer in SQL Server Management Studio

    While looking through the new features and improvements in SQL Server Management Studio (SSMS), we found a potentially interesting one: the ability to hide system objects in Object Explorer in SQL Server Management Studio. In this tip we will take a look at how to hide system objects in Object Explorer.

    Read the article

  • Term for a single C++ endpoint/object file

    - by Qix
    I have heard several terms for a C++ "Codepoint" (which is what I've heard used the most often), or a .cpp file that is compiled into an object file. For instance, .cpp files can include other .cpp files (or any other file, really, so long as it compiles), but during compilation, there is really only one 'main' code file that is used/generated. I know there is a widely accepted term, I just can't recall what it is. What is the accepted term for the final .c/.cpp file used to generate an object file?

    Read the article

  • In Java, is there a performance gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way; any edit to clarify is welcome. I have been told, and it has been recommended to me, to use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC meaning), with many methods and fields. The "good use" that has been recommended to me is to create an interface with a single, unique implementation. There won't be any other class implementing this interface, for sure. I have been told that this is better to do, because it "exposes less" (or something close) to the other classes which will use methods from this class, as these objects refer to the object through its interface (all public methods of the implementation being reproduced in the interface). This seems quite strange to me, as it looks like a C++ practice (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation? I would really appreciate some clarification on the topic, so I could justify not following this kind of practice and the hassle it creates from duplicating all declarations. Edit: plenty of valid points in most answers; I'm wondering if I should switch this question to a community wiki so we can regroup these points into more structured answers.

    Read the article

  • Object Dependency in SQL Server

    When a SQL Server object is created that references another SQL Server object, such as a stored procedure called from a trigger, that dependency is recorded by the database engine. This article details how to get at that dependency information.

    Read the article

  • gradient coloring of an object

    - by perrakdar
    I have an object (FBX format) in my project; it's a line drawn in 3ds Max. I want to color the line in XNA so that the color starts from a specific RGB color at the start point of the line and finishes in a specific RGB color at the end point (e.g., from (255,255,255) to (128,128,128)): something like gradient coloring of an object. I need to do this programmatically, since later in my code I have to change these two specific colors a lot.
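    One common way to get this effect in XNA is to render the line from vertices that carry their own colors and let the GPU interpolate between them along the primitive. A minimal sketch (positions and colors are placeholders, and a graphicsDevice is assumed to be in scope):

        // Two vertices, one color each; the hardware blends between them
        // along the line segment.
        VertexPositionColor[] vertices =
        {
            new VertexPositionColor(new Vector3(0f, 0f, 0f), new Color(255, 255, 255)),
            new VertexPositionColor(new Vector3(10f, 0f, 0f), new Color(128, 128, 128)),
        };

        // BasicEffect must be told to use the per-vertex colors.
        BasicEffect effect = new BasicEffect(graphicsDevice) { VertexColorEnabled = true };

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            graphicsDevice.DrawUserPrimitives(PrimitiveType.LineList, vertices, 0, 1);
        }

    Changing the gradient at runtime is then just a matter of rebuilding the vertex array with the two new colors.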

    Read the article

  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines. We used the "Name and Approve" option to name the machines as we added them; if a machine was pre-existing, it would just use the existing computer name from AD. Then in our unattend.xml file we had ComputerName=%MACHINENAME%. This picked up the name we gave it during approval and set the computer name accordingly. We are now implementing MDT to manage our images and drivers. But upon testing, we noticed it would assign random computer names. I went into the unattend.xml for the deploy task sequence and added that value under specialize / amd64_Microsoft-Windows-Shell-Setup_neutral: ComputerName=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name? Some additional info - the error message during the imaging process:

        Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup].

    setuperr.log:

        2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
        2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b

    Read the article

  • Windows 7 Computer name changing on its own?

    - by DC
    Very odd problem... I have a Dell Latitude D830 with XP Pro that has been running on my local domain for many years. I recently installed Windows 7 Enterprise on the D830 using a brand new HDD, so that I could still use XP if I needed to by just swapping out the HDDs. I added the W7 install to my domain using a completely different machine name than that used for the XP system, and everything seemed to be functioning as it should. On boot-up over the last 2 weeks or so, I occasionally (3 times now) get to the login screen and try to log in to the domain, only to get an error saying that the computer name is not a trusted machine in the domain I'm trying to log in to. Come to find out, the machine name on the W7 system has been changed somehow to that of my old XP system. If I then change the name back to the correct one on the W7 system, disjoin the domain, reboot, and add the machine back into the domain... all is well for an unknown period of time until this happens again. This last time, I know for a fact that everything was fine the day before when I shut down the system. I came in today, powered up the system, and the machine name had been changed to that of my old XP system again. Has anybody else seen this behavior or have any ideas on what could be causing it? Thanks!

    Read the article

  • Performance tweaks and upgrades for VMWare Server 2

    - by sjohnston
    Our software department has a server running VMware Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2. It has 32GB RAM, an 8-core Xeon 3.16GHz CPU, one disk for the host OS and two RAID disks for VMs. The majority of the time, this setup behaves very well and there are no complaints. Other times, the VMs can be very laggy. This is sometimes, but not always, correlated to heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing that if there's a bottleneck, it's probably disk I/O, with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs, give us a significant performance boost? Other things I've read may increase performance:

    - a single virtual processor per VM
    - removing/disabling unused virtual hardware
    - preallocated disk space
    - not using snapshots
    - setting a reserved memory limit on the host and disabling VM memory swapping

    Can anyone confirm or deny whether any of these improve performance? What other good tweaks have I missed?

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple; my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them; if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file (far from ideal for an environment with a high amount of churn), and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.

    Read the article
