Search Results

Search found 10677 results on 428 pages for 'feature comparison'.

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

    In the past, I have shown how to create a list content type and add the content type to a list in code. As developers, many of the artifacts we create are widgets with a list or document library as the back end. We need to be able to create our applications (web parts, etc.) without involving the user except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

    1. Create an empty SharePoint project in Visual Studio.
    2. Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution.
    3. Create a new Feature and add an event receiver; all the code will be in the event receiver.
    4. Add the fields which will extend the built-in Document content type.
    5. If the content type does not exist, create it.
    6. If the document library does not exist, create it with the new content type inherited from the Document content type.
    7. Delete the Document content type from the library (as we have a new one which inherits from it).
    8. Add the fields which we want to be visible from the fields added to the new content type.

    Here we go:

    Create an empty SharePoint project in Visual Studio.

    Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions library will be part of this project, for which I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

    Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked to add an event receiver. All of our code will be in this event receiver. For this demo I will only be using the Feature Activated event.

    From this point on we will be looking at code! We are adding two constants: columnGroup (how we want SharePoint to group the columns, usually the company name) and ctName (the content type name).

    using System;
    using System.Runtime.InteropServices;
    using System.Security.Permissions;
    using Microsoft.SharePoint;

    namespace CreateDocLib.Features.CreateDocLib
    {
        /// <summary>
        /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
        /// </summary>
        /// <remarks>
        /// The GUID attached to this class may be used during packaging and should not be modified.
        /// </remarks>
        [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
        public class CreateDocLibEventReceiver : SPFeatureReceiver
        {
            const string columnGroup = "DJ";
            const string ctName = "DJDocLib";
        }
    }

    Here we are creating the Feature Activated event: adding the new fields (site columns), testing whether the content type exists and, if not, adding it, then testing whether the document library exists and, if not, adding it.

    #region DocLib
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        using (SPWeb spWeb = properties.GetWeb() as SPWeb)
        {
            //add the fields
            addFields(spWeb);

            //add content type
            SPContentType testCT = spWeb.ContentTypes[ctName];
            // we will not create the content type if it exists
            if (testCT == null)
            {
                //the content type does not exist, add it
                addContentType(spWeb, ctName);
            }

            if ((spWeb.Lists.TryGetList("MyDocuments") == null))
            {
                //create the list if it doesn't exist
                CreateDocLib(spWeb);
            }
        }
    }
    #endregion

    The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes: Icon, as a URL type.

    public void addFields(SPWeb spWeb)
    {
        Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
    }

    The addContentType method adds the new content type to the site content types. We have already checked that it does not exist. In addition, this is where we add the linkages from our previously created site columns to our new content type.

    private static void addContentType(SPWeb spWeb, string name)
    {
        SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name) { Group = columnGroup };
        spWeb.ContentTypes.Add(myContentType);
        addContentTypeLinkages(spWeb, myContentType);
        myContentType.Update();
    }

    Here we are adding just one linkage, as we only have one additional field in our content type.

    public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
    {
        Utilities.addContentTypeLink(spWeb, "Icon", ct);
    }

    Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type.

    private void CreateDocLib(SPWeb web)
    {
        using (var site = new SPSite(web.Url))
        {
            var web1 = site.RootWeb;
            var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary);
            var lib = web1.Lists[listId] as SPDocumentLibrary;
            lib.ContentTypesEnabled = true;
            var docType = web.ContentTypes[ctName];
            lib.ContentTypes.Add(docType);
            lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
            lib.Update();
            AddLibrarySettings(web1, lib);
        }
    }

    Finally, we set some document library settings on our new document library with the AddLibrarySettings method. We then ensure that the new site column is visible when viewed in the browser.

    private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
    {
        lib.OnQuickLaunch = true;
        lib.ForceCheckout = true;
        lib.EnableVersioning = true;
        lib.MajorVersionLimit = 5;
        lib.EnableMinorVersions = true;
        lib.MajorWithMinorVersionsLimit = 5;
        lib.Update();
        var view = lib.DefaultView;
        view.ViewFields.Add("Icon");
        view.Update();
    }

    Okay, what's cool here: in a few lines of code we have created site columns, a content type, and a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code
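    The Utilities.addField and Utilities.addContentTypeLink helpers used above come from the downloadable Utilities library mentioned in the post and are not shown in this excerpt. As a rough, hypothetical sketch only (the actual library may differ), helpers with those signatures could look something like this, using the standard SharePoint server object model:

    using Microsoft.SharePoint;

    public static class Utilities
    {
        // Adds a site column if it is not already present, and assigns it to the given group.
        public static void addField(SPWeb web, string fieldName, SPFieldType fieldType, bool required, string group)
        {
            if (!web.Fields.ContainsField(fieldName))
            {
                string internalName = web.Fields.Add(fieldName, fieldType, required);
                SPField field = web.Fields.GetFieldByInternalName(internalName);
                field.Group = group;
                field.Update();
            }
        }

        // Links an existing site column to a content type.
        public static void addContentTypeLink(SPWeb web, string fieldName, SPContentType ct)
        {
            SPField field = web.Fields[fieldName];
            ct.FieldLinks.Add(new SPFieldLink(field));
        }
    }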

    Read the article

  • SQLAuthority News Bookmark Deprecated Database Engine Features in SQL Server 2008

    When anybody asks me whether a specific feature is available in SQL Server 2008, or whether a feature will be disabled in future versions of SQL Server, I always point them to the following list, where all the deprecated database engine features are listed: Deprecated Database Engine Features in SQL Server 2008 R2. Deprecated Database Engine [...]
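    Not part of the original post: if you want to see which deprecated features your own workload actually exercises, one option is to read the "Deprecated Features" performance counters that SQL Server exposes through the sys.dm_os_performance_counters view. A small illustrative C# sketch (the connection string is a placeholder assumption):

    using System;
    using System.Data.SqlClient;

    class DeprecatedFeatureCheck
    {
        static void Main()
        {
            // Placeholder connection string -- replace with your own server and credentials.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";

            // The 'Deprecated Features' performance object counts how often each
            // deprecated feature has been used since the instance started.
            const string query = @"
                SELECT instance_name, cntr_value
                FROM sys.dm_os_performance_counters
                WHERE object_name LIKE '%Deprecated Features%'
                  AND cntr_value > 0
                ORDER BY cntr_value DESC;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1} uses", reader.GetString(0).Trim(), reader.GetInt64(1));
                    }
                }
            }
        }
    }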

    Read the article

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all, in various combinations, and have been consistently unsuccessful.

    The goal

    All I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always.

    What I've already done

    I am running Apache, PHP, and sendmail on CentOS 5.5, [email protected]. We have an SMTP server at 192.168.0.4. The domain's email accounts are all at @domain.org. I have successfully set up "smart host" using this line in the sendmail.mc file:

    define(`SMART_HOST', `192.168.0.4')dnl

    Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file:

    FEATURE(`masquerade_entire_domain')dnl
    FEATURE(`masquerade_envelope')dnl
    FEATURE(`allmasquerade')dnl
    MASQUERADE_AS(`domain.org')dnl
    MASQUERADE_DOMAIN(`domain.org.')dnl
    MASQUERADE_DOMAIN(`localhost.localdomain.')dnl

    This rewrites "to" addresses, but not "from" addresses. Testing from the command line:

    sendmail -v [email protected]

    The mail is always shown as from the local user (in this case root, or my local user account). I had read that the "sendmail" command sometimes bypasses masquerading. Nevertheless, using the "mail" command has the same result. After that, I have explored several "solutions", including:

    mailertable
    virtusertable
    FEATURE(`accept_unresolvable_domains')dnl
    LOCAL_DOMAIN(`localhost.localdomain')dnl
    FEATURE(`genericstable')dnl
    /etc/mail/access file
    /etc/mail/local-host-names file
    /etc/mail/trusted-users file

    All to no effect.

    The last thing I've tried

    So, I decided to go in a different direction, and try to set the envelope "from" address via PHP, using either the configuration in /etc/php.ini, or adding the -f parameter to the mail() function or to the sendmail command. If I run this command:

    sendmail -v -f [email protected] [email protected]

    I get this error in /var/log/maillog:

    Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost
    Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown
    Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown
    Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2)

    Which is basically a "User unknown" 550 error.

    Help

    Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed in it, over days of trying things. Why is changing the envelope "from" address via the command line generating a "User unknown" error?

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server), or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing.

    In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

    Parameter     Number of users   % of total users   Number of sessions   Number of usages
    SQL Server    28                19.0               8115                 8115
    MDB           114               77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.)

    So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data

    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users

    About half of people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature

    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About-Box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"

    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless to collect it.

    People get obsessed with significance levels

    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error and misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity

    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data

    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

    Parameter     Number of users   % of total users   Number of sessions   Number of usages
    SQL Server    13                9.0                1115                 1115
    MDB           5                 4.2                449                  449

    Trial users:

    Parameter     Number of users   % of total users   Number of sessions   Number of usages
    SQL Server    15                10.0               7000                 7000
    MDB           114               77.6               1000                 1000

    How do you interpret this data? It's one of:

    - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB support.
    - Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    - We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage

    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who the user is, rather than what the user does, are easier to answer and action.

    Conclusion

    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
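    The segmentation step described above (splitting raw usage rows into trial versus post-trial users before drawing conclusions) is mechanical enough to sketch in code. The following is purely illustrative: the UsageRecord shape is a hypothetical schema, not SmartAssembly's actual data model.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical shape of one raw usage row -- not SmartAssembly's real schema.
    class UsageRecord
    {
        public string UserId;
        public string Parameter;   // e.g. "SQL Server" or "MDB"
        public bool IsPostTrial;   // true if the user is on a paid, post-trial licence
    }

    static class Segmentation
    {
        // Splits raw rows into the trial / post-trial segments discussed above
        // and counts distinct users and total usages per parameter.
        public static void Report(IEnumerable<UsageRecord> rows)
        {
            foreach (var segment in rows.GroupBy(r => r.IsPostTrial ? "Post-trial users" : "Trial users"))
            {
                Console.WriteLine(segment.Key);
                foreach (var parameter in segment.GroupBy(r => r.Parameter))
                {
                    int users = parameter.Select(r => r.UserId).Distinct().Count();
                    Console.WriteLine("  {0}: {1} users, {2} usages", parameter.Key, users, parameter.Count());
                }
            }
        }
    }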

    Read the article

  • Platinum SEO Plugin Vs All-In-One SEO Pack

    The two biggest SEO optimization tools for WordPress are probably Platinum SEO and All-in-One SEO Pack. This article explains the relationship of the two and gives a brief rundown of installing Platinum SEO plugin on your blog. It then provides a comparison of the two, feature-for-feature.

    Read the article

  • Picasa 3.9 login fails with 2-factor authentication

    - by Paul Pomes
    I've installed Picasa 3.9 via the instructions at webupd8; however, the login window keeps failing with the message, "You must be connected to the Internet to use this feature." If I click "Try again", I successfully get past the first login screen of username and password. Next I'm prompted for the verification code, which then takes me back to the "You must be connected to the Internet to use this feature" screen again.

    Read the article

  • How do you track display impressions in Google Analytics on non Google networks?

    - by dee
    Google Analytics has a Multi-Channel Funnel analysis feature that we'd like to use to understand assisted conversions and how each channel has impacted conversion, beyond just last-interaction attribution. My current understanding is that the impression-tracking part of this feature works really well when playing within Google's search and display networks. Outside of Google's network, I suspect that impression tracking will no longer "just work" and feed back into GA appropriately. What are our options for tracking display impressions on other advertising networks so that we can attribute value correctly with GA?

    Read the article

  • Getting Started with Maps in SQL Server 2008 R2 Reporting Services

    I noticed a new feature in SQL Server 2008 R2 Reporting Services that allows you to render maps in your reports. Can you provide some details on this new feature, and can I take advantage of it even though I don't have any spatial columns in my data warehouse?

    Read the article

  • TechEd 2014 Day 3

    - by John Paul Cook
    There is some confusion about durability of data stored in SQL Server in-memory tables, so some review of the concepts is appropriate. The in-memory option is enabled at the database level. Enabling it at the database level only gives you the option to specify the in-memory feature on a table by table basis. No existing tables or new tables will by default become in-memory tables when you enable the feature at the database level. If you choose to make a table an in-memory table, by default it is...(read more)

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD in our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with the workshop. One developer felt he wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident with our scenario coverage (she had a feeling that we could have missed other important stuff), but more importantly felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself and in a shorter period of time.

    So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/done vision.

    But in practice, we still think it is more cost effective to first have the BA work on her own on the examples and only then have the scenarios reviewed/reworked by the 3 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss stuff, plus we still get to review the scenarios afterward to double check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff. The thing we do is review what she wrote and see if we then have a common understanding (which could lead to rewriting some of her scenarios or adding new ones she could have missed).

    This workshop lasted 1h30, and by the end of it, we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get that to work effectively in practice?

    Read the article

  • Silverlight 3 as a Desktop Application (Out-of-Browser Applications)

    - by kaleidoscope
    A cool new feature in Silverlight 3 is the ability to run Silverlight applications out of the browser, which resembles a desktop application. Only a few configuration steps are needed to enable your application to run offline. To enable this feature you only need to open the AppManifest.xml file, which can be found in the Properties folder, and add some settings as follows:

    - EntryPointAssembly
    - EntryPointType
    - ShortName
    - Title
    - Blurb

    More details can be found at: http://www.silverlightshow.net/items/Silverlight-3-as-a-Desktop-Application-Out-of-Browser-Applications.aspx

    Amit, S
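    As a side note not covered in the original post, Silverlight 3 also exposes a runtime API for detecting and triggering out-of-browser installation from code. A minimal, illustrative sketch (the button name, handler wiring and message text are hypothetical):

    using System;
    using System.Windows;
    using System.Windows.Controls;

    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
            Application.Current.InstallStateChanged += OnInstallStateChanged;
        }

        // Wired to a hypothetical "Install" button; Install() must be called
        // from a user-initiated event such as a click.
        private void InstallButton_Click(object sender, RoutedEventArgs e)
        {
            if (Application.Current.InstallState == InstallState.NotInstalled)
            {
                Application.Current.Install();
            }
        }

        private void OnInstallStateChanged(object sender, EventArgs e)
        {
            // React to the app being installed as a desktop (out-of-browser) application.
            if (Application.Current.InstallState == InstallState.Installed)
            {
                MessageBox.Show("Installed. Launch it from the Start menu or desktop shortcut.");
            }
        }
    }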

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot-upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition-Based Redefinition.

    Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database.

    To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION and a later version of the application with a data source that references the later EDITION. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently.

    There is a catch - you need to be running with a versioned database and a versioned application initially so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained.

    There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII?

    Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added?

    My understanding: I feel an implementation of RAII must satisfy two requirements:

    1. The lifetime of a resource must be bound to a scope.
    2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at the point of use of the class. The class library creator must, of course, explicitly implement a destructor or Dispose() method.

    Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope:

    void test()
    {
        using (Resource r = new Resource())
        {
            r.foo();
        } //resource released on scope exit
    }

    This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar.

    void test()
    {
        //Programmer forgot (or was not aware of the need) to explicitly
        //bind Resource to a scope.
        Resource r = new Resource();
        r.foo();
    } //resource leaked!!!

    I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers.

    class Resource - ScopeBound
    {
        /* class details */
        void Dispose()
        {
            //free resource
        }
    }

    void test()
    {
        //class Resource was flagged as ScopeBound so the tie to the stack is implicit.
        Resource r = new Resource(); //r is a smart-pointer
        r.foo();
    } //resource released on scope exit.

    I think implicitness is "worth it", just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?
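    Not part of the original question, but for context: the conventional C# mitigation for a forgotten using block is the full dispose pattern, where a finalizer acts as a non-deterministic safety net. A minimal sketch of that well-known pattern (the Resource class here is illustrative):

    using System;

    public class Resource : IDisposable
    {
        private bool disposed;

        public void foo() { /* use the resource */ }

        public void Dispose()
        {
            Dispose(true);
            // The resource was released deterministically; the finalizer no longer needs to run.
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // Release managed resources here.
            }
            // Release unmanaged resources here.
            disposed = true;
        }

        // Safety net: runs eventually if the caller forgot using/Dispose(),
        // but at a time chosen by the garbage collector, not at scope exit.
        ~Resource()
        {
            Dispose(false);
        }
    }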

    Read the article

  • Terminal command autocomplete

    - by Edhoari
    I'm currently trying to switch from openSUSE to Ubuntu as my main OS. While most of openSUSE's features are also present in Ubuntu, there is one that isn't. In openSUSE, I can always use Ctrl+Up to autocomplete the command line using previously typed commands. That feature is very useful for me, as it allows me to work faster without having to retype long commands. Can anyone provide a way to enable this on Ubuntu? Thank you

    Read the article

  • Exploring Semantic Search Key Term Relevance

    SQL Server's 'Semantic Search' feature seemed exciting when first shown. Was it really true that Microsoft had come up with a system to rival the industry leaders, one that could extract the contextual meaning of terms in text, or automatically categorise the subject matter of text? On first inspection, it seems unlikely.

    Read the article

  • General type conversion without risking Exceptions

    - by Mongus Pong
    I am working on a control that can take a number of different datatypes (anything that implements IComparable). I need to be able to compare these with another variable passed in. If the main datatype is a DateTime, and I am passed a String, I need to attempt to convert the String to a DateTime to perform a date comparison. If the String cannot be converted to a DateTime, then do a String comparison.

    So I need a general way to attempt to convert from any type to any type. Easy enough: .NET provides us with the TypeConverter class. Now, the best I can work out to determine whether the String can be converted to a DateTime is to use exceptions. If ConvertFrom raises an exception, I know I can't do the conversion and have to do the string comparison. The following is the best I've got:

    string theString = "99/12/2009";
    DateTime theDate = new DateTime(2009, 11, 1);
    IComparable obj1 = theString as IComparable;
    IComparable obj2 = theDate as IComparable;
    try
    {
        TypeConverter converter = TypeDescriptor.GetConverter(obj2.GetType());
        if (converter.CanConvertFrom(obj1.GetType()))
        {
            Console.WriteLine(obj2.CompareTo(converter.ConvertFrom(obj1)));
            Console.WriteLine("Date comparison");
        }
    }
    catch (FormatException)
    {
        Console.WriteLine(obj1.ToString().CompareTo(obj2.ToString()));
        Console.WriteLine("String comparison");
    }

    Part of our standards at work state that: "Exceptions should only be raised in an exceptional situation - i.e. when an error is encountered." But this is not an exceptional situation. I need another way around it. Most variable types have a TryParse method which returns a boolean to allow you to determine whether the conversion has succeeded or not. But there is no TryConvert method available on TypeConverter. CanConvertFrom only determines whether it is possible to convert between these types and doesn't consider the actual data to be converted. The IsValid method is also useless. Any ideas?

    EDIT: I cannot use AS and IS. I do not know either data type at compile time, so I don't know what to As and Is to!!!

    EDIT: Okay, nailed the bastard. It's not as tidy as Marc Gravell's, but it works (I hope). Thanks for the inspiration, Marc. Will work on tidying it up when I get the time, but I've got a big stack of bugfixes that I have to get on with.

    public static class CleanConverter
    {
        /// <summary>
        /// Stores the cache of all types that can be converted to all types.
        /// </summary>
        private static Dictionary<Type, Dictionary<Type, ConversionCache>> _Types = new Dictionary<Type, Dictionary<Type, ConversionCache>>();

        /// <summary>
        /// Try parsing.
        /// </summary>
        public static bool TryParse(IComparable s, ref IComparable value)
        {
            // First get the cached conversion method.
            Dictionary<Type, ConversionCache> type1Cache = null;
            ConversionCache type2Cache = null;

            if (!_Types.ContainsKey(s.GetType()))
            {
                type1Cache = new Dictionary<Type, ConversionCache>();
                _Types.Add(s.GetType(), type1Cache);
            }
            else
            {
                type1Cache = _Types[s.GetType()];
            }

            if (!type1Cache.ContainsKey(value.GetType()))
            {
                // We haven't converted this type before, so create a new conversion
                type2Cache = new ConversionCache(s.GetType(), value.GetType());

                // Add to the cache
                type1Cache.Add(value.GetType(), type2Cache);
            }
            else
            {
                type2Cache = type1Cache[value.GetType()];
            }

            // Attempt the parse
            return type2Cache.TryParse(s, ref value);
        }

        /// <summary>
        /// Stores the method to convert from Type1 to Type2
        /// </summary>
        internal class ConversionCache
        {
            internal bool TryParse(IComparable s, ref IComparable value)
            {
                if (this._Method != null)
                {
                    // Invoke the cached TryParse method.
                    object[] parameters = new object[] { s, value };
                    bool result = (bool)this._Method.Invoke(null, parameters);
                    if (result)
                        value = parameters[1] as IComparable;
                    return result;
                }
                else
                    return false;
            }

            private MethodInfo _Method;

            internal ConversionCache(Type type1, Type type2)
            {
                // Use reflection to get the TryParse method from it.
                this._Method = type2.GetMethod("TryParse", new Type[] { type1, type2.MakeByRefType() });
            }
        }
    }
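    A quick usage sketch of the poster's CleanConverter, assuming the class above compiles as shown and reusing the same sample values as the first snippet:

    using System;

    class Demo
    {
        static void Main()
        {
            IComparable input = "99/12/2009";                  // the incoming string
            IComparable target = new DateTime(2009, 11, 1);    // seeds the target type (DateTime)

            if (CleanConverter.TryParse(input, ref target))
            {
                // The string parsed as a DateTime, so compare as dates.
                Console.WriteLine("Date comparison: " + new DateTime(2009, 11, 1).CompareTo(target));
            }
            else
            {
                // Not a valid date ("99/12/2009" has no month 99), so fall back to strings.
                Console.WriteLine("String comparison: " + input.ToString().CompareTo(target.ToString()));
            }
        }
    }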

    Read the article

  • Right div pushing center div further down

    - by Chase
    I cannot get this last div to go up properly in my layout and have tried countless things. I'm not sure what's going on with my css? Here is a screenshot: http://img291.imageshack.us/img291/5377/screenshot20100528at123.png #events { float: left; width: 420px; margin:0 0 5px 0; font-family:Helvetica, Arial, sans-serif; font-size: 16px; background-image: url(images/lastfmhead.jpg); background-repeat: no-repeat; height:360px; overflow:hidden; display: inline; } #events table { width:419px; } #events th, td { padding: 3px 3px; } #whatsup ul, #citywhatsup ul { margin:0 5px 0 5px; text-align:left; font-family:Helvetica, Arial, sans-serif; } #whatsup ul li, #citywhatsup ul li{ list-style-type:none; list-style: none; } #whatsup hr, #citywhatsup hr{ border: none 0; border-top: 1px dashed #990000;/*the border*/ width: 100%; height: 1px; margin: 1px auto 5px auto;/*whatever the total width of the border-top and border-bottom equal*/ } #events ul { margin:5px 5px 0 5px; } #events ul li{ list-style-type:none; list-style: none; } #attending ul { display: inline-block; margin: 0; width:200px; } #attending ul li { display: inline-block; list-style-image:none; margin:0; padding:2px 5px 2px 5px; } #attending { width: 230px; margin:0 13px 5px 12px; float: left; display: inline; background-image: url(images/otherhead.jpg); background-repeat: no-repeat; text-align:center; height:360px; overflow:hidden; } #whatsup { width: 230px; margin:0 0 5px 0; float: left; display:inline; background-image: url(images/otherhead.jpg); background-repeat: no-repeat; text-align:center; } #eventtitle{ margin: 3px 0 -3px 0; } #eventtitle { color: #900; margin-left: 5px; font-size:16px; } #tweetit { color: #487B96 !important; font-size:16px; margin: 3px 0 -4px 0; } #photos { background-image: url(images/flickrheader.jpg); background-repeat: no-repeat; width: 665px; clear:both; } <div id="cityevents"> <h2> Events </h2> <table> <th>Date </th><th> Who's Playing </th><th> Venue </th><th> City </th><th> Tickets </th> <tr><td>May 28</td><td><a href='http://www.songkick.com/concerts/5384486?utm_source=1121&utm_medium=partner' target='_blank'>Jill King</a></td><td>Open Eye Cafe</td><td>Carrboro</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 28</td><td><a href='http://www.songkick.com/concerts/5281141?utm_source=1121&utm_medium=partner' target='_blank'>Ahleuchatistas</a></td><td>Nightlight</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 28</td><td><a href='http://www.songkick.com/concerts/4970896?utm_source=1121&utm_medium=partner' target='_blank'>Sam Quinn</a></td><td>Local 506</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/5303661?utm_source=1121&utm_medium=partner' target='_blank'>Cagematch Mayhem, Champion Vs Au Jus, Heartbreaker Vs Au Jus</a></td><td>DSI Comedy Theater</td><td>Carrboro</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/5303661/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/4722066?utm_source=1121&utm_medium=partner' target='_blank'>Lewd Acts, Converge, Gaza, Black Breath</a></td><td>Cat's Cradle</td><td>Carrboro</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/4722066/tickets?utm_source=1121&utm_medium=partner' target='_blank'> 
Find </a></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/4647076?utm_source=1121&utm_medium=partner' target='_blank'>Nate Currin</a></td><td>Broad Street Cafe</td><td>Durham</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/5580211?utm_source=1121&utm_medium=partner' target='_blank'>International Night</a></td><td>Serena Rtp</td><td>Durham</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/5580211/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/4770241?utm_source=1121&utm_medium=partner' target='_blank'>Jill King</a></td><td>Caffe Driade</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/5406411?utm_source=1121&utm_medium=partner' target='_blank'>Sunbears!</a></td><td>Local 506</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 29</td><td><a href='http://www.songkick.com/concerts/4924136?utm_source=1121&utm_medium=partner' target='_blank'>Studio Gangsters</a></td><td>The Reservoir</td><td>Carrboro</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 30</td><td><a href='http://www.songkick.com/concerts/5252161?utm_source=1121&utm_medium=partner' target='_blank'>She Wants Revenge</a></td><td>Cat's Cradle</td><td>Carrboro</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>May 30</td><td><a href='http://www.songkick.com/concerts/4436326?utm_source=1121&utm_medium=partner' target='_blank'>Unheard Radio Battle of the Bands</a></td><td>Mansion 462</td><td>Chapel Hill</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/4436326/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>May 30</td><td><a href='http://www.songkick.com/concerts/4924141?utm_source=1121&utm_medium=partner' target='_blank'>Studio Gangsters</a></td><td>The Cave</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>Jun 2</td><td><a href='http://www.songkick.com/concerts/5252881?utm_source=1121&utm_medium=partner' target='_blank'>Jeanne Jolly</a></td><td>Caffe Driade</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>Jun 2</td><td><a href='http://www.songkick.com/concerts/4628026?utm_source=1121&utm_medium=partner' target='_blank'>James Husband, Of Montreal</a></td><td>Cat's Cradle</td><td>Carrboro</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/4628026/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>Jun 2</td><td><a href='http://www.songkick.com/concerts/5019466?utm_source=1121&utm_medium=partner' target='_blank'>Camera Obscura</a></td><td>Duke Gardens</td><td>Durham</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/5019466/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>Jun 3</td><td><a href='http://www.songkick.com/concerts/4226511?utm_source=1121&utm_medium=partner' target='_blank'>Reverend Horton Heat, Cracker, Legendary Shack Shakers</a></td><td>Cat's Cradle</td><td>Carrboro</td><td 
style='text-align:center;'><a href='http://www.songkick.com/concerts/4226511/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>Jun 3</td><td><a href='http://www.songkick.com/concerts/5253371?utm_source=1121&utm_medium=partner' target='_blank'>American Aquarium</a></td><td>Local 506</td><td>Chapel Hill</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>Jun 4</td><td><a href='http://www.songkick.com/concerts/4285251?utm_source=1121&utm_medium=partner' target='_blank'>Laurence Juber</a></td><td>The ArtsCenter</td><td>Carrboro</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr><tr><td>Jun 4</td><td><a href='http://www.songkick.com/concerts/5642566?utm_source=1121&utm_medium=partner' target='_blank'>Community Jam, Pt Scarborough Is a Movie, Armageddon'it</a></td><td>DSI Comedy Theater</td><td>Carrboro</td><td style='text-align:center;'><a href='http://www.songkick.com/concerts/5642566/tickets?utm_source=1121&utm_medium=partner' target='_blank'> Find </a></td></tr><tr><td>Jun 4</td><td><a href='http://www.songkick.com/concerts/4676216?utm_source=1121&utm_medium=partner' target='_blank'>Big Bill Morganfield</a></td><td>Papa Mojos Roadhouse</td><td>Durham</td><td style='text-align:center;'><span style='color: #999'> Find </span></td></tr> </table> </div> <!-- Events --> <div id="citywhatsup"> <h2> What's Up? <div id="tweetit"><a class="btn-slide">Tell em'</a> </div></h2> <div id="twitpanel"></div> <script type="text/javascript"> twttr.anywhere(function (T) { T("#twitpanel").tweetBox({ height: 100, width: 215, label: '', defaultContent: "" }); }); </script> <script type="text/javascript"> twttr.anywhere(function (T) { T("#whatsup").linkifyUsers(); }); </script> <ul> <li><img src='http://a1.twimg.com/profile_images/898693876/4604414396_0464180430_b_normal.jpg' alt='kaiten_keiku' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @kaiten_keiku: <span style='text-align:justify;'>@Charlotte_Nao ????????~?????????!</span> - <span class='twittertime'>May 28 12:37AM</span></li><hr/><li><img src='http://a3.twimg.com/profile_images/612153581/bowdown_normal.jpg' alt='bugn' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @bugn: <span style='text-align:justify;'>@Bravotv (sitc2 as rhony) Bethenny-Carrie, Sonja-Samantha, Alex-Miranda, Ramona-Charlotte</span> - <span class='twittertime'>May 28 12:36AM</span></li><hr/><li><img src='http://a1.twimg.com/profile_images/844630278/mj_normal.jpg' alt='Myra_Jones' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @Myra_Jones: <span style='text-align:justify;'>@t_weet123 If you're still in Charlotte then you need to head to Whiskey River...they say Luke B. 
just walked in and started drinking.</span> - <span class='twittertime'>May 28 12:36AM</span></li><hr/><li><img src='http://a1.twimg.com/profile_images/936667468/110971230_normal.jpg' alt='THEORACLE2' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @THEORACLE2: <span style='text-align:justify;'>@MsKamilah08 are yall in charlotte?</span> - <span class='twittertime'>May 28 12:36AM</span></li><hr/><li><img src='http://a1.twimg.com/profile_images/767244842/7AM_normal.jpg' alt='mtollefsrud' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @mtollefsrud: <span style='text-align:justify;'>@vosler09 thinks I'm Charlotte.</span> - <span class='twittertime'>May 28 12:36AM</span></li><hr/><li><img src='http://a3.twimg.com/profile_images/936496517/DSCF0317_-_Copy_normal.JPG' alt='Thasian' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @Thasian: <span style='text-align:justify;'>I like #CharMeck #Charlotte | Atlanta = #No #FAIL #EPICFAIL</span> - <span class='twittertime'>May 28 12:36AM</span></li><hr/><li><img src='http://a3.twimg.com/profile_images/695551715/NASCAR_logo_flag_normal.jpg' alt='NascarNewsNow' height=40px; width=40px; style='border:0px; float: left; padding-right:4px;'/> @NascarNewsNow: <span style='text-align:justify;'>#NASCAR #RACING News from the track: Charlotte Motor Speedway | Nascar Leath: Ahh, the waiting is... http://bit.ly/b2DToq #NHRA #DAYTONA500</span> - <span class='twittertime'>May 28 12:35AM</span></li> </ul> </div> <div id="photos"> <h2> Recent Photos </h2> <ul> <li><a href='http://farm5.static.flickr.com/4029/4646832962_980f936db9.jpg' target='_blank' rel='lightbox-photos' title='05 23 10 Jamie's Baby Shower 097'><img src='http://farm5.static.flickr.com/4029/4646832962_980f936db9_t.jpg' alt='05 23 10 Jamie's Baby Shower 097' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm4.static.flickr.com/3176/4646218481_d06829a778.jpg' target='_blank' rel='lightbox-photos' title='summer'><img src='http://farm4.static.flickr.com/3176/4646218481_d06829a778_t.jpg' alt='summer' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4032/4646833312_7b1de5390a.jpg' target='_blank' rel='lightbox-photos' title='100_0064'><img src='http://farm5.static.flickr.com/4032/4646833312_7b1de5390a_t.jpg' alt='100_0064' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4008/4646832834_784a0a9ed1.jpg' target='_blank' rel='lightbox-photos' title=''><img src='http://farm5.static.flickr.com/4008/4646832834_784a0a9ed1_t.jpg' alt='' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4066/4646218735_b37d8fd9e5.jpg' target='_blank' rel='lightbox-photos' title='DSC05524'><img src='http://farm5.static.flickr.com/4066/4646218735_b37d8fd9e5_t.jpg' alt='DSC05524' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4054/4646830604_97afd54623.jpg' target='_blank' rel='lightbox-photos' title='DTLA graff'><img src='http://farm5.static.flickr.com/4054/4646830604_97afd54623_t.jpg' alt='DTLA graff' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4066/4646833048_7a9ab28733.jpg' target='_blank' rel='lightbox-photos' title='100_0243.jpg'><img src='http://farm5.static.flickr.com/4066/4646833048_7a9ab28733_t.jpg' alt='100_0243.jpg' height=100px; width=100px; 
style='border:0px;'/></a></li><li><a href='http://farm4.static.flickr.com/3399/4646832626_de89d0fb0e.jpg' target='_blank' rel='lightbox-photos' title='6W????'><img src='http://farm4.static.flickr.com/3399/4646832626_de89d0fb0e_t.jpg' alt='6W????' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4005/4646832826_0d9e8afe19.jpg' target='_blank' rel='lightbox-photos' title='IMG_9381'><img src='http://farm5.static.flickr.com/4005/4646832826_0d9e8afe19_t.jpg' alt='IMG_9381' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4046/4646830752_dfc32b1740.jpg' target='_blank' rel='lightbox-photos' title='11'><img src='http://farm5.static.flickr.com/4046/4646830752_dfc32b1740_t.jpg' alt='11' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4010/4646215929_80b78f0007.jpg' target='_blank' rel='lightbox-photos' title='GEDC8592'><img src='http://farm5.static.flickr.com/4010/4646215929_80b78f0007_t.jpg' alt='GEDC8592' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4064/4646832384_7fc8d31e11.jpg' target='_blank' rel='lightbox-photos' title='2010 Advanced Grappling'><img src='http://farm5.static.flickr.com/4064/4646832384_7fc8d31e11_t.jpg' alt='2010 Advanced Grappling' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4049/4646218143_c108276325.jpg' target='_blank' rel='lightbox-photos' title='P1270352'><img src='http://farm5.static.flickr.com/4049/4646218143_c108276325_t.jpg' alt='P1270352' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4009/4646217767_3900f39475.jpg' target='_blank' rel='lightbox-photos' title='P1270351'><img src='http://farm5.static.flickr.com/4009/4646217767_3900f39475_t.jpg' alt='P1270351' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4027/4646831284_30b6e6da36.jpg' target='_blank' rel='lightbox-photos' title='Image245'><img src='http://farm5.static.flickr.com/4027/4646831284_30b6e6da36_t.jpg' alt='Image245' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm4.static.flickr.com/3399/4646218295_63a899d322.jpg' target='_blank' rel='lightbox-photos' title='IMG_0037'><img src='http://farm4.static.flickr.com/3399/4646218295_63a899d322_t.jpg' alt='IMG_0037' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4059/4646218159_01d5b02c3f.jpg' target='_blank' rel='lightbox-photos' title='DooDah2010-6665'><img src='http://farm5.static.flickr.com/4059/4646218159_01d5b02c3f_t.jpg' alt='DooDah2010-6665' height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://farm5.static.flickr.com/4053/4646834404_615e09b715.jpg' target='_blank' rel='lightbox-photos' title='IMG_2668'><img src='http://farm5.static.flickr.com/4053/4646834404_615e09b715_t.jpg' alt='IMG_2668' height=100px; width=100px; style='border:0px;'/></a></li> </ul> </div> <div id="videos"> <h2> Recent Videos </h2> <ul> <li><a href='http://www.youtube.com/watch?v=_oc5-0yFoYg&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/_oc5-0yFoYg/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=1pRXKeYVYHc&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/1pRXKeYVYHc/default.jpg'height=100px; 
width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=yzRQUu-ZBdw&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/yzRQUu-ZBdw/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=mud-A76nLro&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/mud-A76nLro/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=TLOboW19_OA&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/TLOboW19_OA/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=PcYja2jjvi0&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/PcYja2jjvi0/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=KKjklCEMrPk&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/KKjklCEMrPk/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=AyUwY6PRX0Y&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/AyUwY6PRX0Y/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=8Sf2-7RjVYs&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/8Sf2-7RjVYs/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=xGMayoJmhE8&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/xGMayoJmhE8/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=paseBeB6Cb8&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/paseBeB6Cb8/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li><li><a href='http://www.youtube.com/watch?v=l_FSRUtMMik&feature=youtube_gdata' target='_blank'><img src='http://i.ytimg.com/vi/l_FSRUtMMik/default.jpg'height=100px; width=100px; style='border:0px;'/></a></li> </ul> </div> </div><!-- #content -->

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking a most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required that every other rule, its having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interact with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is also considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.  
Even when many people want a feature, if that feature is costly to develop, it may not have as high a return on investment as features that are cheaper but less popular. The act of prioritization often causes conflict in an environment. Customer Service thinks that feature X is the most important, because it will stop people from calling. Operations thinks that feature Y is the most important, because it will stop servers from crashing. Developers think that feature Z is most important because it will make writing software much easier for them. All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories. The Product Owner will determine which feature gives the best return on investment, and the other features will have to wait their turn, which means that someone will not have their top-priority feature implemented first. A weak Product Owner will refuse to do prioritization. I've heard from multiple Product Owners the following phrase, "Well, it's all got to be done, so what does it matter what order we do it in?" If your product owner is using this phrase, you need a new Product Owner. Order is VERY important. In Scrum, every release is potentially shippable. If the wrong-priority items are developed, then the value added in each release isn't what it should be. Additionally, a Product Owner with this mindset doesn't understand Agile. A product is NEVER finished until the company has decided that it is no longer a going concern and they are no longer going to sell the product. Therefore, prioritization isn't an event, it's something that continues every day. The logical extension of the phrase "It's all got to be done" is that you will never ship your product, since a product is never "done." Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple. The top-priority items are copied into the respective backlogs in order, and the task is complete. The team does have the right to shuffle things around a little in the iteration backlog. For example, they may determine that working on story C with story A is appropriate because they're related, even though story B is technically a higher priority than story C. Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they'll work on Story C since it's smaller. They can't, however, go deep into the backlog to pick stories to implement. The team and the Product Owner should work together to determine what's best for the company. Prioritization is time-consuming, but it's one of the most important things a Product Owner does.

Determines Release Dates and Iteration Dates

Product Owners are responsible for determining release dates for a product. A common misconception that Product Owners have is that every "release" needs to correspond with an actual release to customers. This is not the case. In general, releases should be no more than 3 months long. You may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency. The date is far enough away that they don't need to give the release their full attention.
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time-consuming to do. The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers. The determination should be made on whether or not the features contained in the release are valuable enough and complete enough that the customers will see real value in the release. Often, some features will take more than three months to reach a state where they qualify for a release, or they will need additional supporting features before they can be released. The Product Owner has the right to make this determination. In addition to release dates, the Product Owner will also help determine iteration dates. In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time. If the iteration length is changed every iteration, you're not doing Scrum. Iteration lengths help the team and company get into a rhythm of developing quality software. Iterations should be somewhere between 2 and 4 weeks in length. Any shorter, and significant software will likely not be developed. Any longer, and the team won't feel urgency and planning will become very difficult. Iterations may not be extended during the iteration. Companies where Scrum isn't really followed will often use this as a strategy to complete all stories. They don't want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team. Companies like this typically don't allow failure. This is unhealthy. Failure is part of life, and unless we learn from it, we can't improve. I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult. For example, evaluating the performance of the team's estimation efforts becomes much more difficult if the iteration length varies. Also, the team must have a velocity measurement. If the iteration length varies, measuring velocity becomes impossible, and upper management will no longer have the ability to evaluate the team's performance. People external to the team will no longer have the ability to determine when key features are likely to be developed. Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization.
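To see why a varying iteration length ruins the velocity measurement, consider this small sketch (the story-point figures are invented): the raw points per iteration look like an improvement, while points per week show that the team's pace has not changed at all.

    using System;

    class VelocityExample
    {
        static void Main()
        {
            // Invented data: completed story points and iteration length in weeks.
            var iterations = new[]
            {
                (Points: 20, Weeks: 2),
                (Points: 21, Weeks: 2),
                (Points: 30, Weeks: 3),   // looks like a jump in velocity...
            };

            foreach (var it in iterations)
            {
                // ...but normalizing by length shows the pace is essentially flat.
                Console.WriteLine($"{it.Points} points in {it.Weeks} weeks = {(double)it.Points / it.Weeks:F1} points/week");
            }
        }
    }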
Develops Story Details and Helps the Team Understand Those Details

A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation. Stories should be nothing more than short, one-line statements about the functionality. The team will then converse with the Product Owner about the details of that story. The Product Owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old-fashioned requirements documentation. The team should only develop the documentation that is required and should not develop documentation that is created only because there is a process to do so. In general, what we see work best is this: in the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, develops the details of that story. They'll figure out what business rules are required, potentially make paper prototypes or other lightweight mock-ups, and they seek to understand the story and what is implied. Note that the time allowed for this task is deliberately short. The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I've found that teams will end up with Big Design Up Front and traditional requirements documents. This is a waste of time, since the team will then need to have discussions with the Product Owner to figure out what the requirements document says. Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development. If something comes up during development, you can address it at that time and figure out what you want to do. The goal is to keep things as lightweight as possible so that everyone can move as quickly as possible.

Helps QA to Develop Acceptance Tests

In Scrum, no story can be counted until it is accepted by QA. Because of this, acceptance tests are very important to the team. In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration, so that the team can make sure that the tasks they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable. Note that the Product Owner needs to be careful about specifying that the feature will work "perfectly" at the end of the iteration. In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like "Do it right the first time." Not only are these statements damaging to the team (as if they would try to do it WRONG the first time . . .), they're also ignoring the iterative nature of Scrum. Additionally, a weak Product Owner will seek to add scope in the acceptance testing. For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance. This often causes the team to miss the iteration because scope that wasn't planned on is included. There are ways that the team can mitigate this problem. For example, include extra "Product Owner" time to deal with the uncertainty that you know will be introduced by the Product Owner. This will slow the perceived velocity of the team and is not ideal, since they'll be doing more work than they get credit for.
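As a rough sketch of an acceptance test agreed on at the start of the iteration (assuming an NUnit-style test framework; the LoginService class and its behavior are hypothetical), note that it asserts only the slice being developed this iteration and says nothing about the parts that come later:

    using NUnit.Framework;

    // Hypothetical system under test, stubbed here only so the sketch is self-contained.
    public class LoginService
    {
        public (bool Succeeded, string Error) Login(string email, string password)
            => (email == "user@example.com", null);   // this slice: no password check yet
    }

    [TestFixture]
    public class LoginAcceptanceTests
    {
        // Acceptance for the current slice only: a known email address gets in.
        // Password hashing and the real user store belong to later slices and
        // are deliberately not asserted here.
        [Test]
        public void KnownEmailAddressIsAccepted()
        {
            var service = new LoginService();
            var result = service.Login("user@example.com", "any-password");
            Assert.That(result.Succeeded, Is.True);
        }
    }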
Interacts with the Customer to Make Sure that the Product is Meeting the Customer's Needs

Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer. One of the great things about Agile is that if something doesn't work, we can revisit it in a future iteration! This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn't find them useful, having the team make tweaks is valuable. In general, most software will be 80 to 90 percent "right" after the initial round, and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor Product Owners will think that they know the answers already, that their customers are silly and do stupid things, and that they don't need customer input. If you have a Product Owner that is afraid to show the team's work to real customers, you probably need a different Product Owner. Up next, "Who Makes a Good Product Owner," followed by "Messing with the Team." Technorati Tags: Scrum, Product Owner


  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

The drivers of software development are increments, small increments, tiny increments. An increment is a slice of the overall requirement scope thin enough to implement and get feedback on from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to be implemented by tomorrow evening at the latest is one side of the coin. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality and Quality alike are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy, none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. Automatic function inlining by compilers makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can't code without them - whether you write a function yourself or not. Because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

    puts "Hello, world!"

In C# more is necessary:

    class Program {
        public static void Main () {
            System.Console.Write("Hello, world!");
        }
    }

C# makes the Entry Point function explicit; Ruby does not. But still it's there. So you can think of logic as always running in some function. Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note (a user story) on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check whether the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly what the API of a solution should look like. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.
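As a hypothetical illustration (the function, its behavior, and the expected values are invented for this sketch), such an increment-sized requirement gives the developer one signature to pile logic behind and to point a quick check at:

    using System;

    class IncrementSketch
    {
        // The increment: "Write a function which, given a text, returns the number of words."
        // All logic for this increment lives behind this one signature.
        static int CountWords(string text) =>
            string.IsNullOrWhiteSpace(text)
                ? 0
                : text.Split((char[])null, StringSplitOptions.RemoveEmptyEntries).Length;

        // A throwaway check standing in for a real testing framework.
        static void Main()
        {
            Console.WriteLine(CountWords("Hello, world!") == 2 ? "ok" : "FAILED");
            Console.WriteLine(CountWords("   ") == 0 ? "ok" : "FAILED");
        }
    }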
Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

Entry points

Of course, the logic of a piece of software will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application has only a single Entry Point. All logic is reached from there, regardless of how deep it's nested in classes. But a program with a user interface has at least two Entry Points: one is the main function called upon startup. The other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected, because then some logic could check whether the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed, because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of the software. With batch programs, that's the main function. With GUI programs on the desktop, that's event handlers. With web programs, that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

Collections of batch processors

I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program; today it's hundreds that we bundle up into applications. Each batch processor is represented by an Entry Point as its root and works on a number of resources, from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together, many batch processors - large and small - form the applications the user perceives as a single whole. Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

Features

Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. At the same time it's a tangible unit for developers.
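To make the picture of a program as a collection of Entry Points concrete, here is a minimal sketch (the class and handler names are invented, and the wiring to an actual UI framework is left out) of the little score dialog described above:

    using System;

    // A sketch, not a real WinForms/WPF program: only the Entry Point shapes matter here.
    class ScoreDialogEntryPoints
    {
        // Entry Point 1: triggered by the event "program started".
        static void Main() => Console.WriteLine("dialog shown");

        // Entry Point 2: triggered by clicking "Show my score";
        // reads the selected choices, computes the score, displays it.
        public void BtnShowMyScore_Click(object sender, EventArgs e) { }

        // Entry Point 3: triggered when a choice gets selected;
        // checks whether all questions are answered and enables the button.
        public void Choice_Selected(object sender, EventArgs e) { }

        // Entry Point 4: triggered when the program is closed; persists the choices made.
        public void OnClosing(object sender, EventArgs e) { }
    }

Each of these handlers is the root of its own small batch processor.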
Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code. In this example, the user story translates to the Entry Point triggered by clicking the login button on a login dialog. The batch then retrieves what has been entered via the keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

1. Get input (email address, password)
2. Load user for email address
2.1 If user not found, report error
3. Check password
3.1 Hash password
3.2 Compare hash to hash stored in user
4. Show next dialog

Viewed from 10,000 feet, it's all done by the Entry Point function. And of course that's technically possible. It's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once, you can move forward increment by increment, e.g.:

- First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
- Then hard-code a single user and check the email address. Features 2 and 2.1.
- Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
- Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
- Calculate the real hash for the password. Feature 3.1.
- Switch to the final user directory technology.

Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features or even just one. That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features; entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which maps to a hierarchy of functions, with entry points at the top. I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility, to move forward in increments, is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible.
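Functions, by contrast, are tangible. Here is a rough sketch (all names are invented, and the user store and the hash are the deliberately naive stand-ins from the early increments above) of the Login() Entry Point composed of one function per feature:

    using System;

    // Sketch only: invented names, hard-coded user, naive hash.
    class LoginBatch
    {
        // The Entry Point, triggered by clicking the login button.
        // The parameters stand in for feature 1 (get input from the dialog).
        public static void Login(string email, string password)
        {
            var user = LoadUser(email);                                         // feature 2
            if (user == null) { ReportError("Unknown email address"); return; } // feature 2.1

            if (!CheckPassword(password, user.Value.PasswordHash))              // feature 3
            {
                ReportError("Wrong password");
                return;
            }

            ShowNextDialog();                                                   // feature 4
        }

        // Feature 2: load the user from a very simple persistent store (think: a CSV file).
        static (string Email, string PasswordHash)? LoadUser(string email)
        {
            if (email == "user@example.com") return (email, Hash("secret"));
            return null;
        }

        // Feature 3: check the password = 3.1 hash it + 3.2 compare to the stored hash.
        static bool CheckPassword(string entered, string storedHash) => Hash(entered) == storedHash;

        // Feature 3.1: a deliberately naive "hash" (the length), as in the early increment.
        static string Hash(string password) => password.Length.ToString();

        static void ReportError(string message) => Console.WriteLine(message);
        static void ShowNextDialog() => Console.WriteLine("Next dialog");

        static void Main() => Login("user@example.com", "secret");
    }

Each small function maps back to one line of the feature list, so a product owner can check features individually while the developer always knows where their logic lives.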
Software production is moving forward through requirements one increment at a time, and one function at a time.

In closing

Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments. Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of the software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day, until your understanding has grown to a level where you are able to define the next Spinning increment.

[2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation then will be put into a dozen functions. Still, though, it's important to determine exactly which Entry Points get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

[3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".

