Search Results

Search found 5214 results on 209 pages for 'j unit 122'.

Page 146/209 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName)); streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)); and streams with fully bound inputs (e.g., those defined and composed over To*Stream: sequences or DQC). In each case the stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream, and IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany: StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 through 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed (that is, queries that do not reference the stream selector parameter) there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K: The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream. Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast: We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics of moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways. Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies: Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: Not supported …

    LeftAntiJoin: Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead. Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion: The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads. Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(
            gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax: The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax: UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes: ShiftEventTime => AlterEventStartTime (the AlterEventLifetime overload taking a new start time value has been renamed), and CountByStartTimeWindow => CountWindow.
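    As a worked illustration of the new Top-K shape, here is a minimal sketch in C#; the Reading event type, the field names A and B, and the assumption that xs is an IQStreamable<Reading> bound to a StreamInsight 2.1 server are all hypothetical:

        // Reading is a hypothetical event type with fields A and B;
        // xs is assumed to be an IQStreamable<Reading>.
        const int k = 3;
        var topK =
            from win in xs.SnapshotWindow()
            from b in
                (from x in win
                 orderby x.A
                 select x.B).Take(k)
            select b;
        // Each snapshot window contributes at most k values of B (ordered
        // by A); the query as a whole remains an unbounded stream.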

    Read the article

  • Oracle Brings Analytics to Project Management

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss  Nonprofit and for-profit organizations have many differences, but there is one way they are alike—managers struggle with huge amounts of data generated every day. Project data by itself has limited use—but any organization that can gain insight to make accurate predictions or to use resources more effectively can gain an operational advantage. Oracle’s Primavera P6 Analytics 2.0 business intelligence solution enables organizations using Oracle’s Primavera P6 Professional Project Management to do just that: identify critical issues and uncover trends in stores of project data. Primavera P6 Analytics provides management with the ability to look at not only how a single effort is progressing, but also how the entire organization is doing from a project perspective. The latest release includes new features that make it even easier to gather and analyze critical information. For example, the addition of geocoding gives Primavera P6 Analytics users the ability to track resources geographically on longitude and latitude and use a map to get an overall view of how projects, programs, and activities are deployed. “A nonprofit with relief projects in Vietnam, for example, can drill down to the project and get a world view and a regional view,” says Yasser Mahmud, vice president of product strategy and industry marketing in Oracle’s Primavera Global Business Unit. “Then they can drill down further to show statistics; key performance indicators; and how that program, portfolio, or project work is actually getting done.” The addition of new mobile capabilities to Primavera P6 Analytics puts deep-dive analysis into project managers’ hands with compatibility with major tablet operating systems. Now, nonprofits or for-profits working in remote locations can provide real-time visibility into projects to alert management if issues are occurring that need to be addressed immediately. “Primavera P6 Analytics generates information that can help organizations improve their utilization and trim down overall operating costs,” says Mahmud. “But more importantly, it gives organizations improved visibility.”

    Read the article

  • We've Been Busy: World Tour 2010

    - by [email protected]
    By Brian Dayton on March 11, 2010 11:17 PM Right after Oracle OpenWorld 2009 we went straight into planning for our 2010 World Tour. An ambitious 90+ city tour visiting every continent. The Oracle Applications Strategy Update Tour started January 19th and is in full swing right now. We've put some heavy hitters on the road. If you didn't get a chance to see Steve Miranda, Senior Vice President of Oracle Application Development in Tokyo, Anthony Lye, Senior Vice President of Oracle CRM Development in New Delhi or Sonny Singh, Senior Vice President of Oracle Industries Business Unit in Stockholm, don't worry...we're not done yet. The theme, Smart Strategies: Your Roadmap to the Future, is a nod to the fact that everyone needs to be smart about what's going on in their business and industry right now. But just as important: how to make sure that you're on course to where you need to be down the road. Get the big picture and key trends in "The New Normal" of today's business climate, and drill down and find out about the latest and greatest innovations in Oracle Applications. Check out http://www.oracle.com/events/applicationstour/index.html for an upcoming tour date near you. Pictures, feedback, summaries and learnings from the tour to come soon.

    Read the article

  • Multiline Replacement With Visual Studio

    - by Alois Kraus
    I had to remove some file headers in a bigger project, all of which were of the form:

        #region File Header
        /*[ Compilation unit ----------------------------------------------------------
              Name            : Class1.cs
              Language        : C#
              Creation Date   :
              Description     :
        -----------------------------------------------------------------------------*/
        /*] END */
        #endregion

    I know it would be a cool thing to write a simple C# program that uses a recursive file search, reads all lines, skips the first n lines, and writes the files back to disc. But I wanted to test things first before I ruin my source files with one little typo. That's where the Visual Studio Search and Replace in Files dialog comes into the game. I can test my regular expression to do a multiline match with the Find button before actually breaking anything. And if something goes wrong, I have the Undo button. There is a nice blog post from Paulo Morgado online who deals with multiline regular expressions. The Visual Studio regular expressions are non-standard, so you have to adapt your usual regex know-how to the other patterns. The pattern I finally came up with is:

        \#region File Header:b*(.*\n)@\#endregion

    The regular expression can be read as:

        \#region File Header    Match "#region File Header"; \# escapes the # character since it is a quantifier
        :b*                     After this, zero or more spaces or tabs can follow (:b stands for space or tab)
        (.*\n)@                 Match anything across lines in a non-greedy way (the @ character makes it non-greedy) to prevent matching too much, up to the #endregion somewhere in our source file
        \#endregion             Match everything until "#endregion" is found

    I had always known that Visual Studio could do it, but I never bothered to learn the non-standard regex syntax. This is powerful, and it has been inside Visual Studio since 2005!
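    For the batch variant mentioned above (the "simple C# program" with a recursive file search), a minimal sketch using the standard .NET Regex class might look like this; the root path handling is an assumption, and as with the dialog approach, it is safest to run it against a copy of the tree first:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class HeaderStripper
        {
            static void Main(string[] args)
            {
                // Hypothetical entry point: root directory comes from the command line.
                string root = args.Length > 0 ? args[0] : ".";

                // .NET equivalent of the multiline idea: with Singleline, '.' also
                // matches newlines, and '.*?' keeps the match lazy so it stops at
                // the first #endregion rather than swallowing half the file.
                var header = new Regex(
                    @"#region File Header.*?#endregion\r?\n?",
                    RegexOptions.Singleline);

                foreach (var file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
                {
                    var text = File.ReadAllText(file);
                    var stripped = header.Replace(text, string.Empty);
                    if (stripped != text)
                        File.WriteAllText(file, stripped);
                }
            }
        }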

    Read the article

  • Calculating the "power" of a player in a "Defend Your Castle" type game

    - by Jesse Emond
    I'm making a "Defend Your Castle" type game, where each player has a castle and must send units to destroy the opponent's castle. It looks like this (and yeah, this is the actual game, not a quick paint drawing..): Now, I'm trying to implement the AI of the opponent, and I'd like to create 4 different AI levels: Easy, Normal, Hard and Hardcore. I've never made any "serious" AI before and I'd like to create a quite complete one this time. My idea is to calculate a player's "power" score, based on the current health of its castle and the individual "power" score of its units. Then, the AI would just try to keep a score close to the player's (Easy would stay below it, Normal would stay near it and Hard would try to get above it). But I just don't know how to calculate a player's power score. There are just too many variables to take into account and I don't know how to properly use them to create one significant number (the power level). Could anyone help me out on this one? Here are the variables that should influence a player's power score: current castle health, and each unit's total health, damage, speed and attack range. Also, the player can have increased income (the money bag), damage (the + Damage) and speed (the + Speed)... How could I include them in the score? I'm really stuck here... Or is there another way that I could implement AI for this type of game? Thanks for your precious time.
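    One possible starting point, sketched below in C#: collapse each unit to a single number with hand-tuned weights, sum over the player's living units, fold the upgrade bonuses in as a multiplier, and add the castle's health. Every name and weight here is an assumption to be tuned by playtesting, not a known-good formula:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical unit type matching the variables listed in the question.
        record Unit(double Health, double Damage, double Speed, double Range);

        static class PowerScore
        {
            static double UnitPower(Unit u) =>
                u.Health * 1.0 +   // staying power
                u.Damage * 2.0 +   // damage usually matters most, so weigh it higher
                u.Speed  * 0.5 +   // reaches the enemy castle sooner
                u.Range  * 0.5;    // gets the first hit in

            public static double PlayerPower(
                double castleHealth, IEnumerable<Unit> units,
                int incomeUpgrades, int damageUpgrades, int speedUpgrades)
            {
                // Upgrades pay off over time rather than right now, so fold them
                // in as a multiplier on the unit total instead of a flat add.
                double upgradeFactor =
                    1.0 + 0.1 * (incomeUpgrades + damageUpgrades + speedUpgrades);
                return castleHealth + units.Sum(UnitPower) * upgradeFactor;
            }
        }

    The difficulty levels then reduce to a target ratio against the player's score: Easy might aim for 0.8 of it, Normal for 1.0, and Hard for 1.2.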

    Read the article

  • Architecture Standards – BPMN vs. BPEL for Business Process Management

    - by pat.shepherd
    I often get asked which business process standard an organization should use: BPMN or BPEL? As I explain to folks, they both have strengths. Here is a great article that helps you understand the benefits of both and where to use them. The good news is that, with Oracle SOA Suite and BPM Suite, you have the option and flexibility to use both in the same SCA model and runtime container. Good stuff. Here is the great article that Mark Nelson wrote: The right tool for the right job. BPEL and BPMN are both 'languages' or 'notations' for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle however has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process-by-process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.) So why support both? Well, it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course there is a pretty significant overlap where either will do a great job. What BPM adds to SOA Suite | RedStack

    Read the article

  • The Silverlight 4 Training Kit and Green Eggs & Ham

    - by Jim Duffy
    Microsoft has released the Silverlight 4 Training Kit that steps you through the process of constructing Silverlight 4 business applications. "The Silverlight 4 Training Course includes a whitepaper explaining all of the new Silverlight 4 features, several hands-on labs that explain the features, and an 8-unit course for building business applications with Silverlight 4. The business applications course includes 8 modules with extensive hands-on labs as well as 25 accompanying videos that walk you through key aspects of building a business application with Silverlight. Key aspects in this course are working with numerous sandboxed and elevated out-of-browser features, the new RichTextBox control, implicit styling, webcam, drag and drop, multi-touch, validation, authentication, MEF, WCF RIA Services, right mouse click, and much more!" What I think is pretty cool is that there are two ways to access this content: online and offline. Obviously the online version is great when you're sitting at your desk and you're connected to the web. What about when you don't have a connection, like when you're located where you won't eat green eggs & ham, on a train or on a plane perhaps? :-) You can download the offline version and hope that Sam I Am won't be too distracting while you try to watch the videos or work your way through the labs. :-) Have a day. :-|

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • Oracle Supply Chain at Pella Showcase, April 24-25, 2012

    - by Stephen Slade
    Nothing promotes a product like a great customer testimonial! For nearly a decade, Pella has been holding these 'open houses', or Showcases as they are called, to illustrate the use of Oracle products in their operations. Building custom windows and doors is not an easy task. With about a trillion combinations of unique sizes, colors and features available, getting a complex multi-unit custom order wrong can be easy to do. I've been to a few of these Showcases and each time have been impressed by the precision, best practices and lean disciplines enacted at Pella. Operations representatives and users at Pella demonstrate the way in which they use Oracle Supply Chain products to deliver fulfillment excellence. Orders are all custom made and delivered in about a week. Factory tours are conducted, and visitors have a chance to see Oracle in operation on the shop floor, driving information flow and order accuracy in the 99+% range. It's a must-see for anyone considering expansion of their supply chain footprint. The event is April 24-25 in Pella, Iowa, outside Des Moines. This year, there is a separate track for CIOs and executives. Register at 1.800.820.5592 - ask for event 10281.

    Read the article

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump. It's kind of like Mario but it isn't a side scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions using this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have some problems. Let's say that my map is defined by the following tiles:

        0 = sky
        1 = player
        2 = ground

    The data for the map itself might look like:

        00000
        10002
        22022

    For solution 1, I'd move the player (the 1) a complete tile and update the map directly. This makes the collision check easy, because you can check if the player is touching the ground simply by looking at the tile directly below the player:

        // x and y are the tile coordinates of the player. The tile origin is the upper-left.
        if (grid[x][y+1] == 2){
            // The player is standing on top of a ground tile.
        }

    The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player via pixel coordinates and not updating the tile map. This will make the animation much smoother because I have a smaller movement unit per frame. However, this means I can't really accurately store the player in the tile map because sometimes he would logically be between two tiles. But the bigger problem here is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least a single pixel "into" the ground to register collision, and that won't look good. How can I solve this problem?
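    A common middle ground, sketched here in C# for illustration, is the second approach with one change: keep pixel coordinates for the player and never store him in the grid, but convert his bounding box to tile indices when checking collisions, so no pixel-level intersection test is needed. The tile size and names are assumptions:

        // Assumes grid[x][y] indexing as in the question, 2 = ground, square tiles.
        const int TileSize = 16;

        static bool IsGround(int[][] grid, int tx, int ty) =>
            tx >= 0 && tx < grid.Length &&
            ty >= 0 && ty < grid[tx].Length &&
            grid[tx][ty] == 2;

        // True when the pixel-space box (px, py, w, h) is standing on ground:
        // check the tile row just below the player's feet, across his width.
        static bool OnGround(int[][] grid, int px, int py, int w, int h)
        {
            int footRow = (py + h) / TileSize;   // first row below the sprite
            for (int tx = px / TileSize; tx <= (px + w - 1) / TileSize; tx++)
                if (IsGround(grid, tx, footRow))
                    return true;
            return false;
        }

    Because the test looks at the row below the feet rather than requiring overlap, the player can rest exactly on the surface without sinking a pixel into it.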

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma watch the SICP lectures I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • CodePlex Daily Summary for Tuesday, June 25, 2013

    CodePlex Daily Summary for Tuesday, June 25, 2013

    Popular Releases

    CrmRestKit.js: CrmRestKit-2.6.1.js: Improvements for "ByQueryAll": instead of deferring until all records are loaded, the progress handler is invoked for every 50 records. Added progress-handler support for "ByQueryAll". See http://api.jquery.com/deferred.notify/ and the unit tests for details ('Retrieve all with progress notification').

    VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: Changes: NEW: Added support for "ImgChili.net" links. FIXED: Auto Updater.

    Minesweeper by S. Joshi: Minesweeper 1.0.0.1: Zipped!

    Alfa: Alfa.Common 1.0.4923: Removed useless property ErrorMessage from StringRegularExpressionValidationRule. Repaired ErrorWindow and disabled report button after click action. Implemented: DateTimeRangeValidationRule, DateTimeValidationRule, DateValidationRule, NotEmptyValidationRule, NotNullValidationRule, RequiredValidationRule, StringMaxLengthValidationRule, StringMinLengthValidationRule, TimeValidationRule, DateTimeToStringConverter, DecimalToStringConverter, StringToDateTimeConverter, StringToDecima...

    Document.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved spell check support, improved user interface, minor bug fixes, improvements and speed-ups.

    StyleMVVM: 3.0.2: This is a minor feature and bug fix release. Features: ExportWhenDebuggerIsAttacedAttribute - new attribute that marks an attribute to only be exported when the debugger is attached. InjectedFilterAttributeFilterProvider - new attribute filter provider for MVC that injects the attributes. Performance improvements - minor speed improvements all over, and importing collections is now 50% faster. Bug fixes: Open generic constraints are now respected when finding exports. Fix for fluent registrat...

    Base64 File Converter: First release: One file can be converted at a time. Bytes-to-Base64 and Base64-to-bytes conversions are available.

    ADExtractor - work with Active-Directory: V1.3.0.2: 1.3.0.2 - added optional value-index to properties; added new aggregation "Literal" to pass-through values; added new SQL commandline switch --db to override the connection string; added new global commandline switch --quiet to reduce output to a minimum. 1.3.0.1 - sub-property resolution now working with aggregation (i.e. All(member.objectGuid)). 1.3.0.0 - added correct formatting for guids (GuidFormatter).

    EMICHAG: EMICHAG NETMF 4.2: Alpha release of the 4.2 version.

    WPF Composites: Version 4.3.0: In this beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...

    Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable release.

    Ascend 3D: Ascend 2.0: Release notes: Implemented bone/armature animation. Refactored class hierarchy and naming. Addressed high CPU usage issue during animation. Updated the Blender exporter and Ascend model format (now XML). Created AscendViewer, a tool for viewing Ascend models.

    Indent Guides for Visual Studio: Indent Guides v13: Important: This release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version history, changed in v13: Added page width guide lines. Added guide highlighting options. Fixed guides appearing over collapsed blocks. Fixed guides not appearing in newly opened files. Fixed some potential crashes. Fixed lines going through pragma statements. Various updates for VS 2012 and VS 2013. Removed VS 2010 support. Changed in v12.1: Fixed crash when unable to start...

    Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d (supports .NET 3.5, 4.0 and 4.5): Includes: Fluent.dll (with .pdb and .xml), Showcase Application, Samples (not for .NET 3.5), Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage), Resizing (ribbon reducing & enlarging principles), Galleries (Gallery in ContextMenu, InRibbonGallery), MVVM (shows how to use this library with the Model-View-ViewModel pattern), KeyTips, ScreenTips, Toolbars, ColorGallery, Walkthrough (do...

    Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. MagickGeometry is no longer IDisposable. Renamed dll's so they include the platform name. Image profiles can now only be accessed and modified with ImageProfile classes. Renamed DrawableBase to Drawable. Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...

    Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2

    Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 beta release.

    Outlook 2013 Add-In: Email appointments: This new version includes the following changes: Ability to drag emails to the calendar to create appointments; will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one is selected). Increased the maximum number of appointments to display to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...

    Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages available on NuGet: Caliburn.Micro – the full framework compiled into an assembly. Caliburn.Micro.Start – includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – the Caliburn.Micro inversion of control container (IoC); source code...

    SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer. Bug fixes: Resolved issue where the application crashes when loading a database that contains tables without a primary key. Features: Displays database information (database version, filename, size, creation date); displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields); displays the information schema views; displays column information (database type, CLR type, max length, allows null, etc.); support...

    New Projects

    algorithm interview: Write some code for programming interview questions.

    AmbiTrack: AmbiTrack provides marker-free localization and tracking using an array of cameras, e.g. the PlayStation Eye.

    DSeX DragonSpeak eXtended Editor: Furcadia DragonSpeak Editor.

    EasyProject: A final engineering project: a system made to easily manage the employees of a small business or organization.

    EDIVisualizer SDK: EDIVisualizer SDK is a software development kit (SDK) for creating plugins used by the EDIVisualizer program.

    Flame: Interactive shell for scripting in IronRuby, IronPython, PowerShell and C#, with syntax highlighting and autocomplete, for easily working within a .NET framework console.

    Fusion-io MS SQL Server scripts: Fusion-io's Sumeet Bansal shows how Microsoft SQL Server users can quickly determine whether their database will benefit from Fusion-io ioMemory products.

    hotelprimero: Hotel project.

    Image filters: It's a simple image filter :)

    JSLint.NET: JSLint.NET is a wrapper for Douglas Crockford's JSLint, the JavaScript code quality tool. It can validate JavaScript anywhere .NET runs.

    KZ.Express.H: KZ.Express.H.

    moleshop: A B2C online shop system written in ASP.NET, built on ASP.NET MVC 4.0 and .NET 4.0.

    PDF Generator Net Free Xml Parser: An XML parser for the PDF Generator Net library.

    ScreenCatcher: ScreenCatcher is a program for creating screenshots.

    SharePoint Configuration Guidance for 21 CFR Part 11 Compliance – SP 2013: This CodePlex project provides the Paragon Solutions configurations and code to support 21 CFR Part 11 compliance with SharePoint 2013.

    Sky Editor: Save file editor for Pokémon Mystery Dungeon: Explorers of Sky, with limited support for Explorers of Time and Darkness.

    TRANCIS: For founding and running online operations for Trancis.

    UGSF GRAND OUEST: UGSF GRAND OUEST.

    WP.JS: WP.JS (Windows Phone (dot) JavaScript) aims to be a compelling library for developing HTML5-based applications for Windows Phone 8.

    X-Aurora: Built on MVC3.

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level' and intercept the calls as soon as possible, or should they be done at a 'low level' so that as much of the real code as possible is still called? E.g., I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could either mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // $mockData = {'Test Note title', "Test note text"}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?
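    Transposed into a small C# sketch for illustration (the interfaces and names are hypothetical, not from the question's codebase), the two options look like this: the high-level double replaces the mapper itself, while the low-level double replaces only the query factory so the real mapper logic still runs:

        using System.Collections.Generic;

        public record Note(string Title, string Text);

        public interface ISqlQueryFactory
        {
            // Returns one row as column/value pairs, or null for no match.
            IDictionary<string, object> RunSingle(string sql);
        }

        public interface INoteMapper
        {
            Note GetNote(ISqlQueryFactory factory, int noteId);
        }

        // High-level double: intercepts the call as early as possible;
        // none of the real mapper code is exercised.
        public class MockNoteMapper : INoteMapper
        {
            public Note GetNote(ISqlQueryFactory factory, int noteId) =>
                new Note("Test Note title", "Test note text");
        }

        // Low-level double: only the query factory is faked, so the real
        // mapper's SQL construction and null handling are still exercised.
        public class MockSqlQueryFactory : ISqlQueryFactory
        {
            public IDictionary<string, object> RunSingle(string sql) =>
                new Dictionary<string, object>
                {
                    ["title"] = "Test Note title",
                    ["text"] = "Test note text"
                };
        }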

    Read the article

  • lirc_zilog IR transmission no longer working with HD-PVR on 12.04

    - by johnf
    I have been running Ubuntu 10.04 with a patched version of lirc_zilog for two years. I upgraded to 12.04 and lirc_zilog is no longer working with my HD-PVR. The MythTV wiki reports that it did work out of the box with 11.04. The error message I get on irsend is as follows:

        johnf@carbon:~$ /usr/local/bin/irsend SEND_ONCE blaster 0_130_KEY_POWER
        irsend: command failed: SEND_ONCE blaster 0_130_KEY_POWER
        irsend: hardware does not support sending

    The lircd daemon, run interactively, reports the following:

        lircd: accepted new client on /var/run/lirc/lircd
        lircd: could not get hardware features
        lircd: this device driver does not support the LIRC ioctl interface
        lircd: major number of /dev/lirc0 is 250
        lircd: LIRC major number is 61
        lircd: check if /dev/lirc0 is a LIRC device
        lircd: WARNING: Failed to initialize hardware
        lircd: error processing command: SEND_ONCE blaster 0_130_KEY_POWER
        lircd: hardware does not support sending
        lircd: removed client

    Checking dmesg seems to indicate that the kernel module is loading properly:

        [56497.730743] lirc_zilog: module is from the staging directory, the quality is unknown, you have been warned.
        [56497.730999] lirc_zilog: Zilog/Hauppauge IR driver initializing
        [56497.732484] lirc_zilog: ir_probe: ir_rx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x71
        [56497.732493] lirc_zilog: ir_probe: ir_tx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x70
        [56497.732496] lirc_zilog: probing IR Tx on Hauppage HD PVR I2C (i2c-0)
        [56497.756822] lirc_zilog: firmware of size 302355 loaded
        [56497.756945] lirc_zilog: 743 IR blaster codesets loaded
        [56497.757030] i2c i2c-0: lirc_dev: driver lirc_zilog registered at minor = 0
        [56497.757033] lirc_zilog: IR unit on Hauppage HD PVR I2C (i2c-0) registered as lirc0 and ready
        [56497.757035] lirc_zilog: probe of IR Tx on Hauppage HD PVR I2C (i2c-0) done
        [56497.757056] lirc_zilog: initialization complete

    Here is my /etc/lirc/hardware.conf:

        #Chosen IR Transmitter
        TRANSMITTER="HD-PVR"
        TRANSMITTER_MODULES="lirc_dev lirc_zilog"
        TRANSMITTER_DRIVER=""
        TRANSMITTER_DEVICE="/dev/lirc0"
        TRANSMITTER_SOCKET=""
        TRANSMITTER_LIRCD_CONF=""
        TRANSMITTER_LIRCD_ARGS=""

    My lircd.conf is a copy of the recommended one. Examination of the kernel source seems to indicate that the lirc_zilog module should support transmission; it's newer than the patched version I was manually compiling on 10.04. I was previously using a manually built version of lirc 0.8.7 and not the packaged one. I'm now running the packaged version 0.9.0. I can provide any additional information required and will perform tests quickly. I'm very eager to get this issue resolved.

    Read the article

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine): "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools." Responsibilities include: managing a team of senior and junior developers; design and coding of high-quality software..." For the full background story, requirements, qualifications and responsibilities please visit the official page. Comments about this post welcome at the original blog.

    Read the article

  • How Do Top Performing High Tech Companies Measure Online Marketing Success?

    - by Charles Knapp
    You might expect a focus on Net Promoter scores, open rates, and click metrics. The real answers from top performers may surprise you. I've been working for a few months with Aberdeen Group and colleagues from IBM and Oracle to survey high technology firms worldwide on best practices in marketing and channel sales effectiveness.  Now, we will share the results of our original customer research in a new white paper and webcast. Register today to learn how leading High Tech companies are increasing their Return on Marketing Investment (ROMI) and growing channel sales revenue. Discover how top performing high tech companies manage and use customer data, measure marketing spend effectiveness, and support internal and channel sales. Learn how best in class high tech companies use enterprise data throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be: Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group David Lasher, Global Business Services Partner, IBM Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • Can I consider interface-oriented programming to be good object-oriented programming?

    - by david
    I have been programming for decades, but I was not used to object-oriented programming. In recent years, though, I had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I learned OOP, I have tried to apply it to a couple of projects and found those projects successful. Unfortunately I didn't follow extreme programming, which suggests writing tests first, mainly because the time frames were tight. What I did for those projects was: identify all necessary classes and create them with proper properties and methods; wherever there is a dependency between classes, write an interface between them (as in the sketch below); and see if there are any patterns with which certain relationships between classes can be replaced. By successful, I mean that development was quick, the classes can be reused better, and the design is flexible enough that one programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach - creating lots of interfaces - in the long term?
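    As a minimal C# illustration of the interface-between-classes step described above (all names here are hypothetical):

        using System;

        // The dependency between two classes goes through an interface, so
        // either side can change, or be replaced by a test double, without
        // touching the other.
        public interface INotifier
        {
            void Send(string recipient, string message);
        }

        public class EmailNotifier : INotifier
        {
            public void Send(string recipient, string message) =>
                Console.WriteLine($"email to {recipient}: {message}");
        }

        public class OrderProcessor
        {
            private readonly INotifier _notifier;   // depends on the interface only

            public OrderProcessor(INotifier notifier) => _notifier = notifier;

            public void Complete(string customer) =>
                _notifier.Send(customer, "Your order has shipped.");
        }

    In a unit test, a fake INotifier can stand in for EmailNotifier, which is much of the flexibility the question describes.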

    Read the article

  • Visual Studio 2010 Modeling and Architecture Tools

    - by MikeParks
    Jennifer Marsman (Microsoft Evangelist) and Cameron Skinner (Microsoft Visual Studio Product Unit Manager) recently stopped by our office while they were passing through Louisville on their tour, to give us a presentation on the new Visual Studio 2010 Modeling and Architecture Tools. I checked out these new features when the Visual Studio 2010 beta versions originally rolled out and have been really impressed with this stuff ever since. So it was pretty cool to actually learn some new techniques from Cameron himself, since he helped write the actual code behind some of those features. If you've upgraded to Visual Studio 2010 recently, I would highly recommend using the Architecture tools. They're awesome. If you want to make improvements to them, they even have their own SDK. There are plenty of blogs out there to show you how to use it. I've been waiting to find a tool that works like this, where I can really analyze the code in solutions and projects and see how everything ties together. It's really handy if you're asked to work on a new project and aren't familiar with how it works. Just run the tools, analyze the DLLs, learn how everything works, and then you'll be ready to implement new code! It's a great tool for learning new systems quickly and easily, and it's all housed within the Visual Studio IDE. I just wanted to write a blog to brag about it a little bit, so I figured I'd throw this up here. It's a must-have tool for developers and architects. Here are some screenshots from when I was using it earlier. Thanks everyone! - Mike

    Read the article

  • Setup a Autoreply Only Account

    - by dabrain
    For some very good reasons, you might want to set up an 'autoreply'-only account, without storing the incoming mail in a mailbox. If not already done, create an account via the Delegated Admin GUI or the commadmin command-line tool. Example:

        /opt/sun/comms/da/bin/commadmin user create -D admin -d vmdomain.tld -w enigma -F Mike -l mparis -L Paris -W tester -E [email protected] -S mail -H mars.vmdomain.tld

    Set mailDeliveryOption to autoreply mode only, so no email will be stored in the user mailbox; skip this step if you want incoming emails stored in the mailbox.

        ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/modfile

        [/tmp/modfile]
        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        replace: mailDeliveryOption
        mailDeliveryOption: autoreply

    Set mailSieveRuleSource to the autoreply text and a 'do-not-reply' From address. The "Thank you ..." part becomes the subject. The next string in quotes is the body part of the message. The ":hours 0" denotes that we want a reply sent for every message. Finally, the \n is used because of the wanted newlines in the body.

        ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/addfile

        [/tmp/addfile]
        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        add: mailSieveRuleSource
        mailSieveRuleSource: require "vacation"; vacation :hours 0 :reply :from "do-not-reply@domain.com" :subject "Thank you for contacting webpost" "Your Mail is being reviewed.\nTo access contact information please visit : http://www.domain.com \nPlease do not reply to this e-mail as it is an automated response on your mail being accessed.\n\nPublic Response Unit.\n"

    Read the article

  • Unity does not load when selecting Xorg open source radeon driver

    - by Teddy Thorpe
    When I select the X.org Radeon open source driver, Unity does not load. However, when I select the proprietary AMD driver from the official Ubuntu repository, Unity does load. Why is Unity doing this? I had no problems with switching between these drivers in Ubuntu 12.04 LTS. This started happening when I upgraded to Ubuntu 12.10. Before I ever installed the proprietary AMD driver, the X.org driver loaded Unity fine, but very sluggishly. I downloaded the AMD driver from their official website and installed that first, and that is what started this problem. I removed that driver and went back to X.org, and X.org didn't load Unity either. Then, using Synaptic, I installed the AMD driver from the Ubuntu repositories, and that's what I'm using now. I'm very confused why that downloaded AMD driver would affect X.org. I have an AMD Radeon HD 6620G. It's integrated graphics on my AMD A8-3500M APU (Accelerated Processing Unit, as AMD advertises it).

    Read the article

  • How do I prevent my filesystems from being mounted read-only after suspending?

    - by Chas. Owens
    I have Ubuntu 12.04 installed on an SDHC card (only one ext2 partition, no swap). When I suspend using pm-suspend, my root filesystem is mounted read-only. I am currently "fixing" this with the following file, /etc/pm/sleep.d/99_make_disk_rw:

        #!/bin/sh
        mount -o remount,rw /

    But the disk is marked as needing an fsck on reboot. How can I prevent the filesystem from being mounted read-only, or fix whatever is going wrong here? Update: It looks like it is getting mounted read-only because an error occurred. I have changed the mount options for / in /etc/fstab to noatime,nodiratime,errors=continue and it no longer mounts the SDHC card as read-only after it resumes. So the problem is happening when it suspends, not when it resumes as I had thought. I checked /sys/bus/usb/devices/1-4/power/persist and it is set to 1. So the SDHC card shouldn't appear disconnected to the OS (or, more accurately, it should recover from the disconnection without error). Here is what seems to be the relevant section of the syslog:

        Sep 10 10:34:23 iubit kernel: [ 748.246226] sd 4:0:0:0: [sdb] Media Changed
        Sep 10 10:34:23 iubit kernel: [ 748.246234] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Sep 10 10:34:23 iubit kernel: [ 748.246243] sd 4:0:0:0: [sdb] Sense Key : Unit Attention [current]
        Sep 10 10:34:23 iubit kernel: [ 748.246253] Info fld=0x0
        Sep 10 10:34:23 iubit kernel: [ 748.246258] sd 4:0:0:0: [sdb] Add. Sense: Not ready to ready change, medium may have changed
        Sep 10 10:34:23 iubit kernel: [ 748.246271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 5d 3e f0 00 00 08 00
        Sep 10 10:34:23 iubit kernel: [ 748.246291] end_request: I/O error, dev sdb, sector 6110960
        Sep 10 10:34:23 iubit kernel: [ 748.247027] EXT2-fs (sdb1): error: ext2_fsync: detected IO error when writing metadata buffers
        Sep 10 10:34:23 iubit anacron[6954]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:23 iubit anacron[6954]: Normal exit (0 jobs run)
        Sep 10 10:34:24 iubit laptop-mode: Laptop mode
        Sep 10 10:34:24 iubit laptop-mode: enabled, not active
        Sep 10 10:34:24 iubit kernel: [ 749.055376] sd 4:0:0:0: [sdb] No Caching mode page present
        Sep 10 10:34:24 iubit kernel: [ 749.055387] sd 4:0:0:0: [sdb] Assuming drive cache: write through
        Sep 10 10:34:25 iubit anacron[7555]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:25 iubit anacron[7555]: Normal exit (0 jobs run)
        Sep 10 10:34:31 iubit kernel: [ 756.090861] EXT2-fs (sdb1): previous I/O error to superblock detected

    Read the article

  • How to get that first development job

    - by cju
    I have been in QA for 10 years, trying to get into development for about 5 of them. I have taken classes in C++, Java and C#. I was able to write some tools and unit tests in C# at my current job and (by all accounts) did a good job of it. However, 8 months ago, my employer tasked me with the responsibility of establishing the new QA group. Now, I'm doing manual testing and deployment with no promise of returning to development. I have looked at the job boards, seen that there are a lot of jobs for web developers, and wondered how I could break into that. I've picked up some books on Ruby on Rails that I plan to work through on the Mac at home, but I'm not sure employers would be interested in anything but commercial web development experience. Do you have any suggestions on how I can use my experience to get a job as a junior developer? And I mean one that entails programming... the postings I've seen for junior developers amount to doing all the grunt work besides coding. They should just call them "Technical Secretaries".

    Read the article

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am on the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing the site to them for acceptance testing, we were requested to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, they would have the best knowledge of what to test, what to look out for, how things should behave, etc., and a test plan is thus not required. We are always in an argument over this, and developers find it a waste of time to write down things like:

        Click on button A.
        Key in XYZ in the form field and click button B.
        You should see behaviour C.

    ...which we have to repeat for each requirement/feature requested. This is basically rephrasing what's already in the requirements document. We are moving towards using an Agile approach for managing our projects, and this is also requested at the end of each iteration. Unit and integration testing aside, who should be the one to come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >