Search Results

Search found 1678 results on 68 pages for 'workflow'.

  • CodePlex Daily Summary for Sunday, September 09, 2012

    CodePlex Daily Summary for Sunday, September 09, 2012

    Popular Releases:

      - Mishra Reader: Mishra Reader Beta 4. Additional bug fixes and logging in this release to try to find the reason some users aren't able to see the main window pop up, plus a few UI tweaks to tighten up the feed item list. This release requires the final version of .NET 4.5. If the ClickOnce installer doesn't work for you, please try the additional setup exe.
      - Xenta Framework (extensible enterprise n-tier application framework): Xenta Framework 1.9.0. Release notes: improved framework architecture; improved framework security; more import/export formats and operations; new WebPortal application which includes forum, news, blog, catalog, etc. UIs; improved WebAdmin app (reports, navigation and search); performance optimization; improved Xenta.Catalog domain; more plugin interfaces and plugin implementations; refactoring; Windows Azure support; and much more... Downloads: Package Guide; Source Code (package contains the source code); Binaries...
      - Json.NET: Json.NET 4.5 Release 9. New feature: added JsonValueConverter. Fixes: DefaultValueHandling.Ignore not ignoring default values of non-nullable properties; DefaultValueHandling.Populate error with non-nullable properties; error when writing JSON for a JProperty with no value; error when calling ToList on empty JObjects and JArrays; losing decimal precision when writing decimal JValues.
      - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.66. Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...
      - DotNetNuke® Community Edition CMS: 07.00.00 CTP (not for production use). NOTE: new minimum requirements (http://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png). Simplified installer: the first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn't have to click through a series of screens in order to just get your website running. With the 7.0 installer we have taken an approach that a...
      - BIDS Helper: BIDS Helper 1.6.1. In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012. New features: Tabular Display Folders; Tabular Translations Editor; Tabular Sync Descriptions. Fixed issues: Biml issues; 32849, fixing a bug in the Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name to another action; 32695, fixing a bug in SSAS Sync Descriptions whe...
      - Umbraco CMS: Umbraco 4.9.0. What's new: the media section has been overhauled to support HTML5 uploads; just drag and drop files in, and even multiple files are supported on any HTML5-capable browser. The folder content overview is also much improved, allowing you to filter it and perform common actions on your media items. The Rich Text Editor's "Media" button now uses an embedder based on the open oEmbed standard (if you're upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...
      - menu4web: menu4web 0.4.1, a JavaScript menu for web sites. This release is for those who believe that global variables are evil: menu4web has been wrapped into an m4w singleton object. Added a "Vertical Tabs" example which illustrates object notation.
      - Microsoft SQL Server Product Samples, Database: AdventureWorks OData Feed. The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: you can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
      - Desktop Google Reader: 1.4.6. Sorting feeds alphabetically is now optional (see the preferences window).
      - Lightweight Fluent Workflow: objectflow 1.4.0.1. Changes in this release: exception handling policies. Install on the NuGet console: Install-Package objectflow.core -pre. Supported workflow patterns: exception policies, sequence, parallel split, simple merge, exclusive choice, retry, arbitrary cycles. Framework features: handle exceptions with workflow.Configure().On<Exception>(); repeat operations and functions an arbitrary number of times; retry failed lambda functions; generic interface for concise workflow registration; operations t...
      - Droid Explorer: Droid Explorer 0.8.8.7 Beta. Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups; backups are stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
      - LibXmlSocket: Binary. .NET 4.5, .NET Core.
      - Hidden Capture (HC): Hidden Capture 1.1, by Mohsen E.Dawatgar, http://Hidden-Capture.blogfa.com
      - The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1.
      - Nearforums (ASP.NET MVC forum engine): Nearforums v8.5. Version 8.5 of Nearforums, the ASP.NET MVC forum engine. New features include: built-in search engine using Lucene.NET; flood control improvements; notifications improvements (sync option and mail body). View the roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
      - WiX Toolset: WiX Toolset v3.6. Introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality: WixDependencyExtension supports dependency checking among MSI packages; WixFirewallExtension supports more features of Windows Firewall; WixTagExtension supports Software Id Tagging; WixUtilExtension now supports recursive directory deletion; Melt simplifies pure-WiX patching by extracting .msi package content and updating the .w...
      - Iveely Search Engine: Iveely Search Engine (0.2.0).
      - GmailDefaultMaker: GmailDefaultMaker 3.0.0.2. Adds QQ Mail; bug fix.
      - Smart Data Access layer: Smart Data Access Layer Ver 3. In this version, support for executing inline queries is added. Check the Documentation section for details.

    New Projects:

      - Ajax based Multi Forms ASP.NET Framework (Amfan): Reduces the limitations of ASP.NET by creating multiple sub-forms of the page as separate aspx pages.
      - APO-CS: A straight port of Apophysis to C#.
      - BaceCP
      - CafeAdm Messenger: Allows a server connection for CafeAdm Schedule and Messenger.
      - DECnet 2.0 Router: A user-mode DECnet 2.0 router that will interoperate with the HECnet bridge.
      - DriveManager: A common utility providing an interface for all cloud-based drive providers, such as Google Drive, Dropbox, and MS SkyDrive.
      - Email Tester: This utility will help you check your SMTP settings for SharePoint. It is best for pre-PROD and PROD environments where you can't modify code.
      - EvoGame: The evo game that I'm working on.
      - forebittims: Project owner: ForeBitt Inc. Sri Lanka. Project manager: Chackrapani Wickramarathne. Technologies: VS 2010, C#, LINQ to SQL, SQL Server 2008 R2, DevExpress.
      - MCServe: MCServe is the Minecraft server which takes advantage of the performance of the .NET Framework.
      - Mental: Under development.
      - SalaryManagementSys: SalaryManagementSys.
      - VKplay: This application is an audio player; it launches on the Windows Phone platform and uses music from the vk.com social network.

    Read the article

  • Servlet/JSP Flow Control: Enums, Exceptions, or Something Else?

    - by Christopher Parker
    I recently inherited an application developed with bare servlets and JSPs (i.e., no frameworks). I've been tasked with cleaning up the error-handling workflow. Currently, each <form> in the workflow submits to a servlet, and based on the result of the form submission, the servlet does one of two things: if everything is OK, the servlet either forwards or redirects to the next page in the workflow; if there's a problem, such as an invalid username or password, the servlet forwards to a page specific to the problem condition. For example, there are pages such as AccountDisabled.jsp, AccountExpired.jsp, AuthenticationFailed.jsp, SecurityQuestionIncorrect.jsp, etc.

    I need to redesign this system to centralize how problem conditions are handled. So far, I've considered two possible solutions:

      - Exceptions: Create an exception class specific to my needs, such as AuthException. Inherit from this class to be more specific when necessary (e.g., InvalidUsernameException, InvalidPasswordException, AccountDisabledException, etc.). Whenever there's a problem condition, throw an exception specific to the condition. Catch all exceptions via web.xml and route them to the appropriate page(s) with the <error-page> tag.
      - Enums: Adopt an error code approach, with an enum keeping track of the error code and description. The descriptions can be read from a resource bundle in the finished product.

    I'm leaning more toward the enum approach, as an authentication failure isn't really an "exceptional condition" and I don't see any benefit in adding clutter to the server logs. Plus, I'd just be replacing one maintenance headache with another: instead of separate JSPs to maintain, I'd have separate Exception classes. I'm planning on implementing "error" handling in a servlet that I'm writing specifically for this purpose. I'm also going to eliminate all of the separate error pages, instead setting an error request attribute with the error message to display to the user and forwarding back to the referrer. Each target servlet (Logon, ChangePassword, AnswerProfileQuestions, etc.) would add an error code to the request and redirect to my new servlet in the event of a problem. My new servlet would look something like this:

        public enum Error {
            INVALID_PASSWORD(5000, "You have entered an invalid password."),
            ACCOUNT_DISABLED(5002, "Your account has been disabled."),
            SESSION_EXPIRED(5003, "Your session has expired. Please log in again."),
            INVALID_SECURITY_QUESTION(5004, "You have answered a security question incorrectly.");

            private final int code;
            private final String description;

            Error(int code, String description) {
                this.code = code;
                this.description = description;
            }

            public int getCode() { return code; }
            public String getDescription() { return description; }
        };

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String sendTo = "UnknownError.jsp";
            String message = "An unknown error has occurred.";
            int errorCode = Integer.parseInt((String) request.getAttribute("errorCode"), 10);
            Error errors[] = Error.values();
            Error error = null;
            for (int i = 0; error == null && i < errors.length; i++) {
                if (errors[i].getCode() == errorCode) {
                    error = errors[i];
                }
            }
            if (error != null) {
                sendTo = request.getHeader("referer");
                message = error.getDescription();
            }
            request.setAttribute("error", message);
            request.getRequestDispatcher(sendTo).forward(request, response);
        }

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            doGet(request, response);
        }

    Being fairly inexperienced with Java EE (this is my first real exposure to JSPs and servlets), I'm sure there's something I'm missing, or my approach is suboptimal. Am I on the right track, or do I need to rethink my strategy?

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard, so let's see what the code looks like, using the same order.

    To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer) server.GetService(typeof(IBuildServer));

    General: First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger: Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace: For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build:

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults: In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process: So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .xaml file called a Build Process Template. By default, every new team project contains two build process templates, called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        //Get default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject)
            .Where(p => p.TemplateType == ProcessTemplateType.Default).First();
        buildDefinition.Process = defaultTemplate;

    There are several build process parameters that can be set for the default build process template. Only one of these is required: the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit. The following code shows how to set the BuildSettings information for a build definition:

        //Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        //Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy: This one is easy; we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It! And we're done; let's save the build definition:

        buildDefinition.Save();

    That's it!
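    As a quick sanity check, the saved definition can be queued straight away through the same API. A minimal sketch, using only the IBuildDefinition and IBuildServer objects already in scope:

        //Queue a build for the freshly created definition and report its queue position.
        var request = buildDefinition.CreateBuildRequest();
        IQueuedBuild queuedBuild = buildServer.QueueBuild(request);
        Console.WriteLine("Queued build {0}, position {1}", queuedBuild.Id, queuedBuild.QueuePosition);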

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn't support building Microsoft's own Setup and Deployment project type (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you'll get the following warning:

        The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact, the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield, which promises to be more MSBuild-friendly and to have more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon.

    But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You'll want to build your setup projects after you have successfully compiled the other projects.

      1. Install Visual Studio 2010 on the build server(s).
      2. Open your build process template (remember to branch or copy the .xaml file before modifying it!).
      3. Locate the Try to Compile the Project activity.
      4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity.
      5. Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
      6. Drop an instance of the WriteBuildError activity into the Handle Error Output section. Set the Message property to errOutput.
      7. Select the InvokeProcess activity and set the values of its parameters (a sketch of plausible values follows after this walkthrough).

    This will generate the MSI files, but they won't be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

      8. Drop a Sequence activity somewhere after the Copy to Drop location activity.
      9. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
      10. Drop a FindMatchingFiles activity into the Sequence activity and set its MatchPattern to find the generated *.msi files, storing the matches in GeneratedInstallers.
      11. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
      12. Drop an InvokeProcess activity inside the ForEach activity, with FileName: "xcopy.exe" and Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation).
      13. Save the build process template and check it in.
      14. Run the build and verify that the MSIs are built and copied to the drop location.

    Note 1: One of the drawbacks of using devenv like this in a team build is that since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

    Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.
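    For step 7, a sketch of plausible InvokeProcess parameter values; the devenv path, the localProject variable name, and the configuration name below are assumptions to adjust for your build server and template:

        // FileName: the full path to devenv.com on the build server, e.g.
        //   "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.com"
        // Arguments: have devenv build the solution configuration that contains the setup project:
        String.Format("""{0}"" /Build ""{1}""", localProject, "Release|Any CPU")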

    Read the article

  • CRM 2011 - Workflows Vs JavaScripts

    - by Kanini
    In the Contact entity, I have the following attributes:

      - Preferred email: a read-only field of type Email
      - Personal email 1: an email field
      - Personal email 2: an email field
      - Work email 1: an email field
      - Work email 2: an email field
      - School email: an email field
      - Other email: an email field
      - Preferred email option: an option set with the following values {Personal email 1, Personal email 2, Work email 1, Work email 2, School email, Other email}

    None of the above mentioned fields are required.

    Requirement: When the user picks a value from Preferred email option, we copy the email address available in that field and apply the same in the Preferred email field.

    Implementation: The Solution Architect suggested that we implement the above requirement as a workflow. The reason he provided was that most of the time these values are populated by an external website and the data is then fed into the CRM 2011 system. So, when they update Preferred email option via a Web Service call to CRM, the WF will run and update the Preferred email field.

    My argument / solution:

      1. What will happen if I do not pick a value from the Preferred email option set? Do I set it to any of the email addresses that has a value in it? If so, what if more than one of the email address fields is populated, i.e., what if Personal email 1 and Work email 1 are populated but no value is picked in the option set?
      2. What if a value existed in the Preferred email option set and I then change it to NULL?
      3. Should the field Preferred email (where the text value of the email address is stored) be set to read-only? If not, what if I have picked Personal email 1 in the option set and then edit the Preferred email address text field with a completely new email address? If yes, then we are enforcing that the preferred email should be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [my preference would be this].
      4. What if I had a value of [email protected] in the Personal email 1 field while Personal email 2 is empty, I choose Personal email 1 in the drop-down for Preferred email (this will set the Preferred email field to [email protected]), and later I change the value to Personal email 2? It overwrites a valid email address with nothing. I agree that it would be highly unlikely that a user will pick Preferred email as Personal email 2 and not have a value in it, but nevertheless it is a possible scenario, isn't it?
      5. What if users typed a value into Personal email 1 but by mistake picked Personal email 2 in the option set, and the Personal email 2 field had no value in it?

    Solution:

      1. The field Preferred email option should be a required field.
      2. A JS function should run whenever Preferred email option is changed. That function should set the relevant email field as required (based on the option chosen), and another JS function should be called (see step 3).
      3. A JS function should update the value of Preferred email with the value in the email field picked in the option set. The JS function should also run every time someone updates the actual email field chosen in the option set.
      4. The guys who are managing the external website should update the Preferred email field; surely, if they can update Preferred email option via a Web Service call, it is easy enough to update the Preferred email too, right?

    Question: Which is the better method; should it be written as JavaScript or as a workflow? Also, whose responsibility is it to update the Preferred email field when the data flows from an external website?

    I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background, as I started off as an Application Support Engineer, but I have picked up development in the last couple of years.

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become much easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer's and a photographer's perspective. Hopefully it will help you in your buying/upgrade decision.

    Photoshop CS5

    Photoshop CS5 refines powerful photography tools. It makes the editing process easier, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It has quick image correction plus color and tone control for professional purposes. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It allows users to accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware systems and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6 and helps to maximize creative impact. It provides tremendous precision and freedom: users can easily select intricate image elements such as hair, create realistic painting effects, and remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow, a flexible work environment, and creative tools and content.

    Photoshop CS5 Extended

    Photoshop CS5 Extended is quite innovative and incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation. It also provides 3D editing. It has intelligent image editing and enhancement, offers advanced compositing, and has an extraordinary painting and drawing toolset. It provides video and animation design and helps when working with specialized images for architecture, manufacturing, engineering, science, and medicine.

    Where CS5 Extended scores over CS5

    CS5 Extended has many features that are not included in CS5:

      - Technology for creating 3D extrusions
      - 3D material library and picker
      - Depth of field for 3D
      - 3D merging and scene composition improvements
      - 3D workflow improvements
      - Customization of 3D features
      - Image-based light sources
      - Shadow catcher for shadow creation
      - Enhanced ray tracer
      - Context-sensitive widgets, which allow easy control of objects, lights and cameras
      - Overlays for materials and mesh boundaries

    Photoshop CS5 Extended incorporates all the features of CS5 and adds the more advanced features above, including 3D creation and editing.

    Redefining the Image-Editing Experience: A Photographer's Point of View

    Photoshop CS5 delivers amazing features and creative options, so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings, and seamlessly fills in the space left behind; the lighting, tone and noise of the surrounding area can be matched. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier: the toughest types of edges, such as hair and foliage, become easier to fix.

    To sum up, the following are a few advantages of CS5 Extended over previous versions:

      - 64-bit processing
      - Content-Aware Fill
      - Refine Edge, which makes nearly-impossible image selections possible
      - HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
      - New brush options
      - Improved image management with enhanced Adobe Bridge
      - Lens corrections
      - Improved black-and-white conversions
      - Puppet Warp: precisely reposition or warp any image element
      - Adobe Camera Raw 6

    Pricing and Availability

    Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. The estimated street price is US$699 for Adobe Photoshop CS5 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform: much more effort is being put in at the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release:

      - Asynchronous functions
      - TPL Dataflow
      - Task-based Asynchronous Pattern

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

      - async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.
      - await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how it actually works: the flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the existing async patterns and we've defined the method very much like a synchronous one.

    Now compare the previous async/await snippet to the traditional UI-async version:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    This implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application; this is just an example).

    How does it work? Using a state workflow (a jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
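    For reference, a minimal sketch of how the same loop looks on the released .NET 4.5 bits, where the CTP's TaskEx helpers shipped on the Task class itself (the service proxy name is carried over from the sample above):

        // Same pattern against final C# 5 / .NET 4.5: TaskEx.Delay became Task.Delay,
        // and returning Task instead of void makes the method awaitable by callers.
        async Task ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await Task.Delay(1000);
            }
        }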

    Read the article

  • A developer&rsquo;s WBS &ndash; 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach, and why it sometimes takes days to get this simple list is often frustrating. Here is a simple way to get that elusive WBS done in 30 minutes and have 125 items in your list... well, 126.

    The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask: "what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe?" I am partial to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week; the task becomes way too vague and hard to manage for completeness, especially on short budgets.

    As a developer, your focus is identifying the tasks you need to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the "other stuff" like quality testing, meetings, documentation, transition to maintenance, etc.

    Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Examples:

        XYZ Public Website
        Middleware
        BizTalk Application

    The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering, and add them to the list under the product name you created earlier:

        Public Website
            Security
            Sites
            Infrastructure
            Publishing
            Creative

    Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items:

        Public Website
            Security
                Authentication
                Login Control
                Administration
                DRM
                Workflow
            Sites
                Masterpages
                Page Layouts
                Web Parts (RIA, Multimedia)
                Content Types
                Structures
            Infrastructure
                ...
            Publishing
                ...
            Creative
                ...

    And one more time for a total of 125 items. The top item makes the list 126:

        Public Website
            Security
                Authentication
                    Install (AD/ADAM/LDAP/SQL)
                    Configuration
                    Management
                    Web App Configuration
                    Implement Provider
                Login Control
                    Login Form
                    Login/Logoff
                    pw change
                    pw recover/forgot
                    email verification
                Administration
                    ...
                DRM
                    ...
                Workflow
                    ...
            Sites
                Masterpages
                Page Layouts
                Web Parts (RIA, Multimedia)
                Content Types
                Structures
            Infrastructure
                ...
            Publishing
                ...
            Creative
                ...

    The next step is to make sure the task at the bottom of every branch represents the "time value" you planned for the project. You can add more to the WBS, and of course if you can't find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later, when the iteration is being planned), decompose the details back into the simple tasks. Now, go estimate!

    Read the article

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We're running into problems somewhere in the SDLC that don't rear their ugly heads until time and budget are starting to run out. We try to be "Agile" (software specifications are not as thorough as possible, and clients have direct access to the developers any time they want), and we are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:

      - Wireframe mockups for every form view
      - A data dictionary of all field inputs
      - Descriptions of any business rules that affect the system
      - Descriptions of the outputs

    I'm new to software management, but I've overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge number of requests like "Can we add a few more fields to this report... can we redesign the look of this interface... can we send an email at this part of the workflow... can we take this button off this view... can we make this function redirect to a different screen... can we change some text on this screen... can we create a special account where someone can log in and get access to X... this report takes too long to run, can it be optimised... can we remove this step in the workflow... there's got to be a better image we can put here..." etc.

    Some changes are tiny and can be implemented reasonably quickly, but there could be up to 50-100 or so of such requests during the course of the SDLC. Other change requests are what clients claim they "just assumed would be part of the system", even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers on our team, we need to come up with a better way to both internally identify whether work being requested is "out of spec" and communicate this to a client in such a manner that they can understand why what they are asking for is "extra" work. We need a way to track this work and be transparent with it.

    In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients such that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

    Read the article

  • Dynamically create a text file from a C# program

    - by techstu
    Can I dynamically create a text file from a C# program, using data from a previously created XML file and text file? I have written half the code, but can't go any further. Please help.

        using System;
        using System.IO;
        using System.Xml;

        namespace Task3
        {
            class TextFileReader
            {
                static void Main(string[] args)
                {
                    String strn = " ", strsn = String.Empty;
                    XmlTextReader reader = new XmlTextReader("my.xml");
                    while (reader.Read())
                    {
                        switch (reader.NodeType)
                        {
                            case XmlNodeType.Element: // The node is an element.
                                if (reader.HasAttributes)
                                {
                                    strn = reader.GetAttribute(0);
                                    strsn = reader.GetAttribute(1);
                                    int counter = 0;
                                    string line;
                                    // Read the file and display it line by line.
                                    System.IO.StreamReader file = new System.IO.StreamReader("read_file.txt");
                                    string ch, ch1;
                                    while ((line = file.ReadLine()) != null)
                                    {
                                        if (line.Substring(0, 1).Equals("%"))
                                        {
                                            int a = line.IndexOf('%');
                                            int b = line.LastIndexOf('%');
                                            ch = line.Substring(a + 1, b - 1);
                                            ch1 = line.Substring(a, b + 1);
                                            if (ch == "name")
                                            {
                                                string test = line.Replace(ch1, strn);
                                                Console.WriteLine(test);
                                            }
                                            else if (ch == "sirname")
                                            {
                                                string test = line.Replace(ch1, strsn);
                                                Console.WriteLine(test);
                                            }
                                        }
                                        else
                                        {
                                            Console.WriteLine(line);
                                        }
                                        counter++;
                                    }
                                    file.Close();
                                }
                                break;
                        }
                    }
                    // Suspend the screen.
                    Console.ReadLine();
                }
            }
        }

    The XML file I am reading from is:

        <?xml version="1.0" encoding="utf-8" ?>
        <Workflow>
          <User UserName="pqr" Sirname="sbd" />
          <User UserName="abc" Sirname="xyz" />
        </Workflow>

    And the text file is:

        hi this is me %sirname% %name%

    But this is not what I want. Please help.
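    One way to finish this is to skip the manual % scanning and simply replace both placeholders on every line for each <User> element, writing the result to a new file. A minimal sketch of that approach (the output file name is an assumption; the attribute names come from the XML above):

        using System;
        using System.IO;
        using System.Xml;

        class TemplateWriter
        {
            static void Main()
            {
                // The template containing %name% / %sirname% placeholders.
                string[] templateLines = File.ReadAllLines("read_file.txt");

                using (XmlTextReader reader = new XmlTextReader("my.xml"))
                using (StreamWriter output = new StreamWriter("output.txt"))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.HasAttributes)
                        {
                            string name = reader.GetAttribute("UserName");
                            string sirname = reader.GetAttribute("Sirname");

                            // Substitute every placeholder on every template line.
                            foreach (string line in templateLines)
                            {
                                output.WriteLine(line.Replace("%name%", name)
                                                     .Replace("%sirname%", sirname));
                            }
                        }
                    }
                }
            }
        }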

    Read the article

  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow.

    Example: I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base, something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need, and he pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without being required to commit all my changes, since I need his changes to test my own code?

    A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small, environment-specific changes to the configuration files but does not commit the changes. These few uncommitted files hinder us from making any merges into our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be here, for reasons I won't go into.

    Read the article

  • Usability for content editors: Drupal or PHP framework?

    - by Jim
    Greetings: I am going to develop a basic Web site that supports some custom content types, is multilingual, and has a content moderation workflow for a restricted group of content editors. This seems like an obvious choice for Drupal, except... the content editors will have little computer experience. In my opinion, that is a show-stopper for Drupal. For example, placing arbitrary inline images in content is a task that WordPress does well; I find Drupal's alternatives (IMCE, img_assist, etc.) clunky and not well integrated, and that will be a problem for this group of content editors. Also, none of the Drupal content workflow modules I tried seemed well integrated; they all had a "tacked on" feel to them. As an admin I can understand why an "Accessible content" menu item (via the Module Grants module) is necessary to view draft content (fixed in D7, but I can't wait for all the modules to be ported), but I'm pretty sure it'll confuse the content editors.

    An alternative is to use a PHP framework. I've read a few threads suggesting that it will take roughly the same amount of time to use a good framework as it will to bend Drupal to my will... maybe wishful thinking? I'm looking at Symfony, which gives me a basic auto-generated back-end that I believe I can customize to my heart's content.

    How do you make Drupal accessible to non-savvy content editors? If you recommend a PHP framework, which one? TIA!

    Read the article

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances:

      - InstanceA: production server
      - InstanceB: test server

    My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance-schema relationship looks like this:

      - InstanceA: schema version 1.5
      - InstanceB: schema version 1.6 (new version being tested)

    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To accomplish this, I take database backups of InstanceA and apply (restore) them to InstanceB.

    My question is: how does the schema version affect the restore process? I know I can do this:

      - Back up InstanceA (schema version 1.5)
      - Restore to InstanceB (schema version 1.5)

    But can I do this?

      - Back up InstanceA (schema version 1.5)
      - Restore to InstanceB (schema version 1.6, new version being tested)

    If no, what would the failure look like? If yes, would the type of schema change matter? For example, if schema version 1.6 differed from schema version 1.5 only by an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if schema version 1.6 differed from schema version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restore process.

    I hope I've made this clear enough. Thanks in advance for any input!

    Read the article

  • Rails: Multi-Step New User Signup Form (FSM?)

    - by neezer
    I've read the "Create Multi-Step Wizard" recipe in Advanced Rails Recipes. I've also read and re-read the documentation for the updated FSM I'm using, called Workflow, and looked here and here. The Advanced Rails Recipes chapter focuses on records (quizzes) that already exist and doesn't cover creating new ones. The Workflow docs don't cover any code for controllers or views, so I've no idea what to do with all this model magic, and the last two links barely touch on implementation either. From the aforementioned resources I have a good understanding of what an FSM in Rails is and how to play with it in the console or IRB, but I've got very little direction on, or understanding of, how to implement one in my Rails app.

    What I would like is a simple, multi-step user signup process:

      - Step 1: User enters their critical details (with validations).
      - Step 2: User enters their search criteria for their profile (with validations).
      - Step 3: User agrees to the Terms of Service (with validations).
      - Step 4: User is greeted by a confirmation page, including a link that takes them to their newly created account.

    I'd also like full navigation between the steps and a full capture (saves to the database) with each transition. Can someone please give me a clear implementation of something similar to this? I would LOVE an example app that includes a multi-step signup process where I can look at the code (FULL source code: models AND controllers and views) under the hood, but I've been unable to find anything like that. Any guidance would be appreciated!

    EDIT: Please help make this a Railscast! Ryan B. (a.k.a. Superman), if you're reading this, we need you! http://feedback.railscasts.com/forums/77-episode-suggestions/suggestions/35553-multi-step-forms-and-wizards

    Read the article

  • Bespoke Development or Leverage SharePoint With Web Parts etc?

    - by Asim
    Hi all, We are currently in the process of drawing up a solution for an existing client, creating a number of eServices. The client currently has MOSS 2007, and the proposed solution is to use MOSS as the launching pad for the eServices. The requirement involves drawing up several online forms which provide registration facilities as well as facilitating a workflow of some sort. I have been told that the proposed solution requires complex web forms; most are complex forms with parent-child details that have multiple windows.

    The proposed approach is bespoke development: building ASP.NET forms that would be deployed under the _layouts folder of the current MOSS portal, inheriting the master page design of the current site. I have been told that this approach makes development and deployment simpler, as well as having "complete integration" with MOSS.

    My questions are:

      - Is this the best way to leverage SharePoint? It seems like the proposed solution is not leveraging MOSS at all..! I thought perhaps utilizing Web Parts would be better, but I have been told that this is more complex and that developing smarter, more intuitive UIs is more difficult. Is this really the case? If not, what should be the recommended approach?
      - We will be utilizing Ultimus as the workflow engine; however, I have been recommended K2 Workflows. Has anyone used both, or have any opinions on either?

    Many thanks in advance! Kind Regards,

    Read the article

  • How to Create a Folder in the Current Document Library if it's not already present?

    - by Rosh Malai
    All I want to do is to create a folder "MetaFolder" inside a document library. The user can be on any document library, and I would like to create this folder after an item is added (so in the ItemAdded event handler). I do NOT want a workflow, so please don't suggest one. This code works, but I have to hardcode the URL and instead need to get it from the current URL. I also need to verify that the folder uHippo does not already exist in the current document library.

        public override void ItemAdded(SPItemEventProperties properties)
        {
            base.ItemAdded(properties);
            using (SPSite currentSite = new SPSite(properties.WebUrl))
            using (SPWeb currentWeb = currentSite.OpenWeb())
            {
                // This code works and creates the folder in "My TEST Doc Library":
                //SPList docLib = currentWeb.Lists["My TEST Doc Library"];
                //SPListItem folder = docLib.Folders.Add(docLib.RootFolder.ServerRelativeUrl, SPFileSystemObjectType.Folder, "My folder");
                //folder.Update();

                string doclibname = "Not a doclib";
                //SPList doclibList = currentWeb.GetList(HttpContext.Current.Request.RawUrl); // NOT WORKING. Tried properties.WebUrl
                SPList doclibList = currentWeb.GetListFromUrl("https://mycompanyportal/sites/testsitecol/testwebsite/My%20TEST%20Doc%20Library/Forms/AllItems.aspx");
                if (null != doclibList)
                {
                    doclibname = doclibList.Title;
                }

                // This section is also not working:
                // getting "Object reference not set to an instance of an object" or something like that.
                //if (currentWeb.GetFolder("uHippo").Exists == false)
                //{
                    SPListItem folder = doclibList.Folders.Add(doclibList.RootFolder.ServerRelativeUrl, SPFileSystemObjectType.Folder, "uHippo");
                    folder.Update();
                //}
            }
        }
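    A minimal sketch of one way around both problems: the event properties already identify the list the item was added to, so the list can be looked up by ID instead of by URL, and SPWeb.GetFolder takes a server-relative URL whose Exists property can guard the creation (the folder name "uHippo" is carried over from the question):

        public override void ItemAdded(SPItemEventProperties properties)
        {
            base.ItemAdded(properties);
            using (SPSite site = new SPSite(properties.WebUrl))
            using (SPWeb web = site.OpenWeb())
            {
                // The list the event fired on, with no hardcoded URL.
                SPList docLib = web.Lists[properties.ListId];

                // GetFolder does not throw for a missing folder; check Exists instead.
                SPFolder candidate = web.GetFolder(docLib.RootFolder.ServerRelativeUrl + "/uHippo");
                if (!candidate.Exists)
                {
                    SPListItem folder = docLib.Folders.Add(
                        docLib.RootFolder.ServerRelativeUrl,
                        SPFileSystemObjectType.Folder,
                        "uHippo");
                    folder.Update();
                }
            }
        }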

    Read the article

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3). We have a 3rd-party workflow product which runs on top of SharePoint, installed on both servers. The workflow product can call web services I have written, which are hosted on our web server. How would I have the web services determine which SharePoint server made the call, be it the production or the development server? I would then use this information to serve environment-specific information from web.config, a database, etc.

    Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used, it returns the identity of the currently logged-in user, which is of no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.
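    A minimal sketch of one option, keying off the caller's address rather than its identity (the appSettings key names below are made up for illustration):

        using System.Configuration;
        using System.Web;

        // Inside the web service class: pick environment-specific settings based on
        // the caller's IP address, which the request carries even with anonymous access.
        private static string GetEnvironmentSetting(string prodKey, string devKey)
        {
            string caller = HttpContext.Current.Request.UserHostAddress;
            bool isProduction =
                caller == ConfigurationManager.AppSettings["ProductionSharePointIP"];
            return ConfigurationManager.AppSettings[isProduction ? prodKey : devKey];
        }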

    Read the article

  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is at times less important than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know of that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.)

    I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds, so I want to mention that speed and the ability to handle large (up to 150 MB) files are also concerns.

    Is there a good, fast, free or not-too-expensive text editor, with versions on Windows and Linux (and maybe Mac too), that is Unicode-aware and capable of displaying invisibles like ZWSP? That has syntax highlighting, can handle large files, and is customizable enough that I won't tear my hair out in frustration?

    Thanks, Roger_S
    Read the article

  • anti-if campaign

    - by Andrew Siemer
    I recently ran across a very interesting site that promotes an equally interesting idea - the anti-if campaign. You can see it at www.antiifcampaign.com. I have to agree that complex nested IF statements are an absolute pain in the rear. I am currently on a project that, until very recently, had some crazy nested IFs that scrolled to the right for quite a ways. We cured our issues in two ways: we used Windows Workflow Foundation to address routing (or workflow) concerns, and we are in the process of implementing all of our business rules with ILOG Rules for .NET (recently purchased by IBM!!). This has, for the most part, cured our nested-IF pains... but I find myself wondering how many people cure their pains the way the good folks at the Anti-IF Campaign suggest (see an example here), by creating numerous abstract classes to represent a scenario that was originally covered by a nested IF. I wonder whether another way to remove this complexity might be to use an IoC container such as StructureMap to swap different bits of functionality in and out. Either way...

    Question: Given a scenario where I have a complex nested IF or SWITCH statement that evaluates a given type of thing (say an enum) to determine how to process that thing by enum type - what are some ways to do the same form of processing without the IF or SWITCH hierarchical structure?

        public enum WidgetTypes
        {
            Type1,
            Type2,
            Type3,
            Type4
        }
        ...
        WidgetTypes _myType = WidgetTypes.Type1;
        ...
        switch (_myType)
        {
            case WidgetTypes.Type1:
                // do something
                break;
            case WidgetTypes.Type2:
                // do something
                break;
            // etc...
        }
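    One commonly suggested alternative, shown here as a minimal sketch of my own (reusing the WidgetTypes enum above): replace the switch with table-driven dispatch, mapping each enum value to a delegate, so handling a new widget type means adding one dictionary entry rather than another case branch:

        using System;
        using System.Collections.Generic;

        public enum WidgetTypes { Type1, Type2, Type3, Type4 }

        public static class WidgetProcessor
        {
            // One entry per enum value; adding a new type means adding a line
            // here, not another case branch.
            private static readonly Dictionary<WidgetTypes, Action> Handlers =
                new Dictionary<WidgetTypes, Action>
                {
                    { WidgetTypes.Type1, () => Console.WriteLine("Handling Type1") },
                    { WidgetTypes.Type2, () => Console.WriteLine("Handling Type2") },
                    { WidgetTypes.Type3, () => Console.WriteLine("Handling Type3") },
                    { WidgetTypes.Type4, () => Console.WriteLine("Handling Type4") }
                };

            public static void Process(WidgetTypes type)
            {
                Action handler;
                if (!Handlers.TryGetValue(type, out handler))
                    throw new ArgumentOutOfRangeException("type");
                handler();
            }
        }

        // Usage: WidgetProcessor.Process(WidgetTypes.Type1);

    The same table could just as well hold strategy objects resolved from an IoC container such as StructureMap, which connects this approach to the one mentioned in the question.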

    Read the article

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could:

        svn cp guac superguac
        svn ci -m "Created superguac by copying guac"
        (edit superguac)
        svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        svn ci -m "Fixed a typo in guac"
        svn merge -r3:4 guac superguac

    and thus the typo fix would be applied to superguac.

    Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, and I carefully only edit a single file in the commit I want to use in the merge:

        hg cp guac superguac
        hg ci -m "Created superguac by copying guac"
        (edit superguac)
        hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        hg ci -m "Fixed a typo in guac"

    I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?

    Read the article

  • Need some advice before starting coding my next iPhone app...

    - by Tom
    Hi! I need some advice about how I should start coding something. Here is the context: I've just finished building a CMS that manages a SQLite database. My application will pick up this database and use its content as the application's content. So far it's pretty simple.

    The application will have a navigation that browses through various workflows, and once at the end of a workflow, it'll show content from the database - a consultation kind of thing. Example: Liquids - Juice - Orange Juice - Information about Orange Juice. For my SQLite transactions, so far I believe I'll be using fmdb. It looks like a great wrapper.

    Here's a simple schema for one of the database tables:

        Workflow:
          id: { type: integer(3), primary: true, autoincrement: true }
          workflow_id: { type: integer(1) }
          name: { type: string(255) }

    That table's rows will be my navigation entries. Do you believe I should use a navigation controller? If so, how could I generate the navigation tree from it? I have a good working knowledge of Objective-C and the Foundation framework, but I've never gone very far with it, which is why I'm asking before starting in the wrong direction :) Thanks a lot.

    Read the article

  • How to solve this problem with Python

    - by morpheous
    I am "porting" an application I wrote in C++ into Python. This is the current workflow:

    1. Application is started from the console
    2. Application parses CLI args
    3. Application reads an ini configuration file which specifies which plugins to load, etc.
    4. Application starts a timer
    5. Application iterates through each loaded plugin and orders it to start work
    6. This spawns a new worker thread for each plugin
    7. The plugins carry out their work and, when completed, they die
    8. When the time interval (read from the config file) is up, steps 5-7 are repeated

    Since I am new to Python (2 days and counting), the distinction between scripts, modules, and packages is still a bit hazy to me, and I would like to seek advice from Pythonistas on how to implement the workflow described above, using Python as the programming language. To keep things simple, I have decided to leave the time-interval logic out and instead run the Python script(s) as a cron job. This is how I am thinking of approaching it:

    - Encapsulate the whole application in an executable package (i.e. one that can be run from the command line with arguments).
    - Write the plugins as modules (I think maybe it's better to implement each module in a separate file?)

    I haven't seen any examples of using threading in Python yet. Could someone provide a snippet of how I could spawn a thread to run a module? Also, I am not sure how to implement the concept of plugins in Python - any advice would be helpful, especially with a code snippet.

    Read the article

  • Handling Email Bouncebacks in Rails

    - by aressidi
    Hi there, I've built a very basic CRM system for my Rails app that lets me send weekly user-activity digests with custom text and create multi-part marketing messages, all configured and sent through a basic admin interface. I'm happy with what I've put together on the send side of things (minus the fact that I haven't tried to volume-test its capabilities), but I'm concerned about how to handle bouncebacks.

    I came across this plugin with associated scripts: http://github.com/kovyrin/bounces-handler. I'm using Google Apps to handle my mail and really don't know enough about Perl to want to mess with the above plugin; I have enough headaches.

    I'm looking for a simple solution for processing bouncebacks in Rails. All my email will go out from an address like this, which will be managed in Google Apps: "[email protected]." What's the best workflow for this? Can anyone post an example solution they're using, keeping in mind that I'm using Google Apps for the mail? Any guidance, links, or basic workflow best practices for handling this would be greatly appreciated. Thanks! -A

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >