Search Results


  • Where do we put "asking the world" code when we separate computation from side effects?

    - by Alexey
    According to the Command-Query Separation principle, as well as the Thinking in Data and DDD with Clojure presentations, one should separate side effects (modifying the world) from computations and decisions, so that both parts become easier to understand and test. This leaves an unanswered question: where, relative to that boundary, should we put "asking the world"? On the one hand, requesting data from external systems (like databases, external services' APIs, etc.) is not referentially transparent and thus should not sit together with pure computational and decision-making code. On the other hand, it is problematic, or maybe impossible, to tease these requests apart from the computational part and pass their results in as arguments, because we may not know in advance which data we will need to request.
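    One common answer is the "impure shell, pure core" arrangement, sketched minimally below in C# (all names are hypothetical): the "ask" lives in a thin impure shell, and the decision itself stays referentially transparent.

        // All names hypothetical - a sketch of "impure shell, pure core".
        public class Order
        {
            public decimal Total { get; set; }
            public bool CustomerIsVip { get; set; }
        }

        public interface IOrderRepository { Order Load(int orderId); }

        public static class Pricing
        {
            // Pure decision: no I/O, referentially transparent, trivially testable.
            public static decimal Discount(Order order)
            {
                return order.CustomerIsVip && order.Total > 100m ? order.Total * 0.1m : 0m;
            }
        }

        public class CheckoutService
        {
            private readonly IOrderRepository _orders;

            public CheckoutService(IOrderRepository orders) { _orders = orders; }

            public decimal DiscountFor(int orderId)
            {
                var order = _orders.Load(orderId); // "asking the world" stays in the shell
                return Pricing.Discount(order);    // the decision stays pure
            }
        }

    When the pure part cannot know up front what data it needs, the shell can iterate: run the pure function, let it return a description of the extra data it wants, fetch that, and run it again.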

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been awhile. I posted v9.2 back in January, about five months ago. That's a real change from the release pace I had there for awhile. Let me explain what's going on. First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That's a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to making the next version use ASP.NET MVC 4, which is now at the RC stage. I didn't think much about when they'd hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here's what's in store…
    Mobile views: I sat on this for a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end decided against using it. Sometimes buttons would inexplicably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you'll find it easy to modify. And it's Metro-y, of course!
    Re-do of background services: A number of things run in the background, and I did quite a bit of "reimagining" of that code. It's the weirdness of running services in a Web site context, because so many folks can't run a bona fide service on their host's box. The biggest change here is that these services no longer start up by default. You'll need to call a new method from global.asax called PopForumsActivation.StartServices() (a wiring sketch follows below). This is also a precursor to running the app in a Web farm (a new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code, but only one was actually the forum. The services were all running three times as often, with race conditions and hits on the same data. That was particularly bad for e-mail.
    CSS clean-up: It's still not ideal, but it's getting better. That's one of those things that comes with integrating with a real site… you discover all of the dumb things you did. The mobile CSS is particularly easier to live with.
    Bug fixes: There are a whole lot of them. Most were minor, but it's feeling pretty solid now.
    So that's where I am. I'm going to call it v10.0, and I'm going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature-oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I'll post the latest here. Stay tuned!
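    A minimal sketch of that wiring, assuming a standard Global.asax.cs; only the PopForumsActivation.StartServices() call itself is quoted from the post:

        using System;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Opt-in start-up: only the app actually hosting the forum starts
                // the background services, avoiding the three-app race described above.
                PopForumsActivation.StartServices();
            }
        }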

    Read the article

  • Issues installing synapse launcher

    - by George Morton
    I am trying to install the Synapse launcher on my desktop, using these two commands:
        sudo add-apt-repository ppa:synapse-core/ppa
        sudo apt-get update && sudo apt-get install synapse
    However, the second command fails with the error:
        E: Some index files failed to download, they have been ignored, or old ones used instead.
    I presume this has something to do with my connection to the hosting servers. But what I don't understand is that Synaptic is working; it just seems to be something about that PPA. I don't know what I am doing wrong, as the commands are widely suggested around the web, but they don't seem to work for me! I would greatly appreciate some advice on this, as it is proving to be very frustrating. Many thanks, George

    Read the article

  • Is there an Android remote app compatible with LibreOffice Impress under Ubuntu?

    - by WarriorIng64
    I have an upcoming presentation I want to make in LibreOffice Impress on my Ubuntu 12.10 laptop. I was wondering if I could get ahold of an Android app which would act like a remote control, allowing me to switch between slides from my phone over WiFi without having to stay near the laptop (I've been told I need to move around more during my presentations). A quick look on the Google Play store seemed to turn up a handful of PowerPoint remote apps for Windows or maybe Mac. I also took a look at Can an Android phone control Ubuntu like a remote?, but that seems more for controlling an Ubuntu media center. Is there anything which will work with LibreOffice Impress on Ubuntu? My phone is a Samsung Galaxy Precedent on Straight Talk, running Android 2.2.2 (latest available version from the carrier). It's not rooted, and there's not much memory on it either, so the smaller and simpler the app the better. Also, it has to be free because I currently don't have a means to pay for an app I may/may not like.

    Read the article

  • First impressions of Scala

    - by Scott Weinstein
    I have an idea that it may be possible to predict build success/failure based on commit data. Why Scala? It's a JVM language, it has lots of powerful type features, and it has a linear algebra library which I'll need later.
    Project definition and build: neither Maven nor the Scala build tool (sbt) is completely satisfactory. This Maven archetype (what .Net folks would call a VS project template)
        mvn archetype:generate -DarchetypeGroupId=org.scala-tools.archetypes -DarchetypeArtifactId=scala-archetype-simple -DremoteRepositories=http://scala-tools.org/repo-releases -DgroupId=org.SW -DartifactId=BuildBreakPredictor
    gets you started right away with "hello world" code, unit tests demonstrating a number of different testing approaches, and even a ready-made `.gitignore` file - nice! But the Scala version is behind at v2.8, and more seriously, compiling and testing was painfully slow - so much so that a rapid edit-test-edit cycle was not practical. So Lab49 colleague Steve Levine tells me that I can either adjust my pom to use fsc - the fast Scala compiler - or use sbt.
    Sbt has some nice features:
    - It's fast - it uses fsc by default
    - It has a continuous mode, so `> ~test` will compile and run your unit tests each time you save a file
    - It can consume (and produce) Maven 2 dependencies
    - The build definition file can be much shorter than the equivalent pom (about 1/5 the size, as repos and dependencies can be declared on a single line)
    And some real limitations:
    - Limited support for 3rd-party integration - for instance, out of the box, TeamCity doesn't speak sbt, nor does IntelliJ IDEA
    - Steeper learning curve for build steps outside the default
    Side note: if a language has a fast compiler, why keep the slow compiler around? Even worse, why make it the default? I chose sbt for the faster development speed it offers.
    Syntax: Scala APIs really like to use punctuation - sometimes this works well, as in the following
        map1 |+| map2
    where `|+|` defines a merge operator which does addition on the values of the maps. It's less useful here:
        http(baseUrl / url >- parseJson[BuildStatus])
    Sure, you can probably guess what `>-` does from the context, but how about `>~` or `>+`?
    Language features: I'm still learning, so not much to say just yet. However, case classes are quite useful, implicits scare me, and type constructors have lots of power.
    Community: a number of projects, such as https://github.com/scalala and https://github.com/scalaz/scalaz , are split between github and google code - github for the src, and google code for the docs. Not sure I understand the motivation here.

    Read the article

  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website, I blinked and a feature was dropped: jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them, but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized… in retrospect, it was a serendipitous decision. This time however, I threw all caution to the wind and took a close look at JsRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow. Caveat, here is a message from the site: Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period. Fair enough, we've been warned. The first thing we need is some data to render. Below is some JSON-formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard-coded a variable:
        var Golfers = [
            { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },
            { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },
            { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }
        ];
    We also need some templates; I created two. Note: the script blocks have the id property set. They are needed so JsRender can locate them.
        <script id="GolferTemplate1" type="text/html">
            {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />
        </script>

        <script id="GolferTemplate2" type="text/html">
            <tr>
                <td>{{=ID}}</td>
                <td><b>{{=Name}}</b></td>
                <td><i>{{=Birthday}}</i></td>
            </tr>
        </script>
    Including the correct JavaScript files is trivial:
        <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>
        <script src="Scripts/jsrender.js" type="text/javascript"></script>
    Of course we need some place to render the output:
        <div id="GolferDiv"></div><br />
        <table id="GolferTable"></table>
    The code is also trivial:
        function Test()
        {
            $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));
            $("#GolferTable").html($("#GolferTemplate2").render(Golfers));

            // you can inspect the rendered html if there are problems.
            // var html = $("#GolferTemplate2").render(Golfers);
        }
    And here's what it looks like with some random CSS formatting that I had lying around. Not bad; I hope JsRender lasts longer than jQuery Templates. One final warning: a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks. I hope someone finds this useful. Steve Wellens

    Read the article

  • Better Understand the 'Strategy' Design Pattern

    - by Imran Omar Bukhsh
    Greetings. Hope you all are doing great. I have been interested in design patterns for a while and started reading 'Head First Design Patterns'. I started with the first pattern, called the 'Strategy' pattern. I went through the problem outlined in the images below and first tried to propose a solution myself, so I could really grasp the importance of the pattern. So my question is: why is my solution (below) to the problem outlined in the images not good enough? What are the good / bad points of my solution vs the pattern? What makes the pattern clearly the only viable solution? Thanks for your input, hope it will help me better understand the pattern.
    MY SOLUTION
    Parent Class: DUCK
        <?php
        class Duck {
            public $swimmable;
            public $quackable;
            public $flyable;
            function display() {
                echo "A Duck Looks Like This<BR/>";
            }
            function quack() {
                if ($this->quackable == 1) {
                    echo("Quack<BR/>");
                }
            }
            function swim() {
                if ($this->swimmable == 1) {
                    echo("Swim<BR/>");
                }
            }
            function fly() {
                if ($this->flyable == 1) {
                    echo("Fly<BR/>");
                }
            }
        }
        ?>
    INHERITING CLASS: MallardDuck
        <?php
        class MallardDuck extends Duck {
            function MallardDuck() {
                $this->quackable = 1;
                $this->swimmable = 1;
            }
            function display() {
                echo "A Mallard Duck Looks Like This<BR/>";
            }
        }
        ?>
    INHERITING CLASS: WoddenDecoyDuck
        <?php
        class WoddenDecoyDuck extends Duck {
            function WoddenDecoyDuck() {
                $this->quackable = 0;
                $this->swimmable = 0;
            }
            function display() {
                echo "A Wooden Decoy Duck Looks Like This<BR/>";
            }
        }
        ?>
    Thanking you for your input. Imran
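    For contrast with the flag-based code above, here is a minimal sketch of the Strategy arrangement the book proposes - written in C# for brevity, with hypothetical names. Behaviors are objects rather than booleans, so they vary independently of the Duck hierarchy and can even be swapped at runtime:

        using System;

        public interface IQuackBehavior { void Quack(); }

        public class LoudQuack : IQuackBehavior
        {
            public void Quack() { Console.WriteLine("Quack"); }
        }

        public class MuteQuack : IQuackBehavior
        {
            public void Quack() { /* a decoy says nothing */ }
        }

        public abstract class Duck
        {
            protected IQuackBehavior QuackBehavior;
            public void PerformQuack() { QuackBehavior.Quack(); }
            public abstract void Display();
        }

        public class MallardDuck : Duck
        {
            public MallardDuck() { QuackBehavior = new LoudQuack(); }
            public override void Display() { Console.WriteLine("A Mallard Duck"); }
        }

    The practical difference from the flag-based version: adding a new behavior means adding a class, not editing every conditional inside Duck, and a duck's behavior becomes a value that can change at runtime rather than a fixed set of booleans.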

    Read the article

  • What's the best license for my website?

    - by John Maxim
    I have developed a unique website but do not have a lot of funds to protect it with trademarks or patents. I'm looking for suggestions, so that when my supervisor gets my code, some laws restrict anyone from copying it and claiming it as their own work. I'm in the middle of deciding whether to make the application a commercial one or to never allow it to be copied at all. What kind of steps am I required to take in order to take full measures so my applications are fully protected? I've come across a few licenses; one under my consideration is the MIT license. Some say we can have a mixture of both commercial and MIT. I would also like to be able to distribute some functions so they can be modified by others, but I'd still retain the ownership. Last but not least, it's confusing when I think of protecting the whole website versus protecting the code division by division. How should we go about this? Thanks. N.B. I have to pass it over to the supervisor as this is a Uni project. I need this to be done within 2 weeks, so time to get my app protected is a factor here.

    Read the article

  • New Visual Studio 2012 Project Templates for DotNetNuke

    - by Chris Hammond
    Earlier this month Microsoft put the bits for Visual Studio 2012 RTM up on MSDN Subscriber downloads, and during the first two weeks of September they will officially release Visual Studio 2012. I started working with VS2012 late in the release-candidate cycle, doing some DNN module development using my templates at http://christoctemplate.codeplex.com . From my testing, these templates work fine in Visual Studio 2012, but they still face the same problem that they had in Visual Studio 2008...(read more)

    Read the article

  • Find CheckBox from GridView in Content Page/Master Page

    - by Suthish Nair
    How do you find a control in a GridView that resides in a Content Page? Here is an example that locates the CheckBox; hope this will help you all...
    .aspx code:
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" Runat="Server">
            <asp:GridView ID="GridView1" runat="server">
                <Columns>
                    <asp:TemplateField>
                        <ItemTemplate>
                            <asp:CheckBox ID="chkID" runat="server" />
                        </ItemTemplate>
                    </asp...(read more)
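    The excerpt cuts off before the code-behind, so as a hedged sketch (standard FindControl usage matching the chkID markup above, not necessarily the article's exact code):

        // C# code-behind: locate the CheckBox inside each GridView row.
        protected void GetCheckedRows()
        {
            foreach (GridViewRow row in GridView1.Rows)
            {
                var chk = row.FindControl("chkID") as CheckBox;
                if (chk != null && chk.Checked)
                {
                    // this row's CheckBox is ticked; act on the row here
                }
            }
        }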

    Read the article

  • ORM Profiler v1.1 has been released!

    - by FransBouma
    We've released ORM Profiler v1.1, which has the following new features:
    Real-time profiling: A real-time viewer (RTV) has been added, which gives insight into the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. The RTV allows the user to directly record to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs of the number of connection-open actions and the number of commands per second. The RTV has a 'range' in which it keeps live data, and it auto-cleans data that's older than this range. [Screenshot: the activity graphs part of the real-time viewer.]
    Low-level activity tab: A new tab has been added to the Application tabs: the Low-level activity tab. This tab shows the main activity as it has been received over the named pipe. It can help to get insight into the chronological activity without the grouping over connections, so multiple connections at the same time per thread are easier to spot. Clicking a command will sync the rest of the application tabs; clicking a row will show the details below the splitter bar, as is done with the other application tabs as well.
    Default application name in interceptor: When an empty string or null is passed for the application name to the Initialize method of the interceptor, the AppDomain's friendly name is used instead.
    Copy call stack to clipboard: A call stack viewed in a grid in various parts of the UI is now copyable to the clipboard by clicking a button.
    Enable/disable interceptor from the config file: It's now possible to enable/disable the interceptor initialization from the application's config file, using:
        <appSettings>
            <add key="ORMProfilerEnabled" value="true"/>
        </appSettings>
    If the value is true, the interceptor's Initialize method will proceed. If the value is false, the interceptor's Initialize method will not proceed and initialization won't be performed, meaning no interception will take place. If the setting is absent, or misconfigured, the Initialize method will proceed as normal and perform the initialization.
    Stored procedure calls for select databases are now properly displayed as a call: For the databases SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix, a stored procedure call is displayed as an execute/call statement, and copy to clipboard works as-is.
    I'm especially happy with the new real-time profiling feature in ORM Profiler, which is the flagship feature of this release: it offers a completely new way to use the profiler, namely directly during debugging: you can immediately see what's going on without the necessity of a snapshot. The activity graph feature, combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and see any spike of activity in the profiled application.

    Read the article

  • Application for taking pretty screenshots (like OS X does)

    - by Oli
    I've been building a website for a guy who uses Mac OS X, and occasionally he sends me screenshots of bugs. They come out looking like this: This is fairly typical of Mac screenshots. You get the window decorations, the shadow from the window and a white or transparent background (not the desktop wallpaper - I've checked). Compare this to an Ubuntu window-shot (Alt+Print Screen): It's impossible to keep a straight face and say the Ubuntu one is anywhere near as elegant. My question is: is there an application that can do this in Ubuntu? Edit: Follow-up: is there an application that can do this in one move? Shutter is pretty good, but running the plugin for every screenshot is pretty tiresome, as it doesn't seem to remember my preference (I want south-shadow, and that requires selecting south, then clicking refresh, then save), and it's more clicks than I'd like. Is there a simple way of telling Shutter I want south-shadow for all screenshots (except entire-desktop and area-selection shots)?

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by joycsharp
    I just released an open source project at CodePlex, which includes a set of T-4 templates that enable you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are ASP.NET Web Form data-bound control friendly, and are fully unit-testable. In this open source project you will get Entity Framework 4.0 based T-4 templates for the following types of logical layers:
    Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of the data access layer based on Entity Framework. The default structure of the data access layer has been enhanced to support mock testing of the Entity Framework 4.0 object model.
    Business Logic Layer: an ASP.NET Web Form data-bound control friendly business logic layer, which will enable you, in a few clicks, to quickly build data-bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0, with great support for mock testing.
    Download it to make your web development productive. Enjoy!
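    As a rough illustration of what "data-bound control friendly" usually means in Web Forms - hypothetical names, not the templates' actual output - the BLL exposes a default constructor plus attributed Select/Update methods so an ObjectDataSource can bind to it, while tests inject a mock repository:

        using System.Collections.Generic;
        using System.ComponentModel;

        [DataObject]
        public class CustomerLogic
        {
            private readonly ICustomerRepository _repository;

            // ObjectDataSource creates the class through this default constructor.
            public CustomerLogic() : this(new CustomerRepository()) { }

            // Unit tests supply a mock ICustomerRepository here instead.
            public CustomerLogic(ICustomerRepository repository) { _repository = repository; }

            [DataObjectMethod(DataObjectMethodType.Select)]
            public IList<Customer> GetCustomers() { return _repository.GetAll(); }

            [DataObjectMethod(DataObjectMethodType.Update)]
            public void UpdateCustomer(Customer customer) { _repository.Update(customer); }
        }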

    Read the article

  • Integrate Microsoft Translator into your ASP.Net application

    - by sreejukg
    In this article I am going to explain how easily you can integrate the Microsoft Translator API into your ASP.Net application.
    Why do we need a translation API? Once you publish a website, you are opening a channel to a global audience, so making the web content available in only one language doesn't cover all your audience. Especially when you are offering products/services, it is important to provide content in multiple languages. Users will be more comfortable when they see the content in their native language. How to achieve this? Hiring translators to translate the content into all your users' languages will cost you a lot of money, and it is not a one-time job; you need to translate the content on the go. What is the alternative? We need to look at machine translation. Thankfully there are some translation engines available that give you API-level access, so that you can automatically translate content and display it to the user. The Microsoft Translator API is an excellent set of web service APIs that allows developers to use machine translation technology in their own applications.
    The Microsoft Translator API is offered through the Windows Azure marketplace. In order to access the data services published in the Windows Azure marketplace, you need to have an account. The registration process is simple, and it is common for all the services offered through the marketplace. Last year I wrote an article about the Bing Search API, where I covered the registration process. You can refer to the article here: http://weblogs.asp.net/sreejukg/archive/2012/07/04/integrate-bing-search-api-to-asp-net-application.aspx
    Once you have registered with the Windows marketplace, you will get your APP ID. Now you can visit the Microsoft Translator page and click on the sign up button: http://datamarket.azure.com/dataset/bing/microsofttranslator
    As you can see, there are several options available for you to subscribe to. There is a free version available - great. Click on the sign up button under the package that suits you. Clicking on the sign up button will bring up the sign up form, where you need to agree to the terms and conditions and go ahead. You need to have a Windows Live account in order to sign up for any service available in the Windows Azure marketplace. Once you have signed up successfully, you will receive the thank you page. You can download the C# class library from here, so that the integration can be done without writing much code. The C# file name is TranslatorContainer.cs. At any point of time, you can visit https://datamarket.azure.com/account/datasets to see the applications you are subscribed to. Clicking on the Use link next to each service will give you the details of the application. You need to note the primary account key and the URL of the service to use in your application.
    Now let us start our ASP.Net project. I have created an empty ASP.Net web application using Visual Studio 2010 and named it Translator Sample; any name would work. Now right-click the project, select Add -> Existing Item, and then browse to TranslatorContainer.cs. Now let us create a page where the user enters some data and performs the translation. I have added a new web form to the project named Translate.aspx. I have placed one textbox control for the user to type the text to translate, a dropdown list to select the target language, a label to display the translated text and a button to perform the translation.
    For the dropdown list I have selected some languages supported by Microsoft Translator. You can get all the supported languages with their codes from the link below: http://msdn.microsoft.com/en-us/library/hh456380.aspx
    All the class libraries in the Windows marketplace require a reference to System.Data.Services.Client; let us add the reference. You can find the documentation for how to use the downloaded class library at the link below: http://msdn.microsoft.com/en-us/library/gg312154.aspx
    Let us examine the TranslatorContainer.cs file. You can refer to the code; it is self-explanatory. Note the namespace name used (Microsoft); you need to add the namespace reference to your page. I have added the following event for the translate button. The code is self-explanatory: you create an object of the TranslatorContainer class by passing the translation service URL, then set the credentials for your TranslatorContainer object, which will be your account key. The TranslatorContainer exposes a method that accepts a text input, a source language and a destination language, and returns a DataServiceQuery<Translation>.
    Let us see this working. I just ran the application, entered Good Morning in the textbox, selected a target language, and saw the output as follows. It is easy to build great translator applications using the Microsoft Translator API, and there is a reasonable amount of translation you can perform in your application for free. For enterprises, you can subscribe to the appropriate package and make your application multi-lingual.
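    Putting the pieces together, the button's click handler ends up looking something like the sketch below. The Translate call follows the article's description of TranslatorContainer (text in, language pair in, DataServiceQuery<Translation> out); the parameter order, the control and field names, and the service URL are assumptions to verify against the downloaded class and your subscription:

        using System;
        using System.Linq;
        using System.Net;

        public partial class Translate : System.Web.UI.Page
        {
            // Assumed field: the primary account key noted earlier.
            private const string AccountKey = "<your-account-key>";

            protected void btnTranslate_Click(object sender, EventArgs e)
            {
                var service = new Microsoft.TranslatorContainer(
                    new Uri("https://api.datamarket.azure.com/Bing/MicrosoftTranslator/"));
                service.Credentials = new NetworkCredential(AccountKey, AccountKey);

                // Text to translate, target language, source language (assumed order).
                var query = service.Translate(txtSource.Text, ddlLanguage.SelectedValue, "en");
                var translation = query.Execute().FirstOrDefault();
                lblTranslated.Text = translation == null ? string.Empty : translation.Text;
            }
        }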

    Read the article

  • I tried to port wireless_tools to an Android 4.0 system, but I have a problem with Java

    - by Chen Guoli
    My Linux system is Ubuntu Kylin, a new branch of Ubuntu spreading mainly in China. I have changed some files, such as wireless.22.h, ifrename.c and iwlib.h, in wireless_tools.29/, which is located in the Android 4.0 root directory. Then I followed these steps:
        $ cd ~/Android4.0
        $ su
        (entered the password to change to root)
        root# source build/envsetup.sh
        root# cd ~/Android4.0/wireless_tools.2.9/
        root# mm
    Then I got a message telling me: Your version is: java version "1.7.0_21". The correct version is: Java SE 1.6. Then I did as described in "How can I uninstall my current java and install sun java 1.6". The java was installed successfully, but when I tried mm again: Your version is: java version "1.6.0_27". The correct version is: Java SE 1.6. Then I tried https://source.android.com/source/initializing.html , but it didn't work. How can I install "Java SE 1.6" correctly?

    Read the article

  • Writing C# Code Using SOLID Principles

    - by bipinjoshi
    Most of the modern programming languages, including C#, support object-oriented programming. Features such as encapsulation, inheritance, overloading and polymorphism are code-level features. Using these features is just one part of the story; equally important is to apply some object-oriented design principles while writing your C# code. SOLID is a set of five such principles: the Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle and Dependency Inversion Principle. Applying these time-proven principles makes your code structured, neat and easy to maintain. This article discusses the SOLID principles and also illustrates how they can be applied to your C# code: http://www.binaryintellect.net/articles/7f857089-68f5-4d76-a3b7-57b898b6f4a8.aspx
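    To make one of the five concrete, here is a small illustrative fragment (not taken from the article) showing the Dependency Inversion Principle: the high-level OrderProcessor depends on an abstraction, and the concrete logger is supplied from outside.

        using System;
        using System.IO;

        public interface ILogger { void Log(string message); }

        public class FileLogger : ILogger
        {
            public void Log(string message)
            {
                File.AppendAllText("app.log", message + Environment.NewLine);
            }
        }

        public class OrderProcessor
        {
            private readonly ILogger _logger;

            // The dependency is injected; OrderProcessor never constructs a concrete logger.
            public OrderProcessor(ILogger logger) { _logger = logger; }

            public void Process(int orderId)
            {
                // ...business logic would run here...
                _logger.Log("Processed order " + orderId);
            }
        }

    Swapping FileLogger for a database or in-memory test logger then requires no change to OrderProcessor, which is also what makes the class straightforward to unit test.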

    Read the article

  • Errors while updating Ubuntu 13.10

    - by santiago
    When I execute the updater it always throws the same error, saying: Failed to download repository information. Check your internet connection. And when I run sudo apt-get update I get these lines at the end:
        W: Failed to fetch cdrom://Ubuntu 13.10 Saucy Salamander - Release i386 (20131016.1)/dists/saucy/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W: Failed to fetch cdrom://Ubuntu 13.10 Saucy Salamander - Release i386 (20131016.1)/dists/saucy/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W: Failed to fetch http://ppa.launchpad.net/tiheum/equinox/ubuntu/dists/saucy/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/tiheum/equinox/ubuntu/dists/saucy/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you're one of the two people who have followed my blog for many years, you know that I've been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated into six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.
    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don't have a choice, because they're running the app on shared hosting, or don't otherwise have access to a box that can run some kind of regular background service. In POP Forums, I "solved" this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don't run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I'm going to solve this problem. This post isn't about that.
    The other little hacky problem that I "solved" was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.
    So let's start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn't queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That's the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it's used, I have not, however, experienced this.)
    It was still failing, but this time I wasn't sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn't want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:
        public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;
            var url = requestUrl.Scheme + "://";
            url += requestUrl.Host;
            url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
            url += helper.Action(actionName, controllerName, routeValues);
            return url;
        }
    And yes, that should have been done with a string builder (a sketch of that version follows at the end of this post).
    This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:
        Func<User, string> unsubscribeLinkGenerator =
            user => this.FullUrlHelper("Unsubscribe", AccountController.Name,
                new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
    Cool, right? Actually, not so much. If you look back at the helper, this delegate depends on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn't already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request.
    It's already inefficient that I'm building the entire email for every user, but going back to check the routing table for the right link every time isn't a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you're wondering why I didn't just use the curly braces up front, it's because they get URL encoded:
        var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
        baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
        Func<User, string> unsubscribeLinkGenerator =
            user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
    And wouldn't you know it, the new solution works just fine. It's still kind of hacky and inefficient, but it will work until this somehow breaks too.
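    For completeness, the StringBuilder version alluded to above would look roughly like this; it is a sketch with the same behavior as the original helper, not code from the post:

        // Requires a using for System.Text; behavior matches the helper above.
        public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;

            var url = new StringBuilder();
            url.Append(requestUrl.Scheme).Append("://").Append(requestUrl.Host);
            if (requestUrl.Port != 80)
                url.Append(':').Append(requestUrl.Port);
            url.Append(helper.Action(actionName, controllerName, routeValues));
            return url.ToString();
        }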

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a project which was started in the 90s in C/C++. Therefore, it contains many old coding styles, such as K&R-style function declarations, obsolete functions, ... The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010), because we have other projects in Visual Studio 2010/2012 and I don't want to have too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I can fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too. I want the project to remain backwards compatible with VS. My question is how to use the old code in Visual Studio 2010/2012 without changing it. Or, if necessary, how do I fix just a few lines of code while making sure they won't cause an error if someone else opens that code in an older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags or something like that?

    Read the article

  • Microsoft OneNote alternative?

    - by YSN
    Hi folks! Is there any program for Linux that has about the same functionality and usability as Microsoft OneNote? At the moment I am checking out Basket (for KDE), that seems to point to the right direction, but still lacks much of the functionality of OneNote and is very buggy unfortunately. For those of you, who do not know OneNote and want to get an idea of what I am talking about, please see the following video: http://www.youtube.com/watch?v=xdi67tnx6nA Any alternative suggestions (including a combination of different tools or apps running in wine) are very much appreciated. Thanks, YSN

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
    Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt different levels of software engineering practices and maturity based on the needs of that project.
    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand, using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event's logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:
    Importance – If this application were to stop working, the business doesn't grind to a halt, revenue doesn't stop, and in fact our customers wouldn't even notice, since it isn't a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don't have any data loss).
    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
    Life-Expectancy – For this specific project we're only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).
    Let's assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let's assume the importance and complexity didn't change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
    We all realize that the learning never stops, but eventually you'll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that's not the point here). A critical skill that I'd love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I'm planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don't match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • apt sources.list disabled on upgrade to 12.04

    - by user101089
    After a do-release-upgrade, I'm now running Ubuntu 12.04 LTS, as indicated below:
        > lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise
    However, I find that all the entries in my /etc/apt/sources.list were commented out except for one. QUESTION: Is it safe for me to edit these, replacing the old 'lucid' with 'precise' in what is shown below?
        ## unixteam source list
        # deb http://debian.yorku.ca/ubuntu/ precise main main/debian-installer restricted restricted/debian-installer # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
        # R sources
        # see http://cran.us.r-project.org/bin/linux/ubuntu/ for details
        # deb http://probability.ca/cran/bin/linux/ubuntu lucid/ # disabled on upgrade to precise
        deb http://archive.ubuntu.com/ubuntu precise main multiverse universe

    Read the article

  • Show #14 DotNetNuke 5.6.1, Razor/Webmatrix and WebCamps

    - by Chris Hammond
    Once again, it's been far too long since the last show - this time just over 4 months. For Show #14 I am joined by Joe Brinkman. Take a listen and see what has been going on in the DNN world.
    Length: 47:56 | Size: 43.8mb | MP3 Download
    - Welcome back to DNNVoice
    - Welcome to guest Joe Brinkman ( http://blog.theaccidentalgeek.com/ )
    - Introduction to Joe and welcome back from Chris Hammond ( http://www.chrishammond.com ) and what he's been doing
    - DotNetNuke Training
    - Free Extensions
    - Module Development Templates...(read more)

    Read the article

  • Adding a website link to the Member Directory in DotNetNuke 6.2

    - by Chris Hammond
    In case you missed it, DotNetNuke 6.2 was released today; check out Will Morgenweck's blog post for more details on the release. With some of its new features, DotNetNuke 6.2 makes it easier to start customizing the listing of members on your site, and also the profile display for users on the website. I started implementing DotNetNuke 6.2 on one of my racing websites last night (yeah, I upgraded before the release happened, a benefit of working for the corp). In doing so I configured the profile...(read more)

    Read the article
