Search Results

Search found 26740 results on 1070 pages for 'general software developm'.


  • Adding a SQL Server Membership Provider using the aspnet_regsql.exe Utility

    - by nannette
    You may add a SQL Server Membership Provider using the aspnet_regsql.exe utility on either your local SQL Server Express database or on a full-blown SQL Server database. In both cases you use the same aspnet_regsql.exe utility, which is installed along with the .NET Framework. To use this on your SQL Server 2008 database server, for instance, you would first need to download and install the .NET Framework onto your server. Then you would need to find the location of the aspnet_regsql...(read more)
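    As a quick illustration, this is the kind of command the article goes on to describe (a sketch with placeholder server and database names; -S names the server, -E uses Windows authentication, -d names the database, and -A m adds the membership feature; running the tool with no arguments launches a wizard instead):

        aspnet_regsql.exe -S localhost\SQLEXPRESS -E -d MyAppServices -A m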

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF | Part II: Hot data objects | Part III: The architecture using the "Web stack of love"

    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing.

    Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but by their interfaces." Probably the first place a developer exercises this is in having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you."

    Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC.

    What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures).

    There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece to manage users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
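    To make the module bit concrete, here is a minimal sketch of what such a module might look like (the class name is a guess; the Unbind/Bind lines are the ones from the post):

        using Ninject.Modules;

        public class CoasterBuzzModule : NinjectModule
        {
            public override void Load()
            {
                // Replace the forum's mailer with the site-specific subclass.
                Unbind<PopForums.Email.INewAccountMailer>();
                Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
            }
        }

    Because this module loads after the forum's own module, the second binding wins, and every consumer of INewAccountMailer gets the custom mailer.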

    Read the article

  • DNNWorld Discounts!

    - by Chris Hammond
    If you are going to be attending DotNetNuke World this year ( http://dnnworld.dotnetnuke.com ) don’t forget that today is the last day to use the discount code 2011Attendee to get both the conference and training for $599. After today the price goes up! The conference runs October 10-12, 2012 in Orlando, Florida. DotNetNuke World is the annual user conference specifically designed for developers, web designers, administrators, business decision makers, and end users on the DotNetNuke Platform. This...(read more)

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data.

    CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of NHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here.

    My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff.

    Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.

    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL, just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework onto an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
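    To illustrate the convention-deviation point, here is a minimal code-first sketch (the entity and table names are hypothetical, not the actual CoasterBuzz schema) of mapping an entity onto a table whose name breaks EF's conventions:

        using System.Data.Entity;

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }
        }

        public class CoasterContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // By convention EF would look for a table named "Coasters";
                // point it at the legacy table name instead.
                modelBuilder.Entity<Coaster>().ToTable("tblRollerCoaster");
            }
        }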

    Read the article

  • How to stay productive? What time management software is available?

    - by andrewsomething
    So since I started using askubuntu.com I've spent entirely too much time here answering other people's questions. Now maybe someone could help me with that by answering this one. I'm looking for time management software for Ubuntu. There are a number of these programs floating around for Windows. RescueTime is one that is very popular. The key features that I'd like to see in a Linux app that RescueTime has are:

    - Automatically records what application you are using, including what websites you visit.
    - Reports and graphs on your time usage.
    - Notifications for when you have spent too much time on "distractions."

    While RescueTime doesn't officially support Linux, there is an open source RescueTime Linux Uploader. Unfortunately, it seems to only support Firefox and Epiphany for website tracking. I'm a Chromium user. The other major drawback to RescueTime is that it is a web service. I'd much rather not upload detailed information about how I spend my time to some third party. Google already knows too much about me as it is.

    Project Hamster, a GNOME time management app, comes so close. Sadly, it does not automatically track what you are doing. If I had enough discipline to manually report to an applet what I was up to, I doubt I'd need this. (How cool would it be if they provided some Zeitgeist integration to handle that part?)

    Read the article

  • Internet Explorer 9 is coming Monday to a web near you

    - by brian_ritchie
    Internet Explorer 9 is finally here...well almost.  Microsoft is releasing their new browser on March 14, 2011. IE9 has a number of improvements, including:

    - Faster, Faster, Faster. Did I mention it is faster? With the new browsers coming out from Mozilla, Google, and Microsoft, there has been a flood of speed test coverage. Chrome has long held the JavaScript speed crown. But according to Steven J. Vaughan-Nichols over at ZDNET..."for the moment at least IE9 is actually the fastest browser I’ve tested to date." He came to this revelation after figuring out that the 32-bit version of IE9 has the new Chakra JIT (the 64-bit version doesn't). It also has a DirectX-based rendering engine, so it can do cool tricks once reserved for desktop applications.
    - Windows 7 Desktop Integration. Read my post for more details. Unfortunately, they didn't integrate my ideas...at least not yet :)
    - Hot new UI. Ok, they "borrowed" some ideas from Chrome...but that is the best form of flattery.
    - Standards Compliance. A real focus on HTML5 and CSS3. Definite goodness for developers.

    So, go get yourself some IE9 on Monday and enjoy!

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I keep my documents, music, video clips, movies, and pictures on my hard disk. These are not scattered around the system; everything lives inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    - I do NOT need multiple versions or historic versions. I don't need the previous versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense. If files were not touched since the last backup, it need not copy them.
    - The size of my folder will run into GBs, and in a year or two will go into TBs. But I will make sure the size of the external HDD is equal to or bigger than my source folder.
    - I do not want it to run automatically, because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually, so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D

    The main reason I am asking this question is that I am a software developer and I could write this software myself. But I am a little constrained by time at the moment, and I want to know if there is an existing program that can do this.

    Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories, and I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete or teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
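    For what it's worth, the one-way mirror described above maps closely to the robocopy tool that ships with Windows 7; a minimal sketch, assuming the external disk mounts as E: (the destination path is hypothetical):

        robocopy C:\Senthil E:\Senthil /MIR

    /MIR mirrors the source to the destination: new and changed files are copied, untouched files are skipped, and files deleted from the source are removed from the destination. Run it by hand whenever you plug in the disk, so nothing happens automatically.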

    Read the article

  • Worthless Anti-Spam (What can we learn)

    - by smehaffie
    I recently came across a site that had an "anti-spam" field at the bottom of the entry form. The first issue I had with it was that at 1280x800 you could not read the value you were supposed to enter (see below). You tell me: should you enter div, dlv, piv, or plv?

    But even worse than not being readable at high resolutions is the fact that the programmer who coded it really did not understand what this was used for. An anti-spam (aka captcha) entry field should not be readable by looking at the HTML DOM (so entry of the value cannot be scripted). In this case the value is simply a disabled text input field that holds the value you need to type. So a spammer would simply need to search for the text input field named "spam2", and then they could flood the site with spam.

        <td>
          <label>
            <input name="spam1" type="text" class="small" id="spam1" size="6" maxlength="3" />
            <input name="spam2" type="text" class="small" id="spam2" value="plv"
                   disabled="disabled" size="6" maxlength="3" />
            * <span class="small">- Anti-SPAM key - please enter matching value</span>
          </label>
        </td>

    There are some things to learn from this example:

    1) Always make sure you understand why you are coding a feature/function for any program you write. Just following the requirements without realizing the "why" will sooner or later come back to bite you. The above appears to be an example of this.

    2) Always check how the screen appears in different resolutions. In this case it was pretty much unreadable at 1280x800, but you could read it at 800x600 (though most people I know do not have their resolution set that low). Lucky for me, I could "View Source" and get the value I needed to enter.
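    As a hedged sketch of the fix implied above (my own illustration, not the site's code): generate the key server-side, keep it out of the markup entirely, and compare it on postback.

        // ASP.NET sketch; the form field name matches the markup above,
        // everything else here is hypothetical.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Render this value only as an image, never as markup,
                // so it cannot be scraped from the DOM.
                Session["AntiSpamKey"] = Guid.NewGuid().ToString("N").Substring(0, 3);
            }
        }

        protected void Submit_Click(object sender, EventArgs e)
        {
            var expected = (string)Session["AntiSpamKey"];
            var entered = Request.Form["spam1"];
            if (expected == null || !string.Equals(entered, expected, StringComparison.OrdinalIgnoreCase))
            {
                return; // reject the submission
            }
            // ...process the form...
        }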

    Read the article

  • Generating radial indicator images using C#

    - by DigiMortal
    In one of my projects I needed to draw radial indicators for processes measured in percent. Simple images like the one shown on the right. I solved the problem by creating the images in C# and saving them on the server hard disk, so if an image has been generated once, it is returned from disk the next time. I am not a master of graphics or geometry, but here is the code I wrote.

    Drawing the radial indicator

    To get things done the quick'n'easy way - some younger developer may later be the one who needs to change things - I divided my indicator drawing process into the four steps shown below:

    1. Fill pie
    2. Draw circles
    3. Fill inner circle
    4. Draw text

    Drawing the image

    Here is the code to draw indicators (it needs using directives for System.Drawing, System.Drawing.Drawing2D, System.Drawing.Imaging, System.Drawing.Text and System.IO):

        private static void SaveRadialIndicator(int percent, string filePath)
        {
            using (Bitmap bitmap = new Bitmap(100, 100))
            using (Graphics objGraphics = Graphics.FromImage(bitmap))
            {
                // Initialize graphics
                objGraphics.Clear(Color.White);
                objGraphics.SmoothingMode = SmoothingMode.AntiAlias;
                objGraphics.TextRenderingHint = TextRenderingHint.ClearTypeGridFit;

                // Fill pie
                // Degrees are taken clockwise, 0 is parallel with x.
                // For the sweep angle we must convert percent to degrees (360/100 = 18/5).
                float startAngle = -90.0F;
                float sweepAngle = (18.0F / 5) * percent;

                Rectangle rectangle = new Rectangle(5, 5, 90, 90);
                objGraphics.FillPie(Brushes.Orange, rectangle, startAngle, sweepAngle);

                // Draw circles
                rectangle = new Rectangle(5, 5, 90, 90);
                objGraphics.DrawEllipse(Pens.LightGray, rectangle);
                rectangle = new Rectangle(20, 20, 60, 60);
                objGraphics.DrawEllipse(Pens.LightGray, rectangle);

                // Fill inner circle with white
                rectangle = new Rectangle(21, 21, 58, 58);
                objGraphics.FillEllipse(Brushes.White, rectangle);

                // Draw text on image
                // Use a rectangle for the text and align the text to its center
                var font = new Font("Arial", 13, FontStyle.Bold);
                StringFormat stringFormat = new StringFormat();
                stringFormat.Alignment = StringAlignment.Center;
                stringFormat.LineAlignment = StringAlignment.Center;

                rectangle = new Rectangle(20, 40, 62, 20);
                objGraphics.DrawString(percent + "%", font, Brushes.DarkGray, rectangle, stringFormat);

                // Save indicator to file
                objGraphics.Flush();
                if (File.Exists(filePath))
                    File.Delete(filePath);

                bitmap.Save(filePath, ImageFormat.Png);
            }
        }

    Using indicators on a web page

    To show indicators on your web page, you can use the following code on the page that outputs indicator images:

        protected void Page_Load(object sender, EventArgs e)
        {
            var percentString = Request.QueryString["percent"];
            var percent = 0;
            if (!int.TryParse(percentString, out percent))
                return;
            if (percent < 0 || percent > 100)
                return;

            var file = Server.MapPath("~/images/percent/" + percent + ".png");
            if (!File.Exists(file))
                SaveRadialIndicator(percent, file);

            Response.Clear();
            Response.ContentType = "image/png";
            Response.WriteFile(file);
            Response.End();
        }

    On the pages where you need an indicator, you can set the image source to Indicator.aspx (if you named your indicator-handling file like this) and add the percent as a query string:

        <img src="Indicator.aspx?percent=30" />

    That's it! If somebody knows a simpler way to generate indicators like this, I am interested in your feedback.

    Read the article

  • Microsoft joins the rest of us...and counts down to the end of IE6

    - by brian_ritchie
    Microsoft launched a website dedicated to the demise of IE6.  Here's their pitch...10 years ago a browser was born.  Its name was Internet Explorer 6. Now that we’re in 2011, in an era of modern web standards, it’s time to say goodbye. We'll watch Internet Explorer 6 usage drop to less than 1% worldwide, so more websites can choose to drop support for Internet Explorer 6, saving hours of work for web developers.  Thanks Microsoft!  We've been waiting for this day to come for a long time.  Of course, it would have been nice if IE6 was gone 5 years ago...but who's counting.

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
    After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they’re giving it these days. (Seriously… that no one could come up with a suitable replacement for “metro” is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they’re seeing as much as 70% code reuse. Not bad.

    However, I’ve already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing the SDK already caused dismay among developers, and I’m sure this won’t help.

    So I started refactoring some code I already had for a Windows 8 Store app (ugh) to accommodate the use of HttpWebRequest instead. I haven’t tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point, since it’s been a long time since I used HttpWebRequest, and keep in mind that it has no exception handling. It needs refinement. The goal here is to new up the client and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here’s the start:

        using System.IO;
        using System.Net;
        using System.Threading.Tasks;
        using Newtonsoft.Json;

        namespace MahProject
        {
            public class ServiceClient<T> where T : class
            {
                public ServiceClient(string url)
                {
                    _url = url;
                }

                private readonly string _url;

                public async Task<T> GetResult()
                {
                    var response = await MakeAsyncRequest(_url);
                    var result = JsonConvert.DeserializeObject<T>(response);
                    return result;
                }

                public static Task<string> MakeAsyncRequest(string url)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.ContentType = "application/json";
                    Task<WebResponse> task = Task.Factory.FromAsync(
                        request.BeginGetResponse,
                        asyncResult => request.EndGetResponse(asyncResult),
                        null);
                    return task.ContinueWith(t => ReadStreamFromResponse(t.Result));
                }

                private static string ReadStreamFromResponse(WebResponse response)
                {
                    using (var responseStream = response.GetResponseStream())
                    using (var reader = new StreamReader(responseStream))
                    {
                        var content = reader.ReadToEnd();
                        return content;
                    }
                }
            }
        }

    Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (the Park model class is omitted because it doesn’t matter):

        public class ParkRepo
        {
            public async Task<Park[]> GetAllParks()
            {
                var client = new ServiceClient<Park[]>("http://superfoo/endpoint");
                return await client.GetResult();
            }
        }

    And then from inside your WP8 or W8S app (see what I did there?), when you load state or do some kind of UI event handler (making sure the method uses the async keyword):

        var parkRepo = new ParkRepo();
        var results = await parkRepo.GetAllParks();
        // bind results to some UI or observable collection or something

    Hopefully this saves you a little time.
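    Since the post flags the missing exception handling, here is one hedged way to refine it (my addition to the ServiceClient<T> class above, not part of the original; the method name is my own):

        // Variant of GetResult that surfaces failures as null instead of
        // letting the exception bubble up to the UI.
        public async Task<T> GetResultOrDefault()
        {
            try
            {
                var response = await MakeAsyncRequest(_url);
                return JsonConvert.DeserializeObject<T>(response);
            }
            catch (Exception)
            {
                // Failures from the underlying request land here, possibly
                // wrapped in an AggregateException by the ContinueWith call.
                return null;
            }
        }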

    Read the article

  • Microsoft Innovation Day (Moscow, Russia)

    - by Bigtrend
    A new event from Microsoft about the new technologies that are going to be released during the current year. What can we say about these new technologies? A lot of advertising without exceptional ideas. Unfortunately, that is true for the whole industry - the only far-future idea is cloud computing, which is not really a new approach to development. I understand that we can provide facilitation to the development community, as well as new UI features for end users, but in fact the...(read more)

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!)

    About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem and a CD-ROM, built in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years.

    The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles. Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%.

    The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes on a long haul across a large airport, you know you're carrying it.

    Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway.

    So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • What would be a good opensource software for graphing circles, circular regions, and regions bounded by curves and/or arcs?

    - by Michael Dykes
    Hullo all. I have just started a job with Chegg, and my first assignment has me writing solutions for Stewart's Essential Calculus. I am dealing with the chapter on multiple integration, and need a good open-source program that I can easily use to draw the regions (domains) that come up with multiple integrals: i.e. circular regions, portions of circles, regions bounded by curves and/or arcs, and at some point 3D pictures. In most of these cases, I am not working with an exact equation; I may need, say, to draw the region bounded by the radius r between 1 and 2 and the angle theta between pi/4 and 3 pi/4. I am not too terribly familiar with programs like Corel Draw, but the documents I have from this company suggest Corel Draw. So I think I am looking for a free, open-source program like Corel Draw or something similar. Any additional suggestions would also be appreciated. I know I can do most, if not all, of this using TikZ, but the learning curve is a bit steep, and at the moment I am on a time constraint. Thanks.
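    For reference, the specific polar region mentioned above (radius between 1 and 2, theta between pi/4 and 3 pi/4) takes only a few lines of TikZ; a minimal sketch:

        \documentclass{standalone}
        \usepackage{tikz}
        \begin{document}
        \begin{tikzpicture}
          % Annular sector: 1 <= r <= 2, 45 <= theta <= 135 degrees.
          \fill[gray!40] (45:1) arc (45:135:1) -- (135:2) arc (135:45:2) -- cycle;
          \draw (45:1) arc (45:135:1) -- (135:2) arc (135:45:2) -- cycle;
          % Axes for orientation.
          \draw[->] (-2.4,0) -- (2.4,0) node[right] {$x$};
          \draw[->] (0,-0.4) -- (0,2.4) node[above] {$y$};
        \end{tikzpicture}
        \end{document}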

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever.

    When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

    You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching.

    For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.

    The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

    So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle cache insertion, fetching and removal. Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we’ll create two implementations of this interface… one for the Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see the how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want.

    I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider.

    Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository. Let’s say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // ... more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.

    So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about when using different components like this, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
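    On that testing note, a minimal sketch (my addition, not from the post) of an in-memory ICacheProvider fake, so unit tests of the repo never touch Azure or HttpRuntime:

        using System.Collections.Generic;

        public class FakeCacheProvider : ICacheProvider
        {
            private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

            public void Add(string key, object item, int duration)
            {
                // Expiration is ignored; tests control the cache contents directly.
                _items[key] = item;
            }

            public T Get<T>(string key) where T : class
            {
                object item;
                return _items.TryGetValue(key, out item) ? item as T : null;
            }

            public void Remove(string key)
            {
                _items.Remove(key);
            }
        }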

    Read the article

  • DXperience v2010 vol 2 Review

    - by kulrom
    Hi guys, this is my second review of these controls, and it will be quite different from the first. A few months ago I was engaged by a client to develop a web-based application. He asked me (I guess they all ask the same) to make it good looking and attractive. The first thing I thought of when I heard this was "I must renew my DXperience subscription". And now I am glad I did that. Before I continue, I would like to say something to those readers who are totally new to DXperience. Guys, we all know that one of the more frustrating things for ASP.NET developers is designing a good-looking application. Well folks, your troubles are over! DXperience takes much of the agony out of developing and designing an outstanding web application. Here we go!...(read more)

    Read the article

  • Software-related but not programming-specific questions

    - by jayrdub
    I have often fought the urge to ask questions that I know aren't appropriate on SO, because I personally haven't come across another online group whose opinions I would trust as much. What sites do you frequent, with good participation from a smart group of people, where you can ask questions that are related to software but are not programming problems? This community also has a vast depth of knowledge about things related to software, like marketing, graphics/UI, running a small business, and working in bad jobs, that would greatly benefit everyone else. Where do we go to tap all that knowledge? On stackoverflow.uservoice.com there is a popular suggested feature to sanction, or add to SO, a place to hold discussions that aren't about specific programming questions. It seems that the suggestion has been denied in the past, though.

    Read the article

  • The Utilization of Software Engineering Development Principles

    - by Chance
    As a CS student, I've had to take a course in basic software engineering. I was a little curious to find such elaborate "software development processes", like the spiral model, the waterfall model, et cetera. Some of these methodologies seem a little antiquated to me, and after speaking with several employed developers, I can't seem to find anyone who actually adheres to these models. Does anyone here have experience working under the guidance of these models? Were they useful to you and your team during the development of your product? Or are these models just some way to communicate a sense of progression to interested parties outside of the development team?

    Read the article

  • Is Software Development a Part of IT

    - by kzh
    I am not sure if this question is in the scope of SO, but I will ask anyway... I am currently going to school, majoring in Computer Science. Often I overhear students in the Management of Information Systems major call themselves programmers. These comments make me angry for a few reasons: They seem to trivialize what I do. Most of them are not even capable of doing what I can do. To me, MIS is IT, and they are technicians; software developers are engineers and are not IT. So I guess my question is: is software development part of IT? I often see them lumped together.

    Read the article

  • What's the best Java-based forum software?

    - by SamS
    Somebody asked about .NET-based forum software here: http://stackoverflow.com/questions/11046/forum-software-recommandations-net Now I have a similar question but I prefer Java. I'm interested in building a community site so I'd like to know my options. It doesn't have to be free but it needs to be something a normal engineer can afford. Thanks! Update: I prefer a Java-based solution because I'm a Java developer and I figure that it would be easier for me to customize it or modify its code when necessary. I know that PHP forums such as phpBB and vBulletin are very popular but I wouldn't want to touch PHP code.

    Read the article

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Many times a lot of this previous work can be reused for another future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya.
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above.
    - Preferably cloud-based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone).
    - A nice interface that non-technically-savvy folks can warm up to ;)
    - A desktop app would be handy, so that people don't always have to click upload or something.

    A tree-based system is inefficient in this case, because content is usually linked across branches and also people might not quite agree on one format of a tree. Tagging works around this very nicely.

    What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes software great rather than just good or functional? What are some fundamental principles that can shift the way software is used and perceived? What are some of the little finishing touches that help put an application over the top?

    I'm in the later stages of developing a web app, and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make them special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use:

    - Keep the UI as simple as possible. Remove absolutely everything that isn't necessary.
    - Use progressive disclosure when more information can be needed sometimes but isn't needed all the time.
    - Provide inline help and useful error messages.
    - Verbs on buttons wherever possible.
    - Make anything that's clickable obvious.
    - Fast, responsive UI.
    - Accessibility (this is a work in progress).
    - Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places.
    - Intelligent default settings.
    - Auto-focusing forms when filling out the form is the primary action to be taken on the page.
    - Clear metaphors (like tabs) and headings indicating location within the app.
    - Automating repetitive tasks (with the ability to disable the automation).
    - Use standardized or accepted metaphors for icons (like an "x" for delete).
    - Larger text sizes for improved readability.
    - High contrast so that each section is distinct.
    - Making sure that it's obvious on every page what the user is supposed to do, by establishing a clear information hierarchy and drawing the eye to the call to action.
    - Most deletions can be undone.
    - Discoverability - make it easy to learn how to do new tasks.
    - Group similar elements together.

    Read the article
