Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 90/348

  • ASP.NET MVC static-asset aids/practices

    - by shannon
    I want to keep assets that are only used by one view in a view-specific folder, so that my Search.aspx reliably finds its images/*.jpg and I can maintain my convention: ~/Areas/Candidate/Views/Job/Search.aspx -> ~/Assets/Candidate/Job/Search/images/*.jpg. Ideally I'd also be able to reference controller- or area-common assets manually or automatically: ~/Assets/Candidate/Job/images/*.jpg, ~/Assets/Candidate/images/*.jpg. If you wonder why I'm doing this, then speak up; I'm probably missing something. But here's why: I don't want stale static assets sitting in my ASP.NET MVC projects, which I expect to be an automatic outcome of a shared ~/Assets/Images folder: as a shared asset loses its last reference, who knows to delete it, especially when it is so difficult to trace content-link validity in MVC projects? How do you, personally, do this? I can imagine, for example: implementing HtmlHelper extension methods for URL generation, extending ViewPage and ViewMasterPage with URL-generation methods, or implementing an inbound request filter that searches related folders for static assets. And are there good libraries out there for this? For example, something that also automatically appends timestamps to .JS and .CSS files, writes the script and link tags for me, and maybe even lets me inject includes into the head section from outside the head code?
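    One way to implement the HtmlHelper-extension idea is sketched below; it assumes a hypothetical Html.ViewAsset() helper and the ~/Assets/{area}/{controller}/{action}/ convention described above, so the names and lookups are illustrative rather than taken from the post.

        using System.Web;
        using System.Web.Mvc;

        public static class AssetHelpers
        {
            // Builds ~/Assets/{area}/{controller}/{action}/{relativePath} from the
            // current route data, so a view like Search.aspx can write
            // Html.ViewAsset("images/result.jpg") and stay inside its own asset folder.
            public static string ViewAsset(this HtmlHelper html, string relativePath)
            {
                var routeData = html.ViewContext.RouteData;
                string area = routeData.DataTokens["area"] as string ?? string.Empty;
                string controller = routeData.GetRequiredString("controller");
                string action = routeData.GetRequiredString("action");

                string virtualPath = string.Format("~/Assets/{0}/{1}/{2}/{3}",
                    area, controller, action, relativePath).Replace("//", "/");

                return VirtualPathUtility.ToAbsolute(virtualPath);
            }
        }

    A similar extension taking an explicit scope ("controller" or "area") could cover the shared-asset folders.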

    Read the article

  • Rails - asynchronous tasks, forked processes, best practices

    - by LisaPatton
    I'm using an Observer on my classes. When one of the records is created/updated I need to notify another service (via a URL call). What is the best way to do this without slowing down my class? Would using a gem like delayed_job be overkill? In my Observer's after_update() / after_create() I just want to launch a thread that calls the URL...

    Read the article

  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time consuming, we have started to use memcached to reduce the number of queries passed to MySQL. All in all, it works fine and really saves a lot of time. But because caching "silently appeared" as a workaround to give the app more juice, a lot of our models now contain code like this:

        def self.all_cached
          Rails.cache.fetch('object_name') { find(:all, :include => [associations]) }
        end

    This is becoming more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering whether there is a better way to abstract the memcached logic to make it more powerful and easy to use across all the models that need it? I was thinking about having some kind of memcached module which is included in all the models that need it. But before playing around, I thought: let's ask the experts first :-) Thanks Matt

    Read the article

  • Commenting practices?

    - by Tarmon
    Hey everyone, as a student in computer engineering I have been pressured to write very detailed comments for everything I do. I can see this being very useful for group projects or in the workplace, but when you work on your own projects, do you spend as much time commenting? As a personal project I am working on grows more complicated, I sometimes feel that I should be commenting more, but I also feel it's a waste of time since I will probably be the only one working on it. Is it worth the time and the cluttered code? Thoughts?

    Read the article

  • Good practices for initialising properties?

    - by Rubans
    Hi, I have a class property that is a list of strings (a List<string>). Sometimes this property is null; other times it has been set but the list is empty, so its count is 0. Elsewhere in my code I need to check whether this property is set, so currently my code checks both that it isn't null and whether the count is 0, which seems messy:

        if (objectA.folders != null)
        {
            if (objectA.folders.Count == 0)
            {
                // do something
            }
        }

    Any recommendation on how this should be handled? Maybe I should always initialise the property so that it's never null? Apologies if this is a silly question.
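    One common recommendation, sketched below purely as an illustration (the ObjectA/Folders names are stand-ins): initialise the list in the constructor so it is never null, and then a single count check (or Any()) is all callers ever need.

        using System.Collections.Generic;
        using System.Linq;

        public class ObjectA
        {
            public List<string> Folders { get; private set; }

            public ObjectA()
            {
                // Never null: callers get a (possibly empty) list they can always enumerate.
                Folders = new List<string>();
            }
        }

        // Caller side: one check instead of a null test plus a count test.
        // if (objectA.Folders.Any()) { /* the property is "set" */ }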

    Read the article

  • Best practices for caching results of JSP pages?

    - by Spines
    My application has an MVC structure. How should I structure my application to allow for maximum caching? Is it sufficient to only cache the model objects that are passed to the JSP views? Or will there be a significant performance boost from caching the results of the rendering of the JSP views too?

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious about any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

        Inventory item types:
        - Hardware
          - PC
            - Desktop
            - Server
            - Laptop
          - Accessory
            - Input (keyboards, scanners, etc.)
            - Output (monitors, printers, etc.)
            - Storage (USB sticks, tape drives, etc.)
            - Communication (network cards, routers, etc.)
        - Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). Nor do I want to over-complicate my data model by giving each subtype its own inherited type even though no additional properties or methods are involved (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this but I can't put my finger on it. Any assistance or input appreciated!
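    One lightweight possibility, offered only as a sketch and not an EF4-specific recommendation: store a single flat enum of leaf types in the database and encode the hierarchy in code with a static lookup, so no extra tables are needed. All names below are illustrative.

        using System.Collections.Generic;

        public enum InventoryItemType
        {
            Desktop, Server, Laptop,                  // Hardware > PC
            Input, Output, Storage, Communication,    // Hardware > Accessory
            Software
        }

        public static class InventoryItemTypes
        {
            // Parent category path for each leaf value; empty string means top level.
            private static readonly Dictionary<InventoryItemType, string> Parents =
                new Dictionary<InventoryItemType, string>
                {
                    { InventoryItemType.Desktop, "Hardware/PC" },
                    { InventoryItemType.Server, "Hardware/PC" },
                    { InventoryItemType.Laptop, "Hardware/PC" },
                    { InventoryItemType.Input, "Hardware/Accessory" },
                    { InventoryItemType.Output, "Hardware/Accessory" },
                    { InventoryItemType.Storage, "Hardware/Accessory" },
                    { InventoryItemType.Communication, "Hardware/Accessory" },
                    { InventoryItemType.Software, "" }
                };

            public static bool IsHardware(this InventoryItemType type)
            {
                return Parents[type].StartsWith("Hardware");
            }
        }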

    Read the article

  • what pattern to use for multi-argument method?

    - by Omid S
    I have a method with the following signature: foo(Sample sample, Aliquot aliquot). foo needs to alter a Sample object: either the first argument directly, or the Sample it can extract from the second argument. For example:

        foo(Sample sample, Aliquot aliquot) {
            Sample out = null;
            if (sample != null)
                out = sample;
            else
                out = aliquot.getSample();
            return out;
        }

    But that is inelegant: short of reading the API docs, a developer does not know that the first argument takes precedence over the Sample of the second argument. Now, I could change foo to foo(SomeMagicalObject bar), where SomeMagicalObject is a tuple of Sample and Aliquot that holds some logic... etc. But I am wondering, are there patterns for this kind of question?
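    One frequently suggested alternative is plain method overloading, so each signature names exactly one source of the Sample. A C# sketch of that idea, with Sample and Aliquot reduced to stand-in types:

        public class Sample { /* ... */ }

        public class Aliquot
        {
            public Sample Sample { get; set; }
        }

        public class SampleProcessor
        {
            // Each overload states exactly one source of the Sample,
            // so callers never wonder which argument wins.
            public void Foo(Sample sample)
            {
                // alter the sample...
            }

            public void Foo(Aliquot aliquot)
            {
                Foo(aliquot.Sample);
            }
        }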

    Read the article

  • Best Practices with MVC Route

    - by codemnky
    If this has been asked before, just point me in the right direction. I am an OO and MVC newbie. I am following along with the MVC Storefront series (a little outdated now), where they talk about routes and adding them to global.asax.cs. My question is this: wouldn't it be better if only one route were defined and everything after that were done programmatically? I don't want the user to navigate using the address bar. Thank you.
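    For reference, a minimal sketch of the conventional single default route that ASP.NET MVC projects of that era register in global.asax.cs; even if users never type URLs by hand, this one convention is what maps each request to a controller action:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // One catch-all convention: /{controller}/{action}/{id}
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = "" });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }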

    Read the article

  • Create UML diagrams after or before coding?

    - by ajsie
    I can clearly see the benefits of having UML diagrams showing the structure of an application (class names, their members, how they communicate with each other, etc.). I'm starting a new project right now and have already structured the database (with Visual Paradigm). I want to use some design patterns to guide how I code the classes. I wonder: should I code the classes first and then create the UML diagram from them (generating it out of the code seems possible), or should I first create the UML diagram and then code (or generate code from the UML, which also seems possible)? What do your experiences tell you is the best way?

    Read the article

  • Php PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a special-interest social networking community currently in beta. I've started moving my old object-retrieval code to factory methods. However, I do feel I'm limiting myself by keeping a lot of SQL table names and structure scattered across each function/method. Questions:
    1. Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2. Can PEAR interface with the MySqli prepared statements I currently use?
    3. Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
    4. Will it slow down my site once I have a significantly large member base?

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has four SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin who promotes to UAT. After successful user testing, the same admin promotes to Production.
    The Problem: The entire process is awkward for a few reasons. Each person must manually track their changes: if I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time anyway. Many of the changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes, at least count-wise, are minor: changing the data type of a column, renaming tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.
    The Question: People have been doing this kind of work for decades, so I imagine there has to be a much better way to manage the process. What I would love is to run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • C# myths about best practices?

    - by TheMachineCharmer
    My colleague keeps telling me the things listed in the comments below. I am confused. Can somebody please demystify these claims for me?

        class Bar
        {
            private int _a;
            public int A
            {
                get { return _a; }
                set { _a = value; }
            }

            private Foo _objfoo;
            public Foo OFoo
            {
                get { return _objfoo; }
                set { _objfoo = value; }
            }

            public Bar(int a, Foo foo)
            {
                // claim: "this is a bad idea"
                A = a;
                OFoo = foo;
            }

            // MYTHS
            private void Method()
            {
                // 1 - this.A
                // 2 - this._a  ("use this when inside the class", e.g. if (this._a == 2))
                // 3 - A        ("use this outside the class", e.g. barObj.A)
                // 4 - _a
                // "Not using this.xxx creates threading issues."
            }
        }

        class Foo
        {
            // implementation
        }
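    For context, a short sketch of the more idiomatic C# 3.0+ form of the same class; whether you write A or this.A inside the class is purely a readability/disambiguation choice, and it has no effect on threading. Foo remains a stand-in type.

        public class Foo
        {
            // implementation
        }

        public class Bar
        {
            // Auto-implemented properties replace the hand-written backing fields.
            public int A { get; set; }
            public Foo OFoo { get; set; }

            public Bar(int a, Foo foo)
            {
                // Assigning through the properties here is fine; "this." is only
                // required when a parameter or local name hides a member.
                A = a;
                OFoo = foo;
            }
        }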

    Read the article

  • how to: handle exceptions, best practices

    - by b0x0rz
    I need to implement global error handling, so maybe you can help out with the following example. I have this code:

        public bool IsUserAuthorizedToSignIn(string userEMailAddress, string userPassword)
        {
            // get MD5 hash for use in the LINQ query
            string passwordSaltedHash = this.PasswordSaltedHash(userEMailAddress, userPassword);

            // check for email / password / validity
            using (UserManagementDataContext context = new UserManagementDataContext())
            {
                var users = from u in context.Users
                            where u.UserEMailAdresses.Any(e => e.EMailAddress == userEMailAddress) &&
                                  u.UserPasswords.Any(p => p.PasswordSaltedHash == passwordSaltedHash) &&
                                  u.IsActive == true
                            select u;

                // true if user found
                return (users.Count() == 1) ? true : false;
            }
        }

    and the MD5 helper as well:

        private string PasswordSaltedHash(string userEMailAddress, string userPassword)
        {
            MD5 hasher = MD5.Create();
            byte[] data = hasher.ComputeHash(Encoding.Default.GetBytes(userPassword + userEMailAddress));

            StringBuilder stringBuilder = new StringBuilder();
            for (int i = 0; i < data.Length; i++)
            {
                stringBuilder.Append(data[i].ToString("x2"));
            }

            Trace.WriteLine(String.Empty);
            Trace.WriteLine("hash: " + stringBuilder.ToString());

            return stringBuilder.ToString();
        }

    So, how would I go about handling exceptions from these functions? The first one is called from the Default.aspx page; the second one is only called by other functions in the class library. What is the best practice: surround the code inside each function with try-catch, surround the function call with try-catch, or something else? And what should happen when an exception does occur? In this example it's a user sign-in, so even if everything fails the user should get some meaningful feedback, along the lines of: sign-in OK (just redirect), sign-in not OK (wrong user name / password), or sign-in not possible due to internal problems, sorry (an exception happened). For the first function I am worried about problems with database access; I'm not sure anything needs to be handled in the second one. Thanks for the info; how would you do it? I need specific advice on this example (easier for me to understand), but also general advice on how to handle other tasks/functions. I looked around the internet but everyone says something different, so I'm unsure what to do... I'll go with either the most votes here or the most logically explained answer :) Thank you.
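    One widely used approach, offered here only as a sketch and not as the accepted answer: let the library methods throw, and translate exceptions into user-facing outcomes at the boundary that talks to the user (the sign-in page's code-behind). The SignInStatus enum, the UserSecurity wrapper class, and catching SqlException specifically are all illustrative assumptions.

        using System.Data.SqlClient;
        using System.Diagnostics;

        public enum SignInStatus { Success, InvalidCredentials, InternalError }

        public class SignInService
        {
            // Hypothetical class holding the two methods from the post.
            private readonly UserSecurity security = new UserSecurity();

            public SignInStatus TrySignIn(string email, string password)
            {
                try
                {
                    return security.IsUserAuthorizedToSignIn(email, password)
                        ? SignInStatus.Success
                        : SignInStatus.InvalidCredentials;
                }
                catch (SqlException ex)
                {
                    // Database unreachable, timeout, etc.: log it and show a generic "sorry" message.
                    Trace.WriteLine(ex);
                    return SignInStatus.InternalError;
                }
            }
        }

    The page then switches on the returned status to redirect, show "wrong user name / password", or show the internal-error message.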

    Read the article

  • Best practices for resending to hard bounced emails after X days

    - by Vivian Hsu
    If I see an email returned due to a hard bounce, after how many days is it acceptable to resend to that email address. It is possible for emails to be reactivated or for temporary outages, so it doesn't make sense to keep an email in my hard bounce email list forever. I've already seen cases where I receive emails from addresses that were put in my hard bounce email list months ago. Any recommendations? Are there specific recommendations from ISPs?

    Read the article

  • Which design pattern should I be using?

    - by Gabriel
    Here's briefly what I'm trying to do. The user supplies me with a link to a photo from one of several photo-sharing websites (such as Flickr, Zooomr, et al.). I then do some processing on the photo using their respective APIs. Right now I'm only implementing one service, but I will most likely add more in the near future. I don't want a bunch of if/else or switch statements to define the logic for the different websites (but maybe that's necessary?). I'd rather just call GetImage(url) and have it get me the image from whatever service the URL's domain belongs to. I'm confused about how the GetImage function and its supporting classes should be designed. Maybe I need the strategy pattern? I'm still reading up on the various design patterns and trying to understand how one could fit this case. I'm doing this in C#, but this question is language-agnostic.
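    A minimal sketch of the strategy idea in C#, with a registry keyed by host name replacing the if/else chain; FlickrImageService and the byte[] return type are illustrative assumptions, not part of the original question.

        using System;
        using System.Collections.Generic;

        public interface IImageService
        {
            byte[] GetImage(Uri url);   // fetch the photo via the provider's API
        }

        public class FlickrImageService : IImageService
        {
            public byte[] GetImage(Uri url) { /* call the Flickr API here */ return new byte[0]; }
        }

        public static class ImageServices
        {
            // Map a host name to the strategy that knows how to talk to it.
            private static readonly Dictionary<string, IImageService> services =
                new Dictionary<string, IImageService>(StringComparer.OrdinalIgnoreCase)
                {
                    { "www.flickr.com", new FlickrImageService() },
                    // { "www.zooomr.com", new ZooomrImageService() },  // added later
                };

            public static byte[] GetImage(string url)
            {
                var uri = new Uri(url);
                IImageService service;
                if (!services.TryGetValue(uri.Host, out service))
                    throw new NotSupportedException("No image service registered for " + uri.Host);
                return service.GetImage(uri);
            }
        }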

    Read the article

  • Linq2Sql Best Practices

    - by this. __curious_geek
    I'm currently migrating to Linq2Sql, and all my future projects will use it. Having said that, I have researched a lot on how to properly plug Linq2Sql into an application's design. What goes in which layer? How do you design your repositories and business-layer services? Should I use DTOs instead of Linq2Sql entities at the interaction layer? What should I be careful about? What problems did you face? I did not find any rock-solid material that settled on one approach, and everyone has their own opinion. I'm looking forward to your ideas on how to integrate and use Linq2Sql in projects. My priorities are maintainability (it should stay maintainable when multiple people work on the same project) and scalability (it should have room to evolve). Thanks.
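    A bare-bones sketch of the repository shape that often comes up in answers to this kind of question; the Customer entity, its columns, and the StoreDataContext are placeholder names, not from the post.

        using System.Collections.Generic;
        using System.Linq;

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IList<Customer> GetActive();
            void Add(Customer customer);
            void Save();
        }

        // Thin wrapper so the business layer never touches the DataContext directly;
        // swapping Linq2Sql out later only means re-implementing this interface.
        public class CustomerRepository : ICustomerRepository
        {
            private readonly StoreDataContext db = new StoreDataContext();

            public Customer GetById(int id)
            {
                return db.Customers.SingleOrDefault(c => c.CustomerId == id);
            }

            public IList<Customer> GetActive()
            {
                return db.Customers.Where(c => c.IsActive).ToList();
            }

            public void Add(Customer customer)
            {
                db.Customers.InsertOnSubmit(customer);
            }

            public void Save()
            {
                db.SubmitChanges();
            }
        }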

    Read the article

  • C# Accepting sockets in async fashion - best practices

    - by psulek
    What is the best way to accept new sockets asynchronously? First way:

        while (!abort && listener.Server.IsBound)
        {
            acceptedSocketEvent.Reset();
            listener.BeginAcceptSocket(AcceptConnection, null);
            bool signaled = false;
            do
            {
                signaled = acceptedSocketEvent.WaitOne(1000, false);
            } while (!signaled && !abort && listener.Server.IsBound);
        }

    where AcceptConnection would be:

        private void AcceptConnection(IAsyncResult ar)
        {
            // Signal the main thread to continue.
            acceptedSocketEvent.Set();
            Socket socket = listener.EndAcceptSocket(ar);
            // continue to receive data and so on...
        }

    Or the second way:

        listener.BeginAcceptSocket(AcceptConnection, null);
        while (!abort && listener.Server.IsBound)
        {
            Thread.Sleep(500);
        }

    and AcceptConnection would be:

        private void AcceptConnection(IAsyncResult ar)
        {
            Socket socket = listener.EndAcceptSocket(ar);
            // begin accepting the next socket
            listener.BeginAcceptSocket(AcceptConnection, null);
            // continue to receive data and so on...
        }

    Which way do you prefer, and why?

    Read the article

  • Taming the malloc/free beast -- tips & tricks

    - by roufamatic
    I've been using C on some projects for a master's degree but have never built production software with it. (.NET and JavaScript are my bread and butter.) Obviously, the need to free() memory that you malloc() is critical in C. That's all fine and well if you can do both in one routine. But as programs grow and structs deepen, keeping track of what's been malloc'd where, and what's appropriate to free, gets harder and harder. I've looked around on the interwebs and only found a few generic recommendations for this. What I suspect is that some of you long-time C coders have come up with your own patterns and practices to simplify this process and keep the evil in front of you. So: how do you recommend structuring your C programs to keep dynamic allocations from becoming memory leaks?

    Read the article

  • abstract method signature, inheritance, and "Do" naming convention

    - by T. Webster
    I'm learning about design patterns, and in example code I've seen a convention where the abstract class declares a pair of methods, for example:

        public abstract class ServiceBase
        {
            ...
            public virtual object GetSomething();

    and then

            protected abstract object DoGetSomething();
        }

    My question is why both methods exist, since they appear to serve the same purpose. Is this so that the base class's GetSomething() logic cannot be overridden by inherited classes? But then again, the method is marked virtual, so it can be overridden anyway. What is the usefulness of requiring derived-class implementers to implement the abstract method when the virtual method can be overridden anyway?
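    This is usually the Template Method idiom. A small sketch of how it typically looks (the names and the FileService subclass are generic illustrations; in many variants the public member is non-virtual precisely so the surrounding logic stays fixed):

        public abstract class ServiceBase
        {
            // Public entry point: the invariant pre/post logic lives here, once.
            public object GetSomething()
            {
                // e.g. validate state, log, open a connection...
                object result = DoGetSomething();   // the step that varies per subclass
                // e.g. cache the result, log completion...
                return result;
            }

            // Derived classes supply only the variable step.
            protected abstract object DoGetSomething();
        }

        public class FileService : ServiceBase
        {
            protected override object DoGetSomething()
            {
                return "something loaded from a file";
            }
        }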

    Read the article

  • Best practices for using Amazon SQS - Polling the queue

    - by alex
    I'm designing a service for sending out emails for our eCommerce site (order confirmations, alerts, etc.). The plan is to have a "SendEmail" method that generates a chunk of XML representing the email to be sent and sticks it on an Amazon SQS queue. My web app(s) and other applications will use this to "send" emails. I then need a way of checking the queue and physically sending out the email messages (I already know how I'll be dispatching the emails). I'm curious what the best way to "poll" the queue would be. Should I create a Windows service and use something like Quartz.net to schedule it to check the queue every x minutes, for example? Is there a better way of doing this?
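    A sketch of the plain polling-loop option, assuming a thin IEmailQueue wrapper around the SQS client (the wrapper and all names here are hypothetical; the real calls would go through the AWS SDK's ReceiveMessage/DeleteMessage operations):

        using System;
        using System.Threading;

        // Hypothetical wrapper over the SQS ReceiveMessage/DeleteMessage operations.
        public interface IEmailQueue
        {
            string ReceiveNextMessage();        // returns null when the queue is empty
            void DeleteLastReceivedMessage();
        }

        public class EmailDispatcher
        {
            private readonly IEmailQueue queue;
            private volatile bool stopping;

            public EmailDispatcher(IEmailQueue queue) { this.queue = queue; }

            // Run from the Windows service's worker thread (or a Quartz.NET job).
            public void PollLoop()
            {
                while (!stopping)
                {
                    string xml = queue.ReceiveNextMessage();
                    if (xml == null)
                    {
                        Thread.Sleep(TimeSpan.FromSeconds(30));   // queue empty: back off
                        continue;
                    }

                    SendEmailFromXml(xml);                  // parse the XML and dispatch the email
                    queue.DeleteLastReceivedMessage();      // delete only after a successful send
                }
            }

            public void Stop() { stopping = true; }

            private void SendEmailFromXml(string xml) { /* SMTP dispatch elided */ }
        }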

    Read the article

  • What is the difference between MVC model 1 and model 2?

    - by Alex Ciminian
    I've recently discovered that MVC is supposed to have two different flavors, model one and model two. I'm supposed to give a presentation on MVC1, and I was instructed that "it's not the web-based version, that is referred to as MVC2". As the presentations are about design patterns in general, I doubt that this separation is related to Java (I found some info on Sun's site, but it seemed far off) or ASP. I have a pretty good understanding of what MVC is and I've used several (web) frameworks that enforce it, but this terminology is new to me. How is the web-based version different from other MVC (I'm guessing GUI) implementations? Does it have something to do with the stateless nature of HTTP? Thanks, Alex

    Read the article

  • Best practices managing JavaScript on a single-page app

    - by seanmonstar
    With a single-page app, where I change the hash and load and change only the content of the page, I'm trying to decide how to manage the JavaScript that each "page" might need. I've already got a History module monitoring the location hash, which could look like domain.com/#/company/about, and a Page class that will use XHR to get the content and insert it into the content area.

        function onHashChange(hash) {
            var skipCache = false;
            if (hash in noCacheList) {
                skipCache = true;
            }
            new Page(hash, skipCache).insert();
        }

        // Page.js
        var _pageCache = {};
        function Page(url, skipCache) {
            if (!skipCache && (url in _pageCache)) {
                return _pageCache[url];
            }
            this.url = url;
            this.load();
        }

    The cache should let pages that have already been loaded skip the XHR. I also store the content in a documentFragment, and pull the current content out of the document when I insert the new Page, so the browser only has to build the DOM for the fragment once. Skipping the cache may be desired if the page has time-sensitive data.
    Here's what I need help deciding on: it's very likely that any of the pages that get loaded will have some of their own JavaScript to control the page, say if the page uses tabs, needs a slide show, has some sort of animation, has an ajax form, or what-have-you. What exactly is the best way to load that JavaScript into the page? Include the script tags in the documentFragment I get back from the XHR? What if I need to skip the cache and re-download the fragment? I worry that the exact same JavaScript being loaded a second time might cause conflicts, like redeclaring the same variables. Would it be better to attach the scripts to the head when grabbing the new Page? That would require the original page to know all the assets that every other page might need. And besides the best way to include everything, won't I need to worry about memory management, and possible leaks from loading so many different JavaScript bits into a single page instance?

    Read the article

  • Best practices for JQuery namespaces + general purpose utility functions

    - by Armchair Bronco
    What are some current "rules of thumb" for implementing JQuery namespaces to host general purpose utility functions? I have a number of JavaScript utility methods scattered in various files that I'd like to consolidate into one (or more) namespaces. What's the best way to do this? I'm currently looking at two different syntaxes, listed in order of preference:

        //******************************
        // JQuery Namespace syntax #1
        //******************************
        if (typeof(MyNamespace) === "undefined") {
            MyNamespace = {};
        }

        MyNamespace.SayHello = function () {
            alert("Hello from MyNamespace!");
        }

        MyNamespace.AddEmUp = function (a, b) {
            return a + b;
        }

        //******************************
        // JQuery Namespace syntax #2
        //******************************
        if (typeof (MyNamespace2) === "undefined") {
            MyNamespace2 = {
                SayHello: function () {
                    alert("Hello from MyNamespace2!");
                },
                AddEmUp: function (a, b) {
                    return a + b;
                }
            };
        }

    Syntax #1 is more verbose but it seems like it would be easier to maintain down the road. I don't need to add commas between methods, and I can left align all my functions. Are there other, better ways to do this?

    Read the article
