Search Results

Search found 10115 results on 405 pages for 'coding practices'.

  • Where should my "filtering" logic reside with LINQ to SQL and ASP.NET MVC: in the View or the Controller?

    - by Nate Bross
    I have a main table with several "child" tables: TableA, plus TableAChild1 and TableAChild2. I have a view that shows the information in TableA, and then two columns listing all items in TableAChild1 and TableAChild2 respectively; both are rendered with partial views. Both child tables have a bit field VisibleToAll, and depending on the user's role I'd like to display either all related rows, or only the rows where VisibleToAll = true. This logic feels like it should be in the controller, but I'm not sure what that would look like, because as it stands the controller (trimmed version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work? And would it be dangerous: if my DataContext gets submitted, would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>
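
    A minimal sketch of one common resolution, under stated assumptions: filter in the controller but project into a read-only view model rather than reassigning the entity's child collections, so the DataContext never sees the filtered-out rows as removed. TableADetailViewModel and the role check are invented names for illustration, not from the original post.

        // Sketch only: hand the view an already-filtered, detached model.
        public ActionResult Details(int id)
        {
            bool canSeeHidden = User.IsInRole("Admin"); // hypothetical role check
            var tblA = repos.GetTableA(id);
            var model = new TableADetailViewModel // hypothetical view model type
            {
                TableA = tblA,
                Child1 = tblA.TableAChild1.Where(c => canSeeHidden || c.VisibleToAll).ToList(),
                Child2 = tblA.TableAChild2.Where(c => canSeeHidden || c.VisibleToAll).ToList()
            };
            return View("TableADetailView", model);
        }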

    Read the article

  • Approaching Java from a Ruby perspective

    - by Travis
    There are plenty of resources available to a Java developer for getting a jump-start into Ruby/Rails development. The reverse doesn't appear to be true. What resources would you suggest for getting up to date on the current state of Java technologies? How about learning to approach DRY (don't repeat yourself) without the use of metaprogramming? Or how to approach the various scenarios where a Ruby developer is used to passing in a function (proc/lambda/block) as an argument (callbacks, etc.)?

    Read the article

  • Documentation style: how do you differentiate variable names from the rest of the text within a comment?

    - by Alix
    Hi, this is a quite superfluous and uninteresting question, I'm afraid, but I always wonder about this. When you're commenting code with inline comments (as opposed to comments that will appear in the generated documentation) and the name of a variable appears in the comment, how do you differentiate it from normal text? E.g.:

        // Try to parse type.
        parsedType = tryParse(type);

    In the comment, "type" is the name of the variable. Do you mark it in any way to signify that it's a symbol and not just part of the comment's text? I've seen things like this:

        // Try to parse "type".
        // Try to parse 'type'.
        // Try to parse *type*.
        // Try to parse <type>.
        // Try to parse [type].

    And also:

        // Try to parse variable type.

    (I don't think the last one is very helpful; it's a bit confusing, since you could think "variable" is an adjective there.) Do you have any preference? I find that I need to use some kind of marker; otherwise the comments are sometimes ambiguous, or at least force you to reread them when you realise a particular word in the comment was actually the name of a variable. (In comments that will appear in the documentation I use the appropriate tags for the generator, of course: @code, <code></code>, etc.) Thanks!

    Read the article

  • How to work with a common utils project?

    - by ais
    For example, I have a project Common.Utils.csproj and use it in all other projects. I can keep its (Utils) sources in one repository and modify them only there, register the DLL in the GAC, and use it as a DLL in the other projects; or I can clone the source wherever I need it, include the project in each solution, use it as source, and push modifications back. So, what is the best practice?

    Read the article

  • How to avoid writing a lot of repetitive code when mapping?

    - by JPCF
    I have a data access layer (DAL) using Entity Framework, and I want to use AutoMapper to communicate with the upper layers. I will have to map data transfer objects (DTOs) to entities as the first operation in every method, process my inputs, then map back from entities to DTOs. What would you do to skip writing this code? As an example, see this:

        // This is a common method in my DAL
        public CarDTO getCarByOwnerAndCreditStatus(OwnerDTO ownerDto, CreditDTO creditDto)
        {
            // I want to automate this code in all methods similar to this
            Mapper.CreateMap<OwnerDTO, Owner>();
            Mapper.CreateMap<CreditDTO, Credit>();
            Owner owner = Mapper.Map<Owner>(ownerDto);
            Credit credit = Mapper.Map<Credit>(creditDto);

            // ... some code processing the mapped objects ...

            // I want to automate this code in all methods similar to this
            Mapper.CreateMap<Car, CarDTO>();
            CarDTO carDto = Mapper.Map<CarDTO>(ownedCar);
            return carDto;
        }
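
    A hedged sketch of the usual fix, assuming the classic static AutoMapper API the snippet uses: declare every map once in a Profile registered at startup, so each DAL method only calls Mapper.Map.

        // Sketch: one-time map registration (names are illustrative).
        public class DalMappingProfile : Profile
        {
            protected override void Configure()
            {
                CreateMap<OwnerDTO, Owner>();
                CreateMap<CreditDTO, Credit>();
                CreateMap<Car, CarDTO>();
            }
        }

        // At application startup:
        Mapper.Initialize(cfg => cfg.AddProfile<DalMappingProfile>());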

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder; yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, more than 30 of them! Which concerns me, though perhaps it shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering they're compressed, compiled code, so it seems they could quickly fill a lot of RAM. My questions are:

    Does Java load an entire .jar file into memory when, for instance, a class in that .jar is instantiated? What about the stuff in the .jar that never gets used?

    Do .jars get cached somehow, for better application performance?

    When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request. Is this assumption correct?

    When using Spring, I had to include all those fiddly .jars; wouldn't I be better off just using plain Java with, say, at least an ORM solution like Hibernate? So far, Spring has taken extra time to configure and extra hard drive space, memory and CPU, so I'm concerned the framework will cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There's still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • When to rewrite vs. upgrade?

    - by MrGumbe
    All custom legacy software needs changing, or so say our users. Sometimes they want a feature or two added, and all that's necessary is to change a bit of code, add a control, or do some other minor upgrade task. Sometimes they want to ditch their error-prone VB5 desktop solution and rewrite the whole thing as a rich Web 2.0 ASP.NET MVC application. More often, however, the scope of changes to legacy functionality lies somewhere between these two extremes. What rules of thumb do you use to decide whether you should upgrade an existing application or start from scratch?

    Read the article

  • Elegant solution for line-breaks (PHP)

    - by Nimbuz
    $var = "Hi there"."<br/>"."Welcome to my website"."<br/>;" echo $var; Is there an elegant way to handle line-breaks in PHP? I'm not sure about other languages, but C++ has eol so something thats more readable and elegant to use? Thanks

    Read the article

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom, managed to get myself a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of HTTP, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with.

    When I changed to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words, I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design, along the lines of the classic "car" class example that's so often used to teach OO: breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET, started using it far more in the manner it should be used, and began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3.

    In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture modelled on a lot of ideas that are new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use NHibernate a lot too. Shortly after I joined, both these guys left, and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no one to ask questions of. Except you, the internet.

    Frankly, I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great; my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try to learn it. So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material that simply presupposes a level of understanding I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!

    Read the article

  • Best practice for a for loop's condition

    - by guest
    What is considered best practice in this case?

        for (i = 0; i < array.length(); ++i)

    or

        for (i = array.length(); i > 0; --i)

    Assume I don't want to iterate in a particular direction, but rather just over the full length of the array, and that I don't plan to alter the array's size in the loop body. So, will the array.length() call be treated as a constant during compilation? If not, the second approach should be the one to go for.
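
    For illustration, a small hedged sketch (in C#, where the equivalent is the array's Length property): if you don't want to rely on the compiler hoisting the bound, you can hoist it yourself and keep the forward iteration order.

        using System;

        class LoopDemo
        {
            static void Main()
            {
                int[] array = { 3, 1, 4, 1, 5 };
                int n = array.Length; // length is read once, before the loop
                for (int i = 0; i < n; ++i)
                    Console.WriteLine(array[i]);
            }
        }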

    Read the article

  • Is it weird or strange to make multiple WCF calls to build a ViewModel before presenting it?

    - by Nate Bross
    Am I doing something wrong if I need code like this in a controller? Should I be doing something differently?

        public ActionResult Details(int id)
        {
            var svc = new ServiceClient();
            var model = new MyViewModel();
            model.ObjectA = svc.GetObjectA(id);
            model.ObjectB = svc.GetObjectB(id);
            model.ObjectC = svc.GetObjectC(id);
            return View(model);
        }

    The reason I ask is that I've got LINQ to SQL on the back end and a WCF service which exposes functionality through a set of DTOs which are NOT the LINQ to SQL generated classes and thus do not have the parent/child properties; but in the detail view, I would like to see some of the parent/child data.
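
    One commonly suggested alternative, sketched here with invented names (GetDetails and DetailsDto are not from the post): expose a single coarse-grained service operation that returns a composite DTO, so the page costs one round trip instead of three.

        // Sketch: a composite DTO assembled server-side in one WCF call.
        [DataContract]
        public class DetailsDto
        {
            [DataMember] public ObjectA ObjectA { get; set; }
            [DataMember] public ObjectB ObjectB { get; set; }
            [DataMember] public ObjectC ObjectC { get; set; }
        }

        public ActionResult Details(int id)
        {
            var svc = new ServiceClient();
            DetailsDto dto = svc.GetDetails(id); // hypothetical composite operation
            var model = new MyViewModel
            {
                ObjectA = dto.ObjectA,
                ObjectB = dto.ObjectB,
                ObjectC = dto.ObjectC
            };
            return View(model);
        }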

    Read the article

  • MapReduce programming system in Java/ActionScript

    - by eco_bach
    I just finished reading chapter 23 of the excellent 'Beautiful Code' (http://oreilly.com/catalog/9780596510046), on distributed programming with MapReduce. I understand that MapReduce is a programming system designed for large-scale data processing problems, but I have a hard time getting my head around the basic examples given and how I might apply them in real-world situations. Can someone give a simple example of MapReduce implemented using either Java, JavaScript or ActionScript?
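
    The question asks for Java, JavaScript or ActionScript; purely as a language-neutral illustration (written in C#, like the other examples on this page), here is the shape of MapReduce on the classic word-count problem, collapsed onto a single machine: map emits (word, 1) pairs, the pairs are grouped by key, and reduce sums each group.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WordCount
        {
            // Map phase: turn one input record (a line) into (key, value) pairs.
            static IEnumerable<KeyValuePair<string, int>> Map(string line) =>
                line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(w => new KeyValuePair<string, int>(w.ToLower(), 1));

            // Reduce phase: combine all values that share one key.
            static int Reduce(string word, IEnumerable<int> counts) => counts.Sum();

            static void Main()
            {
                string[] docs = { "the cat sat", "the cat ran" };
                var results = docs.SelectMany(Map)                   // map
                                  .GroupBy(p => p.Key, p => p.Value) // shuffle/group
                                  .Select(g => new { Word = g.Key, Count = Reduce(g.Key, g) });
                foreach (var r in results)
                    Console.WriteLine(r.Word + ": " + r.Count); // the: 2, cat: 2, ...
            }
        }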

    Read the article

  • Best practice for debug asserts during unit testing

    - by Steve Steiner
    Does heavy use of unit tests discourage the use of debug asserts? It seems like a debug assert firing in the code under test implies the unit test shouldn't exist or the debug assert shouldn't exist. "There can be only one" seems like a reasonable principle. Is this the common practice? Or do you disable your debug asserts when unit testing, so they can be around for integration testing? Edit: I updated 'assert' to 'debug assert' to distinguish an assert in the code under test from the lines in the unit test that check state after the test has run. Also, here is an example that I believe shows the dilemma: a unit test passes invalid inputs to a protected function that asserts its inputs are valid. Should the unit test not exist? It's not a public function. Perhaps checking the inputs would kill performance? Or should the assert not exist? The function is protected, not private, so it should be checking its inputs for safety.
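
    A minimal sketch of the dilemma being described, with invented names (Debug.Assert is the standard System.Diagnostics call):

        using System.Diagnostics;

        public class Parser
        {
            // Protected member: guarded by a debug assert instead of a
            // runtime check, to avoid paying for validation in Release.
            protected int ParsePositive(string s)
            {
                Debug.Assert(!string.IsNullOrEmpty(s), "input must be non-empty");
                return int.Parse(s);
            }
        }

    A unit test that deliberately feeds ParsePositive an empty string will trip the assert in Debug builds, so the test and the assert are fighting over who owns the invalid-input case.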

    Read the article

  • How to differentiate between exceptions I can show the user, and ones I can't?

    - by Ian Boyd
    I have some business logic that traps some logically invalid situations, e.g. trying to reverse a transaction that was already reversed. In these cases the correct action is to inform the user, with messages such as:

        Transaction already reversed
        Cannot reverse a reversing transaction
        You do not have permission to reverse transactions
        This transaction is on a session that has already been closed
        This transaction is too old to be reversed

    The question is: how do I communicate these exceptional cases back to the calling code, so it can show them to the user? Do I create a separate exception for each case?

        catch (ETransactionAlreadyReversedException)
            MessageBox.Show('Transaction already reversed')
        catch (EReversingAReversingTransactionException)
            MessageBox.Show('Cannot reverse a reversing transaction')
        catch (ENoPermissionToReverseTransactionException)
            MessageBox.Show('You do not have permission to reverse transactions')
        catch (ECannotReverseTransactionOnAlreadyClosedSessionException)
            MessageBox.Show('This transaction is on a session that has already been closed')
        catch (ECannotReverseTooOldTransactionException)
            MessageBox.Show('This transaction is too old to be reversed')

    The downside of this is that when there's a new logical case to show the user:

        Transactions created by NSL cannot be reversed

    I don't simply show the user a message; instead it leaks out as an unhandled exception, when really it should be handled with another MessageBox. The alternative is to create a single exception class, EReverseTransactionException, with the understanding that any exception of this type is a logical check that should be handled with a message box:

        catch (EReverseTransactionException)

    But it's still understood that any other exceptions, ones that involve, for example, a memory ECC parity error, continue unhandled. In other words, I don't convert all errors that can be thrown by the ReverseTransaction() method into EReverseTransactionException, only the ones that are logically invalid because of what the user did.
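
    A hedged sketch of the single-base-class approach (the names here are invented): derive every "safe to show the user" error from one base type whose message is written for end users, catch only that base type at the UI boundary, and let everything else keep propagating.

        // Sketch: one base class marks errors whose Message is user-facing.
        public class UserFacingException : Exception
        {
            public UserFacingException(string userMessage) : base(userMessage) { }
        }

        public class TransactionAlreadyReversedException : UserFacingException
        {
            public TransactionAlreadyReversedException()
                : base("Transaction already reversed") { }
        }

        // At the UI boundary:
        try
        {
            ReverseTransaction(transaction);
        }
        catch (UserFacingException ex)
        {
            // New logical cases need a new subclass, but no new catch block.
            MessageBox.Show(ex.Message);
        }
        // Anything else (ECC errors, genuine bugs) continues unhandled.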

    Read the article

  • What is the reason not to use SELECT *?

    - by Chris Lively
    I've seen a number of people claim that you should specifically name each column you want in your select query. Assuming I'm going to use all of the columns anyway, why would I not use SELECT *? Even considering the question from 9/24, I don't think this is an exact duplicate as I'm approaching the issue from a slightly different perspective. One of our principles is to not optimize before it's time. With that in mind, it seems like using SELECT * should be the preferred method until it is proven to be a resource issue or the schema is pretty much set in stone. Which, as we know, won't occur until development is completely done. That said, is there an overriding issue to not use SELECT *?

    Read the article

  • Preprocessor #define vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-value-to-BGR-value conversion program) in C, and I realised that a function that converts from RGB to BGR can perform not only the conversion but also the inversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this, since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency, as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?

    Read the article

  • Finding databases for use in applications

    - by JonF
    Does anyone have recommendations on how I can find databases for the random things I might want to use in my application? For example, a database of zip code locations, area code cities, car engines, IP address locations, or whatever. I'm just asking generally: when you decide you need a bunch of data, where are some good places to start looking other than Google?

    Read the article

  • Find recipes that can be cooked from provided ingredients

    - by skaurus
    Sorry for my bad English :( Suppose I can organise the recipe and ingredient data in advance in any way I like. How can I efficiently search for recipes by user-provided ingredients, preferably sorted by best match? First would come recipes that use the maximum number of the provided ingredients and don't contain any others; after them, recipes that use fewer of the provided set (but still no other ingredients); and after them, recipes with the fewest additional required ingredients, and so on. All I can think of is representing each recipe's ingredients as a bitmask and comparing the required bitmask against all recipes, but that is obviously a bad way to go. And I don't see how related techniques like Levenshtein distance would apply here. I believe this should be quite a common task...
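
    A minimal sketch of the set-based scoring the question is circling around (illustrative data and names; no claim this is the optimal indexing scheme): score each recipe by how many provided ingredients it uses and how many extra ingredients it still needs, then sort.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RecipeSearch
        {
            static void Main()
            {
                var recipes = new Dictionary<string, HashSet<string>>
                {
                    ["omelette"] = new HashSet<string> { "egg", "butter" },
                    ["pancakes"] = new HashSet<string> { "egg", "flour", "milk" },
                    ["toast"]    = new HashSet<string> { "bread" },
                };
                var pantry = new HashSet<string> { "egg", "butter", "milk" };

                var ranked = recipes
                    .Select(r => new
                    {
                        Name = r.Key,
                        Used = r.Value.Count(pantry.Contains),             // provided ingredients used
                        Missing = r.Value.Count(i => !pantry.Contains(i))  // extra ingredients needed
                    })
                    .OrderBy(x => x.Missing)       // fully cookable recipes first
                    .ThenByDescending(x => x.Used) // then those using most of the pantry
                    .ToList();

                foreach (var r in ranked)
                    Console.WriteLine(r.Name + ": uses " + r.Used + ", missing " + r.Missing);
            }
        }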

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i)
        {
            // If i is even, it never exceeds 254 (it wraps from 254 back to 0)
            for (; i < 255; i += 2)
            {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely but technically constitute non-terminating conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s)
        {
            int total = 0;
            while (s.ReadByte() == 0)
                total++;
            return total;
        }

    Now suppose you feed it this thing:

        class InfiniteEmptyStream : Stream
        {
            // ... other members ...
            public override int Read(byte[] buffer, int offset, int count)
            {
                Array.Clear(buffer, offset, count); // output zeros
                return count; // never returns 0, so ReadByte never reports end of stream
            }
        }

    Or, more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example from an app I'm writing: an endless stream of zeros will be deserialized into infinitely many "empty" objects (until the collection class or the GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).

    How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness"? Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?

    Edit: One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
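
    For illustration, a hedged sketch of one defensive pattern: make the termination bound explicit, so even a pathological stream produces a defined failure instead of an infinite loop. The limit parameter is an invented name.

        // Sketch: counting with an explicit upper bound; fail loudly
        // when the bound is hit rather than looping forever.
        int CountZeros(Stream s, int limit)
        {
            int total = 0;
            while (s.ReadByte() == 0)
            {
                if (++total >= limit)
                    throw new InvalidDataException(
                        "More than " + limit + " consecutive zeros; aborting.");
            }
            return total;
        }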

    Read the article

  • C++ Singleton design pattern.

    - by Artem Barger
    Recently I've bumped into an implementation of the Singleton design pattern for C++. It looked like this (I have adapted it from a real-life example):

        // a lot of methods are omitted here
        class Singleton
        {
        public:
            static Singleton* getInstance();
            ~Singleton();
        private:
            Singleton();
            static Singleton* instance;
        };

    From this declaration I can deduce that the instance field is initiated on the heap, which means there is a memory allocation. What is completely unclear to me is when exactly that memory is going to be deallocated. Or is there a bug and a memory leak? It seems like there is a problem in the implementation. PS: And the main question: how do I implement it the right way?

    Read the article

  • C++ Headers/Source Files

    - by incrediman
    (Duplicate of C++ Code in Header Files) What is the standard way to split up C++ classes between header and source files? Am I supposed to put everything in the header file? Or should I declare the classes in the header file and define them in a .cpp file (source file)? Sorry if I'm shaky on the terminology here (declare, define, etc). So what's the standard?

    Read the article
