Search Results

Search found 8692 results on 348 pages for 'patterns practices'.


  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: during some restore operations on Microsoft SQL Server, the transaction log redo step might take an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to improve performance by tinkering with the read-ahead behavior of the redo operations. To do this, use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request, the security details of the user must be checked against the area/controller/action they are accessing, to see if they are allowed to view it. The security information is stored in the database, in entities such as:

        User, Permission, UserPermission, Action, ActionPermission

    A "Permission" is a token applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table), they have the token and can therefore access the action.

    I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting the database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve.

    One problem I constantly run into is that I'm using lazy loading and dynamic proxies with Entity Framework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection, but it's always null. I don't want to Include() everything, because I'm trying to keep things simple and generic (and easy to modify).

    One thing I know is that an Entity Framework DbContext is a unit of work with first-level caching: for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing, this worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals, or simply leave it to refresh when the application pool is recycled. Basically, it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request, as normal.

    Is there any disadvantage to this approach? Could I be doing something different?

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                    obj = x.X(arg1, arg2)
                    self._registry[hash_id] = obj
                    return obj
                return x.X(arg1, arg2)

        # module x.py
        class X:
            ...

    Is it a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store in _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving the pickle files filenames that contain hash_id).

    Except now the validity of a cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created the object. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do?

    I suppose I could make hash_id more robust by hashing a tuple that contains arg1 and arg2 as well as the filename and last-modified date of x.py and of every module and data file it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. And if I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and only invalidate the cache when there is an obvious mismatch. This means class X would have a class-level cache-validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier, or may not realize that they invalidated the existing cache, it seems better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid I'm missing something important; I'm also wondering whether there's a framework or package that can do all of this already. Ideally, I'd like to combine in-memory and disk-based caching.
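
    A minimal sketch of the disk-backed variant being described, assuming pickle files named after the hash key live in a local cache directory (the directory name is invented; the x module is taken from the question; the version/trait hashing discussed above is omitted for brevity):

        import os
        import pickle

        import x  # the module defining class X, as in the question

        CACHE_DIR = "cache"   # assumption: pickles live here, named by key
        os.makedirs(CACHE_DIR, exist_ok=True)

        class XFactory:
            _registry = {}    # key -> in-memory object

            def get_x(self, arg1, arg2):
                # NOTE: built-in hash() is randomized per process for strings,
                # so a stable digest (e.g. hashlib over a canonical repr) is
                # safer for keys that must survive across runs.
                key = hash((arg1, arg2))
                if key in self._registry:                 # first level: memory
                    return self._registry[key]
                path = os.path.join(CACHE_DIR, "%s.pickle" % key)
                if os.path.exists(path):                  # second level: disk
                    with open(path, "rb") as f:
                        obj = pickle.load(f)
                else:                                     # miss: build and persist
                    obj = x.X(arg1, arg2)
                    with open(path, "wb") as f:
                        pickle.dump(obj, f)
                self._registry[key] = obj
                return obj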

    Read the article

  • What are DRY, KISS, SOLID, etc. classified as?

    - by Morgan Herlocker
    Is something like DRY a design pattern, a methodology, or something in between? These principles do not have specific implementations that could necessarily be demonstrated (even if you can easily demonstrate a case NOT using something like KISS... see The Daily WTF for a plethora of examples), nor do they fully explain a development process the way a methodology generally would. Where does that leave these kinds of "rules of thumb"?

    Read the article

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred; you then handle the event and continue doing what you were doing before.

    To map that definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server returns to the listening it did before.

    However, this event happening and us getting notified 'just like that' is too much for me to accept. You can say: "It's not 'just like that', you have to register an event listener." But what is an event listener if not a function that for some reason isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end?

    Events are a nice abstraction to work with, but they are just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the language implementation or the OS) are doing it for us. It basically comes down to the following pseudocode, running somewhere low enough that it doesn't result in busy waiting:

        while True:
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear whether it is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards
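
    For illustration, a sketch of where the waiting actually lives, using Python's standard selectors module: the loop blocks inside the kernel's select/epoll call rather than busy-waiting in user code (the port and print calls are placeholders):

        import selectors
        import socket

        sel = selectors.DefaultSelector()
        server = socket.socket()
        server.bind(("localhost", 8080))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ)

        while True:                          # the "event loop"
            # Blocks inside the kernel until something is ready -- the
            # polling happens below us, with no busy waiting in our code.
            for key, _ in sel.select():
                if key.fileobj is server:
                    conn, addr = server.accept()   # the "event": a new connection
                    print("connected:", addr)
                    conn.close()                   # handle it, then resume waiting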

    Read the article

  • Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits

    - by Anand Akela
    Earlier this month at Oracle OpenWorld 2012, we celebrated the first anniversary of Oracle Enterprise Manager 12c. Early adopters of Oracle Enterprise Manager 12c have benefited from its federated self-service access to complete application stacks, automated provisioning, elastic scalability, metering, and charge-back capabilities. Crimson Consulting Group recently interviewed multiple early adopters of Oracle Enterprise Manager 12c and captured their findings in the white paper "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains".

    On October 25th at 10 AM Pacific time, Kirk Bangstad from the Crimson Consulting Group will join us in a live webcast to share what he learned from the early adopters of Oracle Enterprise Manager 12c. Don't miss this chance to hear how private clouds could impact your business and to ask questions of our experts.

    Webcast: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits
    Date: Thursday, October 25, 2012
    Time: 10:00 AM PDT | 1:00 PM EDT
    Register Today

    All attendees will receive the white paper "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains".

    Read the article

  • What's the best way to wire up an Entity Framework database context (model) to a ViewModel in MVVM WPF?

    - by hal9k2
    As in the question above: what is the best way to wire up an Entity Framework database model (context) to a ViewModel in MVVM (WPF)? I am learning the MVVM pattern in WPF. A lot of examples show how to implement the model and ViewModel, but the models in those examples are just simple classes; I want to use MVVM together with an Entity Framework model (database-first approach). What's the best way to wire the model to the ViewModel? Thanks for any answers.

        // ctor of ViewModel
        public ViewModel()
        {
            db = new PackageShipmentDBEntities(); // Entity Framework generated class
            ListaZBazy = new ObservableCollection<Pack>(db.Packs.Where(w => w.IsSent == false));
        }

    This is my usual ViewModel constructor. I think there must be a better way; I've been reading about the repository pattern, but I'm not sure whether I can adapt it to WPF MVVM.

    Read the article

  • Is having a class have a handleAction(type) method bad practice?

    - by zhenka
    My web application became a little too complicated to do everything in a controller, so I had to build large wrapper classes around my ORM models. The possible actions a user can trigger are all similar, and after a certain point I realized that the best approach would be to have the constructor receive the action type as a parameter and take care of the small differences internally, as opposed to either passing many arguments or doing a lot of work in the controller. Is this a good practice? I can't really give details, for privacy reasons.
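
    For concreteness, a hypothetical sketch of the approach being described -- a wrapper whose constructor receives the action type and looks up the per-action differences in a table (all names are invented, since the question gives no details):

        class OrderAction:                   # hypothetical wrapper around an ORM model
            _HANDLERS = {
                "approve": lambda order: order.mark("approved"),
                "reject":  lambda order: order.mark("rejected"),
            }

            def __init__(self, action_type):
                if action_type not in self._HANDLERS:
                    raise ValueError("unknown action: %r" % action_type)
                self._handle = self._HANDLERS[action_type]

            def run(self, order):
                self._handle(order)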

    Read the article

  • BDD: Getting started

    - by thom
    I'm starting with BDD, and this is my story:

        Feature: Months and days to days
          In order to see months and days as days
          As a date conversion fan
          I need a webpage where users can enter days and months and convert them to days

    I have some doubts...

    Should I write all my scenarios before coding anything, or should I write a scenario, then write code, write another scenario, then write more code, and so on? If I write all my scenarios first, can their steps be approved while the production code is still not done? When should I refactor my code: after the feature is done, or after each scenario is implemented?
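
    One way to see the scenario-first rhythm most BDD guides recommend, as a framework-free Python sketch: write the scenario, watch it fail, implement just enough, refactor, repeat (this sketch assumes, purely for illustration, that a month is 30 days):

        def months_and_days_to_days(months, days):
            # assumption for this sketch only: a "month" is 30 days
            return months * 30 + days

        # Scenario: convert 2 months and 5 days
        #   Given I entered 2 months and 5 days
        #   When I press convert
        #   Then the result should be 65 days
        assert months_and_days_to_days(2, 5) == 65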

    Read the article

  • Techniques for separating game model from presentation

    - by liortal
    I am creating a simple 2D game using XNA. The elements that make up the game world are what I refer to as the "model". For instance, in a board game, I would have a GameBoard class that stores information about the board, such as:

        - Location
        - Size
        - Details about cells on the board (occupied/not occupied)
        - etc.

    This object should either know how to draw itself, or describe how to draw itself to some other entity (a renderer) in order to be displayed. I believe that since the board only contains the data and logic for things regarding it or the cells on it, it should not provide the logic of how to draw things (separation of concerns). How can I achieve a good partitioning, and easily allow some other entity to draw it properly? My motivations for doing so are:

        - Allowing multiple "implementations" of presentation for a single game entity
        - Easier porting to other environments where the presentation code is not available (for example, porting my code to Unity or another game technology that does not rely on XNA)
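
    A common answer is to keep drawing out of the model entirely and hand the model to a per-platform renderer; a small hedged sketch of that split (class names are invented, with print standing in for SpriteBatch calls):

        class GameBoard:                     # model: data and rules only, no drawing
            def __init__(self, size):
                self.size = size
                self.cells = [[None] * size for _ in range(size)]

            def is_occupied(self, x, y):
                return self.cells[y][x] is not None

        class XnaBoardRenderer:              # one presentation; others could coexist
            def draw(self, board):
                for y in range(board.size):
                    print("".join("X" if board.is_occupied(x, y) else "."
                                  for x in range(board.size)))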

    Read the article

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS, for the experience and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex.

    At first I debated creating a naming convention that would encode what each parameter should be (int, string, bool, etc.); then I figured I could instead create options to validate against. But either way, every function still needs to run some sort of parameter validation that determines what each value may be and validates against it. Granted, this could be handled by passing the list of parameters to a shared routine, but it still has to happen, and one of my goals is to remove parameter validation from the function itself, so that the function contains only the code that accomplishes its intended task.

    Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function anyway, so I should stick with doing that?
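
    One way to pull validation out of the function body in Python is a decorator that checks arguments before the call; a hedged sketch (the spec format and names are invented, and it accepts keyword arguments only, for brevity):

        def validate(**specs):
            # specs maps each parameter name to its expected type
            def wrap(fn):
                def checked(**kwargs):
                    for name, typ in specs.items():
                        if not isinstance(kwargs.get(name), typ):
                            raise TypeError("%s must be %s" % (name, typ.__name__))
                    return fn(**kwargs)
                return checked
            return wrap

        @validate(title=str, page_id=int)
        def render_page(title, page_id):
            # only the actual work remains in the body
            return "<h1>%s</h1> (page %d)" % (title, page_id)

        print(render_page(title="Home", page_id=1))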

    Read the article

  • How essential is it to make a service layer?

    - by BornToCode
    I started building an app in 3 layers (DAL, BL, UI); it mainly handles CRM, some sales reports, and inventory. A colleague told me that I must move to the service layer pattern -- that developers came to the service pattern from experience, and that it is the better approach for designing most applications. He said it would be much easier to maintain the application in the future that way.

    Personally, I get the feeling that it just makes things more complex, and I couldn't see a benefit that would justify it. This app does have an additional small partial UI that uses some (but only a few) of the desktop application's functions, so I did find myself duplicating some code (but not much). I wouldn't convert the app to be service-oriented just because of some code duplication, but he said I should use the pattern anyway because in general it's a very good architecture. Why are programmers so in love with services? I tried to Google the topic, but I'm still confused and can't decide what to do.
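
    For what the colleague seems to be describing, a minimal sketch: both the desktop UI and the small partial UI call one service facade rather than each talking to the business layer directly, which is where the de-duplication comes from (all names invented):

        class CustomerService:               # single entry point shared by both UIs
            def __init__(self, repository):
                self._repo = repository

            def overdue_invoices(self, customer_id):
                customer = self._repo.get(customer_id)
                return [inv for inv in customer.invoices if inv.is_overdue()]

        # Both the desktop UI and the small partial UI would do:
        #     service = CustomerService(repo)
        #     rows = service.overdue_invoices(42)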

    Read the article

  • Stuff you learned in school that you have never used again?

    - by Mercfh
    Obviously we learn plenty of things at university that probably don't apply to everyday use, but is there anything that stands out in particular? Maybe something that was concentrated on a lot? For me it was definitely two things: OO concepts and pointers.

    I still use OO, but not nearly to the degree people made it out to matter. I can see where it'd be useful, but in my line of work we don't have huge numbers of classes -- maybe a couple at most. And there certainly isn't much OO reuse (I finally figured out what that means, lol). Pointers are another thing. Again, I can see where they'd be useful, but I barely ever touch them, nor do the people I work with. I guess language choice has a lot to do with that, but still. What about you?

    Edit: For those asking, I work for a large printer company, and most of the applications we work on are Java + XML and ActionScript for "printer apps". But we are moving towards other languages (think WebKit and the like), so the code we write per part is quite small. I never said OO wasn't useful; I just said I personally haven't seen it used much in my workplace.

    Read the article

  • Starting all over again?

    - by kyndigs
    Have you ever been developing something and just come to a point where you think: this is rubbish, the design is bad, and although I will lose time it will be better to start all over again? What should you consider before making this step? I know it can be drastic in some cases. Is it best to totally ignore what you did before, or to take some of the best bits from it? Some real-life examples would be great.

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low-end devices without a GPU, so I'm using the 2D API. So far I have tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder whether it wouldn't be easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view, with the square game area -- a custom view -- centered in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to just use one large custom view? I'm thinking the latter would probably also make it easier to port my code to other platforms.

    Read the article

  • How to stop getting too focused on a train of thought when programming?

    - by LDM91
    I often find myself getting too focused on a single train of thought when programming, which results in what I guess could be described as "tunnel vision". As a result I miss important details and clues, which means I waste a fair amount of time before finally realizing that the path I'm taking to solve the task is wrong. Afterwards, I take a step back, which almost always results in me discovering what I've missed in a lot less time. It's becoming really frustrating, as it feels like I'm wasting a lot of time and effort, so I was wondering whether anyone else has experienced similar issues, and has suggestions for how to stop going down dead ends and programming "blindly", as it were!

    Read the article

  • Is using ELSE bad programming?

    - by dave.b
    I've often come across bugs that were caused by using the ELSE construct. A prime example is something along the lines of:

        if (passwordCheck() == false) {
            displayMessage();
        } else {
            letThemIn();
        }

    To me this screams security problem. I know that passwordCheck is likely to be a boolean, but I wouldn't stake my application's security on it. What would happen if it were a string, an int, etc.? I usually try to avoid using ELSE, and instead opt for two completely separate IF statements to test for what I expect. Anything else then either gets ignored or is specifically handled. Surely this is a better way to prevent bugs and security issues from entering your app. How do you do it?
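
    The "two separate IFs" style the poster describes might look like this Python sketch: each expected outcome is matched explicitly, and anything unexpected is handled on its own rather than falling into an ELSE (the helper functions are hypothetical):

        def handle_login(password_ok):
            # deny by default; only an explicit True lets anyone in
            if password_ok is True:
                let_them_in()
                return
            if password_ok is False:
                display_message()
                return
            # anything else is specifically handled instead of silently trusted
            raise TypeError("passwordCheck() returned %r" % (password_ok,))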

    Read the article

  • What are your worst experiences with whitespace?

    - by CheeseConQueso
    What are some good examples of whitespace being the cause of any type of error and/or the total disruption of scripts or markup? I am most interested in accounts related to languages that are commonly used today, but I would like to hear about any cases in general.

    PS -- This should be a wiki, but I don't know what happened to the "make this a wiki" checkbox. If someone comes across this with the rights to set it as a wiki, please do so. If SO decided to keep away from wikis altogether, please leave a comment telling me so. Thanks
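
    Python is the usual first answer here, since indentation is syntax; a minimal illustration of how one invisible change alters behaviour (mixing tabs and spaces in the same block similarly raises a TabError in Python 3):

        for i in range(3):
            total = i * 2
        print(total)       # outside the loop: prints 4, once

        for i in range(3):
            total = i * 2
            print(total)   # one level deeper: prints 0, 2, 4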

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet, and differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of this pattern is to restore the state of the system. Wiki says that the primary intention is to be able to restore the changes made to the system. This gives a different impression: it suggests that a system could have a memento implementation with no need to restore, and that the ability to restore is just one feature. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, which is useful in case of error or failure. So, my question is: why exactly do we use this pattern? Is it to save previous states, or to enforce encapsulation between the caretaker and the memento? Why is this type of encapsulation so important? Edit: For those visiting, check out this implementation!
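
    A minimal Python sketch of the pattern under discussion: the caretaker stores mementos without ever looking inside them, which is the encapsulation the sources emphasize (a tuple stands in for the memento class, purely for brevity):

        class Editor:                    # originator
            def __init__(self):
                self._text = ""

            def type(self, s):
                self._text += s

            def save(self):
                return ("memento", self._text)   # opaque to the caretaker

            def restore(self, memento):
                _, self._text = memento

        history = []                     # caretaker: holds mementos, never inspects them
        ed = Editor()
        ed.type("hello")
        history.append(ed.save())
        ed.type(" world")
        ed.restore(history.pop())        # back to "hello"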

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of having no option to audit actions by users, you might face the opposite problem -- too many ways to audit!

    You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx

    In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx

    As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.

    Read the article

  • Methods of ordering function definitions in code

    - by xralf
    When I work on a programming project (usually a command-line application in Python with many switches), I usually end up with 30 or more functions. Most of the functions live in one file (except some helpers that I use across projects). Some of the functions are called for a particular switch (like -p or --print), but many others do helper computations, print operations, or database operations, because I don't want the main functions to grow too large. When I have an idea for new functionality, I often add the new function to the file at some random spot. Should I think more about it and place it in some particular spot? Are there established methods for this?
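
    Since many of the functions map to command-line switches, one common organizing device is a dispatch table near the top of the file that doubles as a table of contents, with each handler's helpers grouped below it; a hedged sketch (command names invented):

        import argparse

        def cmd_print(args):
            print("printing with", args)

        def cmd_export(args):
            print("exporting with", args)

        COMMANDS = {            # switch -> handler; doubles as a table of contents
            "print": cmd_print,
            "export": cmd_export,
        }

        def main():
            parser = argparse.ArgumentParser()
            parser.add_argument("command", choices=sorted(COMMANDS))
            args, rest = parser.parse_known_args()
            COMMANDS[args.command](rest)

        if __name__ == "__main__":
            main()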

    Read the article

  • Why should a class be anything other than "abstract" or "final/sealed"

    - by Nicolas Repiquet
    After 10+ years of Java/C# programming, I find myself creating either:

        - abstract classes: a contract not meant to be instantiated as-is
        - final/sealed classes: an implementation not meant to serve as a base class for something else

    I can't think of any situation where a plain "class" (i.e. neither abstract nor final/sealed) would be "wise programming". Why should a class be anything other than "abstract" or "final/sealed"?

    Edit: This great article explains my concerns far better than I can.
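
    For comparison, the same discipline spelled out in Python: abc marks the contract, and typing.final (available since Python 3.8, enforced by type checkers rather than at runtime) marks the leaves:

        import math
        from abc import ABC, abstractmethod
        from typing import final

        class Shape(ABC):              # "abstract": a contract, not instantiable as-is
            @abstractmethod
            def area(self): ...

        @final
        class Circle(Shape):           # "final/sealed": not meant to be subclassed
            def __init__(self, r):
                self.r = r

            def area(self):
                return math.pi * self.r ** 2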

    Read the article

  • How to implement child-parent aggregation link in C++?

    - by Giorgio
    Suppose that I have three classes P, C1, C2, with:

        - composition (strong aggregation) relations between P <>- C1 and P <>- C2, i.e. every instance of P contains an instance of C1 and an instance of C2, which are destroyed when the parent P instance is destroyed;
        - an association relation between instances of C1 and C2 (not necessarily between children of the same P).

    To implement this in C++, I normally:

        - define three classes P, C1, C2;
        - define two member variables of P of type boost::shared_ptr<C1> and boost::shared_ptr<C2>, and initialize them with newly created objects in P's constructor;
        - implement the relation between C1 and C2 using a boost::weak_ptr<C2> member variable in C1 and a boost::weak_ptr<C1> member variable in C2, which can be set later via appropriate methods, when the relation is established.

    Now, I also would like to have a link from each C1 and C2 object back to its parent P object. What is a good way to implement this? My current idea is to use a constant raw pointer (P* const) that is set from the constructor of P (which, in turn, calls the constructors of C1 and C2), i.e. something like:

        class C1
        {
        public:
            C1(P* const p, ...) : parent(p) { ... }
        private:
            P* const parent;
            ...
        };

        class P
        {
        public:
            P(...) : childC1(new C1(this, ...)) ... { ... }
        private:
            boost::shared_ptr<C1> childC1;
            ...
        };

    Honestly, I see no risk in using a private constant raw pointer in this way, but I know that raw pointers are often frowned upon in C++, so I was wondering whether there is an alternative solution.

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell, and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe the result to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual, and I got a Stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines for saving stack space (experienced Haskellers, please correct me if I write something that is not correct):

        - Avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization).
        - Introduce seq to force early evaluation of sub-expressions, so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation).

    After introducing five or six seq calls in my code, my tool runs smoothly again (also on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer, I wanted to ask whether introducing seq in this way is common practice, and how often one normally sees seq in Haskell production code. Or are there techniques that make it possible to avoid using seq so often and still use little stack space?

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class's constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it.

    Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on.

    The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this?

    I could keep the current approach, where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class's constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. Outer would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check whether the user requested an override of any of the constructor arguments.

    Alternatively, I could rewrite all the constructors to take as parameters the full objects they require. The creation of all the inner objects would then move outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. It would have to ask the higher-level objects for parameter values for the lower-level objects whenever a parameter isn't provided by the user.

    Neither approach looks terribly simple. Is there another approach? Has this problem come up enough in the past to have a pattern I can read about? I'm using Python, but I don't think it matters much at the design level.
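
    A hedged sketch of the second alternative in Python: a controller layer builds the graph bottom-up, and user overrides are simply pre-built instances handed in for the pieces they want to replace (all class names invented):

        class Inner:
            def __init__(self, param):
                self.param = param

        class Middle:
            def __init__(self, inner):
                self.inner = inner

        class Outer:
            def __init__(self, middle):
                self.middle = middle

        def build_outer(overrides=None):
            # Controller layer: construct bottom-up, honouring any overrides.
            o = overrides or {}
            inner = o.get("inner") or Inner(param=1)
            middle = o.get("middle") or Middle(inner)
            return Outer(middle)

        outer = build_outer({"inner": Inner(param=99)})   # reach deep into the graph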

    Read the article
