Search Results

Search found 14326 results on 574 pages for 'design by contract'.


  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is developing a system that uses Unity as our IoC container, and to provide NHibernate ISessions (units of work) scoped to each HTTP request, we use Unity's ChildContainer feature to create a child container per request and stick the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit-testing strategy. Right now the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increased when we decided to use service location from our domain layer to lazily resolve dependencies from the container, so now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well. So we're wondering: what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during tests has the flexibility to build up and tear down the containers consistently, with the service-location bits reaching those precise containers? I hope the question is clear, thanks!
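
    One common shape for the answer, sketched below in Python to stay language-neutral (ContainerScope, TestScope and AmbientScope are hypothetical names, not Unity APIs): hide where the containers live behind a provider abstraction. Production wires it to HttpApplication/HttpContext once at startup; each test fixture swaps in a scope it builds and tears down itself, and the service-location bits only ever talk to the provider.

        class ContainerScope:
            """Resolves the app- and request-level containers for the current context."""
            def app_container(self): raise NotImplementedError
            def request_container(self): raise NotImplementedError

        class TestScope(ContainerScope):
            """Test double: the fixture owns both containers explicitly."""
            def __init__(self, app, request):
                self._app, self._request = app, request
            def app_container(self): return self._app
            def request_container(self): return self._request

        class AmbientScope:
            """The single access point the service-location code talks to."""
            current = None   # production: an HttpContext-backed scope; tests: a TestScope

        # Test setup/teardown then looks like:
        #   AmbientScope.current = TestScope(build_app_container(), build_request_container())
        #   ... run the test ...
        #   AmbientScope.current = None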

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, This question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations and sensors, amongst other things. All of these are editable and deletable by users. However, when an item is edited or deleted I need to keep the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings"; they are really more of a log. Readings are logged against sensors, because each reading belongs to a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from, valid_to, edited_by. If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, the record is deleted. However, I was never happy with the triggers I needed to enforce foreign-key consistency.

    I could possibly avoid triggers by using the temporal extension to PostgreSQL. This provides a column type called "period", which stores a period of time between two dates and allows CHECK constraints to prevent overlapping periods. That might be an answer.

    I'm wondering, though, if there is another way. I've seen people mention separate history tables, but I don't like the thought of maintaining two tables for almost every one table (though it's still a possibility). Maybe I could cut my initial implementation down to only check the consistency of "current" records, i.e. only enforce constraints on records where valid_to is 9999-12-31 23:59:59. After all, people who use history tables don't seem to have constraint checks on those tables (for the same reason: you'd need triggers).

    Does anyone have any thoughts about this?

    PS - the title also mentions "auditable database". In the previous system I mentioned, there is always the edited_by field, so all changes are tracked and we can always see who changed a record. Not sure how much difference that makes. Thanks.
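
    For what it's worth, the heart of the valid_from/valid_to scheme is a single "close the old row, insert the new version" operation done atomically. A minimal sketch in Python (sqlite3-flavoured; the table and column names are placeholders), assuming the 9999-12-31 sentinel marks the current row:

        from datetime import datetime

        END_OF_TIME = "9999-12-31 23:59:59"

        def update_versioned(conn, key, new_name, edited_by):
            """Close the current sensor row and insert the new version, atomically."""
            now = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
            with conn:  # sqlite3: commits on success, rolls back on exception
                conn.execute(
                    "UPDATE sensor SET valid_to = ? WHERE id = ? AND valid_to = ?",
                    (now, key, END_OF_TIME))
                conn.execute(
                    "INSERT INTO sensor (id, name, valid_from, valid_to, edited_by) "
                    "VALUES (?, ?, ?, ?, ?)",
                    (key, new_name, now, END_OF_TIME, edited_by))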

    Read the article

  • UnitOfWork & StructureMap & Desktop Application

    - by Afshin Gh
    When developing a web app with StructureMap, I normally use HybridHttpOrThreadLocalScoped for my UnitOfWork part (the unit of work starts per web request, and so on). What about a desktop app (Windows service, console, ...)? There is no session or request in a desktop app. How should I manage the UoW in this situation? Any good reference or article? I don't want to make it a singleton. What should I do?
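
    The usual answer is that in a desktop app you define the scope yourself: one unit of work per logical operation (a command, a timer tick, a queue message), opened and disposed explicitly rather than hung off an ambient request. A sketch of the shape in Python, standing in for the .NET code; UnitOfWork here is a hypothetical stand-in for an NHibernate session wrapper:

        from contextlib import contextmanager

        class UnitOfWork:
            """Placeholder for a session/transaction wrapper."""
            def commit(self):   print("commit")
            def rollback(self): print("rollback")
            def dispose(self):  print("dispose session")

        @contextmanager
        def unit_of_work():
            uow = UnitOfWork()          # open a session/transaction
            try:
                yield uow
                uow.commit()
            except Exception:
                uow.rollback()
                raise
            finally:
                uow.dispose()

        # Each service-layer operation owns its scope explicitly:
        def import_orders(batch):
            with unit_of_work() as uow:
                pass                    # work against uow goes here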

    Read the article

  • Aren't Information Expert / Tell Don't Ask at odds with Single Responsibility Principle?

    - by moffdub
    It is probably just me, which is why I'm asking the question. Information Expert, Tell Don't Ask, and SRP are often mentioned together as best practices, but I think they are at odds. Here is what I'm talking about.

    Code that favors SRP but violates Tell Don't Ask / Info Expert:

        Customer bob = ...;
        // TransferObjectFactory has to use Customer's accessors to do its work,
        // violates Tell Don't Ask
        CustomerDTO dto = TransferObjectFactory.createFrom(bob);

    Code that favors Tell Don't Ask / Info Expert but violates SRP:

        Customer bob = ...;
        // Now Customer is doing more than just representing the domain concept of Customer,
        // violates SRP
        CustomerDTO dto = bob.toDTO();

    If they are indeed at odds, that's a vindication of my OCD. Otherwise, please fill me in on how these practices can co-exist peacefully. Thank you.

    Edit: someone wants a definition of the terms -

    - Information Expert: objects that have the data needed for an operation should host the operation
    - Tell Don't Ask: don't ask objects for data in order to do work; tell the objects to do the work
    - Single Responsibility Principle: each object should have a narrowly defined responsibility

    Read the article

  • GRAPH PROBLEM: find an algorithm to determine the shortest path from one point to another in a rectangular maze

    - by newba
    I'm getting such a headache trying to come up with an appropriate algorithm to go from a START position to an EXIT position in a maze. For what it's worth, the maze is rectangular, maximum size 500x500, and in theory is solvable by DFS with some branch and bound techniques ... Sample input and output:

        10 3 4
        7 6
        3 3 1 2 2 1 0
        2 2 2 4 2 2 5
        2 2 1 3 0 2 2
        2 2 1 3 3 4 2
        3 4 4 3 1 1 3
        1 2 2 4 2 2 1

        Output: 5 1 4 2

    Explanation: our agent loses energy every time he takes a step, and he can only move UP, DOWN, LEFT and RIGHT. Also, if the agent arrives with a remaining energy of zero or less, he dies, so we print something like "Impossible". So, in the input, 10 is the agent's initial energy, 3 4 is the START position (i.e. column 3, line 4) and the maze is 7x6. Think of this as a kind of labyrinth in which I want to find the exit that leaves the agent with the best remaining energy (the shortest path). If several paths lead to the same remaining energy, we choose the one with the smallest number of steps, of course.

    I need to know whether a DFS on a 500x500 maze is feasible in the worst case with these limitations, and how to do it, storing the remaining energy at each step and the number of steps taken so far. The output means the agent arrived at exit position 1 4 with remaining energy 5, in 2 steps. If we look carefully, in this maze it's also possible to exit at position 3 1 (column 3, row 1) with the same energy but in 3 steps, so we choose the better one.

    With this in mind, can someone help me with some code or pseudo-code? I'm having trouble working this out with a 2D array and storing the remaining energy, the path (or number of steps taken)...
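
    Since every cell cost is non-negative, this is less a DFS problem than a textbook Dijkstra one: minimising energy spent maximises energy remaining, and keying the priority queue on (cost, steps) gets the fewest-steps tie-break for free. A 500x500 grid is only 250,000 nodes, which a binary-heap Dijkstra handles easily. Below is a Python sketch (a contest submission would be the same idea in C++). The cost model is inferred from the sample rather than from the full statement: every visited cell, including the start, costs its value, and any border cell is an exit; under that reading the sketch reproduces the sample output exactly.

        import heapq

        def best_exit(energy, start, grid):
            """Return (remaining_energy, exit_cell, steps), or None for "Impossible"."""
            rows, cols = len(grid), len(grid[0])
            sr, sc = start
            best = {start: (grid[sr][sc], 0)}       # (energy spent, steps) per cell
            pq = [(grid[sr][sc], 0, start)]
            while pq:
                cost, steps, (r, c) = heapq.heappop(pq)
                if (cost, steps) > best[(r, c)]:
                    continue                        # stale queue entry
                if r in (0, rows - 1) or c in (0, cols - 1):
                    remaining = energy - cost       # first exit popped is optimal
                    return (remaining, (r, c), steps) if remaining > 0 else None
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        cand = (cost + grid[nr][nc], steps + 1)
                        if cand < best.get((nr, nc), (float("inf"), 0)):
                            best[(nr, nc)] = cand
                            heapq.heappush(pq, (cand[0], cand[1], (nr, nc)))
            return None

        maze = [[3, 3, 1, 2, 2, 1, 0],
                [2, 2, 2, 4, 2, 2, 5],
                [2, 2, 1, 3, 0, 2, 2],
                [2, 2, 1, 3, 3, 4, 2],
                [3, 4, 4, 3, 1, 1, 3],
                [1, 2, 2, 4, 2, 2, 1]]
        # start "column 3, line 4" is row 3, col 2 when 0-indexed
        print(best_exit(10, (3, 2), maze))   # (5, (3, 0), 2) -> "5 1 4 2" in 1-based output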

    Read the article

  • Perl vs Python: implementation of algorithms to deal with advanced data structures

    - by user350571
    I'm learning Perl, and every time I search for Perl stuff on the internet I get some random page with people saying that Perl should die because code written in it looks like a lesson in steganography. Then they say that Python is clean and things like that. Now, I know those comparisons are always stupid and made by fellows who feel that languages are an extension of their boring personality, so let me ask instead: can you give me an implementation, in both languages, of a widely known algorithm on a data structure like a red-black tree, so I can compare?
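
    Not a side-by-side comparison, but here is one half of it: a compact, CLRS-style red-black insertion in Python, as a sketch for comparison rather than a production library. A Perl version would have the same structure, with blessed hashes in place of the class.

        class Node:
            __slots__ = ("key", "color", "left", "right", "parent")
            def __init__(self, key, color, nil=None):
                self.key, self.color = key, color           # color is "R" or "B"
                self.left = self.right = self.parent = nil

        class RedBlackTree:
            def __init__(self):
                self.nil = Node(None, "B")                  # shared sentinel leaf
                self.nil.left = self.nil.right = self.nil.parent = self.nil
                self.root = self.nil

            def _rotate(self, x, left):
                """Rotate x downward; `left` picks the direction."""
                a, b = ("right", "left") if left else ("left", "right")
                y = getattr(x, a)
                setattr(x, a, getattr(y, b))
                if getattr(y, b) is not self.nil:
                    getattr(y, b).parent = x
                y.parent = x.parent
                if x.parent is self.nil:
                    self.root = y
                elif x is x.parent.left:
                    x.parent.left = y
                else:
                    x.parent.right = y
                setattr(y, b, x)
                x.parent = y

            def insert(self, key):
                z, y, x = Node(key, "R", self.nil), self.nil, self.root
                while x is not self.nil:                    # ordinary BST descent
                    y, x = x, (x.left if key < x.key else x.right)
                z.parent = y
                if y is self.nil:
                    self.root = z
                elif key < y.key:
                    y.left = z
                else:
                    y.right = z
                while z.parent.color == "R":                # restore the red-black rules
                    left = z.parent is z.parent.parent.left
                    uncle = z.parent.parent.right if left else z.parent.parent.left
                    if uncle.color == "R":                  # case 1: recolor, move up
                        z.parent.color = uncle.color = "B"
                        z.parent.parent.color = "R"
                        z = z.parent.parent
                    else:
                        if z is (z.parent.right if left else z.parent.left):
                            z = z.parent                    # case 2: rotate into case 3
                            self._rotate(z, left)
                        z.parent.color = "B"                # case 3: rotate grandparent
                        z.parent.parent.color = "R"
                        self._rotate(z.parent.parent, not left)
                self.root.color = "B"

            def inorder(self, x=None):
                x = self.root if x is None else x
                if x is not self.nil:
                    yield from self.inorder(x.left)
                    yield x.key
                    yield from self.inorder(x.right)

        t = RedBlackTree()
        for k in (13, 8, 17, 1, 11, 15, 25, 6):
            t.insert(k)
        print(list(t.inorder()))   # [1, 6, 8, 11, 13, 15, 17, 25]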

    Read the article

  • Circular Dependency Solution

    - by gfoley
    Our current project has run into a circular-dependency issue. Our business-logic assembly uses classes and static methods from our SharedLibrary assembly. The SharedLibrary contains a whole bunch of helper functionality, such as a SQL reader class, enumerators, global variables, error handling, logging and validation. The SharedLibrary needs access to the business objects, but the business objects need access to the SharedLibrary. The old developers solved this obvious code smell by replicating the functionality of the business objects in the shared library (very anti-DRY). I've now spent a day reading about my options to solve this, but I'm hitting a dead end. I'm open to the idea of an architecture redesign, but only as a last resort. So how can I have a shared helper library that can access the business objects, with the business objects still accessing the shared helper library?
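
    The standard cure is dependency inversion: the shared library stops referencing the business assembly and instead declares the narrow interfaces it needs, which the business objects implement. A minimal sketch of the idea in Python; the names are illustrative, not from the project.

        from abc import ABC, abstractmethod

        # --- shared library: references nothing above it ---
        class Auditable(ABC):
            """The narrow view of a business object the shared code actually needs."""
            @abstractmethod
            def audit_description(self) -> str: ...

        class ChangeLogger:
            def log_change(self, entity: Auditable) -> None:
                print("changed:", entity.audit_description())

        # --- business layer: references the shared library, never the reverse ---
        class Customer(Auditable):
            def __init__(self, name):
                self.name = name
            def audit_description(self) -> str:
                return "Customer(%s)" % self.name

        ChangeLogger().log_change(Customer("Acme"))   # the arrows now point one way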

    Read the article

  • Builder pattern and singletons

    - by Berryl
    Does anyone have any links to some code they like that shows a good example of this in C#? As an example of bad code, here is what a builder I have now looks like. I'm trying to keep the flexibility of the builder pattern without rebuilding the properties. Cheers, Berryl

        public abstract class ActivityBuilder
        {
            public abstract ActivityBuilder Build();

            public bool IsBuilt { get; protected set; }

            public IEnumerable<Project> Projects
            {
                get
                {
                    if (_projects == null) { Build(); }
                    return _projects;
                }
            }
            protected IEnumerable<Project> _projects;

            // .. other properties
        }
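
    One way to keep the builder lazy without the IsBuilt/null checks sprinkled over every property is to memoise the build step itself. A small Python sketch of the idea, with cached_property playing the role of the guard:

        from functools import cached_property

        class ActivityBuilder:
            @cached_property
            def _built(self):
                print("building once")              # the expensive assembly, run once
                return {"projects": ["p1", "p2"]}

            @property
            def projects(self):
                return self._built["projects"]      # first access triggers the build

        b = ActivityBuilder()
        b.projects
        b.projects                                  # "building once" is printed a single time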

    Read the article

  • Why does eGet() in EMF return Object rather than EObject?

    - by Gabriel Šcerbák
    I am working on some code using the EMF framework in Java, but it is really hard to use; e.g., I cannot implement a type-safe, OCL-like query API on top of EMF. One of the reasons is that eGet() for an EStructuralFeature returns just an Object, not an EObject. So anything I write must do a lot of null checking, type checking and type casting, which is unsafe, hurts performance, and cannot be generalized in a reusable way. Why doesn't EMF generate dummy implementations with EObject wrappers for arbitrary Object values? Implementing the EObject, and hence the EClass, interfaces even with a simple throw of UnsupportedOperationException is a real pain (the APIs are too big). The same holds for the eContainer() method, which makes navigating the model upwards painful.

    Read the article

  • Codechef practice question help needed - find trailing zeros in a factorial

    - by manugupt1
    I have been working on this for 24 hours now, trying to optimize it. The question is how to find the number of trailing zeroes in the factorial of a number, for numbers up to 10,000,000 and about 10 million test cases, in roughly 8 seconds. The code is as follows:

        #include <iostream>
        using namespace std;

        int count5(int a) {
            int b = 0;
            for (int i = a; i > 0; i = i / 5) {
                if (i % 15625 == 0) { b = b + 6; i = i / 15625; }
                if (i % 3125  == 0) { b = b + 5; i = i / 3125; }
                if (i % 625   == 0) { b = b + 4; i = i / 625; }
                if (i % 125   == 0) { b = b + 3; i = i / 125; }
                if (i % 25    == 0) { b = b + 2; i = i / 25; }
                if (i % 5     == 0) { b++; }
                else break;
            }
            return b;
        }

        int main() {
            int l;
            int n = 0;
            cin >> l;                       // number of test cases taken as input
            int *T = new int[l];
            for (int i = 0; i < l; i++)
                cin >> T[i];                // the numbers, one per test case
            for (int i = 0; i < l; i++) {
                n = 0;
                for (int j = 5; j <= T[i]; j = j + 5) {
                    n += count5(j);         // trailing zeroes accumulated
                }
                cout << n << endl;          // count printed per test case
            }
            delete[] T;
        }

    Please help me by suggesting a new approach, or suggesting some modifications to this one.
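
    The inner loop over every multiple of 5 is what kills the running time: summing count5(j) up to n just recomputes, in O(n) steps, what Legendre's formula gives directly. The number of trailing zeros of n! is floor(n/5) + floor(n/25) + floor(n/125) + ..., which is O(log n) per test case. A Python sketch with bulk I/O (a contest submission would be the same idea in C++):

        import sys

        def trailing_zeros(n):
            z, p = 0, 5
            while p <= n:
                z += n // p     # multiples of p each contribute one more factor of 5
                p *= 5
            return z

        def main():
            data = sys.stdin.buffer.read().split()
            t = int(data[0])
            out = [str(trailing_zeros(int(x))) for x in data[1:1 + t]]
            sys.stdout.write("\n".join(out) + "\n")

        if __name__ == "__main__":
            main()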

    Read the article

  • Producer/consumer in Grails?

    - by Mulone
    Hi all, I'm trying to implement a producer/consumer app in Grails, after several unsuccessful attempts at implementing concurrent threads. Basically I want to store all the events coming from clients (through separate AJAX calls) in a single queue, and then process that queue linearly as new events are added. This looks like the producer/consumer problem: http://en.wikipedia.org/wiki/Producer-consumer_problem How can I implement this in Grails (maybe with a timer, or better yet by generating a 'process queue' event)? Basically I'd like to have a singleton service waiting for new events in the queue and processing them linearly (even if the queue is loaded by several concurrent processes). Any hints? Cheers!
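
    The shape of the usual answer, sketched in Python rather than Grails: a single consumer thread draining a thread-safe queue that many producers (the AJAX handlers) push onto. In Grails the equivalent would be a singleton service owning the queue and one worker thread or executor; the sketch below only illustrates the pattern.

        import queue, threading

        events = queue.Queue()                     # safe for many producers, one consumer

        def consumer():
            while True:
                event = events.get()               # blocks until an event arrives
                try:
                    print("processing", event)     # events handled strictly one at a time
                finally:
                    events.task_done()

        threading.Thread(target=consumer, daemon=True).start()

        # any number of request handlers can do this concurrently:
        events.put({"type": "click", "user": 42})
        events.join()                              # wait until the queue is drained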

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And, if so, what is the syntax and are there any limitations? The particular computed columns I am looking at conform account numbers by removing leading zeros; an index is also created on the conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                          PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                          LEN(LTRIM(RTRIM([ACCT_NUM]))))),
            '') PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)),
            '')

    I guess I could just make this a normal column and put the code that standardizes the account numbers into every process that inserts into the table; we used the computed column to keep the standardization code in one place.

    Read the article

  • Reasons to start a new project in COBOL

    - by luvieere
    Are there any feasible reasons to start a new project in COBOL? What benefits of the language would one find convincing enough to start a new project in it? I'm thinking mainly of viewing the language in terms of Behavior Driven Development: something like the steps involved in using a framework such as Cucumber, except that the behavior description and the step definitions would be integrated into one unit by using the language's features.

    Read the article

  • GORM ID generation and belongsTo association ?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId

            static hasMany = [codes: CodeSummary]

            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession

            static belongsTo = [codeSetDetail: CodeSetDetail]

            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

        code_set_detail:
            code_set_detail_id
            code_summary_id

        code_summary:
            code_summary_id
            code_set_detail_id (should not exist)
            code_class
            name
            accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key of the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in code_summary, and map 'code_summary_id' in code_set_detail. How do I define a primary key in one table and also map this key to another table?

    Read the article

  • I cannot grok MVC: what is it, and what is it not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so the MVC stuff instantly switches a lightbulb on in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming?

    OOP is intuitive and easy: objects are all around us, and the benefits of code reuse under the OOP paradigm instantly click with anyone. You could talk to anybody about OOP for a few minutes, walk through some examples, and they would get it. But while OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite.

    I'm getting negative thoughts that some future employers (or even clients) will look down on me for not using MVC. I probably do get the skinnable aspect of MVC, but when I try to apply it to my own project, I don't know where to start. And some programmers even have diverging views on how to accomplish MVC properly. Take, for instance, this from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed. If you want a subset of some data, for example, my opinion is that is a responsibility of the model." So maybe some programmers use MVC, but inadvertently use the view or the controller to extract a subset of the data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly?

    Also, when I search for MVC .NET programs, most of what I find applies to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps, and that there's not much of a problem with intermixed view (HTML) and controller (program code) in desktop apps.
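
    Definitions aside, the pattern is small enough to show whole. A toy MVC in Python, with the seams visible: the model owns state and notifies observers, the view only renders, and the controller translates user input into model operations. Nothing here is framework-specific.

        class Model:
            def __init__(self):
                self.items, self._observers = [], []
            def subscribe(self, cb):
                self._observers.append(cb)
            def add(self, item):                      # state changes live here...
                self.items.append(item)
                for cb in self._observers:            # ...and are broadcast outward
                    cb(self)

        class View:
            def render(self, model):                  # layout/display only
                print("items:", ", ".join(model.items))

        class Controller:
            def __init__(self, model):
                self.model = model
            def handle_input(self, text):             # user gesture -> model operation
                self.model.add(text.strip())

        model, view = Model(), View()
        model.subscribe(view.render)
        Controller(model).handle_input("  first item ")   # prints: items: first item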

    Read the article

  • Is it possible to create static classes in PHP (like in C#)?

    - by aleemb
    I want to create a static class in PHP and have it behave like it does in C#, so that:

    - the constructor is automatically called on the first call to the class
    - no instantiation is required

    Something of this sort...

        static class Hello {
            private static $greeting = 'Hello';

            private __construct() {
                $greeting .= ' There!';
            }

            public static greet() {
                echo $greeting;
            }
        }

        Hello::greet(); // Hello There!
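
    PHP has no static constructors, so the usual emulation is class-level state plus a guard that runs once on first use. Here is the shape of that pattern sketched in Python; the PHP version would be a private static $initialized flag checked inside each public static method.

        class Hello:
            _greeting = None                      # class-level state, no instances needed

            @classmethod
            def _init(cls):
                if cls._greeting is None:         # runs once, on first use
                    cls._greeting = "Hello There!"

            @classmethod
            def greet(cls):
                cls._init()
                print(cls._greeting)

        Hello.greet()   # Hello There!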

    Read the article

  • Handling long-running, large transactions with Perl DBI

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulation of that data, then inserting the manipulated data into database B. I only have select permissions on database A, but I can create tables and insert/update etc. in database B.

    The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where an error happened, if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for the manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do it is to write a script that commits per group, which means I have to track which groups have already been inserted into database B. The only ways I can think of to track that progress are a log file or a table in database B. A second way is to dump all the fields needed for loading the classes into a flat file, then read the file to initialize the classes and insert into database B. This also means some logging, but it should narrow an error down to the exact row in the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:Oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;    # I have a list of these already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes
                # for insertion into database B
                # .
                # .
                # .

                print $fh "$g\n";    # log progress once this group is done
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run the script again and it should skip the groups or rows that have already been processed and continue.

    Both methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping what's already been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this, but I'm struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.

    Read the article

  • Memcache key generation strategy

    - by Maxim Veksler
    Given a function f1 that receives n String arguments, which would be considered the better key-generation strategy for memcache, for the scenario described below? Our memcache client does internal md5sum hashing on the keys it gets:

        public class MemcacheClient {
            public Object get(String key) {
                String md5 = Md5sum.md5(key);
                // Talk to memcached to get the serialization...
                return memcached(md5);
            }
        }

    First option:

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = s1 + s2 + s3 + s4;
            return get(key);
        }

    Second option:

        /**
         * Calculate hash from Strings
         *
         * @param strings vararg list of Strings
         * @return calculated md5sum hash
         */
        public static String stringHash(Object... strings) {
            if (strings == null)
                throw new NullPointerException("D'oh! Can't calculate hash for null");

            MD5 md5sum = new MD5();
            // if (prevHash != null)
            //     md5sum.Update(prevHash);

            for (int i = 0; i < strings.length; i++) {
                if (strings[i] != null) {
                    md5sum.Update("_" + strings[i] + "_");    // Convert to String...
                } else {
                    // If the object is null, allow minimum entropy by hashing its position
                    md5sum.Update("_" + i + "_");
                }
            }
            return md5sum.asHex();
        }

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = stringHash(s1, s2, s3, s4);
            return get(key);
        }

    Note that the possible problem with the second option is that we are doing a second md5sum (in the memcache client) on an already md5summed digest. Thanks for reading, Maxim.
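
    One thing worth noting before choosing: plain concatenation (the first option) makes distinct argument lists collide, since ("ab", "c") and ("a", "bc") produce the same key; that is exactly what the per-part delimiters in the second option prevent. Hashing an already-md5'ed hex digest a second time is harmless, just a few wasted cycles. A Python sketch of the delimited scheme:

        import hashlib

        def cache_key(*parts):
            # a separator that cannot appear in the data keeps the encoding
            # unambiguous; None gets its own marker so it differs from "None" and ""
            canon = "\x1f".join("\x00null" if p is None else str(p) for p in parts)
            return hashlib.md5(canon.encode("utf-8")).hexdigest()

        assert cache_key("ab", "c") != cache_key("a", "bc")   # no concatenation collision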

    Read the article

  • HMVC/PAC - how to handle shared abstractions/models?

    - by fig-gnuton
    In HMVC/PAC, what's the recommended way to structure the code when two or more triads/agents share a common model/abstraction? Do you instantiate a new instance of that model wherever it's needed, and propagate a change in one to all the other instances via the controllers? Or do you instantiate one model at some common upper level and inject that instance wherever it's needed? (Or neither, if I'm missing something fundamental about these patterns?)
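
    A common answer is the second option: create the model once at a common ancestor and inject that instance into every agent that needs it, with change notification built into the model so the controllers don't have to relay updates to one another. A small Python sketch:

        class SharedModel:
            def __init__(self, value=None):
                self.value, self._listeners = value, []
            def subscribe(self, cb):
                self._listeners.append(cb)
            def set(self, value):
                self.value = value
                for cb in self._listeners:            # model broadcasts its own changes
                    cb(value)

        shared = SharedModel()                        # created once, at the common level
        for name in ("agent-a", "agent-b"):           # injected into each triad/agent
            shared.subscribe(lambda v, n=name: print(n, "sees", v))

        shared.set(42)   # both agents observe the change; no controller-to-controller relay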

    Read the article

  • O(log n) algorithm for computing rank of union of two sorted lists?

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n)-time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a merge procedure to merge the two lists and then finding A[i] in constant time, but the merge would take O(n) time. How do we solve it in O(log n) time?
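
    The standard trick: to find the element of rank k you never need to merge. Compare the (k/2)-th remaining candidates of the two lists; whichever is smaller, its first k/2 elements all have rank below k and can be discarded, so each comparison roughly halves k, giving O(log k), which is within O(log n), comparisons. A sketch in Python (rank is 1-indexed; distinct elements assumed, as stated):

        def kth_of_union(a, b, k):
            """k-th smallest (1-indexed) element of the union of two sorted lists.
            Offsets are tracked instead of slicing, so the work really is logarithmic."""
            alo = blo = 0
            while True:
                if alo == len(a):
                    return b[blo + k - 1]             # a exhausted
                if blo == len(b):
                    return a[alo + k - 1]             # b exhausted
                if k == 1:
                    return min(a[alo], b[blo])
                i = min(len(a) - alo, k // 2)         # candidates to discard from a
                j = min(len(b) - blo, k // 2)         # ... or from b
                if a[alo + i - 1] < b[blo + j - 1]:
                    alo += i                          # a's first i elements rank below k
                    k -= i
                else:
                    blo += j                          # b's first j elements rank below k
                    k -= j

        print(kth_of_union([1, 3, 5, 7], [2, 4], 6))  # 7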

    Read the article

  • split a database web application - good idea or bad idea?

    - by Khou
    Is it a bad idea to split up an application and its database? Application1 uses database1 on ServerX; Application2 uses database2 on ServerY. The two applications communicate over a web-service API, but they are part of the same system: one manages users' profile/personal data, while the other manages users' financial data. Or should I just put them together and use one database on the same server?

    Read the article
