Search Results

Search found 19635 results on 786 pages for 'materialized path pattern'.

Page 555/786 | < Previous Page | 551 552 553 554 555 556 557 558 559 560 561 562  | Next Page >

  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is: "to reboot your linux host in single-user mode you can ..." I will use explode(" ", $row[page]), and the idea is then to look up every single word of the page to find out whether it has a related reference. In this example, let's say I've got a table reference with one entry for reboot and one for linux (reboot: restart a computer; linux: operating system). My output should now look like (with < and > replaced by @): to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...

    Instead of having a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do this? Should I store all the DB entries in an array and compare them? Do an SQL query for each word (seems to be crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? Or or or or... I'm sure there must be a better way of doing it, I just don't have a clue about it; or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
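
    One way to keep this cheap, sketching the "store all the DB entries in an array" idea (the PDO connection, the reference(term, slug) table and $pageText are assumptions): load the table once into a keyword map and let a single preg_replace_callback() pass do one hash lookup per word instead of one query per word.

        <?php
        // Load the reference table once into a keyword => slug map (cache this if it grows).
        $map = array();
        foreach ($pdo->query('SELECT term, slug FROM reference') as $row) {
            $map[strtolower($row['term'])] = $row['slug'];
        }

        // One pass over the page text: every word is checked against the map in O(1).
        $linked = preg_replace_callback('/\b\w+\b/', function ($match) use ($map) {
            $key = strtolower($match[0]);
            if (isset($map[$key])) {
                return '<a href="ref/' . $map[$key] . '">' . $match[0] . '</a>';
            }
            return $match[0];
        }, $pageText);

    Whether this beats one long generated regex or a "grep -f" pass depends mostly on how large the reference table gets, but it avoids per-word queries entirely.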

    Read the article

  • ANT propertyfile entry is not resolving to its value

    - by Brian
    I have a value in a properties file that I want to increment while the build is running. The goal is to copy a set of files and append a number to the front of each in order to maintain the order in which they were copied into the directory. I am using the <propertyfile> task as follows: <propertyfile file="jsfiles.properties"> <entry key="file.number" type="int" operation="=" value="10" /> <entry key="file.number" type="int" default="010" operation="+" value="10" pattern="000" /> </propertyfile> Then I do the copy: <copy todir="${js-in.dir}"> <resources> ... </resources> <chainedmapper> <flattenmapper /> <globmapper from="*.js" to="${file.number}-*.js"/> </chainedmapper> </copy> This does exactly what I need it to EXCEPT that instead of the following output: 010-file1.js 020-file2.js 030-file3.js ... I get: ${file.number}-file1.js ${file.number}-file2.js ${file.number}-file3.js ... What am I doing wrong?
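
    One thing worth checking: <propertyfile> rewrites jsfiles.properties on disk but never defines file.number as a property of the running build, which is why the mapper sees the literal ${file.number}. A sketch of a workaround that reads the value back in after incrementing it (Ant properties are immutable once set, so if this runs in a loop each iteration needs a fresh scope, e.g. via antcall or a macrodef; the source fileset is an assumption):

        <propertyfile file="jsfiles.properties">
            <entry key="file.number" type="int" default="000"
                   operation="+" value="10" pattern="000"/>
        </propertyfile>

        <!-- Pull the freshly written value back into the project so ${file.number} expands. -->
        <property file="jsfiles.properties"/>

        <copy todir="${js-in.dir}">
            <fileset dir="src/js" includes="*.js"/>
            <chainedmapper>
                <flattenmapper/>
                <globmapper from="*.js" to="${file.number}-*.js"/>
            </chainedmapper>
        </copy>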

    Read the article

  • Resolve php endless recursion issue

    - by Matt
    Hey all, I'm currently running into an endless recursion situation. I'm implementing a message service that calls various object methods; it's quite similar to the observer pattern. Here's what's going on in Dispatcher.php:

        class Dispatcher {
            ...
            public function message($name, $method) {
                // Find the object based on the name
                $object = $this->findObjectByName($name);

                // Slight pseudocode, for ease of example
                if ($this->not_initialized($object)) {
                    $object = new $object(); // This is where it locks up.
                }
                return $object->$method();
            }
            ...
        }

        class A {
            function __construct() {
                $Dispatcher->message("B", "getName");
            }
            public function getName() { return "Class A"; }
        }

        class B {
            function __construct() {
                // Assume $Dispatcher is the class's Dispatcher instance
                $Dispatcher->message("A", "getName");
            }
            public function getName() { return "Class B"; }
        }

    It locks up when neither object is initialized: they just message each other back and forth and neither can finish initializing. I'm looking for some kind of queue implementation that will make messages wait for each other, one where the return values still get set, and with as little boilerplate code in class A and class B as possible. Any help would be immensely helpful. Thanks! Matt Mueller

    Read the article

  • Using git filter-branch to remove commits by their commit message

    - by machineghost
    In our repository we have a convention where every commit message starts with a certain pattern: "Redmine #555: SOME_MESSAGE". We also do a bit of rebasing to bring the potential release branch's changes into a specific issue's branch. In other words, I might have branch "foo-555", but before I merge it in to branch "pre-release" I need to get any commits that pre-release has that foo-555 doesn't (so that foo-555 can fast-forward merge in to pre-release). However, because pre-release sometimes changes, we sometimes wind up with situations where you bring in a commit from pre-release, but then that commit later gets removed from pre-release. It's easy to identify commits that came from pre-release, because the number in their commit message won't match the branch number; for instance, if I see "Redmine #123: ..." in my foo-555 branch, I know that it's not a commit from my branch.

    So now the question: I'd like to remove all of the commits that "don't belong" to a branch; in other words, any commit that:

    1. is in my foo-555 branch, but not in the pre-release branch (pre-release..foo-555), and
    2. has a commit message that doesn't start with "Redmine #555" (where "555" will of course vary from branch to branch).

    Is there any way to use filter-branch (or any other tool) to accomplish this? Currently the only way I can see to do it is to do an interactive rebase ("git rebase -i") and manually remove all the "bad" commits.
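
    filter-branch can do the selection with a --commit-filter that skips commits whose message does not match; a sketch, best tried on a throw-away clone first (note that skip_commit only removes the commits from the graph - it does not undo their changes in later trees, so if the content changes must go too, the interactive rebase is still the tool):

        # Rewrite foo-555, considering only commits not already in pre-release.
        git filter-branch --commit-filter '
            msg=$(cat)                                  # commit message arrives on stdin
            if echo "$msg" | grep -q "^Redmine #555"; then
                echo "$msg" | git commit-tree "$@"      # keep the commit
            else
                skip_commit "$@"                        # drop it, reparenting its children
            fi
        ' -- pre-release..foo-555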

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to (1) develop a project that depends on code of other projects and (2) deploy the resulting project to the server. I am planning to put my code in svn, and have the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with php projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code. Ideally I really want to be able to develop simultaneously on one project and the projects it depends on. Possible way: check out a project's dependency in a sub folder as a working copy of the trunk. Problems I foresee: When you want to deploy a project, you might want to freeze its dependencies, right? The dependency code should not end up as a duplicate in the project's repository, I think. (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks, see my comment. I am still looking for suggestions that do not require the use of junction points; they are a sort of unsupported hack in WinXP which may break some programs.) This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly related to the Python ecosystem (resolving and fetching Python modules from the web etc). I am very eager to learn about your best practices.

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in php which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..." Where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
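
    For the memory half of the question, one sketch of an approach: read the file in fixed-size chunks and only ever keep the current incomplete record in a buffer, so a 200,000+ character line never has to sit in memory in full, let alone twice (the [Table] header handling is deliberately simplified here):

        <?php
        $handle = fopen('records.txt', 'rb');
        $buffer = '';

        while (!feof($handle)) {
            $buffer .= fread($handle, 8192);          // pull in one chunk at a time

            // Everything up to the last backslash is a run of complete records.
            $cut = strrpos($buffer, '\\');
            if ($cut === false) {
                continue;                             // no complete record yet
            }
            $complete = substr($buffer, 0, $cut);
            $buffer   = substr($buffer, $cut + 1);    // keep the partial tail for next round

            foreach (explode('\\', $complete) as $record) {
                if ($record === '' || $record[0] === '[') {
                    continue;                         // skip the [Table] = " prefix (simplified)
                }
                $fields = explode(',', $record);      // e.g. ["Han", "Solo", "Smuggler"]
                // ... build the INSERT for the current table here ...
            }
        }
        fclose($handle);

    It still touches every character once for the delimiter scan and once for the field split, but the working set stays a few kilobytes instead of the whole line.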

    Read the article

  • How to receive Email in JEE application

    - by Hank
    Obviously it's not so difficult to send out emails from a JEE application via JavaMail. What I am interested in is the best pattern to receive emails (notification bounces, mostly). I am not interested in IMAP/POP3-based approaches (polling the inbox) - my application shall react to inbound emails.

    One approach I could think of would be:

    1. Keep the existing MTA (postfix on linux in my case) - the ops team already knows how to configure / operate it.
    2. For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper", with javahelper calling the Java app, passing STDIN along.
    3. An MDB (part of the JEE application) receives the JMS message, parses it, detects bounce messages and acts accordingly.

    Another approach could be:

    1. Open a listening network socket on port 25 on the JEE application container and associate a SessionBean with the socket. The bean is part of the JEE application and can parse / detect bounces / handle the messages directly.
    2. Keep the existing MTA as an inbound relay, let it do all its security/spam filtering, but forward emails to myuser (that pass the filter) to the JEE application container, port 25.

    The first approach I have done before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know if it's at all possible to connect a network socket with a bean... What is your recommendation? Do you have details about the second approach?

    Read the article

  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#. As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and the only way to get a clean string was to pass through some SQL-injection-attack-preventing logic. The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious if anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance impact of abstracted primitive numeric types on my calculations at runtime.
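
    For the degrees/radians case specifically, the wrapping can be kept cheap by making the wrappers structs, so there is no heap allocation and the conversions are trivial candidates for inlining; a minimal sketch of the pattern (an illustration, not a claim about measured overhead):

        using System;

        public struct Radians
        {
            private readonly double value;
            public Radians(double value) { this.value = value; }
            public double Value { get { return value; } }

            public static explicit operator Radians(Degrees d)
            {
                return new Radians(d.Value * Math.PI / 180.0);
            }
        }

        public struct Degrees
        {
            private readonly double value;
            public Degrees(double value) { this.value = value; }
            public double Value { get { return value; } }

            public static explicit operator Degrees(Radians r)
            {
                return new Degrees(r.Value * 180.0 / Math.PI);
            }
        }

        // A method declared as Sweep(Radians angle) can no longer silently receive degrees:
        //     Sweep(new Degrees(90));            // compile-time error
        //     Sweep((Radians)new Degrees(90));   // explicit, intentional conversion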

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops, with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest? Over to you. Thanks in advance :D
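
    Since the only parts that vary are "find the substrings in a line" and "scrub one substring", one shape that stays DRY is a single loader method that takes those two operations as callables and owns the buffering and flushing itself; a rough sketch with made-up names:

        class BulkLoader
          BUFFER_LIMIT = 500

          def initialize(dbc)
            @dbc = dbc
          end

          # matcher:  extracts the interesting substrings from one line
          # scrubber: turns one substring into an array of clean field values
          def load(lines, matcher, scrubber)
            buffer = []
            lines.each do |line|
              matcher.call(line).each do |sub_str|
                buffer << prepared_insert(scrubber.call(sub_str))
              end
              if buffer.size >= BUFFER_LIMIT
                @dbc.insert(buffer)
                buffer = []
              end
            end
            @dbc.insert(buffer) unless buffer.empty?
          end

          private

          def prepared_insert(fields)
            # build the SQL insert statement / row tuple here
            fields
          end
        end

        # loader.load(File.foreach("data.txt"),
        #             lambda { |line|  line.scan(/some regex/) },
        #             lambda { |chunk| chunk.gsub(/junk/, "").split(",") })

    Passing the two procs as plain positional arguments keeps the call sites short; if the scrubbing step is usually the only interesting one, it could instead be the method's block and the matcher a keyword-style option.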

    Read the article

  • Are monads Writer m and Either e categorically dual?

    - by sdcvvc
    I noticed there is a dual relation between Writer m and Either e monads. If m is a monoid, then unit :: () -> m join :: (m,m) -> m can be used to form a monad: return is composition: a -> ((),a) -> (m,a) join is composition: (m,(m,a)) -> ((m,m),a) -> (m,a) The dual of () is Void (empty type), the dual of product is coproduct. Every type e can be given "comonoid" structure: unit :: Void -> e join :: Either e e -> e in the obvious way. Now, return is composition: a -> Either Void a -> Either e a join is composition: Either e (Either e a) -> Either (Either e e) a -> Either e a and this is the Either e monad. The arrows follow exactly the same pattern. Question: Is it possible to write a single generic code that will be able to perform both as Either e and as Writer m depending on the monoid given?

    Read the article

  • How to replace pairs of strings in two files to identical IDs?

    - by Péter Török
    Sorry if the title is not very intelligible, I couldn't come up with anything better. Hopefully my explanation is clear enough: I have a pair of rather large log files with very similar content, except that some strings are different between the two. A couple of examples: UnifiedClassLoader3@19518cc | UnifiedClassLoader3@d0357a JBossRMIClassLoader@13c2d7f | JBossRMIClassLoader@191777e That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. [Update] There are about 40 distinct pairs of such identifiers.[/Update] I want to replace these with identical IDs so that I can spot the really important differences between the two files. I.e. I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2 etc. Using the Cygwin shell, so far I managed to list all different identifiers occurring in one of the files with grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq However, now the original order is lost, so I don't know which is the pair of which ID in the other file. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately grep can not print only the first match of a pattern. I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like 2 ClassLoader3@19518cc 137 ClassLoader@13c2d7f 563 ClassLoader3@1267649 ... Then I could (either manually or with sed itself) massage this into a sed command like sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution to this. Is there any flaw in this approach? Is there a simpler way?
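
    For the "first occurrence only, in original order" step, awk can replace the grep -n / head -n 1 juggling, and once both files emit their identifiers in the same order the pairs line up by line number, so the sed scripts can be generated mechanically. A sketch of that plan, assuming the two files really do introduce the paired identifiers in the same order:

        # Unique identifiers in order of first appearance, one list per file.
        grep -o 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | awk '!seen[$0]++' > ids1.txt
        grep -o 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file2.log | awk '!seen[$0]++' > ids2.txt

        # Build one sed script per file: the Nth id in each file maps to the same "@N".
        paste -d' ' ids1.txt ids2.txt | awk '{
            n = NR
            split($1, a, "@"); print "s/" $1 "/" a[1] "@" n "/g" > "fix1.sed"
            split($2, b, "@"); print "s/" $2 "/" b[1] "@" n "/g" > "fix2.sed"
        }'

        sed -f fix1.sed file1.log > file1_processed.log
        sed -f fix2.sed file2.log > file2_processed.log

    If the order-of-appearance assumption does not hold, the pairing step is the only part that needs replacing; the rest stays the same.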

    Read the article

  • Wicket: How to implement an IDataProvider/LoadableDetachableModel for indexed lists

    - by cretzel
    What's the best way to implement an IDataProvider and a LoadableDetachable in Wicket for an indexed list? Suppose I have a Customer who has a list of Adresses. class Customer { List adresses; } Now I want to implement a data provider/ldm for the adresses of a customer. I suppose the usual way is an IDataProvider as an inner class which refers to the customer model of the component, like: class AdressDataProvider implements IDataProvider { public Iterator iterator() { Customer c = (Customer)Component.this.getModel(); // somehow get the customer model return c.getAdresses().iterator(); } public IModel model(Object o) { Adress a = (Adress) o; // Return an LDM which loads the adress by id. return new AdressLoadableDetachableModel(a.getId()); } } Question: How would I implement this, when the adress does not have an ID (e.g. it's a Hibernate Embeddable/CollectionOfElements) but can only be identified by its index in the customer.adresses list? How do I keep reference to the owning entity and the index? In fact, I know a solution, but I wonder if there's a common pattern to do this.

    Read the article

  • How to use keyword this in a mouse wrapper in right context in Javascript?

    - by MartyIX
    Hi, I'm trying to write a simple wrapper for mouse behaviour. This is my current code:

        function MouseWrapper() {
            this.mouseDown = 0;
            this.OnMouseDownEvent = null;
            this.OnMouseUpEvent = null;

            document.body.onmousedown = this.OnMouseDown;
            document.body.onmouseup = this.OnMouseUp;
        }

        MouseWrapper.prototype.Subscribe = function (eventName, fn) {
            // Subscribe a function to the event
            if (eventName == 'MouseDown') {
                this.OnMouseDownEvent = fn;
            } else if (eventName == 'MouseUp') {
                this.OnMouseUpEvent = fn;
            } else {
                alert('Subscribe: Unknown event.');
            }
        }

        MouseWrapper.prototype.OnMouseDown = function () {
            this.mouseDown = 1;

            // Fire event
            $.dump(this.OnMouseDownEvent);
            if (this.OnMouseDownEvent != null) {
                alert('test');
                this.OnMouseDownEvent();
            }
        }

        MouseWrapper.prototype.OnMouseUp = function () {
            this.mouseDown = 0;

            // Fire event
            if (this.OnMouseUpEvent != null) {
                this.OnMouseUpEvent();
            }
        }

    From what I gathered, it seems that in MouseWrapper.prototype.OnMouseDown and MouseWrapper.prototype.OnMouseUp the keyword "this" doesn't mean the current instance of MouseWrapper but something else. It makes sense that it doesn't point to my instance, but how do I solve the problem? I want to solve it properly, I don't want to use something dirty. My thinking:

    * use a singleton pattern (the mouse is only one, after all)
    * somehow pass my instance to OnMouseDown/Up - how?

    Thank you for help!
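
    A common way out is to capture the instance in a closure when wiring up the DOM handlers, so that "this" inside the prototype methods is the MouseWrapper instance rather than document.body; only the constructor needs to change:

        function MouseWrapper() {
            this.mouseDown = 0;
            this.OnMouseDownEvent = null;
            this.OnMouseUpEvent = null;

            var self = this;  // capture the instance for the DOM callbacks
            document.body.onmousedown = function () { self.OnMouseDown(); };
            document.body.onmouseup   = function () { self.OnMouseUp(); };
        }

    Function.prototype.bind expresses the same thing more tersely (document.body.onmousedown = this.OnMouseDown.bind(this)), but older browsers need a shim for bind.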

    Read the article

  • Avoiding a fork()/SIGCHLD race condition

    - by larry
    Please consider the following fork()/SIGCHLD pseudo-code. // main program excerpt for (;;) { if ( is_time_to_make_babies ) { pid = fork(); if (pid == -1) { /* fail */ } else if (pid == 0) { /* child stuff */ print "child started" exit } else { /* parent stuff */ print "parent forked new child ", pid children.add(pid); } } } // SIGCHLD handler sigchld_handler(signo) { while ( (pid = wait(status, WNOHANG)) > 0 ) { print "parent caught SIGCHLD from ", pid children.remove(pid); } } In the above example there's a race-condition. It's possible for "/* child stuff */" to finish before "/* parent stuff */" starts which can result in a child's pid being added to the list of children after it's exited, and never being removed. When the time comes for the app to close down, the parent will wait endlessly for the already-finished child to finish. One solution I can think of to counter this is to have two lists: started_children and finished_children. I'd add to started_children in the same place I'm adding to children now. But in the signal handler, instead of removing from children I'd add to finished_children. When the app closes down, the parent can simply wait until the difference between started_children and finished_children is zero. Another possible solution I can think of is using shared-memory, e.g. share the parent's list of children and let the children .add and .remove themselves? But I don't know too much about this. EDIT: Another possible solution, which was the first thing that came to mind, is to simply add a sleep(1) at the start of /* child stuff */ but that smells funny to me, which is why I left it out. I'm also not even sure it's a 100% fix. So, how would you correct this race-condition? And if there's a well-established recommended pattern for this, please let me know! Thanks.
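
    One well-trodden fix that needs neither a second list nor shared memory is to block SIGCHLD around the fork-and-register step, so the handler cannot run until the parent has added the pid; roughly, as a C-style sketch of the loop body (children_add stands in for the children.add of the pseudo-code):

        /* needs <signal.h>, <sys/types.h>, <unistd.h> */
        sigset_t block, oldmask;
        sigemptyset(&block);
        sigaddset(&block, SIGCHLD);

        /* nothing can be reaped between here ... */
        sigprocmask(SIG_BLOCK, &block, &oldmask);

        pid_t pid = fork();
        if (pid == 0) {
            sigprocmask(SIG_SETMASK, &oldmask, NULL);   /* child: restore the mask */
            /* child stuff */
            _exit(0);
        } else if (pid > 0) {
            /* parent: register the pid BEFORE SIGCHLD can be delivered */
            children_add(pid);
        }

        /* ... and here, so the handler always finds the pid in the list */
        sigprocmask(SIG_SETMASK, &oldmask, NULL);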

    Read the article

  • Android's DateFormat replacement - missing the format() with FieldPosition

    - by user331244
    Hi, I need to split a date string into pieces and I'm doing it using the public final StringBuffer format (Object object, StringBuffer buffer, FieldPosition field) from the java.text.DateFormat class. However, the implementation of this function is really slow, hence Android has an own implementation in android.text.format.DateFormat. BUT, in my case, I want to extract the different pieces of the date string (year, minute and so on). Since I need to be locale independent, I can not use SimpleDateFormat and custom strings. I do it as follows: Calendar c = ... // find out what field to extract int field = getField(); // Create a date string Field calendarField = DateFormat.Field.ofCalendarField(field); FieldPosition fieldPosition = new FieldPosition(calendarField); StringBuffer label = new StringBuffer(); label = getDateFormat().format(c.getTime(), label, fieldPosition); // Find the piece that we are looking for int beginIndex = fieldPosition.getBeginIndex(); int endIndex = fieldPosition.getEndIndex(); String asString = label.substring(beginIndex, endIndex); For some reason, the format() overload with the FieldPosition argument is not included in the android platform. Any ideas of how to do this in another way? Is there any easy way to tokenize the pattern string? Any other ideas?

    Read the article

  • WPF/MVVM - keep ViewModels in application component or separate?

    - by Anders Juul
    Hi all, I have a WPF application where I'm using the MVVM pattern. I get the VM activated for actions that require user input and thus need to activate views from the VM. I've started out separating the VMs into a separate component/assembly, partly because I see them as the unit testable part, partly because views should rely on VM, not the other way round. But when I then need to bring up a window, the window is not known by the VM. All introductions I find place the VM in the WPF/App component, thus eliminating the problem. This article recommends keeping them in separate layers : http://waf.codeplex.com/wikipage?title=Architecture%20-%20Get%20The%20Big%20Picture&referringTitle=Home As I see it, I have the following choices Move VMs to the WPF/App assembly to allow VMs to access the windows directly. Place interfaces of views in VM-assembly, implement views in WPF/App assembly and register the connection through IOC or alternative ways. File a 'request' from the VM into some mechanism/bus that routes the request (but which mechanism!? E.g something in Prism?!) What's the recommendation? Thanks for any comments, Anders, Denmark
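
    Option 2 is the route most MVVM guidance ends up recommending: the VM assembly owns only a small interface, the WPF/App assembly owns the window-showing implementation, and the IoC container wires them together, so the compile-time dependency still points from App to VM. A bare-bones sketch (all names invented):

        // In the ViewModel assembly: only an abstraction, no WPF types.
        public interface IDialogService
        {
            bool? ShowDialog(object viewModel);
        }

        public class EditCustomerViewModel
        {
            private readonly IDialogService _dialogs;

            public EditCustomerViewModel(IDialogService dialogs)
            {
                _dialogs = dialogs;
            }

            public void EditAddress()
            {
                // The VM asks for a dialog without ever touching a Window type.
                _dialogs.ShowDialog(new EditAddressViewModel());
            }
        }

        // In the WPF/App assembly: the only code that knows about windows.
        public class DialogService : IDialogService
        {
            public bool? ShowDialog(object viewModel)
            {
                var window = new DialogWindow { DataContext = viewModel };
                return window.ShowDialog();
            }
        }

        // Composition root, e.g. with Unity:
        // container.RegisterType<IDialogService, DialogService>();

    A message/request bus (your option 3) generalizes the same idea; the interface version is just the smallest thing that keeps Window types out of the VM assembly.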

    Read the article

  • Set form MinWidth and MinHeight based on child property

    - by Jon Mitchell
    I'm writing an application in WPF using the MVVM pattern. In my application I've got an IPopupWindowService which I use to create a popup dialog window. So to show a ViewModel in a popup window you'd do something like this: var container = ServiceLocator.Current.GetInstance<IUnityContainer>(); var popupService = container.Resolve<IPopupWindowService>(); var myViewModel = container.Resolve<IMyViewModel>(); popupService.Show((ViewModelBase)myViewModel); This is all well and good. What I want to do is be able to set the MinHeight and MinWidth on the View bound to myViewModel and have the popup window use those settings so that a user cannot make the window smaller than its contents will allow. At the moment when the user shrinks the window the contents stops resizing but the window doesn't. EDIT: I map my Views to my ViewModels in ResourceDictionarys like so: <DataTemplate DataType="{x:Type ViewModels:MyViewModel}"> <Views:MyView /> </DataTemplate> And my popup view looks like this: <Window x:Class="TheCompany.Cubit.Shell.Views.PopupWindowView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" SizeToContent="WidthAndHeight" WindowStartupLocation="CenterOwner"> <DockPanel x:Name="panelContent"> <StackPanel HorizontalAlignment="Right" DockPanel.Dock="Bottom" Orientation="Horizontal" Visibility="{Binding RelativeSource={RelativeSource AncestorType=Window},Path=ButtonPanelVisibility}"> <Button Width="75" IsDefault="True" x:Uid="ViewDialog_AcceptButton" Click="OnAcceptButtonClick" Margin="4">OK</Button> <Button Width="75" IsCancel="True" x:Uid="ViewDialog_CancelButton" Click="OnCancelButtonClick" Margin="0,4,4,4">Cancel</Button> </StackPanel> <ContentPresenter Content="{Binding}" /> </DockPanel>
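
    Since the window already uses SizeToContent, one low-ceremony option is to freeze the size produced by the first layout pass as the minimum, e.g. in ContentRendered; a sketch for the popup's code-behind (this sidesteps binding to the child view's MinWidth/MinHeight entirely):

        public partial class PopupWindowView : Window
        {
            public PopupWindowView()
            {
                InitializeComponent();

                // After SizeToContent has measured the content, the actual size is the
                // smallest size that fits it, so lock that in as the minimum.
                ContentRendered += delegate
                {
                    MinWidth  = ActualWidth;
                    MinHeight = ActualHeight;
                };
            }
        }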

    Read the article

  • WCF service reference namespace differs from original

    - by Thorarin
    I'm having a problem regarding namespaces used by my service references. I have a number of WCF services, say with the namespace MyCompany.Services.MyProduct (the actual namespaces are longer). As part of the product, I'm also providing a sample C# .NET website. This web application uses the namespace MyCompany.MyProduct. During initial development, the service was added as a project reference to the website and uses directly. I used a factory pattern that returns an object instance that implements MyCompany.Services.MyProduct.IMyService. So far, so good. Now I want to change this to use an actual service reference. After adding the reference and typing MyCompany.Services.MyProduct in the namespace textbox, it generates classes in the namespace MyCompany.MyProduct.MyCompany.Services.MyProduct. BAD! I don't want to have to change using directives in several places just because I'm using a proxy class. So I tried prepending the namespace with global::, but that is not accepted. Note that I hadn't even deleted the original assembly references yet, and "reuse types" is enabled, but no reusing was done, apparently. However, I don't want to keep the assembly references around in my sample website for it to work anyway. The only solution I've come up with so far is setting the default namespace for my web application to MyCompany (because it cannot be empty), and adding the service reference as Services.MyProduct. Suppose that a customer wants to use my sample website as a starting point, and they change the default namespace to OtherCompany.Whatever, this will obviously break my workaround. Is there a good solution to this problem? To summarize: I want to generate a service reference proxy in the original namespace, without referencing the assembly. Note: I have seen this question, but there was no solution provided that is acceptable for my use case. Edit: As John Saunders suggested, I've submitted some feedback to Microsoft about this: Feedback item @ Microsoft Connect
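
    One workaround worth checking: generate the proxy with svcutil.exe instead of the Add Service Reference dialog, since svcutil writes exactly the namespace it is given rather than nesting it under the project's default namespace; a hedged example (the metadata URL is made up):

        svcutil http://localhost/MyProduct/MyService.svc?wsdl /namespace:*,MyCompany.Services.MyProduct /out:ServiceProxy.cs

    The generated ServiceProxy.cs can then be added to the sample site (or a small client assembly) and regenerated whenever the contract changes.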

    Read the article

  • Array of templated structs

    - by Jakub Mertlik
    I have structs templated by int, derived from a Base struct:

        struct Base {
            int i;
            double d;
        };

        template< int N >
        struct Derv : Base {
            static const int mN = N;
        };

    I need to make an array of Derv<N> where N can vary for each struct in that array. I know C/C++ does not allow arrays of objects of different types, but is there a way around this? I was thinking of separating the type information somehow (hints like pointers to the Base struct or usage of a union spring to my mind, but with all of these I don't know how to store the type information of each array element for usage DURING COMPILE TIME). As you can see, the memory pattern of each Derv<N> is the same. I need to access the type of each array element for template specialization later in my code. The general aim of all this is to have a compile-time dispatch mechanism without the need to do a runtime "type switch" somewhere in the code. Thank you.
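
    If the set of N values is known at compile time, a std::tuple can play the role of the array while every element keeps its exact type, so ordinary overloading/specialization does the dispatch; a sketch (it uses C++17 std::apply and fold expressions, so it needs a current compiler - the older equivalent would be Boost.Fusion or hand-written index recursion):

        #include <tuple>

        struct Base { int i; double d; };

        template< int N >
        struct Derv : Base { static const int mN = N; };

        // Generic handling: N is a compile-time constant here.
        template< int N >
        void process(Derv<N>& d) { /* ... */ }

        // Handling specialized for one particular element type.
        void process(Derv<3>& d) { /* ... */ }

        int main()
        {
            // A heterogeneous "array": every element keeps its exact type.
            std::tuple< Derv<1>, Derv<3>, Derv<7> > items;

            // Call process() on every element; the right overload is chosen
            // per element at compile time - no runtime type switch.
            std::apply([](auto&... e) { (process(e), ...); }, items);
        }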

    Read the article

  • [Delphi] How would you refactor this code?

    - by Al C
    This hypothetical example illustrates several problems I can't seem to get past, even though I keep trying!! ... Suppose the original code is a long event handler, coded in the UI, triggered when a user clicks a cell in a grid. Expressed as pseudocode it's: if Condition1=true then begin //loop through every cell in row, //if aCell/headerCellValue>1 then //color aCell red end else if Condition2=true then begin //do some other calculation adding cell and headerCell values, and //if some other product>2 then //color the whole row green end else show an error message I look at this and say "Ah, refactor to the strategy pattern! The code will be easier to understand, easier to debug, and easier to later extend!" I get that. And I can easily break the code into multiple procedures. The problem is ultimately scope related. Assume the pseudocode makes extensive use of grid properties, values displayed in cells, maybe even built-in grid methods. How do you move all that to another unit, without referencing the grid component in the UI--which would break all the "rules" about loose coupling that make OOP valuable? ... I'm really looking forward to responses. Thanks, as always -- Al C.

    Read the article

  • .Net Entity Framework & POCO ... querying full table problem

    - by Chris Klepeis
    I'm attempting to implement a repository pattern with my POCO objects auto-generated from my edmx. In my repository class, I have:

        IObjectSet<E> _objectSet;

        private IObjectSet<E> objectSet
        {
            get
            {
                if (_objectSet == null)
                {
                    _objectSet = this._context.CreateObjectSet<E>();
                }
                return _objectSet;
            }
        }

        public IQueryable<E> GetQuery(Func<E, bool> where)
        {
            return objectSet.Where(where).AsQueryable<E>();
        }

        public IList<E> SelectAll(Func<E, bool> where)
        {
            return GetQuery(where).ToList();
        }

    where E is one of my POCO classes. When I trace the database and run this:

        IList<Contact> c = contactRepository.SelectAll(r => r.emailAddress == "[email protected]");

    it shows up in the SQL trace as a select for everything in my Contact table. Where am I going wrong here? Is there a better way to do this? Does an ObjectSet not lazy load... so it omitted the where clause? This is the article I read which said to use ObjectSets, since with POCO I do not have EntityObjects to pass into "E": http://devtalk.dk/CommentView,guid,b5d9cad2-e155-423b-b66f-7ec287c5cb06.aspx
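
    The full-table SELECT is most likely caused by the Func<E, bool> parameter: passing a plain delegate picks Enumerable.Where, which materializes every row and filters in memory, while Queryable.Where needs an Expression<Func<E, bool>> to translate the predicate into a SQL WHERE clause. A sketch of the change (with using System.Linq and using System.Linq.Expressions):

        public IQueryable<E> GetQuery(Expression<Func<E, bool>> where)
        {
            // IQueryable.Where(Expression<...>) is translated to SQL by the provider,
            // so the filter runs in the database, not in memory.
            return objectSet.Where(where);
        }

        public IList<E> SelectAll(Expression<Func<E, bool>> where)
        {
            return GetQuery(where).ToList();
        }

        // The call site stays the same; the lambda now becomes an expression tree:
        // IList<Contact> c = contactRepository.SelectAll(r => r.emailAddress == "user@example.com");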

    Read the article

  • Need help with strange Class#getResource() issue

    - by Andreas_D
    I have some legacy code that reads a configuration file from an existing jar, like: URL url = SomeClass.class.getResource("/configuration.properties"); // some more code here using url variable InputStream in = url.openStream(); Obviously it worked before but when I execute this code, the URL is valid but I get an IOException on the third line, saying it can't find the file. The url is something like "file:jar:c:/path/to/jar/somejar.jar!configuration.properties" so it doesn't look like a classpath issue - java knows pretty well where the file can be found.. The above code is part of an ant task and it fails while the task is executed. Strange enough - I copied the code and the jar file into a separate class and it works as expected, the properties file is readable. At some point I changed the code of the ant task to URL url = SomeClass.class.getResource("/configuration.properties"); // some more code here using url variable InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties"); and now it works - just until it crashes in another class where a similiar access pattern is implemented.. Why could it have worked before, why does it fail now? The only difference I see at the moment is, that the old build was done with java 1.4 while I'm trying it with Java 6 now. Workaround Today I installed Java 1.4.2_19 on the build server and made ant to use it. To my totally frustrating surprise: The problem is gone. It looks to me, that java 1.4.2 can handle URLs of this type while Java 1.6 can't (at least in my context/environment). I'm still hoping for an explanation although I'm facing the work to rewrite parts of the code to use Class#getRessourceAsStream which behaved much more stable...

    Read the article

  • Should a Trim method generally go in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database and may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted, so that it is done only once and the data is then in a consistent and reliable state. Unfortunately this is not the case here, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net results; however, the trim will then be operating on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. I guess as a somewhat philosophical question that may determine the answer I could ask "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question.

    Update: Below are a few links that I thought I should share about the topic of data cleansing.

    * Information service patterns, Part 3: Data cleansing pattern
    * LINQ to SQL - Format a string before saving?
    * How to trim values using Linq to Sql?

    Read the article

  • How should we setup up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called it will do what I expect. Let's take a very simple scenario. Assume we have a contract object that we can put on hold or take off hold. Now writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question I have comes when we want to test the take-off-hold service call. The problem is that putting a contract on hold can actually be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that now I have to pretty much replicate a large part of my put-on-hold service, and when I change what putting something on hold means I have two places to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do since it can lead to failures in one spot breaking unrelated tests elsewhere (if put-on-hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable and why?

    Read the article

  • Entity Framework in layered architecture

    - by Kamyar
    I am using a layered architecture with the Entity Framework. Here's what I came up with so far (all the projects except UI are class libraries):

    * Entities: The POCO entities. Completely persistence-ignorant. No references to other projects. Generated by Microsoft's ADO.Net POCO Entity Generator.
    * DAL: The EDMX (entity model) file with the context class (T4-generated). References: Entities.
    * BLL: Business logic layer. Will implement the repository pattern on this layer. References: Entities, DAL. This is where the ObjectContext gets populated: var ctx = new DAL.MyDBEntities();
    * UI: The presentation layer: an ASP.NET website. References: Entities, BLL + a connection string entry for Entities in the config file (question #2).

    Now my three questions:

    1. Is my layer distinction approach correct?
    2. In my UI, I access the BLL as follows: var customerRep = new BLL.CustomerRepository(); var Customer = customerRep.GetByID(myCustomerID); The problem is that I have to define the entities connection string in my UI's web.config/app.config, otherwise I get a runtime exception. Does defining the entities connection string in the UI spoil the layers' distinction, or is it acceptable in a multi-layered architecture?
    3. Should I take any additional steps to get change tracking, lazy loading, etc. (by etc. I mean the features that the Entity Framework covers in a conventional, one-project, non-POCO code generation)?

    Thanks and apologies for the lengthy question.

    Read the article
