Search Results

Search found 63182 results on 2528 pages for 'data driven tests'.


  • Maven2 compiles my tests, but doesn't run them

    - by Vincenzo
    I have a simple Maven2 project with tests written for TestNG. When I run mvn test, Maven2 compiles my tests but doesn't run them. I already checked this page: http://maven.apache.org/general.html#test-property-name. This is not my case. Can anybody help? My directory structure:

        pom.xml
        src/main/java/com/...
        src/test/java/com/...
        target/classes       <— .class files go there
        target/test-classes  <— .class files with tests go there

    This is what I see if I run mvn -X test (end of the log):

        ...
        [INFO] Surefire report directory: <mydir>/target/surefire-reports
        Forking command line: /bin/sh -c cd <mydir> && /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java -jar /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefirebooter7645642850235508331.jar /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefire4544000548243268568tmp /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefire7481499683857473873tmp
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running TestSuite
        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec
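
    For reference, the tests themselves are plain TestNG-annotated classes, along these lines (a simplified, made-up example rather than my real code):

        import org.testng.annotations.Test;
        import static org.testng.Assert.assertEquals;

        // Minimal TestNG test class; Surefire should pick this up from src/test/java
        public class SimpleTest {

            @Test
            public void shouldAddNumbers() {
                assertEquals(2 + 2, 4);
            }
        }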

    Read the article

  • .NET TDD with a Database and ADO.NET Entity Framework - Integration Tests

    - by Brian
    Hello, I'm using ADO.NET Entity Framework, and am using an AdventureWorks database attached to my local database server. For unit testing, what approaches have people taken to work with a database? Obviously, the database has to be in a pre-defined state so that the tests have some isolation from each other... so I need to be able to run through the inserts and updates, then roll back either between tests or after the batch of tests is done. Any advice? Thanks.

    Read the article

  • Silverlight Unit Testing Framework running tests in external class library

    - by Jonas Follesø
    I'm currently looking into different options for unit testing Silverlight applications. One of the frameworks available is the Silverlight Unit Test Framework from Microsoft (developed primarily by Jeff Wilcox, http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). One of the scenarios I'm looking into is running the same tests on both Silverlight 3 (PC) and Windows Phone 7. The Silverlight Unit Test Framework (SLUT) runs on both PC and phone. To avoid having to copy or link files, I would like to put my tests into a shared test library that can be loaded by either a WP7 application using SLUT or a Silverlight 3 application using SLUT. So my question is: will SLUT load unit tests defined in a referenced class library, or only in the executing assembly?

    Read the article

  • Running JUnit tests in parallel?

    - by krosenvold
    I'm using JUnit 4.4 and Maven, and I have a large number of long-running integration tests. When it comes to parallelizing test suites, there are a few solutions that allow me to run each test method in a single test class in parallel. But all of these require that I change the tests in one way or another. I really think it would be a much cleaner solution to run X different test classes in X threads in parallel. I have hundreds of tests, so I don't really care about threading individual test classes. Is there any way to do this?
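
    To make it concrete, the kind of thing I'm after could be sketched like this (the test class names are placeholders, and this is only a rough illustration, not something I've battle-tested):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;

        public class ParallelClassRunner {
            public static void main(String[] args) throws Exception {
                // Placeholder test classes; the real suite would list the actual ones.
                List<Class<?>> testClasses = Arrays.<Class<?>>asList(
                        FooIntegrationTest.class, BarIntegrationTest.class);

                ExecutorService pool = Executors.newFixedThreadPool(testClasses.size());
                List<Future<Result>> results = new ArrayList<Future<Result>>();
                for (final Class<?> testClass : testClasses) {
                    results.add(pool.submit(new Callable<Result>() {
                        public Result call() {
                            // One test class per thread; methods within a class stay sequential.
                            return JUnitCore.runClasses(testClass);
                        }
                    }));
                }
                for (Future<Result> future : results) {
                    Result result = future.get();
                    System.out.println("Run: " + result.getRunCount()
                            + ", failures: " + result.getFailureCount());
                }
                pool.shutdown();
            }
        }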

    Read the article

  • How to access project files from NUnit tests

    - by Daren Thomas
    I have some tests that I run with ReSharper's "Run All Tests from Solution" feature. One of the classes being tested has a dependency on a file in the same folder as the assembly containing it. This file is copied to the output directory via MSBuild ("Copy To Output Directory" set to "Copy always"). Problem: the tests are not being run from the normal assembly output directory, but instead from some temporary location in my user profile. Therefore, I don't really know where to look for the file - the test runner does not copy it there. Can I force it to?

    Read the article

  • Who wrote 250k tests for WebKit?

    - by amwinter
    Assuming a yield of 3 tests per hour, 250k tests is about 83,000 hours. At 8 hours a day that makes roughly 10,400 days; divide by thirty to get around 350 mythical man-months. I call them mythical because writing 125 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you. Update: chrisw thinks there are only 20k tests (check out his explanation below). PS: I'd really like to hear from folks who have worked on projects with large test bases.

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema - let us call it the test record), I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is a duplicate of one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be.

    The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema, and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, tokenize the sanitized data, and hash the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors?

    Barring that, has anyone got a way to do this? I'm especially keen on other suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
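
    To pin down what I mean by "using the frequency as a weight", the shape I have in mind is an inverse-document-frequency style heuristic (my own guess at a starting point, not something I know to be statistically justified):

        w(t) = log( n / f(t) )

    where n is the number of records in the corpus and f(t) is the number of records containing token t in that field, so that rare tokens contribute far more to a match score than common ones.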

    Read the article

  • environment configuration for tests running in NUnit

    - by Frank Schwieterman
    I have some integration tests that hit a web server and verify certain functionality. Depending on the build environment, the server will be at a different address (http://localhost:8080/, http://test-vm/, etc.). I would like to run these tests from a TFS build. I'm wondering what's the appropriate way to configure these tests? Would I just add a setting to the config file? I'm doing that currently. Incidentally, we do have a separate branch per test environment, so I could have a different config file checked in for each environment. I wonder if there is a better way, though? I'd like the build project to be able to tell the test what server to test. This seems better because then I don't have to maintain config information on a per-branch basis. I believe I'd be using NUnit for Team Build (http://nunit4teambuild.codeplex.com/) to get NUnit/TFS to play together.

    Read the article

  • Framework/tool for processing C++ unit tests with numerical output

    - by David Claridge
    Hi, I am working on a C++ application that uses computer vision techniques to identify various types of objects in a sequence of images. The (1000+) images have been hand-classified, so we have an XML file for each image containing a description of where the objects are actually located in the images. I would like to know if there is a testing framework that can understand/graph results from tests that are numeric, in this case some measure of the error in the program's classification of the images, rather than just pass/fail style unit tests. We would like to use something like CDash/CTest for running these automated tests, and viewing over time how improvements to the vision algorithms are causing the images to be more correctly classified. Does anyone know of a tool/framework that can do this?

    Read the article

  • JUnit unable to find tests in Eclipse

    - by mikera
    I have a strange issue with JUnit 4 tests in Eclipse 3.5 that I can't solve - any hints gratefully received! Initially: I had a test suite working properly, with 100+ tests all configured with JUnit 4 annotations. I'd typically run these by right-clicking on my source folder and selecting "Run as JUnit test". All worked perfectly. Now: when I try to run the tests, all I get is the error "No tests found with test runner 'JUnit 4'". Any idea what is happening? I simply can't work out what could have changed to make this fail. My guess is that it is some configuration issue based on the build path or classpath?
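
    For reference, the tests are all plain JUnit 4 annotated classes, roughly of this shape (a trimmed-down, made-up example, not one of my real tests):

        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        // Minimal JUnit 4 test; the runner should discover it via the @Test annotation
        public class SanityTest {

            @Test
            public void runnerFindsThisTest() {
                assertTrue(true);
            }
        }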

    Read the article

  • How to find Junit tests that are using a given Java method directly or indirectly

    - by IT-Worx
    Assume there is a Java project and a JUnit project in an Eclipse workspace, and all the unit tests are located in the JUnit project and depend on the application Java project. When making changes to a Java method, I need to find the unit tests that are using the method directly or indirectly, so that I can run the corresponding tests locally on my PC before checking into source control. I don't want to run the entire JUnit project since it takes time. I could use Eclipse's call hierarchy to expand caller methods one by one until I find a test method, but for a project including more than 1 million lines of source code, digging down the call hierarchy takes time too. The search scope within the call hierarchy view doesn't seem to help much. Appreciate any help.

    Read the article

  • Dev Environment Tests Not 100% Compatible with Staging/Production in Rails

    - by aronchick
    We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc. - thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there some best practices for how to allow a subset of testing in Ruby/Rails - or for patching out specific tests when you're running on Windows? Some specific situations: Identify.exe does not support 64-bit Windows and throws a dialog. Sethostname is not supported, and throws an error (at least it's on the command line).

    Read the article

  • Core Data managed object context thread synchronisation

    - by Ben Reeves
    I have an issue where I'm updating a many-to-many relationship on a background thread, which works fine in that thread, but when I send the object back to the main thread the changes do not show. If I close the app and reopen it, the data is saved fine and the changes show on the main thread. Also, using [context lock] instead of a different managed object context works fine. I have tried NSManagedObjectContext's

        - (BOOL)save:(NSError **)error;
        - (void)refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag;

    at different stages throughout the process, but it doesn't seem to help. My Core Data code uses the following getter to ensure any operations are thread safe:

        - (NSManagedObjectContext *)managedObjectContext {
            NSThread *thisThread = [NSThread currentThread];
            if (thisThread == [NSThread mainThread]) {
                // Main thread: just return default context
                return managedObjectContext;
            } else {
                // Thread safe trickery
                NSManagedObjectContext *threadManagedObjectContext = [[thisThread threadDictionary] objectForKey:CONTEXT_KEY];
                if (threadManagedObjectContext == nil) {
                    threadManagedObjectContext = [[[NSManagedObjectContext alloc] init] autorelease];
                    [threadManagedObjectContext setPersistentStoreCoordinator:[self persistentStoreCoordinator]];
                    [[thisThread threadDictionary] setObject:threadManagedObjectContext forKey:CONTEXT_KEY];
                }
                return threadManagedObjectContext;
            }
        }

    and when I pass objects between threads I'm using:

        - (NSManagedObject *)makeSafe:(NSManagedObject *)object {
            if ([object managedObjectContext] != [self managedObjectContext]) {
                NSError *error = nil;
                object = [[self managedObjectContext] existingObjectWithID:[object objectID] error:&error];
                if (error) {
                    NSLog(@"Error makeSafe: %@", error);
                }
            }
            return object;
        }

    Any help appreciated.

    Read the article

  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of the type "CommandType.StoredProcedure" and is executed with SqlDataReaders. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log:

        EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0, P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c, P9 system.data.sqlclient.sql, P10 NIL.

    Additionally:

        The Order Processor service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.

    Read the article

  • Handling multiple column data with Java

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in, it then allows the user to search for a user's details using their email:

        NAME            ROLE          EMAIL
        ---------------------------------------------------
        Joe Bloggs      Manager       [email protected]
        John Smith      Consultant    [email protected]
        Alan Wright     Tester        [email protected]
        ...

    The problem I am suffering is that I need to store a large number of details of all people that have worked at the company. The file containing these details will be written on a yearly basis, simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the unique email of the member of staff, and then returns the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time.
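
    To show the kind of lookup I mean, here is a rough sketch of loading the file into memory and indexing it by email (the tab-separated layout and the class names are just assumptions for illustration):

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class StaffDirectory {

            // One row of the file: name and role, looked up by email
            static class Person {
                final String name;
                final String role;
                Person(String name, String role) { this.name = name; this.role = role; }
            }

            private final Map<String, Person> byEmail = new HashMap<String, Person>();

            // Load a tab-separated file of "name<TAB>role<TAB>email" lines
            public void load(File file) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader(file));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t");
                        if (cols.length == 3) {
                            byEmail.put(cols[2].toLowerCase(), new Person(cols[0], cols[1]));
                        }
                    }
                } finally {
                    in.close();
                }
            }

            // Constant-time lookup by email
            public Person find(String email) {
                return byEmail.get(email.toLowerCase());
            }
        }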

    Read the article

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Does anyone have a recommendation for a good compression algorithm that works well with double-precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip, etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K. Clarifications: all values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented. A lossless algorithm is desired since this is scientific data. Conversion to a fixed-point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency. Update: found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
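
    One idea I'm toying with (just a sketch, not something I've benchmarked): since the values are sorted and close together, XOR each value's bit pattern with the previous one before handing the bytes to a general-purpose compressor. Neighbouring doubles share most of their high-order bits, so the transformed stream is full of zero bytes, and the step is fully reversible, so it stays lossless:

        import java.nio.ByteBuffer;

        public class DeltaPreprocess {

            // XOR each double's bit pattern with the previous value's bits.
            // Similar neighbouring values produce long runs of zero bytes,
            // which Deflate/LZMA-style compressors handle much better.
            static byte[] encode(double[] values) {
                ByteBuffer buf = ByteBuffer.allocate(values.length * 8);
                long prev = 0L;
                for (double v : values) {
                    long bits = Double.doubleToLongBits(v);
                    buf.putLong(bits ^ prev);   // store XOR with previous value
                    prev = bits;
                }
                return buf.array();
            }

            // Exact inverse of encode(): reapply the XOR chain.
            static double[] decode(byte[] data) {
                ByteBuffer buf = ByteBuffer.wrap(data);
                double[] values = new double[data.length / 8];
                long prev = 0L;
                for (int i = 0; i < values.length; i++) {
                    long bits = buf.getLong() ^ prev;
                    values[i] = Double.longBitsToDouble(bits);
                    prev = bits;
                }
                return values;
            }
        }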

    Read the article

  • IIS error hosting WCF Data Service on shared web host

    - by jkohlhepp
    My client has a website hosted on a shared web server. I don't have access to IIS. I am trying to deploy a WCF Data Service onto his site. I am getting this error: IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used. I have searched SO and other sites quite a bit but can't seem to find someone with my exact situation. I cannot change the IIS settings because this is a third party's server and it is a shared web server. So my only option is to change things in code or in the service config. My service config looks like this:

        <system.serviceModel xdt:Transform="Insert">
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://www.somewebsite.com"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <bindings>
            <webHttpBinding>
              <binding name="{Binding Name}">
                <security mode="None" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <service name="{Namespace to Service}">
              <endpoint address="" binding="webHttpBinding"
                        bindingConfiguration="{Binding Name}"
                        contract="System.Data.Services.IRequestHandler">
              </endpoint>
            </service>
          </services>
        </system.serviceModel>

    As you can see, I tried to set the security mode to "None" but that didn't seem to help. What should I change to resolve this error?

    Read the article

  • Flex / Flash Builder: no data returned when using the database

    - by Tristan
    Hello, I'm following some Flex tutorials and everything's working as wanted except for one thing: when I use my function getServerByBrand($brand), no data is returned into my datagrid, and I don't know why, because it uses the same schema as getAllserver(), which works. I don't know whether it's caused by the function itself or by the configuration in Flash Builder:

        protected function RechercheGSP_clickHandler(event:MouseEvent):void
        {
            getServerByBrandResult.token = dbClass.getServerByBrand(SearchInput.text);
        }

    Here's what I've got in Data/Services:

        getServerByBrand(brand : string) : Object

    And finally the function:

        public function getServerByBrand($brand) {
            $stmt = mysqli_prepare($this->connection, "SELECT DISTINCT * FROM $this->tablename where GSP_nom=? ");
            $this->throwExceptionOnError();
            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();
            $rows = array();
            mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv, $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat, $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot, $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide, $row->email);
            while (mysqli_stmt_fetch($stmt)) {
                $row->timestamp = new DateTime($row->timestamp);
                $rows[] = $row;
                $row = new stdClass();
                mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv, $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat, $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot, $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide, $row->email);
            }
            mysqli_stmt_free_result($stmt);
            mysqli_close($this->connection);
            return $rows;
        }

    I tested the settings with Configure Return Type and it tells me: "the operation returned a primitive "object"". Test settings: Parameters (brand) / Input type (String) / Value (woop). To conclude, there is no returned object at all. Do you see the problem? Thanks

    Read the article

  • Single value data to multiple values of data in database relation

    - by Sofiane Merah
    I have such a hard time picturing this. I just don't have the brain to do it. I have a table called reports:

        ---------------------------------------------
        | report_id | set_of_bads | field1 | field2 |
        ---------------------------------------------
        | 123       | set1        | qwe    | qwe    |
        | 321       | 123112      | ewq    | ewq    |
        ---------------------------------------------

    I have another table called bads. This table contains a list of bad data:

        ------------------------------------------------
        | bad_id | set_it_belongs_to | field2 | field3 |
        ------------------------------------------------
        | 1      | set1              | qwe    | qwe    |
        | 2      | set1              | qee    | tte    |
        | 3      | set1              | q44w   | 3qwe   |
        | 4      | 234               | qoow   | 3qwe   |
        ------------------------------------------------

    Now I have set the first field of every table as the primary key. My question is: how do I connect the field set_of_bads to set_it_belongs_to in the bads table, so that if I want to get the entire set of data that is set1 by calling on the reports table, I can do it? Example: hey reports table, bring up the row that has report_id 123. Okay, thank you. Now get all the rows from bads that have the set_of_bads value from the row with report_id 123. Thanks.
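
    To make the intent concrete, this is roughly the query I'd like to end up with (the JDBC wrapper is only for illustration; the table and column names are as above):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class BadsLookup {

            // Fetch every row from "bads" whose set matches the set referenced by a report.
            public static void printBadsForReport(Connection conn, int reportId) throws SQLException {
                String sql = "SELECT b.* FROM reports r "
                           + "JOIN bads b ON b.set_it_belongs_to = r.set_of_bads "
                           + "WHERE r.report_id = ?";
                PreparedStatement ps = conn.prepareStatement(sql);
                try {
                    ps.setInt(1, reportId);
                    ResultSet rs = ps.executeQuery();
                    while (rs.next()) {
                        System.out.println(rs.getInt("bad_id") + " | "
                                + rs.getString("field2") + " | "
                                + rs.getString("field3"));
                    }
                } finally {
                    ps.close();
                }
            }
        }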

    Read the article

  • How To: Group Column Headings In WPF.

    - by VoidDweller
    --------------------------------------------------------------------------
    |[  Common Category   ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
    --------------------------------------------------------------------------
    |[Header 1]|[Header 2] |[ Type 0 ]|[ Type N ] |[ Type 0 ]|[ Type N ] |
    --------------------------------------------------------------------------
    |[Data 2 Group]                                                          |
    --------------------------------------------------------------------------
    | Data A   | Data 2    || Null    | Data 1    || Data 0   | Data 1   ||
    | Data B   | Data 2    || Data 0  | Null      || Data 0   | Data 1   ||
    --------------------------------------------------------------------------
    |[Data 1 Group]                                                          |
    --------------------------------------------------------------------------
    | Data C   | Data 1    || Null    | Data 1    || Data 0   | Data 1   ||
    | Data D   | Data 1    || Null    | Null      || Data 0   | Null     ||
    --------------------------------------------------------------------------

    Dynamic Category:
      • Columns can be 0 or more.
      • Must contain 1 or more Type Columns.
      • Will only be displayed if any row contains Type Column data associated with it.

    Data Rows:
      • Will be added asynchronously.
      • Will be grouped by a Common Category column.
      • Will add a Dynamic Category if it does not yet exist.
      • Will add a Type Column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? A working prototype would be even better! Envision this nicely styled. :-)

    Read the article

  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short description: I'm curious to see if I can use SQL Server Analysis Services or some other SQL Server service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset. Long description: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue tracking (ticketing) system. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.

    Read the article

  • Core Data + Core Animation/CALayer together??

    - by ivanTheTerrible
    I am making a Cocoa app with custom interfaces. So far I have implemented one version of the app using CALayer to do the rendering, which has been great given the hierarchical structure of CALayers and the [hitTest:] method for handling mouse events. In this early version, the model of the app consists of my custom classes. However, as the program grows I feel the urge to use Core Data for the model, not just for the ease of binding/undo management, but also to try out the new technology. My method so far: in Core Data, create a Block entity with attributes xPos, yPos, width, height... etc. Then, create a BlockView : CALayer class for drawing, which uses methods such as self.position.x = [self valueForKey:@"xPos"] to fetch the values from the model. In this case, every BlockView object also has to keep a local copy of xPos, which is NOT good. Do any of you guys have better suggestions? Edit: This app is an information visualization tool, so the positions and dimensions of the blocks are important, and should be persisted for later analysis.

    Read the article

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If it doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:

    1. Passing the returned array of values from my server to an NSOperation to process.
    2. In the operation, creating a new managed object context to work with.
    3. In the operation, iterating through the array and executing a fetch request to see if the object exists (based on its server id).
    4. If the object doesn't exist, creating it and inserting it into the operation's managed object context.
    5. After the iteration completes, saving the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

    Read the article

  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project and I'm doing some image manipulation in the model (which is being passed a file path), and then I'm leaving it to the mapper to save that file to the appropriate location - is this common practice? But then, how do you deal with creating, say, 3 different size images from the one passed in? At the moment I have a "setImage($path_to_tmp_name)" which checks the image type, resizes and then saves back to the original filename. A call to "getImagePath()" then returns the current file path, which the data mapper can use and then change with a call to "setImagePath($path)" once it's saved the file to the appropriate location, say "/content/my_images". Does this sound practical to you? Also, how would you deal with getting the URL to that image? Do you see that as being something that the model should be providing? It seems to me that the model shouldn't worry about where the images are being stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance. UPDATE: I've been wondering if the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a "getImageSpecs()" function in the model that would return something that defines the required sizes, then a separate image manipulation class could carry out the resizing (perhaps in the controller?) and just pass the final paths in to the model using something like "setImagePaths($images)". Any thoughts much appreciated :) James.

    Read the article
