Search Results

Search found 48797 results on 1952 pages for 'read write'.

Page 456/1952 | < Previous Page | 452 453 454 455 456 457 458 459 460 461 462 463  | Next Page >

  • Creating a procedure with PLSQL

    - by user1857460
    I am trying to write a PL/SQL function that implements a constraint that an employee cannot be both a driver and a mechanic at the same time; in other words, the same E# from TRKEMPLOYEE cannot be in both TRKDRIVER and TRKMECHANIC. The tables are roughly as follows:
      TRKEMPLOYEE(E# NUMBER(12) NOT NULL CONSTRAINT TRKEMPLOYEE_PKEY PRIMARY KEY(E#))
      TRKDRIVER(E# NUMBER(12) NOT NULL CONSTRAINT TRKDRIVER_PKEY PRIMARY KEY(E#), CONSTRAINT TRKDRIVER_FKEY FOREIGN KEY(E#) REFERENCES TRKEMPLOYEE(E#))
      TRKMECHANIC(E# NUMBER(12) NOT NULL CONSTRAINT TRKMECHANIC_PKEY PRIMARY KEY(E#), CONSTRAINT TRKMECHANIC_FKEY FOREIGN KEY(E#) REFERENCES TRKEMPLOYEE(E#))
    I have attempted to write a function but keep getting a compile error at line 1, column 7. Can someone tell me why my code doesn't work? My code is as follows:
      CREATE OR REPLACE FUNCTION Verify() IS
      DECLARE
        E# TRKEMPLOYEE.E#%TYPE;
        CURSOR C1 IS SELECT E# FROM TRKEMPLOYEE;
      BEGIN
        OPEN C1;
        LOOP
          FETCH C1 INTO EMPNUM;
          IF (EMPNUM IN (SELECT E# FROM TRKMECHANIC) AND EMPNUM IN (SELECT E# FROM TRKDRIVER))
            SELECT E#, NAME FROM TRKEMPLOYEE WHERE E# = EMPNUM;
          ELSE
            dbms_output.put_line(“OK”);
          ENDIF
          EXIT WHEN C1%NOTFOUND;
        END LOOP;
        CLOSE C1;
      END;
      /
    Any help would be appreciated. Thanks.
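
    A hedged sketch of the kind of check the posted code seems to be aiming at: a procedure (no return value is actually used, so PROCEDURE replaces FUNCTION) that lists every employee present in both TRKDRIVER and TRKMECHANIC. Table and column names are taken from the question (NAME is assumed to exist on TRKEMPLOYEE, as the posted SELECT implies); this is an illustration, not the poster's corrected code.
      -- Hedged sketch: report employees that appear in both child tables.
      CREATE OR REPLACE PROCEDURE verify_driver_mechanic IS
        CURSOR c1 IS
          SELECT e.E#, e.NAME
            FROM TRKEMPLOYEE e
           WHERE e.E# IN (SELECT E# FROM TRKDRIVER)
             AND e.E# IN (SELECT E# FROM TRKMECHANIC);
      BEGIN
        FOR rec IN c1 LOOP
          dbms_output.put_line('Employee ' || rec.E# || ' (' || rec.NAME || ') is both a driver and a mechanic');
        END LOOP;
      END;
      /
    The usual fixes for this style of compile error also apply to the original: a cursor FETCH needs a declared variable (EMPNUM is never declared), IF needs THEN and END IF (not ENDIF), and string literals in PL/SQL use straight single quotes.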

    Read the article

  • How can I manage a FIFO queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write a SQL query that selects the correct price? For example: if I take 3 items in at 75 and then 3 items in at 80, the first two out are at 75, the third out should also be at 75 (X), and the fourth out should be at 80 (Y). How can I write the price query for X and Y? They should use the price from the third and fourth In rows. That is, is there any way to SELECT the third row in the In table? I cannot use auto_increment as the identifier for, e.g., the "third" row, because the tables will contain rows for other items too. The rows will not be deleted; they are kept for accountability reasons.
      SELECT Price FROM In WHERE ...?
    NEW database design:
      In
      +-----------+-------+
      | Supply_ID | Price |
      +-----------+-------+
      | 1         | 75    |
      | 1         | 75    |
      | 1         | 75    |
      | 2         | 80    |
      | 2         | 80    |
      +-----------+-------+
      Out
      +-------------+-------+
      | Delivery_ID | Price |
      +-------------+-------+
      | 1           | 75    |
      | 1           | 75    |
      | 2           | X     | <- ?
      | 3           | Y     | <- ?
      +-------------+-------+
    OLD database design:
      In
      +-----------+----------+-------+
      | Supply_ID | Quantity | Price |
      +-----------+----------+-------+
      | 1         | 3        | 75    |
      | 2         | 3        | 80    |
      +-----------+----------+-------+
      Out
      +-------------+----------+-------+
      | Delivery_ID | Quantity | Price |
      +-------------+----------+-------+
      | 1           | 2        | 75    |
      | 2           | 1        | X     | <- ?
      | 3           | 1        | Y     | <- ?
      +-------------+----------+-------+
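
    For context, a hedged sketch of one way to express the FIFO match against the "new" one-row-per-unit design. It assumes an engine with window functions (MySQL 8+, PostgreSQL, SQL Server) and some column that reflects insertion order per item (here a surrogate id, which the question doesn't show, so that is an added assumption): number the In rows and the Out rows in order and join on that position.
      -- Hedged sketch: the Nth unit out takes the price of the Nth unit in (FIFO).
      SELECT o.rn, i.Price
      FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn FROM `Out`) AS o
      JOIN (SELECT id, Price, ROW_NUMBER() OVER (ORDER BY id) AS rn FROM `In`) AS i
        ON i.rn = o.rn;
    If the tables also hold other items, the same idea works with a WHERE filter for the item inside each derived table before numbering.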

    Read the article

  • Reading a memorystream

    - by user1842828
    Using several examples here on Stack Overflow, I thought the following code would decompress a gzip file, then read the memory stream and write its content to the console. No errors occur, but I get no output.
      public static void Decompress(FileInfo fileToDecompress)
      {
          using (FileStream originalFileStream = fileToDecompress.OpenRead())
          {
              string currentFileName = fileToDecompress.FullName;
              string newFileName = currentFileName.Remove(currentFileName.Length - fileToDecompress.Extension.Length);
              using (FileStream decompressedFileStream = File.Create(newFileName))
              {
                  using (GZipStream decompressionStream = new GZipStream(originalFileStream, CompressionMode.Decompress))
                  {
                      MemoryStream memStream = new MemoryStream();
                      memStream.SetLength(decompressedFileStream.Length);
                      decompressedFileStream.Read(memStream.GetBuffer(), 0, (int)decompressedFileStream.Length);
                      memStream.Position = 0;
                      var sr = new StreamReader(memStream);
                      var myStr = sr.ReadToEnd();
                      Console.WriteLine("Stream Output: " + myStr);
                  }
              }
          }
      }
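
    For context, the code reads from decompressedFileStream, which File.Create has just created empty, rather than from the GZipStream, so the MemoryStream stays empty. A minimal hedged sketch of the usual pattern (the method name is kept from the question; the intermediate output file is dropped because the question only prints to the console):
      using System;
      using System.IO;
      using System.IO.Compression;

      public static class GzipReader
      {
          // Hedged sketch: decompress a .gz file into a MemoryStream and print its text.
          public static void Decompress(FileInfo fileToDecompress)
          {
              using (FileStream originalFileStream = fileToDecompress.OpenRead())
              using (GZipStream decompressionStream = new GZipStream(originalFileStream, CompressionMode.Decompress))
              using (MemoryStream memStream = new MemoryStream())
              {
                  decompressionStream.CopyTo(memStream); // .NET 4.0+; older frameworks need a manual read loop
                  memStream.Position = 0;
                  using (var sr = new StreamReader(memStream))
                  {
                      Console.WriteLine("Stream Output: " + sr.ReadToEnd());
                  }
              }
          }
      }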

    Read the article

  • Has jQuery core development been slowing down?

    - by David Murdoch
    So, I regularly head over to jQuery's commit history on GitHub just to read through the new code committed to jQuery core. But there hasn't been anything new committed since April 24th. I've already read through jQuery core a few times and I'm pretty familiar with it, which is why I like reading the commits. I just like to see what changed, why it was changed, etc. Why has there been a slowdown in jQuery commits on GitHub? Does anyone have recommendations for where else I can go to watch good JavaScript code being developed? My motive for reading jQuery's commit history is similar to my reason for browsing accepted answers here on Stack Overflow - to learn from people smarter than me. With that said, I am interested in the answer to this question's title, but I am more interested in finding a substitute for reading the jQuery commits.

    Read the article

  • SQL Server Mapping a user to a login and adding roles programmatically

    - by user163457
    In my SQL Server 2005 server I create databases and logins using Management Studio. My application requires that I give a newly created user read and write permissions to another database. To do this I right-click the newly created login, select Properties and go to User Mapping. I put a check beside the database to map this login to the db and select db_datareader and db_datawriter as the roles to map. Can this be done programmatically? I've read about using ALTER USER and sp_change_users_login, but I'm having problems getting them to work, and since sp_change_users_login is deprecated I'd prefer to use ALTER USER. Please note my understanding of SQL Server database users/logins/roles is basic.
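
    For context, a minimal hedged T-SQL sketch of the usual SQL Server 2005 approach: create a database user for the existing login inside the target database and add it to the two fixed roles. The login, user, and database names are placeholders, not from the question.
      USE TargetDb;
      GO
      -- Map the existing server login to a user in this database.
      CREATE USER [MyAppUser] FOR LOGIN [MyAppLogin];
      GO
      -- sp_addrolemember is the 2005-era way to grant role membership
      -- (ALTER ROLE ... ADD MEMBER only arrived in SQL Server 2012).
      EXEC sp_addrolemember N'db_datareader', N'MyAppUser';
      EXEC sp_addrolemember N'db_datawriter', N'MyAppUser';
      GO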

    Read the article

  • Seam + hibernate + jsf on weblogic

    - by Kiva
    I'm making a little project with Seam, Hibernate and JSF. This project runs on JBoss 5.1. My boss wants to deploy it on WebLogic. I read in the Seam documentation that Seam and WebLogic don't work well together. I would like to know whether I can use Hibernate (with JPA) and JSF on WebLogic, and which framework (Struts, Spring?) I could use to replace Seam. Edit: In the Seam documentation (chapter 39, WebLogic integration) I found the following: "For several releases of Weblogic there has been an issue with how Weblogic generates stubs and compiles EJB's that use variable arguments in their methods. This is confirmed in the Weblogic 9.X and 10.0.MP1 versions. Unfortunately the 10.3 version only partially addresses the issue as detailed below." So I want to know whether other problems like this exist. Edit 2: I use WebLogic 10.3.

    Read the article

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within the threads). The problem persists if the threads instead render the upper and lower halves of the screen. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? Again, from what I've understood this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image (spam prevention stops me from posting images directly). PS: I use exactly the same implementation in both cases; the ONLY difference is a single thread vs. two threads created for the rendering.

    Read the article

  • Query returning related assets

    - by GMo
    I have 2 tables. One is an assets table which holds digital assets (e.g. articles, images etc); the 2nd is an asset_links table which maps 1-1 relationships between assets contained within the assets table. Here are the table definitions:
      Asset
      +---------------+--------------+------+-----+---------+----------------+
      | Field         | Type         | Null | Key | Default | Extra          |
      +---------------+--------------+------+-----+---------+----------------+
      | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
      | source        | varchar(255) | YES  |     | NULL    |                |
      | title         | varchar(255) | YES  |     | NULL    |                |
      | date_created  | datetime     | YES  |     | NULL    |                |
      | date_embargo  | datetime     | YES  |     | NULL    |                |
      | date_expires  | datetime     | YES  |     | NULL    |                |
      | date_updated  | datetime     | YES  |     | NULL    |                |
      | keywords      | varchar(255) | YES  |     | NULL    |                |
      | status        | int(11)      | YES  |     | NULL    |                |
      | priority      | int(11)      | YES  |     | NULL    |                |
      | fk_site       | int(11)      | YES  | MUL | NULL    |                |
      | resource_type | varchar(255) | YES  |     | NULL    |                |
      | resource_id   | int(11)      | YES  |     | NULL    |                |
      | fk_user       | int(11)      | YES  | MUL | NULL    |                |
      +---------------+--------------+------+-----+---------+----------------+
      Asset_links
      +-----------+---------+------+-----+---------+----------------+
      | Field     | Type    | Null | Key | Default | Extra          |
      +-----------+---------+------+-----+---------+----------------+
      | id        | int(11) | NO   | PRI | NULL    | auto_increment |
      | asset_id1 | int(11) | YES  |     | NULL    |                |
      | asset_id2 | int(11) | YES  |     | NULL    |                |
      +-----------+---------+------+-----+---------+----------------+
    In the asset_links table there are the following rows: 1 - 3, 1 - 4, 2 - 10, 2 - 56.
    I am looking to write one query which returns all assets that satisfy any asset search criteria and, within the same query, returns all of the linked asset data for each asset's linked assets. E.g. a query returning assets 1 and 2 would return:
      Asset 1 attributes - Asset 3 attributes - Asset 4 attributes
      Asset 2 attributes - Asset 10 attributes - Asset 56 attributes
    What is the best way to write the query?
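
    For context, a hedged sketch of one common shape for this query: self-join through the link table so each matching asset comes back once per linked asset (table and column names from the question; the WHERE clause is a placeholder for "any asset search criteria"). Collapsing the pairs into one wide row per asset is usually done with GROUP_CONCAT or in application code.
      -- Hedged sketch: one row per (matching asset, linked asset) pair.
      SELECT a.id         AS asset_id,
             a.title      AS asset_title,
             linked.id    AS linked_asset_id,
             linked.title AS linked_asset_title
      FROM asset AS a
      JOIN asset_links AS al ON al.asset_id1 = a.id
      JOIN asset AS linked   ON linked.id = al.asset_id2
      WHERE a.status = 1;    -- placeholder search criteria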

    Read the article

  • A good F# codebase to learn from

    - by Lucas
    Hi all, I've been teaching myself F# for a while now. I've read Programming F# by Chris Smith (great book) and I've written a few small scripts for getting the job done here and there. But IMO the best way to learn a new programming language—and more importantly, the idioms that come with it—is to read a good open source codebase written in that language. Naturally, writing code in that language is crucial, but in the beginning, you're basically struggling with your own ignorance about how things should be done. You could perform certain tasks one way or the other, but it takes experience to realize the flaws and virtues of each. Even after you've gotten a firm grasp of how things work, reading the code of people who have an even firmer one helps a great deal. Most would agree that the most insightful parts of any learn-a-programming-language book are the code examples, and reading a well-written open source codebase is the next level of that. So are there any out there for F#?

    Read the article

  • [C#] How to share a variable between two classes?

    - by Altefquatre
    Hello,
    How would you share the same object between two other objects? For instance, I'd like something in this flavor:
      class A
      {
          private string foo_; // It could be any other class/struct too (Vector3, Matrix...)

          public A(string shared)
          {
              this.foo_ = shared;
          }

          public void Bar()
          {
              this.foo_ = "changed";
          }
      }

      ...

      // inside main
      string str = "test";
      A a = new A(str);
      Console.WriteLine(str); // "test"
      a.Bar();
      Console.WriteLine(str); // I get "test" instead of "changed"... :(
    I read there is some ref/out stuff, but I couldn't get it to do what I'm asking here. I could only apply changes in the scope of the method where I was using the ref/out arguments... I also read we could use pointers, but is there no other way to do it?
    Thanks
    Altefquatre
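
    For context, strings in C# are immutable and the constructor parameter only copies the reference, so reassigning foo_ can never affect the caller's variable. A minimal hedged sketch of one common workaround: wrap the value in a small mutable reference type that both sides hold on to (the Holder class is an illustration, not part of the question).
      using System;

      // Hedged sketch: Main and A share the same Holder instance, so a change
      // made through one reference is visible through the other.
      class Holder
      {
          public string Value { get; set; }
      }

      class A
      {
          private readonly Holder foo_;
          public A(Holder shared) { foo_ = shared; }
          public void Bar() { foo_.Value = "changed"; }
      }

      static class Program
      {
          static void Main()
          {
              var shared = new Holder { Value = "test" };
              var a = new A(shared);
              Console.WriteLine(shared.Value); // "test"
              a.Bar();
              Console.WriteLine(shared.Value); // "changed"
          }
      }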

    Read the article

  • Python logging in Django

    - by Jeff
    I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head. I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:
      sviewlog = logging.getLogger('MyApp.views.scans')
      view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
      view_log_handler.setLevel(logging.INFO)
      view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
      sviewlog.addHandler(view_log_handler)
    Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this? Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
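
    For context, a minimal hedged sketch of the usual guard when a settings module can be imported twice (which is the behaviour the debugger showed): only attach the handler if the logger doesn't already have one. Logger name, path, and format string are taken from the question.
      import logging

      sviewlog = logging.getLogger('MyApp.views.scans')

      # Hedged sketch: skip registration if a second import already did it.
      if not sviewlog.handlers:
          view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
          view_log_handler.setLevel(logging.INFO)
          view_log_handler.setFormatter(
              logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
          sviewlog.addHandler(view_log_handler)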

    Read the article

  • Teacher confused about MVC?

    - by Gideon
    I have an assignment to create a game in Java using MVC as a pattern. The thing is that what I read about MVC isn't really what the teacher is telling me. What I read is that the Model consists of the information objects, which are manipulated by the controllers. So in a game the controller mutates the placement of the objects, checks for collisions, etc. What my teacher told me is that I should put everything that is universal to the platform in the models, and the controllers should only tell the model which input is given. That means the game loop will be in a model class, and so will collision checks etc. So what I get from his story is that the View is the screen, the Controller is the input handler, and the Model is the rest. Can someone point me in the right direction?

    Read the article

  • Kernel dealing with the section headers in an ELF

    - by uki
    I recently read that the kernel and the dynamic loader mostly deal with the program header table in an ELF file, and that assemblers, compilers and linkers deal with the section header table. The number of program header entries and section header entries are given in the ELF header by the fields e_phnum and e_shnum respectively. e_phnum is two bytes in size, so if the number of program headers reaches 65535 (PN_XNUM, 0xffff), a scheme known as extended numbering is used: e_phnum is set to 0xffff and the sh_info field of the zeroth section header holds the actual count. My question is: if the count of program headers reaches that limit, does that mean the kernel and/or the dynamic loader end up having to read the section table?
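
    For context, a hedged user-space sketch (C, 64-bit ELF only, no error handling) of the lookup that extended numbering implies; the field names and the PN_XNUM constant come from <elf.h>. It illustrates the point of the question: in the extended case, the count does have to be fetched from section header 0.
      /* Hedged sketch: recover the real program header count of an ELF file. */
      #include <elf.h>
      #include <stdio.h>

      static size_t real_phnum(FILE *f)
      {
          Elf64_Ehdr eh;
          fseek(f, 0, SEEK_SET);
          fread(&eh, sizeof eh, 1, f);

          if (eh.e_phnum != PN_XNUM)
              return eh.e_phnum;            /* common case: count fits in 16 bits */

          /* Extended numbering: the real count lives in section header 0. */
          Elf64_Shdr sh0;
          fseek(f, (long)eh.e_shoff, SEEK_SET);
          fread(&sh0, sizeof sh0, 1, f);
          return sh0.sh_info;
      }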

    Read the article

  • Balanced Search Tree Query, Asymptotic Analysis

    - by AGeek
    Hi,
    The situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure which supports the operations search, insert, delete, minimum and maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and in-order traversal. The answer to this is:
      Sort()
        initialize(t)
        while (not EOF)
          read(x)
          insert(x, t)
        Traverse(t)
    Now my question: if we read the elements in "n" time and then traverse the elements in "log n" (in-order traversal) time, then the total time for this algorithm is (n + log n), according to me. Please explain the time calculation for this algorithm. How does it sort the list in O(n log n) time? Thanks.
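
    For context, the standard cost accounting here (an added note, not from the original post): reading the input is not a separate O(n) term, because each of the n reads is followed by an insert that itself costs O(log n), and the in-order traversal visits each node once.
      % n insertions at O(log n) each, plus one O(n) in-order traversal:
      T(n) = n \cdot O(\log n) + O(n) = O(n \log n)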

    Read the article

  • please give me a solution

    - by user327832
    Here is the code I have written so far, but it ends up giving me an error:
      import java.io.File;
      import java.io.FileInputStream;
      import java.io.IOException;
      import java.io.InputStream;

      public class Main {
          public static void main(String[] args) throws Exception {
              File file = new File("c:\\filea.txt");
              InputStream is = new FileInputStream(file);
              long length = file.length();
              System.out.println(length);
              bytes[] bytes = new bytes[(int) length];
              try {
                  int offset = 0;
                  int numRead = 0;
                  while (numRead >= 0) {
                      numRead = is.read(bytes);
                  }
              } catch (IOException e) {
                  System.out.println("Could not completely read file " + file.getName());
              }
              is.close();
              Object[] see = new Object[(int) length];
              see[1] = bytes;
              System.out.println((String[]) see[1]);
          }
      }
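
    For context, a minimal hedged sketch of what the posted code appears to be aiming at: read the whole file into a byte array and print it as text. The file path is kept from the question; the element type is byte[] (bytes[] is not a Java type), and the bytes become printable via new String(...) rather than a cast to String[].
      import java.io.DataInputStream;
      import java.io.File;
      import java.io.FileInputStream;
      import java.io.IOException;

      public class Main {
          public static void main(String[] args) throws IOException {
              // Hedged sketch: read the whole file into a byte[] and print it as text.
              File file = new File("c:\\filea.txt");
              byte[] bytes = new byte[(int) file.length()];
              try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
                  in.readFully(bytes); // fills the array or throws EOFException
              }
              System.out.println(new String(bytes)); // platform default charset
          }
      }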

    Read the article

  • Writing to a file in a servlet

    - by ankur verma
    I am working on a servlet and have this code:
      public void doPost(blah blah) {
          response.setContentType("text/html");
          String datasent = request.getParameter("dataSent");
          System.out.println(datasent);
          try {
              FileWriter writer = new FileWriter("C:/xyz.txt");
              writer.write("hello");
              System.out.println("I wrote");
          } catch (Exception ex) {
              ex.printStackTrace();
          }
          response.getWriter().write("I am from server");
      }
    But every time it throws an error saying Access Denied, even though there is no lock on that file and no file named C:/xyz.txt exists. What should I do? ;(
      java.io.FileNotFoundException: C:\xyz.txt (Access is denied)
          at java.io.FileOutputStream.open(Native Method)
          at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
          at java.io.FileOutputStream.<init>(FileOutputStream.java:104)
          at java.io.FileWriter.<init>(FileWriter.java:63)
          at test.TestServlet.doPost(TestServlet.java:49)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:641)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
          at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:306)
          at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
          at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
          at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
          at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
          at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108)
          at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558)
          at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
          at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379)
          at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
          at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:259)
          at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:237)
          at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:281)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
          at java.lang.Thread.run(Thread.java:722)
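
    For context, the stack trace means the account Tomcat runs under is not allowed to create files in the root of C:. A minimal hedged sketch of the usual workaround: write somewhere the process is known to own, e.g. the JVM temp directory (the choice of location is an assumption, not from the question).
      import java.io.File;
      import java.io.FileWriter;
      import java.io.IOException;

      public class TempFileWriterExample {
          // Hedged sketch: write under java.io.tmpdir, which the servlet
          // container's account can normally write to.
          public static void writeGreeting() throws IOException {
              File out = new File(System.getProperty("java.io.tmpdir"), "xyz.txt");
              try (FileWriter writer = new FileWriter(out)) {
                  writer.write("hello");
              }
          }
      }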

    Read the article

  • How do you do real time document tracking?

    - by Nimish
    I was considering different document-tracking options and came across DocTracking.com. DocTracking.com lets you upload documents (PDF, Word, etc.), adds some kind of invisible tracking to them, and returns the document to you, which you can then use just as you would otherwise. This tracking tells you when your documents were opened, who opened them (IP), the geo-location of the opening, whether they are re-opened or forwarded, what pages were read and for how long, and what was printed. Any leads on how this could be done would be appreciated.

    Read the article

  • How do you unit test new code that uses a bunch of classes that cannot be instantiated in a test harness?

    - by trendl
    I'm writing a messaging layer that should handle communication with a third-party API. The API has a bunch of classes that cannot be easily (if at all) instantiated in a test harness. I decided to wrap each class that I need in my unit tests with an adapter/wrapper and expose the members I need through this adapter class. Often I need to expose the wrapped type as well, which I do by exposing it as an object. I have also provided an interface for each of the adapter classes so that I can use them with a mocking framework. This way I can substitute the classes in tests with whatever I need. The downside is that I have a bunch of adapter classes that so far serve no purpose other than testing. For me this is a good enough reason by itself, but others may disagree. Possibly, when I write an implementation for another third-party vendor's API, I may be able to reuse much of my code and only provide the adapters specific to that vendor's API. However, this is a bit of a long shot and I'm not actually sure it will work. What do you think? Is this approach viable, or am I writing unnecessary code that serves no real purpose? Let me say that I do want to write unit tests for my messaging layer, and I do not know how to do it otherwise.
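
    For context, a hedged C# sketch of the arrangement described: a thin adapter plus an interface over a hard-to-instantiate third-party class, so tests can substitute a mock. Every type name here (VendorSdk.Channel, IVendorChannel, VendorChannelAdapter) is an invented placeholder, not from the question.
      // Stand-in for the third-party class that can't be created in a test harness.
      namespace VendorSdk
      {
          public class Channel
          {
              public void Send(string message) { /* talks to the real API */ }
          }
      }

      // The seam the tests program against; a mocking framework can fake this.
      public interface IVendorChannel
      {
          void Send(string message);
          object WrappedInstance { get; }   // the "exposed as object" escape hatch
      }

      // Production adapter that forwards to the real vendor object.
      public sealed class VendorChannelAdapter : IVendorChannel
      {
          private readonly VendorSdk.Channel inner;
          public VendorChannelAdapter(VendorSdk.Channel inner) { this.inner = inner; }
          public void Send(string message) => inner.Send(message);
          public object WrappedInstance => inner;
      }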

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all,
    I'm having a problem with processing a largeish file in Python. All I'm doing is
      f = gzip.open(pathToLog, 'r')
      for line in f:
          counter = counter + 1
          if (counter % 1000000 == 0):
              print counter
      f.close
    This takes around 10m25s just to open the file, read the lines and increment this counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:
      open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
      while (<LOG>) {
          if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
              $_ =~ s/some regex here/$1,$2,$3,$4/;
              push @an_array, $_
          }
      }
      close LOG;
    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
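
    For context, a hedged sketch of one commonly suggested workaround: let an external zcat do the decompression, exactly as the Perl version does, and have Python only iterate the lines. Python 2 syntax to match the question's print statement; pathToLog and the /bin/zcat path are taken from the question.
      import subprocess

      # Hedged sketch: pipe the compressed log through zcat and count lines.
      proc = subprocess.Popen(['/bin/zcat', pathToLog], stdout=subprocess.PIPE)
      counter = 0
      for line in proc.stdout:
          counter += 1
          if counter % 1000000 == 0:
              print counter
      proc.stdout.close()
      proc.wait()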

    Read the article

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior:
      1. A sends a request to B. This works.
      2. A now starts reading the reply from B.
      3. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close()s the socket. However, A did not close() the socket, nor did it crash.
      4. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection.
    Users have also reported variations of this behavior, e.g.:
      1. A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close()s the socket. However, B did not close() the socket, nor did it crash.
      2. B reads a partial request and then suddenly gets an EOF.
    The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered:
      - Double close() bugs (close() called twice on the same file descriptor): probably not, as that would result in EBADF errors, and I haven't seen them.
      - Increasing the maximum file descriptor limit. One user reported that this worked for him; the rest reported that it did not.
    What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them has crashed, because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article

  • How can I use TDD to solve a puzzle with an unknown answer?

    - by matthewsteele
    Recently I wrote a Ruby program to determine solutions to a "Scramble Squares" tile puzzle. I used TDD to implement most of it, leading to tests that looked like this:
      it "has top, bottom, left, right" do
        c = Cards.new
        card = c.cards[0]
        card.top.should == :CT
        card.bottom.should == :WB
        card.left.should == :MT
        card.right.should == :BT
      end
    This worked well for the lower-level "helper" methods: identifying the "sides" of a tile, determining if a tile can be validly placed in the grid, etc. But I ran into a problem when coding the actual algorithm to solve the puzzle. Since I didn't know valid possible solutions to the problem, I didn't know how to write a test first. I ended up writing a pretty ugly, untested algorithm to solve it:
      def play_game
        working_states = []
        after_1 = step_1
        i = 0
        after_1.each do |state_1|
          step_2(state_1).each do |state_2|
            step_3(state_2).each do |state_3|
              step_4(state_3).each do |state_4|
                step_5(state_4).each do |state_5|
                  step_6(state_5).each do |state_6|
                    step_7(state_6).each do |state_7|
                      step_8(state_7).each do |state_8|
                        step_9(state_8).each do |state_9|
                          working_states << state_9[0]
                        end
                      end
                    end
                  end
                end
              end
            end
          end
        end
      end
    So my question is: how do you use TDD to write a method when you don't already know the valid outputs? If you're interested, the code's on GitHub:
      Tests: https://github.com/mattdsteele/scramblesquares-solver/blob/master/golf-creator-spec.rb
      Production code: https://github.com/mattdsteele/scramblesquares-solver/blob/master/game.rb
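
    For context, a hedged sketch of one way TDD is often applied when the exact answer is unknown: assert properties that any valid solution must satisfy instead of a specific arrangement. This is RSpec to match the question's style, but every name below (Game#play_game returning grids, grid.each_adjacent_pair, grid.tiles, the valid_placement? helper) is an assumption about the code, not taken from the repository.
      describe "play_game" do
        it "returns only grids whose adjacent tile edges all match" do
          Game.new.play_game.each do |grid|
            grid.each_adjacent_pair do |a, b|          # assumed helper
              valid_placement?(a, b).should == true    # the already-tested low-level check
            end
          end
        end

        it "uses each of the nine tiles exactly once in every solution" do
          Game.new.play_game.each do |grid|
            grid.tiles.sort.should == Cards.new.cards.sort
          end
        end
      end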

    Read the article

  • Make function declarations based on function definitions

    - by Clinton Blackmore
    I've written a .cpp file with a number of functions in it, and now need to declare them in the header file. It occurred to me that I could grep the file for the class name and get the declarations that way, and it would've worked well enough, too, had the complete function declaration before the definition -- return type, name, and parameters (but not function body) -- been on one line. It seems to me that this is something that would be generally useful and must've been solved a number of times. I am happy to edit the output and not worried about edge cases; anything that gives me results that are right 95% of the time would be great. So, if, for example, my .cpp file had:
      i2cstatus_t NXTI2CDevice::writeRegisters(
          uint8_t  start_register,   // start of the register range
          uint8_t  bytes_to_write,   // number of bytes to write
          uint8_t* buffer = 0)       // optional user-supplied buffer
      {
          ...
      }
    and a number of other similar functions, getting this back:
      i2cstatus_t NXTI2CDevice::writeRegisters(
          uint8_t  start_register,   // start of the register range
          uint8_t  bytes_to_write,   // number of bytes to write
          uint8_t* buffer = 0)
    for inclusion in the header file, after a little editing, would be fine. Getting this back:
      i2cstatus_t writeRegisters(
          uint8_t  start_register,
          uint8_t  bytes_to_write,
          uint8_t* buffer);
    or this:
      i2cstatus_t writeRegisters(uint8_t start_register, uint8_t bytes_to_write, uint8_t* buffer);
    would be even better.

    Read the article

  • How to deploy App_Data files with Azure cloud service (web role)

    - by user2977157
    I have a read-only data file (for IP geolocation) that my web role needs to read. It is currently in the App_Data folder, which is NOT included in the deployment package for the cloud service. Unlike "web deploy", there is no checkbox for an Azure cloud service deployment to include/exclude App_Data. Is there a reasonable way to get the deployment package to include the App_Data folder/files? Or is using Azure storage the better way to go for this sort of thing (cost- and performance-wise)? I am using Visual Studio 2013 and the Azure SDK 2.2.

    Read the article

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple-ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Out of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:
      address
      address port
      address port path
      address path
      address action
      address path action
      address port action
      address port path action
      address action params
      address path action params
      address port action params
      address port path action params
    and so forth. The basic approach for this is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method and testing the outcome against the expected output. However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this?
    (rant) I only now realize that I learned to write unit tests a few years ago but never really (feel like I've) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, yanno. Advice on that matter is always welcome. (/rant)
    (note) Christmas weekend approaching, probably won't reply to suggestions until next week. (/note)
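
    For context, a hedged sketch of the usual pattern for this: a data-driven (parameterized) test, one test body plus a row of inputs per permutation. The question doesn't name a language or framework, so C# with NUnit's TestCase attribute is assumed purely for illustration; UrlBuilder is a stand-in for the real method, and the expected strings are placeholders for whatever output it is specified to produce.
      using NUnit.Framework;

      [TestFixture]
      public class UrlBuilderTests
      {
          // Hedged sketch: each row is one permutation; nulls mark omitted parts.
          [TestCase("host", null, null, null, null, "http://host")]
          [TestCase("host", 8080, null, null, null, "http://host:8080")]
          [TestCase("host", 8080, "path", null, null, "http://host:8080/path")]
          [TestCase("host", null, "path", "act", null, "http://host/path/act")]
          [TestCase("host", 8080, "path", "act", "a=1", "http://host:8080/path/act?a=1")]
          public void Builds_expected_url(string address, int? port, string path,
                                          string action, string query, string expected)
          {
              Assert.AreEqual(expected, UrlBuilder.Build(address, port, path, action, query));
          }
      }

      // Stand-in builder so the sketch compiles; the real method is the one under test.
      public static class UrlBuilder
      {
          public static string Build(string address, int? port, string path, string action, string query)
          {
              var url = "http://" + address + (port.HasValue ? ":" + port : "");
              if (path != null)   url += "/" + path;
              if (action != null) url += "/" + action;
              if (query != null)  url += "?" + query;
              return url;
          }
      }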

    Read the article
