Search Results

Search found 5267 results on 211 pages for 'use cases'.


  • Subclassing Cocoa means no subclass ivars in some cases?

    - by Michael
    The question generally comes from self = [super init]. If I'm subclassing NSSomething and in my init method self = [super init] returns an object of a different class, does that mean I can't have my very own ivars in my subclass, just because self will be pointing to a different class? I'd appreciate it if you could bring some examples if my statement is wrong.

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application; the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but they may say their age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and that they'll have to try again. There may be a requirement to report every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check in one of these levels, the code could accidentally proceed and save invalid data (for example). Seems more error prone that way. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
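    A minimal C# sketch of the approach argued for above, with hypothetical names (Person, PersonParser, InvalidInputException): every validation problem is collected and surfaced through a single exception, so a caller cannot silently ignore a bad result the way it can with an unchecked return code.

        using System;
        using System.Collections.Generic;

        // Hypothetical domain type used only for illustration.
        public record Person(string Name, int Age);

        // Carries every problem found in the input, not just the first one.
        public class InvalidInputException : Exception
        {
            public IReadOnlyList<string> Errors { get; }

            public InvalidInputException(IReadOnlyList<string> errors)
                : base(string.Join("; ", errors)) => Errors = errors;
        }

        public static class PersonParser
        {
            public static Person Parse(string name, string ageText)
            {
                var errors = new List<string>();

                if (string.IsNullOrWhiteSpace(name))
                    errors.Add("Name is required.");
                if (!int.TryParse(ageText, out var age) || age < 0)
                    errors.Add($"'{ageText}' is not a valid age.");

                if (errors.Count > 0)
                    throw new InvalidInputException(errors); // caller rolls back and reports all messages

                return new Person(name, age);
            }
        }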

    Read the article

  • Needed list of special characters classification with respective characters

    - by pravin
    I am working on a web application related to machine translation support, i.e. it takes source text for translation and translates it into the user-specified language. Currently it's in the unit testing phase. Here I want to check whether my machine translation feature is fully working for all the special characters. Because of the different test cases I am stuck at one point where I need all the special characters with their classification. I need a listing of all the special characters, grouped by classification, e.g. 1st: class name: Punctuation; characters: ! ? , " | etc.; test cases: segment1? segment2! segment3. 2nd: class name: HTML entity characters; characters: all the characters which belong under this class; test cases: respective test cases. 3rd: class name: Extended ASCII characters; characters: all the characters which belong under this class; test cases: respective test cases. Please provide this, folks, if anyone has any ideas or links, so that I can make the product perfect. Thanks a lot.
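    A rough C# sketch (an assumed approach, not tied to any particular translation product) of how sample characters could be bucketed into the three classes mentioned above, using the framework's own character tests:

        using System;
        using System.Collections.Generic;
        using System.Net;

        public static class CharacterClassifier
        {
            // Buckets every character of a sample string into the three classes mentioned above.
            public static Dictionary<string, List<char>> Classify(string sample)
            {
                var buckets = new Dictionary<string, List<char>>
                {
                    ["Punctuation"] = new List<char>(),
                    ["HTML entity"] = new List<char>(),
                    ["Extended ASCII"] = new List<char>()
                };

                foreach (char c in sample)
                {
                    if (char.IsPunctuation(c) || char.IsSymbol(c))
                        buckets["Punctuation"].Add(c);
                    if (WebUtility.HtmlEncode(c.ToString()) != c.ToString())
                        buckets["HTML entity"].Add(c);    // e.g. < > & " are encoded as entities
                    if (c > 127 && c <= 255)
                        buckets["Extended ASCII"].Add(c); // e.g. é, ñ, ©
                }
                return buckets;
            }
        }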

    Read the article

  • design pattern for unit testing? [duplicate]

    - by Maddy.Shik
    This question already has an answer here: Unit testing best practices for a unit testing newbie (4 answers). I am a beginner in developing test cases and want to follow good patterns for developing them, rather than following some person's or company's specific ideas. Some people don't write test cases and just develop the way their seniors did in their projects. I am facing a lot of problems, like object dependencies (when I want to test a method which persists object A, I have to first persist object B, since A is a child of B). Please suggest some good books or sites, preferably for learning design patterns for unit test cases. A reference to some good source code, or some discussion of dos and don'ts, would also work wonders, so that I can avoid mistakes by learning from the experience of others.

    Read the article

  • design pattern for unit testing?

    - by Maddy.Shik
    I am a beginner in developing test cases and want to follow good patterns for developing them, rather than following some person's or company's specific ideas. Some people don't write test cases and just develop the way their seniors did in their projects. I am facing a lot of problems, like object dependencies (when I want to test a method which persists object A, I have to first persist object B, since A is a child of B). Please suggest some good books or sites, preferably for learning design patterns for unit test cases. A reference to some good source code, or some discussion of dos and don'ts, would also work wonders, so that I can avoid mistakes by learning from the experience of others.
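    One common answer to the parent/child dependency described above is a test data builder that hides the "persist B before A" plumbing. A hedged C# sketch with made-up types (Customer as the parent, Order as the child) and an assumed repository interface:

        using System;

        // Assumed domain and persistence abstractions, for illustration only.
        public class Customer { public int Id { get; set; } public string Name { get; set; } }
        public class Order { public int Id { get; set; } public Customer Customer { get; set; } public decimal Total { get; set; } }

        public interface IRepository
        {
            void Save(Customer customer);
            void Save(Order order);
        }

        // The builder owns the dependency: asking for an Order quietly persists its Customer first.
        public class OrderBuilder
        {
            private readonly IRepository _repository;
            private Customer _customer = new Customer { Name = "default customer" };
            private decimal _total = 10m;

            public OrderBuilder(IRepository repository) => _repository = repository;

            public OrderBuilder ForCustomer(string name) { _customer = new Customer { Name = name }; return this; }
            public OrderBuilder WithTotal(decimal total) { _total = total; return this; }

            public Order BuildAndSave()
            {
                _repository.Save(_customer);                       // parent persisted first
                var order = new Order { Customer = _customer, Total = _total };
                _repository.Save(order);
                return order;
            }
        }

        // In a test: var order = new OrderBuilder(repo).WithTotal(99m).BuildAndSave();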

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of self referential many to many as described in datamapper associations. Both cases consist of an Item class, which may require many other items. In both cases, I required the ruby benchmark library and source file, created two items and benchmarked the require/unrequire functions as below: Benchmark.bmbm do |x| x.report("require:") { item_1.require_item item_2, 10 } x.report("unrequire:") { item_1.unrequire_item item_2 } end To be clear, both functions are datamapper add/modify functions like: componentMaps.create :component_id => item.id, :quantity => quantity componentMaps.all(:component_id => item.id).destroy! and links_to_components.create :component_id => item.id, :quantity => quantity links_to_components.all(:component_id => item.id).destroy! The results are variable and in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by: (1..10000).each do |i| Item.create :name => "item_#{i}" end Benchmark.bmbm do |x| x.report("Get") { item = Item.get 9712 } x.report("First") { item = Item.first :name => "item_9712" } end where the results were very different, like 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.

    Read the article

  • Combining lists of objects containing lists of objects in c#

    - by Dan H
    The title is a little prosaic, I know. I have 3 classes (Users, Cases, Offices). Users and Cases contain a list of Offices inside of them. I need to compare the Office lists from Users and Cases and if the IDs of Offices match, I need to add those IDs from Cases to the Users. So the end goal is to have the Users class have any Offices that match the Offices in the Cases class. Any ideas? My code (which isn't working):

        foreach (Users users in userList)
            foreach (Cases cases in caseList)
                foreach (Offices userOffice in users.officeList)
                    foreach (Offices caseOffice in cases.officeList)
                    {
                        if (userOffice.ID == caseOffice.ID)
                            users.caseAdminIDList.Add(cases.adminID);
                    }//end foreach

    Here are my data classes:

        class Users
        {
            public Users()
            {
                List<Offices> officeList = new List<Offices>();
                List<int> caseAdminIDList = new List<int>();
                ID = 0;
            }//end constructor

            public int ID { get; set; }
            public string name { get; set; }
            public int adminID { get; set; }
            public string ADuserName { get; set; }
            public bool alreadyInDB { get; set; }
            public bool alreadyInAdminDB { get; set; }
            public bool deleted { get; set; }
            public List<int> caseAdminIDList { get; set; }
            public List<Offices> officeList { get; set; }
        }

        class Offices
        {
            public int ID { get; set; }
            public string name { get; set; }
        }

        class Cases
        {
            public Cases()
            {
                List<Offices> officeList = new List<Offices>();
                List<int> caseAdminIDList = new List<int>();
                ID = 0;
            }//end constructor

            public int ID { get; set; }
            public string name { get; set; }
            public int adminID { get; set; }
            public string ADuserName { get; set; }
            public bool alreadyInDB { get; set; }
            public bool alreadyInAdminDB { get; set; }
            public bool deleted { get; set; }
            public List<int> caseAdminIDList { get; set; }
            public List<Offices> officeList { get; set; }
        }
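    For reference, a sketch of one way the matching could be written with LINQ, assuming the same class shapes as above. Note that the constructors shown assign new lists to local variables rather than to the officeList and caseAdminIDList properties, so those properties need to be initialized before this (or the original nested loops) can work:

        using System.Collections.Generic;
        using System.Linq;

        // For each user, collect the adminID of every case that shares at least one office.
        static void LinkCasesToUsers(List<Users> userList, List<Cases> caseList)
        {
            foreach (Users user in userList)
            {
                var userOfficeIds = new HashSet<int>(user.officeList.Select(o => o.ID));

                var matchingAdminIds = caseList
                    .Where(c => c.officeList.Any(o => userOfficeIds.Contains(o.ID)))
                    .Select(c => c.adminID);

                user.caseAdminIDList.AddRange(matchingAdminIds);
            }
        }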

    Read the article

  • Maintaining traceability up-to-date as project evolves

    - by Catalin Pitiș
    During various projects, I needed to make sure that the use case model I developed during the analysis phase covered the requirements of the project. For that, I was able to have some degree of traceability between requirement statements (uniquely identified) and use cases (also uniquely identified). In some cases, enabling traceability implied some additional effort that I considered (and later proved) to be a good investment. Now, the biggest problem I faced was to maintain this traceability later, when things started to change (as a result of change requests, or as a result of use case changes). Any ideas of best practices for traceability maintenance? (It can apply to other items in the project - e.g. use cases and test cases, or requirements and acceptance test cases.) Later edit: Tools might help, but they can't detect gaps or errors in traceability. Navigation... maybe, but no guarantee that the traceability is up-to-date or correct after applying the changes.

    Read the article

  • How to convert many thousands of lines of VBScript to C#?

    - by Ross Patterson
    I have a collection of about 10,000 small VBScript programs (50-100 lines each) and a small collection of larger ones, and I'm looking for a way to convert them to C# without resorting to by-hand transliteration. The programs are automated test cases for a web application, written for HP/Mercury's QuickTest Pro, and I'm trying to turn them into test cases for Selenium. Luckily, the tests appear to be well-written, using a library of building blocks and idioms (the larger programs), so the test cases actually resemble a domain-specific language more than they do VBScript, and the QTP-ness is well-buried inside the libraries. Ideally, what I'm searching for is a tool that can do the syntactic transformation from VBScript to C# for both the DSL-ish test cases and also the more complicated building-block libraries. That would leave me with a manual cleanup of the libraries, and probably very little work on the test cases. If I could find a VBScript-to-VB.NET translator, I'd take that also, as I suspect I could compile the VB.NET and then de-compile to C# using .NET Reflector or something similar. Plan B is to write a translator of my own for the test cases, since they're in a very straight-line style, but it wouldn't help with the libraries. Any suggestions? I haven't written a compiler in at least 15 years, and while I haven't forgotten how, I'm not looking forward to it - least of all for VBScript!

    Read the article

  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    Experimenting with Cockburn use cases in code: I was writing some complicated UI code. I decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing, because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases. The idea is that the stack trace could include some UI use case steps IN ENGLISH. It turned out to be a fun way to achieve a mini, pseudo light-weight domain language without having to write a DSL compiler. So my question is whether or not this is a good way to do this. Has anyone out there ever done something similar? C# example snippets follow.

    Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). The user must click on stuff in one particular user control (possibly making some kind of selection) and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event.

        using MyCompany.MyApp.Web.UseCases;

        protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
        {
            // some code here to respond and make "state" changes or whatever
            //
            // blah blah blah
            // finally we have this (how did we know to call the fish level method?? because we knew when we wrote the code to send the event in the user control)
            UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
        }

        protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                case FishLevel.SomeNamedUIWorkFlow.DrillDownOnDetails:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                case FishLevel.SomeNamedUIWorkFlow.CancelMultiSelect:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                // more cases...
            }
        }

        //also we have
        protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
                    // do stuff
                    break;
                // more cases...
            }
        }

    So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this: class SeaLevel... class FishLevel... class KiteLevel... The workflow use cases embedded in the classes could be inner classes or static methods or enumerations or whatever gives you the cleanest namespace. I can't remember what I did originally but you get the picture.

    Read the article

  • Filling a bound form with information from another table while creating a new record

    - by amir shadaab
    I have an Excel sheet with information about each employee. I keep getting a new, updated spreadsheet every month. I have to create a database managing cases related to the employees. I already have the database and the bound form created for the cases, which also contains emp info fields. What I am trying to do is to only type in the emp id in the form and have the form look it up in the spreadsheet (which can be a table in the cases db), populate the other fields in the form, and have that information go into the cases db. Can this be done?

    Read the article

  • Problems with mounting .ISO files

    - by user89599
    I'm using Precise, with GNOME. I've attempted to install some retro, multi-CD games (KOTOR1) via .ISO images and WINE, but I can't get the ISOs to mount correctly. First I tried GMountISO, which showed a read-only warning but worked - until I went to unmount. When the installation program asked for CD 2, I couldn't unmount from the /cdrom folder because neither GMountISO nor umount from the terminal could detect the mount. After a reboot, I changed to GISOMount (different somehow, I guess?), but when I attempt to mount the ISO I get an error window explaining the syntax of the mount command, which is also what I get when I attempt to use mount from the terminal. After checking /media from the terminal on a lark, I see the disc mounted there twice over, but umount won't recognize it, even when I specify the full path: sudo umount /media/KOTOR_1.iso. It cleared up upon reboot. Can someone please assist? UPDATE :: Thanks for the quick response. What's weird is the images are like stuck in limbo... I'll show you:

        sc@sc-HP-110-3700:/media$ ls
        cdrom  KOTOR_1(0)(vcd)  KOTOR_1(vcd)
        sc@sc-HP-110-3700:/media$ cd cdrom
        sc@sc-HP-110-3700:/media/cdrom$ ls
        sc@sc-HP-110-3700:/media/cdrom$ cd ..
        sc@sc-HP-110-3700:/media$ umount KOTOR_1(vcd)
        bash: syntax error near unexpected token `('
        sc@sc-HP-110-3700:/media$ umount KOTOR_1.ISO
        umount: KOTOR_1.ISO is not mounted (according to mtab)
        sc@sc-HP-110-3700:/media$ sudo umount -a
        umount: /run/shm: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
        umount: /run: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
        umount: /dev: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
        umount: /: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
        sc@sc-HP-110-3700:/media$

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?
    Remote IntraDoc Client is a Java-enabled API that leverages simple transport protocols like Socket, HTTP and JAX/WS to execute content service operations in WebCenter Content Server. Each operation in the Content Server is, by design, executed statelessly and returns a complete result for the request. Each request object simply specifies, in a Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built in the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom-made ones); RIDC can be executed from any Java SE application that has WebCenter Content Services needs.

    WebCenter Portal and the example Accelerator RIDC adapter framework
    WebCenter Portal currently integrates and leverages WebCenter Content Services to enable the available use cases in the portal today, like Content Presenter and Doc Lib. However, the current use cases only cover a few of the scenarios that the Content Server has to offer; in addition, it is not rare that the customer requirements require additional steps and functionality that are provided by WebCenter Content but are not part of the use cases in WebCenter Portal. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million dollar question here is how I can leverage this infrastructure for my custom use cases. Oracle A-Team has, during its interactions, produced an accelerator adapter framework that will reuse and leverage the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), as well as a very comprehensive design pattern to minimize the work involved when exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

    How do I get started?
    Through a few easy steps you will be on your way:
        1. Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp)
        2. Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this will add the project as a member of the application
        3. Update the Portal project dependencies to include the new RIDCCommon project
        4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form)
    You should by this stage have a similar structure in your JDeveloper Application:
        Project Portal
        Project PortalWebAssets
        Project RIDCCommon
    Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following:

    How do I implement my own operation?
        1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation
        2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation
        3. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext)
        4. The best practice for setting object references for the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can easily force the implementing class to pass the right information; you can also overload the constructor with more or fewer parameters as required
        5. Implement the execute method; the work you are supposed to do here is creating a new request binder and retrieving a response binder with the information in the request binder. In this case that information is the dDocName for which we want the DocInfo
        6. Secondly, you have to process the response binder by extracting the information you need from the response and storing it in a simple POJO Java Bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of GetDocInfoOperation, so we can return it from an access method
        7. Since the RIDCCommon API leverages the template pattern for the operations, you are now required to add a method that will enable access to the result after the execution of the operation. In the example below we added the method public SearchDataObject getDataObject() - this method returns the pre-processed SearchDataObject from the execute method
    This is it; as you can see in the code below, you do not need more than 32 lines of very simple code:

        1: public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
        2:     private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
        3:     private String dDocName = null;
        4:     private SearchDataObject sdo = null;
        5:
        6:     public GetDocInfoOperation(String dDocName) {
        7:         super();
        8:         this.dDocName = dDocName;
        9:     }
       10:
       11:     public boolean execute(RIDCManager manager, IdcClient client,
       12:                            IdcContext userContext) throws Exception {
       13:         DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
       14:         dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);
       15:
       16:         DataBinder responseData = getResponseBinder(dataBinder);
       17:         processResult(responseData);
       18:         return true;
       19:     }
       20:
       21:     private void processResult(DataBinder responseData) {
       22:         DataResultSet rs = responseData.getResultSet("DOC_INFO");
       23:         for(DataObject dobj : rs.getRows()) {
       24:             this.sdo = new SearchDataObject(dobj);
       25:         }
       26:         super.setMessage(responseData.getLocal(ATTR_MESSAGE));
       27:     }
       28:
       29:     public SearchDataObject getDataObject() {
       30:         return this.sdo;
       31:     }
       32: }

    How do I execute my operation?
    In the previous section we described how to create an operation, so by now you should be ready to execute it:
        1. Either add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or to a class of your own choice. Remember the RIDCManager is a very light object and can be created where needed
        2. Create a method signature that looks like this: public SearchDataObject getDocInfo(String dDocName) throws Exception
        3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName)
        4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo)
        5. Return the result by accessing it from the executed operation: return docInfo.getDataObject()

        1: private RIDCManager rMgr = null;
        2: private String lastOperationMessage = null;
        3:
        4: public ContentServicesDC() {
        5:     super();
        6:     this.rMgr = new RIDCManager();
        7: }
        8: ....
        9: public SearchDataObject getDocInfo(String dDocName) throws Exception {
       10:     GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
       11:     boolean boolVal = rMgr.executeOperation(docInfo);
       12:     lastOperationMessage = docInfo.getMessage();
       13:     return docInfo.getDataObject();
       14: }

    Get the binaries!
    The enclosed code is an example that can be used as a reference for how to consume and leverage similar use cases; the user has to guarantee appropriate quality and support.
    Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip
    RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
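    Since the post's actual test and production code does not appear in the text above, here is a compact C# sketch of the factor-based approach it describes (greedy find-translate-compile); this is an illustration, not the author's exact code:

        using System.Text;

        public static class ArabicRomanConversions
        {
            // Roman "digits" with their values, including the subtractive pairs, highest first.
            private static readonly (int Value, string Digit)[] Factors =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic)
            {
                var result = new StringBuilder();
                foreach (var (value, digit) in Factors)
                {
                    while (arabic >= value)      // find the highest remaining factor that fits
                    {
                        result.Append(digit);    // translate the factor and compile the result
                        arabic -= value;         // subtract and repeat with the remaining value
                    }
                }
                return result.ToString();
            }
        }

        // e.g. ArabicRomanConversions.ToRoman(1954) == "MCMLIV"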

    Read the article

  • Is there such a thing as too much experience?

    - by sunpech
    For modern software developers in today's world, is there such a thing as having too much experience with a certain technology or programming language? To a recruiter, interviewer, or company hiring -- could there often be cases where a particular candidate has so much experience in a certain area or technology that it works against the candidate being hired? I'm not talking about cases where a senior developer is applying for an entry-level developer position, and has a lot of experience in that sense. Nor am I talking about cases where a candidate is outright lying (e.g. 20+ years experience with Ruby on Rails). I've overheard this in conversations between hiring managers/developers during happy hours, yet I'm not quite sure I fully understand what they mean.

    Read the article

  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases, that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and at the end of his book mentions that at least two levels are needed most of the time. My question is, do you find use case levels necessary, helpful or unwanted? What are the reasons? Am I missing some important arguments?

    Read the article

  • Behavior-Driven Development / Use case diagram

    - by Mik378
    Given the growth of Behavior-Driven Development and the acceptance testing it imposes, are use case diagrams useful, or do they lead to "over-documentation"? Indeed, acceptance tests represent specifications by example, much as use cases do, although in a more generic manner (cases, not scenarios); aren't they too similar to maintain both at the start of a newly created project? From this link, one opinion is: Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are a more efficient way to work when compared to UseCases + AcceptanceTests. What do you think about this?

    Read the article

  • Dynamic tests with mstest and T4

    - by Victor Hurdugaci
    If you have used MSTest and NUnit you might be aware of the fact that the former doesn't support dynamic, data-driven test cases. For example, the following scenario cannot be achieved with out-of-the-box MSTest: given a dataset, create distinct test cases for each entry in it, using a predefined generic test case. The best result that can be achieved using MSTest is a single test case that will iterate through the dataset. There is one disadvantage: if the test fails for one entry in the dataset, the whole test case fails. So, in order to overcome the previously mentioned limitation, I decided to create a text template that will generate the test cases for me. As an example, I will write some tests for an integer multiplication function that has 2 bugs in it: Read more >> [Cross post from victorhurdugaci.com]
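    A hedged sketch of the kind of output such a text template might generate - one MSTest method per dataset entry, so a failing entry fails only its own test case (the Calculator.Multiply function and the test names are hypothetical):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MultiplicationGeneratedTests
        {
            // One generated method per (a, b, expected) row in the dataset.
            [TestMethod]
            public void Multiply_2_by_3_returns_6()
            {
                Assert.AreEqual(6, Calculator.Multiply(2, 3));
            }

            [TestMethod]
            public void Multiply_0_by_5_returns_0()
            {
                Assert.AreEqual(0, Calculator.Multiply(0, 5));
            }

            [TestMethod]
            public void Multiply_minus4_by_7_returns_minus28()
            {
                Assert.AreEqual(-28, Calculator.Multiply(-4, 7));
            }
        }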

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA Clusters.

    Test cases for each component - Oracle Application Server 10G

    General Application Server test cases
    This section is going to cover very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.

    Test Case 1: Check if you can see AS instances in the console
    Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: You should be able to see that all the instances in the AS cluster are listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS console to see if the instance is listed as part of the cluster.

    Test Case 2: Check to see if you can start/stop each component
    Implementation: Check each OC4J component on each AS instance. Start each and every component through the AS console to see if they will start and stop. Do that for each and every instance.
    Result: Each component should start and stop through the AS console. You can also verify that the component started by checking opmnctl status, logging onto each box associated with the cluster.

    Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
    Implementation: Pick an OC4J instance. Create a new data-source through the AS console. Modify an existing data-source or connection pool (optional).
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS console has successfully updated a remote file and MBeans are communicating correctly.

    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check for start and stop statuses.

    HTTP server test cases
    This section will deal with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole

    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster
    Implementation: Access the BPELConsole. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user. The timestamp and URL would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down this HTTP server by using opmnctl stopproc - this is a graceful shutdown. Access the BPELConsole again (please note that you should have a load balancer in front of the HTTP servers and have configured the Apache Virtual Host, see the EDG for steps). Check $ORACLE_HOME\Apache\Apache\logs\access_log again for the timestamp and the URL that was accessed by the user; the timestamp and URL would look like the above.
    Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.

    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node - this simulates a network failure.

    Test Case 3: In test case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
    Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux use kill -9 <PID> to kill the HTTP server. Access the BPEL console.
    Result: As you shut down the HTTP server, OPMN will restart the HTTP server. The restart may be so quick that the LBR may still route the request to the same server. One way to check if the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL console.

    BPEL test cases
    This section is going to cover scenarios dealing with BPEL clustering using jGroups, BPEL deployment and testing related to BPEL failover.

    Test Case 1: Verify that jGroups has initialized correctly. There is no real testing in this use case, just a visual verification by looking at log files that jGroups has initialized correctly. Check the opmn log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups related information during startup and steady-state operation.
Soon after startup you should find log entries for UDP or TCP.Example jGroups Log Entries for UDPApr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets ·         INFO: sockets will use interface 144.25.142.172·          ·         Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets·          ·         INFO: socket information:·          ·         local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32·         sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000·         mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000·         mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000·         Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces·          ·         -------------------------------------------------------·          ·         GMS: address is 144.25.142.172:1127·          ------------------------------------------------------- Example jGroups Log Entries for TCPApr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start ·         INFO: server socket created on 144.25.142.172:7900·          ·         Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces·          ·         -------------------------------------------------------·         GMS: address is 144.25.142.172:7900------------------------------------------------------- In the log below the "socket created on" indicates that the TCP socket is established on the own node at that IP address and port the "created socket to" shows that the second node has connected to the first node, matching the logfile above with the IP address and port.Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start ·         INFO: server socket created on 144.25.142.173:7901·          ·         Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces·          ·         ------------------------------------------------------·         GMS: address is 144.25.142.173:7901·         -------------------------------------------------------·         Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnectionINFO: created socket to 144.25.142.172:7900  Result By reviewing the log files, you can confirm if BPEL clustering at the jGroups level is working and that the jGroup channel is communicating. Test Case 2  Test connectivity between BPEL Nodes Implementation Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and number of hops between cluster nodes can affect performance as they have a tendency to take down connections after some time or simply block them.Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network." Result Using the above tools you can confirm if Multicast is working  and whether BPEL nodes are commnunicating. Test Case3 Test deployment of BPEL suitcase to one BPEL node.  
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance using ant, or JDeveloper, or via the BPEL console. Log on to the second BPEL console to check if the BPEL suitcase has been deployed.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment will be "deployed" to each node, by only deploying to a single BPEL instance in the BPEL cluster.

    Test Case 4: Test to see if the BPEL server fails over and if all asynch processes are picked up by the secondary BPEL instance
    Implementation: Deploy 2 asynch processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops or sleeps or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Please wait till this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL console and see that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instance will fail over to the secondary node and complete the flow.

    ESB test cases
    This section covers the use cases involved with testing an ESB cluster. For this section please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to get it to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater. I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points:

        The same user identity is running the process in both cases
        The same working directory is used in both cases
        The same dbdeploy code is running in both cases and with the same supplied parameters
        The same database connection string is being used in both cases
        The same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)

    Here's the top of the stack trace during the error:

        at System.Data.OracleClient.OCI.DetermineClientVersion()
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
        at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
        at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
        at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OracleClient.OracleConnection.Open()
        at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager.GetCurrentVersionFromDb()

    My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?
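    One way to compare the two environments is to have the failing process itself report what it sees. A small C# snippet (a hypothetical helper, e.g. called from a scratch console app or build step) that logs the details that most often differ, including process bitness, since a 64-bit process cannot load a 32-bit Oracle client:

        using System;

        public static class EnvironmentReport
        {
            public static void Dump()
            {
                // 32- vs 64-bit matters: the native Oracle client must match the process bitness.
                Console.WriteLine("Process bitness : {0}-bit", IntPtr.Size * 8);
                Console.WriteLine("User            : {0}\\{1}", Environment.UserDomainName, Environment.UserName);
                Console.WriteLine("Working dir     : {0}", Environment.CurrentDirectory);
                Console.WriteLine("ORACLE_HOME     : {0}", Environment.GetEnvironmentVariable("ORACLE_HOME"));
                Console.WriteLine("PATH            : {0}", Environment.GetEnvironmentVariable("PATH"));
            }
        }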

    Read the article

  • How to properly use glDiscardFramebufferEXT

    - by Rafael Spring
    This question relates to the OpenGL ES 2.0 extension EXT_discard_framebuffer. It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images in an undefined state, this means that either:
    - I don't care about the content anymore, since it has already been read with glReadPixels(), or
    - I don't care about the content anymore, since it has already been copied with glCopyTexSubImage(), or
    - I shouldn't have made the render in the first place.
    Clearly, only the first two cases make sense. Or are there other cases in which glDiscardFramebufferEXT() is useful? If so, which cases are these?
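    One further case worth illustrating is caring about only some of the attachments: on tile-based mobile GPUs the depth (and stencil) contents are typically never needed once the frame has been drawn, and discarding them lets the driver skip writing those tiles back to memory while the color attachment is still presented. Below is a minimal sketch of that pattern, assuming an iOS-style OpenGL ES 2.0 setup; the framebuffer and renderbuffer handles are hypothetical placeholders.

        #include <OpenGLES/ES2/gl.h>
        #include <OpenGLES/ES2/glext.h>

        /* Render a frame into an application-created FBO, then discard the depth
           attachment because only the color attachment will be presented. */
        void render_and_present(GLuint fbo, GLuint colorRenderbuffer)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);

            /* ... issue draw calls here ... */

            /* Tell the driver the depth contents never have to be resolved to
               memory; on a tile-based GPU this avoids a costly tile write-back. */
            const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
            glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);

            /* The color attachment is untouched and can still be presented,
               e.g. via EAGLContext presentRenderbuffer: on iOS. */
            glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        }

    In other words, the extension is also useful whenever a render produces attachments you do need alongside ones you do not, not only when the whole render was wasted.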

    Read the article

  • Basic jUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String): public String multiply(String num1, String num2). I have done the implementation and created a test class with the following test cases involving the input String parameter:
    1) valid numbers 2) characters 3) special symbols 4) empty string 5) null value 6) 0 7) negative numbers 8) floats 9) boundary values 10) numbers that are valid but whose product is out of range 11) numbers with a + sign (+23)
    My questions:
    1) I'd like to know if "each and every" assertEquals() should be in its own test method, or can I group similar test cases, like a testInvalidArguments() that contains all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
    2) If testing an input value like the character "a", do I need to include test cases for ALL scenarios: "a" as the first argument, "a" as the second argument, and "a" and "b" as the 2 arguments?
    3) As per my understanding, the benefit of these unit tests is to find the cases where input from a user might fail and result in an exception, so that we can then give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
    4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough enough?
    5) Following from the above point, have I successfully tested the multiply() method?

    Read the article

  • PHP & MySQL - Deleting table rows problem.

    - by oReiLLy
    Okay, my script is supposed to delete a specific user's case, which is stored in 2 MySQL tables, but for some reason when the user deletes a specific case it deletes all of the user's cases. I only want it to delete the case the user selects. I was wondering how I can fix this problem? Thanks in advance for helping. Here is the PHP & MySQL code.
        if(isset($_POST['delete_case'])) {
            $cases_ids = array();
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli,"SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'");
            if (!$dbc) {
                print mysqli_error($mysqli);
            } else {
                while($row = mysqli_fetch_array($dbc)){
                    $cases_ids[] = $row["cases_id"];
                }
            }
            foreach($_POST['delete_id'] as $di) {
                if(in_array($di, $cases_ids)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli,"DELETE FROM users_cases WHERE cases_id = '$di'");
                    $dbc2 = mysqli_query($mysqli,"DELETE FROM cases WHERE id = '$di'");
                }
            }
        }
    Here is the XHTML code.
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="submit" name="delete_case" id="delete_case" value="Delete Case" />
            <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="submit" name="delete_case" id="delete_case" value="Delete Case" />
            <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="submit" name="delete_case" id="delete_case" value="Delete Case" />
            <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" />
        </li>
    Here are the MySQL tables.
        CREATE TABLE cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file VARCHAR(255) NOT NULL,
            case VARCHAR(255) NOT NULL,
            name VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );
        CREATE TABLE users_cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            cases_id INT UNSIGNED NOT NULL,
            user_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );

    Read the article

  • When is assembler faster than C?

    - by Adam Bellaire
    One of the stated reasons for knowing assembler is that, on occasion, it can be employed to write code that will be more performant than writing that code in a higher-level language, C in particular. However, I've also heard it stated many times that although that's not entirely false, the cases where assembler can actually be used to generate more performant code are both extremely rare and require expert knowledge of and experience with assembler. This question doesn't even get into the fact that assembler instructions will be machine-specific and non-portable, or any of the other aspects of assembler. There are plenty of good reasons for knowing assembler besides this one, of course, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus higher-level languages. Can anyone provide some specific examples of cases where assembler will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am pretty confident these cases exist, but I really want to know exactly how esoteric these cases are, since it seems to be a point of some contention.
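    Since the question asks for profiling evidence, here is a minimal sketch of a timing harness in C that could be used to back (or debunk) an "assembler is faster here" claim. The checksum_c routine is just a stand-in workload, and checksum_asm is a hypothetical hand-written assembly version assumed to be linked in from a separate object file; clock_gettime is POSIX.

        #include <stdio.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <time.h>

        #define N    (1 << 20)
        #define REPS 100

        static uint32_t buf[N];

        /* Reference implementation in plain C. */
        static uint32_t checksum_c(const uint32_t *p, size_t n)
        {
            uint32_t sum = 0;
            for (size_t i = 0; i < n; i++)
                sum += p[i];
            return sum;
        }

        /* Hypothetical hand-written assembly version, linked in separately. */
        extern uint32_t checksum_asm(const uint32_t *p, size_t n);

        static double seconds(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            for (size_t i = 0; i < N; i++)
                buf[i] = (uint32_t)i;

            volatile uint32_t sink = 0;  /* keep the calls from being optimized away */

            double t0 = seconds();
            for (int r = 0; r < REPS; r++)
                sink += checksum_c(buf, N);
            double t1 = seconds();
            for (int r = 0; r < REPS; r++)
                sink += checksum_asm(buf, N);
            double t2 = seconds();

            printf("C:   %.3f s\nasm: %.3f s\n", t1 - t0, t2 - t1);
            return (int)(sink & 1);  /* use sink so the work is not dead code */
        }

    Run it a few times with optimizations enabled (for example -O2) and, ideally, confirm the numbers with a sampling profiler before drawing any conclusions.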

    Read the article

  • What's the difference between train, validation and test sets in neural networks?

    - by Daniel
    I'm using this library http://pastebin.com/raw.php?i=aMtVv4RZ to implement a learning agent. I have generated the training cases, but I don't know for sure what the validation and test sets are. The teacher says: 70% should be training cases, 10% will be test cases and the remaining 20% should be validation cases. Thanks.
    Edit: I have this code for training, but I have no idea when to stop training.
        def train(self, train, validation, N=0.3, M=0.1):
            # N: learning rate
            # M: momentum factor
            accuracy = list()
            while(True):
                error = 0.0
                for p in train:
                    input, target = p
                    self.update(input)
                    error = error + self.backPropagate(target, N, M)
                print "validation"
                total = 0
                for p in validation:
                    input, target = p
                    output = self.update(input)
                    total += sum([abs(target - output) for target, output in zip(target, output)]) # sum of absolute differences between target and output
                accuracy.append(total)
                print min(accuracy)
                print sum(accuracy[-5:])/5
                #if i % 100 == 0: print 'error %-14f' % error
                if ? < ?: break

    Read the article
