Search Results

Search found 9718 results on 389 pages for 'classes'.


  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:
        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y
    I'd then have each object knowing what it has to do and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that would create a large number of classes. Another idea is to have one level parser class with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.

    Read the article

  • Implementing an ILogger interface to log data

    - by Jon
    I have a need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface will be used for testing and also in other projects. This is my interface:
        // This could be used by a filesystem, a web service, etc.
        public interface ILogger
        {
            List<string> PreviousLogRecords { get; set; }
            void Log(string data);
        }

        public interface IFileLogger : ILogger
        {
            string FilePath { get; set; }
            bool ValidFileName { get; }
        }

        public class MyClassUnderTest
        {
            public MyClassUnderTest(IFileLogger logger) { .... }
        }

        [Test]
        public void TestLogger()
        {
            var mock = new Mock<IFileLogger>();
            mock.Setup(x => x.Log(It.IsAny<string>())).AddsDataToList(); // Is this possible??
            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
            Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);
        }
    I don't believe this will work as it stands, since nothing is storing the items. Is this possible using Moq, and what do you think of the design of the interface?
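    One way the accumulation could be wired up with Moq's Callback, assuming the interfaces above and NUnit as in the snippet (the backing list and test names are illustrative, not part of the original post):

        using System.Collections.Generic;
        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class FileLoggerMockTests
        {
            [Test]
            public void Logs_three_times_when_input_is_split()
            {
                // Back the mocked PreviousLogRecords with a real list, and have Log() append to it.
                var stored = new List<string>();
                var mock = new Mock<IFileLogger>();
                mock.Setup(x => x.Log(It.IsAny<string>()))
                    .Callback<string>(data => stored.Add(data));
                mock.SetupGet(x => x.PreviousLogRecords).Returns(stored);

                var myClass = new MyClassUnderTest(mock.Object);
                myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");

                Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);
            }
        }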

    Read the article

  • Writing Java in Java

    - by Skeith
    I have been using Java for several months at work now and am becoming mildly competent in it. The problem, I think, is that I program C++ in Java. By that I mean I have always used C++ and am treating Java as a simple syntax change instead of appreciating it as its own language. For instance, I treated a static local variable in C++ as just a normal field in Java, since Java code all lives in classes and fields keep their values between method calls. Little things like this trip me up constantly, as I am self-taught. What I want is to invest the time to become a good Java programmer, not just a C++ programmer who can write in Java. The problem is I do not know how to do this. I have tried reading the Java documentation pages but I find them very clinical and hard to understand. So what I am looking for is recommendations on how I can learn to think in Java: books that teach Java concepts rather than Java syntax, online tutorials that I can work through that give it context, established Java traditions/best practices, and anything else you could recommend.

    Read the article

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So, it seems like it would make sense to make a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code): Function GetQuote(String ID) Returns Quote So, so far so good. Now, when this service is consumed, to keep things "de-coupled", we are creating essentially a duplicate of the Quote object and mapping from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases, these classes will have the exact same properties. So, if the Quote service is consumed by 5 other applications, we would have 6 definitions of what a "Quote" is. One for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?

    Read the article

  • Am I an idealist?

    - by ereOn
    This is not only a question, it is also a call for help. Since I started my career as a programmer, I have always tried to learn from my mistakes. I worked hard to learn best practices, and while I don't consider myself a C++ expert, I still believe I'm not a beginner either. I was recently hired into a company for C++ development. There I was told that my way of working was "against the rules" and that I would have to change my mind. Here are the points on which I disagree with my management (their words):
        "You should not use separate header files for your different classes. One big header file is both easier to read and faster to compile."
        "Trying to use different headers is counter-productive: use the same super-set of headers everywhere, and enforce the use of #pragma hdrstop to hasten compilation."
        "You may not use Boost or any other library that uses nested directories to organize its files. Our build machine doesn't work with nested directories. Moreover, you don't need Boost to create great software."
    One might think I'm somehow exaggerating things, but the sad truth is that I'm not. Those are their actual words. I believe that having separate files enhances maintainability and code correctness, and can speed up compilation through the use of the proper includes. Have you been in a similar situation? What should I do? I feel like it's actually impossible for me to work that way, and day after day my frustration grows.

    Read the article

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code and want to heed some of the advice within it, with particular interest in the Single Responsibility Principle (SRP). Currently there's a class called OrderProcessor, in the context of a manufacturing product order system. This class currently performs the following routine every n minutes:
        check the database for newly submitted + unprocessed orders (via a Data Layer class already, phew!)
        gather all the details of the orders
        mark them as in-process
        iterate through each to:
            perform some integrity checking
            call a web service on a 3rd-party system to place the order
            check the status return value of the web service for success/fail
            email somebody if the web service returns fail
        constantly log to a text file on each operation or possible fail point
    I've started by breaking out this class into new classes like:
        OrderService - poor name. This is the one that wakes up every n minutes.
        OrderGatherer - calls the DL to get the order from the database
        OrderIterator (? seems too forced or poorly named)
        OrderPlacer - calls the web service to place the order
        EmailSender
        Logger
    I'm struggling to find good names for each class and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities?
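    A hedged sketch of one possible split; the interface names (IOrderSource, IOrderValidator, IOrderPlacer, IFailureNotifier) and the stub types are illustrative suggestions, not an established naming convention:

        using System.Collections.Generic;

        public class Order { public int Id { get; set; } /* ...other order fields... */ }
        public class PlacementResult { public bool Succeeded { get; set; } public string Error { get; set; } }
        public interface ILogger { void Log(string message); }

        public interface IOrderSource
        {
            IList<Order> GetUnprocessedOrders();
            void MarkInProcess(Order order);
        }
        public interface IOrderValidator { bool IsValid(Order order); }
        public interface IOrderPlacer { PlacementResult Place(Order order); }      // wraps the 3rd-party web service
        public interface IFailureNotifier { void NotifyFailure(Order order, string reason); }

        // The orchestrator: it only wakes up every n minutes and coordinates the collaborators.
        public class OrderProcessingJob
        {
            private readonly IOrderSource _source;
            private readonly IOrderValidator _validator;
            private readonly IOrderPlacer _placer;
            private readonly IFailureNotifier _notifier;
            private readonly ILogger _log;

            public OrderProcessingJob(IOrderSource source, IOrderValidator validator,
                                      IOrderPlacer placer, IFailureNotifier notifier, ILogger log)
            {
                _source = source; _validator = validator; _placer = placer; _notifier = notifier; _log = log;
            }

            public void Run()
            {
                foreach (var order in _source.GetUnprocessedOrders())
                {
                    _source.MarkInProcess(order);

                    if (!_validator.IsValid(order))
                    {
                        _notifier.NotifyFailure(order, "failed integrity check");
                        continue;
                    }

                    var result = _placer.Place(order);
                    _log.Log("Placed order " + order.Id + ": success = " + result.Succeeded);

                    if (!result.Succeeded)
                        _notifier.NotifyFailure(order, result.Error);
                }
            }
        }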

    Read the article

  • Avoid overriding all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager on the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method draw that will be called by the manager. Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that. My first idea would be for Image to have a pointer to sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like on the image below). In the Polygon constructor I would write something like ptr_shape = &polygon_shape; This doesn't look very elegant because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape inside ptr_shape, like ptr_shape = new sf::ConvexShape(...); and to use a function that is only in ConvexShape I would cast it like so: ((sf::ConvexShape*)ptr_shape)->convex_method(); But that doesn't look very elegant either. I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program to be safe without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.

    Read the article

  • Code Clone Analysis on Rawr – Part 1

    - by Dylan Smith
    In this post we’ll take a look at the first result from the Code Clone Analysis, and do some refactoring to eliminate the duplication. The first result indicated that it found an exact match repeated 14 times across the solution, with 18 lines of duplicated code in each of the 14 blocks. Net Lines Of Code Deleted: 179. In this case the code in question was a bunch of classes representing the various Bosses. Every Boss class has a constructor that initializes a whole bunch of properties of that boss; however, for most bosses a lot of these are simply set to 0’s. Every Boss class inherits from the class MultiDiffBoss, so I simply moved all the initialization of the various properties to the base class constructor, and left it up to the Boss subclasses to only set those that are different from the default values. In this case there are actually 22 Boss subclasses; however, due to some inconsistencies in the code structure, Code Clone only identified 14 of them as identical blocks. Since I was in there refactoring the 14 already identified, it was pretty straightforward to identify the other 8 subclasses that had the same duplicated behavior and refactor those also. Note: Code Clone Analysis is pretty slow right now. It takes approx 1 min to build this solution, but it takes 9 mins to run Code Clone Analysis. Personally, if the results are high quality I’m OK with it taking a long time to run, since I don’t expect it’s something I would be running all that often. However, it would be nice to be able to run it as part of a nightly build, but at this time I don’t believe it’s possible to run it outside of Visual Studio due to a dependency on the meta-data available in the VS environment.
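    A hedged sketch of the shape of that refactoring; the property names below are illustrative, not Rawr’s actual boss fields:

        public abstract class MultiDiffBoss
        {
            protected MultiDiffBoss()
            {
                // Defaults most bosses share; previously duplicated in every subclass constructor.
                MovementSpeed = 0;
                AttacksPerSecond = 0;
                ArmorDamageReduction = 0;
            }

            public double MovementSpeed { get; set; }
            public double AttacksPerSecond { get; set; }
            public double ArmorDamageReduction { get; set; }
        }

        public class SomeBoss : MultiDiffBoss
        {
            public SomeBoss()
            {
                // Only the values that differ from the defaults remain here.
                AttacksPerSecond = 1.5;
            }
        }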

    Read the article

  • What Does Installing Ubuntu "Alongside" Windows Entail?

    - by Soft Skeleton
    I recently posted a question about an error I was receiving when trying to access Ubuntu from the boot menu. I am using Windows 7 and Ubuntu 12.x (I THINK, because I haven't accessed it in over a year due to being unable to run an important program for one of my classes on Ubuntu). On another laptop, I partitioned the hard drive and installed Windows and Ubuntu on the partitions. On this laptop, I simply installed Ubuntu from Windows, picking the option "alongside Windows", and didn't partition my hard drive manually. I was under the impression that "alongside" entailed that Ubuntu would partition my hard drive, and that if I were to return my Windows partition to factory settings it would not affect the Ubuntu partition. However, given my current problem, I am wondering if I was mistaken in this assumption? When installing Ubuntu from Windows, selecting "alongside Windows" as the option from the Ubuntu installer, does that simply install Ubuntu within the Windows partition, so that returning it to factory settings would wipe out anything I had on the Ubuntu OS as well? Ubuntu is still in the boot menu as an option, but when I try to access it it says the drive is "corrupt", and Wubi is mentioned in the error. I additionally tried downloading a program run from Windows to investigate partitions, and there was no sign of my Ubuntu partition visible from Windows. Is it possible Windows just can't see it? Any insight, corrections or answers are appreciated.

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial Trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 will not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility of course is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?

    Read the article

  • Understanding Asynchronous Programming with .NET Reflector

    - by Nick Harrison
    When trying to understand and learn the .NET Framework, there is no substitute for being able to see what is going on behind the scenes inside even the most confusing assemblies, and .NET Reflector makes this possible. Personally, I never fully understood connection pooling until I was able to poke around in key classes in the System.Data assembly. All of a sudden, integrating with third-party components was much simpler, even without vendor documentation! With a team devoted to developing and extending Reflector, Red Gate have made it possible for us to step into and actually debug assemblies such as System.Data as though the source code were part of our solution. This maybe doesn't sound like much, but it dramatically improves the way you can relate to and understand code that isn't your own. Now that Microsoft has officially launched Visual Studio 2012, Reflector is also fully integrated with the new IDE, and supports the most complex language feature currently at our command: asynchronous processing. Without understanding what is going on behind the scenes in the .NET Framework, it is difficult to appreciate what asynchronicity actually brings to the table and, without Reflector, we would never know the Arthur C. Clarke magic that the compiler does on our behalf. Join me as we explore the new asynchronous processing model, as well as review the often misunderstood and underappreciated yield keyword (you'll see the connection when we dive into how the CLR handles async). Read more here
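    For reference, a minimal sketch of the two features the article digs into; both the async method and the iterator are compiled into generated state-machine types, which is exactly what Reflector lets you inspect:

        using System.Collections.Generic;
        using System.Threading.Tasks;

        public class CompilerMagicSketch
        {
            // Compiles into a state machine that resumes after the awaited task completes.
            public async Task<int> DelayedAnswerAsync()
            {
                await Task.Delay(1000);
                return 42;
            }

            // Also compiles into a generated class implementing IEnumerator<int>;
            // each yield return is a suspension point, much like await.
            public IEnumerable<int> FirstSquares(int count)
            {
                for (int i = 1; i <= count; i++)
                    yield return i * i;
            }
        }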

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I've only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to make a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with MVVM and the Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (People, Salary, Vehicles/Assets, Statistics, etc.). I use integrated authentication, and due to the initial HR build, I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard given their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm part way through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and Miscellaneous Utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
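    One possible shape for that reorganization, sketched with hypothetical names (the Company.Dashboard namespaces and the per-domain HumanResourcesContext are illustrative, not a prescribed standard):

        // Group by data domain first, then by technical role inside each domain, e.g.:
        //   Company.Dashboard.HumanResources.{Data, Repositories, ViewModels, Views}
        //   Company.Dashboard.Assets.{Data, Repositories, ViewModels, Views}
        //   Company.Dashboard.Common.{Utilities, XamlResources}

        namespace Company.Dashboard.HumanResources.Data
        {
            using System.Data.Entity;   // EF 6

            // A smaller DbContext per domain keeps each model manageable;
            // the contexts can still point at the same physical database.
            public class HumanResourcesContext : DbContext
            {
                public DbSet<Person> People { get; set; }
                public DbSet<Position> Positions { get; set; }
            }

            public class Person   { public int Id { get; set; } public string Name { get; set; } }
            public class Position { public int Id { get; set; } public string Title { get; set; } }
        }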

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming error exceptions; he should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I create my own exception classes for all failure exceptions now, and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions: Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
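    A small sketch of that distinction in C#; ReportDeliveryFailedException and ReportSender are made-up types for illustration:

        using System;
        using System.IO;

        // Thrown when the operation fails through no fault of the caller; callers are expected to catch this one.
        public class ReportDeliveryFailedException : Exception
        {
            public ReportDeliveryFailedException(string message, Exception inner) : base(message, inner) { }
        }

        public class ReportSender : IDisposable
        {
            private readonly string _path;
            private bool _disposed;

            public ReportSender(string path)
            {
                if (path == null) throw new ArgumentNullException("path");   // programming error
                _path = path;
            }

            /// <exception cref="ReportDeliveryFailedException">The report could not be written.</exception>
            public void Send(string report)
            {
                // Programming errors: guard clauses that only an "outer" handler should ever see.
                if (report == null) throw new ArgumentNullException("report");
                if (_disposed) throw new ObjectDisposedException("ReportSender");

                try
                {
                    File.AppendAllText(_path, report);
                }
                catch (IOException ex)
                {
                    // Failure exception: wrapped, documented, and meant to be caught by the immediate caller.
                    throw new ReportDeliveryFailedException("Could not deliver the report.", ex);
                }
            }

            public void Dispose() { _disposed = true; }
        }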

    Read the article

  • What level of detail to use in interface member descriptions?

    - by famousgarkin
    I am extracting interfaces from some classes in .NET, and I am not completely sure about what level of detail of description to use for some of the interface members (properties, methods). An example:
        interface ISomeInterface
        {
            /// <summary>
            /// Checks if the object is checked out.
            /// </summary>
            /// <returns>
            /// Returns true if the object is checked out, or if the object locking is not enabled,
            /// otherwise returns false.
            /// </returns>
            bool IsObjectCheckedOut();
        }

        class SomeImplementation : ISomeInterface
        {
            public bool IsObjectCheckedOut()
            {
                // An implementation of the method that returns true if the object is checked out,
                // or if the object locking is not enabled
            }
        }
    The part in question is the <returns>...</returns> section of the IsObjectCheckedOut description in the interface. Is it ok to include such detail about the return value in the interface itself, as the code that will work with the interface should know exactly what that method will do? All the current implementations of the method do just that. But is it ok to limit the possible other/future implementations by the description this way? Or should this not be included in the interface description, as there is no way to actually ensure that other/future implementations will do exactly this? Is it better to be as general as possible regarding the interface in such circumstances? I am currently inclined toward the latter option.

    Read the article

  • Why is the use of abstractions (such as LINQ) so taboo?

    - by Matthew Patrick Cashatt
    I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt like the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual so I politely asked my last interviewer for some constructive feedback, and he delivered! The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense--in fact, it made the other rejections make sense too because I blabbed about LINQ in those interviews as well and it didn't seem that the interviewers knew much about LINQ (even though they were .NET guys). So now I am left with this question: If we are supposed to be "standing on the shoulders of giants" and using abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code "off the shelf" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it is an abstraction, is simply an abstraction of all the same algorithms one would write to accomplish exactly the same end. Only a performance test could tell you if your custom approach was better, but if something like LINQ met the requirements, why bother writing your own classes in the first place? I don't mean to focus on LINQ here. I am sure that the Java world has something comparable, I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write.
    UPDATE: As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try and make use of it? Is it possible that people just don't fully understand what it does?
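    For concreteness, the kind of equivalence in question; both methods below do the same filtering and projection, and the LINQ form is the abstraction being debated (Customer is a made-up type):

        using System.Collections.Generic;
        using System.Linq;

        public class Customer { public string Name { get; set; } public bool IsActive { get; set; } }

        public static class ActiveCustomers
        {
            // Hand-rolled, lower-level version.
            public static List<string> NamesWithLoop(IEnumerable<Customer> customers)
            {
                var names = new List<string>();
                foreach (var c in customers)
                    if (c.IsActive)
                        names.Add(c.Name);
                return names;
            }

            // LINQ version of exactly the same algorithm.
            public static List<string> NamesWithLinq(IEnumerable<Customer> customers)
            {
                return customers.Where(c => c.IsActive)
                                .Select(c => c.Name)
                                .ToList();
            }
        }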

    Read the article

  • Entity Framework: separating entities for product and customer-specific implementations

    - by Codecat
    I am designing an application with the intention of making it a product line. I would like to extend the functionality across all layers, and my first struggle is with the domain models. For example, core functionality would have an entity named Invoice with a few standard fields, and then customer requirements will add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Also, every customer will get their own deployment.
        public class Product.Domain.Invoice
        {
            public int InvoiceId { get; set; }
            // Other fields
        }
    How should I approach this problem?
    Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name in one model.
        public class CustomerA.Domain.Invoice : Product.Domain.Invoice
        {
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }
    Solution 2: create a separate table and link it to the core domain table. Reusing services and controllers could be harder.
        public class CustomerA.Domain.CustomerAInvoice
        {
            public Product.Domain.Invoice Invoice { get; set; }
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }

    Read the article

  • With the outcome of the Oracle vs Google trial, does that mean Mono is now safe from Microsoft? [closed]

    - by Evan Plaice
    According to an article on Ars Technica, the judge in the case ruled that APIs are not patentable. He compared the structure of modules/methods/classes/functions to libraries/books/chapters. To patent an API would be to put a patent on thought itself; it's the internal implementations that really matter. With that in mind, Mono (a C# clone for Linux/Mac) has always been viewed tentatively because, even though C# and the CLI are ECMA standards, Microsoft holds patents on the technology. Microsoft holds a covenant not to sue open source developers based on their patents, but has maintained the ability to pull the plug on the Mono development team if they felt the project was a threat. With the recent ruling, is Mono finally out of the woods? A firm precedent has been established that patents can't be applied to APIs. From what I understand, none of the Mono implementation is copied verbatim, only the API structure and functionality. It's a topic I have been personally interested in for years now, as I have spent a lot of time developing cross-platform C# libraries in MonoDevelop. I acknowledge that this is a controversial topic; if you have opinions, that's what commenting is for. Try to keep the answers factual and based on established sources.

    Read the article

  • Should all public methods in an abstract class be marked virtual?

    - by Justin Pihony
    I recently had to update an abstract base class in some OSS I was using so that it was more testable, by making its methods virtual (I could not use an interface as it combined two). This got me thinking about whether I should mark only the methods I needed as virtual, or whether I should mark every public method/property virtual. I generally agree with Roy Osherove that every method should be made virtual, but I came across this article that got me thinking about whether this was necessary or not. I am going to limit this to abstract classes for simplicity, however (whether all concrete public methods should be virtual is especially debatable, I am sure). I could see where you might want to allow a subclass to use a method, but not want it overriding the implementation. However, as long as you trust that Liskov's Substitution Principle will be followed, then why would you not allow it to be overridden? By marking a method abstract, you are forcing a certain override anyway, so it seems to me that all public methods inside of an abstract class should indeed be marked virtual. However, I wanted to ask in case there was something I might not be thinking of. Should all public methods within an abstract class be made virtual?
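    A compact illustration of the three options being weighed; ReportGenerator and its members are made-up names:

        public abstract class ReportGenerator
        {
            // Must be overridden: the subclass supplies the core behaviour.
            public abstract string BuildBody();

            // May be overridden: a sensible default that subclasses (and test doubles) can replace.
            public virtual string BuildHeader()
            {
                return "Standard header";
            }

            // Cannot be overridden: the base class keeps control of the overall flow.
            public string Generate()
            {
                return BuildHeader() + "\n" + BuildBody();
            }
        }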

    Read the article

  • What should I "forget" when going to Javascript?

    - by ElGringoGrande
    I went from C=64 BASIC and assembler to FORTRAN and C, then to C++ and Java. Professionally I started in Visual Basic for Applications, then moved to Visual Basic 4, 5, 6. After that VB.NET and C#, with some Java here and there. I have played with Ruby and Python and found both fun. During each step I never felt like I had to forget what I had learned before. I always felt like I was just learning better and/or slightly different ways of doing things, but the difference was not major. The difference was like the difference between American, Australian and British English. (Maybe assembler was Latin and FORTRAN was Spanish.) But now I am using JavaScript to do real, actual work. (Before, I used it as a "scripting" language, pure and simple.) And I just feel like I have to forget some things to become proficient in it. It feels like some old Egyptian language. What should I forget? Is it just that code organization is different (no real classes, so no one-class-one-file)? Or is it something more basic?

    Read the article

  • Why is CS never a topic of conversation for the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you had better know the anatomy of the human body, and if you are an astronomer, you had better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There's a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge on how computers work, these very important topics (IMHO) never seemed to get the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: does the public in general not care to hear about the computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge on how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • Grails project structure

    - by Martin Janicek
    Good news everyone! I've changed the structure of the Grails project as requested in issue 160028, and it should be much more user friendly than before. There are actually two things I've fixed/implemented. First of all, the source folders are finally represented in the same way as for Java projects (which means that instead of the folder-based structure it now uses a package-based structure). The difference can be seen in the pictures below, which show the old folder-based structure next to the new package-based structure. A second, minor and quite related change can be seen in those pictures too: there are different icons for different structures. For example, the Views and Layouts items are folder based, Domain Classes are package based, and so on.

    Read the article

  • How should I create a mutable, varied JTree with arbitrary/generic category nodes?

    - by Pureferret
    Please note: I don't want coding help here; I'm on Programmers for a reason. I want to improve my program planning/writing skills, not (just) my understanding of Java. I'm trying to figure out how to make a tree which has an arbitrary category system, based on the skills listed for this LARP game here. My previous attempt had a bool for whether a skill was also a category. Trying to code around that was messy. Drawing out my tree, I noticed that only my 'leaves' were skills and I'd labeled the others as categories. Explanation of the tree: the tree is 'born' with a set of hard-coded highest-level categories (Weapons, Physical and Mental, Medical, etc.). From this the user needs to be able to add a skill. Ultimately they want to add 'One-handed Sword Specialisation', for instance. To do so you'd ideally click 'add' with Weapons selected and then select One-handed from a combo box, then click add again and enter a name in a text field. Then click add again to add a 'level' or 'tier': first Proficiency, then Specialisation. Of course, if you want to buy a different skill it's completely different, which is what I'm having trouble getting my head around, let alone programming in. What is a good system for describing this sort of tree in code? All the other JTree examples I've seen have some predictable pattern, and I don't want to have to code this all in 'literals'. Should I be using abstract classes? Interfaces? How can I make this sort of cluster of objects extensible when I add in other skills not listed above that behave differently? If there is not a good system to use, is there a good process for working out how to do this sort of thing?

    Read the article

  • I have an MIS degree. How do I sell myself as a programmer?

    - by hydroparadise
    So, I graduated with a BSBA in Management Information Systems with honors almost 2 years ago, which is more of a business degree. As of right now, I do have the job title of "Programmer", but it's more of a report-writing position in an arbitrary, proprietary language called PowerOn, with the occasional interesting project using more mainstream technologies like .NET and Java. I am also somewhat isolated, being the only programmer in the workplace, which I believe is a detriment to my career path. The only people I have to bounce ideas against are those on the various SE sites. I don't regret going MIS, but over the past couple of years I have discovered my passion for coding, even though I have been doing some form of coding professionally and as an enthusiast for years. I do want to pursue my Masters in CS (at a later time), but I am not sure if I necessarily need a CS degree to get in with a team of programmers. In addition, I do have a number of classes I have taken for different languages along the way (C++, Java, SQL, and VB.Net). I believe my strength is in problem solving, where code is just a tool for tackling the problem if needed. My question: how do I best sell myself as a programmer? Should I continue pounding out reports and wait till I have my Masters in CS? Or am I viable as a programmer as I stand?

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. The software is used to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a danger scale (this is a fake piece of software and a fake specification; I don't want to discuss them here). There are already 100,000 animal records saved in the DB. New feature: one year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered. The problem: I already have 100,000 recorded animals without a class or a discovery location, but I need to insert a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives to solve this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records; the time has already passed and I can't go back in time to get it.
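    One common way to handle this, sketched as an EF 6 code-first migration (the table and column names are made up; the same three steps can be done in plain SQL): add the column as nullable, backfill an agreed sentinel value, then tighten it to NOT NULL so the rule also lives in the database.

        using System.Data.Entity.Migrations;

        public partial class AddAnimalClass : DbMigration
        {
            public override void Up()
            {
                // 1. Add the column as nullable so the 100,000 existing rows remain valid.
                AddColumn("dbo.Animals", "AnimalClass", c => c.String(nullable: true));

                // 2. Backfill a sentinel value the business agrees on (or have users fill it in over time).
                Sql("UPDATE dbo.Animals SET AnimalClass = 'Unclassified' WHERE AnimalClass IS NULL");

                // 3. Now the NOT NULL rule can live in the database as well as in the application.
                AlterColumn("dbo.Animals", "AnimalClass", c => c.String(nullable: false));
            }

            public override void Down()
            {
                DropColumn("dbo.Animals", "AnimalClass");
            }
        }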

    Read the article

  • Should I make a separate unit test for a method, if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of the parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods that, in most cases, return a new instance of a new type when a chained method is called. The returned instances only modify the root parent's state, not their own. An overly simplified example, to get the point across:
        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);
    Say I write a unit test under the Box class's unit tests: box.AddRabbits().SetCount(14). I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching it, even though it's far simpler to write a test for the above call than to first write a unit test for BoxedRabbits separately?
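    For comparison, a direct test of BoxedRabbits on its own, using the types from the question (NUnit-style assertions assumed):

        using NUnit.Framework;

        [TestFixture]
        public class BoxedRabbitsTests
        {
            [Test]
            public void SetCount_AddsRabbitsToTheOwningBox()
            {
                var box = new Box();
                var boxedRabbits = new BoxedRabbits(box);

                boxedRabbits.SetCount(14);

                // The only observable effect is on the parent Box state.
                Assert.AreEqual(14, box.Items);
            }
        }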

    Read the article
