Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • What should be allowed inside getters and setters?

    - by Botond Balázs
    I got into an interesting internet argument about getter and setter methods and encapsulation. Someone said that all they should do is an assignment (setters) or a variable access (getters) to keep them "pure" and ensure encapsulation. Am I right that this would completely defeat the purpose of having getters and setters in the first place, and that validation and other logic (without strange side effects, of course) should be allowed? When should validation happen?

    - When setting the value, inside the setter (to protect the object from ever entering an invalid state - my opinion)
    - Before setting the value, outside the setter
    - Inside the object, before each time the value is used

    Is a setter allowed to change the value (maybe convert a valid value to some canonical internal representation)?
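
    For illustration, a minimal C# sketch of the validate-in-the-setter position; the Thermostat class and its bounds are hypothetical, not from the question:

    ```csharp
    using System;

    public class Thermostat
    {
        private double _celsius;

        // Validation lives in the setter, so the object can never
        // hold an invalid state.
        public double Celsius
        {
            get => _celsius;
            set
            {
                if (value < -273.15)
                    throw new ArgumentOutOfRangeException(nameof(value),
                        "Temperature cannot be below absolute zero.");
                // The setter may also canonicalize the value:
                _celsius = Math.Round(value, 1);
            }
        }
    }
    ```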

    Read the article

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is that the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record. Appointment rules:

    - Appointments should be set from 8AM to 8PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments.
    - Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day.
    - There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480.

    Now the tricky requirement: appointments are restricted to 8AM to 8PM in the consumer's local time zone. This means that the earliest time allowed for each appointment differs by time zone:

    - Eastern: 8A
    - Central: 9A
    - Mountain: 10A
    - Pacific: 11A
    - Alaska: 12P
    - Hawaii or undefined: 2P
    - Arizona: 10A or 11A depending on Daylight Saving Time

    Assuming a data set can be several thousand records, and each record will contain a time zone value, is there an algorithm I could use to determine a date and time for every record that matches the rules above?
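
    One possible approach, sketched below in C# under simplifying assumptions (DST handling omitted; every record reduced to the earliest Eastern hour its zone allows, assumed to lie between 8 and 14; all type and member names hypothetical): walk the day's 5-minute Eastern slots in order and greedily serve up to 5 eligible records per slot, preferring the most-constrained time zones so flexible records stay available for early slots on later days.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical model: each record carries the earliest Eastern hour at
    // which the consumer's local clock reads 8 AM (Pacific => 11, Hawaii => 14).
    public record Consumer(string Id, int EarliestEasternHour);

    public static class Scheduler
    {
        // The valid 5-minute Eastern slots: 8A-12P, 2P-4P and 6P-8P
        // (the 12P-2P and 4P-6P breaks are skipped): 96 slots per day.
        static IEnumerable<TimeSpan> DailySlots()
        {
            var windows = new[] { (8, 12), (14, 16), (18, 20) };
            foreach (var (start, end) in windows)
                for (var t = TimeSpan.FromHours(start);
                     t < TimeSpan.FromHours(end);
                     t += TimeSpan.FromMinutes(5))
                    yield return t;
        }

        public static Dictionary<string, (DateTime Day, TimeSpan Slot)> Assign(
            IEnumerable<Consumer> records, DateTime firstDay, int usersPerSlot = 5)
        {
            var result = new Dictionary<string, (DateTime Day, TimeSpan Slot)>();
            var pending = records.ToList();
            var day = firstDay.Date;

            while (pending.Count > 0)
            {
                foreach (var slot in DailySlots())
                {
                    // Serve up to usersPerSlot records whose earliest allowed
                    // Eastern time this slot satisfies, most-constrained first.
                    var chosen = pending
                        .Where(c => slot >= TimeSpan.FromHours(c.EarliestEasternHour))
                        .OrderByDescending(c => c.EarliestEasternHour)
                        .Take(usersPerSlot)
                        .ToList();
                    foreach (var c in chosen)
                    {
                        result[c.Id] = (day, slot);
                        pending.Remove(c);
                    }
                }
                day = day.AddDays(1); // anything left spills into the next day
            }
            return result;
        }
    }
    ```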

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point), it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given the exposure and enthusiasm around DVCS (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "How could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than simply asking that (even if this instrument is regarded as biased). If you know of any other instrument, please, by all means, mention it.

    Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider that DVCS may have generated enough buzz to be considered hype by some, so it perhaps deserves to be at least evaluated/pondered as such, using this instrument, in order to prove or disprove it to whatever degree. I'm an advocate of DVCS, BTW.

    I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to show that the social proof of DVCS adoption is not necessarily cargo cult, and in further research I stumbled upon Gartner's hype cycle, which describes technology maturity in five phases. My question is: what could be an indicator of the current location of distributed version control systems (git, Mercurial, Bazaar, etc. in general) at a particular phase of the hype cycle? In other (less convoluted) words: would you say that current expectations of DVCSs are (a) starting, (b) inflated, (c) decreasing (disillusionment), (d) increasing (enlightenment) or (e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • How to update child records when updating the master table using LINQ [closed]

    - by user20358
    I currently use a generic repository class that can update only a single table, like so:

    ```csharp
    public abstract class MyRepository<T> : IRepository<T> where T : class
    {
        protected IObjectSet<T> _objectSet;
        protected ObjectContext _context;

        public MyRepository(ObjectContext Context)
        {
            _objectSet = Context.CreateObjectSet<T>();
            _context = Context;
        }

        public IQueryable<T> GetAll()
        {
            return _objectSet.AsQueryable();
        }

        public IQueryable<T> Find(Expression<Func<T, bool>> filter)
        {
            return _objectSet.Where(filter);
        }

        public void Add(T entity)
        {
            _objectSet.AddObject(entity);
            _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Added);
            _context.SaveChanges();
        }

        public void Update(T entity)
        {
            _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Modified);
            _context.SaveChanges();
        }

        public void Delete(T entity)
        {
            _objectSet.Attach(entity);
            _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Deleted);
            _objectSet.DeleteObject(entity);
            _context.SaveChanges();
        }
    }
    ```

    For every table class generated by my EDMX designer I create another class like this:

    ```csharp
    public class CustomerRepo : MyRepository<Customer>
    {
        public CustomerRepo(ObjectContext context) : base(context) { }
    }
    ```

    For any updates that I need to make to a particular table I do this:

    ```csharp
    Customer CustomerObj = new Customer();
    CustomerObj.Prop1 = ...
    CustomerObj.Prop2 = ...
    CustomerObj.Prop3 = ...
    CustomerRepo.Update(CustomerObj);
    ```

    This works perfectly well when I am updating just the specific table called Customer. Now suppose I also need to update each row of a child table of Customer called Orders. What changes do I need to make to the MyRepository class? The Orders table will have multiple records for a Customer record, and multiple fields too, say Field1, Field2 and Field3. So my questions are:

    1. If I only need to update Field1 of the Orders table for some rows based on a condition, and Field2 for some other rows based on a different condition, what changes do I need to make?
    2. If there is no such condition, and all child rows need to be updated with the same value for all rows, what changes do I need to make?

    Thanks for taking the time. Looking forward to your inputs...
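
    Not knowing the exact model, here is one hedged sketch of the usual Entity Framework (ObjectContext) approach: load the parent with its children through an Orders navigation property, mutate the children, and save once so change tracking persists everything together. Customer, Orders and Field1..Field3 come from the question; CustomerId and both conditions are hypothetical:

    ```csharp
    using System.Data.Objects; // ObjectContext / ObjectSet (EF 4 EDMX)
    using System.Linq;

    public static class CustomerOrderUpdates
    {
        // Sketch only: assumes Customer.Orders is a navigation property
        // generated from the EDMX model.
        public static void UpdateCustomerOrders(ObjectContext context, int customerId)
        {
            var customer = context.CreateObjectSet<Customer>()
                                  .Include("Orders")
                                  .First(c => c.CustomerId == customerId);

            foreach (var order in customer.Orders)
            {
                // Question 1: different fields under different conditions.
                if (order.Field3 > 100)       // hypothetical condition A
                    order.Field1 = "reviewed";
                else if (order.Field3 == 0)   // hypothetical condition B
                    order.Field2 = "empty";

                // Question 2: no condition - same value for every child row:
                // order.Field1 = "bulk-updated";
            }

            context.SaveChanges(); // parent and children persist in one call
        }
    }
    ```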

    Read the article

  • No SQL Compact Edition support in VS 2013

    - by James Izzard
    Firstly, apologies if this is a poor question - I am an engineer, not a programmer. I have spent time moving from Visual Basic to C# and have started C#/SQL tutorials. I have noticed that VS 2013 has stopped supporting the Compact Edition database normally used for standalone desktop apps, although somebody has kindly written a plugin to re-implement support. I have also noticed a belief circulating that SQLite is to replace Compact Edition. Can anybody advise whether this is accurate? I am slightly confused as to which database is best suited for desktop app development inside VS 2013. Any comment greatly appreciated. Cheers, James

    Read the article

  • Learning C, C++ and C#

    - by Zac
    I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you could shed some light on a couple of questions I have before I decide on a course of action.

    BACKGROUND: I want to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested, and bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus.

    Question 1: I've read that it's counter-productive to learn C first, then C++ or C#, because you develop bad habits. In a lot of college courses I've looked at, learning C/C++ is mandatory to advance. Should I even bother learning C? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that C# incorporates .NET (which, I understand, is a collection of tools and libraries that make programming easier and faster).

    Question 2: Where did you learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do.

    LAST question :) I heard C++ is being "ported" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. :)

    Read the article

  • Object-Oriented equivalent of LISP's progn function?

    - by Archer
    I'm currently writing a LISP parser that iterates through some AutoLISP code and does its best to make it a little easier to read (changing prefix notation to infix notation, changing setq assignments to "=" assignments, etc.) for those who aren't used to LISP code or only learned object-oriented programming. While building up a "library" of the LISP commands the parser handles, I came across the LISP command progn. The only problem is that it looks like progn simply executes code in a specific order and sometimes (not usually) assigns the last value to a variable. Am I incorrect in assuming that, when translating progn into object-oriented terms, I can simply drop the progn form and print the statements it contains? If not, what would be a good equivalent for progn in an object-oriented language?
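
    For what it's worth, a rough C# analogue, under the assumption that progn means "evaluate several expressions in order and yield the last one's value", is a statement lambda whose final return supplies the value; everything below is illustrative:

    ```csharp
    using System;

    class PrognDemo
    {
        static void Main()
        {
            // Lisp: (setq x (progn (do-this) (do-that) 42))
            // C# analogue: a block of statements evaluated in order,
            // whose last expression's value becomes the result.
            int x = new Func<int>(() =>
            {
                Console.WriteLine("do-this");
                Console.WriteLine("do-that");
                return 42; // the "last expression" is the block's value
            })();

            Console.WriteLine(x); // 42
        }
    }
    ```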

    Read the article

  • Interviews by Software Companies

    - by Glenn Nelson
    I have been chosen as one of the 12 finalists for a full scholarship to the college of my choice, paid for by a software company as long as I major in Computer Science. I already had to write an essay on what has most shaped my life (programming being it), and that was the basis for the interview decision. I now have to go in for an interview with people from the company for the final decision in a week. I believe I have a good foundation in computer science already: I have roughly 4 years of programming experience in Java, C++, ASM and your typical web stuff, and I have done everything from making my own CMS for my site, to an assembler, to network file transfer applications. That said, what types of questions should I expect in an interview of this sort? Do I seem reasonably knowledgeable?

    Read the article

  • Python as a first language?

    - by user64085
    I have just started working in the information security world, and I want to learn Python to create my own automated tools for fuzzing, SQL injection, etc. My question: I don't know much about the C language (only basic knowledge), but I want to learn Python directly - is that OK? I have seen that there are lots of differences between Python and C (obviously), and in the information security field Python = GOD, so I want to know: does learning Python require any experience with C? If not, can I start learning Python directly?

    Read the article

  • Penalization of underperforming employees: how to avoid this? [closed]

    - by Sparky
    My company's management wants to deduct from the salaries of underperforming employees. I'm a member of the core strategy committee, and they want my opinion as well. I believe that an employee's throughput depends on a lot of things, such as the particular work assigned to them, the other members of their team, and other factors. Such penalization will be demoralizing to people. How can I convince my management not to do this?

    Read the article

  • Am I a copy/paste programmer?

    - by Searock
    Whenever I am stuck on a particular problem, I search for a solution on Google, and then I try to understand the code and tweak it according to my requirements. For example, I recently asked a question, Reading xml document in firefox, on Stack Overflow. Soufiane Hassou gave me a link to w3schools, where I found an example of parsing an XML document. I understood how the example works, but I copied the code and tweaked it according to my requirement, since I don't like typing much. So does this make me a copy/paste programmer? How do you tell whether a person is a copy/paste programmer? Thanks.

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out that's only true in terms of the actual number of lines of code you write, and furthermore, it is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works the first time! (knock on wood)

    I have found that people are reluctant to do unit tests or start a project with test-driven development if they are under strict timelines, or in an environment where others don't do it, so they don't. Kind of a cultural refusal to even try.

    I think one of the most powerful things about unit testing is the confidence it gives you to undertake refactoring. It also gives new-found hope that I can give my code to someone else to refactor/improve, and if my unit tests still pass, I can use the new version of the library they modified pretty much without fear.

    It's this last aspect of unit testing that I think needs a new name. A unit test is more like a contract of what this code should do, now and in the future. When I hear the word "testing", I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is; we're not trying out different code to see which is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work, as opposed to the experiments done on the mice.

    Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason? What reasons do you or others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get past some of the objections, how about jContract? (A bit Java-centric, I know :) Or Unit Contracts?

    Read the article

  • Is there a language or design pattern that allows the *removal* of object behavior or properties in a class hierarchy?

    - by Sebastien Diot
    A well-known shortcoming of traditional class hierarchies is that they are bad at modeling the real world. As an example, try representing animal species with classes. There are actually several problems with doing that, but one that I have never seen a solution to is when a subclass "loses" a behavior or property that was defined in a superclass, like a penguin not being able to fly (there are probably better examples, but that's the first one that comes to mind, having seen "Madagascar 2" recently).

    On the one hand, you don't want to define, for every property and behavior, some flag that specifies whether it is present at all, and check it every time before accessing that behavior or property. You would just like to say that birds can fly, simply and clearly, in the Bird class. But then it would be nice if one could define "exceptions" afterward, without having to use horrible hacks everywhere. This often happens after a system has been in production for a while: you suddenly find an "exception" that doesn't fit the original design at all, and you don't want to change a large portion of your code to accommodate it.

    So, is there some language or design pattern that can cleanly handle this problem without requiring major changes to the superclass and all the code that uses it? Even if a solution only handles a specific case, several solutions might together form a complete strategy.

    [EDIT] I forgot about the Liskov Substitution Principle - that is why you can't do it. Assuming you define traits/interfaces for all major "feature groups", you can freely implement traits in different branches of the hierarchy; for example, a Flying trait could be implemented by most birds and by some special kinds of squirrels and fish. So my question could amount to "How could I un-implement a trait?" If your superclass is Java's Serializable, you have to be one too, even if there is no way for you to serialize your state, for example if you contain a Socket. So one way to do it is to always define all your traits in pairs from the start: Flying and NotFlying (which would throw UnsupportedOperationException if not checked for). The Not-trait would not define any new interface, and could simply be checked for. Sounds like a "cheap" solution, in particular if used from the start.
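
    A minimal C# sketch of the capability-interface idea the [EDIT] describes (all type names hypothetical): flight lives in an interface rather than in the base class, so a subclass "removes" the behavior by simply not implementing it, and callers probe for the capability instead of assuming it:

    ```csharp
    using System;

    abstract class Bird
    {
        public abstract string Name { get; }
    }

    // The capability is opt-in rather than inherited.
    interface IFly
    {
        void Fly();
    }

    class Sparrow : Bird, IFly
    {
        public override string Name => "Sparrow";
        public void Fly() => Console.WriteLine("Sparrow takes off.");
    }

    class Penguin : Bird // deliberately not IFly: the "removed" behavior
    {
        public override string Name => "Penguin";
    }

    class Program
    {
        static void Main()
        {
            Bird[] birds = { new Sparrow(), new Penguin() };
            foreach (var bird in birds)
            {
                // Probe for the capability instead of assuming every Bird flies.
                if (bird is IFly flyer)
                    flyer.Fly();
                else
                    Console.WriteLine($"{bird.Name} cannot fly.");
            }
        }
    }
    ```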

    Read the article

  • Which of these design patterns is superior?

    - by durron597
    I find I tend to design class structures where several subclasses have nearly identical functionality, but one piece of it is different. So I write nearly all the code in the abstract class, and then create several subclasses to do the one different thing. Does this pattern have a name? Is this the best way to handle this sort of scenario?

    Option 1:

    ```java
    public interface TaxCalc {
        double calcTaxes(UserFinancials data);
    }

    public abstract class AbstractTaxCalc implements TaxCalc {
        // most constructors and fields are here

        public double calcTaxes(UserFinancials data) {
            // code
            double diffNumber = getNumber(data);
            // more code
        }

        abstract protected double getNumber(UserFinancials data);

        protected double initialTaxes(double grossIncome) {
            // code
            return initialNumber;
        }
    }

    public class SimpleTaxCalc extends AbstractTaxCalc {
        protected double getNumber(UserFinancials data) {
            double temp = initialTaxes(data.getGrossIncome());
            // do other stuff
            return temp;
        }
    }

    public class FancyTaxCalc extends AbstractTaxCalc {
        protected double getNumber(UserFinancials data) {
            double temp = initialTaxes(data.getGrossIncome());
            // do fancier math
            return temp;
        }
    }
    ```

    Option 2: This version is more like the Strategy pattern, and should be able to handle essentially the same sorts of tasks.

    ```java
    public class TaxCalcImpl implements TaxCalc {
        private final TaxMath worker;

        public TaxCalcImpl(TaxMath worker) {
            this.worker = worker;
        }

        public double calcTaxes(UserFinancials data) {
            // code
            double analyzedDouble = initialNumber;
            double diffNumber = worker.getNumber(data, initialNumber);
            // more code
        }

        protected double initialTaxes(double grossIncome) {
            // code
            return initialNumber;
        }
    }

    public interface TaxMath {
        double getNumber(UserFinancials data, double initial);
    }
    ```

    Then I could do:

    ```java
    TaxCalc dum = new TaxCalcImpl(new TaxMath() {
        @Override
        public double getNumber(UserFinancials data, double initial) {
            double temp = data.getGrossIncome();
            // do math
            return temp;
        }
    });
    ```

    And I could make specific implementations of TaxMath for things I use a lot, or I could make a stateless singleton for certain kinds of workers I use a lot. So the question I'm asking is: which of these patterns is superior, when, and why? Or, alternately, is there an even better third option?

    Read the article

  • Type dependencies vs directory structure

    - by paul
    Something I've been wondering about recently is how to organize types in directories/namespaces with respect to their dependencies. One method I've seen, which I believe is the recommendation for both Haskell and .NET (though I can't find the references for this at the moment), is:

        Type
        Type/ATypeThatUsesType
        Type/AnotherTypeThatUsesType

    My natural inclination, however, is to do the opposite:

        Type
        Type/ATypeUponWhichTypeDepends
        Type/AnotherTypeUponWhichTypeDepends

    Questions:

    - Is my inclination bass-ackwards?
    - Are there any major benefits/pitfalls of one over the other?
    - Is it just something that depends on the application, e.g. whether you're building a library vs doing normal coding?

    Read the article

  • Why don't xUnit frameworks allow tests to run in parallel?

    - by Xavier Nodet
    Do you know of any xUnit framework that allows tests to run in parallel, to make use of the multiple cores in today's machines? I don't... If none (or so few) of them do it, maybe there is a reason... Is it that tests are usually so quick that people simply don't feel the need to parallelize them? Is there something deeper that precludes distributing (at least some of) the tests over multiple threads? Thanks!
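
    One concrete obstacle sometimes cited, illustrated with a hypothetical C# example (not from the question): tests written against shared mutable state pass sequentially but can interfere when interleaved, so a runner cannot blindly parallelize them:

    ```csharp
    using System;

    // Hypothetical shared fixture: a static counter both tests mutate.
    static class Registry
    {
        public static int Count;
    }

    class ParallelHazardDemo
    {
        static void TestAddsOne()
        {
            Registry.Count = 0;
            Registry.Count++;
            if (Registry.Count != 1) throw new Exception("TestAddsOne failed");
        }

        static void TestAddsTwo()
        {
            Registry.Count = 0;
            Registry.Count += 2;
            if (Registry.Count != 2) throw new Exception("TestAddsTwo failed");
        }

        static void Main()
        {
            // Run one after the other, each test sees the state it set up.
            // If a runner interleaved them on two threads, one test's reset
            // could clobber the other's increment mid-flight.
            TestAddsOne();
            TestAddsTwo();
            Console.WriteLine("Both pass when run sequentially.");
        }
    }
    ```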

    Read the article

  • Communication between WCF service [library] and Self-host [Winform]

    - by Mur Haf Soz
    Introduction: I have a WCF service library and a self-hosting WinForms application. The service's features are a file explorer (copy, move, delete, new folder, etc.) and a task manager (run, kill, update list). Now I want to add other features, like chatting between the self-host and the client, and sending an image from the client to the self-host so that when it is received it is shown in a PictureBox on a new form. So far I have two endpoints (task manager, file manager) that run under one service, "MainService". I set up all the connections using the .NET 4.0 WCF configuration wizards, and I'm using netTcpBinding.

    Problem: I need to know how to communicate between the WCF service library and the self-host, so I can append a chat message received from the client to the self-host form's textbox, TextBoxChat. I also need to invoke a client callback from the self-host when the Send button is clicked, to send the message in the self-host textbox TextBoxMessage. So is it possible to do this in WCF? I would prefer to run the ChatEndpoint under MainService, so all endpoints use one port.
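
    A hedged sketch of the usual WCF route: a duplex contract with a callback interface, plus an event on a singleton service instance that the hosting form subscribes to. Contract and member names are illustrative, not from the question:

    ```csharp
    using System;
    using System.ServiceModel;

    // Duplex contract: the client calls the service; the service can call back.
    [ServiceContract(CallbackContract = typeof(IChatCallback))]
    public interface IChatService
    {
        [OperationContract(IsOneWay = true)]
        void SendToHost(string message);
    }

    public interface IChatCallback
    {
        [OperationContract(IsOneWay = true)]
        void ReceiveFromHost(string message);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class ChatService : IChatService
    {
        // The hosting WinForms app subscribes to this to update TextBoxChat.
        public event Action<string> MessageReceived;

        private IChatCallback _client;

        public void SendToHost(string message)
        {
            // Remember the caller so the host's Send button can reach it later.
            _client = OperationContext.Current.GetCallbackChannel<IChatCallback>();
            MessageReceived?.Invoke(message);
        }

        public void SendToClient(string message) => _client?.ReceiveFromHost(message);
    }

    // In the self-host form (sketch):
    //   var service = new ChatService();
    //   service.MessageReceived += text => TextBoxChat.AppendText(text + Environment.NewLine);
    //   var host = new ServiceHost(service); // endpoints/netTcpBinding from config
    //   host.Open();
    //   // Send button handler: service.SendToClient(TextBoxMessage.Text);
    ```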

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC, then decrypt and read it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help.

    [Update] I am currently using an Atmel UC3, although I am not sure that will be the final choice. It has 512 kB flash and 128 kB RAM. For the INI file, I am talking of at most 8 sections, with a total of at most 256 entries, each at most 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but I suppose I could switch to line_1=data_1, line_2=data_2, and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need more than one line in RAM at a time...

    Read the article

  • Are there well-known PowerShell coding conventions?

    - by Tahir Hassan
    Are there any well-defined conventions when programming in PowerShell? For example, in scripts which are to be maintained long-term, do we need to:

    - Use the real cmdlet name or an alias?
    - Specify the cmdlet parameter name in full or only partially (dir -Recurse versus dir -r)?
    - When specifying string arguments for cmdlets, enclose them in quotes (New-Object 'System.Int32' versus New-Object System.Int32)?
    - When writing functions and filters, specify the types of parameters?
    - Write cmdlets in the (official) correct case?
    - For keywords like BEGIN...PROCESS...END, write them in uppercase only?

    It seems that MSDN lacks a coding-conventions document for PowerShell, while such documents exist for C#, for example.

    Read the article

  • IntelliJ with Maven compilation

    - by Mik378
    I have a project that needs the Hibernate jars. I added them as dependencies in the pom.xml, and Maven compiles the project fine. However, in the IDE, all annotations and calls to the Hibernate API are marked as unresolved (red). How can I get IntelliJ to resolve them? Is there a way to use Maven when I click Build Project? (Ctrl+F9) Also, I am confused by the concept of facets in IntelliJ. Do I need them - say, a JPA facet to enable the persistence assistant, etc. - or is there an option to let Maven take care of that?

    Read the article

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it will calculate periods of time that parametrize the execution of some operations. For example:

    ```java
    public void execute(...) {
        // executed by a scheduler every x minutes
        final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
        final int alignedTime = now - now % getFrequency();
        final int startTime = alignedTime - 2 * getFrequency();
        final int endTimeSecs = alignedTime - getFrequency();
        uploadData(target, startTime, endTimeSecs);
    }
    ```

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().
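
    A common answer, hedged since details vary by codebase, is to inject a clock abstraction instead of reading the system time directly (Java offers java.time.Clock for exactly this). The same idea in a minimal C# sketch, with all names hypothetical:

    ```csharp
    using System;

    // Abstraction over "now" so tests can control time.
    public interface IClock
    {
        long CurrentTimeMillis();
    }

    public sealed class SystemClock : IClock
    {
        public long CurrentTimeMillis() => DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
    }

    // Test double: time is whatever the test says it is.
    public sealed class FixedClock : IClock
    {
        private readonly long _millis;
        public FixedClock(long millis) => _millis = millis;
        public long CurrentTimeMillis() => _millis;
    }

    public class Uploader
    {
        private readonly IClock _clock;
        private readonly int _frequencySecs;

        public Uploader(IClock clock, int frequencySecs)
        {
            _clock = clock;
            _frequencySecs = frequencySecs;
        }

        // Same alignment arithmetic as the question, now deterministic under test.
        public (int Start, int End) Window()
        {
            int now = (int)(_clock.CurrentTimeMillis() / 1000);
            int aligned = now - now % _frequencySecs;
            return (aligned - 2 * _frequencySecs, aligned - _frequencySecs);
        }
    }

    // In a test: new Uploader(new FixedClock(1_000_000_000L * 1000), 300).Window()
    // always yields the same window, with no dependence on the wall clock.
    ```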

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior-level interview candidate fail a nuance of the FizzBuzz question*. I mean really, utterly, completely failed the question - not even close. I even coached him toward thinking about using a loop, and hinted that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the exact same question to three teammates, gave them 5 minutes, and then came back to collect their pseudo-code. All of them nailed it, and none had seen the question before. Two asked what the trick was... On a different logic exercise, the candidate showed some understanding of the features available in the language he chose to use (C#), so it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see. I know coding exercises can be used to set candidates up for failure (again, see the second link above). And I really tried to help him home in on answering the core of the question. But this was a senior-level candidate, and FizzBuzz is, frankly, ridiculously easy even accounting for interview jitters. I felt like I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle this situation?

    *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to the various aspects of it.

    Read the article

  • Should I list this work experience on my resume? [closed]

    - by Phoenix
    I am currently working at a company. Before this job I did an internship with a prestigious company; the project itself was challenging, but it was in its initial phases, so there were no tight schedules. We ended up brainstorming for the first month, and spending the second month actually setting up our hardware: Linux servers in a lab, plus a cluster administrator for the servers. Then I wrote an add-in task which runs on the server and uses an existing API to collect statistics from the servers in the cluster, feeding them into another entity, basically an algorithm that calculates how the load on the servers should be automatically balanced. Neither of these things went into production by the time I left the company, and I'm not even sure of their current state. Does it make sense to include this on my resume?

    I also worked as a software engineer right out of school at another prestigious company for 9 months. I was involved in some bug fixes before the product launched, and I don't even recollect the exact fixes I made to the product. So, does it make sense to have these experiences on my resume? Will people question me about them, and will saying it was bug fixes, and mentioning what kind of fixes, suffice to justify my work experience there?

    Read the article

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing a sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really understand how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate the likelihood that two images are similar by comparing hashes (Levenshtein distance or something of the sort).
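
    As a sketch of the idea (shown in C# here; PHP's hash_file is the analogous building block): name the stored file after the hash of its bytes, so byte-identical uploads collide into one name. All paths are hypothetical:

    ```csharp
    using System;
    using System.IO;
    using System.Security.Cryptography;

    class ContentAddressedStore
    {
        // Stores a file under the SHA-256 of its contents; returns the new path.
        // Byte-identical files map to the same name, so duplicates are skipped.
        static string Save(string uploadedFile, string storageDir)
        {
            using var sha = SHA256.Create();
            using var stream = File.OpenRead(uploadedFile);
            string name = Convert.ToHexString(sha.ComputeHash(stream)) // .NET 5+
                          + Path.GetExtension(uploadedFile);

            string target = Path.Combine(storageDir, name);
            if (!File.Exists(target))            // early duplicate check
                File.Copy(uploadedFile, target);
            return target;
        }

        static void Main(string[] args)
        {
            // Hypothetical usage: ContentAddressedStore <file> <dir>
            if (args.Length == 2)
                Console.WriteLine(Save(args[0], args[1]));
        }
    }
    ```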

    Read the article

  • What naming anti-patterns exist?

    - by Billy ONeal
    There are some names where, if you find yourself reaching for them, you know you've already messed something up. For example: XxxManager. This is bad because a class name should describe what the class does, and if the most specific word you can come up with for what the class does is "manage", the class is too big. What other naming anti-patterns exist?

    EDIT: To clarify, I'm not asking "what names are bad" - that question is entirely subjective and there's no way to answer it. I'm asking "what names indicate overall design problems with the system". That is, if you find yourself wanting to call a component Xyz, that probably indicates the component is ill-conceived. Also note that there are exceptions to every rule - I'm just looking for warning flags for when I really need to stop and rethink a design.

    Read the article
