Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Is it more valuable to double major in Computer Science/Software Engineering or get an undergraduate CS degree with a Masters in SE?

    - by Austin Hyde
    A friend and I (both in college) are currently debating which is better, in terms of employment opportunities, experience, and education: a Bachelor's degree in both Computer Science and Software Engineering, or a Bachelor's in Computer Science with a Master's in Software Engineering. My point of view is that I would rather go to school for 4-4.5 years to learn both sides of the field, and then be out working on real projects and gaining real experience, by going the double-major route. His point of view is that it would look better to potential employers if he had a Bachelor's in CS and a Master's in SE. That way, when he's finally done after 4 years of CS and 2-4 of SE (depending on where he goes), he can pretty much have his pick of what he wants to do. We are both in agreement on the distinction between the two degrees: CS is "traditional" and about the theory of algorithms, data structures, and programming, whereas SE is the study of the design of software and the implementation of CS theory. So, what's your stance on this debate? Have you gone one route or the other? And most importantly, why?

    Read the article

  • Getting rid of Massive View Controller in iOS?

    - by Earl Grey
    I had a discussion with my colleague about the following problem. We have an application where we need filtering functionality. On any main screen, within the upper navigation bar, there is a button in the upper right corner. Once you touch that button, a custom-written alert-view-like view pops up modally, with a semi-transparent black overlay view behind it. In that modal view there is a table view of options, and you can choose exactly one. Based on your selection, once this modal view is closed, the list of items in the main view is filtered. It is simply a modally presented filter for the main table view. This UI design is dictated by the design department; I cannot do anything about it, so let's accept it as a premise. Also, the main filter button in the navbar changes colour to indicate that the filter is active. The question I have is about implementation. I suggested to my colleague that we create a separate XYZFilter class that will:
    - be an instance created by the main view controller
    - acquire the filtering options
    - handle saving and restoration of its state, i.e. the last filter selected
    - provide its two views - the overlay view and the modal view
    - be the datasource for the table in its modal view.
    For some unknown reason, my colleague was not impressed by that approach at all. He simply wants to put this functionality in the main view controller, maybe out of being used to doing it that way in the past :-/ Is there any fundamental problem with my approach? I want to:
    - keep the view controller small and avoid spaghetti code
    - create a reusable component (for use outside the project)
    - have a more object-oriented, decoupled approach
    - prevent duplication of code, as we need the filtering in two different places but it looks the same in both.
    Any advice?

    Read the article

  • When to delete a branch in Git

    - by Jo-Herman Haugholt
    I have a script project I've been managing with Git. Besides two main branches, several minor branches have been introduced over time to cover minor features, tweaks or temporary changes. Some of these branches are nearing end-of-life, and I won't be updating them any more. What are the different philosophies for handling branches like this? Should they be removed, or left in the repository unmaintained? If I leave them, won't I end up with a cluttered repository?

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question has come up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience, and what I like, that's stopping me from running after the latest and greatest. I'm a C++ developer; by this I mean I love close-to-the-metal programming. I have no problem tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half baked. But I also love the C++0x stuff, and use it a lot - and all C++ code, as long as it's not cumbersome compared to its C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly. And then there are all the other developers who seem to be way more forward looking: .NET 4.0 MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL and ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, that WPF is the way to go. I hear that C++ is godawful, and that you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production-proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... well, I'm not very keen on it. Most huge problems I've run into were all related to frameworks, not the language itself: some overridden operator here, a bad hierarchy there, poor class design here, mystical castings there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friend-ness doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing, and whether they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?
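    For reference, here is a minimal sketch of how FRIEND_TEST is typically wired up, using a hypothetical Tree class with a private Node type (the class and test names here are made up for illustration):

        // tree.h (production code)
        #include "gtest/gtest_prod.h"   // brings in FRIEND_TEST without the rest of gtest

        class Tree {
        public:
            void insert(int value) { if (!root_) root_ = new Node{value, nullptr, nullptr}; }
        private:
            FRIEND_TEST(TreeTest, InsertCreatesRootNode);  // befriends exactly one test
            struct Node {                                  // the private node type
                int value;
                Node* left;
                Node* right;
            };
            Node* root_ = nullptr;
        };

        // tree_test.cpp
        #include "gtest/gtest.h"
        #include "tree.h"

        TEST(TreeTest, InsertCreatesRootNode) {
            Tree tree;
            tree.insert(42);
            ASSERT_NE(tree.root_, nullptr);   // private members are accessible here
            EXPECT_EQ(tree.root_->value, 42);
        }

    One way to avoid a long run of FRIEND_TEST declarations is to declare a single friend (e.g. friend class TreeTestPeer;) that exposes the private state the tests need; the tests then go through that one peer instead of each test being a friend itself, so the production header carries only one test-related line.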

    Read the article

  • Who determines what is best practice? [closed]

    - by dreza
    I've often searched for answers relating to "best practice" when writing code, and I see many questions regarding what is best practice. However, I was thinking the other day: who actually determines what this best practice is? Is it the owners or creators of the programming language? Is it the general developer community writing in the language, reaching a gut-feeling consensus? Is it whoever seems to have the highest prestige in the developer world and shouts the loudest? Is "best practice" really just another term for personal opinion? I hope this is a constructive question. If I could word it better to ensure it doesn't get closed, please feel free to edit, or inform me otherwise.

    Read the article

  • No SQL Compact Edition support in VS 2013

    - by James Izzard
    Firstly apologies if this is a poor question - I am an engineer not a programmer. I have spent time moving from Visual Basic to C#. I have started C#/SQL tutorials. I have noticed VS 2013 has stopped supporting the compact edition database normally used for standalone desktop apps. Somebody has kindly written a plugin to re-implement support. I have also noticed a belief circulating that SQLite is to replace the compact edition. Would anybody be able to advise if this was accurate - I am slightly confused as to which database is best suited for desktop app development inside VS 2013. Any comment greatly appreciated. Cheers James

    Read the article

  • Customizing configuration with Dependency Injection

    - by mathieu
    I'm designing a small application infrastructure library, aiming to simplify development of ASP.NET MVC based applications. The main goal is to enforce convention over configuration. However, I still want to make some parts "configurable" by developers. I'm leaning towards the following design:

        public interface IConfiguration
        {
            SomeType SomeValue { get; }
        }

        // this one won't get registered in the container
        public class DefaultConfiguration : IConfiguration
        {
            public SomeType SomeValue { get { return SomeType.Default; } }
        }

        // declared inside a 3rd party library, will get registered in the container
        public class CustomConfiguration : IConfiguration
        {
            public SomeType SomeValue { get { return SomeType.Custom; } }
        }

    And the "service" class:

        public class Service
        {
            private IConfiguration conf = new DefaultConfiguration();

            // optional dependency; if found, it will be set to CustomConfiguration by the DI container
            public IConfiguration Conf
            {
                get { return conf; }
                set { conf = value; }
            }

            public void Configure()
            {
                DoSomethingWith(Conf);
            }
        }

    There, the "configuration" part is clearly a dependency of the service class, but is this an "overuse" of DI?

    Read the article

  • Controllers in CodeIgniter

    - by Dileep Dil
    I'm a little bit new to the CodeIgniter framework, and this is my first project with it. During a chat on StackOverflow somebody said that we need to make controllers as tiny as possible. Currently I have a default controller named home with 1332 lines of code (and increasing) and a model named Profunction with 1356 lines of code (and increasing). The controller class has about 46 functions in it, and so does the model class. I thought that CodeIgniter could handle large controllers or models well; are there any problems, performance issues or security issues regarding this?

    Read the article

  • I'm a student learning C++ and I've recently found out about Ruby. Would learning (some of) Ruby help me with C++ or would it just confuse me?

    - by Von32
    Hi! As the title says, I'm a student that will be starting my second year of C++ very soon. I've discovered Ruby, however. While I've heard much buzz about the language before, I've disregarded it because I always thought it wasn't something that would be useful. However, I've found a number of FANTASTIC tutorials on Ruby and am interested in learning it (probably because it seems so straightforward). Would playing around with Ruby be a good or bad idea? I understand that there's no such thing as bad knowledge, but I'm afraid that Ruby will only confuse me when dealing with C++.
    1. How different from C++ is it? I've read it's based on C in some way, but my google-fu seems to be horrible today.
    2. How useful is Ruby in the real world? I'm not specifically asking about jobs - I'm more interested in what sort of applications may come from this language. Any specific examples worth looking at?
    3. Going back to question two - I've read some posts on here saying that Ruby and C++ can hold hands once in a while. How flexible is this relationship? Is it rare that this would work?
    Thank you very much for your time! EDIT: This has to be the one community on the internet that doesn't suck. Why have I never posted before? You guys are awesome!

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++):
    - Documentation about methods and class members is added next to the code, using the Doxygen format. On a server, Doxygen is run on the sources so the output can be seen in a web browser.
    - Overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a Wiki.
    I personally think that this approach is easy, because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the Wiki (and it's also easy to add images, tables, ...). A web browser allows you to see both sets of documentation. My co-worker now suggests putting everything in Doxygen, because we can then create one big help file with everything in it (using either Microsoft's HTML Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder (compared to a Wiki), especially when you want to add tables, images, ... (or is there a 'preview' tool for Doxygen that doesn't require you to generate the documentation before you can see the result?)
    - What do big open-source (or closed-source) projects use to write their code documentation? Do they also split it between Doxygen-style docs and a Wiki? Or do they use another system?
    - What is the most appropriate way to expose the documentation? Via a web server/browser, or via a big (several hundred MB) help file?
    Which approach do you take when writing code documentation?
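    As a point of reference, the in-source half of this setup typically looks something like the following (a hypothetical C++ function, documented with standard Doxygen markup):

        #include <string>

        /**
         * @brief Parses a configuration file into key/value pairs.
         *
         * @param path   Absolute or relative path to the file to parse.
         * @param strict If true, unknown keys cause the call to fail.
         * @return true on success, false if the file could not be read or parsed.
         */
        bool parseConfigFile(const std::string& path, bool strict = false);

    Overview material can live in Doxygen as well - it supports standalone pages via @mainpage and @page - which is essentially what the "everything in one help file" approach relies on; whether editing those pages is as pleasant as editing a Wiki is exactly the trade-off being asked about.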

    Read the article

  • Designing binary operations(AND, OR, NOT) in graphs DB's like neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically neo4j using spring-data-neo4j, to try and see what can be done in graph databases. My model so far is:
        (Chef)-[HAS_INGREDIENT]->(Ingredient)
        (Chef)-[HAS_VALUE]->(Value)
        (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
        (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
        (Recipe)-[REQUIRES_VALUE]->(Value)
    I have this set up so I can do things like have the "chef" enter ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but missing one ingredient. Some recipes can get complex, utilizing AND, OR and NOT type logic, something like (Milk AND (Butter OR spread OR (vegetable oil OR olive oil))), and I'm wondering if it would be sane to model this in a graph using a tree-type representation. An example of what I was thinking is to create three "node" types of AND, OR, and NOT, and have each of them connect to the nodes beneath it. How else might this be represented in a graph database, or is my example above a decent representation?
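    To make the proposed tree concrete, here is a small in-memory sketch (plain C++ with made-up names - not Cypher or the spring-data-neo4j API) of an AND/OR/NOT requirement tree and how a recipe's requirements could be evaluated against the ingredients a chef has on hand:

        #include <memory>
        #include <set>
        #include <string>
        #include <vector>

        // One node of the requirement tree: either a leaf ingredient or an
        // AND / OR / NOT node over child requirements.
        struct Requirement {
            enum class Kind { Ingredient, And, Or, Not } kind;
            std::string ingredient;                              // used when kind == Ingredient
            std::vector<std::shared_ptr<Requirement>> children;  // used for And / Or / Not
        };

        bool isSatisfied(const Requirement& r, const std::set<std::string>& pantry) {
            switch (r.kind) {
                case Requirement::Kind::Ingredient:
                    return pantry.count(r.ingredient) > 0;
                case Requirement::Kind::And:
                    for (const auto& c : r.children)
                        if (!isSatisfied(*c, pantry)) return false;
                    return true;
                case Requirement::Kind::Or:
                    for (const auto& c : r.children)
                        if (isSatisfied(*c, pantry)) return true;
                    return false;
                case Requirement::Kind::Not:
                    return !isSatisfied(*r.children.front(), pantry);
            }
            return false;
        }

        int main() {
            using R = Requirement;
            auto leaf = [](const std::string& name) {
                return std::make_shared<R>(R{R::Kind::Ingredient, name, {}});
            };
            // (Milk AND (Butter OR spread)) from the question, slightly shortened
            R milkAndFat{R::Kind::And, "",
                         {leaf("milk"),
                          std::make_shared<R>(R{R::Kind::Or, "", {leaf("butter"), leaf("spread")}})}};
            std::set<std::string> pantry{"milk", "spread"};
            return isSatisfied(milkAndFat, pantry) ? 0 : 1;   // satisfied -> exit code 0
        }

    In the graph itself the same shape would presumably become AND/OR/NOT nodes with outgoing relationships to their operands, with the recipe pointing at the root of the tree; whether that is saner than flattening the alternatives into several simple recipe variants is the judgment call the question is really about.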

    Read the article

  • Constructs for wrapping a hardware state machine

    - by Henry Gomersall
    I am using a piece of hardware with a well defined C API. The hardware is stateful, with the relevant API calls needing to be made in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful, or if not, why not. The hardware will not be left in some ill-defined state. In effect, the API calls advise indirectly of the current state of the hardware if the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: is there a well established design pattern for wrapping such a hardware state machine in a high level language, such that consistency is maintained? My development is in Python. I would ideally like the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. Apologies for the slight vagueness - I'm not very knowledgeable in this area, and so am fishing for assistance with the description as well!

    Read the article

  • What are the advantages of the delegate pattern over the observer pattern?

    - by JoJo
    In the delegate pattern, only one object can directly listen to another object's events. In the observer pattern, any number of objects can listen to a particular object's events. When designing a class that needs to notify other object(s) of events, why would you ever use the delegate pattern over the observer pattern? I see the observer pattern as more flexible. You may only have one observer now, but a future design may require multiple observers.
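    As a rough structural sketch (plain C++, hypothetical names), the difference boils down to storing one callback versus a collection of them:

        #include <functional>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        // Delegate style: the subject forwards events to exactly one handler.
        class DelegatingTask {
        public:
            void setDelegate(std::function<void(const std::string&)> delegate) {
                delegate_ = std::move(delegate);
            }
            void finish() {
                if (delegate_) delegate_("finished");        // one listener, one call
            }
        private:
            std::function<void(const std::string&)> delegate_;
        };

        // Observer style: the subject notifies any number of registered listeners.
        class ObservableTask {
        public:
            void addObserver(std::function<void(const std::string&)> observer) {
                observers_.push_back(std::move(observer));
            }
            void finish() {
                for (auto& observer : observers_) observer("finished");
            }
        private:
            std::vector<std::function<void(const std::string&)>> observers_;
        };

        int main() {
            ObservableTask task;
            task.addObserver([](const std::string& e) { std::cout << "logger saw: " << e << "\n"; });
            task.addObserver([](const std::string& e) { std::cout << "ui saw: " << e << "\n"; });
            task.finish();                                   // both observers are notified
        }

    One commonly cited advantage of the single delegate is that the relationship is two-way: the subject can ask its delegate a question and act on the answer (e.g. "should I proceed?"), which becomes awkward when an unknown number of observers might each answer differently. For pure broadcast notification, the observer pattern is indeed the more flexible choice.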

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These could then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done", using agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs that come up in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • Where and how to mention Stackoverflow participation in the résumé?

    - by Sandeepan Nath
    I think I have a good enough reputation on SO now - here is my profile - http://stackoverflow.com/users/351903/sandeepan-nath. Well, this may not be much compared to so many other users out there, but I am happy with mine. So, I was thinking of adding my profile link to my résumé (just the profile link, and not "I have this much reputation on SO"). Those who haven't seen it can see this question: Would you put your stackoverflow profile link on your CV / Resume?. What would it look like?
        Forums/Blogs/Miscellaneous others
        No blogging as yet, but an active participant in Stackoverflow. My profile link - http://stackoverflow.com/users/351903/sandeepan-nath
    I am thinking of putting this section after the Project Details and Technical Expertise sections. Any tips/advice? Thanks.
    Update: MKO has made a very good point - "do you really want a potential employer to be able to evaluate in detail everything you've ever written on SO". I thought of commenting, but it would be too long. In my questions/answers I put a lot of statements like "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that... ?", "I doubt if it is possible to ..." etc. when I am not sure about something, and I rarely get involved in fights with other users. However, I do argue on topics sometimes if I feel it is necessary and if I have a valid point. I accept my mistakes and apologize for them. As we all know, nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved the quality of my content by comparing old content with new? Isn't that great? I also try to go back to older questions/answers and put corrective comments etc. when I feel I was wrong or can improve my post. Of course there are many employers who want you (potential employees) to be correct each and every time. They immediately remove you from consideration when you say a single incorrect thing. I personally met such an interviewer a few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is: do you really care to work with such people? Or do you prefer people who value the fact that you are striving to improve every day? I personally prefer the latter.

    Read the article

  • How do bug reports factor in to a sprint?

    - by Mark Ingram
    I've been reading up on Scrum recently. From my understanding, a meeting is held before the sprint starts, to decide what gets moved from the product backlog to the upcoming sprint backlog. Once a feature is completed in the current sprint, it will go into the "Ready to QA" bucket, and it's at this point that I'm getting confused. Do bug reports go back into the product backlog? I assume they can't go back into the sprint backlog as we've already decided what work will be done for this cycle? What happens when QA finds a bug? Where does it go?

    Read the article

  • Accessing UI elements from delegate function in Windows Phone 7

    - by EpsilonVector
    I have the following scenario: a page with a bunch of UI elements (grids, textblocks, whatever), and a button that, when clicked, launches an asynchronous network transaction which, when finished, invokes a delegate function. I want to reference the page's UI elements from that delegate. Ideally I would like to do something like currentPage.getUIElementByName("uielement").insert(data), or even uielement.insert(data), or something similar. Is there a way to do this? No matter what I try, an exception is thrown saying that I don't have permission to access that element. Is there a more correct way to handle updating pages with data retrieved over the network?

    Read the article

  • Client/Server Application Using Google App Engine

    - by Kevin Zhang
    Can someone please advise me on a possible solution for using GAE to build a client/server application? As far as I know, GAE is designed for web applications. What I want to do is have a Java client (Swing based) deployed on a number of computers and deploy the server on GAE. I found an example on the GAE website which teaches how to make a SOAP service using GAE, but I don't know whether using SOAP is a good idea for client/server applications. Can someone give me some hints about how to design this system and what technology should be used? Any advice is welcome. Many thanks.

    Read the article

  • Communication between WCF service [library] and Self-host [Winform]

    - by Mur Haf Soz
    Introduction: I have a WCF service library and a self-hosting WinForms application. The service's features include a file explorer (copy, move, delete, new folder, etc.) and a task manager (run, kill, update list). Now I want to add other features, like chatting between the self-host and the client, and sending an image from the client to the self-host so that when it is received it is shown in a PictureBox on a new form. So far I have two endpoints (Task Manager, File Manager) that run under one service, "MainService". I set up all the connections using the .NET 4.0 WCF configuration tool and wizards, and I'm using netTcpBinding. Problem: I need to know how to communicate between the WCF service library and the self-host, so I can append a chat message received from the client to the self-host form's textbox TextBoxChat, and also invoke a client callback from the self-host when the Send button is clicked, to send the message in the self-host textbox TextBoxMessage. Let's say this is the self-host ChatForm. So is it possible to do that in WCF? I would prefer to run ChatEndpoint under MainService, so all endpoints use one port.

    Read the article

  • C#/.net features to cut off assuming no backward compatibility needed?

    - by Gulshan
    Any product or framework evolves. Mainly this is done to keep up with the needs of its users, leverage new computing power, and simply make it better. Sometimes the primary design goal also changes with the product. C# and the .NET framework are no exception. As we can see, the present-day 4th version is very different from the first one. But one thing stands as a barricade to this evolution: backward compatibility. In most frameworks/products there are features that would have been cut if there were no need to support backward compatibility. In your opinion, what are these features in C#/.NET? Please mention one feature per answer.

    Read the article

  • Good questions to ask a potential new boss?

    - by David Johnstone
    I first asked this question on Stack Overflow, but it turns out this is a better place for it. Imagine you were working as a software developer. Imagine that the manager of your team leaves and your company is looking for a replacement. Imagine that as part of the hiring process you had the opportunity to talk with him. You are not the only person doing an interview, and while it is not ultimately your decision whether or not to hire him, you do have an influence. What questions would you ask? What would you talk with him about?

    Read the article

  • Is there any reason to use "container" classes?

    - by Michael
    I realize the term "container" is misleading in this context - if anyone can think of a better term please edit it in. In legacy code I occasionally see classes that are nothing but wrappers for data. Something like:

        class Bottle {
            int height;
            int diameter;
            Cap capType;
            // getters/setters, maybe a constructor
        }

    My understanding of OO is that classes are structures for data and the methods of operating on that data. This seems to preclude objects of this type. To me they are nothing more than structs and kind of defeat the purpose of OO. I don't think it's necessarily evil, though it may be a code smell. Is there a case where such objects would be necessary? If this is used often, does it make the design suspect?
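    For contrast, here is a small illustration (hypothetical C++, building on the Bottle example) of the distinction being drawn: if the type really is just data, a plain struct says so honestly, whereas a class earns its keep once it owns behaviour that operates on that data:

        enum class Cap { Screw, Cork, Crown };

        // Pure data: nothing is gained by hiding these fields behind getters/setters.
        struct BottleRecord {
            int height;     // mm
            int diameter;   // mm
            Cap capType;
        };

        // Data plus the behaviour that belongs with it.
        class Bottle {
        public:
            Bottle(int height, int diameter, Cap capType)
                : height_(height), diameter_(diameter), capType_(capType) {}

            // Derived from the data the class owns - callers never touch the raw fields.
            double volumeInMl() const {
                const double radius = diameter_ / 2.0;
                return 3.14159 * radius * radius * height_ / 1000.0;   // mm^3 -> ml
            }
            bool isSealedWith(Cap cap) const { return capType_ == cap; }

        private:
            int height_;    // mm
            int diameter_;  // mm
            Cap capType_;
        };

    Whether the data-only version deserves to be a class at all, rather than a struct or record handed to the code that does the work, is essentially the question being asked.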

    Read the article

  • How do you end up with event-sourcing if you use a xDD approach?

    - by Tomas Jansson
    When working in a TDD or BDD manner, your unit tests are supposed to drive your design. But how do you end up with event sourcing using xDD techniques? As I see it, event sourcing is something you need to adopt early on to take full advantage of it. Let's say that you start without event sourcing and do a release. Later on, when you are releasing version 2.0, you realize that it would be great to use event sourcing, but at that point you have already missed all the events from version 1.0, which makes it much harder to implement. Or do you take some kind of backup of your db from before event sourcing, use that as a baseline, and then add event sourcing on top of it?

    Read the article

  • Scalable solution for website polling

    - by Tom Irving
    I'm looking to add push notifications to one of my iOS apps. The app is a client for a website which doesn't offer push notifications. What I've come up with so far:
    - The app sends a message to my home server when transitioning to the background, asking the server to start polling the website for the logged-in user.
    - The home server starts a new process to poll for that user. Polling happens every so many seconds/minutes.
    - When the user returns to the iOS app, the app sends a message to the home server to stop polling. The home server kills the process polling for that user.
    - Repeat.
    The problem is that this soon becomes stupid: hundreds of users means hundreds of different processes. It's just not scalable in the slightest. What I've written so far is in PHP, using cURL to do the polling, and I only started with PHP a few days ago, so maybe I'm missing something obvious that could help me with this. Some advice would be great.

    Read the article
