Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Advice on a good comeback strategy after years of Java abstinence

    - by simou
    It's been almost 5 years since I left the Java IT-project/enterprise business. Before that I was a highly skilled enterprise developer / OO architect with more than 10 years of experience in the business, proficient in many of the enterprise-related technologies, including DWHing. Since then I haven't done any large-scale programming: a little C and Python, some dips into Scala, hacking on a small Java plugin framework and some OpenGL, but only as fun projects. Now I want to re-enter the Java stage, i.e. I'm looking for job opportunities as a developer. But I fear I may have lost much of my former punching strength. For example, I would have to give my SQL knowledge a deep refresh, revisit basic stuff like design patterns and enterprise architectures, and probably learn the newer stuff like EJB 3.1 and Java EE 6 too. I also missed the whole Scrum craze. So, from your own experience, which subjects should I tackle first? Technology (which ones?) or design skills (UML...)? What I'm also wondering is this: since the basic design and architectural principles haven't changed much, what are the chances on the job market for someone like me who left the Java world at a time when everything was less fragmented and EJB 2.1 and XDoclet were de-facto standards? And how could I convince a potential employer that I'm still an effective on-the-job learner? Should I rather aim for "junior" positions? Lots of questions, but I'd be really glad if you could share your (encouraging :) thoughts with me. Cheers! (btw, I'm based in Austria)

    Read the article

  • ColdFusion Search with Excel Sheet creation

    - by user71602
    I created a search form with two buttons: one to show the results on the web page and one to create an Excel sheet. Problem: after a user runs a search and sees the results on the page, they have to select all the criteria again and then click the "create Excel" button to get the sheet. How can I arrange it so that, after running a search, the user can click a single button to create the Excel sheet? (The form uses "post".)

    Read the article

  • Learning C, C++ and C#

    - by Zac
    I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you could shed some light on a couple of questions I have before I decide on a course of action. BACKGROUND: I want to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested; I bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus. Question 1: I've read that it's counter-productive to learn C first and then C++ or C#, because you develop bad habits. Yet in a lot of the college courses I've looked at, learning C/C++ is mandatory to advance. Should I bother learning C at all? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that C# incorporates .NET (which, I understand, is a collection of tools and libraries that make programming easier and faster). Question 2: Where did you learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance an ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do. LAST question :) I heard C++ is being "ported" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. :)

    Read the article

  • C/C++: Who uses the logical operator macros from iso646.h and why?

    - by Jaime Soto
    There has been some debate at work about the merits of using the alternative spellings for the C/C++ logical operators defined in iso646.h:

        and     &&        and_eq  &=
        bitand  &         bitor   |
        compl   ~         not     !
        not_eq  !=        or      ||
        or_eq   |=        xor     ^
        xor_eq  ^=

    According to Wikipedia, these macros facilitate typing logical operators on international (non-US English) and non-QWERTY keyboards. Our entire development team is in the same office in Orlando, FL, USA, and from what I have seen we all use the US English QWERTY keyboard layout; even Dvorak provides all the necessary characters. Supporters of the iso646.h macros claim we should use them because they are part of the C and C++ standards. I think this argument is moot, since digraphs and trigraphs are also part of these standards and they are not even supported by default in many compilers. My rationale for opposing these macros in our team is that we do not need them: everybody on our team uses the US English QWERTY keyboard layout; C and C++ programming books from the US barely mention iso646.h, if at all; and new developers may not be familiar with iso646.h (which is to be expected if they are from the US). /rant Finally, to my set of questions: Does anyone on this site use the iso646.h logical operator macros? Why? What is your opinion about using the iso646.h logical operator macros in code written and maintained on US English QWERTY keyboards? Is my digraph and trigraph analogy a valid argument against using iso646.h with US English QWERTY keyboard layouts? EDIT: I missed two similar questions on StackOverflow: "Is anybody using the named boolean operators?" and "Which C++ logical operators do you use: and, or, not and the ilk or C style operators? why?"

    Read the article

  • Is it more valuable to double major in Computer Science/Software Engineering or get an undergraduate CS degree with a Masters in SE?

    - by Austin Hyde
    A friend and I (both in college) are currently debating which is better, in terms of employment opportunities, experience, and education: a Bachelor's degree in both Computer Science and Software Engineering, or a Bachelor's in Computer Science with a Master's in Software Engineering. My view is that I would rather go to school for 4-4.5 years to learn both sides of the field, and then be out working on real projects gaining real experience, by going the double-major route. His view is that it would look better to potential employers if he had a Bachelor's in CS and a Master's in SE. That way, when he's finally done after 4 years of CS and 2-4 of SE (depending on where he goes), he can pretty much have his choice of what he wants to do. We both agree on the distinction between the two degrees: CS is "traditional" and about the theory of algorithms, data structures, and programming, whereas SE is the study of the design of software and the implementation of CS theory. So, what's your stance in this debate? Have you gone one route or the other? And most importantly, why?

    Read the article

  • Any valid reason to nest master pages in ASP.NET rather than inherit?

    - by James P. Wright
    Currently in a debate at work, and I cannot fathom why someone would intentionally avoid inheritance with master pages. For reference, here is the project setup:

        BaseProject
            MainMasterPage
                SomeEvent
        SiteProject
            SiteMasterPage (nests MainMasterPage)
        OtherSiteProject
            MainMasterPage (from BaseProject)

    The debate came up because some code in BaseProject needs to know about SomeEvent. With the setup above, the code in BaseProject needs to call this.Master.Master. That same code in BaseProject also applies to OtherSiteProject, where it is accessed as just this.Master. SiteMasterPage has no code differences, only HTML differences. If SiteMasterPage inherited MainMasterPage rather than nesting it, all the code would be valid as this.Master. Can anyone think of a reason to use a nested master page here instead of an inherited one?

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it consistently between projects. When starting a new project, I tend to change some things around (add a space here, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes:

        #ifndef DERIVED_CLASS_H
        #define DERIVED_CLASS_H

        /**
         * This is an example file used for sampling code layout.
         *
         * @author Firstname Surname
         */

        #include <iostream>
        #include <string>
        #include <list>
        #include "BaseClass.h"
        #include "Stuff.h"

        /**
         * The DerivedClass is completely useless. It represents uselessness
         * in all its entirety.
         */
        class DerivedClass : public BaseClass
        {
            ////////////////////////////////////////////////////////////
            // CONSTRUCTORS / DESTRUCTORS
            ////////////////////////////////////////////////////////////

            public:

            /**
             * Constructs a useless object with default settings.
             *
             * @param value
             *        Is never used.
             * @throws Exception
             *        If something goes awry.
             */
            DerivedClass (const int value)
                : uselessSize_ (0)
            {}

            /**
             * Constructs a copy of a given useless object.
             *
             * @param object
             *        Object to copy.
             * @throws OutOfMemoryException
             *        If necessary data cannot be allocated.
             */
            DerivedClass (const DerivedClass& object)
            {}

            /**
             * Destroys this useless object.
             */
            ~DerivedClass ();

            ////////////////////////////////////////////////////////////
            // PUBLIC METHODS
            ////////////////////////////////////////////////////////////

            public:

            /**
             * Clones a given useless object.
             *
             * @param object
             *        Object to copy.
             * @return This useless object.
             */
            DerivedClass& operator= (const DerivedClass& object)
            {
                stuff_ = object.stuff_;
                uselessSize_ = object.uselessSize_;
                return *this;
            }

            /**
             * Does absolutely nothing.
             *
             * @param useless
             *        Pointer to useless data.
             */
            void doNothing (const int* useless)
            {
                if (useless == NULL)
                {
                    return;
                }
                else
                {
                    int womba = *useless;
                    switch (womba)
                    {
                        case 0:
                            std::cout << "This is output 0";
                            break;

                        case 1:
                            std::cout << "This is output 1";
                            break;

                        case 2:
                            std::cout << "This is output 2";
                            break;

                        default:
                            std::cout << "This is default output";
                            break;
                    }
                }
            }

            /**
             * Does even less.
             */
            void doEvenLess ()
            {
                int mySecret = getSecret ();
                int gather = 0;
                for (int i = 0; i < mySecret; i++)
                {
                    gather += 2;
                }
            }

            ////////////////////////////////////////////////////////////
            // PRIVATE METHODS
            ////////////////////////////////////////////////////////////

            private:

            /**
             * Gets the secret value of this useless object.
             *
             * @return A secret value.
             */
            int getSecret () const
            {
                if ((RANDOM == 42) && (stuff_.size() > 0)
                    || (1000000000000000000 > 0) && true)
                {
                    return 420;
                }
                else if (RANDOM == -1)
                {
                    return ((5 * 2) + (4 - 1)) / 2;
                }

                int timer = 100;
                bool stopThisMadness = false;
                while (!stopThisMadness)
                {
                    do
                    {
                        timer--;
                    } while (timer > 0);
                    stopThisMadness = true;
                }
                return timer;
            }

            ////////////////////////////////////////////////////////////
            // FIELDS
            ////////////////////////////////////////////////////////////

            private:

            /**
             * Don't know what this is used for.
             */
            static const int RANDOM = 42;

            /**
             * List of lists of stuff.
             */
            std::list <Stuff> stuff_;

            /**
             * Specifies the size of this object's uselessness.
             */
            size_t uselessSize_;
        };

        #endif

    Read the article

  • Scala factory pattern returns unusable abstract type

    - by GGGforce
    Please let me know how to make the following bit of code work as intended. The problem is that the Scala compiler doesn't understand that my factory is returning a concrete class, so my object can't be used later. Can TypeTags or type parameters help? Or do I need to refactor the code some other way? I'm (obviously) new to Scala.

        trait Animal
        trait DomesticatedAnimal extends Animal
        trait Pet extends DomesticatedAnimal { var name: String = _ }

        class Wolf extends Animal
        class Cow extends DomesticatedAnimal
        class Dog extends Pet

        object Animal {
          def apply(aType: String) = {
            aType match {
              case "wolf" => new Wolf
              case "cow"  => new Cow
              case "dog"  => new Dog
            }
          }
        }

        def name(a: Pet, name: String) {
          a.name = name
          println(a + "'s name is: " + a.name)
        }

        val d = Animal("dog")
        name(d, "fred")

    The last line of code fails because the compiler thinks d is an Animal, not a Dog.

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention to use across a site with multiple web applications. Historically, I've been an advocate of the "lowercase all the letters!" approach when creating URLs: http://example.com/mysystem/account/view/1551 However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (imho): http://example.com/MySystem/Account/View/1551 We're not in a situation where people need to read or understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare one way or the other good, or issues we may run into on (at least realistically modern) setups that would make one preferable? What is the general consensus in this debate currently?

    Read the article

  • Should I ditch a creative pet project in favor of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that helps the user improve their development productivity by learning vim, specifically with Python, though learning to code faster with vim should transfer to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way. However, I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated by and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of more sought-after skills. I am probably at a point where I should either commit fully to this project now or put it on the back burner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea and achievable for me with enough dedication over the next couple of months. But the most important thing to me is getting a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience, which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine

    Read the article

  • How to teach computer science?

    - by proferikson
    I am just starting to teach computer science, at a basic level. I'm finding that I sometimes don't know how to approach topics in a way that lets students understand easily. I've found that the best thing is to use analogies. A terrible one, that worked well, was presenting recursion through the process of eating a burger (you eat one bite, then you eat the rest). Interesting real-world examples do wonders here. I'd like to know the techniques you have used for teaching, or that teachers of yours have used, that proved particularly effective. I'd also like to know your best analogies and examples for topics in CS101, especially for the harder-to-grasp topics. I'm teaching the class in Java (as required).

    Read the article

  • Use a template to get alternate behaviour?

    - by Serge
    Is this a bad practice?

        const int sId(int const id);

        // true/false, it doesn't matter
        template<bool i>
        const int sId(int const id)
        {
            return this->id = id;
        }

        const int MCard::sId(int const id)
        {
            MCard card = *this;
            this->id = id;
            this->onChange.fire(EventArgs<MCard&, MCard&>(*this, card));
            return this->id;
        }

        myCard.sId(9);
        myCard.sId<true>(8);

    As you can see, my goal is to be able to have an alternative behaviour for sId. I know I could use a second parameter on the function and a plain if, but this feels more fun (imo) and might help with branch prediction (I'm no expert in that field). So, is this a valid practice, and/or is there a better approach?

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit- and integration-tested, and how. My view is that unit and integration testing are part of the development process, not the testing process. Semantics aside, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned with them. My reasoning is based on two things. First, unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts, you still test what, say, a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes, but you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications, and the spec determines when the test passes. Second, I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kinds of unit tests are performed on a particular business-layer class. What is the tester supposed to take away from that? Judging from it what should and shouldn't be system-tested would be a false conclusion, because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like a useless academic discussion, but if you work in a strictly formal environment, as I do, it actually matters in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)
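
    To make the asker's distinction concrete, here is a minimal sketch in Java with JUnit 4; the class and its contract are hypothetical, invented only for illustration:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        // Hypothetical business-layer class, used only for illustration.
        class DiscountCalculator {
            // Contract: return the price reduced by the given percentage,
            // never going below zero.
            double apply(double price, int percent) {
                double result = price * (100 - percent) / 100.0;
                return Math.max(0.0, result);
            }
        }

        public class DiscountCalculatorTest {
            private final DiscountCalculator calc = new DiscountCalculator();

            @Test
            public void appliesPercentageReduction() {
                // Pinned to the method's contract: 10% off 200.0 is 180.0.
                assertEquals(180.0, calc.apply(200.0, 10), 1e-9);
            }

            @Test
            public void neverReturnsNegativePrices() {
                // Also part of the contract, still saying nothing about any system spec.
                assertEquals(0.0, calc.apply(100.0, 150), 1e-9);
            }
        }

    Both tests can pass while the assembled system still violates its specification, which is the argument for keeping them out of the systems-testing report.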

    Read the article

  • Should all new web projects build their backend based on XML/JSON result sets?

    - by Blankman
    If you were building a new SaaS project, would it make sense to start with all of the backend services returning XML/JSON? These days you need to build for both the web and mobile devices, and with a backend that is built from the start to return XML and JSON you are ready to go mobile (all services contain the business logic, so you won't be repeating anything). The web front end would then be MVC, so the controller would just route the request to your service backend and convert the JSON or XML to HTML. The obvious downside is that you have to build a backend, and then another web project that calls your backend. But this also works in your favor, as it forces you to separate your concerns and not leak business logic into your controller/view layer. Thoughts?
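
    For illustration, a minimal sketch of such a backend endpoint in Java using JAX-RS; the Account type and in-memory store are hypothetical stand-ins, and a real service would add validation, security, and error handling:

        import java.util.HashMap;
        import java.util.Map;

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Hypothetical domain type, serialized to JSON or XML by the provider.
        class Account {
            public long id;
            public String owner;
        }

        // One endpoint that the MVC web app and mobile clients both consume;
        // the business logic lives here, behind the service boundary.
        @Path("/accounts")
        public class AccountResource {

            private static final Map<Long, Account> STORE = new HashMap<>();

            @GET
            @Path("/{id}")
            @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
            public Account get(@PathParam("id") long id) {
                return STORE.get(id);  // stand-in for the real lookup
            }
        }

    The MVC controller then becomes a thin client of this endpoint, turning its JSON into HTML, while a mobile app calls the same URL directly.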

    Read the article

  • Unit-testing functions that take parameters of classes whose source code is not accessible

    - by McMannus
    Related to this question, I have another question about unit testing functions in utility classes. Assume you have a function signature like this: public void doSomething(InternalClass obj, InternalElement element) where InternalClass and InternalElement are both classes whose source code is not available, because they are hidden in the API. Additionally, doSomething only operates on obj and element. I thought about mocking those classes away, but this option is not possible because they do not implement any interface that I could use for my mock classes. However, I need to fill obj with defined data to test doSomething. How can this problem be solved?
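
    One common way out, sketched here in Java with hypothetical names: wrap the closed API behind an interface you own, make the code under test depend on that interface, and substitute a fake in tests.

        // Stand-ins for the closed API classes (no interface, no source).
        class InternalClass { }
        class InternalElement { }

        // The seam: an interface under your control.
        interface InternalGateway {
            void doSomething(InternalClass obj, InternalElement element);
        }

        // Thin production adapter that delegates to the real API call.
        class RealInternalGateway implements InternalGateway {
            @Override
            public void doSomething(InternalClass obj, InternalElement element) {
                // invoke the actual, unmockable API here
            }
        }

        // The code under test depends only on the interface...
        class Processor {
            private final InternalGateway gateway;

            Processor(InternalGateway gateway) {
                this.gateway = gateway;
            }

            void run(InternalClass obj, InternalElement element) {
                gateway.doSomething(obj, element);
            }
        }

        // ...so a test can hand it a recording fake instead.
        class RecordingGateway implements InternalGateway {
            boolean called = false;

            @Override
            public void doSomething(InternalClass obj, InternalElement element) {
                called = true;
            }
        }

    Bytecode-based mocking libraries that can mock concrete classes directly are an alternative, but the wrapper keeps the untestable dependency confined to one adapter.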

    Read the article

  • Research idea in simulation

    - by Nilani Algiriyage
    Hi, I'm an undergraduate at the University of Keleniya, Sri Lanka. I'm interested in doing research on BPM and BPMN, but there are very few knowledgeable people and very few resources in my country, and my supervisor also doesn't have enough knowledge in this area. Could you please help me find a research topic in BPM or BPMN, or at least give me an idea of which areas I could work in? Thank you very much. Regards, Nilani.

    Read the article

  • Is it a bad idea to use a flag variable when searching for the MAX element in an array?

    - by Boris Treukhov
    Over my programming career I have formed a habit of introducing a flag variable that indicates that the first comparison has occurred, just like Microsoft does in its LINQ Max() extension method implementation:

        public static int Max(this IEnumerable<int> source)
        {
            if (source == null)
            {
                throw Error.ArgumentNull("source");
            }
            int num = 0;
            bool flag = false;
            foreach (int num2 in source)
            {
                if (flag)
                {
                    if (num2 > num)
                    {
                        num = num2;
                    }
                }
                else
                {
                    num = num2;
                    flag = true;
                }
            }
            if (!flag)
            {
                throw Error.NoElements();
            }
            return num;
        }

    However, I have met some heretics lately who implement this by just taking the first element and assigning it to the result, and oh no, it turns out that the STL and Java authors have preferred the latter method. Java:

        public static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll) {
            Iterator<? extends T> i = coll.iterator();
            T candidate = i.next();

            while (i.hasNext()) {
                T next = i.next();
                if (next.compareTo(candidate) > 0)
                    candidate = next;
            }
            return candidate;
        }

    STL:

        template<class _FwdIt>
        inline _FwdIt _Max_element(_FwdIt _First, _FwdIt _Last)
        {   // find largest element, using operator<
            _FwdIt _Found = _First;
            if (_First != _Last)
                for (; ++_First != _Last; )
                    if (_DEBUG_LT(*_Found, *_First))
                        _Found = _First;
            return (_Found);
        }

    Are there any preferences between one method and the other? Are there any historical reasons for this? Is one method more dangerous than the other?

    Read the article

  • Moving from building internal applications in WPF to ASP.NET MVC?

    - by stuartmclark
    I have worked on quite a few internal applications for my employer, and I have always defaulted to using WPF, but I am considering rewriting the existing ones as web apps, so that anyone in my company can use them without having to download anything from the network. I am just wondering if this is the way forward for any development of new internal applications. Should I stop using WPF and start using ASP.NET MVC for internal applications that a lot of people need to use?

    Read the article

  • How to use the unit of work and repository patterns in a service-oriented environment

    - by A. Karimi
    I've created an application framework using the unit of work and repository patterns for its data layer. Data-consumer layers such as presentation depend on the data layer design: for example, an abstract CRUD form has a dependency on a repository (IRepository). This architecture works like a charm in client/server environments (e.g. a WPF application and a SQL Server). But I'm looking for a good pattern for changing or reusing this architecture in a service-oriented environment. Of course I have some ideas. Idea 1: the "Adapter" design pattern. Keep the current architecture and create a new unit of work and repository implementation that works against a service instead of the ORM. Data layer consumers are loosely coupled to the data layer, so this is possible, but the problem is the unit of work: I would have to create a context that tracks object state on the client side and sends the changes to the server side when "Commit" is called (something like what RIA does for Silverlight). Here is the diagram:

        ---------- CLIENT ----------  |  ---------------- SERVER ----------------
        [ UI ] -> [ UoW/Repository ] ---> [ Web Services ] -> [ UoW/Repository ] -> [DB]

    Idea 2: add another layer. Add another layer (say "local services" or "data provider") and put it between the data layer (unit of work and repository) and the data-consumer layers (like the UI). I would then have to rewrite the consumer classes (the CRUD forms and the other classes that depend on IRepository) to depend on another interface. The diagram:

        -------------- CLIENT --------------  |  ---------------- SERVER ----------------
        [ UI ] -> [ Local Services/Data Provider ] ---> [ Web Services ] -> [ UoW/Repository ] -> [DB]

    Please note that I already have a local services layer in the current architecture, but it doesn't expose the data layer functionality. In other words, the UI layer can communicate with both the data and local services layers, while the local services layer also uses the data layer:

        --------      ------------------      --------
        |      | ---> | Local Services | ---> |      |
        |  UI  |      ------------------      | Data |
        |      | ---------------------------> |      |
        --------                              --------
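
    A minimal sketch of Idea 1 in Java, with hypothetical names throughout: the consumers keep their IRepository dependency, while a new implementation tracks changes client-side and flushes them to the web service on Commit:

        import java.util.ArrayList;
        import java.util.List;

        // Contracts the CRUD forms already depend on (shapes assumed here).
        interface IRepository<T> {
            void add(T entity);
            List<T> findAll();
        }

        interface IUnitOfWork {
            void commit();
        }

        // Hypothetical client proxy for the remote web service.
        interface OrderService {
            void saveBatch(List<Order> pending);
            List<Order> fetchAll();
        }

        class Order {
            String id;
        }

        // Adapter: same contracts, but backed by the service instead of the ORM.
        class ServiceBackedOrderRepository implements IRepository<Order>, IUnitOfWork {
            private final OrderService service;
            private final List<Order> pending = new ArrayList<>();

            ServiceBackedOrderRepository(OrderService service) {
                this.service = service;
            }

            @Override
            public void add(Order entity) {
                pending.add(entity);        // tracked locally, not sent yet
            }

            @Override
            public List<Order> findAll() {
                return service.fetchAll();
            }

            @Override
            public void commit() {
                service.saveBatch(pending); // one round trip for all changes
                pending.clear();
            }
        }

    The presentation layer is untouched; only the composition root decides whether the ORM-backed or the service-backed implementation gets injected.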

    Read the article

  • *Code owner* system: is it an efficient approach?

    - by sergzach
    There is a new developer on our team. An agile methodology is in use at our company, but the developer has different experience: he believes that particular parts of the code must be assigned to particular developers, so that if one developer created a procedure or module, it would be considered normal for all changes to that procedure/module to be made by him only. On the plus side, with the proposed approach we supposedly save overall development time, because each developer knows his part of the code well and makes fixes fast. The downside is that developers don't know the system as a whole. Do you think the approach will work well for a medium-sized system (development of a social network site)?

    Read the article

  • Improving the performance of grepping over a huge file

    - by rogerio_marcio
    I have FILE_A, which has over 300K lines, and FILE_B, which has over 30M lines. I created a bash script that greps each line of FILE_A in FILE_B and writes the result of the grep to a new file. The whole process takes over 5 hours. I'm looking for suggestions on any way to improve the performance of my script. I'm using grep -F -m 1 as the grep command. FILE_A looks like this:

        123456789
        123455321

    and FILE_B is like this:

        123456789,123456789,730025400149993,
        123455321,123455321,730025400126097,

    So with bash I have a while loop that picks the next line in FILE_A and greps it in FILE_B. When the pattern is found in FILE_B, I write it to result.txt.

        while read -r line; do
            grep -F -m1 "$line" 30MFile
        done < 300KFile

    Thanks a lot in advance for your help.

    Read the article

  • Etymology of software project names [closed]

    - by Benoit
    I would like to have a reference community wiki here, in order to record what etymology software names have, or why they are named the way they are. I was wondering why ImageMagick's mogrify was named this way, and today I wondered the same about Apache Lucene. It would be handy to have a list here. Could we extend such a list? Let me start and let you edit it, please; I will ask for this to be made community wiki. For each entry, please link to an external reference.

        GNU Emacs: stands for "Editor MACroS".
        Apache Lucene: an Armenian name.
        ImageMagick mogrify: from "transmogrify".

    Thanks.

    Read the article

  • What modern design pattern / software engineering books for Java SE 6 do you recommend?

    - by Scott Davies
    Hi, I am very familiar with the Java SE 6 language features and am now looking for modern books that cover design patterns in Java for beginners, as well as software engineering books that discuss architectures, algorithms and best practices in Java coding (along the lines of the Effective C# books). I am aware of the classic GoF design patterns book; however, I'd like a more modern reference that takes advantage of the features of Java SE 6. What books would you recommend? Thanks, Scott

    Read the article

  • How do you avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time this can work, but in interviews it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist, or are there techniques to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? A related follow-up question might be, "How do you ensure that you are solving the right problem?" Or, "When is it more productive to think and design, versus code some experiments and figure out the design later?" EDIT: One close vote already, but I'm not sure why. I wrote this in the first person, but I doubt I'm the only programmer ever to choke in an interview. There are lists of techniques for taking a math test and for taking an oral exam; I'm asking whether there is a similar list of techniques for handling a programming problem under pressure.

    Read the article

  • Branching strategy for frequent releases

    - by Technext
    We have very frequent releases, and we use Git for version control. When I mention frequency, please take it to include bug fixes and feature releases too. All releases are eventually merged into 'mainline'. When a release is deployed to production and a bug is identified, people start fixing the bug on the same branch from which the latest release was deployed; they do not create a new bug-fix branch for it. I feel that's not the right way to go. There are several components, and each component has a different owner and thus a different perspective. Though I have not initiated talks with them, I am sure there will be a lot of resistance. The main issue they might cite would be: "There's a lot of work involved in creating and tracking branches, especially when there are such frequent deployments to production. This will consume a lot of dev effort." Do you think that fixing a bug on the same branch from which the release was done is a good idea? If yes, how do you manage it? Using tags? I know that best practices may not always apply, due to several factors, but I would still like to know what a good approach to branching might be in a scenario where releases and bug fixes happen almost daily.
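
    For reference, one common tag-based arrangement, shown with standard Git commands (the branch and tag names are hypothetical):

        # tag every release that is deployed to production
        git tag -a v2.7.0 -m "release 2.7.0"

        # when a production bug appears, branch from the deployed tag
        git checkout -b hotfix/v2.7.1 v2.7.0

        # fix, commit, then tag the hotfix release
        git commit -am "fix the production bug"
        git tag -a v2.7.1 -m "hotfix 2.7.1"

        # merge the fix back so mainline keeps it too
        git checkout mainline
        git merge hotfix/v2.7.1

    Short-lived branches like this cost little to track, because each one exists only between a release tag and its hotfix tag.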

    Read the article
