Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Programming 101 [closed]

    - by Ashish SIngh
    I just got placed as an assistant programmer after completing my B.Tech, and I am curious about a few things. I am not a very good programmer (in Java) yet, as I have just started, but whenever I see some complicated code I wonder how people manage to think through so much, the whole flow of it. What should I do? Should I just go with the flow? Java is very vast, so nobody can memorize everything; how do people find so many specific functions to use? Should I try to memorize all the syntax, or just use Google and trust that with time it will all become second nature? What should my strategy be to enhance my skills? PS: I love Java (crazy about it), and one more thing: in my company I am not under much pressure, so is that good or bad for me? Please guide me. I know you all can help me with your experience :) Thank you.

    Read the article

  • Is case after case in a switch efficient?

    - by RandomGuy
    Just a random question regarding switch-case efficiency when one case falls through to the next; is the following code (assume pseudocode):

        function bool isValid(String myString) {
            switch (myString) {
                case "stringA":
                case "stringB":
                case "stringC":
                    return true;
                default:
                    return false;
            }
        }

    more efficient than this:

        function bool isValid(String myString) {
            switch (myString) {
                case "stringA":
                    return true;
                case "stringB":
                    return true;
                case "stringC":
                    return true;
                default:
                    return false;
            }
        }

    Or is the performance equal? I'm not thinking of a specific language, but if needed let's assume it's Java or C (in which case chars would need to be used instead of strings).
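
    If it helps to have something concrete to run, here is a minimal Java sketch of the fall-through form (assuming Java 7+, where switching on a String is allowed); both versions express the same branching, so one would expect the compiler to treat them much the same.

        public class SwitchDemo {
            // Grouped cases fall through to a single return.
            static boolean isValid(String s) {
                switch (s) {
                    case "stringA":
                    case "stringB":
                    case "stringC":
                        return true;
                    default:
                        return false;
                }
            }

            public static void main(String[] args) {
                System.out.println(isValid("stringB")); // true
                System.out.println(isValid("other"));   // false
            }
        }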

    Read the article

  • C#.NET (AForge) against Java (JavaCV, JMF) for video processing

    - by Leron
    I'm starting to get really confused as I look deeper and deeper into the video processing world and search for optimal choices. This is the reason for posting this and some other questions, to try to navigate myself in the best possible way. I really like Java, working with Java, coding in Java; at the same time C# is not that different from Java, and Visual Studio is maybe the best IDE I've worked with. So even though I really want to do my projects in Java, so that I can become a better and better Java programmer, at the same time I'm really attracted to video processing, and even though I'm still at the beginning of this journey I want to take the right path. So I'm really in doubt whether Java could be used in a production environment for serious video processing software. As the title says, I have already been looking at maybe the two most used technologies for video processing in Java, JMF and JavaCV, and I'm starting to think that even though they are used and they provide some functionality, when it comes to real work and real projects they are not the first thing that comes to one's mind, I mean to someone who has a professional opinion about this. On the other hand, I haven't had the time to investigate the .NET (C# specifically) options, but even so AForge looks like a much more serious library than those provided for Java. So in general, either way I'm going to spend a lot of time learning some technology and trying to do something that makes sense with it, but my plan is for the end result to eventually become my headline project, to represent my skills and eventually help me find a job in the field. So I really don't want to spend time learning something that will give me the programming result I want but at the same time is not something that is needed in real-world development. So what is your opinion: which language and technology is better for this specific purpose? Which one is worth more in the terms I specified above?

    Read the article

  • How to manage an issue tracker's backlog

    - by Josh Kelley
    We've been dutifully using Trac for several years now, and our "active tickets" list has grown to almost 200 items. These include bugs that are too low priority and too complicated to fix for now, feature requests that have been deferred, issues that have never really generated complaints but everyone agrees ought to be fixed someday, planned code refactorings and other design infelicities that we don't want to lose track of, etc. As a result, with almost 200 of these issues, the list is almost overwhelming; it's no longer useful as a source of what needs to be worked on right now. What's the best way to keep track of issues of this sort? Part of the problem is that some of these issues are such a low priority that they may never get done. I hate to lose track of these items (similar to not wanting to throw something out of my house in case I might need it someday); do I need to throw them out regardless (by marking them as wontfix) and assume I can find them in the future if I ever do need them?

    Read the article

  • Learning C, C++ and C#

    - by Zac
    I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you guys could shed some light on a couple of questions I have before I decide on a course of action. BACKGROUND: I'm wanting to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested and bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus. Question 1: I've read it's counter-productive to learn C first, then C++ or C#, because you develop bad habits. In a lot of college courses I've looked at, learning C/C++ is mandatory to advance. Should I bother learning C at all? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that it incorporates .NET (which, I understand, is a collection of tools and libraries that make programming easier and faster). Question 2: Where did you guys learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance an ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do. LAST question :) I heard C++ is being "ported" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. :)

    Read the article

  • ASP.NET MVC ....or.... PHP, Python, Ruby, Java...?

    - by Muaz Khan
    I'm using ASP.NET MVC in C# with jQuery and Ajax. A lot of other web technologies confuse me: PHP, Python, Ruby, Java (or C++), etc. What is your opinion of ASP.NET MVC? Should I choose something else? Today everyone says PHP is the most widely used language, and that's true. I'm confused, very confused, about my future career. I'm worried I'm not going in the right direction, and I wonder whether, to make my future brighter, I should choose something else instead of ASP.NET MVC and C#. And what would that something else be? I want to be a web developer who can do everything with the web (and for the web). I'm worried that I'm wasting my time with ASP.NET MVC!

    Read the article

  • How can I move from software developer towards the business intelligence / data mining fields

    - by user1758043
    I am working as a Python developer and I work with Django. I also do some web scraping and build spiders and bots. From there I want to make my move to business intelligence. I just want to know how I can move into that field, because companies are not going to hire me in that field directly, so I want to know how to make the transition. I was thinking of first working as a database developer in SQL and then seeing where to go from there. But I want your advice so that I can start learning that stuff and change jobs with it in mind. Here in my area there are plenty of jobs in all areas, but I need to know how to transition and what things I should learn before making that move. Jobs are plentiful here, so if I know my stuff, getting a job is a piece of cake because they don't have enough people; the same jobs keep getting advertised for months and months.

    Read the article

  • When and why you should use void (instead of e.g. bool/int)

    - by Jonas
    I occasionally run into methods where a developer chose to return something which isn't critical to the function. I mean, when looking at the code, it apparently works just as nicely as a void would, and after a moment of thought I ask, "Why?" Does this sound familiar? Sometimes I would agree that it is often better to return something like a bool or an int rather than just use void. I'm not sure, though, about the pros and cons in the big picture. Depending on the situation, returning an int can make the caller aware of the number of rows or objects affected by the method (e.g., 5 records saved to MSSQL). If a method like "InsertSomething" returns a boolean, I can design the method to return true on success, else false. The caller can choose to act on that information or not. On the other hand, might it lead to a less clear purpose for a method call? Bad coding often forces me to double-check the method contents. If a method returns something, it tells you that it is the kind of method whose result you have to do something with. Another issue: if the method implementation is unknown to you, what did the developer decide to return that isn't function-critical? Of course you can comment it. The return value has to be processed, when the processing could have ended at the closing bracket of the method. What happens under the hood? Did the called method return false because of a thrown error, or did it return false due to the evaluated result? What are your experiences with this? How would you act on this?
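
    To make the trade-off concrete, here is a small Java sketch of the two styles being discussed; the class and method names are hypothetical, not taken from the question.

        import java.util.ArrayList;
        import java.util.List;

        public class RecordStore {
            private final List<String> records = new ArrayList<>();

            // Style 1: return a status the caller may (or may not) inspect.
            public boolean insert(String record) {
                if (record == null || record.isEmpty()) {
                    return false; // signal failure via the return value
                }
                records.add(record);
                return true;
            }

            // Style 2: return void and report failure via an exception,
            // so the method's purpose stays a single clear action.
            public void insertOrThrow(String record) {
                if (record == null || record.isEmpty()) {
                    throw new IllegalArgumentException("record must be non-empty");
                }
                records.add(record);
            }

            public static void main(String[] args) {
                RecordStore store = new RecordStore();
                if (!store.insert("")) {
                    System.out.println("insert reported failure");
                }
                store.insertOrThrow("first record"); // would throw on bad input
            }
        }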

    Read the article

  • JADE Multiple Agents

    - by Umar niaz
    Is it necessary to run a JADE instance on the remote machine in order for agents to communicate remotely? As far as I know, something must be running on the remote machine to execute a particular program, but what if we want to create an agent on the local machine and send or distribute it to the remote machine without running a program there? Is that possible, and if not, what is the solution? Do we need to run an instance of an agent or of JADE on the client machine for agents to communicate remotely?
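
    For reference, here is a hedged Java sketch of how a local (peripheral) container is typically attached to a main container that is already running on another machine, assuming the standard jade.core / jade.wrapper API; the host, port, agent name, and agent class below are placeholders.

        import jade.core.Profile;
        import jade.core.ProfileImpl;
        import jade.core.Runtime;
        import jade.wrapper.AgentController;
        import jade.wrapper.ContainerController;

        public class RemotePlatformJoin {
            public static void main(String[] args) throws Exception {
                Runtime rt = Runtime.instance();

                // Point this local container at a main container that is
                // already running on the remote machine.
                Profile p = new ProfileImpl();
                p.setParameter(Profile.MAIN_HOST, "remote.example.org"); // placeholder host
                p.setParameter(Profile.MAIN_PORT, "1099");               // default JADE port

                ContainerController container = rt.createAgentContainer(p);

                // The agent is created locally; moving it to another machine
                // still requires a JADE container running at the destination.
                AgentController agent =
                        container.createNewAgent("probe", "com.example.ProbeAgent", null);
                agent.start();
            }
        }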

    Read the article

  • Java regex patterns - compile time constants or instance members?

    - by KepaniHaole
    Currently, I have a couple of singleton objects where I'm doing matching on regular expressions, and my Patterns are defined like so:

        class Foobar {
            private final Pattern firstPattern = Pattern.compile("some regex");
            private final Pattern secondPattern = Pattern.compile("some other regex");
            // more Patterns, etc.

            private Foobar() {}

            public static Foobar create() { /* singleton stuff */ }
        }

    But I was told by someone the other day that this is bad style, and Patterns should always be defined at the class level, and look something like this instead:

        class Foobar {
            private static final Pattern FIRST_PATTERN = Pattern.compile("some regex");
            private static final Pattern SECOND_PATTERN = Pattern.compile("some other regex");
            // more Patterns, etc.

            private Foobar() {}

            public static Foobar create() { /* singleton stuff */ }
        }

    The lifetime of this particular object isn't that long, and my main reason for using the first approach is because it doesn't make sense to me to hold on to the Patterns once the object gets GC'd. Any suggestions / thoughts?

    Read the article

  • Can WinRT really be used at just the boundaries?

    - by Bret Kuhns
    Microsoft (chiefly, Herb Sutter) recommends when using WinRT with C++/CX to keep WinRT at the boundaries of the application and keep the core of the application written in standard ISO C++. I've been writing an application which I would like to leave portable, so my core functionality was written in standard C++, and I am now attempting to write a Metro-style front end for it using C++/CX. I've had a bit of a problem with this approach, however. For example, if I want to push a vector of user-defined C++ types to a XAML ListView control, I have to wrap my user-defined type in a WinRT ref/value type for it to be stored in a Vector^. With this approach, I'm inevitably left with wrapping a large portion of my C++ classes with WinRT classes. This is the first time I've tried to write a portable native application in C++. Is it really practical to keep WinRT along the boundaries like this? How else could this type of portable core with a platform-specific boundary be handled?

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
    I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source). An important distinction between PL/SQL and Java stored procedures with regards to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests). I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for usage in 10g? I'm wondering about issues with the embedded JVM, the jit, garbage collection, or other things that might impact a heavily used database.
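
    For context, a shim of the kind described could be as small as the following hedged Java sketch; this is only an illustration of the general shape (a static method that a PL/SQL call specification would map to), not the XUTL_UDP implementation, and the class and method names are made up.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class UdpShim {
            // Kept open across calls; as noted above, Java sockets are not
            // torn down by dbms_session.reset_package the way PL/SQL sockets are.
            private static DatagramSocket socket;

            public static void send(String host, int port, String payload) throws Exception {
                if (socket == null) {
                    socket = new DatagramSocket();
                }
                byte[] data = payload.getBytes("UTF-8");
                socket.send(new DatagramPacket(data, data.length,
                        InetAddress.getByName(host), port));
            }
        }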

    Read the article

  • Generalized Ajax function [migrated]

    - by TecBrat
    Not sure if this question will be considered "off topic". If it is, I'll remove it, but: I hadn't seen this done yet, so I wrote it and would like to know if this is a good approach. Would anyone care to offer improvements to it, or point me to an example where someone else has already written it better?

        function clwAjaxCall(path, method, data, asynch) {
            var xmlhttp;
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            if (asynch) {
                xmlhttp.onreadystatechange = function() {
                    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                        //alert(xmlhttp.responseText);
                        //var newaction = xmlhttp.responseText;
                        //alert('Action becomes ' + newaction);
                        return xmlhttp.responseText;
                    }
                }
            }
            if (method == 'GET') { path = path + "/?" + data; }
            xmlhttp.open(method, path, asynch);
            if (method == 'GET') { xmlhttp.send(); } else { xmlhttp.send(data); }
            if (!asynch) { return xmlhttp.responseText; }
        }

    I then called it like this:

        Just Testing
        <script type="text/javascript" src="/mypath/js/clwAjaxCall.js"></script>
        <script type="text/javascript">
            document.write("<br>More Testing");
            document.write(clwAjaxCall("http://www.mysite.com", 'GET', "var=val", false));
        </script>

    Read the article

  • BDD/TDD vs JAD?

    - by Jonathan Conway
    I've been proposing that my workplace implement Behavior-Driven Development, by writing high-level specifications in a scenario format, and in such a way that one could imagine writing a test for it. I do know that working against testable specifications tends to increase developer productivity, and I can already think of several examples where this would be the case on our own project. However, it's difficult to demonstrate the value of this to the business. This is because we already have a Joint Application Development (JAD) process in place, in which developers, management, user experience and testers all get together to agree on a common set of requirements. So, they ask, why should developers work against the test cases created by testers? These are for verification and are based on the higher-level specs created by the UX team, which the developers currently work from. This, they say, is sufficient for developers, and there's no need to change how the specs are written. They seem to have a point. What is the actual benefit of BDD/TDD, if you already have a test team whose test cases are fully compatible with the higher-level specs currently given to the developers?

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel Scrum turns active developers into passive developers, and it seems that the overall problem is not "scrummy" (related to Scrum itself) but rather related to a bad implementation of Scrum. So here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he or she shouldn't cross. Should the PO interfere in UI design when there are designers at work on the Scrum team? (An example of this which has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or things like that.) If so, to what extent? Colors? Layout? Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.? Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI. How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online." However, the Scrum team can implement this user story as a wizard of five steps, or as one single page. At what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation? I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code completely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to:

    1) Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2) Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3) Define a table that defines how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, write code using the English translation of the UL. I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?
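
    To illustrate the contrast with a made-up example (the Dutch terms below are invented purely for the sake of the sketch), option 1 leads to mixed-language identifiers, while option 3 keeps the code in English and records the glossary elsewhere:

        import java.math.BigDecimal;

        // Option 1: code reflects the domain's natural language (Dutch here),
        // inevitably mixed with English keywords and library names.
        class Hypotheek {                                              // "mortgage"
            private BigDecimal openstaandSaldo = BigDecimal.ZERO;      // "outstanding balance"

            void losAf(BigDecimal bedrag) {                            // "repay(amount)"
                openstaandSaldo = openstaandSaldo.subtract(bedrag);
            }
        }

        // Option 3: code uses the English translation of the Ubiquitous Language;
        // the Dutch-to-English glossary lives in the project documentation.
        class Mortgage {
            private BigDecimal outstandingBalance = BigDecimal.ZERO;

            void repay(BigDecimal amount) {
                outstandingBalance = outstandingBalance.subtract(amount);
            }
        }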

    Read the article

  • yet another question about migrating to Java

    - by aloneguid
    Hi, there are plenty of similar questions, but maybe the responses to this one will save a developer's life :) I want to migrate to Java. The reasons are very clear: all the .NET vacancies are client- and Windows-oriented (Silverlight developer, ASP.NET developer, WPF developer, etc.) and none of them are of any interest to me. I have worked with .NET since its beginning, as our company decided to invest in .NET, having had a C++ stack and all the natural problems that come with it, so I just blindly followed along and actually enjoyed it, as the products were mostly server-oriented with mixed C++/C# code. Today I have the aforementioned problem: I can't find an inspiring job. I'd rather kill myself than start working on a Silverlight or WPF project. Searching Java vacancies shows promising results; however, they all require a huge Java-related technology stack and experience. The question is: is there any chance of finding a job quickly and without a dramatic salary drop (I know that Java guys are usually better paid, so there must be some kind of credit), and if not, how much time and effort does it take to migrate (my .NET knowledge mostly includes server-oriented technologies like NHibernate, WCF, threading, sockets, ASP.NET web services, Enterprise Library, Ninject, etc., and (still) some C++ leftovers)? Thanks!

    Read the article

  • WCF service and security

    - by Gaz83
    I've been building a WP7 app and now I need it to communicate with a WCF service I made in order to make changes to a SQL database. I am a little concerned about security, as the user name and password for accessing the SQL database are in the App.Config. I have read in places that you can encrypt the user name and password in the config file. Since the username and password are never exposed to the clients connected to the WCF service, would security in my situation be much of a problem? Just in case anyone suggests a security method: I do not have SSL on my web server.

    Read the article

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects, with some successful and a great benefit to the company, and some total failures with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue. The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem to be the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: The methods (ideally) followed by most teams goes design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense.) Music analogy: some types of music have a strong downbeat while others have prominent syncopation. In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code and debug and refactor ad-hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints of tool or language choices, I easily gain a "feel" for the data, requirements etc and eventually do good work. In formalized positions where paperwork and pure "design" comes first and only later any coding (even for small proof-of-concept projects), I am lost at sea and drown. Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized methodology-oriented team ways of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things. So option 2) is preferred. So where I can I find such positions? How common is my approach and where is it seen as viable but different, and not dismissed as undisciplined or cowboy coder ways?

    Read the article

  • Architecture/pattern resources for small applications and tools

    - by s73v3r
    I was wondering if anyone had any resources or advice related to using architecture patterns like MVVM/MVC/MVP/etc on small applications and tools, as opposed to large, enterprisy ones. EDIT: Most of the information I see on application architecture is directed at large, enterprise applications. I'm just writing small programs and tools. As far as using these architecture patterns, is it generally worthwhile to go through the overhead of using an MVC/MVVM framework? Or would I be better off keeping it simple?

    Read the article

  • What is the correlation between the quality of the software development process and the quality of the product?

    - by Ophir Yoktan
    I used to believe that practicing "good" software development methods tends to yield a better product in the long run. However, I've seen quite a few cases where "quick-and-dirty" / "brute-force" / "copy-paste" programming appeared to give decent results more quickly and more cheaply. This happens especially in cases where time to market is more critical than maintenance overhead. Is there a correlation between the quality of the development process and techniques and the quality of the product?

    Read the article

  • How to plot a scatterplot and histogram using R [migrated]

    - by Wee Wang Wang
    I have a dataset of maximum wind speed (cm) as below:

        Year  Jan  Feb  Mar  Apr  May  June July Aug  Sept Oct  Nov  Dec
        2011  4.5  5.6  5.0  5.4  5.0  5.0  5.2  5.3  4.8  5.4  5.4  3.8
        2010  4.6  5.0  5.8  5.0  5.2  4.5  4.4  4.3  4.9  5.2  5.2  4.6
        2009  4.5  5.3  4.3  3.9  4.7  5.0  4.8  4.7  4.9  5.6  4.9  4.1
        2008  3.8  1.9  5.6  4.7  4.7  4.3  5.9  4.9  4.9  5.6  5.2  4.4
        2007  4.6  4.6  4.6  5.6  4.2  3.6  2.5  2.5  2.5  3.3  5.6  1.5
        2006  4.3  4.8  5.0  5.2  4.7  4.6  3.2  3.4  3.6  3.9  5.9  4.4
        2005  2.7  4.3  5.7  4.7  4.6  5.0  5.6  5.0  4.9  5.9  5.6  1.8

    How can I create a monthly max wind speed scatterplot (month on the x-axis and wind speed on the y-axis) and also a monthly max wind speed histogram using R?

    Read the article

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing some sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I never saw this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really get how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate how likely it is that two images are similar by comparing hashes (Levenshtein distance or something of the sort).
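
    The question is about PHP, but the idea itself is language-agnostic; here is a hedged Java sketch of the technique (hash the file contents and use the hex digest as the stored name), with the path and hash algorithm chosen purely for illustration.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class ChecksumNamer {
            // Returns a content-derived filename, e.g. "3a7bd3e2360a...png".
            static String checksumName(Path original)
                    throws IOException, NoSuchAlgorithmException {
                byte[] bytes = Files.readAllBytes(original);
                byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);

                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }

                // Keep the original extension so the file type stays obvious.
                String name = original.getFileName().toString();
                int dot = name.lastIndexOf('.');
                String ext = dot >= 0 ? name.substring(dot) : "";
                return hex.toString() + ext;
            }

            public static void main(String[] args) throws Exception {
                Path upload = Paths.get("uploads/cat.png"); // illustrative path
                // Identical bytes always produce the same name, so a duplicate
                // upload can be detected before saving.
                System.out.println("store as: " + checksumName(upload));
            }
        }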

    Read the article

  • Is aspect oriented programming a misnomer?

    - by glenviewjeff
    From everything I have learned about "Aspect Oriented Programming" or "Aspect Oriented Software Development," labeling it as a programming paradigm or methodology appears to be inaccurate. From what I can tell, it is not a fundamental technique for programming. To nail down what is meant by "paradigm" and "methodology," please refer to the following definitions from the American Heritage Dictionary. Compare how well or poorly "Object-Oriented Programming" applies to each vs. how well AOP fits. Paradigm: A set of assumptions, concepts, values, and practices that constitutes a way of viewing reality for the community that shares them, especially in an intellectual discipline. Methodology: A body of practices, procedures, and rules used by those who work in a discipline or engage in an inquiry; a set of working methods. "Evidence-based medicine" satisfies the definition of paradigm, but "hysterectomy-based medicine" would be a misnomer because the problem space is too narrow. I am getting the impression that AOP may be misnamed because, based on the "oriented-programming" suffix, AOP claims to be both a paradigm and a methodology in the same way "Object-Oriented Programming" is. Both of these terms (paradigm and methodology) indicate a fundamental technique, whereas aspects, as I understand them, are a technology for solving a narrow problem scope, maybe comparable in magnitude to the static variable feature of Java. If it's true that aspects solve a narrow set of problems, and AOP isn't a misnomer, then why shouldn't all programming techniques be given the "oriented-programming" suffix, such as "inheritance-oriented programming," "dependency-oriented programming," or "scope-oriented programming"?
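
    For readers who haven't seen what an aspect actually looks like, here is a hedged sketch using AspectJ's annotation style (the package name and pointcut are invented); it handles one narrow cross-cutting concern, logging method entry, which is roughly the scope of problem the question is weighing against "paradigm"-sized claims.

        import org.aspectj.lang.JoinPoint;
        import org.aspectj.lang.annotation.Aspect;
        import org.aspectj.lang.annotation.Before;

        // A single cross-cutting concern: log entry into every service method.
        @Aspect
        public class EntryLoggingAspect {

            // The pointcut matches any method of any class in the
            // (hypothetical) com.example.service package and its subpackages.
            @Before("execution(* com.example.service..*.*(..))")
            public void logEntry(JoinPoint joinPoint) {
                System.out.println("Entering " + joinPoint.getSignature());
            }
        }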

    Read the article
