Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but it is not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?
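
    For concreteness, a minimal sketch of what wrapping a small group of related classes in a namespace can look like in C++. All names (geometry, Point, Transform, Mesh) are invented for illustration; in practice a namespace usually mirrors a module or directory boundary rather than a single class.

    ```cpp
    #include <vector>

    // A small group of closely related classes wrapped in one namespace.
    // Names here (geometry, Point, Transform, Mesh) are made up for illustration.
    namespace geometry {

    struct Point { double x = 0.0, y = 0.0; };

    class Transform {
    public:
        Transform(double dx, double dy) : dx_(dx), dy_(dy) {}
        Point apply(Point p) const { return {p.x + dx_, p.y + dy_}; }
    private:
        double dx_, dy_;
    };

    class Mesh {
    public:
        void add(Point p) { points_.push_back(p); }
        void transform(const Transform& t) {
            for (Point& p : points_) p = t.apply(p);
        }
    private:
        std::vector<Point> points_;
    };

    }  // namespace geometry

    int main() {
        geometry::Mesh mesh;                  // callers qualify names with the namespace
        mesh.add(geometry::Point{1.0, 2.0});
        mesh.transform(geometry::Transform{0.5, -0.5});
    }
    ```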

  • Should I concentrate on logic and puzzles in programming if I want to be a web (Flex) developer?

    - by abhilashm86
    I'm a student, and I'm not good at cracking puzzles, complex mathematics, or hard logic problems. In college I studied C++, Java, and OOP. I'm comfortable with the syntax, writing programs, using APIs, and doing mashups. But once a friend asked for help in a coding contest, and I ended up in dilemma and frustration: the problems seemed simple yet complex, I could not write code for them, and I got scared. Are logical ability, complex mathematics, and puzzle-solving required from a developer's point of view? Please help and suggest methods to improve these skills.

  • null values vs "empty" singleton for optional fields

    - by Uko
    First of all, I'm developing a parser for an XML-based format for 3D graphics called XGL. But this question applies to any situation where you have fields in your class that are optional, i.e. the value of the field can be missing. As I was taking a Scala course on Coursera, I came across an interesting pattern: you create an abstract class with all the methods you need, then create a normal, fully functional subclass plus an "empty" singleton subclass whose isEmpty method returns true and whose other methods throw exceptions. So my question is: is it better to just assign null if the optional field's value is missing, or to build the hierarchy described above and assign the field an empty singleton implementation?
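
    For concreteness, here is a rough C++ sketch of the "empty singleton" (Null Object) variant the question describes; the class names (Material, RealMaterial, EmptyMaterial) are invented. In modern C++ a std::optional field is often the simpler answer to the same problem, but this keeps the shape of the Scala pattern.

    ```cpp
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Abstract base: every optional value answers isEmpty(); accessors on the
    // empty singleton throw. Names (Material, RealMaterial, EmptyMaterial) are made up.
    class Material {
    public:
        virtual ~Material() = default;
        virtual bool isEmpty() const = 0;
        virtual std::string name() const = 0;
    };

    class RealMaterial : public Material {
        std::string name_;
    public:
        explicit RealMaterial(std::string name) : name_(std::move(name)) {}
        bool isEmpty() const override { return false; }
        std::string name() const override { return name_; }
    };

    class EmptyMaterial : public Material {
    public:
        bool isEmpty() const override { return true; }
        std::string name() const override { throw std::logic_error("no material"); }
        static const EmptyMaterial& instance() {   // the shared singleton
            static EmptyMaterial e;
            return e;
        }
    };

    void describe(const Material& m) {
        // Callers test isEmpty() instead of comparing against nullptr.
        std::cout << (m.isEmpty() ? "<none>" : m.name()) << "\n";
    }

    int main() {
        RealMaterial steel("steel");
        describe(steel);
        describe(EmptyMaterial::instance());
    }
    ```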

  • Do recruiters intentionally present only one good candidate for an available job?

    - by Jeff O
    Maybe they do it without realizing it. The recruiter's goal is to fill the job as soon as possible. I even think they feel it is in their best interest that the candidate be qualified, so I'm not trying to knock recruiters. But aren't they better off presenting three candidates of whom one clearly stands out? The last thing they want from their client is to extend the interview process because the client can't decide. If the client doesn't like any of them, you just bring on your next good candidate. This way they hedge their bet a little. Any experience or insight, or have you ever heard a head-hunter admit this? Does it make sense? There has to be a reason why they choose such unqualified people. I've seen jobs posted that clearly state they want someone with a CS degree, and the recruiter doesn't take it literally. I don't have a CS degree or Java experience, and still they think I'm a possible fit.

  • What threading practice is good 90% of the time?

    - by acidzombie24
    Since my SO thread was closed, I guess I can ask it here. What practice or practices are good 90% of the time when working with threading on multiple cores? Personally, all I have done is share immutable classes and pass (copy) data through a queue to the destination thread. Note: this is for research, and when I say 90% of the time I don't mean it is allowed to fail 10% of the time (that's ridiculous!). I mean it is a good solution 90% of the time, while the other 10% it is less desirable for implementation or efficiency reasons (or plainly because another technique fits the problem domain a lot better).
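
    A minimal sketch of the practice described above, copying data into a thread-safe queue that a single destination thread drains, assuming C++11 or later. The BlockingQueue type and the message strings are invented for illustration.

    ```cpp
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // A minimal thread-safe queue (hypothetical sketch): producers push copies,
    // one consumer pops, and nothing mutable is shared between threads.
    template <typename T>
    class BlockingQueue {
        std::queue<T> items_;
        std::mutex mutex_;
        std::condition_variable ready_;
    public:
        void push(T value) {
            { std::lock_guard<std::mutex> lock(mutex_); items_.push(std::move(value)); }
            ready_.notify_one();
        }
        T pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            ready_.wait(lock, [this] { return !items_.empty(); });
            T value = std::move(items_.front());
            items_.pop();
            return value;
        }
    };

    int main() {
        BlockingQueue<std::string> queue;

        // The destination thread owns whatever it pops.
        std::thread consumer([&queue] {
            for (int i = 0; i < 3; ++i) std::cout << "got: " << queue.pop() << "\n";
        });

        for (int i = 0; i < 3; ++i) queue.push("message " + std::to_string(i));  // copies in
        consumer.join();
    }
    ```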

  • How to motivate a team for knowledge-sharing sessions

    - by ring bearer
    I work in a team with a wide range of expertise and experience. I have been trying to introduce weekly knowledge-sharing sessions: sessions of 30-60 minutes where everybody gets a chance to present something and talk about it. This would also help improve presentation and language skills. However, the team is not motivated towards this; attendance is either too low or none at all. How do you get a team to work towards such an idea?

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site, so I decided to copy the live database over to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. Oops! I had copied the beta database over to the live site! Thank god for backups. And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY ONE ENTRY: MINE. Upon investigating, it appears I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not inherent to scrum but rather comes from a bad implementation of scrum. So here I have some questions about the scope of the product owner's (PO's) responsibilities and the limits he or she shouldn't pass.

    Should the PO interfere in UI design when there are designers on the scrum team? (An example that has happened to us: replacing checkboxes with a drop-down list with two items, yes and no; or making some boxes larger, or left-aligning content instead of centering it on the page.) If so, to what extent? Colors? Layout?

    Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?

    Should the PO interfere in validation mechanisms? For example, "this field should be required", or "we don't need to get this piece of information from the user". Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.

    How granular could/should the PO get in analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

    I ask these questions to judge whether our implementation is right or wrong.

  • "Oracle Certified Expert, Java Platform, Enterprise Edition 6 Java Persistence API Developer" Preparation

    - by Matt
    I have been working with Hibernate for a few years now, and I want to solidify and demonstrate my knowledge by taking the Oracle JPA certification, also known as "Oracle Certified Expert, Java Platform, Enterprise Edition 6 Java Persistence API Developer (CX-310-094)". There is a training course provided by Oracle, "Building Database Driven Applications with JPA (SL-370-EE6)", but it costs $1800 and I think it would be overkill for my needs. Ideally, I would like a self-study guide that covers everything in the exam. I have looked for books, and these seem like possibilities: Pro JPA 2: Mastering the Java Persistence API (Expert's Voice in Java Technology) and Beginning Java EE 6 with GlassFish 3, 2nd Edition (Expert's Voice in Java Technology). But these aren't checklist-type study guides as far as I am aware. I found the official SCJP study guide very useful, but I think the equivalent text for the JPA exam isn't out yet. If anyone has taken this exam, I would be grateful to hear how you prepared for it. Thanks!

  • Limited resource practice problems?

    - by Mark
    I'm applying to some big companies, and the area I keep getting burned on is problems involving limited memory, disk space, or throughput. These large companies process GBs of data every second (or more), and they need efficient ways of managing all that data. I have no experience with this, as none of the projects I have worked on have grown that large. Is there a good place to learn about or practice these sorts of problems? Most of the practice-problem sites I've encountered only have problems where you have to solve something efficiently (usually involving prime numbers), but none of them limit your resources.
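
    As one concrete example of a limited-memory problem, a common exercise is "find the K largest values in a stream far too big for RAM": a fixed-size min-heap keeps memory at O(K) regardless of input size. A C++ sketch, with K and the input source chosen arbitrarily for illustration:

    ```cpp
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <queue>
    #include <vector>

    // Keep only the K largest values seen so far: memory stays O(K) no matter
    // how long the stream is. K and the sample stream here are arbitrary.
    std::vector<long long> topK(std::istream& in, std::size_t k) {
        // Min-heap of the current best K values; the smallest is evicted first.
        std::priority_queue<long long, std::vector<long long>, std::greater<long long>> heap;
        long long value;
        while (in >> value) {
            heap.push(value);
            if (heap.size() > k) heap.pop();   // drop the smallest, keep memory bounded
        }
        std::vector<long long> result;
        while (!heap.empty()) { result.push_back(heap.top()); heap.pop(); }
        return result;                         // ascending order
    }

    int main() {
        // In practice `in` would be a multi-gigabyte file or a network stream.
        std::cout << "reading integers from stdin...\n";
        for (long long v : topK(std::cin, 3)) std::cout << v << "\n";
    }
    ```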

  • Where do service implementations fit into the Microsoft Application Architecture guidelines?

    - by tuespetre
    The guidelines discuss the service layer with its service interfaces and data/message/fault contracts. They also discuss the business layer with its logic/workflow components and entities, as well as the 'optional' application facade. What is still unclear to me after studying this guide is where the implementations of the service interfaces belong. Does the application facade in the business layer implement these interfaces, or does a separate 'service facade' exist to make calls to the business layer and its facade/raw components? (With the former there would be fewer seemingly trivial calls to yet another layer, though with the latter I can see how the service layer could take the concern of translating business entities into data contracts out of the business layer.)
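
    To make the two options concrete, here is a rough, language-neutral sketch (in C++ rather than .NET) of the second arrangement: a separate service facade implements the service contract, delegates to the business layer's application facade, and translates entities into data contracts. All type names are invented for illustration.

    ```cpp
    #include <iostream>
    #include <string>

    // Business layer: an entity plus an application facade over the business logic.
    // All names (Order, OrderApplicationFacade, ...) are made up for this sketch.
    struct Order { int id; std::string status; };

    class OrderApplicationFacade {
    public:
        Order loadOrder(int id) { return Order{id, "shipped"}; }  // stand-in for real logic
    };

    // Service layer: the data contract and the service interface exposed to clients...
    struct OrderDto { int id; std::string status; };

    class OrderServiceContract {
    public:
        virtual ~OrderServiceContract() = default;
        virtual OrderDto getOrder(int id) = 0;
    };

    // ...and a service facade that implements the contract by delegating to the
    // business layer and mapping entities into data contracts.
    class OrderServiceFacade : public OrderServiceContract {
        OrderApplicationFacade& app_;
    public:
        explicit OrderServiceFacade(OrderApplicationFacade& app) : app_(app) {}
        OrderDto getOrder(int id) override {
            Order order = app_.loadOrder(id);           // business entity
            return OrderDto{order.id, order.status};    // mapped to the contract
        }
    };

    int main() {
        OrderApplicationFacade app;
        OrderServiceFacade service(app);
        std::cout << service.getOrder(7).status << "\n";
    }
    ```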

  • Why is a small fixed vocabulary seen as an advantage to RESTful services?

    - by Matt Esch
    So, a RESTful service has a fixed set of verbs in its vocabulary. A RESTful web service takes these from the HTTP methods. There are some supposed advantages to defining a fixed vocabulary, but I don't really grasp the point. Maybe someone can explain it. Why is a fixed vocabulary as outlined by REST better than dynamically defining a vocabulary for each state? For example, object-oriented programming is a popular paradigm. RPC is described as defining fixed interfaces, but I don't know why people assume that RPC is limited by these constraints. We could dynamically specify the interface just as a RESTful service dynamically describes its content structure. REST is supposed to be advantageous in that it can grow without extending the vocabulary. RESTful services grow dynamically by adding more resources. What's so wrong with extending a service by dynamically specifying a per-object vocabulary? Why don't we just use the methods that are defined on our objects as the vocabulary and have our services describe to the client what these methods are and whether or not they have side effects? Essentially I get the feeling that the description of a server-side resource structure is equivalent to the definition of a vocabulary, but we are then forced to use a limited vocabulary with which to interact with these resources. Does a fixed vocabulary really decouple the concerns of the client from the concerns of the server? I surely have to be concerned with some configuration of the server; this is normally resource location in RESTful services. Complaining about the use of a dynamic vocabulary seems unfair, because we have to dynamically reason about how to understand this configuration in some way anyway. A RESTful service describes the transitions you are able to make by identifying object structure through hypermedia. I just don't understand what makes a fixed vocabulary any better than a self-describing dynamic vocabulary, which could easily work very well in an RPC-like service. Is this just poor reasoning for the limited vocabulary of the HTTP protocol?

  • Non-Profit Technology for Non-Profits?

    - by TomJ
    I've been looking around for a way to give back to the community, but I haven't found the right fit yet, so an idea came to mind: a non-profit technology "company" that targets non-profits. Do these exist? I've been doing some Google searches and can only find software targeted at non-profits that is created by for-profit companies or that charges what I believe to be an outrageous amount, conferences directed towards non-profits and the technology they may use, or articles complaining about the digital divide and how non-profits view technology as key but don't have the funds or the knowledge to employ it. Pseudo "business model": an open-source 501(c)(3) organization that directly targets non-profits to fill the "digital divide". Most services would be free, consulting fees would be charged for customization, donations would be accepted, and government grants would be sought. This would enable non-profits to keep pace with the for-profits in the technology sector, but at little to no cost. Perhaps the first "industry" to be targeted would be those that fill key social needs, like unemployment services or food banks.

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am doing research on scalable architecture for web development, specifically a flexible architecture that can scale up or down according to need without losing any core functionality. By "modern web" I mean supporting all the devices used to access websites, where the loading mechanism differs per device. My architecture questions are:

    For PC: Accessing the web on a PC is faster, but it also depends on geo-location, so the application would by default check the capacity of the connection/browser and load the page accordingly.

    For mobile: Most mobile designs these days either hide information or use a different version of the same application. For example, Facebook uses m.facebook.com, which is completely different from the PC version. Hiding things from mobile using JavaScript or CSS is not a solution, as it consumes bandwidth and makes the application slow.

    So my research is about serving one application that has different stacks: when the application receives a request, it sends the packaged stack appropriate to that request. This way the load time for end users would be faster, and maintenance would be easier for developers. I am researching a 4-tier (layered) architecture: a presentation layer; an application logic layer (the main logic layer, which stores the presentation stack); a business logic layer; and a data layer.

    Main question: Have you come across a similar architecture? If so, can you list links here? I'm very interested to learn about those implementations, especially in real-world scenarios. Have you thought about similar architectures and tried your own ideas? If you have any ideas regarding this, I urge you to share them. I am open to any discussion on this, so please feel free to comment or answer.

  • Developing for Windows CE platform?

    - by grmbl
    I'm looking into creating some applications for workers to use on the shop floor. They'll be using Psion NEO devices running Windows CE 5.0. My skill set covers C#, PHP, and ASP.NET (plus web services).

    Application requirements: it should connect to our ERP system running on IBM iSeries (AS/400); it should run in fullscreen (effectively hiding the OS); and it needs touch-friendly usability.

    I have tried the following:

    Full WinForms application run through an RDP session: [+] easy deployment using an .rdp file; [+] the application can be run in a desktop environment too; [+] the RDP host can easily access DB2 using IBM drivers; [+] the GUI works fine on a small screen; [-] the environment is a terminal server (which is already under heavy use).

    Full WinForms application running on the device OS: [+] local environment; [+] responsive; [-] must use a web service to access DB2; [-] deployment...; [-] fixed platform (no desktop).

    Console application running on the device OS: [+] local environment; [+] very responsive; [-] must use a web service to access DB2; [-] no fullscreen or other window options?; [-] deployment...; [-] fixed platform (no desktop).

    I'm considering creating a web application, but it seems the OS comes with IE 5, and I don't want to alter the OS in any way (install other browsers, etc.). I would like an application that's responsive, easy to deploy, fullscreen, and optionally multi-platform. I have seen handheld devices using terminal (emulation?) with a console-like interface. This seems to be native to the device, but I'm afraid it requires modest knowledge of C++. It seems that using RDP is the way to go, but I came here for advice and am looking for people who have been in the same situation and are willing to share their experience. There don't seem to be many "best practices" on the web that could help me decide the best way of working. Greetings

  • structure problem in Relational DBMS creation

    - by Kane
    For learning and understanding purposes, I want to try to build a small relational DBMS with simple features: for now, only sequential reading/writing and handling of CREATE TABLE, INSERT, SELECT, UPDATE, and DELETE. I am currently in the design stage of the project, and I am stuck on how to store the read data in memory. First I was thinking of putting it into a proper struct, but the problem is that every table is different; knowing the type of each column is not an issue, but I am not sure C provides a way to build a fully dynamic structure. My second and current idea is to make a simple char array of the required length and read the data back in order with casts. But I am not sure this is a good way to do that part, so I wanted to ask for your opinion and advice. Thanks in advance for your help. NB: I hope my question is clear enough; I still lack practice in English.
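
    A sketch of the "char array plus casts" idea with the per-column offsets made explicit, so reads and writes stay systematic. It is written as C-style C++, the column types are an invented, minimal set, and the same approach translates directly to plain C with malloc'd buffers.

    ```cpp
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <string>
    #include <vector>

    // Per-table column metadata: the byte offset of each column inside a row buffer.
    // This is the "char array + casts" idea with offsets made explicit.
    enum class ColumnType { Int32, Char16 };        // hypothetical, minimal type set

    struct Column {
        std::string name;
        ColumnType type;
        std::size_t offset;                         // where the column starts in the row
        std::size_t size;                           // bytes it occupies
    };

    struct TableSchema {
        std::vector<Column> columns;
        std::size_t rowSize = 0;
        void addColumn(std::string name, ColumnType type, std::size_t size) {
            columns.push_back({std::move(name), type, rowSize, size});
            rowSize += size;
        }
    };

    int main() {
        TableSchema schema;
        schema.addColumn("id", ColumnType::Int32, sizeof(std::int32_t));
        schema.addColumn("name", ColumnType::Char16, 16);

        std::vector<unsigned char> row(schema.rowSize, 0);  // one row = one raw buffer

        // Write: copy values in at the column offsets (memcpy avoids alignment issues).
        std::int32_t id = 42;
        std::memcpy(row.data() + schema.columns[0].offset, &id, sizeof(id));
        std::strncpy(reinterpret_cast<char*>(row.data() + schema.columns[1].offset), "alice", 15);

        // Read: copy values back out using the same offsets.
        std::int32_t readId = 0;
        std::memcpy(&readId, row.data() + schema.columns[0].offset, sizeof(readId));
        std::cout << readId << " "
                  << reinterpret_cast<const char*>(row.data() + schema.columns[1].offset) << "\n";
    }
    ```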

  • Best open source ASP.NET MVC e-commerce projects

    - by Øyvind Knobloch-Bråthen
    I need to get an e-commerce site up and running, but I really don't want to program it from the bottom up if I don't have to. I want to build it using ASP.NET MVC. I'm looking for a good open-source alternative (or one for purchase, if it's modular enough) that I can use as a base and enhance with the functions I need. It has to have all the "normal" e-commerce features, and also the possibility to integrate with a credit card API of my choice. If anyone has any recommendations for me here, I would appreciate it :)

  • Learning Asynchronous programming

    - by xenoterracide
    Asynchronous, non-blocking, event-driven programming seems to be all the rage. I have a basic conceptual understanding of what this all means. However, what I'm not sure about is when and where my code can benefit from being asynchronous, or how to make blocking IO non-blocking. I'm sure I could simply use a library to do this, but I'm more interested in the underlying concepts and the various ways to implement it myself. Are there any comprehensive/definitive books or other resources on this subject (like GoF for design patterns, K&R for C, or the TLDP for things like bash)? (Note: I'm not sure if this is functionally identical to my question on learning event-driven programming.)
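
    As a tiny illustration of one common idea, keeping a loop responsive by offloading a blocking call and polling for its completion, here is a hedged C++ sketch using std::async; the blocking_read function is a made-up stand-in for slow IO.

    ```cpp
    #include <chrono>
    #include <future>
    #include <iostream>
    #include <string>
    #include <thread>

    // A stand-in for a blocking call such as a slow read. Hypothetical example.
    std::string blocking_read() {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        return "payload";
    }

    int main() {
        // Offload the blocking call so the main loop stays responsive.
        auto pending = std::async(std::launch::async, blocking_read);

        // Event-loop style: do other work while periodically polling for completion.
        while (pending.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
            std::cout << "doing other work...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        std::cout << "read finished: " << pending.get() << "\n";
    }
    ```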

  • Why write clean, refactored code?

    - by Shamal Karunarathne
    Hi programming lovers, this is a question I've been asking myself for a long time, so I thought I'd throw it out to you. From my experience working on several Java-based projects, I've seen tons of code that we call 'dirty': unconventional class/method/field naming, wrong ways of handling exceptions, unnecessarily heavy loops and recursion, etc. But the code gives the intended results. Though I hate to see dirty code, it takes time to clean it up, and eventually comes the question: "Is it worth it? It's giving the desired results, so what's the point of cleaning it up?" In team projects, should there be someone specifically assigned to refactor and check for clean code? Or are there situations where dirty code fails to give the intended results or makes customers unhappy? Do feel free to comment and reply, and tell me if I'm missing something here. Thanks.

  • I need help with algorithms, how do I improve?

    - by David Burr
    I usually do well at figuring out solutions to programming assignments, but for some reason I'm really struggling in my algorithms class. I'm not failing, but I know I can do better. When I'm confronted with problems like "divide an array into two subarrays so that the sums of the two subarrays are equal", I feel like my brain won't cooperate, and I end up not being able to solve it. Some of the things I'm doing right now to help myself: reading CLR (1st ed.) -- it takes a lot of time for the material to sink in and I can't understand most of it; and solving some problems -- but no matter how much I try, most of the time I end up googling for the solution before I understand how to solve it. I know that good algorithmic skills are very important, because lots of good companies ask these sorts of questions in their interview process, so I'm a bit worried right now. What else can I do to improve my algorithmic/problem-solving skills? Any advice on how to deal with this?
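
    For what it's worth, the quoted problem is the classic partition problem, and one standard approach is a subset-sum dynamic program over half of the total. This sketch assumes non-negative integers, and the sample array is arbitrary:

    ```cpp
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Returns true if `nums` can be split into two subsets with equal sums.
    // Classic subset-sum DP: can some subset reach total/2? Assumes non-negative values.
    bool canPartition(const std::vector<int>& nums) {
        int total = std::accumulate(nums.begin(), nums.end(), 0);
        if (total % 2 != 0) return false;          // an odd total can't split evenly
        int target = total / 2;

        std::vector<bool> reachable(target + 1, false);
        reachable[0] = true;                        // the empty subset sums to 0
        for (int x : nums) {
            for (int s = target; s >= x; --s) {     // iterate downward so each item is used once
                if (reachable[s - x]) reachable[s] = true;
            }
        }
        return reachable[target];
    }

    int main() {
        std::vector<int> a{3, 1, 5, 9, 12};         // 3+12 = 15 and 1+5+9 = 15
        std::cout << std::boolalpha << canPartition(a) << "\n";  // true
    }
    ```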

  • How to explain OOP to a MATLAB programmer?

    - by Oak
    I have a lot of friends who come from electrical/physical/mechanical engineering backgrounds and are curious about what "OOP" is all about. They all know MATLAB quite well, so they have a basic programming background, but they have a very hard time grasping a complex type system that can benefit from the concepts OOP introduces. Can anyone propose a way I can try to explain it to them? I'm just not familiar with MATLAB myself, so I'm having trouble finding parallels. I think using simple examples like shapes or animals is a bit too abstract for these engineers. So far I've tried using a Matrix interface versus array-based/sparse/whatever implementations, but that didn't work so well, probably because different matrix types are already well supported in MATLAB.
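
    One concrete angle that sometimes lands with engineers is showing the code rather than the vocabulary: an abstract Matrix interface with dense and sparse implementations, and a function that works on either without knowing which it has. A deliberately tiny C++ sketch, with all names invented:

    ```cpp
    #include <iostream>
    #include <map>
    #include <vector>

    // Abstract interface: callers work with "a Matrix" without knowing the storage.
    // Names (Matrix, DenseMatrix, SparseMatrix) are made up for illustration.
    class Matrix {
    public:
        virtual ~Matrix() = default;
        virtual double at(int row, int col) const = 0;
        virtual void set(int row, int col, double value) = 0;
    };

    // Dense storage: one contiguous vector.
    class DenseMatrix : public Matrix {
        int rows_, cols_;
        std::vector<double> data_;
    public:
        DenseMatrix(int rows, int cols) : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}
        double at(int row, int col) const override { return data_[row * cols_ + col]; }
        void set(int row, int col, double value) override { data_[row * cols_ + col] = value; }
    };

    // Sparse storage: only non-zero entries are kept.
    class SparseMatrix : public Matrix {
        std::map<std::pair<int, int>, double> data_;
    public:
        double at(int row, int col) const override {
            auto it = data_.find({row, col});
            return it == data_.end() ? 0.0 : it->second;
        }
        void set(int row, int col, double value) override { data_[{row, col}] = value; }
    };

    // Works on any Matrix implementation: this is the payoff of the abstraction.
    double trace(const Matrix& m, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += m.at(i, i);
        return sum;
    }

    int main() {
        DenseMatrix d(3, 3);
        SparseMatrix s;
        d.set(0, 0, 1.0); d.set(1, 1, 2.0);
        s.set(2, 2, 5.0);
        std::cout << trace(d, 3) << " " << trace(s, 3) << "\n";  // 3 and 5
    }
    ```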

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development at this small startup, and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running some sort of data collection and analysis software. I have come to develop two guidelines when I work with my colleagues: (1) define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap; (2) don't assume your colleagues know everything about their responsibilities; I assume there is some technology I will need to be competent in to properly interface with their work. The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product; for that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have -- at least not until they've demonstrated it to me a couple of times. So whenever I develop a piece of firmware that interfaces with some technology I don't know, I try to learn it and write a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUsbDotNet for USB access. So I've been learning C# and the .NET USB library he uses, and I built a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified, as it gives me a foothold for confronting integration problems. I also like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?

  • Learning to implement dynamic language compiler

    - by TriArc
    I'm interested in learning how to create a compiler for a dynamic language. Most compiler books, college courses and articles/tutorials I've come across are specifically for statically typed languages. I've thought of a few ways to do it, but I'd like to know how it's usually done. I know type inferencing is a pretty common strategy, but what about others? Where can I find out more about how to create a dynamically typed language?
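
    Not a full answer, but one recurring implementation detail worth studying is how dynamically typed values are represented at run time: every value carries a type tag, and operations dispatch on it (type inference and specialization then try to eliminate those checks). A rough C++17 sketch of such a tagged value and a dynamic "+", with invented names:

    ```cpp
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <variant>

    // In a dynamically typed language every runtime value carries its own type tag;
    // the compiled code dispatches on that tag. Names here (Value, plus) are made up.
    using Value = std::variant<double, std::string>;

    Value plus(const Value& a, const Value& b) {
        if (std::holds_alternative<double>(a) && std::holds_alternative<double>(b))
            return std::get<double>(a) + std::get<double>(b);                 // numeric addition
        if (std::holds_alternative<std::string>(a) && std::holds_alternative<std::string>(b))
            return std::get<std::string>(a) + std::get<std::string>(b);       // string concatenation
        throw std::runtime_error("type error: incompatible operands for +");
    }

    void print(const Value& v) {
        std::visit([](const auto& x) { std::cout << x << "\n"; }, v);
    }

    int main() {
        print(plus(Value{1.5}, Value{2.5}));                                   // 4
        print(plus(Value{std::string("foo")}, Value{std::string("bar")}));     // foobar
    }
    ```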

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and the averages of those metrics) within a CPU task schedule. Here's the basic table of arrival times and bursts: Here's my actual work version after ticking off the finished processes. The time quantum for each time slice is (2 queues) TQ1=4 and TQ2=3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here. I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.

  • Listing Unix experience on a resume.

    - by beacon
    I have been using Linux for quite a long time, and I am familiar with many Unix commands and tools. However, my only experience with Unix is through various Linux distributions. How should I communicate on my resume that I am familiar with the Unix command line even though I have never used a UNIX(R) system? It seems strange to me to list Unix when I've never used UNIX(R). Some people refer to Unix clones as *nix, but I'm afraid that might fly over the heads of some HR people.
