Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Why is most GNU software written in C?

    - by BallroomProgrammer
    I am a Java developer, and I rarely write GUI programs in C. However, I noticed that many GNU projects, such as PSPP, R, and Dia, are written in C instead of Java or C++. I personally don't mind this, but I am really curious why GNU favors C so much. My understanding is that C offers the least support for object-oriented programming, and today's CS education really emphasizes OOP, since OOP makes code more reusable. In that case, why would so many developers choose to develop in C instead of C++ or Java? Does anyone know why GNU software is so exclusively written in C? Do you think GNU software should be written in C++ or Java so that the source code could be more useful to people? Why or why not?

  • Message queue: which scenario is better?

    - by pandaforme
    I'm writing a web crawler. The crawler has two steps: get an HTML page, then parse the page. I want to use message queues to improve performance and throughput, and I can think of two scenarios.

    Scenario 1: urlProducer -> queue1 -> urlConsumer -> queue2 -> parserConsumer
    urlProducer: gets a target URL and adds it to queue1
    urlConsumer: according to the job info, gets the HTML page and adds it to queue2
    parserConsumer: according to the job info, parses the page

    Scenario 2: urlProducer -> queue1 -> urlConsumer, plus parserProducer -> queue2 -> parserConsumer
    urlProducer: gets a target URL and adds it to queue1
    urlConsumer: according to the job info, gets the HTML page and writes it to the database
    parserProducer: gets the HTML page from the database and adds it to queue2
    parserConsumer: according to the job info, parses the page

    There are multiple producers and consumers in each structure. Scenario 1 is like a chain of calls: when an error occurs, it's difficult to find where the problem is. Scenario 2 decouples queue1 and queue2: when an error occurs, it's easy to find where the problem is. I'm not sure this notion is correct. Which scenario is better, or is there another? Thanks!
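
    As a rough illustration of scenario 1, here is a minimal sketch using Python's standard library, with stand-ins for the real fetch and parse steps; the queue and worker names are illustrative, not from any specific framework:

        import queue
        import threading

        url_queue = queue.Queue()   # queue1: target urls waiting to be fetched
        page_queue = queue.Queue()  # queue2: html pages waiting to be parsed

        def url_consumer():
            while True:
                url = url_queue.get()          # blocks until a job arrives
                html = "<html>...</html>"      # stand-in for the real HTTP fetch
                page_queue.put((url, html))
                url_queue.task_done()

        def parser_consumer():
            while True:
                url, html = page_queue.get()
                print("parsing", url, len(html), "bytes")  # stand-in for real parsing
                page_queue.task_done()

        # one worker per stage here; several threads could drain each queue
        for worker in (url_consumer, parser_consumer):
            threading.Thread(target=worker, daemon=True).start()

        url_queue.put("http://example.com/")   # urlProducer adds a target url
        url_queue.join()                       # wait for stage 1 to drain
        page_queue.join()                      # wait for stage 2 to drain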

  • How do I design my application to scale?

    - by Muhammad
    I have an application which handles hardware events from devices connected to the same computer's PCIe slots. The motherboard has a maximum of two PCIe slots, and I have utilized both. So to scale the application I need either more PCIe slots in the same computer, or another computer. Consider that I am using another computer with the same application and hardware connected to its PCIe slots. My problem is that I want to design an application on top of this which can access the hardware devices of both computers and process their data. The processed data should be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.

  • Is it possible to duplicate a hardware signal on Linux?

    - by Ted Wong
    Since everything is a file on a UNIX system, if I have a piece of hardware, for example a mouse, and it moves from the left corner to the right corner, it should produce some kind of file I/O to communicate with the system. So, if my assumption is correct, is it possible to do the following things: Capture the raw data produced by moving the mouse cursor from the left corner to the right corner? Replay that raw data with a program, at the same rate and with the same contents, in order to "redo" moving the mouse cursor from the left corner to the right corner?
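
    For the capture half, Linux does expose input hardware as readable device nodes. A rough sketch, assuming a 64-bit Linux evdev node such as /dev/input/event0 (reading it usually requires root): each input_event record is two longs for the timestamp, then an unsigned short type, an unsigned short code, and a signed int value. Replaying usually goes the other way, through the uinput subsystem, by writing equivalent events to a virtual device.

        import struct

        EVENT_FORMAT = "llHHi"                  # input_event layout on 64-bit Linux
        EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

        with open("/dev/input/event0", "rb") as device:
            for _ in range(32):                 # capture a handful of raw events
                data = device.read(EVENT_SIZE)
                sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, data)
                print(sec, usec, etype, code, value)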

  • Designing boolean operations (AND, OR, NOT) in graph DBs like neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically neo4j with spring-data-neo4j, to see what can be done in graph databases. My model so far is:

    (Chef)-[HAS_INGREDIENT]->(Ingredient)
    (Chef)-[HAS_VALUE]->(Value)
    (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
    (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
    (Recipe)-[REQUIRES_VALUE]->(Value)

    I have this set up so I can do things like have the "chef" enter ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but missing one ingredient. Some recipes can get complex, utilizing AND, OR, and NOT type logic, something like (Milk AND (Butter OR spread OR (vegetable oil OR olive oil))), and I'm wondering if it would be sane to model this in a graph using a tree-type representation. An example of what I was thinking is to create three node types, AND, OR, and NOT, and have each of them connect to the value nodes underneath. How else might this be represented in a graph database, or is my example above a decent representation?
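
    One way to sketch the tree idea, in Python for illustration: each AND/OR/NOT node below stands in for a node you would store in the graph, and evaluation walks the tree against the ingredients a chef has on hand. The tuple encoding and names are illustrative only:

        # evaluate an AND/OR/NOT requirement tree against a set of ingredients
        def satisfied(node, on_hand):
            op, *children = node
            if op == "HAS":
                return children[0] in on_hand
            if op == "AND":
                return all(satisfied(c, on_hand) for c in children)
            if op == "OR":
                return any(satisfied(c, on_hand) for c in children)
            if op == "NOT":
                return not satisfied(children[0], on_hand)
            raise ValueError("unknown node type: " + op)

        recipe = ("AND", ("HAS", "milk"),
                         ("OR", ("HAS", "butter"), ("HAS", "spread"),
                                ("OR", ("HAS", "vegetable oil"), ("HAS", "olive oil"))))
        print(satisfied(recipe, {"milk", "olive oil"}))  # True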

  • Pattern for accessing MySQL connections

    - by Dipan Mehta
    We have a C++ application that accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which accesses the database for its own purposes. The application has a simple ORM-like model, but that really isn't an important factor here. There are three potential access patterns I can think of:

    1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections.

    2. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: (a) each object keeps a persistent connection for its whole life, or (b) each object initiates a connection as and when needed.

    3. To reduce the contention of case 1 without creating as many sockets as case 2, we can have group- or set-based connections: there could be more than one connection (say N), and each connection could be shared across M objects.

    Naturally, each pattern has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose among these patterns for my application? What are the advantages and disadvantages of each pattern over the others? Is there any other pattern that is better?

    PS: I have been through the questions "mysql, one connection vs multiple" and "MySQL with multiple threads and processes", but they don't quite answer exactly what I am trying to ask.
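
    As a rough sketch of pattern 3, in Python for illustration (the C++ shape would be analogous): a fixed pool of N connections handed out to objects as they need them, with a thread-safe queue acting as the pool. make_connection() is a stand-in for whatever MySQL client library you use:

        import queue
        from contextlib import contextmanager

        class ConnectionPool:
            def __init__(self, make_connection, size):
                self._pool = queue.Queue()
                for _ in range(size):
                    self._pool.put(make_connection())

            @contextmanager
            def connection(self):
                conn = self._pool.get()       # blocks when all N are in use
                try:
                    yield conn
                finally:
                    self._pool.put(conn)      # return it for the next object

        # object() stands in for a real connection handle
        pool = ConnectionPool(lambda: object(), size=4)
        with pool.connection() as conn:
            pass  # run queries on conn here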

  • Architecture/pattern resources for small applications and tools

    - by s73v3r
    I was wondering if anyone had any resources or advice related to using architectural patterns like MVVM/MVC/MVP/etc. on small applications and tools, as opposed to large, enterprisey ones. EDIT: Most of the information I see on application architecture is directed at large, enterprise applications. I'm just writing small programs and tools. As far as these architectural patterns go, is it generally worthwhile to take on the overhead of using an MVC/MVVM framework? Or would I be better off keeping it simple?

  • Acceptance tests done first... how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it has been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time such as "sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are written first.

    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface, I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems too unique to each feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things turn up later as having been missing from the specification.

    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, described from a use-case perspective, so that they can be "automated" by a human being. These could then be performed by the developer(s) writing the feature and/or verified by someone else, and when the testers later get an opportunity they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though.

    The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen: they're under a crunch to keep up with us developers, and if we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it currently stands, getting a feature "done", using agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing an image board of sorts in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really understand how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate the likelihood that two images are similar by comparing their hashes (Levenshtein distance or something of the sort).
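
    A minimal sketch of the idea, in Python for illustration (a PHP version would lean on hash_file() and rename()): hash the file's bytes and use the digest as the stored name, so byte-identical uploads collapse to a single file. Paths here are illustrative:

        import hashlib
        import os
        import shutil

        def store(upload_path, storage_dir="uploads"):
            digest = hashlib.sha256()
            with open(upload_path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            ext = os.path.splitext(upload_path)[1]
            os.makedirs(storage_dir, exist_ok=True)
            dest = os.path.join(storage_dir, digest.hexdigest() + ext)
            if not os.path.exists(dest):   # an existing file means a byte-identical duplicate
                shutil.copyfile(upload_path, dest)
            return dest

    On the side note: cryptographic digests like SHA-256 change completely under any small change to the input, so comparing digests (by Levenshtein distance or otherwise) says nothing about how similar two images look; perceptual hashing is the usual tool for that.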

  • Building massively scalable systems, where to start? [closed]

    - by Mahmoud Hossam
    Recently, I've been seeing job postings about building scalable systems using Java, and some of the technologies mentioned were Cassandra, Thrift, Hadoop, and MapReduce, among others. How can I get started with these technologies? Is there something else I need to know before actually learning any of them? Maybe some general concepts about building highly available and scalable systems? I already know Java SE, so I won't be starting from scratch.

  • Why do I have to choose between "management" and "technical" tracks in my career?

    - by Stephen Gross
    I was recently laid off, and although I found a new gig I'm a bit frustrated with how career tracks work in the land of software development. I really love doing a bit of everything: coding, testing, architect(ing), leadership/management, customer contact, requirements gathering, staff development, etc. Software companies, however, want me to fit into a niche: I'm either a coder, a tester, or a manager. When I try to explain to them that I'm best when I'm doing all of those at once, they seem very confused. I'm sympathetic to their interests, but at the same time frustrated that the industry works this way. Any advice? Do I just need to get with the program, so to speak?

  • Is there a language or design pattern that allows the *removal* of object behavior or properties in a class hierarchy?

    - by Sebastien Diot
    A well-known shortcoming of traditional class hierarchies is that they are bad when it comes to modeling the real world. As an example, try representing animal species with classes. There are actually several problems when doing that, but one that I have never seen a solution to is when a subclass "loses" a behavior or property that was defined in a superclass, like a penguin not being able to fly (there are probably better examples, but that's the first one that comes to mind, having seen "Madagascar 2" recently). On the one hand, you don't want to define, for every property and behavior, some flag that specifies whether it is present at all, and check it every time before accessing that behavior or property. You would just like to say that birds can fly, simply and clearly, in the Bird class. But then it would be nice if one could define "exceptions" afterward, without having to use some horrible hacks everywhere. This often happens once a system has been in production for a while: you suddenly find an "exception" that doesn't fit the original design at all, and you don't want to change a large portion of your code to accommodate it. So, is there some language or design pattern that can cleanly handle this problem without requiring major changes to the superclass and all the code that uses it? Even if a solution only handles a specific case, several solutions might together form a complete strategy.

    [EDIT] I forgot about the Liskov Substitution Principle: that is why you can't do it. Assuming you define traits/interfaces for all major "feature groups", you can freely implement traits in different branches of the hierarchy; for example, a Flying trait could be implemented by birds, some special kinds of squirrel, and some fish. So my question could amount to "How could I un-implement a trait?" If your superclass is a Java Serializable, you have to be one too, even if there is no way for you to serialize your state, for example if you contain a Socket. So one way to do it is to always define all your traits in pairs from the start: Flying and NotFlying (whose methods would throw UnsupportedOperationException if not checked against). The Not-trait would not define any new interface and could simply be checked for. Sounds like a "cheap" solution, in particular if used from the start.
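
    For comparison, composition sidesteps the un-implementing problem entirely: instead of inheriting behavior that a subclass may later need to lose, each object carries a set of capability objects that can be queried. A minimal sketch in Python, with illustrative names:

        class Flying:
            def fly(self):
                return "flap flap"

        class Animal:
            def __init__(self, name, capabilities=()):
                self.name = name
                self._caps = {type(c): c for c in capabilities}

            def can(self, capability):
                return capability in self._caps

            def do(self, capability, action, *args):
                if capability not in self._caps:
                    raise NotImplementedError(f"{self.name} cannot {action}")
                return getattr(self._caps[capability], action)(*args)

        sparrow = Animal("sparrow", [Flying()])
        penguin = Animal("penguin")            # no Flying capability: nothing to un-implement
        print(sparrow.do(Flying, "fly"))       # flap flap
        print(penguin.can(Flying))             # False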

  • Colleague unwilling to use unit tests "as it's more to code"

    - by m.edmondson
    A colleague is unwilling to use unit tests, instead opting for a quick test, a pass to the users, and, if all is well, publishing live. Needless to say, some bugs do get through. I mentioned we should think about using unit tests, but she was set against it once she realised more code would have to be written. This leaves me in the position of modifying something and not being sure the output is the same, especially as her code is spaghetti and I try to refactor it when I get a chance. So what's the best way forward for me?

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "micro-ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to introduce at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL. Perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you don't have to write SQL; it can be generated automatically, especially for multi-table joins and mapping relationships between tables, which are a pain in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>("select * from people where PersonId = @personId", new { PersonId = 42 });

    How is that any different from using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters, and (I suppose) map the entity back using a builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing, where a micro-ORM makes sense to use? I'm not really seeing how it saves anything over the "old" ADO.NET way except maybe a few lines of code: you still have to figure out what SQL you need to execute (which can get hairy), and you still have to map relationships between tables (the part that, IMHO, ORMs help with the most).
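
    For what it's worth, the thin layer itself is small enough to sketch. A rough Python illustration (not Dapper itself) of what a micro-ORM automates over a raw driver: you still write the SQL; the library only binds parameters and maps result rows onto objects:

        import sqlite3
        from dataclasses import dataclass

        @dataclass
        class Person:
            PersonId: int
            Name: str

        def query(conn, cls, sql, params=()):
            cursor = conn.execute(sql, params)
            columns = [c[0] for c in cursor.description]   # map columns onto fields by name
            return [cls(**dict(zip(columns, row))) for row in cursor.fetchall()]

        conn = sqlite3.connect(":memory:")
        conn.execute("create table people (PersonId integer, Name text)")
        conn.execute("insert into people values (42, 'Ada')")
        people = query(conn, Person, "select * from people where PersonId = ?", (42,))
        print(people)  # [Person(PersonId=42, Name='Ada')]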

  • How to practice object-oriented programming?

    - by user1620696
    I've always programmed in procedural languages, and currently I'm moving towards object orientation. The main problem I've faced is that I can't see a way to practice object orientation effectively. I'll explain my point.

    When I learned PHP and C it was pretty easy to practice: it was just a matter of choosing something and thinking about an algorithm for that thing. In PHP, for example, it was a matter of sitting down and thinking: "well, just to practice, let me build an application with an administration area where people can add products". This was pretty easy; it was a matter of thinking of an algorithm to register a user, log the user in, and add the products. Combining these with PHP features, it was a good way to practice.

    Now, in object orientation we have lots of additional things. It's not just a matter of thinking up an algorithm, but of analysing requirements more deeply, writing use cases, figuring out class diagrams, properties and methods, setting up dependency injection, and lots of other things. The main point is that, in the way I've been learning object orientation, it seems that a good design is crucial, while in procedural languages one vague idea was enough. I'm not saying that in procedural languages we can write good software without design, just that for the sake of practicing it is feasible, while in object orientation it seems infeasible to go without a good design, even for practicing.

    This seems to be a problem, because if every time I'm going to practice I need to figure out tons of requirements, use cases, and so on, it doesn't seem to be a good way to get better at object orientation, since it requires me to have a whole idea for an app every time I practice. Because of that, what's a good way to practice object orientation?

  • Inserting HTML code with jQuery

    - by J. Robertson
    One of our web applications is a page that takes in a serial number; various information is then returned and displayed to the user. The serial is passed via AJAX, and based on the response, one of the following can happen:

    An error message is shown
    A new form replaces the previous form

    Now, the way I am handling this is to use jQuery to destroy (using $.remove()) the table that displayed the initial serial form, then append another HTML table that contains another form. Right now I am including that additional form as part of the HTML source and just setting it to display:none, then using jQuery to show it when appropriate. However, I don't like this approach because if someone views the page source, they can see the table HTML that is not being displayed. My next thought was to use AJAX to read in another HTML file and append it that way. However, I am trying to keep down the number of files this project uses, and since most pages in our project will use AJAX, I can see a case where there are multiple files containing HTML snippets, and that feels sloppy to me. What is the best way to handle a case where multiple HTML elements are being shown and removed with jQuery?

  • Handling indirection and keeping layers of method calls, objects, and even XML files straight

    - by Cervo
    How do you keep everything straight as you trace deeply into a piece of software, through multiple method calls, object constructors, object factories, and even Spring wiring? I find that 4 or 5 method calls are easy to keep in my head, but once you go 8 or 9 calls deep it gets hard to keep track of everything. Are there strategies for keeping everything straight? In particular, I might be looking for how to do task X, but then as I trace down (or up) I lose track of that goal; or I find multiple layers need changes, but then I lose track of which changes as I trace all the way down. Or I have tentative plans that turn out to be invalid, but then during the tracing I forget that the plan is invalid and consider the same plan all over again, killing time. Is there software that might be able to help? grep and even Eclipse can help me do the actual tracing from a call to its definition, but I'm more worried about keeping track of everything, including the de facto plan for what has to change (which might vary as you go down/up and realize the prior plan was poor). In the past I dealt with a few big methods that you could trace and pretty much figure out within a few calls. But now there are dozens of really tiny methods, many just a single call to another method or constructor, and it is hard to keep track of them all.

  • Scalable solution for website polling

    - by Tom Irving
    I'm looking to add push notifications to one of my iOS apps. The app is a client for a website which doesn't offer push notifications. What I've come up with so far:

    The app sends a message to my home server when transitioning to the background, asking the server to start polling the website for the logged-in user.
    The home server starts a new process to poll for that user. Polling happens every so many seconds or minutes.
    When the user returns to the iOS app, the app sends a message to the home server to stop polling. The home server kills the process polling for that user. Repeat.

    The problem is that this soon becomes stupid: hundreds of users means hundreds of different processes. It's just not scalable in the slightest. What I've written so far is in PHP, using cURL to do the polling, and I only started with PHP a few days ago, so maybe I'm missing something obvious that could help me with this. Some advice would be great.
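
    One commonly suggested alternative, sketched here in Python with stand-in functions, is a single long-running poller that loops over every subscribed user on a schedule, so the process count stays constant no matter how many users are backgrounded:

        import time

        subscribers = {"user-1", "user-42"}      # users whose apps are backgrounded

        def poll_site(user_id):
            return None                          # stand-in for the real HTTP poll

        def send_push(user_id, update):
            print("push to", user_id, update)    # stand-in for the APNs call

        def poll_loop(interval=60):
            while True:                          # one process serves every subscriber
                for user_id in list(subscribers):
                    update = poll_site(user_id)
                    if update is not None:
                        send_push(user_id, update)
                time.sleep(interval)

        # poll_loop() would run forever; call it from the daemon's entry point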

  • How best to implement HTML5 support for my validation library

    - by Vivin Paliath
    I have created an annotation-based validation library called regula. There seems to be some amount of interest around the framework, and the next thing I'd like to do is support HTML5 validation. Originally I figured that I would check whether the browser supports the HTML5 validation that has been specified, and either emulate it or delegate to the built-in regula equivalents. This is trivial for things like required, but once you start getting into date validation it gets tricky (date widgets, localization, etc.). So I have a few options in front of me:

    1. A full HTML5 shim along with widgets (for date stuff etc.): I feel like this is overkill and essentially reinventing the wheel, since this is already covered by things like Modernizr.

    2. Use HTML5 validation if available (either native or provided by a shim), otherwise ignore it: this means that if HTML5 validation is available (natively or through a shim) I will use it; otherwise I will ignore it.

    I'm leaning towards the latter, since currently anyone who wants to use HTML5 validation will most probably require a shim anyway, as not all browsers support HTML5. Which option do you think is better?

  • What should developers know about Windows executable binary file compression?

    - by Peter Turner
    I'd never heard of this before, so shame on me, but programs like UPX can compress my files by 80%, which is totally sweet; however, I have no idea what the disadvantages of doing this are, or even what the compressor does. The website linked above doesn't say anything about dynamically linked DLLs, but it does mention compressing DESCENT 2 and Netscape 4.06. It also doesn't say what the tradeoffs are, only the benefits. If there weren't tradeoffs, why wouldn't my linker compress the file? If I have an environment with one executable and 20-30 DLLs, some of which are dynamically loaded and unloaded fairly arbitrarily, but not in loops (hopefully), do I take a big hit in processing time decompressing these DLLs when they're used?

  • What type of code is suitable for unit testing?

    - by RPK
    In test-driven development, what type of code is testable? I am using a micro-ORM (PetaPoco) and I have several methods that interact with the database, such as AddCustomer, UpdateRecord, etc. I want to know how to write tests for these methods. I searched YouTube for videos on writing tests for a DAL, but I didn't find any. I want to know which methods or classes are testable, and how to write a test before writing the code itself.
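
    As a rough illustration of one common approach (sketched in Python rather than your actual stack): point the data-access method at a fresh in-memory database in the test fixture, call it, then assert on the rows it wrote. add_customer() here is a hypothetical stand-in for something like your AddCustomer:

        import sqlite3
        import unittest

        def add_customer(conn, name):
            conn.execute("insert into customers (name) values (?)", (name,))

        class AddCustomerTest(unittest.TestCase):
            def setUp(self):
                self.conn = sqlite3.connect(":memory:")   # fresh, throwaway database per test
                self.conn.execute("create table customers (name text)")

            def test_adds_one_row(self):
                add_customer(self.conn, "Ada")
                rows = self.conn.execute("select name from customers").fetchall()
                self.assertEqual(rows, [("Ada",)])

        if __name__ == "__main__":
            unittest.main()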

  • When is an object-oriented program truly object-oriented?

    - by Syed Aslam
    Let me try to explain what I mean. Say I present a list of objects and I need to get back the object a user selected. The following are the classes I can think of right now: ListViewer, Item, App [the calling class]. In a GUI application, clicking on a particular item usually selects that item; in a command-line application, selection is some input, say an integer representing that item. Let us go with a command-line application here. A function lists all the items and waits for the choice of object, an integer. So here I get the choice; is the choice to be conceived as an object? And based on the choice, the object in the list is returned. Does writing this program the way explained above make it truly object-oriented? If yes, how? If not, why? Or is the question itself wrong, and I shouldn't be thinking along those lines?

  • I need some career guidance, please.

    - by user18956
    I have been a teacher of guitar and music theory for the last ten years or so, and I have decided to get out of it and pursue something involving computers, but I am very confused about it all. I have no training related to programming besides a knowledge of XHTML and CSS, which I realize are not even programming languages. My problem is that I know I want to do something with either making video games, computer/online applications, or some other programming job, but I haven't a clue how to begin. I picked up a book from the Head First series entitled Head First Programming, which uses Python to teach programming concepts, but beyond that I don't really know what a good direction is for me in terms of balancing career satisfaction with job availability and acceptable pay. I am not looking for a huge salary; I just want to be able to survive doing something I love, and which challenges me. I don't know even a single person involved in a related field, so I am in need of guidance. The first thing I would like to know is whether pursuing a career as a video game programmer is a realistic option. I love video games, play them all the time, and have always wanted to make them. If this is an option, what would be the recommended course of action? What is a good language or technology to get involved in for the job market now? I have read that PHP/MySQL is a good place to find a job for some. Can I find a job without school, or do I need to go to college? Also, will the Python I learn in this book translate into any other language I need to learn? If it is anything like music, then I am sure it will, but I don't know much about programming yet. And last, yet perhaps most important: is thirty years old too old for such a radical redirection in careers? Thank you for any help you can offer. I really need it.

  • How do bug reports factor into a sprint?

    - by Mark Ingram
    I've been reading up on Scrum recently. From my understanding, a meeting is held before the sprint starts to decide what gets moved from the product backlog to the upcoming sprint backlog. Once a feature is completed in the current sprint, it goes into the "Ready to QA" bucket, and it's at this point that I'm getting confused. Do bug reports go back into the product backlog? I assume they can't go back into the sprint backlog, as we've already decided what work will be done for this cycle. So what happens when QA finds a bug? Where does it go?

  • Should I list this work experience on my resume? [closed]

    - by Phoenix
    I am currently working at a company. Before this job, I did an internship with a prestigious company. The project itself was challenging, but it was in its initial phases, so there were no tight schedules, and we ended up brainstorming for the first month and spending the second month actually setting up our hardware: Linux servers in a lab and a cluster administrator for the servers. Then I wrote an add-in task which runs on the server and uses an existing API to collect statistics from the servers in the cluster, feeding them into another entity, basically an algorithm that calculates how the load on the servers should be automatically balanced. Neither of these things went into production by the time I left the company, and I'm not even sure of their current state. Does it make sense to include it on my resume, then? I also worked as a software engineer right out of school at another prestigious company for 9 months. I was involved in some bug fixes before the product launched, and I don't even recollect the exact fixes I made to the product. So, does it make sense to have these experiences on my resume? Will people question me about them, and will saying it was bug fixes, and mentioning what kind of fixes, suffice to justify my work experience there?
