Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • High RAM usage by a wxPython GUI, and need some advice to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs; four of them are just RichTextCtrl boxes and the other one has controls for uploading files: buttons, TextCtrls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good amount of sizers. So, now the problem is that the GUI starts off with an initial memory footprint of around 40MB, which is too much for such a simple application (or so I think). Also, the functions handling the data use huge lists, as the program is for debugging large data logs and identifying the problems in 'em, which means I can't afford to spend memory on the GUI. So, how can I reduce that initial working memory size? Is it a general issue in wxPython? I'm currently trying to use profilers, but I'm not sure if that's going to help.

    Read the article

  • About insertion sort, and especially why it's said that a copy is much faster than a swap?

    - by Software Engeneering Learner
    From Lafore's "Data Structures and Algorithms in Java" (about insertion sort, which uses copy + shift instead of the swap used in bubble and selection sort): "However, a copy isn't as time-consuming as a swap, so for random data this algorithm runs twice as fast as the bubble sort and faster than the selection sort." The author also doesn't mention how time-consuming the shift is. From my point of view, a copy is the simplest pointer assignment operation, while a swap is three pointer assignment operations, which doesn't take much time. Also, a shift of N elements is N pointer assignment operations. Please correct me if I'm wrong. Please explain why what the author says is true? I don't understand.
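
    A minimal Java sketch may make the assignment counts concrete: a swap-based inner loop does three assignments per element moved, while insertion sort's shift does one assignment per element plus a single copy of the inserted value (plain ints are used here for simplicity; Lafore's reference versions work the same way):

        // Swap, as used by bubble and selection sort: three assignments
        // for every pair exchanged.
        static void swap(long[] a, int i, int j) {
            long tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }

        // Insertion sort: copy the element once, then shift with a single
        // assignment per element, instead of a three-assignment swap each step.
        static void insertionSort(long[] a) {
            for (int out = 1; out < a.length; out++) {
                long temp = a[out];            // one copy of the inserted value
                int in = out;
                while (in > 0 && a[in - 1] > temp) {
                    a[in] = a[in - 1];         // shift: one assignment per element
                    in--;
                }
                a[in] = temp;                  // drop the copy into its slot
            }
        }

    The comparison count stays O(n^2) either way; the claimed speedup is the constant factor from doing roughly a third as many assignments.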

    Read the article

  • Is it worth it to learn experimental languages?

    - by Xander Lamkins
    I'm a young programmer who desires to work in the field someday as a programmer. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know that it is valuable to extend what I know - to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought, maybe I should push for C. C and C++ are nice, but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well, and is slow because it's run by the JVM and not compiled to native code.

    I've been a Windows developer for quite a while. I recently started using Java - but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language, for these reasons: its most common use is for web applications and cellphone apps (specifically Android); and as far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for - it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and bipolar as far as computer spec support. Other than that it's used for servers, but heck - I don't only want to make/configure servers.

    The .NET languages are nice, however: people laugh if I even mention VB.NET or C# in a serious conversation; it isn't cross-platform unless you use Mono (which is still in development and has some improvements to be made); and it lacks low-level stuff because, like Java with the JVM, it is run/managed by the CLR. My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.

    What I've Looked Into: Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being completely compiled. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focused on other things, such as Opa for web development and Go by Google.

    My Question: Is it worth learning these "experimental" languages? I've read other questions that say that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this, and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.

    Disclaimer: Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING! I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused some incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

    Read the article

  • When do domain concepts become application constructs?

    - by Noren
    I recently posted a question regarding recovering a DDD architecture that became an anemic domain model into a multitier architecture, and this question is a follow-on of sorts. My question is: when do domain concepts become application constructs? My application is a local client C# 4/WPF app with the following architecture:

    Presentation Layer: Views; ViewModels
    Business Layer: ???
    Domain Layer: classes that take the POCOs with primitive types and create domain concepts (e.g. image, layer, etc.); sanity-checks values (e.g. image width > 0); interfaces for DTOs; an interface for a repository that abstracts the filesystem
    Data Access Layer: classes that parse the proprietary binary files into POCOs with primitive types through explicit knowledge of the file format; implementations of the domain DTOs; an implementation of the domain repository class
    Local Filesystem: proprietary binary files

    When does the MyImageType domain class with Int32 width, height, and Int32[] pixels become a System.Windows.Media.ImageDrawing? If I put it in the domain layer, it seems like implementation details are being leaked (what if I didn't want to use WPF?). If I put it in the presentation layer, it seems like it's doing too much. If I create a business layer, it seems like it would be doing too little, since there are few "rules" given the CRUD nature of the application. I think all of my reading has led to analysis paralysis, so I thought fresh eyes might lend some perspective.
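
    The question targets WPF/C#, but the usual answer - convert at the presentation boundary via an adapter, keeping the domain layer toolkit-free - can be sketched in Java (matching the other snippets on this page). BufferedImage stands in for ImageDrawing here, and all names are hypothetical:

        import java.awt.image.BufferedImage;

        // Domain type from the question: pure data, no UI imports.
        class MyImageType {
            final int width, height;
            final int[] pixels;   // assumed packed ARGB, one int per pixel
            MyImageType(int width, int height, int[] pixels) {
                this.width = width; this.height = height; this.pixels = pixels;
            }
        }

        // Presentation-layer adapter: the only place that knows about the
        // UI toolkit's image type.
        final class ImageAdapter {
            static BufferedImage toUiImage(MyImageType img) {
                BufferedImage ui = new BufferedImage(
                        img.width, img.height, BufferedImage.TYPE_INT_ARGB);
                // setRGB copies the packed pixel array row by row.
                ui.setRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);
                return ui;
            }
        }

    Swapping UI toolkits then means replacing only this adapter; the domain class never changes, which resolves the "leaked implementation details" worry without inventing a business layer that has nothing to do.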

    Read the article

  • Learning Zend Framework 1 or 2?

    - by ehijon
    I have programmed in PHP for a few years and now I'm going to learn Zend Framework. Zend is very popular and there are a lot of tutorials, books and documentation out there. But I've seen in the last few months that there is a second version of Zend, though it's not as used and popular - not yet. I think it is better to start with a new version, but I don't know what to do now, as when I look at job offers many of them require the first version. Which version do you suggest?

    Read the article

  • How does a search functionality fit in DDD with CQRS?

    - by Songo
    In Vaughn Vernon's book "Implementing Domain-Driven Design" and the accompanying sample application, I found that he implemented a CQRS approach in the iddd_collaboration bounded context. He presents the following classes in the application service layer: CalendarApplicationService.java, CalendarEntryApplicationService.java, CalendarEntryQueryService.java, CalendarQueryService.java. I'm interested to know: if an application has a search page featuring numerous drop-downs and check boxes, with a smart text box to match different search patterns, how would you structure all that search logic - in a command service or a query service? Taking a look at CalendarQueryService.java, I can see that it has two methods for a huge query, but no logic at all to mix and match any search filters, for example. I've heard that the application layer shouldn't have any business logic, so where would I construct my dynamic query? Or should I just clutter everything into the query service?
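
    Since dynamic filtering contains no domain rules, one common answer is to keep it entirely on the query side of the CQRS split. A minimal Java sketch of a read-side service assembling one parameterized query from whichever filters are set (all names are hypothetical, not taken from Vernon's sample):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical filter object mirroring the search form's controls.
        class CalendarSearchFilter {
            String ownerId;       // drop-down
            Boolean sharedOnly;   // check box
            String textPattern;   // smart text box
        }

        class QuerySpec {
            final String sql;
            final List<Object> params;
            QuerySpec(String sql, List<Object> params) {
                this.sql = sql; this.params = params;
            }
        }

        // Query-side service: thin, no domain rules, read-model access only.
        class CalendarQueryService {
            QuerySpec buildQuery(CalendarSearchFilter f) {
                StringBuilder sql = new StringBuilder(
                        "SELECT id, name, owner_id FROM calendar WHERE 1=1");
                List<Object> params = new ArrayList<>();
                if (f.ownerId != null) {
                    sql.append(" AND owner_id = ?");
                    params.add(f.ownerId);
                }
                if (Boolean.TRUE.equals(f.sharedOnly)) {
                    sql.append(" AND shared = TRUE");
                }
                if (f.textPattern != null && !f.textPattern.isEmpty()) {
                    sql.append(" AND LOWER(name) LIKE ?");
                    params.add("%" + f.textPattern.toLowerCase() + "%");
                }
                return new QuerySpec(sql.toString(), params);
            }
        }

    The command side never sees any of this; it only handles state-changing operations, which is why mixing the filter logic into a command service would put it on the wrong half of the split.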

    Read the article

  • Resources for using TFS for Agile Project Development?

    - by Amy P
    Our company just installed TFS for us to start using for project development processes and source control. They want us to start using it to manage our projects as well. We have a small team, no current bug- or task-tracking software, and only 2 of the 3 developers have experience with actual methodologies. What books, websites, and/or other information can you recommend for us to get started?

    Read the article

  • Didactic approaches to teaching versioning with Git

    - by Herberth Amaral
    I have already taught versioning with Git, but I think it could be more enjoyable for the guys I teach if I used another approach. The guys I mentioned were used to working with SVN, and I tried to teach Git based on SVN. Not such a good idea. It seems that some guys/teams who use SVN need a re-education on version control when they're learning Git or another DVCS. In another attempt, I tried to show a scenario where a development team tries to work without a (D)VCS, and then I showed how their lives could be easier if they used one. I had the impression that part of the audience left the presentation without a clue what I was talking about. I've taught other classes on other subjects without problems, so I think this is not an issue with me as a teacher, but with my method. I know Git and versioning as well as I know the other subjects I've presented to other classes. So, basically, how do you teach Git/DVCS? Start with some diffs/patches and manual versioning, and then show how much more productive it can be with Git? Start with the Git object model? Or start with some handy commands and try to save some time? To be clear: I'm looking for approaches to teaching DVCS (focusing on Git) effectively, based on real experiences.

    Read the article

  • Handling extremely large numbers in a language that can't handle them?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (ad infinitum - integers, no floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations; rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this:

    Determine the max length of the integer variable for the language. If a number is more than half the max length of the variable, split it into an array (to give a little play room). Array order: [0] = the digits most to the right, [n-max] = the digits most to the left. Ex. Num: 29392023 → array[0]: 23, array[1]: 20, array[2]: 39, array[3]: 29.

    Since I established half the length of the variable as the cut-off point, I can then track the ones, tens, hundreds, etc. places via the halfway mark, so that if a variable's max length was 10 digits (0 to 9999999999), then halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of array[0] is in the same place as the first digit (from the right) of array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations of supporting larger numbers than the language can handle natively.
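
    This is essentially how arbitrary-precision ("bignum") libraries work: the number is stored as an array of fixed-size chunks ("limbs") and the carry is propagated between them by hand. A minimal Java sketch of addition, using four decimal digits per chunk so a chunk sum can never overflow (the chunk size and names are illustrative):

        // Add two non-negative numbers stored as int chunks, least-significant
        // chunk first, four decimal digits per chunk (base 10_000).
        static int[] add(int[] a, int[] b) {
            final int BASE = 10_000;
            int n = Math.max(a.length, b.length);
            int[] result = new int[n + 1];          // +1 for a possible final carry
            int carry = 0;
            for (int i = 0; i < n; i++) {
                int sum = carry
                        + (i < a.length ? a[i] : 0)
                        + (i < b.length ? b[i] : 0);
                result[i] = sum % BASE;             // digit chunk in [0, BASE)
                carry = sum / BASE;                 // 0 or 1 for addition
            }
            result[n] = carry;                      // may leave a leading zero chunk
            return result;
        }

    With these chunks, 29392023 is stored as {2023, 2939}. Multiplication works the same schoolbook way, just with wider intermediate values (a long per partial product) before the carries are pushed through.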

    Read the article

  • Are Intel compilers really better than Microsoft ones?

    - by Rocket Surgeon
    Years ago I was surprised when I discovered that Intel sells Visual Studio-compatible compilers. I tried it, in particular for C/C++, along with the fantastic diagnostic tools. But the code was simply not that computationally intensive for me to notice the difference. The only impression was: did Intel really do that for me just now? Wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? This is not a broad or vague question, or an attempt to spark a holy war; it is a question about two very visible tools. Nobody likes it when tools have mysteries or surprises, and choices between best and best are always a pain. I also understand the "grass is greener" argument. I want to hear all the "what if" stories. What if Intel just optimizes it locally for the chip stepping of the month, and not every hardware target will actually work as well as with Microsoft-compiled code? What if AMD hardware is the target and everything will slow down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticed opportunities that Microsoft's compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But someone knows some answers.

    Read the article

  • Initialized variables vs named constants

    - by Mike
    I'm working on a fundamental programming class in college, and our textbook is "Programming Logic and Design" by Joyce Farrell (spelling?). Anyhow, I'm struggling conceptually when it comes to initialized variables and named constants. Our class is focusing on pseudo-code for the time being and not one particular language, so let me illustrate what I'm talking about. Let's say I am declaring a variable named "myVar" and the data type is numeric:

        num myVar

    Now I want to initialize it (I don't understand this concept), starting with the number 5:

        num myVar = 5

    How is that any different than creating a named constant?
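
    The textbook stays language-neutral, but the distinction shows up clearly in Java (a minimal sketch): initialization only gives a variable its first value, while a named constant is additionally a promise that the value will never change.

        // An initialized variable: starts at 5, but is free to change later.
        int myVar = 5;
        myVar = 12;               // fine - it's still a variable

        // A named constant: also starts at 5, but `final` makes the
        // compiler reject any later reassignment.
        final int MY_CONSTANT = 5;
        // MY_CONSTANT = 12;      // compile error: cannot assign to a final variable

    So "num myVar = 5" in the book's pseudocode is just a declaration and an initialization fused into one line; a named constant adds the "never reassign" rule on top.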

    Read the article

  • Free OS with MS Windows architecture and capabilities

    - by Nayana Adassuriya
    Currently most PC users depend on the Windows OS, and they would not move away from it because of their hands-on usage knowledge and because of the look-and-feel habituation. But there are plenty of Linux-based desktop operating systems out there, such as Ubuntu and Fedora. Users do not tend to go for those OSs (especially in office environments) because most of the 3rd-party software and tools (such as Photoshop, Flash, Visual Studio) can mostly be installed only on a Windows operating system. So I'm thinking: why can't we create a free OS that works the same as Windows? One that is capable of installing software created for Windows, that can communicate with Windows servers and Exchange, etc. Simply put, it should be a free OS with all the capabilities of the Windows OS. What do you think of the idea?

    Read the article

  • Is "code that generates code" really all that great?

    - by Jaxo
    I was looking through CodePen's "popular pens" and I noticed this cool little spiral animation somebody made with a seemingly ridiculously small amount of code. This is quite impressive until you click the headings for HTML and CSS to show the "compiled" versions of the same code. Suddenly the 3 lines of HAML and ~40 lines of SCSS turns into a gigantic monster of repetition. Here's where my question comes in: Is it acceptable to do something like this in practice? Don't get me wrong - I love using preprocessors to help me write code faster, but in some cases it looks like it's an automatic copy-paste machine.

    Read the article

  • Help in (re)designing my Swing application

    - by Harihar Das
    I have developed a Swing application that controls the execution of several script-like jobs. I need to display the interim output of the jobs concurrently. I followed MVC while writing the application, and it is working as expected. But of late I have the following requirements in hand: A few of the script jobs need special user privileges to execute, so as to access specialized resources. There seems to be no way in Java to impersonate a different user while running an application [examined in this question]. Also, trying to run the Swing application as a scheduled task in Windows is not helping. Once started, the jobs should keep running even if the user logs off after starting them. I am thinking of separating the execution logic from the UI and running that as a service, and introducing JMS between the two layers so as to store/retrieve the interim output. Note: I need to run this application on Windows. Any ideas on meeting my requirements will be highly appreciated.

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that follows Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further: I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as a separate file, or inside the one bundled build, again. If I go down this path of extracting the libraries and providing a bundled version of the final code, does this require a major version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought is no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API of the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump the first, second or third number? Or something else?

    Read the article

  • Do you test your SQL/HQL/Criteria?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL. But it's very hard to use (not developer-friendly, so to speak), because you need to prepare the test data first (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL is working even after some hardcore refactoring.
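
    A lighter-weight take on the same idea is an embedded database spun up inside each test; a minimal sketch with H2 and JUnit 4 (both assumed to be on the classpath - each test builds its own throwaway data, so nothing is shared between tests):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.Assert;
        import org.junit.Test;

        public class OrderQueryTest {
            @Test
            public void findsOnlyPaidOrders() throws Exception {
                // "jdbc:h2:mem:" opens a private, throwaway in-memory database
                // that disappears when the connection closes.
                try (Connection c = DriverManager.getConnection("jdbc:h2:mem:");
                     Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE orders (id INT, paid BOOLEAN)");
                    s.execute("INSERT INTO orders VALUES (1, TRUE), (2, FALSE)");

                    // The SQL under test runs against a real (if in-memory)
                    // database engine, so a hardcore refactoring that breaks
                    // the query breaks this test.
                    try (ResultSet rs = s.executeQuery(
                            "SELECT COUNT(*) FROM orders WHERE paid = TRUE")) {
                        rs.next();
                        Assert.assertEquals(1, rs.getInt(1));
                    }
                }
            }
        }

    The usual caveat is SQL-dialect drift: H2 won't catch statements that only misbehave on the production engine, which is why some teams run the same tests against a real database in CI as well.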

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins, and I'm wondering: at which point are they useful? Surely, for a tiny project with only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests, and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they're really testing "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails, so each developer launches all the tests to prevent regressions. Obviously, because all the developers always launch all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (when there are any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this too with Mercurial, by checking out old revs.) Basically, I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • How can I approach creating an efficient algorithm for maximizing value with these specific constraints?

    - by sway
    I'm having trouble coming up with an approach that isn't n^2 for this problem. Here's a contrived, simplified version I've come up with: let's say you're a company that needs 4 employees to launch in a new city - a manager, two salespeople, and a customer support rep - and you magically know how much impact every candidate will have and how much salary they require to take the job. Your table of potential employees looks something like this:

    Name            Position      Salary   Impact
    Adam Smith      Manager       60,000   11
    Allison Brown   Salesperson   40,000   9
    Brad Stewart    Manager       55,000   9
    ...etc (thousands of records)

    What algorithmic approach can be taken to find the maximum "impact" while still filling all the positions and remaining under, say, a 200,000 budget? Thanks!
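
    One way to avoid enumerating candidate combinations is a knapsack-style dynamic program: for each role, compute the best impact achievable for the required head-count at every budget level, then merge the roles' tables over the shared budget. A hedged Java sketch - the names and the "salaries come in whole units (e.g. thousands)" assumption are mine:

        import java.util.Arrays;

        // Pseudo-polynomial DP: O(candidates * slotsPerRole * budgetUnits)
        // per role, then O(budgetUnits^2) per merge. Budget is in units
        // (e.g. thousands), so 200,000 -> 200.
        class HiringPlanner {
            static final int NONE = Integer.MIN_VALUE / 2;   // "impossible" marker

            // dp[k][b] = best impact choosing exactly k people from this role
            // with total salary exactly b units; returns the row for `need`.
            static int[] roleRow(int[] salary, int[] impact, int need, int budget) {
                int[][] dp = new int[need + 1][budget + 1];
                for (int[] row : dp) Arrays.fill(row, NONE);
                dp[0][0] = 0;
                for (int i = 0; i < salary.length; i++)       // each candidate once
                    for (int k = need; k >= 1; k--)           // descending: 0/1 choice
                        for (int b = budget; b >= salary[i]; b--)
                            if (dp[k - 1][b - salary[i]] != NONE)
                                dp[k][b] = Math.max(dp[k][b],
                                        dp[k - 1][b - salary[i]] + impact[i]);
                return dp[need];
            }

            // Combine two roles' rows over the shared budget.
            static int[] merge(int[] a, int[] b, int budget) {
                int[] out = new int[budget + 1];
                Arrays.fill(out, NONE);
                for (int i = 0; i <= budget; i++)
                    if (a[i] != NONE)
                        for (int j = 0; i + j <= budget; j++)
                            if (b[j] != NONE)
                                out[i + j] = Math.max(out[i + j], a[i] + b[j]);
                return out;
            }
        }

    Build one row per role (managers need 1, salespeople 2, support 1), merge the rows, and the answer is the maximum entry of the final array. With thousands of candidates and a 200-unit budget this is a few hundred thousand operations, rather than trying all candidate pairings.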

    Read the article

  • What to do when the programming activity becomes a problem?

    - by gablin
    I once saw a program (can't remember which) that talked about people "experiencing flow" when they are doing something they are passionate about. When "in flow", they tend to lose track of time and their surroundings, concentrating only on the activity at hand. This happens a lot for me when I program, most particularly when I face a problem: I refuse to give up until it's solved. This usually leads to hours just rushing by; I forget to eat lunch, dinner gets pushed far into the evening, and when I finally look at the clock, it's way into the wee hours of the night and I will only get a few hours of sleep before having to rise early in the morning. (This is not to say that I'm in flow only when facing a problem - but I find it particularly hard to stop programming and step back when there's something I can't solve immediately.) I love programming, but I hate it when it disrupts my normal routines (most importantly, eating and sleeping patterns). And sitting still for so many hours, staring at a screen, is not healthy. Please, any ideas on how I can get my rampant programming activity under control?

    Read the article

  • Two interfaces with identical signatures

    - by corsiKa
    I am attempting to model a card game where cards have two important sets of features. The first is an effect: the changes to the game state that happen when you play the card. The interface for an effect is as follows:

        boolean isPlayable(Player p, GameState gs);
        void play(Player p, GameState gs);

    And you could consider the card to be playable if and only if you can meet its cost and all its effects are playable. Like so:

        // in the Card class
        boolean isPlayable(Player p, GameState gs) {
            if (p.resource < this.cost)
                return false;
            for (Effect e : this.effects) {
                if (!e.isPlayable(p, gs))
                    return false;
            }
            return true;
        }

    Okay, so far, pretty simple. The other set of features on the card are abilities. These abilities are changes to the game state that you can activate at will. When coming up with the interface for these, I realized they needed a method for determining whether they can be activated or not, and a method for implementing the activation. It ends up being:

        boolean isActivatable(Player p, GameState gs);
        void activate(Player p, GameState gs);

    And I realize that, with the exception of calling it "activate" instead of "play", Ability and Effect have the exact same signature. Is it a bad thing to have multiple interfaces with an identical signature? Should I simply use one, and have two sets of the same interface? As so:

        Set<Effect> effects;
        Set<Effect> abilities;

    If so, what refactoring steps should I take down the road if they become non-identical (as more features are released), particularly if they're divergent (i.e. they both gain something the other shouldn't, as opposed to only one gaining and the other being a complete subset)? I'm particularly concerned that combining them will be unsustainable as soon as something changes. The fine print: I recognize this question is spawned by game development, but I feel it's the sort of problem that could just as easily creep up in non-game development, particularly when trying to accommodate the business models of multiple clients in one application, as happens with just about every project I've ever done with more than one business influence... Also, the snippets used are Java snippets, but this could just as easily apply to a multitude of object-oriented languages.
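
    One middle path between duplicating the interface and collapsing the two concepts is a shared super-interface with two (initially empty) sub-interfaces. A sketch of that idea, with minimal stub types so it compiles on its own:

        class Player { int resource; }
        class GameState { }

        // The shared contract both concepts happen to satisfy today.
        interface GameAction {
            boolean isUsable(Player p, GameState gs);
            void use(Player p, GameState gs);
        }

        // Distinct types: a method can still demand "an Effect, not just
        // anything usable", and each can grow its own methods later
        // without touching the other.
        interface Effect extends GameAction { }
        interface Ability extends GameAction { }

        // On Card, the two sets stay type-safe and separate:
        // Set<Effect> effects;  Set<Ability> abilities;

    If the two diverge later, new methods land only on the sub-interface that needs them, and code written against GameAction keeps compiling; the extends link only has to be cut if the shared methods themselves ever need different signatures.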

    Read the article

  • RESTful WebAPI vs. regular controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic. I've also read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking it. We've never developed an app using pure WebAPI. We're trying to write a SPA-style app where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it, WebAPI seems like the logical route to serve data, as opposed to using the standard MVC controllers and serving our data via an action result converted to JSON. This, to me at least, seems like an "MC" design... which seems odd, and not what MVC was meant for (look mom... no view). What would be considered normal convention in terms of performing action-y calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController, and then perform a POST? It seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is that we implement something that might be weird and could turn into a coding/maintenance concern later on.

    Read the article

  • Commercial product using a GPL OS

    - by pfried
    We are planning to create a commercial product. The product consists of some MCUs and a small computer (we are developing on a Raspberry Pi at the moment). The computer needs an operating system, as we would like to keep things like WLAN and booting as simple as possible, and we create some software running on this computer (a node.js application). Most operating systems, like Arch Linux, are licensed under the GPL. The product we would sell contains the computer with the preinstalled OS and software. This system operates as a central access point to the MCU devices and is able to control them. We use others' software in our product; we do not modify their source code. The product (the computer part) consists of a computer, an OS and software we create. How does the use of a GPL OS affect the license of our own code? Is there a possibility of avoiding the GPL for our own code, e.g. by shipping the software separately? Are there any effects on other components of our product, e.g. the MCU part? The node.js application delivers a web app to the client, where it is executed - are there any effects there (as we would like to sell parts of the code as an additional app on the app stores)? I know we make use of the work of the community, and I respect this. The problem is: the software alone is kind of useless without the MCU devices. I do not expect legal advice.

    Read the article

  • Flaws in my PHP development setup - sharing sources causing lags

    - by Wiktor
    I have the following development setup for my PHP projects: a workstation running Windows 7 with the PhpStorm IDE; Git for version control; CentOS on a virtual machine (VirtualBox) with Apache and MySQL (a copy of the production server). So far, I've been sharing the project's source folders between the host and guest systems, and it was working quite well, only really slowly. The reason is that Apache was reading files from a remote folder (mounted locally). After doing some research, I found out that this setup could be improved by using disk mapping (Samba) instead of folder sharing, so I made that change. I configured PhpStorm to automatically deploy files to the mapped drive. Everything works like a charm now, except for one problem: when I change branches, I need to synchronize the project's local folder with the one on the mapped drive, and that takes time - a lot of time (like branching in SVN). Is there another way to handle this than just working on the files directly on the mapped drive?

    Read the article

  • Do we need to adopt a black-box asset our project is inheriting from its predecessor?

    - by Tom Anderson
    Our client has an eCommerce site which was developed by an in-house team and is now showing its age. I work for a firm brought in as external contractors to build a replacement. Part of the current site is a Flash viewer applet which displays media about the product - zoomable images, 360-degree views, movies, and so on. We need to show the same media the current site does, so we are simply reusing the viewer. The viewer is embedded on a page in the usual way and told what media to show by means of an XML file it loads from our server, which is pretty simple for us to generate. We've got this working; it was pretty straightforward. But what else do we need to do? The thing is, as far as we're concerned, the viewer is a binary blob served from the client's content-distribution network. We embed it, feed it some XML, and it does its job, but we have no power over its internals. It's completely opaque to us - a black box. We can use it to do what it does, but we can't change it, so if we ever need to do something different, we're stuffed. We're building this site for the client, and when we're done, we'll hand it over for them to maintain; we won't be doing the maintenance ourselves. There's a small team within the client who are working as part of our team, and who will be the ones doing the maintenance. That team includes only one person from the team that built the old site, and it's not someone who knows the image viewer. The people who do know the image viewer are not slated to join our team when our system replaces theirs - they'll be moved to other projects. The documentation on the viewer is extremely thin and, as far as I know, doesn't cover the internals at all. My worry is that if someone doesn't take some positive action, all knowledge of the internal workings of the viewer - even down to where its source code is - will be lost. It's possible it already has been. Is this something to worry about? If so, whose job is it to worry about it? What should they do about it once they've got worried?

    Read the article

  • Nearest color algorithm using Hex Triplet

    - by Lijo
    The following page lists colors with names: http://en.wikipedia.org/wiki/List_of_colors. For example, the hex triplet #5D8AA8 is "Air Force Blue". This information will be stored in a database table (tbl_Color (HexTriplet, ColorName)) in my system. Suppose I create a color with hex triplet #5D8AA7. I need to get the nearest color available in the tbl_Color table. The expected answer is "#5D8AA8 - Air Force Blue", because #5D8AA8 is the nearest color to #5D8AA7. Do we have any algorithm for finding the nearest color? How would it be written in C# / Java?

    REFERENCE
    http://stackoverflow.com/questions/5440051/algorithm-for-parsing-hex-into-color-family
    http://stackoverflow.com/questions/6130621/algorithm-for-finding-the-color-between-two-others-in-the-colorspace-of-painte

    Suggested formula (by @user281377): choose the color where the sum of the squared channel differences is minimal: Square(Red(source) - Red(target)) + Square(Green(source) - Green(target)) + Square(Blue(source) - Blue(target)).
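
    A minimal Java sketch of that formula - parse each hex triplet into channels, then do a linear scan for the smallest squared distance (the tiny in-code map stands in for the tbl_Color table):

        import java.util.LinkedHashMap;
        import java.util.Map;

        class NearestColor {
            // "#5D8AA8" -> {93, 138, 168}
            static int[] rgb(String hex) {
                int v = Integer.parseInt(hex.substring(1), 16);
                return new int[] { (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF };
            }

            // Returns "hex - name" of the table entry with the smallest
            // sum of squared channel differences.
            static String nearest(String hex, Map<String, String> table) {
                int[] c = rgb(hex);
                String best = null;
                long bestDist = Long.MAX_VALUE;
                for (Map.Entry<String, String> e : table.entrySet()) {
                    int[] t = rgb(e.getKey());
                    long d = 0;
                    for (int i = 0; i < 3; i++) {
                        long diff = c[i] - t[i];
                        d += diff * diff;        // squared channel difference
                    }
                    if (d < bestDist) {
                        bestDist = d;
                        best = e.getKey() + " - " + e.getValue();
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                Map<String, String> table = new LinkedHashMap<>();
                table.put("#5D8AA8", "Air Force Blue");
                table.put("#FF0000", "Red");
                System.out.println(nearest("#5D8AA7", table)); // #5D8AA8 - Air Force Blue
            }
        }

    Plain RGB distance is not perceptually uniform, so for color-name matching some of the linked answers suggest converting to a space like CIELAB first; the scan itself stays the same.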

    Read the article
