Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 211 of 563

  • What is a good GUI text editor with code folding on Linux

    - by quanticle
    When I'm on Linux, I usually program using either gvim or emacs (depending on the language I'm working in, and the configuration of the machine). However, one thing I miss from the Windows world is code folding. Editors like Notepad++ and IDEs like Visual Studio allow you to shrink, or fold, blocks of code into single-line headings. Are there any Linux editors with this facility? I know Eclipse can do code folding, but I don't want to launch Eclipse just to edit an HTML file.

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project in MVC 3, and I'm facing a small problem when attempting to handle validation in my application through a wizard. I tried to benefit from the ability of MVC 3 to deliver the content of a different view using the same URL when handling an [HttpPost] method on a page. In my case, my main model class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. I want the user to be able to enter his personal information in the first step, respond to some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't reach the last step because of the ModelState.IsValid check that I use to handle validation, which can't pass if I define some properties as [Required] but don't put them on the first view. As the replies to those questions come down to a couple of choices, I thought I might use some nullable bool? properties in order to avoid validation issues, but I know that's not the proper way. Is there someone who would like to help me find a way to extend my validation to those three steps? Thanks in advance and sorry for my English, I'm not a native speaker.

    Read the article

  • Importance of a 1st Class Degree

    - by Nipuna Silva
    I'm currently in the 3rd year of a degree in Software Engineering. I'm thinking of moving into a research field in the future (programming language design, AI, etc.). My questions are: What is the advantage/importance of carrying a 1st Class Degree (Honors, for Americans) into the industry, rather than just a simple pass? Is it really important to have a 1st Class? And is it the practical knowledge I should give priority to, or the theoretical knowledge, or both?

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was how Eclipse comes shipped with its own Java compiler (ecj) for doing incremental builds. Eclipse seems by default to output incrementally built class files to the projRoot/bin folder. I've noticed too that many projects come with Ant files that build the project using the Java compiler installed on the system for doing the production builds. Coming from a Windows/Visual Studio world where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its IntelliSense-style completion, incremental building, etc.)? Is it typical that for the final "release" build of a project, ant, maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway because I know I can get some good answers about this. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix stuff, and the knowledge for these actions is passed on more by 'oral transmission' than by modern tools. We need an efficient way to store them in a knowledge base so we can retrieve similar cases based on patterns (a stacktrace, an error message, a component name, a workflow step, ...), ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know of an efficient knowledge base tool for this kind of use case?

    Read the article

  • Command line options style - POSIX or what?

    - by maaartinus
    Somewhere I saw a rant against java/javac allegedly using a mix of Windows and Unix style, like java -classpath ... -ea ... Something IMHO, it is no mix; it's just how find works as well, isn't it? AFAIK, according to POSIX, the syntax should be like java --classpath ... --ea ... Something and -abcdef would mean specifying 6 short options at once. I wonder which version leads in general to less typing and fewer errors. I'm writing a small utility in Java, and in no case am I going to use the Windows style /a /b, since I'm interested primarily in Unix. What style should I choose?
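    For illustration, a minimal sketch of how the two conventions are commonly parsed (the behaviour and option names here are hypothetical, not how java itself parses its flags): bundled single-dash short options versus double-dash long options. Under this kind of parsing, a single-dash word such as -classpath would be read as the bundled flags c, l, a, s, ..., which is why the single-dash long options look like a mixed style.

        import java.util.ArrayList;
        import java.util.List;

        public class OptionStyles {
            public static void main(String[] args) {
                List<String> operands = new ArrayList<>();
                for (String a : args) {
                    if (a.startsWith("--")) {
                        // one long option per word, e.g. "--classpath"
                        System.out.println("long option: " + a.substring(2));
                    } else if (a.startsWith("-") && a.length() > 1) {
                        // bundled short options: every character is its own flag, so "-ea" = e, a
                        for (char c : a.substring(1).toCharArray()) {
                            System.out.println("short option: " + c);
                        }
                    } else {
                        operands.add(a);
                    }
                }
                System.out.println("operands: " + operands);
            }
        }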

    Read the article

  • Nested languages code smell

    - by l0b0
    Many projects combine languages, for example on the web with the ubiquitous SQL + server-side language + markup du jour + JavaScript + CSS mix (often in a single function). Bash and other shell code is mixed with Perl and Python on the server side, evaled and sometimes even passed through sed before execution. Many languages support runtime execution of arbitrary code strings, and in some it seems to be fairly common practice. In addition to advice about security and separation of concerns, what other issues are there with this type of programming, what can be done to minimize it, and is it ever defensible (except in the "PHB on the shoulder" situation)?

    Read the article

  • What tools, libraries, or frameworks are needed to create a completely offline JavaScript application?

    - by makerofthings7
    I am interested in creating an HTML application that can run as disconnected from the server as possible. Two examples of this include OWA in Exchange 2013 and, to a lesser extent, the client available at www.ripple.com. With the focus on OWA in Exchange 2013, what is needed to replicate that offline functionality in a different application? A list of technologies, frameworks, etc. would be immensely helpful.

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed to have complete history traceability, and CDC would nicely solve this requirement with minimum effort on our part. We are following an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for how you deal with captured data when migrating the database schema?

    Read the article

  • Architecture- Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options. Application Overview: This is a lead aggregation / distribution platform. We will be focusing on the affiliate portion of this website. Essentially affiliates sign up, enter in marketing campaigns and sell us their conversions. Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking. Very straightforward. Theoretical Issues: I thought I would ask affiliates to place the pixel where I could log impressions and set a third-party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I wouldn't be able to access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server-side form processor would be ideal. I've considered writing JavaScript to set a first-party cookie, but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information, and the request would come from the visitor's machine so I could access my third-party cookie.

    Read the article

  • What would be the best approach to make revisions of user content?

    - by Kevin Simper
    I have searched and could not find any information about it. What is the best approach to storing revisions? I have a website where the user can write a document which can be fairly long (200-300 lines). How do you determine when to make a revision? It is not a scalable solution to make a new one on every save, because that would be useless to the user when they want to look back, and it would require quite a lot of space. You could use time and say that for every 15 minutes they are working on it there would be a revision, but that would sometimes capture nothing and sometimes a document that has completely changed. I could make a diff from the previous revision, compare it by line, and look at what percentage of the lines have been changed. How are others doing revisions?
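    As a rough sketch of the percent-changed heuristic mentioned above (the 20% threshold and the naive position-by-position comparison are assumptions; a real diff algorithm would align inserted and deleted lines properly):

        import java.util.Arrays;
        import java.util.List;
        import java.util.Objects;

        public class RevisionPolicy {
            private static final double THRESHOLD = 0.20; // arbitrary cut-off for illustration

            // Snapshot a new revision only when enough lines differ from the last saved one.
            static boolean shouldSnapshot(String lastRevision, String current) {
                List<String> oldLines = Arrays.asList(lastRevision.split("\n", -1));
                List<String> newLines = Arrays.asList(current.split("\n", -1));
                int max = Math.max(oldLines.size(), newLines.size());
                int changed = 0;
                for (int i = 0; i < max; i++) {
                    String o = i < oldLines.size() ? oldLines.get(i) : null;
                    String n = i < newLines.size() ? newLines.get(i) : null;
                    if (!Objects.equals(o, n)) changed++;   // line-by-line, same position
                }
                return max > 0 && (double) changed / max >= THRESHOLD;
            }
        }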

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, had it come to a point where the requirement could not be finished prior to release, it would be a viable option to trash the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and the responsibility of this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Best design for a memory resident tool

    - by Andrew S.
    I apologize if this tends more toward design than programming, but here goes. What design would you recommend for a database that is memory resident, must run on Windows, Linux and (at a stretch) the Mac, accepts multiple queries simultaneously, and has minimum overhead, since a search is expected to take <0.25s? This program implements a domain-specific search. Think of it as a database, but one that takes advantage of domain-specific information to outperform a conventional database search (for example, with custom Oracle indexing). We have a custom data structure for our data. Our prototype is a simple exe that constructs the database in memory each time it is run. We were thinking that perhaps this program would suffice, but augmented with sockets so it can listen for queries. This database will be static. Its contents will change infrequently. We expect queries, and the solution, to be delivered via a web service.
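    A minimal sketch of the "prototype exe plus sockets" idea described above, assuming a one-query-per-line TCP protocol; the port number and the lookup() call are hypothetical placeholders for the in-memory domain-specific search:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class InMemorySearchServer {
            public static void main(String[] args) throws IOException {
                ExecutorService pool = Executors.newFixedThreadPool(8); // serve queries concurrently
                try (ServerSocket server = new ServerSocket(4567)) {
                    while (true) {
                        Socket client = server.accept();
                        pool.submit(() -> handle(client));
                    }
                }
            }

            static void handle(Socket client) {
                try (Socket c = client;
                     BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                     PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                    String query = in.readLine();    // one query per connection, one line
                    out.println(lookup(query));      // answer from the in-memory structure
                } catch (IOException e) {
                    // log and drop the connection
                }
            }

            static String lookup(String query) {
                return "results for: " + query;      // placeholder for the domain-specific search
            }
        }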

    Read the article

  • Is it a bad idea to list every function/method argument on a new line and why?

    - by dgnball
    I work with someone who, every time they call a function, puts the arguments on a new line, e.g.

        aFunction(
            byte1,
            short1,
            int1,
            int2,
            int3,
            int4,
            int5
        ) ;

    I find this very annoying as it means the code isn't very compact, so I have to scan up and down more to actually make any sense of the logic. I'm interested to know whether this is actually bad practice and if so, how can I persuade them not to do it?

    Read the article

  • MonoGame; reliable enough to be accepted on iOS, Win 8 and Android stores?

    - by Serguei Fedorov
    I love XNA; it simplifies rendering code to where I don't have to deal with it, it runs on C#, and it has a fairly large community and documentation. I would love to be able to use it for games across many platforms. However, I am a little bit concerned about how well it will be received by platform owners; Apple has very tight rules about code bases, but Android does not. Microsoft's new Windows 8 platform seems to be pretty lenient, but I am not sure how they would respond to an XNA project being pushed to the app store (given they suddenly decided to dump it and force developers to use C++/Direct3D). So the bottom line is: is it safe to invest time and energy into a project that runs on MonoGame? In the end, is it possible to see my game on multiple platforms and not be shot down with a useless product?

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: Functional programming languages, e.g. Haskell, Scala, etc. The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multiple core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multiple core processor due to higher parallelism?
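    As a small illustration of the plain multi-threading baseline the question compares against (the workload here, summing squares, is a made-up placeholder), Java's java.util.concurrent can spread independent chunks of work across the available cores without any actor framework:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.stream.LongStream;

        public class ParallelSum {
            public static void main(String[] args) throws Exception {
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                long n = 10_000_000L;
                List<Future<Long>> parts = new ArrayList<>();
                for (int i = 0; i < cores; i++) {
                    long lo = i * n / cores, hi = (i + 1) * n / cores;
                    // each chunk runs on its own thread, so the work spreads over the cores
                    parts.add(pool.submit(() -> LongStream.range(lo, hi).map(x -> x * x).sum()));
                }
                long total = 0;
                for (Future<Long> f : parts) total += f.get();
                pool.shutdown();
                System.out.println(total);
            }
        }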

    Read the article

  • What happens inside a Control's Invoke function?

    - by user65909
    A question about form controls' Invoke function. Control1 is created on thread1. If you want to update something in Control1 from thread2, you must do something like:

        delegate void SetTextCallback(string txt);

        void setText(string txt)
        {
            if (this.textBox1.InvokeRequired)
            {
                SetTextCallback d = new SetTextCallback(setText);
                this.Invoke(d, new object[] { txt });
            }
            else
            {
                // this will run on thread1 even when called from thread2
                this.textBox1.AppendText(txt);
            }
        }

    What happens behind the scenes here? This Invoke behaves differently from a normal object invoke. When you want to call a function in an object on a specific thread, that thread must be waiting on some queue of delegates and executing the incoming delegates. Is it correct that the Windows Forms Control.Invoke function is completely different from the standard object invoke function?

    Read the article

  • How to sync audio files with Logitech Media Server on Mac OS?

    - by Abhishek
    I want to customize the Logitech Media Server (web interface on localhost) so that N DIFFERENT audio files will start to play at the same time on N wifi receivers, each file on a different receiver. Currently, the server will only sync 1 track to N receivers. Is this possible, given that Logitech Media Server is open source? How can I do this? Can you point me to sample code?

    Read the article

  • Where to find a mentor?

    - by Fanatic23
    I have been looking around for resources that discuss mentors, including questions on the Stack Exchange network. I see that the general response has been to join open source projects and/or check out local user groups, if any exist. Are we sure that joining an open source project can get me a mentor? It can grab eyeballs, yes, if the project is exciting enough, but a mentor? I am not so sure. What do you guys think? Finally, I'd want my mentor to be in a specific domain as opposed to just being a coding wizard, and that makes the search even more difficult. Suggestions?

    Read the article

  • How to hire support people?

    - by Martin
    I manage a tech support team at a mid-sized software company. We are the last line of support, so issues that we can't fix need to be escalated to the development team. When I joined the company, our team wasn't capable of much beyond using a specific set of troubleshooting steps to solve known issues and escalating anything else to the developers. It's always been a goal of mine for our team to shoulder as much of the support burden as possible without ever bothering a developer. Over the past few years, I, along with several new hires I've made, have made pretty good progress in that direction. We've coded our own troubleshooting tools, which now ship with several of our products. When users have never-before-seen issues, we analyze stack traces and troubleshoot down to the code level, and if we need to submit a bug, half the time we've already identified where in the code the bug is and offered a patch to fix it. Here's the problem I've always had: finding support people capable of the work I've described above is really difficult. I've hired 3 people in the past 3 years, and I've probably looked at several thousand resumes and conducted several hundred phone screens to do so. I know it's pretty well accepted that hiring good people is tough in the tech industry, but it seems that support is especially difficult -- there are clearly thousands of people walking around calling themselves support analysts, but 99%+ of them seemingly aren't capable of anything beyond reading a script. I'm curious if anyone has experience recruiting the sort of folks I'm talking about, and if you have any suggestions to share. We've tried all sorts of things -- different job titles/descriptions, using headhunters, etc. And while we've managed to hire a few good folks, it's basically taken us a year to find an appropriate candidate for each opening we've had, and I can't help but wonder if there's something we could be doing differently.

    Read the article

  • Is it worth it to switch from home-grown remote command interface to using JMX

    - by Sam Goldberg
    Without knowing too much about JMX, I've always assumed that it would be the best approach for building remote management into our standalone Java server application. Our server application has some minimal remote control capability, using text commands sent to it via a TCP/IP socket. Using the home-grown approach, it is fairly easy to add a new command. (Just create the new command text, and the code to handle it in the message receiver.) On the other hand, we have hardly implemented any commands, even though there are many things we would like to be able to execute remotely. I am trying to weigh the value of moving to incorporating JMX (learning it, and building the interfaces), versus just sticking with the home-grown approach. Does anyone have any experience or advice regarding changing an existing application to use JMX?
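    For context, a minimal sketch of what the JMX route involves (the Command bean, its reload() operation, and the ObjectName are made-up examples): define an MBean interface, implement it, and register it with the platform MBeanServer, after which the attribute and operation can be invoked remotely from jconsole or any other JMX client.

        // CommandMBean.java -- the *MBean suffix is required by the JMX naming convention
        public interface CommandMBean {
            int getActiveSessions();   // exposed as a read-only attribute
            void reload();             // exposed as a remotely invocable operation
        }

        // Command.java -- the implementation that will be registered
        public class Command implements CommandMBean {
            public int getActiveSessions() { return 42; }                    // placeholder value
            public void reload() { System.out.println("reloading config"); }
        }

        // JmxSetup.java -- register the bean with the platform MBeanServer
        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        public class JmxSetup {
            public static void main(String[] args) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                server.registerMBean(new Command(),
                        new ObjectName("com.example:type=Command"));         // hypothetical name
                Thread.currentThread().join(); // keep the JVM alive so clients can connect
            }
        }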

    Read the article

  • Why is there a "new" in Go?

    - by dystroy
    I'm still puzzled as to why we have new in Go. When you want to instantiate a struct, you do t := Thing{} and, obviously, you can get a pointer to a new instance by doing t := &Thing{} But there's also this possibility: t := new(Thing) This last one seems a little alien to me. &Thing{} is as clear and concise as new(Thing), and it uses only constructs you often use elsewhere. It's also more extensible, as you might change it to &Thing{3} or &Thing{Feets:7}. In my opinion, having a supplementary keyword is costly; it makes the language more complex and adds to what you must know. And it might mask from newcomers what's behind instantiating a struct. It also makes one more reserved word. So what's the reasoning behind new? Is it something useful? Should we use it?

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties.

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.

    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky.

    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.

    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
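    As a rough sketch of the second (dirty-flag) approach listed above, assuming hypothetical node types and leaving the actual reconstruction step as a placeholder:

        import java.util.ArrayList;
        import java.util.List;

        // Derived-tree nodes carry a dirty flag; marking a node dirty also marks its
        // ancestors, so a later rebuild pass can skip clean subtrees entirely.
        class DerivedNode {
            DerivedNode parent;
            boolean dirty;

            void markDirty() {
                for (DerivedNode n = this; n != null && !n.dirty; n = n.parent) {
                    n.dirty = true;        // stop early once an ancestor is already dirty
                }
            }

            void rebuildIfDirty() {
                if (!dirty) return;        // clean: no descendant is dirty either, so skip
                // ... reconstruct this node from the source tree, then recurse into children ...
                dirty = false;
            }
        }

        // Source-tree nodes know which derived nodes depend specifically on them.
        class SourceNode {
            final List<DerivedNode> dependents = new ArrayList<>();

            void onChanged() {
                for (DerivedNode d : dependents) d.markDirty();
            }
        }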

    Read the article

  • Should tests be in the same Ruby file or in separate Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried about the performance. So is it better to add all test methods in the same Ruby file, or should I put each one in a separate file? Below is a sample with all tests in the same file:

        # encoding: utf-8
        require "selenium-webdriver"
        require "test/unit"

        class Tests < Test::Unit::TestCase
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
            @verification_errors = []
            @wait = Selenium::WebDriver::Wait.new :timeout => 10
          end

          def teardown
            @driver.quit
            assert_equal [], @verification_errors
          end

          def element_present?(how, what)
            @driver.find_element(how, what)
            true
          rescue Selenium::WebDriver::Error::NoSuchElementError
            false
          end

          def verify(&blk)
            yield
          rescue Test::Unit::AssertionFailedError => ex
            @verification_errors << ex
          end

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_2
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_3
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_4
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_5
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

    Read the article

  • Why does the USA produce the best / most popular software? [closed]

    - by user1598390
    Have you noticed that a disproportionate amount of popular software products comes from the USA? Examples: iOS, OS X, Photoshop, Oracle, Windows, Final Cut Pro, MS Office, iTunes, the iWork suite, the iLife suite, AutoCAD, Aperture, the Google search engine, Twitter, and an endless stream of software products that are the best in their fields and that are the models the rest of the industry wants to emulate. Few people would deny that the most popular software comes from American companies. Obviously there's plenty of good software coming from outside the US, like Linux or SAP, but most great-looking, killer software comes from the USA. Maybe these companies outsource the code elsewhere, but the inception and design is mostly done in the USA. Why is that? And can it be replicated elsewhere, given the correct "ingredients"?

    Read the article
