Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 179/563

  • Dependency Checker/ Installer With Java/Ant

    - by jsn
    I need some kind of software to easily roll out code on new servers. I use Apache Ant for builds. However, say I want to set up a new server fast and my Java program depends on GhostScript: is there any software that can automatically check the computer for it (and maybe the PATH) and add it if it is not there? I have already looked at Maven and Apache Ivy; however, I think these are only for .jar files (from what I saw). Thanks for any help.
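
    If nothing off the shelf turns up, a crude check can be scripted in the Java program itself (or wrapped in an Ant target with <exec> and failonerror) before the real work starts. A minimal sketch, assuming the GhostScript executable is simply named gs (on Windows it would be gswin64c.exe or similar):

        import java.io.File;

        // Minimal sketch: look for an executable (e.g. GhostScript's "gs") on the PATH.
        class DependencyCheck {
            static boolean onPath(String executable) {
                String path = System.getenv("PATH");
                if (path == null) {
                    return false;
                }
                for (String dir : path.split(File.pathSeparator)) {
                    File candidate = new File(dir, executable);
                    if (candidate.isFile() && candidate.canExecute()) {
                        return true;
                    }
                }
                return false;
            }

            public static void main(String[] args) {
                if (!onPath("gs")) {
                    System.err.println("GhostScript not found on PATH - please install it.");
                    // Actually installing it automatically is platform specific (apt, yum,
                    // a Windows installer, ...), which is why Maven/Ivy stop at jar-level
                    // dependencies.
                }
            }
        }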

    Read the article

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 users simultaneously. From this web application I have developed two verticalizations used by online users. In the future I expect hundreds of users simultaneously. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but expansion to new offices in the future is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly on the repository.
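
    A very rough sketch of the first strategy (fan out the same query to every site and aggregate on the fly), written in Java/JDBC purely for illustration since the idea is the same on any stack; the class name and the query are made up:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        // One DataSource per office database; every site shares the same schema.
        class FederatedQuery {
            private final List<DataSource> sites;

            FederatedQuery(List<DataSource> sites) {
                this.sites = sites;
            }

            // Run the same query against every site and merge the rows.
            List<String> firstColumnFromAllSites(String sql) throws SQLException {
                List<String> result = new ArrayList<>();
                for (DataSource ds : sites) {
                    try (Connection c = ds.getConnection();
                         Statement st = c.createStatement();
                         ResultSet rs = st.executeQuery(sql)) {
                        while (rs.next()) {
                            result.add(rs.getString(1));
                        }
                    }
                }
                return result;
            }
        }

    A real implementation would run the per-site queries in parallel and decide what to do when one site is unreachable (skip it, serve stale data, or fail), which is where the fault-tolerance requirement shows up. The second strategy trades that runtime complexity for a replication/ETL job that keeps the central repository in sync.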

    Read the article

  • Moore's law and quadratic algorithm

    - by damon
    I was going through a video (from Coursera, by Sedgewick) in which he argues that you cannot sustain Moore's law using a quadratic algorithm. He elaborates like this: in year 197* you build a computer of power X and need to count N objects; this takes M days. According to Moore's law, you have a computer of power 2X after 1.5 years, but now you have 2N objects to count. If you use a quadratic algorithm, in year 197*+1.5 it takes (4M)/2 = 2M days: 4M because the algorithm is quadratic, and division by 2 because of the doubled computer power. I find this hard to understand. I tried to work through it as below. To count N objects using comp = X, it takes M days -> N/X = M. After 1.5 years, you need to count 2N objects using comp = 2X -> 2N/(2X) -> N/X -> M days. Where do I go wrong? Can someone please help me understand?
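
    One way to see where the two derivations diverge, in the question's own notation: with a quadratic algorithm the work grows with the square of the input, so the time to count N objects on a machine of power X goes like N^2/X, not N/X. Then M ~ N^2/X in year 197*, and 1.5 years later the time is (2N)^2/(2X) = 4N^2/(2X) = 2*(N^2/X) = 2M days, which is exactly Sedgewick's figure. The linear form N/X = M only holds for a linear-time algorithm.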

    Read the article

  • Is there any reason why a delete method/field/function refactoring doesn't exist?

    - by raisercostin
    An operation in an interface is obsolete, so I decided to delete it. It seems that there is no automatic support for such a "refactoring". For me it is a refactoring operation, since the behavior of the code is preserved: nobody (tests, client APIs) will notice that the operation was removed. In Eclipse, in Java code, on a method in an interface I have the following options: rename, move, change method signature, inline, extract interface, extract superclass, use supertype when possible, pull up, push down, introduce parameter object, introduce indirection, generalize declared type. Is there any reason why a delete method/field/function refactoring doesn't exist?

    Read the article

  • Segmentation fault 11 in Mac OS X - C++ [migrated]

    - by Marcos Cesar Vargas Magana
    Hi all. I have a "segmentation fault 11" error when I run the following code. The code actually compiles, but I get the error at run time.

        //** Terror.h **
        #include <iostream>
        #include <string>
        #include <map>
        using std::map;
        using std::pair;
        using std::string;

        template<typename Tsize>
        class Terror {
        public:
            // Inserts a message in the map.
            static Tsize insertMessage(const string& message) {
                mErrorMessages.insert(pair<Tsize, string>(mErrorMessages.size() + 1, message));
                return mErrorMessages.size();
            }
        private:
            static map<Tsize, string> mErrorMessages;
        };

        template<typename Tsize>
        map<Tsize, string> Terror<Tsize>::mErrorMessages;

        //** error.h **
        #include <iostream>
        #include "Terror.h"
        typedef unsigned short errorType;
        typedef Terror<errorType> error;
        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

        //** main.cpp **
        #include <iostream>
        #include "error.h"
        using namespace std;

        int main() {
            try {
                throw error(memoryAllocationError);
            }
            catch (error& err) {
            }
        }

    I have done some debugging of the code, and the error happens when the message is being inserted into the static map member. One observation: if I put the line

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

    inside the main() function instead of at global scope, then everything works fine. But I would like to extend the error messages at global scope, not at local scope. The map is defined static so that all instances of "error" share the same error codes and messages. Do you know how I can get this or something similar? Thank you very much.

    Read the article

  • Game Trees Conceptual Question

    - by Chris Corbin
    I am struggling to conceptually understand a question in a programming assignment for an algorithms class. The problem deals with a fictitious 2-player game named Easy. The rules of the game are simple; each player may choose one of 4 integers {0-3}, after which that integer is not available to the other player. The catch is, if a player picks {0} it means they quit. The objective is for Player 1 to get {1} and Player 2 to get {2}, in which case they may win; however, if both or neither succeed, then the game ends in a draw. I have been asked to draw the game tree for Easy, showing all nodes, which they explained as 4! = 24, labeling the edges, which represent moves (selecting a number), and the leaves with who won (1 means Player 1 won, -1 means Player 2 won, and 0 means a tie). I have drawn out a game tree, which I believe is correct, however I am not 100% certain, hence I am asking the question. My game tree only has 16 leaves. I am thinking that when a player picks {0}, and thus quits, the game tree stops there? I don't see how it is possible to get to 24 leaves. Any help would be greatly appreciated, and if you need more information I would be happy to provide it. Thanks
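
    A tiny brute-force enumeration is one way to sanity-check the leaf count against whichever termination rule is assumed. The sketch below (illustrative only, not the assignment's answer) assumes that picking 0 ends the game immediately; dropping that rule makes every game run through all four numbers, giving 4! = 24 leaves.

        import java.util.ArrayList;
        import java.util.List;

        // Count leaves of the "Easy" game tree: players alternately pick one of the
        // remaining integers from {0,1,2,3}; the game stops when 0 is picked (a quit)
        // or when no integers remain.
        class EasyTreeLeaves {
            static int countLeaves(List<Integer> remaining) {
                if (remaining.isEmpty()) {
                    return 1; // all four numbers taken: one leaf
                }
                int leaves = 0;
                for (int pick : remaining) {
                    if (pick == 0) {
                        leaves += 1; // quitting ends the game right here
                    } else {
                        List<Integer> rest = new ArrayList<>(remaining);
                        rest.remove(Integer.valueOf(pick));
                        leaves += countLeaves(rest);
                    }
                }
                return leaves;
            }

            public static void main(String[] args) {
                System.out.println(countLeaves(List.of(0, 1, 2, 3))); // 16 with the quit rule
            }
        }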

    Read the article

  • Software Life-cycle of Hacking

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think that they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like: find gap in security, exploit gap in security, procure payload, utilize payload. I propose the following question: what kind of formal definitions (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • How to depict an 'Import a file' action in a sequence diagram

    - by user970696
    Everyone says sequence diagrams are so easy but I just cannot figure this out. Basically user clicks on an 'Import from temp folder' button, the program opens a window with a list populated with filenames, user clicks on a filename, clicks on OK and the document is imported. I know the order of the actions but how to depict e.g. populating a list, or selecting an item from a list? So I assume the objects would be like: [USER] [ImportDialogWindow] [ListOfFiles:STRING] [?where to go with selected file]

    Read the article

  • Why should ViewModel route actions to Controller when using the MVCVM pattern?

    - by Lea Hayes
    When reading examples across the Internet (including the MSDN reference) I have found that code examples are all doing the following type of thing:

        public class FooViewModel : BaseViewModel {
            public FooViewModel(FooController controller) {
                Controller = controller;
            }
            protected FooController Controller { get; private set; }
            public void PerformSuperAction() {
                // This just routes action to controller...
                Controller.SuperAction();
            }
            ...
        }

    and then for the view:

        public class FooView : BaseView {
            ...
            private void OnSuperButtonClicked() {
                ViewModel.PerformSuperAction();
            }
        }

    Why do we not just do the following?

        public class FooView : BaseView {
            ...
            private void OnSuperButtonClicked() {
                ViewModel.Controller.SuperAction();
                // or, even just use a shortcut property:
                Controller.SuperAction();
            }
        }

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver management. In my multi-tier architecture I have:

        Base class: DAL.Device         // my entity
        Interfaces: BL.IDriver         // handles the data processing between application and device
                    BL.IDriverCreator  // creates an IDriver from a Device
                    BL.IDriverFactory  // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
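
    A minimal sketch of the convention-based lookup the question describes (written in Java purely for illustration, the question's context being .NET; all names are hypothetical): the factory derives a key from the device's type name and finds the creator whose class name carries the same prefix.

        import java.util.HashMap;
        import java.util.Map;

        interface Driver {}
        interface DriverCreator { Driver create(Object device); }

        class ConventionDriverFactory {
            private final Map<String, DriverCreator> creatorsByPrefix = new HashMap<>();

            // Register creators discovered by scanning the driver assemblies/jars;
            // the prefix is cut out of the class name, e.g. "RestDriverCreator" -> "Rest".
            void register(DriverCreator creator) {
                String name = creator.getClass().getSimpleName();
                String prefix = name.replace("DriverCreator", "");
                creatorsByPrefix.put(prefix, creator);
            }

            // Resolve by the device's class name, e.g. "RestDevice" -> "Rest".
            Driver createFor(Object device) {
                String prefix = device.getClass().getSimpleName().replace("Device", "");
                DriverCreator creator = creatorsByPrefix.get(prefix);
                if (creator == null) {
                    throw new IllegalStateException("No driver creator found for " + prefix);
                }
                return creator.create(device);
            }
        }

    The reflection/assembly-scanning part only has to feed register(); after that, adding a driver is a matter of dropping in a new xxDriverCreator/xxDevice pair with matching names, at the cost that a typo in a name only shows up at run time.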

    Read the article

  • What source code organization approach helps improve modularity and API/Implementation separation?

    - by Berin Loritsch
    Few languages are as restrictive as Java with file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there are still a lot of empty directories and artificial constraints. There are several languages that define everything about a class in one file, at least by convention: C#, Python (I think), Ruby, Erlang, etc. The commonality in most of these languages is that they are object oriented, although that statement can probably be rebutted (there is one non-OO language in the list already). Finally, there are quite a few languages, mostly in the C family, that have a separate header and implementation file. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C it seems that feature is used to promote modularity. Yet, with C++ the way header and implementation files are split seems rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header you would rather keep only in the implementation. There are quite a few languages that have a concept that overlaps with interfaces, like Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept, like C++ using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing -- for example Ruby. Ruby has modules, but those are more along the lines of mixing behaviors into a class than they are for defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation. So to hurry up and ask the question, from a personal experience point of view: Does separation of header and implementation help you write more modular code, or does it get in the way? (It helps to specify the language you are referring to.) Does the strict file name to class name scheme of Java help maintainability, or is it unnecessary structure for structure's sake? What would you propose to promote good API/implementation separation and project maintenance, and how would you prefer to do it?

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending so we couldn't wait for him to return. One of the other developers logged on as him to see and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one). Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works on their own machine to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards, I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control. So how do you handle situations like this? Is this one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area - use of specific directories, naming standards, notes on a wiki or whatever? And if so, what do your standards cover, how strict are they, how do you police them and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here - even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted, and I'd like a solution which covers all eventualities.]

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when not specified, or adding full path information to relative paths specified in the input. There are two different strategies to achieve this. The first strategy is to perform these transformations at input file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned with whether it's a default or user-specified information. The second strategy is to have the file reader be a real one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled (which may however be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, fall back to a default if necessary, or modify the input properly before using it. I would like to know your opinion on these two approaches, and in particular which one you have found most frequently implemented.
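
    A minimal sketch of the two strategies side by side (Java used only for illustration; the property names and defaults are made up):

        import java.nio.file.Path;
        import java.util.Properties;

        // Strategy 1: the parser applies defaults and transformations up front,
        // so the rest of the application always sees a complete, normalized config.
        class NormalizingReader {
            static Config read(Properties raw, Path baseDir) {
                int retries = Integer.parseInt(raw.getProperty("retries", "3"));     // default applied here
                Path output = baseDir.resolve(raw.getProperty("output", "out.dat")); // relative -> absolute here
                return new Config(retries, output);
            }
        }

        // Strategy 2: the reader stores the file verbatim (missing keys stay missing),
        // and the running code decides what to do about absent values.
        class VerbatimReader {
            static Properties read(Properties raw) {
                return raw; // one-to-one mapping, no intelligence
            }
        }

        class Config {
            final int retries;
            final Path output;
            Config(int retries, Path output) { this.retries = retries; this.output = output; }
        }

        // With strategy 2, every consumer carries the fallback logic:
        class Consumer {
            static int retries(Properties cfg) {
                String v = cfg.getProperty("retries");
                return (v == null) ? 3 : Integer.parseInt(v); // "was it specified?" is still visible here
            }
        }

    The visible difference is where the "was it specified?" knowledge lives: strategy 1 discards it at parse time, strategy 2 keeps it around and every consumer has to deal with it.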

    Read the article

  • Code maintenance: keeping a bad pattern when extending code for the sake of consistency, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if I feel it is wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision added after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that can be avoided with good design (the resulting code might then become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma and I'm seeking other devs' feedback. I don't want to refactor because I don't have time. And even with time it would be hard to justify that a whole perfectly working module needs refactoring. The refactoring cost would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: you, as a developer, do you prefer to maintain easy, stupid, boring code OR to have some helpers that will do the stupid boring code for you? (The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done.)

    Read the article

  • Hard-copy approaches to time tracking

    - by STW
    I have a problem: I suck at tracking time-on-task for specific features/defects/etc. while coding them. I tend to jump between tasks a fair bit (partly due to the inherent juggling required by professional software development, partly due to my personal tendency to focus on the code itself and not the business process around the code). My personal preference is for a hard-copy system. Even with gabillions of pixels of real estate on-screen I find it terribly distracting to keep a tracking window convenient; either I forget about it or it gets in my way. So, I'm looking for suggestions on time-tracking. My only requirement is a simple system to track start/stop times per task. I've considered going as far as buying a time-clock and giving each ticket a dedicated time-card. When I start working on it, punch in; when done working, punch out.

    Read the article

  • How do you manage the testing of your Android software on physical devices?

    - by Philip Regan
    I'm in charge of managing mobile application development at my company, and I am currently building a mobile device "library" for testing. Essentially, we want to have a representative device in-house for each of the OSes we are developing for, currently iOS (iPhone-only), BlackBerry, and Android. Simulators only go so far, so I'm placing into the process a step to test software on the devices themselves. The problem we're finding is with Android. I don't think any of us here ever really understood just how fragmented the whole platform is until we started looking at devices to acquire. We are going to wait until v2.3 of Android is released, but which products do we choose? Do we go by the most popular by market share? Do we get a small range of products by specs, from least to most powerful overall? We're trying to avoid having to manage a dozen different devices to test each app, if not because of cost then because of the repeated time sink. How do you manage the testing of your Android software on physical devices?

    Read the article

  • Is creating a full application in Silverlight advisable?

    - by Anthony
    Is creating a huge public site fully in Silverlight really advisable? For example, an e-commerce site. I don't want to start any debate, but I actually feel Silverlight shouldn't be used for a full website, because the biggest loss you incur is SEO. No search engine to date can parse the XAP file and index it based on its content. You can get around it with ifs and thens, like: if Silverlight is not supported, then serve an ASP.NET equivalent page for it. But that only doubles our effort of making the application, more than anything else. Why write the code twice in 2 applications meant for the same purpose? If that is the only option, why not create an ASP.NET application only? What are your views? Thanks in advance :)

    Read the article

  • Is there something better than a StringBuilder for big blocks of SQL in the code

    - by Eduardo Molteni
    I'm just tired of making a big SQL statement, testing it, and then pasting the SQL into the code and adding all the sqlstmt.append(" at the beginning and the ") at the end. It's 2011, isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs. Edit: found the answer using XML literals and CDATA. Thanks to all the people that actually tried to answer the question without questioning me for not using an ORM or SPs and for using VB. Edit 2: the question leaves me thinking that languages could make a better effort at supporting inline SQL with syntax coloring, etc. It would be cheaper than developing Linq2SQL. Just something like: dim sql = <sql> SELECT * ... </sql>

    Read the article

  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases, that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and at the end of his book mentions that at least two levels are needed most of the time. My question is: do you find use case levels necessary, helpful or unwanted? What are the reasons? Am I missing some important arguments?

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. For example, before appending a file name to a directory, if we trace everything on error, it will be easy to notice that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data, and must never appear in logs. In the same way, passwords, IP addresses and network information (MAC address, host name, etc.)¹, database accesses, direct input from the user and stored business data must never appear in the trace. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the only intent to debug the application itself, not to trace the activity of untrusted entities.

    Edit: thank you for your answers and your comments. Since my question is not too precise, I'll try to answer the questions asked in the comments.

    What am I doing with the logs? The logs of the application may be stored locally, which means either in plain text on the hard disk of the localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may send the crash report to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more difficult, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all, and to never have to care about how and where those logs are stored.

    Read the article

  • What do you call "X <= $foo <= Y" comparison?

    - by Jakob
    While writing a Perl statement like if ( $foo >= X && $foo <= Y ) yet again, I wondered why many programming languages do not support the more comfortable form if ( X <= $foo <= Y ) and what this is called. I came up with "3-legged comparison" but no results when searching for it. By the way there is also the "element-of-set" form if ( $foo in X..Y ) which I only consider more readable when provided via a short keyword. Is there a term for X <= $foo <= Y comparison? Which languages support it?

    Read the article

  • Ruby on Rails background API polling

    - by Matthew Turney
    I need to build a free/busy calendar integration with Zimbra. Unlike Outlook, it seems, Zimbra requires polling their API. I need to be able to grab the free/busy data in background tasks for tens of thousands of users at a regular time interval, preferably every few minutes. What would be the best way to implement this in a Rails application without bogging down our current Resque tasks? I have considered moving this process to something like Node.js or something similar in Ruby. The biggest problem is that we have no control over the IO, as each client's Zimbra instance could be slow and we don't want to create a huge backlog of tasks. Thoughts and ideas?

    Read the article

  • svn usage advice

    - by AngeloBad
    I need some advice about SVN usage. I use TortoiseSVN on my client to deal with a project I am working on. The problem is that I have two sets of bugs to fix on the project: one to deploy within 5 days, and one to deploy within 10 days. I am going to fix all the bugs before the fifth day, but I do not want to deploy the last 5 fixes before their release date (in 10 days). How can I work on two separate code lines and then merge all the modifications? Is it possible? Do I have to create a branch?
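
    One common shape for this, sketched only (the ^/ URLs assume a standard trunk/branches layout and the branch name is made up):

        # create a branch for the fixes that must wait for the 10-day release,
        # while trunk keeps carrying the 5-day fixes
        svn copy ^/trunk ^/branches/10-day-fixes -m "Branch for fixes due in 10 days"

        # work on the 5-day fixes in a trunk working copy, and on the other set
        # in a working copy switched to the branch
        svn switch ^/branches/10-day-fixes

        # keep the branch up to date with trunk while both are alive,
        # and merge the branch back to trunk when its release date arrives
        svn merge ^/trunk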

    Read the article

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much relevant in the answers.) Clarification: by "not-for-profit", I am including downstream profit motives, such as parent-company brand recognition and investor profit expectations. In other words, the question relates only to software for which there is NO profit motive tied to the software whatsoever.

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel so. As Scheme is the first functional programming language I have ever known something about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and according to Turing completeness, this must be possible too. I learned C, Java and Python and use assignments a lot in every program I wrote, so it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on these languages. The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance via set!. To me it looks a lot like a class method changing the class member balance. As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
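
    A common pattern in purely functional code is to return a new value instead of mutating the old one, and let the caller carry the latest state forward. A rough sketch of the same account idea with immutable values (in Java here, purely for illustration; the names are made up):

        // Immutable account: "withdrawing" produces a new account value
        // instead of updating the old one, so existing state is never reassigned.
        record Account(int balance) {
            Account withdraw(int amount) {
                if (balance >= amount) {
                    return new Account(balance - amount); // new value; the old one is untouched
                }
                throw new IllegalArgumentException("Insufficient funds");
            }
        }

        class Demo {
            public static void main(String[] args) {
                Account a0 = new Account(100);
                Account a1 = a0.withdraw(30); // the caller threads the state along
                Account a2 = a1.withdraw(50);
                System.out.println(a0.balance() + " " + a1.balance() + " " + a2.balance()); // 100 70 20
            }
        }

    Pure languages package the "thread the state along" part up so it does not have to be written by hand (for example, Haskell's State monad), but the underlying idea is the same: each operation maps an old state to a new one rather than overwriting anything.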

    Read the article
