Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Is there a process-oriented IDE?

    - by Raveline
    My problem is simple: when I'm programming in an OO paradigm, part of a main business process is often divided across many classes. This means that if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering: is there an IDE that lets you put a "process tag" on functions coming from different objects, and gives you a view of all the functions carrying the same tag?

    Edit: To give an example (that I'm making up completely, sorry if it doesn't sound very realistic), let's say we have the following business process for an HR application: receive a holiday request from an employee; check the validity of the request; then send an alert to his boss ("one of those lazy programmers wants another day off"). At the same time, the boss will want a table of every employee's timetable during the period the employee wants off. Then handle the boss's answer and send a nice little mail to the employee ("No way, lazy bones"). Even if we get rid of everything not purely business-related (mail sending, DB handling to get the useful info, GUI functionality, and so on), we still have something that doesn't really fit in one class. I'd like an IDE that would let me take in quickly, at the very least:

    - the function handling the validation of the request by the employee;
    - the function preparing the "timetable" for the boss;
    - the function handling the validation of the request by the boss.

    I wouldn't put all those functions in the same class (but perhaps that's my mistake?). This is where my dream IDE could be helpful.
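    One way to approximate such a "process tag" today is a custom source-level annotation, which an IDE can already search with "find usages". A minimal Java sketch (the annotation and class names are made up):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;

        // Hypothetical tag: mark each method with the business process it serves,
        // then "find usages" of the annotation lists the whole chain.
        @Retention(RetentionPolicy.SOURCE)
        @Target(ElementType.METHOD)
        @interface BusinessProcess {
            String value(); // e.g. "holiday-request"
        }

        class RequestValidation {
            @BusinessProcess("holiday-request")
            boolean validateEmployeeRequest() { return true; }
        }

        class BossTimetable {
            @BusinessProcess("holiday-request")
            void prepareTimetable() { }
        }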

    Read the article

  • Is Query Performance different for different versions of SQL Server?

    - by Ronak Mathia
    I fire 3 update queries in my stored procedure, against 3 different tables. Each table contains almost 200,000 records, and all records have to be updated. I am using indexing to speed up performance. It works quite well with SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (updating almost 1,000 rows per second in all three tables). But when I run the same scenario on SQL Server 2008 R2, the stored procedure takes far longer to execute: about 55 to 60 minutes (updating almost 100 rows per second in all three tables). I couldn't find any reason or solution for that. I have also tested the same scenario with SQL Server 2012, but the result is the same as above. Please give suggestions.

    Read the article

  • Online medical image processing grand challenges

    - by taltos
    Hello! I moved my question from Stack Overflow here, and I cherish the hope that I will be luckier. I'm currently working on my thesis, and I'm looking for one or more online medical image processing grand challenges. I already know this site, but I need a challenge with a microscopic image dataset (cells, chromosomes, bacteria, viruses, etc.) and a classification or recognition objective, like karyotyping. Maybe someone works in this field, or their university ran a challenge like the one I'm looking for, and can help me. Thank you!

    Read the article

  • Zooming options terminology

    - by Mark
    I've come up with 4 different ways to fit an image inside a viewing region, but I'm having trouble coming up with names for them. Perhaps someone can suggest some?

    1. Fit image in viewing region, but do not enlarge it if it is smaller.
    2. Size image so it fits snugly inside the viewing region (enlarging if necessary) -- the image is as large as possible while still fitting within the viewing region.
    3. Size image so that it fills the entire viewing region -- the image will be the same size as or bigger than the viewing region.
    4. 1:1 ratio; 1 pixel in the image corresponds to 1 pixel on screen.

    All zooming options maintain aspect ratio. Stretching is just ugly, so it's not an option :)
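    As it happens, CSS settled on names for all four (object-fit: scale-down, contain, cover, and none, respectively). A minimal Java sketch of the scale factor each mode implies:

        final class ZoomModes {
            // Scale factor for each mode; image and viewport sizes in pixels.
            static double scaleFor(int mode, double imgW, double imgH,
                                   double viewW, double viewH) {
                double fit  = Math.min(viewW / imgW, viewH / imgH); // "contain"
                double fill = Math.max(viewW / imgW, viewH / imgH); // "cover"
                switch (mode) {
                    case 1:  return Math.min(fit, 1.0); // fit, but never enlarge
                    case 2:  return fit;                // snug fit
                    case 3:  return fill;               // fill entire region
                    default: return 1.0;                // 1:1, actual size
                }
            }
        }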

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and I have found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming-error exceptions; it should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions:

    1. Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough?
    2. Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this?
    3. A similar case arises with reference-type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
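    A minimal Java sketch of the split described above (all names are made up): precondition violations surface as unchecked exceptions and are documented only as part of the contract, while operational failures get their own documented, checked type:

        import java.io.IOException;

        // The dedicated "failure" exception clients are meant to catch.
        class ReportStoreException extends Exception {
            ReportStoreException(String message, Throwable cause) {
                super(message, cause);
            }
        }

        class Report { }

        class ReportStore {
            /** Persists the report. Precondition: report is non-null. */
            void save(Report report) throws ReportStoreException {
                if (report == null)  // programming error: unchecked, not declared
                    throw new IllegalArgumentException("report must not be null");
                try {
                    writeToDisk(report);
                } catch (IOException e) {  // not the caller's fault: wrap and declare
                    throw new ReportStoreException("could not persist report", e);
                }
            }

            private void writeToDisk(Report report) throws IOException { /* ... */ }
        }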

    Read the article

  • What is bootstrap listener in the context of Spring framework?

    - by jillionbug2fix
    I am studying the Spring framework; in web.xml I added the following, which is a bootstrap listener. Can anyone give me a proper idea of what a bootstrap listener is?

        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    You can see the doc here: ContextLoaderListener

    "Bootstrap listener to start up and shut down Spring's root WebApplicationContext. Simply delegates to ContextLoader as well as to ContextCleanupListener. This listener should be registered after Log4jConfigListener in web.xml, if the latter is used. As of Spring 3.1, ContextLoaderListener supports injecting the root web application context via the ContextLoaderListener(WebApplicationContext) constructor, allowing for programmatic configuration in Servlet 3.0+ environments. See WebApplicationInitializer for usage examples..."
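    To make that last sentence concrete, a minimal sketch of the Servlet 3.0+ programmatic registration the javadoc mentions (AppConfig stands in for whatever root configuration the application defines):

        import javax.servlet.ServletContext;
        import javax.servlet.ServletException;

        import org.springframework.context.annotation.Configuration;
        import org.springframework.web.WebApplicationInitializer;
        import org.springframework.web.context.ContextLoaderListener;
        import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

        @Configuration
        class AppConfig { /* root beans would be declared here */ }

        public class MyWebAppInitializer implements WebApplicationInitializer {
            @Override
            public void onStartup(ServletContext container) throws ServletException {
                AnnotationConfigWebApplicationContext rootContext =
                        new AnnotationConfigWebApplicationContext();
                rootContext.register(AppConfig.class);

                // Same effect as the <listener> entry in web.xml, minus the XML:
                container.addListener(new ContextLoaderListener(rootContext));
            }
        }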

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching a CEO of a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server?

    As far as I understand it, cloud computing is more of a network-services model, in which I do not own or maintain the physical hardware; the "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form that a user fills out, with a button on the page that returns some report generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser the "client"?

    Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the closest one in the Stack universe, and this is my first time here. I'm an old-timer, programming since mainframe days in the late '70s.

    Read the article

  • How can I approach creating an efficient algorithm for maximizing value with these specific constraints?

    - by sway
    I'm having trouble coming up with an approach that isn't n^2 for this problem. Here's a contrived, simplified version I've come up with: let's say you're a company that needs 4 employees to launch in a new city -- a manager, two salespeople, and a customer support rep -- and you magically know how much impact every candidate will have and how much salary they require to take the job. Your table of potential employees looks something like this:

        Name            Position      Salary   Impact
        Adam Smith      Manager       60,000       11
        Allison Brown   Salesperson   40,000        9
        Brad Stewart    Manager       55,000        9
        ...etc (thousands of records)

    What algorithmic approach can be taken to find the maximum "impact" while still filling all the positions and remaining under, say, a 200,000 budget? Thanks!
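    One standard framing that avoids brute force is a grouped (multiple-choice) knapsack: build a small 0/1-knapsack table per position, then fold the tables together over the shared budget, for a cost of roughly candidates x need x budget per group. A Java sketch, assuming salaries are bucketed to $1,000 steps so the budget axis stays small (the data in main is made up):

        import java.util.Arrays;

        class HiringPlanner {
            static final int NONE = Integer.MIN_VALUE / 2; // marks "not achievable"

            // Max impact hiring exactly `need` people from one position's pool,
            // indexed by (bucketed) salary budget: a 0/1 knapsack per position.
            static int[] groupBest(int need, int[] salary, int[] impact, int budget) {
                int[][] best = new int[need + 1][budget + 1];
                for (int[] row : best) Arrays.fill(row, NONE);
                Arrays.fill(best[0], 0);                 // zero hires cost nothing
                for (int i = 0; i < salary.length; i++)  // each candidate used once
                    for (int c = need; c >= 1; c--)
                        for (int b = budget; b >= salary[i]; b--)
                            if (best[c - 1][b - salary[i]] != NONE)
                                best[c][b] = Math.max(best[c][b],
                                        best[c - 1][b - salary[i]] + impact[i]);
                return best[need];
            }

            // Fold one position's table into the running answer over the budget.
            static int[] combine(int[] solved, int[] group, int budget) {
                int[] next = new int[budget + 1];
                Arrays.fill(next, NONE);
                for (int b = 0; b <= budget; b++)
                    if (solved[b] != NONE)
                        for (int s = 0; b + s <= budget; s++)
                            if (group[s] != NONE && solved[b] + group[s] > next[b + s])
                                next[b + s] = solved[b] + group[s];
                return next;
            }

            public static void main(String[] args) {
                int budget = 200;                    // $200,000 in $1,000 units
                int[] solved = new int[budget + 1];  // nothing chosen yet: impact 0
                // One groupBest/combine call per position (numbers are made up):
                solved = combine(solved, groupBest(1, new int[]{60, 55}, new int[]{11, 9}, budget), budget);
                solved = combine(solved, groupBest(2, new int[]{40, 45, 38}, new int[]{9, 8, 7}, budget), budget);
                System.out.println(solved[budget] == NONE ? "infeasible"
                        : "max impact: " + solved[budget]);
            }
        }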

    Read the article

  • Recommended reading for (Object Oriented) application design architecture?

    - by e4rthdog
    In life, it doesn't matter if you have done one thing for 15 years; you will still wake up one day asking questions that amount to "how do I walk?" :) My specific question is that, as a new entrant to C# and OOP, I am stepping into many little "details" that need to be addressed. Having written a lot of code in VB.NET / COBOL / simple PHP etc. surely does not help much in the OOP world... So, even after reading entry-level books on C# and watching some videos, I only recently found out about the "factory" design pattern for applications. I would appreciate it if any of you guys could recommend some reading on application design architecture.

    Read the article

  • Why do we need to put private members in headers?

    - by Simon
    Private variables are a way to hide complexity and implementation details from the user of a class. This is a rather nice feature. But I do not understand why, in C++, we need to put them in the header of a class. I see some annoying downsides to this:

    - it clutters the header for the user;
    - it forces recompilation of all client libraries whenever the internals are modified.

    Is there a conceptual reason behind this requirement? Is it only to ease the work of the compiler?

    Read the article

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to a bitmap at some point. I was wondering whether it's appropriate to use a repository and do the conversion there. My consumer, a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // Using Unity (http://unity.codeplex.com) to inject the
            // dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items
                                           select item;
                List<RealItem> lst = new List<RealItem>();
                foreach (var itm in items)
                {
                    MemoryStream stream = new MemoryStream(itm.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = itm.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern, and I've seen a lot of examples online that use a repository, but when I looked at their source code I saw, for example:

        public IQueryable<object> Select()
        {
            return from q in base.Context
                   select q;
        }

    As you can see, no behavior is added to the system by their approach, so I was confused: maybe a repository is something else and I got it all wrong? In the end there should be extra benefits to using them, right?

    Read the article

  • Commercial product using a GPL OS

    - by pfried
    We are planning to create a commercial product. The product consists of some MCUs and a small computer (we are developing on a Raspberry Pi at the moment). The computer needs an operating system, as we would like to keep things like WLAN and booting as simple as possible, and we create some software that runs on this computer (a node.js application). Most operating systems, like Arch Linux, are licenced under the GPL. The product we would sell contains the computer with a preinstalled OS and our software. This system operates as a central access point to the MCU devices and is able to control them. We use others' software in our product; we do not modify their source code. So the product (the computer part) consists of a computer, an OS, and software we create.

    - How does the use of such an OS affect our own code (its licence)? Is there a possibility of avoiding the GPL for our own code, e.g. by shipping the software separately?
    - Are there any effects on other components of our product, e.g. the MCU part?
    - The node.js application delivers a WebApp to the client, where it is executed. Are there any effects there (as we would like to sell parts of the code as an additional app on the app stores)?

    I know we make use of the work of the community, and I respect this. The problem is: the software alone is kind of useless without the MCU devices. I am not expecting legal advice.

    Read the article

  • Daylight saving time: Annoying and pointless [closed]

    - by polemon
    Daylight saving time is a big annoyance for me, and not just because I never know when we set our clocks an hour ahead or an hour back. Setting the clock ahead or back disturbs my time organization and is responsible for my bad mood around that day. From the standpoint of a programmer, it's no less annoying: you always have to check whether it isn't "that date" in the year whenever you have to work with local time. I hear other people share the views I have on this. Also, I don't see any benefits from it; I don't feel the supposedly added "extra hour" of sunlight. If you live in a region where daylight saving time is observed (as in Germany, where I live), please tell me how you manage the annoyances that come with it, and (if possible) how to get rid of them, once and for all...
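    On the programmer side, the trap usually looks like this (a small java.time illustration; the zone and date are examples around Germany's 2012 spring-forward): calendar arithmetic in a zone absorbs the DST jump, while fixed-duration arithmetic does not.

        import java.time.Duration;
        import java.time.ZoneId;
        import java.time.ZonedDateTime;

        public class DstDemo {
            public static void main(String[] args) {
                ZoneId berlin = ZoneId.of("Europe/Berlin");
                // Noon on the day before the 2012 spring-forward in Germany:
                ZonedDateTime before = ZonedDateTime.of(2012, 3, 24, 12, 0, 0, 0, berlin);

                System.out.println(before.plusDays(1));                // 2012-03-25T12:00+02:00
                System.out.println(before.plus(Duration.ofHours(24))); // 2012-03-25T13:00+02:00
            }
        }

    The two results differ by one hour across the switch; storing instants in UTC and converting to local time only at the edges avoids most of these surprises.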

    Read the article

  • To implement a registration page with Vaadin or not?

    - by JVerstry
    This is a tactical implementation question about the usage of Vaadin in some parts of my application. Vaadin is a great framework for logging in users and implementing sophisticated web applications with many pages. However, I think it is not very well suited to designing pages that register new users for my application. Am I right or am I wrong? It seems to me that a simple HTML/CSS/JavaScript login + email registration + confirmation email with a confirmation link cannot be implemented easily with Vaadin; it seems like Vaadin would be overkill. Do you agree? Or am I missing something? I am looking for feedback from experienced Vaadin users.

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than an actual programming question... (I got shot down for posting this on Stack Overflow. They sent me here; at least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends.

    A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. OK, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin). Each language has functions or subroutines, whatever they're called, some built-in, some user-coded. They were set off by function_name( parameters ); as in sqrt( x ) or rand( y );

    Lately, there seems to be a new set of punctuation rules, especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr<gizmo> p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and "for" seem to have grown parens: if(true), for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the argument-separating commas go? And finally, functions seem to have shed their parens: sqrt (2), select (...)

    I know, I know, loosening whitespace rules is good. Keep reading. Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, now that the information the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here; space as an excuse for loss of readability died as HDD space soared past 100 MiB.

    Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it, and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • What is missing and should be added to Code Complete 3rd Edition? [closed]

    - by Peter Turner
    It's been quite a few years since Code Complete was published. I really love the book; I keep it in the bathroom at the office and read a little out of it once or twice a day. What developments in computer software... development need to be added to Code Complete 3e, and, for the sake of reductionism, what should be removed to make room for them? Is it necessary, or even possible, to call Code Complete "Code Complete" if it doesn't have language features that even Delphi has, like anonymous methods and generics? Also, what languages would be more appropriate than C++ to use for the majority of the code examples?

    Read the article

  • Easy to understand and interesting book on algorithms

    - by gasan
    Please recommend a book on algorithms that is easier to read and understand than Cormen's book [1]. It need not be as big or as deep in its explanations -- in fact, I would prefer that it not be that big -- but it shouldn't contain misconceptions, errors, or inaccuracies. It should be a kind of pre-Cormen book that will later help with understanding the more sophisticated concepts: a beginner's book, but one still worth reading.

    [1] Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

    Read the article

  • Best arguments for/against introducing ORM technology into a company's dev process

    - by james
    I have started using ORM technology in the last few years. My first exposure was NHibernate; I then moved on to Linq 2 Sql and Entity Framework. The issue I have, however, is that in some organisations I have found strong opposition to introducing ORM tools. They usually have a number of reasons:

    - they have a lot of built-up SQL skills in the team, and are worried about the underlying SQL that ORMs generate;
    - they have DBAs who like to be able to see the SQL an app uses so that they can review it for best practice;
    - they are worried about performance (some people have "heard" that ORMs aren't as performant but have no real proof themselves - there may well be some truth in this! :)

    So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against, too. Note: this is NOT a discussion over which ORM I should use.

    Read the article

  • Are there any GUI or user interface design patterns? [closed]

    - by Niranjan Kala
    I was curious about GUI design patterns, so I searched and found some information, including a list of UI patterns for the web. The UI Patterns website says that: "UI Patterns is a growing collection of User Interface Design Principles and User Interface Usability Patterns present on web applications and sites today." Are there any other design patterns for constructing websites or other user interfaces? Are there any books that describe these patterns? I'm particularly interested in patterns for Windows desktop development and web development on the .NET platform.

    Read the article

  • Sending an email with an attachment from the server side

    - by SaravananArumugam
    I have to create a Word document in a specific format and send it as an attachment to some email addresses. I have a preview screen for the report, which on approval has to be sent by email. This is an ASP.NET MVC 3 application. I am left with a few options here:

    1. I am creating the preview using HTML. I can convert this HTML into a .doc and send it, which would be a straightforward solution, but capturing the Response object's output is proving to be a tough job.
    2. I thought of using the mail-merge functionality of MS Word, where I'd fill in the placeholders of a doc template. But the problem is that, conceptually, this isn't really mail merge.
    3. I have found someone suggesting to use the RTF format and replace the placeholders with database values.

    Which is the right thing to do? What's the best solution here? Is there any other option than the three listed above?

    Read the article

  • What's the proper way to merge two projects in source control software

    - by Mallow
    I'm using Fossil-SCM to maintain my projects. Since I don't work in a team, I usually have just a very linear branch of development: 1.0 - 1.1 - 1.2. I'm wondering what the procedure is when one project's task is about to be given to a related project, thereby rendering the first project obsolete. Although I tend to rewrite most of my code if I don't remember having already written it, I would still like to keep the code archived, and I'd rather not have a fossil repo that is just dead. Can I merge it? Is that the proper way of handling this?

    For example, the code extracted data from an Excel file in order to format an HTML page. Now I've convinced my employer to move their Excel spreadsheet into a database to decrease redundancy, increase efficiency, and yadda yadda. Since I can now make logical queries against the database that don't have to jump through hoops to perform, I won't need the extra VBS files that originally manipulated the Excel file. Technically, I would be porting part of the existing code into the new project. Since it already has its own trunk, would it be advisable to combine the trunk of a different project with this one, and how would I do that exactly? I guess my tree would end up looking like an inverted tree, and I haven't seen examples of software branching that resemble this before, so I'm wondering what the norm is for a situation like this.

    Read the article

  • Starting an HTML canvas game with no graphics skills

    - by Jacob
    I want to do some hobby game development, but I have some unfortunate handicaps that have me stuck in indecision: I have no artistic talent, and I also have no experience with 3D graphics. But this is just a hobby project that might not go anywhere, so I want to develop the stuff I care about; if the game shows good potential, my graphic "stubs" can be replaced with something more sophisticated. I do, however, want my graphics engine to render something approximating the end goal. The game is tile-based, with each tile being a square; each tile also has an elevation. My target platform (subject to modification) is JavaScript rendering to the HTML5 canvas, either with a 2D or WebGL context.

    My question to those of you with game development experience is whether it's easier to develop an isometric game using a 2D graphics engine and sprites, or a 3D game using rudimentary 3D primitives and basic textures. I realize that there are limitations to isometric projection, but if it makes developing my throwaway graphics engine easier, I'm OK with the visual warts that would be introduced. Or is representing a 3D world with an actual 3D engine easier?
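    For reference, the 2D/sprite route mostly comes down to a single coordinate transform plus back-to-front draw order. A sketch of the usual isometric tile-to-screen mapping (constants are placeholders; written in Java, but the math carries straight over to canvas JavaScript):

        // The formulas are the standard 2:1 isometric ("diamond") projection,
        // with elevation as a simple vertical screen offset.
        final class IsoProjection {
            static final int TILE_W = 64;    // on-screen tile width in pixels
            static final int TILE_H = 32;    // on-screen tile height in pixels
            static final int ELEV_STEP = 8;  // pixels of lift per elevation unit

            // Tile (x, y) with the given elevation -> screen coordinates.
            static int screenX(int x, int y) {
                return (x - y) * TILE_W / 2;
            }
            static int screenY(int x, int y, int elevation) {
                return (x + y) * TILE_H / 2 - elevation * ELEV_STEP;
            }
            // Draw tiles in increasing (x + y) order so nearer tiles
            // paint over farther ones.
        }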

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on Stack Overflow and was advised to move it over to Stack Exchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final-year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance. I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query, and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet, and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files might provide a better alternative. Is there anyone out there with experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development who could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question.

    Edit: The purpose of this application is for a small-scale business; the data source would not need to be updated by more than one source, but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at the most).

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the UOW pattern, since the consumer of the repository could do several operations and I want to commit them at once. After reading several articles on the matter, I still don't get how to relate these two elements; depending on the article, it is done one way or the other.

    Sometimes the UOW is something internal to the repository:

        public class Repository
        {
            UnitOfWork _uow;

            public Repository()
            {
                _uow = IoC.Get<UnitOfWork>();
            }

            public void Save(Entity e)
            {
                _uow.Track(e);
            }

            public void SubmitChanges()
            {
                SaveInStorage(_uow.GetChanges());
            }
        }

    And sometimes it is external:

        public class Repository
        {
            public void Save(Entity e, UnitOfWork uow)
            {
                uow.Track(e);
            }

            public void SubmitChanges(UnitOfWork uow)
            {
                SaveInStorage(uow.GetChanges());
            }
        }

    Other times, it is the UOW that references the repository:

        public class UnitOfWork
        {
            Repository _repository;

            public UnitOfWork(Repository repository)
            {
                _repository = repository;
            }

            public void Save(Entity e)
            {
                this.Track(e);
            }

            public void SubmitChanges()
            {
                _repository.Save(this.GetChanges());
            }
        }

    How are these two elements related? The UOW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last option make more sense? Also, who manages the connection? If several operations have to be done in the repository, I think using the same connection, and even the same transaction, is sounder, so maybe putting the connection object inside the UOW, and the UOW inside the repository, makes sense as well. Cheers

    Read the article

  • Thread-safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and have always accepted that GUI interactions had to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions had to happen on the thread running gtk_main. Similarly, I looked at SWT to see in what ways it differs from Swing and whether it is worth using, and again found the UI-thread idea; and I am sure these three are not the only toolkits to use this model.

    I was wondering if there is a reason for this design, i.e. what the reason is for keeping UI modifications isolated to a single thread. I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API:

    - Is there a limit imposed by an operating system?
    - Is there a good reason these concerns are not "hidden" (i.e. some form of synchronization that is invisible to the user)?
    - Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible?

    I found this: http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness which seems to describe GTK differently from how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? And is it possible to implement this in Swing (as the EDT model does not actually prevent access from other threads, it just often leads to Exceptions)?
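    For reference, the shape this model forces on Swing code: compute off the EDT, then marshal the UI mutation back with SwingUtilities.invokeLater. A minimal sketch (class and method names are made up):

        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        // The worker never touches the component from its own thread;
        // it hands the mutation to the EDT.
        class StatusWorker implements Runnable {
            private final JLabel status; // created and shown on the EDT elsewhere

            StatusWorker(JLabel status) { this.status = status; }

            @Override public void run() {
                final String result = longComputation();     // off the EDT
                SwingUtilities.invokeLater(new Runnable() {  // marshal back
                    @Override public void run() { status.setText(result); }
                });
            }

            private String longComputation() { return "done"; }
        }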

    Read the article
