Search Results

Search found 19525 results on 781 pages for 'say'.


  • Install Eclipse Juno on 12.04

    - by veepsk
    So I am trying to install Eclipse Juno on Ubuntu 12.04. I did all the things instructed in this link, but whenever I install any new software (say CDT or PyDev) in Eclipse, the software is gone when I open the Eclipse app again. (I have to open Eclipse again with root privileges to install all the software.) I also ran into many problems with linking the include libraries for Eclipse CDT. So can anyone help me with installing Juno in a way that I do not need root access every time I change a configuration in Eclipse?

    Read the article

  • OBI & P6 Analytics Demo @ MAOAUG

    - by mark.kromer
    Mark will be speaking in King of Prussia, outside of Philly, for the Mid-Atlantic Oracle Apps Users Group on Oracle BI w/P6 Analytics for IT projects this Friday: http://www.maoaug.org. Stop by and say HI if you are in the area!

    Read the article

  • Agile and different facets of software development

    - by arjun
    It is said that the Kanban methodology is suited to software maintenance and support areas, whereas Scrum is more suited to new product development. No process or method is complete. Using the right one will help you succeed, but it will not guarantee success. Which agile approach is best suited for a project that is basically a re-platforming from one technology to another (say, from Java to .NET)?

    Read the article

  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
    “Write great code and everything else becomes easier” is what Paul Pagel believes in. That's his version of an adage by Brian Marick he cites: “treat code as an end, not just a means.” And he concludes: “My post-Agile world is software craftsmanship.” I wonder if that's really the way to go. Will “simply” writing great code lead the software industry into the light? He's alluding to the philosopher Kant, who proposed that human beings should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.

    Reason #1: Human beings are categorically different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his/her purposes. Because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold up a symmetric view of their relationships: since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people.

    Code is dead, it's a product, it's a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development just to feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduly… I guess that will contort our relationship in a counterproductive way.

    Please get me right: just because something is “just a tool”, “just a product” does not mean we should not be careful while designing, building, using it. Right to the contrary. We should be very careful when writing code – but not for the code's sake! We should be careful because we respect our customers, who are fellow human beings who should be treated as an end. If we are careless, neglectful, ignorant when producing code on their behalf, then we're using them. Being sloppy means you're caring more for yourself than for your customer. You're then treating the customer as a means to fulfill some of your own needs. That's plain unethical behavior.

    Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should your focus be as a software developer? But, well, I'd say your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we're really trying to be craftsmen then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.

    I'm all with Paul when he urges us to “Write great code”. Sure, if you need to write code, then by all means do so. Write the best code you can think of – and then try to improve it.

    Paul has all the best intentions when he signs Brian's “treat code as an end” - but as we all know: “The road to hell is paved with best intentions” ;-) Yes, I can imagine a “hell of code focus”. In fact, I don't need to imagine it, I'm seeing it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, and tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness – relative to the “cowboy coders” view of the world. But from yet another point of view TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear that if “writing great code” is the only focus or the main focus of software development, then we as an industry lose the ability to see that. Focus draws a line around something, it defines a horizon for perceptions and thinking. So if we focus on code, our horizon ends where “the land of code” ends. I don't think that should be our professional attitude.

    So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I'd say. So to blame it for having missed out on certain aspects is wrong. If I had to summarize Agility in one word I'd say “Value”. Agility put value for the customer back into software development. Focus on delivering value early and often – that's Agility's mantra. All else follows from that. And I ask you: Is that obsolete? Is delivering value not hip anymore? No, surely not. That's our very purpose as software developers. So how can Agility become obsolete and need to be replaced?

    We need to do away with this “either/or” thinking. It's either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think “both/and”. Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don't need to tear down anything from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to these concepts and movements for whatever each of them does best. After 10 years of Agility we should be able to identify what it was good at – and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean, etc. But don't call out the age of Post-Agility. Because it had better never end. Because once we start to lose Agility's core, we're losing focus on the customer.

    Read the article

  • Is Tracking Software Usage Illegal?

    - by Graviton
    Let's say I am writing a desktop application, and I am interested to know whether our software really gets used or not. Is it all right to insert code that tracks whether our software is used, for how long, and so on? Note that no person-identifiable information will be collected; all I am interested in knowing is how frequently and for how long the software is used. The information will be sent to our server for diagnosis. What do you think?
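    To make the question concrete, here is a minimal sketch in Python of what such an anonymous usage report might contain; the random install id, the endpoint URL and the field names are all made up for illustration, not taken from the question:

        import json, time, uuid, urllib.request

        # Hypothetical anonymous usage ping: only a randomly generated install id,
        # the session length and per-feature counters are sent; no user name,
        # machine name or account data. The endpoint URL is a placeholder.
        INSTALL_ID = str(uuid.uuid4())  # generated once and stored locally

        def report_session(started_at, feature_counts):
            payload = {
                "install_id": INSTALL_ID,
                "session_seconds": int(time.time() - started_at),
                "features": feature_counts,  # e.g. {"export_pdf": 3}
            }
            request = urllib.request.Request(
                "https://example.invalid/usage",  # placeholder endpoint
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request, timeout=5)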

    Read the article

  • What is the oldest living piece of unaltered production code? [closed]

    - by user1598390
    It's come to my mind that parts of the code in, say, Unix have maybe passed unaltered from one version or flavor into another. Maybe some pieces of the source code of the ls command are the same, unaltered, as what was written years ago. Have any of you read or learned about this? What would be the oldest living piece of unaltered production code still running, passing from version to version of a program or system? Will the code we write outlive us for decades?

    Read the article

  • Data structures in functional programming

    - by pwny
    I'm currently playing with LISP (particularly Scheme and Clojure) and I'm wondering how typical data structures are dealt with in functional programming languages. For example, let's say I would like to solve a problem using a graph pathfinding algorithm. How would one typically go about representing that graph in a functional programming language (primarily interested in pure functional style that can be applied to LISP)? Would I just forget about graphs altogether and solve the problem some other way?
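    To make that concrete, here is a small sketch (written in Python, but the shape carries over directly to Clojure's persistent maps and sets): the graph is just an immutable adjacency map, and path-finding is a pure function that takes the graph as a value and returns a path without mutating anything. The names and the example edges are illustrative, not from the question:

        from collections import deque

        def make_graph(edges):
            """Build an immutable adjacency map: node -> frozenset of neighbours."""
            adj = {}
            for a, b in edges:
                adj.setdefault(a, set()).add(b)
                adj.setdefault(b, set()).add(a)
            return {node: frozenset(nbrs) for node, nbrs in adj.items()}

        def shortest_path(graph, start, goal):
            """Breadth-first search as a pure function: the graph is passed in as a
            value and never mutated; the result is a tuple path or None."""
            frontier = deque([(start, (start,))])
            visited = frozenset([start])
            while frontier:
                node, path = frontier.popleft()
                if node == goal:
                    return path
                for nbr in graph.get(node, frozenset()) - visited:
                    visited = visited | {nbr}
                    frontier.append((nbr, path + (nbr,)))
            return None

        g = make_graph([("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")])
        print(shortest_path(g, "a", "c"))  # ('a', 'b', 'c') or ('a', 'd', 'c'); both are shortest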

    Read the article

  • How would one go about integrating Python into a game written in C++ for the use of user-made scripts?

    - by Spencer Killen
    I'm quite new to game development (not the site) and I'm currently just trying to educate myself about certain things before I really begin working on a game. Anyway, I'd like to know the basic algorithm/outline of how a game would be coded efficiently with support for user-coded scripts, written in Python, for gameplay and levels. Is this even possible? Would all the features of Python be available, like, say, multi-threading?
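    It is possible, and the usual pattern is that the C++ engine embeds the interpreter, exposes an API module to scripts, and calls well-known hook functions in the user's script each frame. As a purely illustrative sketch of the script side (the game module, the hook names and their arguments are all hypothetical, not a real API):

        # Hypothetical user-level script, assuming the C++ host embeds CPython
        # (e.g. via the CPython C API or a binding library) and exposes a "game"
        # extension module plus a convention of calling these hook functions.
        import game  # provided by the host engine, not a real package

        def on_level_start(level):
            game.spawn("goblin", x=10, y=4)

        def on_update(dt):
            # called once per frame by the engine's script runner
            player = game.get_player()
            if player.health <= 0:
                game.show_message("Game over")

    Which Python features are available (threading, file access, third-party modules) is up to the host: the embedding code decides which modules scripts may import, how it drives the interpreter and how it handles the GIL, so "all of Python" is possible but not automatic.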

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System Summary: We're spunking £12 Billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale.* If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is it outsourcing that's the problem? Is it that the software designers weren't made to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    EDIT: *This bit seemed to get a lot of attention. What I mean is I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?

    Read the article

  • How to Deal with a Difficult Boss?

    - by Anonymous
    I have some problems with my boss; it's quite a long story :) About one year ago, I was working as team leader of project X. Everything worked fine until one of my fellow staff complained about me, saying that I have problems with ALL the members of our team; that guy also told other staff that I had reported them for poor performance. My boss called me and blamed me without asking a single question. I tried to explain everything to my boss, but she didn't listen to me. One month later, we had a meeting. This was a team leaders' meeting only, and my boss talked about this problem with the other team leaders. There were two people who had worked with this guy, and they both said, "This guy cannot be trusted." That guy had done the same thing and caused the same problem with his former team leader. Finally, everything was cleared up and I think I gained some trust from her. I can say that I was the best team leader she had, as mine was the only project that achieved more than 120% profit. Then I moved to a new project; it was a bigger project and I managed it quite well. But I have a problem again. One of my staff is always taking leave and does not follow our company rules. I called him in to talk and told him that he cannot do this because it is not allowed in our company. He also altered his own working-time record file, so I called him in to warn him again. This time he asked to move to another project, so I went to talk to my boss. She came to my building when I was not there (other staff called me) and talked with that guy (who has a problem with me); I think she still does not trust me. And AGAIN, she believed what that guy said and I got blamed. I want to know how I can deal with this kind of boss, whether it is better to find a new job, or any other suggestions about this problem. Thank you :) Additional information: Even though my job title is "Team Leader", it is my responsibility to manage staff working time and behavior. This responsibility is set by my company's rules.

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. My question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less palatable alternative solutions as answers that you can comment on; see below.

    Footnote: Use Cases. In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration. Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing its entire queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to propagate and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application and I'm thinking of scratching my itch by organizing code in various 'partial view model' classes. By partial-view-model, I'm referring to a class like a view model class in an analogous way to how partial views are like views, i.e. a way to encapsulate common info and behavior. To strengthen the 'analogy', and to aid in visually organizing the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel. As a slightly more concrete example of why I'm thinking along these lines, imagine that I have a domain-model-entity class ParentItem and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method in a _ParentItemViewModel class, for which I can then include an object or a collection of objects of that class in all the view model classes for all the views that need to include a reference to a parent item, e.g. ChildItemViewModel can have a ParentItem property of the _ParentItemViewModel class type, so that in my ChildItemView view, I can use @Model.ParentItem.UserFriendlyDescription as desired, like breadcrumbs, links, etc. Edited 2014-02-06 09:56 -05 As a second example, imagine that I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, and a view model class and at least one view for each of those entities. Also, the example application covers a lot more than just some-kind-of-batches, so that it wouldn't really be useful or sensible to include info about a specific some-kind-of-batch in all of the project view model classes. But, like the above example, I have some code, say for generating a string for identifying a some-kind-of-batch in a user-friendly way, and I'd like to be able to use that in several views, say as breadcrumb text or text for a link. As a third example, I'll describe another pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties, and at least a dozen references to other fat classes. However, a lot of view model classes need properties for referencing a specific contact and most of those need other properties for collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of all of the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems to be pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use. Maybe I'm just misunderstanding 'view model class' – I understand a view model class as always corresponding to a view. But maybe I'm assuming too much.

    Read the article

  • web vs desktop? (php vs c++?)

    - by Dhaivat Pandya
    I need to write a simple file transfer mechanism (that isn't FTP). Firstly, it must have a GUI. Secondly, it must not be Dropbox. Thirdly, it may not use any paid libraries, and ideally it uses open-source components. The question that came to my mind is: where is everyone moving, from desktop to web, or from web to desktop? Would it be more useful to be experienced in, say, C++ than in PHP (or vice versa)?

    Read the article

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3. Let's say I have a UsersController with a Create action like this:

        public class UsersController : Controller
        {
            public ActionResult Create(CreateUserViewModel viewModel)
            {
            }
        }

    In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this:

        public class UsersController : Controller
        {
            private ICreateUserValidator validator;
            private IUserService service;

            public UsersController(ICreateUserValidator validator, IUserService service)
            {
                this.validator = validator;
                this.service = service;
            }

            public ActionResult Create(CreateUserViewModel viewModel)
            {
                ValidationResult result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    service.CreateUser(viewModel);
                    return RedirectToAction("Index");
                }
                else
                {
                    foreach (var errorMessage in result.ErrorMessages)
                    {
                        ModelState.AddModelError(String.Empty, errorMessage);
                    }
                    return View(viewModel);
                }
            }
        }

    That makes some sense to me, but I'm not at all sure that this is the right way to handle things like this. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there already is another user with the same name... Another option is to let the IUserService take care of the validation, like this:

        public class UserService : IUserService
        {
            private ICreateUserValidator validator;

            public UserService(ICreateUserValidator validator)
            {
                this.validator = validator;
            }

            public ValidationResult CreateUser(CreateUserViewModel viewModel)
            {
                var result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    // Save the user
                }
                return result;
            }
        }

    But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?

    Read the article

  • Dependency Checker/Installer With Java/Ant

    - by jsn
    I need some kind of software to easily roll out code on new servers. I use Apache Ant for builds. However, say I want to set up a new server fast and my Java program depends on GhostScript: is there any software that can automatically check the computer for it (and then maybe the PATH) and add it if it is not there? I have already looked at Maven and Apache Ivy; however, I think these are only for .jar files (from what I saw). Thanks for any help.

    Read the article

  • Experiences with learning Chinese

    - by Greg Low
    I've had a few friends asking me about learning Chinese and what I've found works and doesn't work. I was answering a question on a mailing list today and I thought I should post this info where it might be useful to many. The question that was initially asked was whether Rosetta Stone was useful, but I've provided much more info on learning the language here.

    I've used Rosetta Stone with Chinese, but it's really hard to know whether to recommend it or not. Rosetta Stone works the same way in all languages. They show you photos and then let you both see and hear the target language and get you to work out what they're talking about. The thinking is that that's how children learn. However, at first, I found it very frustrating. I'd be staring at photos trying to work out what they were really trying to get at. Sometimes it's far from obvious. I could not have survived without Google Translate open at the same time. The other weird thing is that the photos are from a mixture of countries. While that's good in a way, it also means that they are endlessly showing pictures of something that would never happen in the target language and culture.

    For any language, constant interaction with a speaker of the target language is needed. Rosetta Stone has a "Studio" option. That's the best part of the program. In my case, it lets me connect around twice a week to a live online class from Beijing. Classes usually have the teacher plus two to four students. You get some Studio access with the initial packages but need to purchase it for ongoing use. I find it very inexpensive. It seems to work out to about $70 (AUD/USD) for six months. That's a real bargain.

    The other downside to Rosetta Stone is that they tend to teach very formal language, but as with other languages, that's not how the locals speak. It might have been correct at one point but no-one actually says that. As an example, Rosetta Stone teaches Gonggòng qìche (pronounced roughly like "gong gong chee chure") for bus. Most of my friends from areas like Taiwan would just say Gongche. Google Translate says Zongxiàn (pronounced somewhat like "dzong sheean") instead. Mind you, the Rosetta Stone option isn't really as bad as "omnibus"; it's more like saying "public bus". If you say the option they provide, people would understand you.

    I also listen to ChinesePod in the car. They also have SpanishPod. Each podcast is about five minutes of spoken conversation. It is very good for providing current language. Another resource I use is local Meetup groups. Most cities have these, and for a variety of languages. It's way less structured (just standard conversation) but good for getting interaction.

    The obvious challenge for Asian languages is reading/writing. The input editors for Chinese that are part of Windows are excellent. Many of my Chinese friends speak fluently but cannot read or write. I was determined to learn to do both. For writing, I'm talking about on a computer, not with a pen. (Mind you, I can barely write English with a pen nowadays.) When using Rosetta Stone, you can choose to have the Chinese words displayed in pinyin (Wo xihuan xuéxí zhongguó), in Chinese characters (我喜欢学习中国), or both. This year, I've been forcing myself to just use the Chinese characters. I use a pinyin input editor in Windows though, as it's very fast. (The character recognition input on the iPad is also amazing.) Notice from the example that I provided above that the pronunciation of the pinyin isn't that obvious to us at first either.

    Since changing to only using characters, I find I can now read many more Chinese characters fluently. It's a major challenge though. I can read about 300 now, and yet you need around 2,500 to be able to read a newspaper fairly well.

    Tones are a major issue for some Asian languages. Mandarin has four tones (plus a neutral tone) and there is a major difference in meaning between two words that are spelled the same in pinyin but have different tones. For example, Ma (3rd tone, 马) is a horse, Ma (1st tone, 妈) is like "mom", and ma (neutral tone, 吗) is a question particle, and so on. Clearly you don't want to mix these up. As in English, they also have words that do sound the same but mean different things in different contexts. What's interesting is that even though we see two words that differ only by tone as very similar, to a native speaker, if you say the right words with the wrong tone, you might as well have said a completely different word. My wife's dialect of Chinese has eight tones. It's much worse.

    The reason I'm so keen to learn to read/write Chinese is that even though the different dialects are pronounced so differently that speakers of one dialect often cannot understand another dialect, the writing is generally the same. The only difference is that many years ago, the Chinese government created a simplified set of characters for some of the most commonly used ones. Older Chinese and most Cantonese speakers often struggle with the simplified characters. This is the simplified form of "three apples": 三个苹果. This is the traditional form of the same words: 三個蘋果. Note that two of the characters are the same but the middle two are quite different.

    For most languages, the best thing is to watch current movies in the target language but to watch them with the target language as subtitles, not your native language. You want to know what they actually said, not what it roughly means (which is what the English subtitle would give you). The difficulty with Asian languages like Chinese is that you have the added challenge of understanding the subtitles when they are written in the target language. I wish there were Mandarin Chinese movies with pinyin subtitles.

    For learning to read characters, I also recommend HanCard on the iPad. It is targeted at the HSK language proficiency levels. (I'm intending to take the first HSK exam as soon as I'm ready.) Hope that info helps someone get started.

    Read the article

  • Evaluating Scrum - is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality. What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, but to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before, as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires a say in, and approval of, everything that we do. First things first, is a Scrum-style process something to consider for introducing some standards and practices? From reading, Scrum seems to rely a bit more on trust and communication and focuses more on project management than on development, which is something we are completely devoid of, as we don't have any semblance of project management at present. Second, if it can work, is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people, but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how, but get it done" type of deal and/or any freeze would be unfrozen on a whim). It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance, Pair Programming would never fly (it's seen as a waste: you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • Is it a bug or a task when something doesn't work yet in the development process?

    - by Patkos Csaba
    We usually have this dilemma in our team. Sometimes, in order to implement a task or a story, we find out that the system must be in a specific state. For example, a specific system configuration has to be made beforehand. The task/story can be completed, and it works as specified with the proper configuration in place. Note that the configuration is not directly related to the task. Next, we have to create a new ... ??? ... something for the process of generating that configuration file. This is where the problems appear. Some say that it is a bug; others say it is a task or an extra feature. So, where is the limit between bugs and tasks in the development phase? Should we even consider something a bug if all the tasks are working as stated in their definitions? Can a thing be considered a bug because one compares it to the current (unstable) state of the system? Short example: a feature requires configuring a communication service for a specific operation. In the process of implementation, the team discovers that the service requires the hostnames of the peers to be resolvable to an IP address. The team adds the hostnames to the DNS server (or hosts files) and continues implementing the required feature. After the initial feature is working, a question is raised: should the sysadmin configure the DNS or hosts file, or should our application do it automatically? An automatic solution is possible, so a decision is made to implement it. ... here start the discussions ... is this a bug or an extra feature/task? PS: I know that I mixed feature / task / story in the question. It is intentional. I am interested in separating bugs from the rest. It doesn't matter what "the rest" means in a particular case.

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what the most efficient way to render GUI elements would be. When we're talking about constant-size elements (that can still be moving), a texture atlas seems to be a good fit. But what about resizable elements, say a panel with textured borders? Is there any better way than just rendering 9 rectangles with textures on them (I guess one texture, with different texture coordinates for the top-left corner, the borders, the middle, etc., used in the shader)?
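    For reference, the usual approach is exactly those nine quads ("9-slice" or "9-patch"): the corner quads keep their pixel size, the edge quads stretch along one axis and the centre stretches along both, all sampling fixed regions of a single texture. A small illustrative sketch of the rectangle math (Python is used just for the arithmetic; the function name and rect layout are mine):

        def nine_slice(dst_x, dst_y, dst_w, dst_h, tex_w, tex_h, border):
            """Return (src_rect, dst_rect) pairs for a 9-slice panel.

            Corners keep their size, edges stretch along one axis, the centre
            stretches along both. Rects are (x, y, w, h). A sketch only; a real
            renderer would bake these nine quads into one vertex buffer.
            """
            b = border
            src_xs = [0, b, tex_w - b]          # left, middle, right columns in the texture
            src_ws = [b, tex_w - 2 * b, b]
            src_ys = [0, b, tex_h - b]
            src_hs = [b, tex_h - 2 * b, b]

            dst_xs = [dst_x, dst_x + b, dst_x + dst_w - b]
            dst_ws = [b, dst_w - 2 * b, b]
            dst_ys = [dst_y, dst_y + b, dst_y + dst_h - b]
            dst_hs = [b, dst_h - 2 * b, b]

            quads = []
            for row in range(3):
                for col in range(3):
                    src = (src_xs[col], src_ys[row], src_ws[col], src_hs[row])
                    dst = (dst_xs[col], dst_ys[row], dst_ws[col], dst_hs[row])
                    quads.append((src, dst))
            return quads

    Since the source coordinates never change, the nine quads can sit in a single vertex buffer that is rebuilt only when the panel is resized, so the cost stays one texture bind and one draw call per panel rather than nine.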

    Read the article

  • How can I implement 2D cel shading in XNA?

    - by Artii
    So I was just wondering how to give a scene I am rendering a hand-drawn look (like, say, Crayon Physics). I don't really want to preprocess the sprites and was thinking of using a shader. Cel shading supplies the effect I want to achieve, but I am only aware of 3D implementations of it. So I wanted to ask if anyone knows a way to get this effect in 2D, or if cel shading would work just as well on 2D scenes?
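    Cel shading does carry over to 2D: the 3D versions quantize a lighting term, but for sprites you can get the same flat-banded look by quantizing the colour itself (posterisation), optionally adding dark outlines from an edge detect. A rough sketch of the per-pixel math in Python; in XNA the same arithmetic would go into an HLSL pixel shader applied when drawing the sprites, and the band count here is arbitrary:

        def cel_shade(r, g, b, bands=4):
            """Quantize an RGB colour (components in 0..1) into a few flat bands.

            This is the per-pixel arithmetic a 2D cel/posterize pixel shader
            would perform on each fragment of the sprite texture.
            """
            def quantize(c):
                return min(int(c * bands), bands - 1) / (bands - 1)
            return quantize(r), quantize(g), quantize(b)

        # Example: a mid-tone gets snapped to the nearest of 4 flat shades
        print(cel_shade(0.37, 0.61, 0.82))  # (0.333..., 0.666..., 1.0)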

    Read the article

  • Why can we recognize game engines?

    - by Bart van Heukelom
    About many games you can say "oh that's the Unreal engine for sure", "this was made by upgrading GTA 4", etc. We can often recognize the engine used for a game just by looking at its graphics (disregarding menus and such). I'm wondering, why is this? All game engines use the same 3D rendering technology that we all use, and the different games usually have a distinct art style, so what's left to recognize?

    Read the article
