Search Results

Search found 8706 results on 349 pages for 'projects'.


  • Drools - Doing Complex Stuff inside a Rule Condition or Consequence

    - by mfcabrera
    Hello, in my company we are planning to use Drools as a BRE for a couple of projects, and we are now trying to define some best practices. My question is: what should and shouldn't be done inside a Rule Condition/Consequence, given that we can write Java directly or call methods (for example, from a Global object in the Working Memory)?

    Example: a Rule evaluates whether a generic Object (e.g. Person) has a certain property set to true. That property can only be determined by going to the database and fetching the information for that Object. So we have two ways of implementing it:

    Alternative A:
    1. Go to the database and fetch the object property (true/false, a code)
    2. Insert the Object in the Working Memory
    3. Evaluate the rule

    Alternative B:
    1. Insert a Global Object that has a method which connects to the database and checks the property for a given object
    2. Insert the Object to evaluate in the Working Memory
    3. In the rule, call the Global Object and perform the database access

    Which of those is considered better? I really like A, but sometimes B is more straightforward; however, what would happen if something like an Exception is raised from the database? I have seen Alternative B implemented in the Drools 5.0 book from Packt Publishing, but they use a mock and don't discuss the actual implications of going to the database at all. Thank you,

    Read the article

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses Merge Replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:

    1. The schema has had to change a lot, and will continue to change as the application evolves. There have been various errors replicating these schema changes down to the device, and uploads failing due to schema inconsistencies.
    2. Subscriptions expiring (after 14 days) and being unable to reinitialize with upload - i.e., potential loss of any unsynced data up to that point.

    Basically, the worst-case scenario is data loss, and when merge replication fails there seems to be no way to get the data back off the device. My method until now has been to drop and re-create the subscription on the device. I don't hear of many people doing this, though it seems to solve everything.

    The long-term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects on how to minimise data loss, and how to write appropriate error-handling code to recover from sync failures, would be much appreciated. James

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you?

    I was enlightened twice this week. The first time made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.

    I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I've realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating that everything I've written in the last five years might have been fundamentally wrong?

    To sum it up: is there any value in knowing more than the API docs tell me?

    EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • Why isn't our C# graphics code working anymore?

    - by Jared
    Here's the situation: we have some generic graphics code that we use for one of our projects. After doing some clean-up of the code, it seems like something isn't working anymore (the graphics output looks completely wrong). I ran a diff against the last version of the code that gave the correct output, and it looks like we changed one of our functions from this:

        static public Rectangle FitRectangleOld(Rectangle rect, Size targetSize)
        {
            if (rect.Width <= 0 || rect.Height <= 0)
            {
                rect.Width = targetSize.Width;
                rect.Height = targetSize.Height;
            }
            else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height)
            {
                rect.Width = rect.Width * targetSize.Height / rect.Height;
                rect.Height = targetSize.Height;
            }
            else
            {
                rect.Height = rect.Height * targetSize.Width / rect.Width;
                rect.Width = targetSize.Width;
            }
            return rect;
        }

    to this:

        static public Rectangle FitRectangle(Rectangle rect, Size targetSize)
        {
            if (rect.Width <= 0 || rect.Height <= 0)
            {
                rect.Width = targetSize.Width;
                rect.Height = targetSize.Height;
            }
            else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height)
            {
                rect.Width *= targetSize.Height / rect.Height;
                rect.Height = targetSize.Height;
            }
            else
            {
                rect.Height *= targetSize.Width / rect.Width;
                rect.Width = targetSize.Width;
            }
            return rect;
        }

    All of our unit tests are passing, and nothing in the code has changed except for some syntactic shortcuts. But like I said, the output is wrong. We'll probably just revert back to the old code, but I'm curious if anyone has any idea what's going on here. Thanks.
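
    The likely culprit is integer division order: Rectangle and Size members are ints, so "rect.Width *= targetSize.Height / rect.Height" performs the division first and truncates the scale factor to 0 or 1 before multiplying, while the original "rect.Width = rect.Width * targetSize.Height / rect.Height" multiplies first and only truncates at the end. A minimal standalone sketch, using made-up numbers rather than anything from the project, illustrates the difference:

        // Sketch with hypothetical numbers: shows how the shortcut form
        // truncates the integer ratio to zero before the multiplication.
        using System;

        class IntegerDivisionDemo
        {
            static void Main()
            {
                int width = 400, height = 800;   // hypothetical source rectangle
                int targetHeight = 600;          // hypothetical target height

                int longForm = width * targetHeight / height;    // (400 * 600) / 800 = 300
                int shortForm = width * (targetHeight / height); // 400 * (600 / 800) = 400 * 0 = 0

                Console.WriteLine("long form:  " + longForm);    // prints 300
                Console.WriteLine("short form: " + shortForm);   // prints 0
            }
        }

    With floating-point operands the two forms would agree; with ints they are different computations, which may also be why unit tests that only happen to exercise sizes with a non-zero truncated ratio still pass.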

    Read the article

  • Want to add a functional language to my toolchest. Haskell or Erlang?

    - by sean.johnson
    I've been an OO/procedural guy my whole career, except in school where I did a lot of logic programming (Prolog). I work on an amazing variety of projects (freelancer), and I don't want the tools I know and understand to hold me back from using the right tool for the job. I've decided I should know a functional programming language, and I've narrowed the field to Haskell and Erlang.

    What are the pros and cons, advantages and disadvantages, and major trade-offs of Haskell and Erlang? How do I decide, in a rational way, which is the better path? This is a big time investment, so I'd like to choose wisely. Is there a good case to be made for something else entirely? F#, Scala, OCaml?

    (BTW, I'm normally a Ruby/C/Obj-C guy, so I'm not terribly impressed by, or dependent on, the JVM as a runtime. It's completely neutral to me - it's a fine runtime, and I don't hold it for or against a language. I don't use Microsoft products, though, so a .NET runtime would be a negative.)

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value?

    1. A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.)
    2. A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.)
    3. Something else I haven't thought of.

    Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking, or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • Eclipse > Javascript > Code highlighting not working with Object Notation

    - by Redsandro
    I am using Eclipse Helios with PDT, and when I am editing JavaScript files with the default JavaScript Editor (JSDT), code highlighting (Mark Occurrences) is not working for half of the code, for example JSON-style (or Object Literal, if you will) declarations. A little example:

        Foo = {};
        Foo.Bar = Foo.Bar || {};
        Foo.Bar = {
            bar: function(str) {
                alert(str);
            },
            baz: function(str) {
                this.bar(str); // This bar *is* highlighted though
            }
        };
        Foo.Bar.baz('text');

    No Bar, bar or baz is highlighted. For now, I humbly edit the JavaScript part of projects in Notepad++, because it just highlights every occurrence of whatever is currently selected. Is there a common practice among Eclipse JavaScript developers to get code highlighting to work correctly with the popular Object Literal notation? An option or update I missed?

    -update- I have found that code highlighting depends on the code being properly outlined. Although commonly used, Object Literal outlining still seems rare in JavaScript editors. The Spket JavaScript Editor does partial Object Literal outlining, and the Aptana JavaScript Editor does full Object Literal outlining, but both lose other important functionality. A quest for the editor with the least loss of functionality is currently in progress in this question.

    Read the article

  • Should I learn to code?

    - by saltcod
    Hi All, This is more of a philosophical question than a technical one, but I’d like some opinions on it, and I think that there are many others in my position that would benefit. My issue is that I don’t really have time to learn how to code. I know, I know… no one has time anymore, but please hear me out.

    Since learning to use Drupal about 2 years ago I’ve been involved with several projects wherein I’ve become the default quasi-developer, front-end designer, site manager, and system administrator. What I’ve found is that I can produce fairly nice, feature rich Drupal sites with the wealth of contrib. modules out there (Views, CCK, image handling, etc….). BUT! I can’t code. I know enough PHP to insert something into a block, or re-word a string, but that’s about it. I still don’t really even know how arrays work.

    My question

    Succinctly, my question is: given the time that I have available for all of this stuff – in addition to a full-time job and regular life – am I better off trying to become more expert at the front-end stuff, or should I just learn PHP already?

    Pros
    1. If a project doesn’t use Drupal, I’ll know enough PHP to be able to participate.
    2. Learning PHP would help my Drupal development too.
    3. Learning PHP would make front-end theming easier.
    4. Learning PHP should give me that missing background in programming – and should allow me to learn other languages in the future.

    Cons
    1. At 28, I know I’m not too old to learn anything. But am I too old to become ‘good’?
    2. Am I better off getting better and better at front-end UX work?
    3. Am I better off farming out the PHP work?

    Suggestions from coders welcome! Thanks Terry

    Read the article

  • TFS: Choose which workspace to add a solution to.

    - by Patricker
    I have a solution which I developed in VS2008 and which I am trying to add to Source Control (TFS 2010, though the issue happened in TFS 2008 as well). I have several TFS workspaces on my computer and I have access to several Team Projects. When I right-click the solution in my Solution Explorer and choose the "Add Solution to Source Control" option, I am never given the option of choosing which Workspace or which Team Project to add the existing solution to. VS2008 then proceeds to add it to the same team project every time.

    I have tried selecting an alternate workspace/team project in every window where I can see an option for it, but it always adds it back to the same one. I even tried changing the name of my new workspace so that alphabetically it was the first, thinking that it might somehow be related to that... no luck. I then tried going to the Change Source Control window, where you can add/remove bindings on a solution/project, but that window also defaults to the same Team Project as trying to add the solution directly does...

    Any help would be greatly appreciated with this; maybe I'm just missing something?

    Read the article

  • How can I get my business objects layer to use the management layer in their methods?

    - by Tom Pickles
    I have a solution in VS2010 with several projects, each making up a layer within my application. I have business entities which are currently objects with no methods, and I have a management layer which references the business entities layer in its project. I now think I have designed my application poorly and would like to move methods from helper classes (which are in another layer) into methods I'll create within the business entities themselves.

    For example, I have a VirtualMachine object which uses a helper class to call a Reboot() method on it, which passes the request to the management layer. The static manager class talks to an API that reboots the VM. I want to move the Reboot() method into the VirtualMachine object, but I will need to reference the management layer:

        public void Reboot()
        {
            VMManager.Reboot(this.Name);
        }

    So if I add a reference to my management project in my entities project, I get the circular dependency error, which is how it should be. How can I sort this situation out? Do I need yet another layer between the entity layer and the management layer? Or should I just forget it and leave it as it is? The application works OK now, but I am concerned my design isn't particularly OOP-centric and I would like to correct this.
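
    One common way out of this kind of circular reference is dependency inversion: the entities project declares an interface for the operation it needs, the management project implements it, and a layer that already references both supplies the concrete implementation. The sketch below uses assumed type names and constructor injection purely as an illustration of the idea; it is not the poster's actual code or project layout.

        // Sketch only - assumed names, not the actual project code.

        // Entities project: no reference to the management project.
        public interface IVirtualMachineManager
        {
            void Reboot(string vmName);
        }

        public class VirtualMachine
        {
            private readonly IVirtualMachineManager manager;

            public VirtualMachine(string name, IVirtualMachineManager manager)
            {
                this.Name = name;
                this.manager = manager;
            }

            public string Name { get; private set; }

            public void Reboot()
            {
                // Delegates to whatever implementation was supplied at construction;
                // the entity never sees the concrete management class.
                manager.Reboot(this.Name);
            }
        }

        // Management project: references the entities project and implements the interface.
        public class VMManager : IVirtualMachineManager
        {
            public void Reboot(string vmName)
            {
                // Call the real management API here.
            }
        }

    The extra interface is a cost, so it only pays off if the entities genuinely need to expose behaviour like Reboot() themselves rather than leaving it in the helper classes.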

    Read the article

  • Problem using a COM interface as parameter

    - by Cesar
    I have the following problem: I have two projects, Project1 and Project2. In Project1 I have an interface, IMyInterface. In Project2 I have an interface, IMyInterface2, with a method that receives a pointer to IMyInterface1. When I use import "Project1.idl"; in my Project2.idl, a #include "Project1.h" appears in Project2_i.h. But this file does not even exist! What is the proper way to import an interface defined in another library into an idl file?

    I tried to replace the #include "Project1.h" with #include "Project1_i.h" or #include "Project1_i.c", but that gave me a lot of errors. I also tried to use importlib("Project1.tlb") and define my interface IMyInterface2 within the library definition, but when I compile the Project2PS project, an error is raised (something like "dlldata.c is not generated if no interface is defined"). I tried to create a dummy Project1.h, but then when Project2_i.h is compiled, the compiler cannot find MyInterface1. And if I include Project1_i.h I get a lot of errors again!

    Apparently it is a simple issue, but I don't know how to solve it. I'm stuck with this! By the way, I'm using VS2008 SP1. Thanks in advance.

    Read the article

  • Stretch ListBox Items hit area to full width of the ListBox?

    - by Nicholas
    I've looked around for an answer on this, but the potential duplicates are more concerned with presentation than interaction. I have a basic ListBox, and each item's content is a simple string. The ListBox itself is stretched to fill its grid container, but each ListBoxItem's hit area does not mirror the ListBox width. It looks as if the hit area (pointer contact area) for each item is only the width of the text content. How do I make this stretch all the way across, regardless of the text size? I've set HorizontalContentAlignment to Stretch, but this doesn't solve my problem. My only other guess is that the content is actually stretching, but the background is invisible and so not capturing the mouse pointer.

        <ListBox Grid.Row="1"
                 x:Name="ProjectsListBox"
                 DisplayMemberPath="Name"
                 ItemsSource="{Binding Path=Projects}"
                 SelectedItem="{Binding Path=SelectedProject}"
                 HorizontalContentAlignment="Stretch"/>

    The XAML is pretty straightforward on this. If I mouse over the text in one of the items, then the entire width of the item becomes active. I guess I just need to know how to create an interactive background that is invisible.

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, We are working on a POC where we have the following architecture (MVVM):

    WPF (Client) + WCF + Model (Data Access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB)

    All are different projects. In the DataAccess layer we have created different Entity Models (edmx) based on functionality: the tables under a particular flow are grouped into separate entity models. We are using self-tracking entities to communicate to and from the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models a few issues started coming up. The multiple models share a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects are getting created. E.g.:

        CompanyModel (edmx) - Company (Entity) - ObjectChangeTracker, ObjectState
        ProductModel (edmx) - Customer (Entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx) - Order (Entity) - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) There are a few tables which are shared across the models; e.g. Company (Entity) is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly?

    Thanks in advance, and I would appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • Portable Eclipse

    - by Jeach
    I'm trying to port my entire 'workspace' to a USB key (including the Eclipse executable) so that I can carry my work anywhere with me and work off the key directly. My directory hierarchy is similar to this:

        /workspace/eclipse   - Where my current eclipse binary is stored
        /workspace/codebase  - Where I keep the root of all my eclipse projects
        /workspace/resources - Where I keep all project files (images, docs, libs, etc.)

    It all works perfectly fine on one system. But when I change over to another system, the USB key gets mounted on another drive. For example, on my laptop I get 'E:\', on my PC I get 'K:\', and at work I get 'F:\', etc. This means that because Eclipse (for 'some' reason) seems to only use full path names (including drive letters) in every single one of its configuration files (such as .classpath), nothing ever works when I want to work on another system.

    I put a 'libs' directory in the base of every project and populate it with its dependent JAR files. Why doesn't it use relative names instead, so that I could specify something like "../../libs/log4j.jar"? Anyone know how to fix this problem? Does anyone know of a workaround for this? For some reason, I really doubt I'm the first developer to do this! Thanks for your help and any suggestions.

    Read the article

  • I need help solving a rather weird error in a WCF service.

    - by Moulde
    Hi. I have a solution that contains three projects: a main project with my MVC app, a Silverlight application, and a (Silverlight-enabled) WCF service project. In my Silverlight project I have made a Service Reference to my WCF service, and I pretty much got that working.

    In my WCF service I have a method that returns a Book object, which has some fields like title, date, etc. In the Book class I have an ICollection field that contains a list of events. The Book class is generated using Entity Framework 4.0, and Lazy Loading is enabled. If, in my getBook(int id) method, I return a book with the events field not initialized, it works like a charm. But if I initialize the field, I get this error:

        The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.

    I have a few ideas why that is happening, and while writing this I just got another one: the WCF service somehow threw away the reference to the event class. That would be very weird, since I have a reference between my main MVC app (with the models) and my WCF service. Since I have enabled Lazy Loading in EF 4.0, I suspect that it may be the thing generating the error, but I'm not sure why that would be, because I'm not in any way accessing that field. I could understand that I may not be able to access the events field after I receive the object in my Silverlight application, since the connection between the Book object and the Entity Framework is broken by then. Did I mention that Lazy Loading is enabled on my EF instance? And there is no inner exception in the thrown exception.

    Thanks in advance. Malte Baden Hansen
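
    Since lazy loading is already the prime suspect, one common first diagnostic for this kind of opaque WCF fault when returning EF 4 object graphs is to switch off lazy loading and proxy creation and eager-load the collection instead, so the serializer never touches a live context. The method-level sketch below uses assumed names (a BookEntities context, a Books set, an Id key and an "Events" navigation property) and is only an illustration, not the poster's actual service code; it also assumes a using System.Linq directive.

        // Hedged sketch with assumed names - not the actual service implementation.
        public Book GetBook(int id)
        {
            using (var context = new BookEntities())        // assumed ObjectContext name
            {
                // Make sure nothing is lazily loaded or proxied during serialization.
                context.ContextOptions.LazyLoadingEnabled = false;
                context.ContextOptions.ProxyCreationEnabled = false;

                // Eager-load the events collection up front instead.
                return context.Books
                              .Include("Events")            // assumed navigation property
                              .Single(b => b.Id == id);     // assumed key property
            }
        }

    If the fault persists with lazy loading and proxies out of the picture, turning on includeExceptionDetailInFaults on the service while debugging usually surfaces the real exception behind the generic "no meaningful reply" message.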

    Read the article

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering a move from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it will take way too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out relevant folders for each project doesn't seem ideal, because it results in several working copies for each static library folder; we also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk.

    I wonder if there is some advice on 1) a better repository organization, or 2) a way to merge branch changes into a trunk working copy that is sparse?

    Read the article

  • Converting to MVC3 - some views still want 'System.Web.Mvc, Version=1.0.0.0,

    - by justSteve
    I've used the directions from the release notes and have been able to navigate most pages - my unit tests are not comprehensive, but almost all pass. However, when I attempt to edit an existing user or create a new one, I get the error pasted below - notice that it references Version=1.0.0.0. This project started life as a v1 project and was converted to MVC 2 at the RTM; I'm still working with v2 projects but no longer any v1. Am I due for a GAC cleansing?

        Server Error in '/' Application.
        Could not load file or assembly 'System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

        === Pre-bind state information ===
        LOG: User = STUDIO11\mUser
        LOG: DisplayName = System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (Fully-specified)
        LOG: Appbase = file:///C:/Users/C:\Users\[path to project]/
        LOG: Initial PrivatePath = C:\Users\[path to project]\bin
        Calling assembly : App_Web_qcjylaoc, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null.
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Users\[path to project]\web.config
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config.
        LOG: Post-policy reference: System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
        LOG: The same bind was seen before, and was failed with hr = 0x80070002.

    Read the article

  • Easy way to "organize"/"render"/"generate" HTML?

    - by Rockmaninoff
    Hi all, I'm a relative newbie to web development. I know my HTML and CSS, and am getting involved with Ruby on Rails for some other projects, which has been daunting but very rewarding. Basically I'm wondering if there's a language/backbone/program/solution to eliminate the copypasta trivialities of HTML, with some caveats. Currently my website is hosted on a school server and unfortunately I can't use Rails there. Being a newbie, I also don't really know what other technologies are available to me (or even what those technologies might be).

    I'm essentially looking for a way to auto-insert all of my header/sidebar/footer/menu information, so that when those need to be updated, the rest of the pages get updated too. Right now, I have a sidebar that is a tree of all of the pages on my website. When I add a page, not only do I need to update the sidebar, I have to update it for every page in my domain. This is really inefficient, and I'm wondering if there is a better way. I imagine this is a pretty widespread problem, but searching Google turns up too many irrelevant links (design template websites, tutorials, etc.). I'd appreciate any help.

    Oh, and I've heard of HAML as a way to render HTML; how would it be used in this situation?

    Read the article

  • Best practice - How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project that will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.), and "THE SITE" (help, information, marketing stuff, promotional landing pages - lots of marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool ajaxy application, and every other URI will load stuff in the other site.

    Problem: the "THE SITE" component is out of my hands, and will be handled by a consultants' team that will work closely with marketing, while I and my team will work solely on the ACME PRODUCT.

    Question: how to set up the Django project in such a way that we can have:

    1. Separate releases. (They can push new marketing pages and functionality without having to worry about the state of our code. Maybe even separate Subversion "projects".)
    2. Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site.
    3. Still allow some code reuse.

    My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make on their marketing side of the site. How have you handled this? Any ideas? Thanks!

    Read the article

  • World Economic Crisis. IT prospects

    - by Andrew Florko
    There was a similar question in 2008, and 2 years have passed. Please share your expectations about the IT market and employment in the next year or two (or as far ahead as you can predict).

    IMHO Russia (my native country) fully met the crisis in spring 2008. Stock markets shrank 3(!) times during half a year. Many developers were fired in those days, but I suppose just because business was shocked and froze some projects. Developers expected +20% salary growth per year in 2004-2007 (a developer salary in Moscow was about 2-3K$ in early 2008). Then there was a 30% (very subjective) salary cut in 2008, and salaries were frozen till 2009. Now things are slowly coming back to 2008 levels.

    Looking into the future, I expect the pessimistic scenario and another crash. Our economy depends more and more on oil & gas every year. The IT that serves industry will shrink, because we can't compete with China in real production. Due to the strong currency (the ruble is strong compared to the dollar) we can't rely on offshore programming. Our officials are concerned with an innovative economic breakthrough, but in practice it amounts to ordinary budget money assignment. I don't believe in innovations either, because who needs innovations when you have debts and tomorrow is vapor?

    Read the article

  • What is better: Developing a Web project in MVC or N-Tier Architecture?

    - by Starx
    I asked a similar question before and got a convincing answer as well: http://stackoverflow.com/questions/2843311/what-is-difference-of-developing-a-website-in-mvc-and-3-tier-or-n-tier-architectu

    Following the conclusion of that question, I started developing projects in an N-tier architecture. Just about an hour ago, I asked another question about the best design pattern for creating an interface, and there the most voted answer suggests that I use the MVC architecture: http://stackoverflow.com/questions/2930300/what-is-the-best-desing-pattern-to-design-the-interface-of-an-webpage

    Now I am confused. The first post suggested that both are similar, with just the difference that in N-tier the tiers are physically and logically separated, and one layer has access to the one above and below it but not to all the layers. I think ASP.NET uses a 3-tier architecture when developing applications or web applications, whereas frameworks like Zend and Symphony use MVC. I just want to stick to the pattern that is best suited for web project development. Maybe this is a very silly confusion, but if someone could clear it up, I would be very grateful.

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction).

    One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on the issue, I determined it was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points.

    I know that Scrum points are supposed to measure size/complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with because, from time to time, we're encouraged by the Scrum master to "adjust" our size/complexity estimates, both increasing and decreasing them.

    I'd like to hear some other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size/complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations similar to what I've just described.

    Read the article

  • What is the appropriate granularity in building a ViewModel?

    - by JasCav
    I am working on a new project and, after seeing some of the difficulties of previous projects that didn't provide enough separation of view from their models (specifically using MVC - the models and views began to bleed into each other a bit), I wanted to use MVVM. I understand the basic concept, and I'm excited to start using it. However, one thing escapes me a bit: what data should be contained in the ViewModel? For example, if I am creating a ViewModel that will encompass two pieces of data so they can be edited in a form, do I capture it like this:

        public class PersonAddressViewModel
        {
            public Person Person { get; set; }
            public Address Address { get; set; }
        }

    or like this:

        public class PersonAddressViewModel
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string StreetName { get; set; }
            // ...etc
        }

    To me, the first feels more correct for what we're attempting to do. If we were doing finer-grained forms (maybe all we were capturing was FirstName, LastName, and StreetAddress) then it might make more sense to go down to that level. But I feel like the first is correct, since we're capturing ALL Person data and ALL Address data in the form. It seems like it doesn't make sense (and is a lot of extra work) to split things apart like that. I appreciate any insight.

    Read the article

  • Xcode 4: nib files not loading when run, can't find UI elements

    - by Jordan
    So, I just downloaded Xcode 4 and installed it. I was actually quite looking forward to the single window and integrated IB... However, when I open and run one of my projects, the nib files that the project uses don't seem to load. Instead I'm left looking at a blank white screen (iPhone). This project ran fine on Xcode 3.2.

    So I thought this can't be that hard to fix, and I opened up a nib file, thinking that maybe editing one, or creating a new one from scratch, could point me in the right direction. But I can't find the old resources panel from Interface Builder anywhere. How am I meant to create a new view or add buttons? I know I'm probably just missing something obvious :s Did anyone else have the same nib file problems - is there a fix (or something stupidly simple that I'm forgetting about)?

    EDIT: Ok. If I background and un-background the app, the view loads fine. But this happens every time I build, on both iPhone and the iOS simulator, i.e. the app doesn't work properly until it's been backgrounded. All the code for loading the view follows from - (void)applicationDidFinishLaunching:(UIApplication *)application. Now I am really confused.

    Thanks :)

    Read the article

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead.

    One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository.

    If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

    Read the article
