Search Results

Search found 9366 results on 375 pages for 'common lisp'.

  • UITableViewCell outlets not set during bundle load (possibly very elementary question)

    - by Jan Zich
    What are the most common reasons for an outlet (a class property) not being set during a bundle load? I'm sorry; most likely I'm not using the correct terms. These are my first steps with iPhone OS development and Objective-C, so please bear with me.

    Here are more details. Basically, I'm trying to create a table-view-based form with a fixed number of static rows. I followed this example: http://developer.apple.com/iphone/library/documentation/userexperience/conceptual/TableView_iPhone/TableViewCells/TableViewCells.html (scroll down to "The Technique for Static Row Content", please). I have one nib file with one table view, three table cells and all connections set as in the example. The problem is that the corresponding cell properties in my controller are never initialised. I get an exception in cellForRowAtIndexPath complaining that the returned cell is nil: "UITableView dataSource must return a cell from tableView:cellForRowAtIndexPath". Here are the relevant parts of the implementation of the controller:

        @synthesize cellA;
        @synthesize cellB;
        @synthesize cellC;

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 3;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.row) {
                case 0: return cellA;
                case 1: return cellB;
                case 2: return cellC;
                default: return nil;
            }
        }

    And here is the interface part:

        @interface AssociatePhoneViewController : UITableViewController {
            UITableViewCell *cellA;
            UITableViewCell *cellB;
            UITableViewCell *cellC;
        }

        @property (nonatomic, retain) IBOutlet UITableViewCell *cellA;
        @property (nonatomic, retain) IBOutlet UITableViewCell *cellB;
        @property (nonatomic, retain) IBOutlet UITableViewCell *cellC;

        @end

    This must be one of the most embarrassing questions on StackOverflow. It looks like the most basic example code. Is it possible that the cells are not instantiated with the nib file? I have them on the same level before the table view in the nib file. I tried to move them after the table view, but it did not make any difference. Are table cells in some way special? Do I need to set some flag or some property on them in the nib file? I was under the impression that all classes (views, windows, controllers …) listed in a nib file are simply instantiated (and linked using the provided connections). Could it possibly be some memory issue? The cell properties in my controller are not defined in any special way.

  • ASP.NET- forcing child/container events to fire before parent onload?

    - by Hans Gruber
    I'm working on a questionnaire-type application in which questions are stored in a database. Therefore, I create my controls dynamically on every Page.OnLoad. This works like a charm, and ViewState is persisted between postbacks because I ensure that my dynamic controls always have the same generated Control.ID.

    In addition to the user control that dynamically populates the questions, my questionnaire page also contains a 'Status' section (also encapsulated by a user control) which represents the status of the questionnaire (choices are 'Complete', 'Started' or 'In Progress'). If the user changes the status of the questionnaire (i.e. from 'In Progress' to 'Complete'), I need to post back to the server because the contents of the dynamic portion of the questionnaire depend on the selected status. Some questions are always present regardless of status, and yet others may not be present at all for the selected status. The point is, when the status changes, I have to post back to the page and render the right set of questions. Additionally, I need to preserve any user-entered values for those questions which are 'always available'.

    However, due to the page life cycle in ASP.NET, the 'Status' user control's OnLoad, which contains the correct status needed to load the right questions from the DB, doesn't get executed until after the 'dynamic questions' user control has already been populated (with the wrong/stale values). To get around this, I raise an event from my 'Status' user control to the main page to indicate that the status has changed. The main page then raises an event on the 'dynamic questions' user control. Since by the time this event bubbles up, the 'dynamic questions' user control has already loaded the 'wrong' questions from the DB, it first calls Controls.Clear. It then happily uses the new status to query the database for the 'correct' questions and does a Control.Add() on each. FYI, Control.IDs are consistent across postbacks.

    This solution works...sorta. The correct set of questions for the selected status do get rendered; however, ViewState is getting lost for those 'always available' questions. I'm guessing this is because the 'dynamic questions' user control calls Controls.Clear when responding to the status-changed event. This must somehow kill the association between ViewState and my dynamic controls, even though the Control.IDs are consistent.

    This seems like such a common requirement that I'm virtually certain there is a better, cleaner and less error-prone approach to accomplish this. In case it's not plainly obvious, I haven't been able to grok the ASP.NET page life cycle despite working with it for the last year. Any help is much appreciated!
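
    A minimal sketch of one common pattern for this situation (an assumption on my part, not the poster's code; the control names and the LoadQuestionsForStatus helper are hypothetical): on a postback, the newly selected status can be read straight out of Request.Form during Init, before ViewState is restored, so the page can build the 'correct' dynamic controls early enough for ViewState to re-bind to them.

        // Sketch only: rebuild the dynamic question controls in Init, using the
        // raw posted value of the status drop-down instead of waiting for OnLoad.
        public partial class QuestionnairePage : System.Web.UI.Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                // Request.Form is available before ViewState loads, so this sees
                // the *new* status even though no control events have fired yet.
                string status = IsPostBack
                    ? Request.Form[StatusDropDown.UniqueID]  // hypothetical control
                    : "InProgress";                          // assumed default

                foreach (string questionId in LoadQuestionsForStatus(status))  // hypothetical DB call
                {
                    // Stable IDs across postbacks let ViewState re-attach to the
                    // 'always available' questions instead of being discarded.
                    QuestionsPanel.Controls.Add(new TextBox { ID = "Question_" + questionId });
                }
            }
        }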

  • help me "dry" out this .net XML serialization code

    - by Sarah Vessels
    I have a base collection class and a child collection class, each of which is serializable. In a test, I discovered that simply having the child class's ReadXml method call base.ReadXml resulted in an InvalidCastException later on. First, here's the class structure.

    Base class:

        // Collection of Row objects
        [Serializable]
        [XmlRoot("Rows")]
        public class Rows : IList<Row>, ICollection<Row>, IEnumerable<Row>,
                            IEquatable<Rows>, IXmlSerializable
        {
            public Collection<Row> Collection { get; protected set; }

            public void ReadXml(XmlReader reader)
            {
                reader.ReadToFollowing(XmlNodeName);
                do
                {
                    using (XmlReader rowReader = reader.ReadSubtree())
                    {
                        var row = new Row();
                        row.ReadXml(rowReader);
                        Collection.Add(row);
                    }
                } while (reader.ReadToNextSibling(XmlNodeName));
            }
        }

    Derived class:

        // Acts as a collection of SpecificRow objects, which inherit from Row. Uses the same
        // Collection<Row> that Rows defines, which is fine since SpecificRow : Row.
        [Serializable]
        [XmlRoot("MySpecificRowList")]
        public class SpecificRows : Rows, IXmlSerializable
        {
            public new void ReadXml(XmlReader reader)
            {
                // Trying to just do base.ReadXml(reader) causes a cast exception later
                reader.ReadToFollowing(XmlNodeName);
                do
                {
                    using (XmlReader rowReader = reader.ReadSubtree())
                    {
                        var row = new SpecificRow();
                        row.ReadXml(rowReader);
                        Collection.Add(row);
                    }
                } while (reader.ReadToNextSibling(XmlNodeName));
            }

            public new Row this[int index]
            {
                // The cast in this getter is what causes InvalidCastException if I try
                // to call base.ReadXml from this class's ReadXml
                get { return (Row)Collection[index]; }
                set { Collection[index] = value; }
            }
        }

    And here is the code that causes a runtime InvalidCastException if I do not use the version of ReadXml shown in SpecificRows above (i.e., I get the exception if I just call base.ReadXml from within SpecificRows.ReadXml):

        TextReader reader = new StringReader(serializedResultStr);
        SpecificRows deserializedResults = (SpecificRows)xs.Deserialize(reader);
        SpecificRow row = (SpecificRow)deserializedResults[0]; // this throws InvalidCastException

    So, the code above all compiles and runs exception-free, but it bugs me that Rows.ReadXml and SpecificRows.ReadXml are essentially the same code. The value of XmlNodeName and the new Row()/new SpecificRow() are the only differences. How would you suggest I extract out all the common functionality of both versions of ReadXml? Would it be silly to create some generic class just for one method? Sorry for the lengthy code samples; I just wanted to show why I can't simply call base.ReadXml from within SpecificRows.
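
    A sketch of one way to dry this out (an assumption, not the asker's code; the XmlNodeName value and the CreateRow hook are illustrative names): keep the loop in the base class once, and let the derived class override only the two things that differ, the node name and the row factory.

        public class Rows // other members from the question elided
        {
            protected virtual string XmlNodeName { get { return "Row"; } }
            protected virtual Row CreateRow() { return new Row(); }   // factory hook

            public void ReadXml(XmlReader reader)
            {
                reader.ReadToFollowing(XmlNodeName);
                do
                {
                    using (XmlReader rowReader = reader.ReadSubtree())
                    {
                        Row row = CreateRow();   // virtual call builds Row or SpecificRow
                        row.ReadXml(rowReader);
                        Collection.Add(row);
                    }
                } while (reader.ReadToNextSibling(XmlNodeName));
            }
        }

        public class SpecificRows : Rows
        {
            protected override string XmlNodeName { get { return "MySpecificRow"; } }
            protected override Row CreateRow() { return new SpecificRow(); }
            // No 'new ReadXml' needed: the inherited loop now creates SpecificRow
            // instances, so the indexer's cast no longer fails.
        }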

  • Executing a .NET Managed Assembly from SQL Server 2008 - Pro's, Con's & Recommendations

    - by RPM1984
    Hi guys, I'm looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The platform:

    - .NET 4.0 Web Application
    - SQL Server 2008

    The task: overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" deciding to post a forum topic. This results in (the end solution) needing to look at various factors (about the post he made), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.

    The options: OK, so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ-SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically, I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:

    - Simplicity of math. Doing complex math in .NET vs complex math in T-SQL is a no-brainer. =)
    - Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL.
    - T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math, it delegates off to the managed assembly.

    So, having said that, I've never done this before (call a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated.

    So, what am I asking? Well, firstly, I need some better references on how to actually do this ("this" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008). Secondly, who out there has done this, and what problems (if any) did you face? I realize it may be difficult to provide a "correct answer", so I'll try to give it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
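
    For what it's worth, here is a minimal SQL CLR sketch of the shape such an assembly takes (an assumption about the shape, not the poster's code; the class, method and parameter names are hypothetical). Methods exposed to T-SQL must be public static and are typically written against the SqlTypes so they can handle NULLs; on the SQL Server side they are then registered with CREATE ASSEMBLY and CREATE FUNCTION ... EXTERNAL NAME. One caveat worth checking: SQL Server 2008 hosts the 2.0 CLR, so the assembly would need to target .NET 3.5 or earlier rather than .NET 4.

        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class ForumMath // hypothetical name
        {
            // DataAccess = None because all inputs arrive as parameters;
            // marking it deterministic helps the optimizer.
            [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true)]
            public static SqlDouble ComputePostScore(SqlDouble ratio, SqlInt32 postCount)
            {
                if (ratio.IsNull || postCount.IsNull)
                    return SqlDouble.Null;

                // The "complex math" lives here in .NET instead of T-SQL.
                return new SqlDouble(Math.Log(1 + postCount.Value) * ratio.Value);
            }
        }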

  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the Driver class for a JDBC connection to MySQL. The workstation is running on Linux, Fedora 10. I have manually set up the CLASSPATH variable for Java via the CLI like this:

        bash-3.2$ echo $CLASSPATH
        /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar

    This shows that I've added the latest MySQL connector jar archive to my CLASSPATH variable. I've created a test JSP page which can be found here. And the source code for this page is:

        <%@page language="java"%>
        <%@page import="java.sql.*"%>
        <%@page import="java.util.*"%>
        <html>
        <head>
            <title>UTS JDBC MySQL connection test page</title>
        </head>
        <body>
        <%
            Connection con = null;
            out.print("Java version is : " + System.getProperty("java.version") + "<br />");
            out.print("Tomcat version is : " + application.getServerInfo() + "<br />");
            out.print("Servlet version is: " + application.getMajorVersion() + "<br />");
            out.print("JSP version is : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() + "<br />");
            //out.print("Java classpath is : " + System.getProperty("java.class.path") + "<br />");
            //out.print("JSP classpath is : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />");
            //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />");
            try
            {
                Class c = Class.forName("com.mysql.jdbc.Driver");
            }
            catch(Exception e)
            {
                out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />");
            }
        %>
        </body>
        </html>

    None of those commented-out lines would work; various JasperExceptions would be thrown. You can check those error pages from the following links: classpath error page, catalina error page, tomcat error page.

    It seems, from my limited knowledge of JSP and servlets, that the Tomcat environment "ignores" my Java CLASSPATH? In which case I cannot configure the MySQL JDBC package to let my servlets (a JSP is but a servlet anyway) work. I am not sure how to fix this issue. Would it be better if I used an IDE like Eclipse or NetBeans and created a real Java "web app", so that everything could be "self-configured" by the usage of a web.xml configuration file? So that I can certainly bypass this Tomcat environment restriction? Many thanks for the suggestions in advance.

  • Make Div overlay ENTIRE page (not just viewport)??

    - by Polaris878
    Hello, So I have a problem that I think is quite common but I have yet to find a good solution for. I want to make an overlay div cover the ENTIRE page... NOT just the viewport. I don't understand why this is so hard to do... I've tried setting body, html heights to 100% etc but that isn't working. Here is what I have so far:

        <html>
        <head>
        <style type="text/css">
            .OverLay {
                position: absolute;
                z-index: 3;
                opacity: 0.5;
                filter: alpha(opacity = 50);
                top: 0;
                bottom: 0;
                left: 0;
                right: 0;
                width: 100%;
                height: 100%;
                background-color: Black;
                color: White;
            }
            body { height: 100%; }
            html { height: 100%; }
        </style>
        </head>
        <body>
            <div style="height: 100%; width: 100%; position: relative;">
                <div style="height: 100px; width: 300px; background-color: Red;"></div>
                <div style="height: 230px; width: 9000px; background-color: Green;"></div>
                <div style="height: 900px; width: 200px; background-color: Blue;"></div>
                <div class="OverLay">TestTest!</div>
            </div>
        </body>
        </html>

    I'd also be open to a solution in JavaScript if one exists, but I'd much rather just be using some simple CSS.

  • What is the best practice for adding persistence to an MVC model?

    - by etheros
    I'm in the process of implementing an ultra-light MVC framework in PHP. It seems to be a common opinion that the loading of data from a database, file etc. should be independent of the Model, and I agree. What I'm unsure of is the best way to link this "data layer" into MVC.

    Datastore interacts with Model:

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $data = $this->loadDataStore('foo', $model);
            $data->loadBar(9);      //loads data and populates Model
            $model->setBar('bar');
            $data->save();          //reads data from Model and saves
        }

    Controller mediates between Model and Datastore. Seems a bit verbose and requires the model to know that a datastore exists:

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $data = $this->loadDataStore('foo');
            $model->setDataStore($data);
            $model->getDataStore->loadBar(9);   //loads data and populates Model
            $model->setBar('bar');
            $model->getDataStore->save();       //reads data from Model and saves
        }

    Datastore extends Model. What happens if we want to save a Model extending a database datastore to a flatfile datastore?

        //controller
        public function update()
        {
            $model = $this->loadHybrid('foo');  //get_class == Datastore_Database
            $model->loadBar(9);                 //loads data and populates
            $model->setBar('bar');
            $model->save();                     //saves
        }

    Model extends datastore. This allows for Model portability, but it seems wrong to extend like this. Further, the datastore cannot make use of any of the Model's methods:

        //controller extends model
        public function update()
        {
            $model = $this->loadHybrid('foo');  //get_class == Model
            $model->loadBar(9);                 //loads data and populates
            $model->setBar('bar');
            $model->save();                     //saves
        }

    EDIT: Model communicates with DAO:

        //model
        public function __construct($dao)
        {
            $this->dao = $dao;
        }

        //model
        public function setBar($bar)
        {
            //a bunch of business logic goes here
            $this->dao->setBar($bar);
        }

        //controller
        public function update()
        {
            $model = $this->loadModel('foo');
            $model->setBar('baz');
            $model->save();
        }

    Any input on the "best" option - or alternative - is most appreciated.

  • generated service mock: everything but RhinoMocks fails?

    - by hko
    I have the "quest" to search for the next mocking framework for my company, and basically it's down to NSubstitute (simplest syntax, but no strict mocks), FakeItEasy (best reviews, Roy Osherove bonus, and slightly better lib support than NSubstitute), and Moq (best "other libs support", biggest feature set; downside: mock.Object). We definitely want to move on from RhinoMocks, e.g. because of the unhelpful interaction-test error messages (it should tell me what the parameter actually was when a verification fails).

    So I was pretty surprised the other day (that was yesterday) when I found out RhinoMocks could do a thing where every other mocking framework fails: mocking an autogenerated SomethingService (a typical VS autogenerated service with a default constructor in a partial class). Please don't argue about the design... I intend to write lightweight integration tests (and some unit tests), and I can't mess around with the service; the product is installed on too many customers' systems. See this code:

        // here the NSubstitute and FakeItEasy equivalents throw an exception.. see below
        TicketStoreService fakeTicketStoreService = MockRepository.GenerateMock<TicketStoreService>();
        fakeTicketStoreService.Expect(service => service.DoSomething(Arg.Is(new Guid()))).Return(new Guid());
        fakeTicketStoreService.DoSomething(Arg.Is(new Guid()));
        fakeTicketStoreService.VerifyAllExpectations();

    Note that DoSomething is a non-virtual method call in an autogenerated class, so it shouldn't work, according to common knowledge. But it does. The problem is that it's the only (non-commercial) framework that can do this:

    - Rhino.Mocks works, and verification works too.
    - FakeItEasy says it doesn't find a default constructor (probably just a wrong exception message): "No default constructor was found on the type SomeNamespace.TicketStoreService".
    - Moq gives something sane and understandable: "Invalid setup on a non-virtual (overridable in VB) member: service => service.DoSomething".
    - NSubstitute gives the message "System.NotSupportedException: Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface."

    I'm really wondering what's going on here with the frameworks, except Moq. The "fancy new" frameworks seem to have an initial perf hit too, probably preparing some type cache and serializing stuff, whilst RhinoMocks somehow manages to create a very "slim" mock without recursion. I have to admit I didn't like RhinoMocks very well, but here it shines... unfortunately.

    So, is there a way to get that to work with newer (non-commercial!) mocking frameworks, or somehow get a sane error message out of Rhino.Mocks? And why can Rhino.Mocks achieve this, when clearly every mocking framework states it can only work with virtual methods when given a concrete class? Let's not derail the discussion by talking about alternative approaches like Extract & Override or runtime-proxy mocking frameworks like JustMock/TypeMock/Moles or the new Fakes framework; I know these, but they would be less ideal solutions, for reasons beyond this topic. Any help appreciated...

  • Best method for converting several sets of numbers with several different ratios

    - by C Patton
    I'm working on an open-source harm reduction application for opioid addicts. One of the features in this application is the conversion (in mg/mcg) between common opioids, so people don't overdose by accident. If you're morally against opioid addiction and wouldn't respond because of your morals, please consider that this application is for HARM REDUCTION, so people don't end up dead. I have this data:

        3mg morphine IV      = 10mcg fentanyl IV
        2mg morphine oral    = 1mg oxycodone oral
        3mg morphine oral    = 1mg oxymorphone oral
        7.0mg morphine oral  = 1mg hydromorphone oral
        1mg morphine IV      = .10mg oxymorphone IV
        1mg morphine oral    = 1mg hydrocodone oral
        1mg morphine oral    = 6.67mg codeine oral
        1mg morphine oral    = .10mg methadone oral

    And I have a textbox for the source dosage in mg (a double) that the user can enter. Underneath this, I have radio boxes for the source substance (i.e. morphine) and the destination substance (i.e. oxycodone) for conversion. I've been trying to think of the most efficient way to do this, but nearly everything seems sloppy. If I were to do something like

        public static double MorphinetoOxycodone(string morphineValue)
        {
            double morphine = Double.Parse(morphineValue);
            return (morphine / 2);
        }

    I would also have to make a function for OxycodonetoMorphine, OxycodonetoCodeine, etc., and would end up with dozens of functions. There must be an easier way than this that I'm missing.

    If you'll notice, all of my conversions use morphine as the base value. What might be the easiest way to use the morphine value to convert one opioid to another? For example, if 1mg morphine oral is equal to 1mg hydrocodone, and 1mg morphine oral is equal to .10mg methadone, wouldn't I just multiply 1 * .10 to get the hydrocodone-to-methadone value? Implementing this idea is what I'm having the most trouble with. Any help would be GREATLY appreciated... and if you'd like, I will add your name/nickname to the credits in this program. It's possible that many, many people around the world will use this (I'm translating it into several languages as well), and to know that your work could've helped save an addict from dying... I think that's a great thing :) -cory
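
    A sketch of the morphine-as-pivot idea described above (my assumption of an implementation, not code from the project; oral routes only, with the table values taken straight from the ratios listed): store, for each drug, how many mg of it equal 1mg of oral morphine, then convert source to morphine to destination in one expression.

        using System;
        using System.Collections.Generic;

        public static class OpioidConverter
        {
            // mg of each drug that equals 1 mg of ORAL morphine, per the data above.
            // A real table would key on (drug, route), since IV and oral differ.
            static readonly Dictionary<string, double> PerMgOralMorphine =
                new Dictionary<string, double>(StringComparer.OrdinalIgnoreCase)
            {
                { "morphine",      1.0 },
                { "oxycodone",     0.5 },      // 2 mg morphine = 1 mg oxycodone
                { "oxymorphone",   1.0 / 3 },  // 3 mg morphine = 1 mg oxymorphone
                { "hydromorphone", 1.0 / 7 },  // 7 mg morphine = 1 mg hydromorphone
                { "hydrocodone",   1.0 },      // 1 mg morphine = 1 mg hydrocodone
                { "codeine",       6.67 },     // 1 mg morphine = 6.67 mg codeine
                { "methadone",     0.10 },     // 1 mg morphine = 0.10 mg methadone
            };

            // dose -> morphine equivalent -> destination drug
            public static double Convert(string from, string to, double doseMg)
            {
                return doseMg * PerMgOralMorphine[to] / PerMgOralMorphine[from];
            }
        }

    For example, Convert("morphine", "oxycodone", 10) gives 5, matching the hard-coded MorphinetoOxycodone above, and Convert("hydrocodone", "methadone", 1) gives 0.10, the multiply-through result from the question.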

  • Classes to Entities; Like-class inheritence problems

    - by Stacey
    Beyond work, some friends and I are trying to build a game of sorts. The way we structure some of it works pretty well for a normal object-oriented approach, but as most developers will attest, this does not always translate itself well into a database-persistent approach. This is not the absolute layout of what we have; it is just a sample model given for the sake of representation. The whole project is being done in C# 4.0, and we have every intention of using Entity Framework 4.0 (unless Fluent NHibernate can really offer us something we outright cannot do in EF).

    One of the problems we keep running across is inheriting things in database models. Using the Entity Framework designer, I can draw the same code I have below; but I'm sure it is pretty obvious that it doesn't work like it is expected to.

    To clarify a little bit: 'Items' have bonuses, which can be of anything. Therefore, every part of the game must derive from something similar so that no matter what is 'changed' it is all at a basic enough level to be hooked into. Sounds fairly simple and straightforward, right? So then, we inherit everything that pertains to the game from 'Unit'. Weights, Measures, Random (think like dice, maybe?), and there will be other such entities. Some of them are similar, but in code they will each react differently.

    We're having a really big problem with abstracting this kind of thing into a database model. Without 'Enum' support, it is proving difficult to translate into multiple tables that still share a common listing. One solution we've depicted is to use a 'key ring' type approach, where everything that attaches to a character is stored on a 'Ring' with a 'Key', where each Key has a Value that represents a type. This works functionally, but we've discovered it becomes very sluggish and performs poorly. We also dislike this approach because it begins to feel as if everything is 'dumped' into one class, which makes management and logical structure difficult to adhere to.

    I was hoping someone else might have some ideas on what I could do with this problem. It's really driving me up the wall. To summarize: the goal is to build a type (Unit) that can be used as a base type (Table per Type) for generic reference across a relatively global scope, without having to dump everything into a single collection. I can use an Interface to determine actual behavior, so that isn't too big of an issue. This is 'roughly' the same idea expressed in the Entity Framework.

  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom WebLogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log-on page for all the applications within the domain.

    When we access any secured URLs by entering them in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE, an odd thing happens. If I click the bookmark, you will see our log-on page; then, after you've successfully logged into the system, the basic auth page will display, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent: 1 time out of 5, IE will correctly redirect and not show the basic auth window. Firefox and Opera will correctly redirect every time. We've captured the response headers and compared the successes and failures; they are identical.

        final boolean isAuthenticated = authenticateUser(userName, password, req);

        // Send user on to the original URL
        if (isAuthenticated) {
            res.sendRedirect(targetURL);
            return;
        }

    As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documents:

        private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) {
            boolean results;
            try {
                ServletAuthentication.login(new CallbackHandler() {
                    @Override
                    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                        for (Callback callback : callbacks) {
                            if (callback instanceof NameCallback) {
                                NameCallback nameCallback = (NameCallback) callback;
                                nameCallback.setName(userName);
                            }
                            if (callback instanceof PasswordCallback) {
                                PasswordCallback passwordCallback = (PasswordCallback) callback;
                                passwordCallback.setPassword(password.toCharArray());
                            }
                        }
                    }
                }, request);
                results = true;
            } catch (LoginException e) {
                results = false;
            }
            return results;
        }

    I am asking the question here because I don't know if the issue is with the WebLogic config or the code. If this question is more suited to ServerFault, please let me know and I will post there. It is odd that it works every time in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer were an option, but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments, and I can still reproduce the bug.

  • XSD: xs:sequence & xs:choice combination for xs:extension elements?

    - by bguiz
    Hi, my question is about defining an XML schema that will validate the following sample XML:

        <rules>
            <other>...</other>
            <bool>...</bool>
            <other>...</other>
            <string>...</string>
            <other>...</other>
        </rules>

    The order of the child nodes does not matter. The cardinality of the child nodes is 0..unbounded. All the child elements of the rules node have a common base type, rule, like so:

        <xs:complexType name="booleanRule">
            <xs:complexContent>
                <xs:extension base="rule">
                    ...
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>

        <xs:complexType name="stringFilterRule">
            <xs:complexContent>
                <xs:extension base="filterRule">
                    ...
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>

    My current attempt at defining the schema for the rules node is below. However:

    - Can I nest xs:choice within xs:sequence? If so, where do I specify the maxOccurs="unbounded" attribute?
    - Is there a better way to do this, such as an xs:sequence which specifies only the base type of its child elements?

        <xs:element name="rules">
            <xs:complexType>
                <xs:sequence>
                    <xs:choice>
                        <xs:element name="bool" type="booleanRule" />
                        <xs:element name="string" type="stringRule" />
                        <xs:element name="other" type="someOtherRule" />
                    </xs:choice>
                </xs:sequence>
            </xs:complexType>
        </xs:element>

  • Function Point Analysis -- a seriously overestimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.

    1. First, some data

    This question is based on a tutorial. It had a "Sample Count" section where the process was demonstrated step by step. You can see some screenshots of the sample application here. In the end, the author calculated the unadjusted FP to be 99.

    There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).

    2. Reality check!?

    Now just check out the screenshots again and do a little math here:

        99 * 2 = 198 hours
        198 hours / 40 hours per week = 5 weeks

    Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I'm not even saying weekend) to have it completed?

    Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25:

        198 * 7.25 = $1435.50

    From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets).

    (For the record, that same web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats:

        99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months!
        415 hours * $7.25 = $3000 zomg

    That's even assuming that all our poor coders get the minimum wage!)

    3. Am I missing something here?

    Right now, I can come up with several possible explanations:

    - FPA is really only suited for bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scale.
    - The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hours/FP or so. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, which is not really common.)

    From my experience with several software metrics, Function Point is really not a lightweight metric. If the hours/FP thing fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:

    - What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
    - What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
    - What kind of hardware will I need for this?
    - Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
    - How often should we make this kind of build?
    - How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
    - Is there anything else I'm not seeing here?

    I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.

    EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX. Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.

  • How do I display a jquery dialog box before the entire page is loaded?

    - by obarshay
    On my site a number of operations can take a long time to complete. When I know a page will take a while to load, I would like to display a progress indicator while the page is loading. Ideally I would like to say something along the lines of:

        $("#dialog").show("progress.php");

    and have that overlay on top of the page that is being loaded (disappearing after the operation is completed). Coding the progress bar and displaying progress is not an issue; the issue is getting a progress indicator to pop up WHILE the page is being loaded. I have been trying to use jQuery's dialogs for this, but they only appear after the page is already loaded. This has to be a common problem, but I am not familiar enough with JavaScript to know the best way to do this.

    Here's a simple example to illustrate the problem. The code below fails to display the dialog box before the 20-second pause is up. I have tried in Chrome and Firefox. In fact, I don't even see the "Please Wait..." text. Here's the code I am using:

        <html>
        <head>
            <link type="text/css" href="http://jqueryui.com/latest/themes/base/ui.all.css" rel="stylesheet" />
            <script type="text/javascript" src="http://jqueryui.com/latest/jquery-1.3.2.js"></script>
            <script type="text/javascript" src="http://jqueryui.com/latest/ui/ui.core.js"></script>
            <script type="text/javascript" src="http://jqueryui.com/latest/ui/ui.dialog.js"></script>
        </head>
        <body>
            <div id="please-wait">My Dialog</div>
            <script type="text/javascript">
                $("#please-wait").dialog();
            </script>
            <?php
                flush();
                echo "Waiting...";
                sleep(20);
            ?>
        </body>
        </html>

  • Is the recent trend toward widescreen (16:9) computer monitors a plus or minus for programmers?

    - by DanM
    It's almost gotten to the point where you can't buy a conventional (4:3) monitor anymore. Pretty much everything is widescreen. This is fine for watching movies or TV, but is it good or bad for programming?

    My initial thoughts on the issue are that widescreens are a net negative for programmers. Here are some of the disadvantages I see:

    Poor space utilization. One disadvantage of widescreens you can't argue with is that they offer poor space utilization for the amount of total pixels you get. For example, my ThinkPad, which I bought just before the widescreen craze, has a 15" monitor with a native resolution of 1600 x 1200. The newer 15.4" ThinkPads run at most 1680 x 1050. So (if you do the math) you get fewer pixels in a wider (but not shorter) package. With desktop monitors, you pay a price in terms of desk space used. Two 1680 x 1050 monitors will simply take up more of your desk than two 1600 x 1200 monitors (assuming equal dot pitch).

    More scrolling. If you compare a 1680 x 1050 monitor to a 1600 x 1200 monitor, you get 80 extra pixels of width but 150 fewer pixels of height. The height reduction means you lose approximately 11 lines of code. That's less you can see on the screen at one time and more scrolling you have to do. This harms productivity, maybe not dramatically, but insidiously.

    Less room for wide panels. Widescreens also mean you lose space for wide but short panels common in programming environments. If you use Visual Studio, for example, your code window will be that much shorter when viewing the Find Results, Task List, or Error List (all of which I use frequently). This isn't to say the 80 pixels of extra width you get with widescreen would never be useful, but I tend to keep my lines of code short, so seeing more lines would be more valuable to me than seeing fewer, longer lines.

    What do you think? Do you agree/disagree? Are you now using one or more widescreen monitors for development? What resolution are you running on each? Do you ever miss the height of the traditional 4:3 monitor? Would you complain if your monitors were one inch narrower but two inches taller?

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; I'm currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance.
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate).
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not.
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors.
    - I don't need all the formatting facilities of stream, or possibly even the buffering of streambuf.
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice.

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations for a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream
        {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream
        {
        public:
            boost_istream(const T& device) : m_device(device)
            {
            }

            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way)
            {
                return boost::iostreams::seek(m_device, off, way);
            }

            virtual std::streamsize read(char* s, std::streamsize n)
            {
                return boost::iostreams::read(m_device, s, n);
            }

            virtual void close()
            {
                boost::iostreams::close(m_device);
            }

        private:
            T m_device;
        };

  • XML - how to use namespace prefixes

    - by Asbie
    I have this XML at http://localhost/file.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <val:Root xmlns:val="http://www.hw-group.com/XMLSchema/ste/values.xsd">
            <Agent>
                <Version>2.0.3</Version>
                <XmlVer>1.01</XmlVer>
                <DeviceName>HWg-STE</DeviceName>
                <Model>33</Model>
                <vendor_id>0</vendor_id>
                <MAC>00:0A:DA:01:DA:DA</MAC>
                <IP>192.168.1.1</IP>
                <MASK>255.255.255.0</MASK>
                <sys_name>HWg-STE</sys_name>
                <sys_location/>
                <sys_contact>
                    HWg-STE:For more information try http://www.hw-group.com
                </sys_contact>
            </Agent>
            <SenSet>
                <Entry>
                    <ID>215</ID>
                    <Name>Home</Name>
                    <Units>C</Units>
                    <Value>27.7</Value>
                    <Min>10.0</Min>
                    <Max>40.0</Max>
                    <Hyst>0.0</Hyst>
                    <EmailSMS>1</EmailSMS>
                    <State>1</State>
                </Entry>
            </SenSet>
        </val:Root>

    I am trying to read this from my C# code:

        static void Main(string[] args)
        {
            var xmlDoc = new XmlDocument();
            xmlDoc.Load("http://localhost/file.xml");
            XmlElement root = xmlDoc.DocumentElement;

            // Create an XmlNamespaceManager to resolve the default namespace.
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(xmlDoc.NameTable);
            nsmgr.AddNamespace("val", "http://www.hw-group.com/XMLSchema/ste/values.xsd");

            XmlNodeList nodes = root.SelectNodes("/val:SenSet/val:Entry");
            foreach (XmlNode node in nodes)
            {
                string name = node["Name"].InnerText;
                string value = node["Value"].InnerText;
                Console.Write("name\t{0}\tvalue\t{1}", name, value);
            }
            Console.ReadKey();
        }

    The problem is that the node list is empty. I understand this is a common newbie problem when reading XML; still, I'm not able to solve what I am doing wrong, probably something with the namespace "val"?
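
    As a hedged aside (my reading of the document, not part of the original post): in this XML only the root element carries the val namespace; Agent, SenSet and Entry have no namespace at all, since no default xmlns is declared. Under that assumption, a query shaped like the following would be the kind of thing to experiment with (note that the namespace manager must also be passed to SelectNodes):

        // Sketch: prefix only the root element, and hand nsmgr to SelectNodes.
        XmlNodeList nodes = root.SelectNodes("/val:Root/SenSet/Entry", nsmgr);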

  • Question about Architecture for Viewing Images in ASP.NET MVC App

    - by Charlie Flowers
    I have an approach in mind for an image viewer in a web app, and want to get a sanity check and any thoughts you stackoverflowers might have. Here's the whirlwind nutshell summary:

    I'm working on an ASP.NET MVC application that will run in my company's retail stores. Even though it is a web application, we own the store machines and have control over them. We have a "windows agent" running on the store machine which we can talk to from the browser via javascript (it is a WCF service, and our web app has permission to talk to it from the browser). One of the web pages needs to be an "image viewer" page with some common things like Rotate & Zoom.

    Now, there are some WebForms controls that offer Rotate and Zoom. However, they take up server resources and generate a good bit of traffic between the server and the browser. For example, the Rotate function would cause an ajax call to the server, which would then generate a new image written to a .NET Canvas object, which would then be written to a file on the server, which would then be returned from the ajax call and refreshed inside the browser. Normally, that's a pretty good way of doing things. But in our case, we have code running on the store machine that we can communicate with. This leads me to consider the following approach:

    - When the user asks to view an image, we tell our "windows agent" to download it from our image server to the store machine. We then redirect our browser to our image viewer page, which will pull the image from the local file we just wrote to the store machine.
    - When the user clicks "Rotate", we cause JavaScript code in the browser to call our "windows agent" software, asking it to perform the "Rotate" function. The "windows agent" does the rotation using the same kind of imaging control that would formerly have been used on the server, but it does so now on the store machine. Javascript in the browser then refreshes the image on the page to show the newly rotated image.
    - Zoom and similar features would be implemented the same way.

    This seems to be much more efficient, scalable, and responsive for the end-users. However, I've never heard of anything like it being done, mostly because it's rare to have this combination of a web app plus a "windows agent" on the client machine.

    What do you think? Feasible? Reasonable? Any pitfalls I overlooked or improvements / suggestions you can see? Has anyone done anything like this who would like to offer the wisdom of experience? Thanks!
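
    For concreteness, a minimal sketch of what the store-machine agent's contract might look like (purely my assumption; the service, operation and parameter names are invented, not from the poster's system). The browser-side JavaScript would call operations like these on the locally hosted WCF endpoint and then re-request the local image file:

        using System.ServiceModel;

        [ServiceContract]
        public interface IStoreImageAgent // hypothetical contract
        {
            // Pull the image from the central image server down to the store machine.
            [OperationContract]
            void DownloadImage(string imageUrl, string localPath);

            // Rotate the locally cached image in place; the page then reloads it.
            [OperationContract]
            void Rotate(string localPath, int degrees);

            // Re-render the locally cached image at a new zoom level.
            [OperationContract]
            void Zoom(string localPath, double scaleFactor);
        }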

  • What are the weaknesses of this user authentication method?

    - by byronh
    I'm developing my own PHP framework. It seems all the security articles I have read use vastly different methods for user authentication than I do, so I could use some help in finding security holes. Some information that might be useful before I start:

    - I use mod_rewrite for my MVC URLs.
    - Passwords are sha1 and md5 encrypted with a 24-character salt unique to each user.
    - mysql_real_escape_string and/or variable typecasting on everything going in, and htmlspecialchars on everything coming out.

    Step-by-step process:

    1. Top of every page: session_start(); session_regenerate_id();
    2. If the user logs in via the login form, generate a new random token to put in the user's MySQL row. A hash is generated based on the user's salt (from when they first registered) and the new token. Store the hash and plaintext username in session variables, and duplicate them in cookies if 'Remember me' is checked.
    3. On every page, check for cookies. If cookies are set, copy their values into session variables. Then compare $_SESSION['name'] and $_SESSION['hash'] against the MySQL database. Destroy all cookies and session variables if they don't match, so the user has to log in again.
    4. If the login is valid, some of the user's information from the MySQL database is stored in an array for easy access. So far, I've assumed that this array is clean, so when limiting user access I refer to user.rank and deny access if it's below what's required for that page.

    I've tried to test all the common attacks like XSS and CSRF, but maybe I'm just not good enough at hacking my own site! My system seems way too simple for it to actually be secure (the security code is only 100 lines long). What am I missing?

    I've also spent a lot of time searching for the vulnerabilities with mysql_real_escape_string, but I haven't found any information that is up to date (everything is from several years ago at least and has apparently been fixed). All I know is that the problem was something to do with encoding. If that problem still exists today, how can I avoid it? Any help will be much appreciated.

  • C++ adding friend to a template class in order to typecast

    - by user1835359
    I'm currently reading "Effective C++" and there is a chapter that contains code similar to this:

        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
        };

        template <typename T>
        Num<T> operator*(const Num<T>& lhs, const Num<T>& rhs) { ... }

        Num<int> n = 5 * Num<int>(10);

    The book says that this won't work (and indeed it doesn't) because you can't expect the compiler to use implicit type conversion to specialize a template. As a solution, it is suggested to use the "friend" syntax to define the function inside the class:

        // It works
        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
            friend Num operator*(const Num& lhs, const Num& rhs) { ... }
        };

        Num<int> n = 5 * Num<int>(10);

    And the book suggests using this friend declaration whenever I need implicit conversion to a template class type. It all seems to make sense. But why can't I get the same example working with a common function, not an operator?

        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
            friend void doFoo(const Num& lhs) { ... }
        };

        doFoo(5);

    This time the compiler complains that it can't find any 'doFoo' at all. And if I declare doFoo outside the class, I get the reasonable mismatched-types error. It seems like the "friend ..." part is just being ignored. So is there a problem with my understanding? What is the difference between a function and an operator in this case?

  • Separate specific #ifdef branches

    - by detly
    In short: I want to generate two different source trees from the current one, based only on one preprocessor macro being defined and another being undefined, with no other changes to the source. If you are interested, here is my story...

    In the beginning, my code was clean. Then we made a new product, and yea, it was better. But the code saw only the same peripheral devices, so we could keep the same code. Well, almost. There was one little condition that needed to be changed, so I added:

        #if defined(PRODUCT_A)
            condition = checkCat();
        #elif defined(PRODUCT_B)
            condition = checkCat() && checkHat();
        #endif

    ...to one and only one source file. In the general all-source-files-include-this header file, I had:

        #if !(defined(PRODUCT_A)||defined(PRODUCT_B))
        #error "Don't make me replace you with a small shell script. RTFM."
        #endif

    ...so that people couldn't compile it unless they explicitly defined a product type. All was well. Oh... except that modifications were made, components changed, and since the new hardware worked better we could significantly re-write the control systems. Now when I look upon the face of the code, there are more than 60 separate areas delineated by either:

        #ifdef PRODUCT_A
        ...
        #else
        ...
        #endif

    ...or the same, but for PRODUCT_B. Or even:

        #if defined(PRODUCT_A)
        ...
        #elif defined(PRODUCT_B)
        ...
        #endif

    And of course, sometimes sanity took a longer holiday and:

        #ifdef PRODUCT_A
        ...
        #endif
        #ifdef PRODUCT_B
        ...
        #endif

    These conditions wrap anywhere from one to two hundred lines (you'd think that the last one could be done by switching header files, but the function names need to be the same). This is insane. I would be better off maintaining two separate product-based branches in the source repo and porting any common changes. I realise this now.

    Is there something that can generate the two different source trees I need, based only on PRODUCT_A being defined and PRODUCT_B being undefined (and vice-versa), without touching anything else (i.e. no header inclusion, no macro expansion, etc.)?

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: the most comprehensive and well-organized set of links I've seen, but not much in the way of explanation.

    Options

    Ones I am aware of, and their general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve).
    - Use Common Table Expressions in those databases that support them to traverse.

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko - covered in depth in his book "Trees and Hierarchies in SQL for Smarties".
    - Columns: Left, Right
    - Cheap level, ancestry, descendants.
    - Compared to Adjacency List, moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - Combination of Nested Sets and Materialized Path where left/right columns are floating-point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; some good ideas about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order).
    - For complete knowledge of a hierarchy, needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive move and delete.
    - Cheap ancestry and descendants.
    - Good use: threaded discussion - forums / blog comments.

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limit to how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')
    - Ancestry tricky (database-specific queries).

    Database-Specific Notes

    MySQL: use session variables for Adjacency List.
    Oracle: use CONNECT BY to traverse Adjacency Lists.
    PostgreSQL: ltree datatype for Materialized Path.
    SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void-pointers to these generic modules, together with an identification of the type, like this:

        Car myCar;
        ShowNiceDialog((void *)&myCar, DATATYPE_CAR);

    The 'ShowNiceDialog' method now uses meta-information (functions that map DATATYPE_CAR to interfaces to get the actual data out of Car) to get information about the car, based on the given data type. That way, the generic logic only has to be written once, and not every time again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this:

        class RootClass
        {
        public:
            virtual string getName() const = 0;
        };

        class Car : public RootClass { ... };

        void ShowNiceDialog(RootClass *root);

    The problem is that in some cases, we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with 2 characteristics:

    - A quantity (double, 8 bytes)
    - A boolean (1 byte)

    Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of the padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory).

    The void-pointer approach has the advantage that we can encode almost anything in a void-pointer and decide how to use it if we want information from it (use it as a real pointer, as an index, ...), but at the cost of type safety. Templated solutions don't help, since the generic logic forms quite a big part of the application, and we don't want to templatize all of it. Additionally, the data model can be extended at run time, which also means that templates won't help.

    Are there better (and type-safer) ways to handle this than a void-pointer? Any references to frameworks, whitepapers, or research material regarding this?

  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT-based application framework. I have some ideas about them, but being new to the field of web development, I feel they are far from ideal. I'd appreciate a few comments and suggestions in this regard. Here are my questions:

    1. I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iFrame postMessage. All these third-party modules are going to use our SDK, which is also GWT-based. The problem is that even though all the modules will be using the same codebase, there is going to be a massive overhead in the amount of duplicate JavaScript code (i.e. our common SDK codebase, which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of the duplicate code but also due to the fact that subsequent updates of the SDK would require the modules to be recompiled, which is going to create a DLL-hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way I can have some static GWT code (i.e. the SDK) that the dynamic GWT module refers to (even if it lies on a different domain) and have it all work happily?

    2. The other part of the problem lies in providing a robust scripting front end to the SDK. At first this appears to be trivial, since JavaScript itself is a scripting language. However, I do not know how to load and call a piece of pure JavaScript code at runtime. I am willing to put restrictions on the target JavaScript (i.e. having a function main and a unique namespace or something). Furthermore, the JavaScript will come as a string from a database, not as a full URL. If it's doable in JavaScript, how does one get this right in GWT, i.e. forcing the compiler to emit a certain function in the generated JavaScript? This, I believe, can be made less of a problem by having a stub JavaScript with all the right requirements which just loads up a GWT-generated JavaScript. Is any of this possible at all?

    I generally hate to be this verbose, but I hope to find a quick solution to the problem, as it's holding up my progress. I'd highly appreciate any comments, suggestions and experiences.
