Search Results

Search found 13955 results on 559 pages for 'easy angel'.

  • Invert linear linked list

    - by ArtWorkAD
    Hi, a linear linked list is a set of nodes. This is how a node is defined (to keep it easy we do not distinguish between node and list):

        class Node {
            Object data;
            Node link;

            public Node(Object pData, Node pLink) {
                this.data = pData;
                this.link = pLink;
            }

            public String toString() {
                if (this.link != null) {
                    return this.data.toString() + this.link.toString();
                } else {
                    return this.data.toString();
                }
            }

            public void inc() {
                this.data = new Integer((Integer) this.data + 1);
            }

            public void lappend(Node list) {
                Node child = this.link;
                while (child != null) {
                    child = child.link;
                }
                child.link = list;
            }

            public Node copy() {
                if (this.link != null) {
                    return new Node(new Integer((Integer) this.data), this.link.copy());
                } else {
                    return new Node(new Integer((Integer) this.data), null);
                }
            }

            public Node invert() {
                Node child = this.link;
                while (child != null) {
                    child = child.link;
                }
                child.link = this; ....
            }
        }

    I am able to make a deep copy of the list. Now I want to invert the list so that the first node is the last and the last the first. The inverted list has to be a deep copy. I started developing the invert function but I am not sure. Any ideas? Update: maybe there is a recursive way, since the linear linked list is a recursive data structure. I would take the first element, iterate through the list until I get to a node that has no child and append the first element; I would repeat this for the second, third, ...
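
    One possible way to finish invert() is a single forward pass that prepends a copy of each node to the result, so the last original node ends up at the head. This is only a sketch, and it assumes Integer data just like the copy() method above:

        public Node invert() {
            Node inverted = null;      // head of the inverted deep copy built so far
            Node current = this;       // walk the original list from front to back
            while (current != null) {
                // copy the data and link the new node in front of what is already built
                inverted = new Node(new Integer((Integer) current.data), inverted);
                current = current.link;
            }
            return inverted;
        }

    The recursive variant hinted at in the update (invert the tail, then append the head to it) also works, but every append has to walk to the end of the partial result, which makes it quadratic; the single pass above stays linear.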

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v)})
                            <|> (do { v <- float; return (FPLiteral v)});
                       return (Literal x) }

    will not work: inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.

  • forks in C - exercise

    - by Zka
    I'm trying to review and learn more advanced uses and options when cutting trees with forks in the jungle of C. But foolishly I've found an example which should be very easy, as I have worked with forks before and even written some code, but I can't understand it fully. Here it is:

        main() {
            if (fork() == 0) {
                if (fork() == 0) {
                    printf("3");
                } else if ((wait(NULL)) > 0) {
                    printf("2");
                }
            } else {
                if (fork() == 0) {
                    printf("1");
                    exit(0);
                }
                if (fork() == 0) {
                    printf("4");
                }
            }
            printf("0");
            return 0;
        }

    Possible solutions are:

      1. 3201040
      2. 3104200
      3. 1040302
      4. 4321000
      5. 4030201
      6. 1403020

    where 2, 5 and 6 are the correct answers. First of all, shouldn't there be four zeroes in the output? Second... how does one come to the solution at all? I've been doing this on paper for almost an hour and I'm not even close to understanding why the given solutions are more correct than the false ones (except for nr. 3, as it can't end with 2 since a 0 must follow). Can anyone with their forks in check offer a good explanation?

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'Balls and Bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):

      - Probability that a bin is empty, for large n, is 1/e.
      - Expected number of empty bins is n/e.
      - Probability that a bin has k collisions is <= 1/k!.
      - Probability that a bin has at least k collisions is <= (e/k)**k.

    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log - usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or base 2? Is this max-load formula right, and how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, "with high probability" seems to mean 1 - 1/n.
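
    For what it's worth, the 3*ln(n) / ln(ln(n)) form is normally stated with natural logarithms; inside the O() the choice of base only shifts lower-order terms. The bound is also easy to sanity-check empirically. A minimal sketch in Java, using java.util.Random as a stand-in for the hash function under test (swap in hash(key) % n to test a real one):

        import java.util.Random;

        public class MaxLoadCheck {
            public static void main(String[] args) {
                int n = 1000000;                   // number of bins == number of hashed values
                int[] bins = new int[n];
                Random rnd = new Random();
                for (int i = 0; i < n; i++) {
                    bins[rnd.nextInt(n)]++;        // stand-in for hash(key) % n
                }
                int maxLoad = 0, empty = 0;
                for (int load : bins) {
                    maxLoad = Math.max(maxLoad, load);
                    if (load == 0) empty++;
                }
                double bound = 3 * Math.log(n) / Math.log(Math.log(n));   // Math.log is ln
                System.out.printf("empty bins: %d (expected about n/e = %.0f)%n", empty, n / Math.E);
                System.out.printf("max load:   %d (3 ln n / ln ln n = %.2f)%n", maxLoad, bound);
            }
        }

    Repeating the run a few times gives a feel for the "with high probability" part: for n around a million the fullest bin typically holds about 9 items, comfortably under the bound of roughly 15.8 for this n.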

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?

  • Windows Batch file to echo a specific line number

    - by Lee
    So for the second part of my current dilemma, I have a list of folders in c:\file_list.txt. I need to be able to extract them (well, echo them with some mods) based on the line number, because this batch script is being called by an iterative macro process. I'm passing the line number as a parameter.

        @echo off
        setlocal enabledelayedexpansion
        set /a counter=0
        set /a %%a = ""
        for /f "usebackq delims=" %%a in (c:\file_list.txt) do (
            if "!counter!"=="%1" goto :printme & set /a counter+=1
        )
        :printme
        echo %%a

    which gives me an output of %a. Doh! So, I've tried echoing !a! (result: ECHO is off.); I've tried echoing %a (result: a). I figured the easy thing to do would be to modify the head.bat code found here: http://stackoverflow.com/questions/130116/dos-batch-commands-to-read-first-line-from-text-file except rather than echoing every line - I'd just echo the last line found. Not as simple as one might think. I've noticed that my counter is staying at zero for some reason; I'm wondering if the set /a counter+=1 is doing what I think it's doing.

  • Custom DateTime model binder in Asp.net MVC

    - by Robert Koritnik
    I would like to write my own model binder for the DateTime type. First of all I'd like to write a new attribute that I can attach to my model property, like:

        [DateTimeFormat("d.M.yyyy")]
        public DateTime Birth { get; set; }

    This is the easy part. But the binder part is a bit more difficult. I would like to add a new model binder for type DateTime. I can either:

      - implement the IModelBinder interface and write my own BindModel() method, or
      - inherit from DefaultModelBinder and override the BindModel() method.

    My model has a property as seen above (Birth). So when the model tries to bind request data to this property, my model binder's BindModel(controllerContext, bindingContext) gets invoked. Everything is OK, but: how do I get the property attributes from controller/bindingContext, to parse my date correctly? How can I get to the PropertyDescriptor of the property Birth? Edit: because of separation of concerns, my model class is defined in an assembly that doesn't (and shouldn't) reference the System.Web.Mvc assembly. Setting custom binding attributes (similar to Scott Hanselman's example) is a no-go here.

  • WPF tree data binding

    - by Am
    Hi, I have a well-defined tree repository where I can rename items, move them up and down, add new items and delete them, etc. The data is stored in a table as follows:

        Index  Parent  Label    Left  Right
        1      0       root     1     14
        2      1       food     2     7
        3      2       cake     3     4
        4      2       pie      5     6
        5      1       flowers  8     13
        6      5       roses    9     10
        7      5       violets  11    12

    representing the following tree (with the left/right values shown around each label):

        (1) root (14)
          (2) food (7)
            (3) cake (4)
            (5) pie (6)
          (8) flowers (13)
            (9) roses (10)
            (11) violets (12)

    or simply:

        root
          food
            cake
            pie
          flowers
            roses
            violets

    Now, my problem is how to represent this in a bindable way, so that a TreeView can handle all the possible data changes. Renaming is easy: all I need is to make the label an updatable field. But what if a user moves flowers above food? I can make the relevant data changes, but they cause a complete data change to all other items in the tree. And all the examples I found of bindable hierarchies are good for non-static trees. So my current (and bad) solution is to reload the displayed tree after a relocation change. Any direction will be good. Thanks

  • Navigating to nodes using xpath in flat structure

    - by James Berry
    I have an XML file in a flat structure. We do not control the format of this XML file; we just have to deal with it. I've renamed the fields because they are highly domain-specific and don't really make any difference to the problem.

        <attribute name="Title">Book A</attribute>
        <attribute name="Code">1</attribute>
        <attribute name="Author">
            <value>James Berry</value>
            <value>John Smith</value>
        </attribute>
        <attribute name="Title">Book B</attribute>
        <attribute name="Code">2</attribute>
        <attribute name="Title">Book C</attribute>
        <attribute name="Code">3</attribute>
        <attribute name="Author">
            <value>James Berry</value>
        </attribute>

    Key things to note: the file is not particularly hierarchical. Books are delimited by an occurrence of an attribute element with name='Title'. But the name='Author' attribute node is optional. Is there a simple XPath statement I can use to find the authors of book 'n'? It is easy to identify the title of book 'n', but the authors value is optional, and you can't just take the following author because in the case of book 2, this would give the author for book 3. I have written a state machine to parse this as a series of elements, but I can't help thinking there would have been a way to directly get the results that I want.
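
    One XPath approach that avoids the "next Author might belong to the next book" trap is to anchor on the Author element and require that its nearest preceding Title sibling is the book in question. A sketch with the standard JAXP XPath API; it assumes the fragment above has been wrapped in a single root element (called attributes here) so that it parses as well-formed XML:

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class AuthorLookup {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse("books.xml");                 // hypothetical file name
                XPath xpath = XPathFactory.newInstance().newXPath();
                // the Author whose *nearest* preceding Title sibling is 'Book A'
                String expr = "//attribute[@name='Author']"
                            + "[preceding-sibling::attribute[@name='Title'][1] = 'Book A']"
                            + "/value";
                NodeList authors = (NodeList) xpath.evaluate(expr, doc, XPathConstants.NODESET);
                for (int i = 0; i < authors.getLength(); i++) {
                    System.out.println(authors.item(i).getTextContent());   // James Berry, John Smith
                }
            }
        }

    With the sample data this prints both authors for Book A, one author for Book C, and nothing for Book B, since the only Author element after Book B has Book C as its nearest preceding Title.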

  • Are short tags *that* bad?

    - by Col. Shrapnel
    Everyone here on SO says they're bad, for 3 main reasons:

      - XML is used everywhere
      - you can't control where your script is going to be run
      - short tags are going to be removed in PHP6

    But let's look closer at them. The last one is easy - it's just not true. XML is not really a problem either: if you want to use short tags, it won't be a problem for you to write a single tag using a PHP echo, <?="<?XML...?>"?>. Anyway, why not leave such a trifling thing to one's own judgment? PHP configuration access - oh yes, the only thing that can really be considered a reason. Of course, that's in case you plan to spread your code widely. But if you don't? I think most scripts are written not for wide distribution, but just for one place. If you can use short tags in that place - why abandon them? Anyway, it's only the template we are talking about. If you don't like short tags, PHP's native templating is probably not for you; why not try Smarty then? Well, the question is: are there any other real reasons to abandon short tags and make avoiding them a strict recommendation? Or, as it was said, is it better to leave short tags alone?

  • Moq, a translator and an expression

    - by jeriley
    I'm working with an expression within a moq-ed "Get Service" and ran into a rather annoying issue. In order to get this test to run correctly and the get service to return what it should, there's a translator in between that takes what you've asked for, sends it off and gets what you -really- want. So, thinking this was easy, I attempt this... the fakeList holds the TEntity objects (translated, used by the UI) and TEnterpriseObject is the actual persistence.

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEnterpriseObject, bool>>>())).Returns(
            (Expression<Func<TEnterpriseObject, bool>> expression) =>
            {
                var items = new List<TEnterpriseObject>();
                var translator = (IEntityTranslator<TEntity, TEnterpriseObject>)
                    ObjectFactory.GetInstance(typeof (IEntityTranslator<TEntity, TEnterpriseObject>));
                fakeList.ForEach(fake => items.Add(translator.ToEnterpriseObject(fake)));
                items = items.Where(expression);
                var result = new List<TEnterpriseObject>(items);
                fakeList.Clear();
                result.ForEach(item => translator.ToEntity(item));
                return items;
            });

    I'm getting the red squigglie under the items.Where(expression) - it says it can't be inferred from usage (confused between <Func<TEnterpriseObject,bool>> and <Func<TEnterpriseObject,int,bool>>). A far simpler version works great...

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEntity, bool>>>())).Returns(
            (Expression<Func<TEntity, bool>> expression) => fakeList.AsQueryable().Where(expression));

    so I'm not sure what I'm missing... ideas?

  • Javascript : Submitting a form outside the actual form doesn't work

    - by Ben Fransen
    Hello all, I'm trying to achieve a fairly easy triggering mechanism for deleting multiple items from a table grid. If a user has enough access he/she is able to delete multiple users from a table. In the table I have set up checkboxes, one per row/user. The name of the checkboxes is UsersToDeletep[], and the value per row is the unique UserID. When a user clicks the button 'Delete selected users' a simple validation takes place to make sure at least one checkbox is selected. After that I call my simple function Submit(form). The function works perfectly when called within the form-tags, where I also use it to delete a single user. The function:

        function Submit(form) {
            document.forms[form].submit();
        }

    I've also alerted document.forms[form]. The result is, as expected, [object HTMLFormElement]. But for some reason the form just won't submit and a page reload takes place. I'm a bit confused and can't seem to figure out what I'm doing wrong. Can anyone point me in the right direction? Thanks in advance! Ben

  • Is this an example of LINQ-to-SQL?

    - by Edward Tanguay
    I made a little WPF application with a SQL CE database. I built the following code with LINQ to get data out of the database, which was surprisingly easy. So I thought "this must be LINQ-to-SQL". Then I did "add item" and added a "LINQ-to-SQL classes" .dbml file, dragged my table onto the Object Relational Designer, but it said, "The selected object uses an unsupported data provider." So then I questioned whether or not the following code actually is LINQ-to-SQL, since it indeed allows me to access data from my SQL CE database file, yet officially "LINQ-to-SQL" seems to be unsupported for SQL CE. So is the following "LINQ-to-SQL" or not?

        using System.Linq;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Windows;

        namespace TestLinq22
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                    MainDB db = new MainDB(@"Data Source=App_Data\Main.sdf");
                    var customers = from c in db.Customers
                                    select new {c.FirstName, c.LastName};
                    TheListBox.ItemsSource = customers;
                }
            }

            [Database(Name = "MainDB")]
            public class MainDB : DataContext
            {
                public MainDB(string connection) : base(connection) { }
                public Table<Customers> Customers;
            }

            [Table(Name = "Customers")]
            public class Customers
            {
                [Column(DbType = "varchar(100)")]
                public string FirstName;
                [Column(DbType = "varchar(100)")]
                public string LastName;
            }
        }

  • Rhino Commons and Rhino Mocks Reference Documents?

    - by Ogre Psalm33
    OK, is it just me, or does there seem to be a lack of (easy to find) reference documentation for Rhino Commons and Rhino Mocks? My coworkers have started using Rhino Mocks and Rhino Commons (particularly the NHibernate stuff), and I found a few tutorial-ish examples, which were good. But when I see them making use of a class in their code - let's pick something like Rhino.Commons.NHRepository, for example - I have been having a hard time just finding someplace on the web that tells me what Rhino.Commons.NHRepository is or what it does. I like to learn by looking at real examples, but using this approach, it's very handy to look at the full docs for a class, instead of just the current context. Similarly, I saw IaMockedRepository.Expect(...) being used in some code, but it took me forever to finally find this page that explains the AAA syntax for Rhino Mocks, which made it clear to me. I've found the Ayende.com wiki on Rhino Commons, but that seems to have a number of broken links. To me, the Rhino libraries seem like a great set of libraries in need of some desperate community help in the documentation area (of course, as we all know, documentation is not the forte of most coders, and incomplete docs are all too common). Does anyone know if this is something in the works, or someplace where some volunteer documenters are needed, or are there some great reference docs for Rhino Mocks and Rhino Commons out there that I have somehow missed?

  • How to create database records for a financial year

    - by David A Gibson
    Hello, I need to create entities (specifically contracts) in a database table that are associated with a financial year. These contracts will be applied to projects. Any contract variation will be recorded by creating a new contract record for the same financial year, but the original will remain associated with the project as history. The projects can last several years, and so at any one time a project will have a live contract record for each year as well as any number of historic contracts for that year. All of which is incidental, but I'm trying to provide some context. If the contracts were for the calendar year it would be easy - I'd just store the year either as a date field with the 1st of January or just as an integer. However, the contracts run for financial years, and I don't know how to approach this. I don't want a separate table containing the financial years, as I don't want the users to have to maintain this. I don't want to store the financial year as a string "2009/2010", as this is not ideal for sorting/extracting the data. Any ideas will be helpful; my best so far is to have the starting and ending year in 2 columns and just "KNOW" that the starting year begins in April, etc. Thanks
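
    If the "store the starting year as an integer" route is taken, the only place that has to "KNOW" the April boundary can be a single small helper; everything else just sorts and filters on the integer. A sketch (illustrated in Java, assuming, as above, an April-to-March financial year identified by its starting calendar year):

        import java.util.Calendar;
        import java.util.Date;

        public final class FinancialYear {
            // Returns the starting calendar year of the financial year containing the date,
            // e.g. 15 Feb 2010 -> 2009, 1 Apr 2010 -> 2010.
            public static int startingYear(Date date) {
                Calendar cal = Calendar.getInstance();
                cal.setTime(date);
                int year = cal.get(Calendar.YEAR);
                return cal.get(Calendar.MONTH) >= Calendar.APRIL ? year : year - 1;
            }
        }

    A contract row then only needs one integer column (e.g. 2009 meaning 2009/2010), which sorts correctly and can be rendered as "2009/2010" for display without a lookup table.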

  • Convert a Relative URL to an Absolute URL in Actionscript / Flex

    - by Bear
    I am working with Flex, and I need to take a relative URL source property and convert it to an absolute URL before loading it. The specific case I am working with involves tweaking SoundEffect's load method. I need to determine if a file will be loaded from the local file system or over the network from looking at the source property, and the easiest way I've found to do this is to generate the absolute URL. I'm having trouble generating the absolute URL for the sound effect in particular. Here were my initial thoughts, which haven't worked:

      - Look for the DisplayObject that the SoundEffect targets, and use its loaderInfo property. The target is null when the SoundEffect loads, so this doesn't work.
      - Look at FlexGlobals.topLevelApplication, at the url or loaderInfo properties. Neither of these are set, however.
      - Look at FlexGlobals.topLevelApplication.systemManager.loaderInfo. This was also not set.

    The SoundEffect.as code basically boils down to:

        var url:String = "mySound.mp3";
        /* >> I'd like to convert the URL to absolute form here and tweak it as necessary << */
        var req:URLRequest = new URLRequest(url);
        var loader:Loader = new Loader();
        loader.load(req);

    Does anyone know how to do this? Any help clarifying the rules of how relative urls are resolved for URLRequests in ActionScript would also be much appreciated. Edit: I would also be perfectly satisfied with some way to tell whether the url will be loaded from the local file system or over the network. Looking at an absolute URL, it would be easy to just look at the prefix, like file:// or http://.
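
    On the resolution rule itself: a relative URL is combined with a base URL using the standard RFC 3986 rules, so once some base URL for the application is known (e.g. from a loaderInfo.url that is actually populated), both the conversion and the local-vs-network check become mechanical. Purely as a language-neutral illustration of that rule, here is the same idea expressed with java.net.URI; the base URL is a made-up example:

        import java.net.URI;

        public class ResolveDemo {
            public static void main(String[] args) {
                URI base = URI.create("http://example.com/app/Main.swf");  // hypothetical base URL
                URI sound = base.resolve("mySound.mp3");                   // relative -> absolute
                System.out.println(sound);                  // http://example.com/app/mySound.mp3
                System.out.println("local? " + "file".equals(sound.getScheme()));
            }
        }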

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
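
    If no built-in scripting tool is at hand, a small throwaway program can do the dump generically by walking the result set metadata. A rough JDBC sketch (the table name, connection string and the naive quoting are all assumptions; dates, GUIDs and binary columns would need extra handling):

        import java.sql.*;

        public class InsertScripter {
            public static void main(String[] args) throws Exception {
                String table = "Customers";        // hypothetical table name
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=TestDb;integratedSecurity=true");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                    ResultSetMetaData md = rs.getMetaData();
                    int cols = md.getColumnCount();
                    while (rs.next()) {
                        StringBuilder sql = new StringBuilder("INSERT INTO " + table + " VALUES (");
                        for (int i = 1; i <= cols; i++) {
                            Object v = rs.getObject(i);
                            if (v == null) sql.append("NULL");
                            else if (v instanceof Number) sql.append(v);
                            else sql.append('\'').append(v.toString().replace("'", "''")).append('\'');
                            sql.append(i < cols ? ", " : ");");
                        }
                        System.out.println(sql);
                    }
                }
            }
        }

    Redirecting the output to a .sql file gives the single setup script, and the same loop can be pointed at each table that needs refreshing.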

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases, ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good: CAtlException is not even derived from std::exception, and since we also use our own exception hierarchy, we will now have to catch CAtlException separately here and there, which is lots of extra code and error-prone. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code is not in sources - it comes compiled as a library, either static or dynamic. We use the static one - atls.lib. And... it is compiled in such a way that it has ATL::AtlThrowImpl() inside, plus some code calling it. I used a static analysis tool - it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my code. Now the linker says it sees two declarations of ATL::AtlThrowImpl(), which I suppose confirms that there's another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?

  • Pros and cons of MPMoviePlayerController versus launching UIWebView to stream movie

    - by Nosredna
    I have a client who has video content for the web in Flash format. My task is to help them show the videos in an iPhone app. I realize that step one is to get these videos into the appropriate Quicktime format for the iPhone. Then I'm going to have to help the client figure out how or where to host these files. If that's tricky I assume they can be hosted at YouTube. My chief concern, though, is which approach to take to stream the video. What are the pros and cons of MPMoviePlayerController versus launching UIWebView with the URL of the stream? Is there any difference? Is one of them more or less forgiving? Is one of them a better user experience? Any gotchas I might expect to run into? I'm assuming playing video is pretty easy on the iPhone. Is it reasonable to try both and have one available as a fallback, or would that be a waste of time? I'm trying to schedule this out a bit, so I'd love to hear real-world experiences from anyone who's done this.

  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello, I want to create a simple form with a name and an email and save this data in an XML file. So far I've found that using Ajax with jQuery is quite easy. So I used the usual code:

        // dataString has the values taken from the form
        var dataString = 'name=' + name + '&email=' + email;
        $.ajax({
            type: "POST",
            url: "users.xml",
            data: dataString,
            dataType: "xml",
            success: function() {
                ....
            }
        });

    If I understood correctly, in the url I should add the name of the XML file that will be created. When the user clicks a button I call the function with the Ajax request, and then I should call somewhere a function for generating the XML. I am also using two beans. One is for setting the elements of the user and the other is for saving the data in the XML. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all these together in order to save the data in the XML. Does anyone know what I should do? Thanks a lot!
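
    The piece that usually ties this together is a server-side endpoint that receives the posted fields and writes the XML; POSTing directly to a static users.xml file will not create it. Below is a rough sketch of what that endpoint could look like with a plain servlet and XStream; the servlet name, the User bean and the users.xml path are illustrative assumptions. The $.ajax url would then point at this servlet's mapping (e.g. "saveUser") instead of the file, and because dataType is "xml" the servlet answers with a small XML snippet for the success callback.

        import java.io.FileWriter;
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import com.thoughtworks.xstream.XStream;

        public class SaveUserServlet extends HttpServlet {
            public static class User {               // simple bean holding the form fields
                String name;
                String email;
                User(String name, String email) { this.name = name; this.email = email; }
            }

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                User user = new User(req.getParameter("name"), req.getParameter("email"));
                XStream xstream = new XStream();
                xstream.alias("user", User.class);   // <user><name>..</name><email>..</email></user>
                try (FileWriter out = new FileWriter(getServletContext().getRealPath("/users.xml"))) {
                    out.write(xstream.toXML(user));  // overwrites the file; appending needs a read-modify-write
                }
                resp.setContentType("text/xml");
                resp.getWriter().write("<ok/>");     // minimal XML response for the $.ajax success callback
            }
        }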

  • Valueurl Binding On Large Arrays Causes Sluggish User Interface

    - by Hooligancat
    I have a large data set (some 3500 objects) that returns from a remote server via HTTP. Currently the data is being presented in an NSCollectionView. One aspect of the data is a path back to the server for a small image that represents the data (think thumbnail for simplicity). Bindings work fantastically for the data that is already returned, and binding the image via a valueurl binding is easy to do. However, the user interface is very sluggish when scrolling through the data set - which makes me think that the NSCollectionView is retrieving all the image data instead of just the image data used to display the currently viewable images. I was under the impression that Cocoa controls were smart enough to only retrieve data for the information that is actually being output to the user interface through lazy loading. This certainly seems to be the case with NSTableView - but I could be misguided on this thought. Should valueurl binding act lazily and, moreover, should it act lazily in an NSCollectionView? I could create a caching mechanism (in fact I already have such a thing in place for another application - see my post here if you are interested: http://stackoverflow.com/questions/1740209/populating-nsimage-with-data-from-an-asynchronous-nsurlconnection) but I really don't want to go this route if I don't have to for this specific implementation, as the user could potentially change data sets often and may only want small subsets of the data. Any suggested approaches? Thanks!

  • Allow selection of readonly files from SaveFileDialog?

    - by Andy Dent
    I'm using Microsoft.Win32.SaveFileDialog in an app where all files are saved as read-only, but the user needs to be able to choose existing files. The existing files being replaced are renamed, e.g. blah.png becomes blah.png-0.bak, blah.png-1.bak and so on. Thus, the language of the OverwritePrompt is inappropriate - we are not allowing them to overwrite files - so I'm setting dlog.OverwritePrompt = false; The initial filenames for the dialog are generated based on document values, so for them it's easy - we rename the candidate file in advance and if the user cancels or chooses a different name, rename it back again. When I delivered the feature, testers swiftly complained because they wanted to repeatedly save files with names different from the agreed workflow (those goofy, playful guys!). I can't figure out a way to do this with the standard dialogs, in a way that will run safely on both XP (for now) and Windows 7. I was hoping to hook into the FileOK event but that is called after I get a warning dialog:

        |------------------------------------------|
        | blah.png                                  |
        | This file is set to read-only.            |
        | Try again with a different file name.     |
        |------------------------------------------|

  • How to join a table in symfony (Propel) and retrieve object from both table with one query

    - by Jean-Philippe
    Hi, I'm trying to find an easy way to fetch data from two joined MySQL tables using Propel (inside Symfony), but in one query. Let's say I do this simple thing:

        $comment = CommentPeer::RetrieveByPk(1);
        print $comment->getArticle()->getTitle(); // assuming the Article table is joined to the Comment table

    Symfony will issue 2 queries to get that done: the first one to get the Comment row and the next one to get the Article row linked to the comment one. Now, I am trying to find a way to make all that happen within one query. I've tried to join them using:

        $c = new Criteria();
        $c->addJoin(CommentPeer::ARTICLE_ID, ArticlePeer::ID);
        $c->add(CommentPeer::ID, 1);
        $comment = CommentPeer::doSelectOne($c);

    But when I try to get the Article object using $comment->getArticle(), it will still issue the query to get the Article row. I could easily clear all the selected columns and select the columns I need, but that would not give me the Propel object I'd like, just an array of the query's raw result. So how can I get a populated Propel object of two (or more) joined tables with only one query? Thanks, JP

  • Converting TrueClass / FalseClass to integer.

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app that is obviously stored as a tinyint in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer - their SOAP service requires the field in question to have 0 or 1 as the value of this field. So at the time of the XML generation I need to convert my False to 0 and my True to 1 (which is how they are stored in the DB). Since True & False lack a to_i method, I could write some if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators, and creating an if/else for each is not very DRY. So what do you recommend I do? Or I could add a to_i method to the True / False class. But I'm not sure where I should scope it in my Rails app. Just inside this particular model, or somewhere else?

  • Converting Makefile to Visual Studio Terminology Questions (First time using VS)

    - by Ukko
    I am an old Unix guy who is converting a makefile-based project over to Microsoft Visual Studio. I got tasked with this because I understand the Makefile, which chokes VS's automatic import tools. I am sure there is a better way than what I am doing, but we are making things fit into the customer's environment and that is driving my choices. So gmake is not a valid answer, even if it is the right one ;-) I just have a couple of questions on terminology that I think will be easy for an experienced (or junior) user to answer. Presently, a make all will generate several executables and a shared library. How should I structure this? Is it one "solution" with multiple projects? There is a body of common code (say 50%) that is shared between the various executable targets that is not in a formal library, if that matters. I thought I could just set up the first executable and then add targets for the others, but that does not seem to work. I know I am working against the tool, so what is the right way? I am also using Visual C++ 2010 Express to try and do this, so that may also be a problem if multiple targets are not supported without using Visual C++ 2010 (insert superlative). Thanks - this is really one of those questions that should be answerable by a quick chat with the resident Windows developer at the water cooler. So I am asking at the virtual water cooler; I'll also spring for a virtual frosty beverage after work.
