Search Results

Search found 4778 results on 192 pages for 'asha series'.


  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code which works, but is painfully slow once I start executing it over thousands of records: require("RJSONIO") people_data <- data.frame(person_id=numeric(0)) json_data <- fromJSON(json_file) n_people <- length(json_data) for(lender in 1:n_people) { person_dataframe <- as.data.frame(t(unlist(json_data[[person]]))) people_data <- merge(people_data, person_dataframe, all=TRUE) } output_file <- paste("people_data",".csv") write.csv(people_data, file=output_file) I am attempting to build a unified data table from a series of json-formated files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example: [[1]] person_id name gender hair_color [[2]] person_id name location gender height [[...]] structure(list(person_id = "Amy123", name = "Amy", gender = "F", hair_color = "brown"), .Names = c("person_id", "name", "gender", "hair_color")) structure(list(person_id = "matt53", name = "Matt", location = structure(c(47231, "IN"), .Names = c("zip_code", "state")), gender = "M", height = 172), .Names = c("person_id", "name", "location", "gender", "height")) The end result of the code above is matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components: state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
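
    A sketch of one faster approach (not the poster's code; it assumes the lender/person mismatch in the loop above is a transcription typo, and that the data.table package with rbindlist() and its fill argument is available — plyr::rbind.fill is an older equivalent): build a list of one-row data frames and bind them once at the end, instead of calling merge() on every pass.

    ```r
    library(RJSONIO)
    library(data.table)

    json_data <- fromJSON(json_file)              # json_file as in the original post
    rows <- vector("list", length(json_data))

    for (i in seq_along(json_data)) {
      # one flattened, one-row data frame per person (location becomes location.zip_code etc.)
      rows[[i]] <- as.data.frame(t(unlist(json_data[[i]])), stringsAsFactors = FALSE)
    }

    # fill = TRUE aligns columns by name and pads missing attributes with NA
    people_data <- rbindlist(rows, fill = TRUE)
    write.csv(people_data, file = "people_data.csv", row.names = FALSE)
    ```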

    Read the article

  • SQL Joining Two or More from Table B with Common Data in Table A

    - by Matthew Frederick
    The real-world situation is a series of events that each have two or more participants (like sports teams, though there can be more than two in an event), only one of which is the host of the event. There is an Event db table for each unique event and a Participant db table with unique participants. They are joined together using a Matchup table. They look like this: Event EventID (PK) (other event data like the date, etc.) Participant ParticipantID (PK) Name Matchup EventID (FK to Event table) ParicipantID (FK to Participant) Host (1 or 0, only 1 host = 1 per EventID) What I'd like to get as a result is something like this: EventID ParticipantID where host = 1 Participant.Name where host = 1 ParticipantID where host = 0 Participant.Name where host = 0 ParticipantID where host = 0 Participant.Name where host = 0 ... Where one event has 2 participants and another has 3 participants, for example, the third participant column data would be null or otherwise noticeable, something like (PID = ParticipantID): EventID PID-1(host) Name-1 (host) PID-2 Name-2 PID-3 Name-3 ------- ----------- ------------- ----- ------ ----- ------ 1 7 Lions 8 Tigers 12 Bears 2 11 Dogs 9 Cats NULL NULL I suspect the answer is reasonably straightforward but for some reason I'm not wrapping my head around it. Alternately it's very difficult. :) I'm using MYSQL 5 if that affects the available SQL.
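
    A sketch of one way to get the host and its guests onto a single row in MySQL 5 (column aliases are illustrative; GROUP_CONCAT collapses a variable number of guests into one column rather than producing separate PID-2/PID-3 columns):

    ```sql
    SELECT e.EventID,
           host.ParticipantID AS HostID,
           hp.Name            AS HostName,
           GROUP_CONCAT(gp.ParticipantID ORDER BY gp.Name SEPARATOR ', ') AS GuestIDs,
           GROUP_CONCAT(gp.Name          ORDER BY gp.Name SEPARATOR ', ') AS GuestNames
    FROM Event e
    JOIN Matchup host      ON host.EventID = e.EventID AND host.Host = 1
    JOIN Participant hp    ON hp.ParticipantID = host.ParticipantID
    JOIN Matchup guest     ON guest.EventID = e.EventID AND guest.Host = 0
    JOIN Participant gp    ON gp.ParticipantID = guest.ParticipantID
    GROUP BY e.EventID, host.ParticipantID, hp.Name;
    ```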

    Read the article

  • JQuery & Wordpress - Hide multiple divs inside unique ID?

    - by steelfrog
    I'm trying to write a short jQuery script for Wordpress comments that would allow users to toggle specific comments on and off. This is my first script, and I'm having a tough time. Overly simplified, the comments are formatted like so: <li id="li-comment-<?php comment_ID() ?>"> <div class="gravatar"><img src="#" /></div> <div class="comment_poster">Username</div> <div class="comment_options">Option buttons</div> <div class="comment_content">Comment</div> </li> In the "comment_options" DIV is a series of buttons that control the individual comments (reply, quote, edit, close, etc.). The close button is what I'm trying to write this script for. I need it to toggle the "gravatar" and "comment_content" DIVs, but leave the rest in place so that it still displays the user ID and controls. However, I can't seem to figure out how to contain the action. This is what I have so far: $(document).ready(function() { $("div.trigger").click(function() { $("div.gravatar").slideToggle(); $("div.comment_content").slideToggle(); }); }); The problem with this is that it closes all the .gravatar and .comment_content divs on the page, not just the ones found in the same list item. If you're curious, this is the page I'm working on. Any idea how I could resolve this? Again, this is my first time with jQuery, so I'm a little fuzzy on how it all works. Thanks!
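
    A sketch of scoping the toggle to a single comment (assumes each comment is its own <li>, as in the markup above, and jQuery 1.3+ for closest()):

    ```js
    $(document).ready(function () {
      $("div.trigger").click(function () {
        // limit the toggle to the <li> that contains the clicked button
        $(this)
          .closest("li")
          .find("div.gravatar, div.comment_content")
          .slideToggle();
      });
    });
    ```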

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files ( large here being potentially upwards of a gigabyte ) in C# including performing some complex xpath queries. The problem I have is that the standard way I would normally do this through the System.XML libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all just reading them and querying the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationship - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrapping the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on it's original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
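
    A minimal sketch of the streaming-plus-fragments idea (the element name "record" and the XPath expression are placeholders): XmlReader walks the file forward-only, and each subtree is loaded into a small XPathDocument so the query never sees the whole gigabyte at once.

    ```csharp
    using System;
    using System.Xml;
    using System.Xml.XPath;

    class FragmentQuery
    {
        static void Main()
        {
            using (XmlReader reader = XmlReader.Create("huge.xml"))
            {
                while (reader.ReadToFollowing("record"))
                {
                    // ReadSubtree() exposes just this element and its children
                    using (XmlReader fragment = reader.ReadSubtree())
                    {
                        XPathDocument doc = new XPathDocument(fragment);
                        XPathNavigator nav = doc.CreateNavigator();
                        XPathNodeIterator hits = nav.Select("//item[@status='open']");
                        while (hits.MoveNext())
                            Console.WriteLine(hits.Current.Value);
                    }
                }
            }
        }
    }
    ```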

    Read the article

  • Combining SQL Rows

    - by lumberjack4
    I've got SQL Compact Database that contains a table of IP Packet Headers. The Table looks like this: Table: PacketHeaders ID SrcAddress SrcPort DestAddress DestPort Bytes 1 10.0.25.1 255 10.0.25.50 500 64 2 10.0.25.50 500 10.0.25.1 255 80 3 10.0.25.50 500 10.0.25.1 255 16 4 75.48.0.25 387 74.26.9.40 198 72 5 74.26.9.40 198 75.48.0.25 387 64 6 10.0.25.1 255 10.0.25.50 500 48 I need to perform a query to show 'conversations' going on across a local network. Packets going from A - B is part of the same conversations as packets going from B - A. I need to perform a query to show the on going conversations. Basically what I need is something that looks like this: Returned Query: SrcAddress SrcPort DestAddress DestPort TotalBytes BytesA->B BytesB->A 10.0.25.1 255 10.0.25.50 500 208 112 96 75.48.0.25 387 74.26.9.40 198 136 72 64 As you can see I need the query (or series of queries) to recognize that A-B is the same as B-A and break up the byte counts accordingly. I'm not a SQL guru by any means but any help on this would be greatly appreciated.
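
    A sketch of a set-based approach (column aliases are illustrative; it assumes SQL Server Compact accepts CASE expressions and derived tables, so verify against your version): normalise each packet so the "smaller" endpoint is always side A, then aggregate.

    ```sql
    SELECT AddrA, PortA, AddrB, PortB,
           SUM(Bytes) AS TotalBytes,
           SUM(CASE WHEN Forward = 1 THEN Bytes ELSE 0 END) AS BytesAtoB,
           SUM(CASE WHEN Forward = 1 THEN 0 ELSE Bytes END) AS BytesBtoA
    FROM (
        SELECT Bytes,
               CASE WHEN SrcAddress < DestAddress THEN 1 ELSE 0 END AS Forward,
               CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END AS AddrA,
               CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END AS PortA,
               CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END AS AddrB,
               CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END AS PortB
        FROM PacketHeaders
    ) AS normalised
    GROUP BY AddrA, PortA, AddrB, PortB;
    ```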

    Read the article

  • Dismissing UIImagePickerController from UITabBarController

    - by Dave
    I have a tab bar application whereby one tab uses a navigation controller to move through a series of views. On the final view, there is a button to add a photo, which presents a UIImagePickerController. So far, so good - however when I finish picking the image, or cancel the operation, the previous view is loaded, but without the tab bar. I'm sure I'm missing something elementary, but any suggestions on how to properly release the UIImagePickerController would be much appreciated. The code is as follows: ImagePickerViewController *aController = [[ImagePickerViewController alloc] initWithNibName:@"ImagePickerViewController" bundle:[NSBundle mainBundle]]; [self presentModalViewController:aController animated:YES]; [aController release]; //viewDidLoad self.window = [[[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]] autorelease]; imagePickerController = [[UIImagePickerController alloc] init]; imagePickerController.delegate = self; if([UIImagePickerController isSourceTypeAvailable: UIImagePickerControllerSourceTypeCamera]){ imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera; } else { imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; } [window addSubview:imagePickerController.view]; //ImagePickerViewController imagePickerControllerDidCancel - FinalViewController is the last view in the stack controlled by a navigation controller which contains the button to present the UIImagePickerController [picker dismissModalViewControllerAnimated:YES]; FinalViewController *aController = [[FinalViewController alloc] initWithNibName:@"FinalViewController" bundle:[NSBundle mainBundle]]; [picker presentModalViewController:aController animated:YES]; [aController release];
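
    A sketch of a simpler arrangement (iOS 3-era API, not the poster's code): present the picker from the view controller that owns the button and just dismiss it in the delegate callbacks, rather than adding its view to the window or presenting FinalViewController again; the tab bar and navigation stack stay in place underneath.

    ```objc
    // The presenting controller is assumed to adopt
    // <UIImagePickerControllerDelegate, UINavigationControllerDelegate>.
    - (void)showPicker {
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]
            ? UIImagePickerControllerSourceTypeCamera
            : UIImagePickerControllerSourceTypePhotoLibrary;
        [self presentModalViewController:picker animated:YES];
        [picker release];
    }

    - (void)imagePickerController:(UIImagePickerController *)picker
            didFinishPickingMediaWithInfo:(NSDictionary *)info {
        // handle the image here, then simply dismiss; do not present another controller
        [self dismissModalViewControllerAnimated:YES];
    }

    - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
        [self dismissModalViewControllerAnimated:YES];
    }
    ```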

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!
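
    One setting-independent suggestion, sketched without assuming any particular library: scale the inputs before they reach the network, since mixing values near 1 with values near 100,000 generally slows gradient-based training and makes a 5% error target much harder to hit within 10,000 iterations.

    ```csharp
    // Illustrative only; no particular neural-network library is assumed.
    static class InputScaling
    {
        // min/max are per-input bounds taken from the training data
        public static double[] Normalize(double[] raw, double[] min, double[] max)
        {
            var scaled = new double[raw.Length];
            for (int i = 0; i < raw.Length; i++)
            {
                double range = max[i] - min[i];
                scaled[i] = range == 0 ? 0.0 : (raw[i] - min[i]) / range;
            }
            return scaled;
        }
    }
    ```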

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

    Read the article

  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user309670
    Hi. Am a c-coder for a while now - neither a newbie nor an expert. Now, I have a certain daemoned application in C on a PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user submitted string is parsed for certain characters/words using strstr() and if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read, to interact with the said webservers via TCP on their port 80 in each thread. All connections and writes seems successful. Reads to the webserver sockets fail however, with either (A) all 3 threads seem to hang, and only one thread returns -1 and errno is set to 104. The responding thread takes like 10 minutes - an eternity long:-(. *I read somewhere that the 104 (is EINTR?), which in the network context suggests that ...'the connection was reset by peer'; or (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returns some data. Isn't the socket read/write thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. *I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone (everything else equal) all things works perfectly right, but of course in series not parallel. There's a second problem too (oops), I can't write back to the client who connect to my epoll-ed Unix socket. My daemon application will hang and hog CPU 100% for ever. Yet nothing is written to the clients end. Am sure the client (a very typical PHP socket application) hasn't closed the connection whenever this is happening - no error(s) detected either. Any ideas? I cannot figure-out whatever is wrong even with Valgrind, GDB or much logging. Kindly help where you can.
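
    Two notes that may help, offered as guesses from the description: errno 104 on Linux is ECONNRESET ("connection reset by peer"), not EINTR (EINTR is 4), and a socket read() must be retried when it fails with EINTR. A sketch of a read loop that does the retry and treats 0 as the peer closing:

    ```c
    /* Sketch: retry on EINTR, stop cleanly on peer close, report real errors. */
    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    ssize_t read_all(int fd, char *buf, size_t len)
    {
        size_t total = 0;
        while (total < len) {
            ssize_t n = read(fd, buf + total, len - total);
            if (n == 0)                 /* peer closed the connection */
                break;
            if (n < 0) {
                if (errno == EINTR)     /* interrupted by a signal: just retry */
                    continue;
                return -1;              /* real error, e.g. ECONNRESET (104) */
            }
            total += (size_t)n;
        }
        return (ssize_t)total;
    }
    ```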

    Read the article

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for python? I'm iterating over a series of text documents, and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following: (rating OR upgraded) NEAR/3 buy That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this: # create an SQLite3 db in memory conn = sqlite3.connect(':memory:') c = conn.cursor() c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)') conn.commit() Then, for each document, do something like this: #insert the document text into the fts table, so I can run a query c.execute('insert into fts(content) values (?)', content) conn.commit() # execute my FTS query here, look at the results, etc # remove the document text from the fts table before working on the next document c.execute('delete from fts') conn.commit() This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4. The 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
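
    A sketch of a database-free alternative: since each document is checked one at a time, a small positional check in plain Python covers the (rating OR upgraded) NEAR/3 buy case; the tokeniser here is deliberately crude.

    ```python
    import re

    def near(text, targets, anchor, window=3):
        """True if any word in `targets` occurs within `window` words of `anchor`."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        anchor_positions = [i for i, w in enumerate(words) if w == anchor]
        target_positions = [i for i, w in enumerate(words) if w in targets]
        return any(abs(a - t) <= window
                   for a in anchor_positions
                   for t in target_positions)

    print(near("upgraded to buy from hold", {"rating", "upgraded"}, "buy"))  # True
    ```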

    Read the article

  • Workflow UI Integration - is WF a good approach?

    - by AJ
    Somewhat similar to this question, except we haven't decided that we're going with WF yet. I'm working on designing a system that requires a series of decisions and activities on a "work object," so I naturally began to consider workflow, specifically WF. What I'm wondering is if WF is a good solution for a situation like the following (oversimplified for this question) case: the flow starts with Gather some info (web page), branches on condition 1 into either do some process or Get more info (web page), and the latter branches on condition 2 into some process or another process. The part I'm struggling with is the get more info (web page) step and what happens subsequent, which would mean a halt in the execution of the workflow runtime. I'm aware that this is possible, but I'm not sure that WF is the best approach for this type of code, as the user interaction may be required at many different points through the entire workflow, and the workflow will drive what data entry screens are needed. We are using a WinForms/ASP.NET web forms package for UI, which is homegrown and difficult to push deployments on, so something like SharePoint integration is out of the question. Our back-end is DB2, and the workflow code (whether it's in WF or otherwise) will need to interact with that as well. I guess the bottom line is, should we look into using WF for this, or would we be better served just coding it ourselves? Can WF easily integrate data entry screens to capture information that can be used further on in the workflow?

    Read the article

  • What makes merging in DVCS easy?

    - by afriza
    I read at Joel on Software: With distributed version control, the distributed part is actually not the most interesting part. The interesting part is that these systems think in terms of changes, not in terms of versions. and at HgInit: When we have to merge, Subversion tries to look at both revisions—my modified code, and your modified code—and it tries to guess how to smash them together in one big unholy mess. It usually fails, producing pages and pages of “merge conflicts” that aren’t really conflicts, simply places where Subversion failed to figure out what we did. By contrast, while we were working separately in Mercurial, Mercurial was busy keeping a series of changesets. And so, when we want to merge our code together, Mercurial actually has a whole lot more information: it knows what each of us changed and can reapply those changes, rather than just looking at the final product and trying to guess how to put it together. By looking at the SVN's repository folder, I have the impression that Subversion is maintaining each revisions as changeset. And from what I know, Hg is using both changeset and snapshot while Git is purely using snapshot to store the data. If my assumption is correct, then there must be other ways that make merging in DVCS easy. What are those?

    Read the article

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction book which has so far only been produced only on paper via a print on demand service. We'd like to be able to offer the book to users of book-reader devices (such as Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format, however there's no trivial way for LaTeX to produce the XHTML outut required. We experimented with alternative LaTeX compilers (e.g. plastex) but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML / ePub. Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also alllowed me to implement the parser in any language I liked. (I'd choose Python). This product has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubutnu / Eclipse / Python. The idea is that we will take the grammar of TeX and just implement a teeny subset of that, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time. Sal UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.
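
    On the tooling question, a sketch using pyparsing, one of several pure-Python parser libraries installable on Ubuntu (PLY is another option); the grammar below is only a toy fragment, not the TeX subset itself.

    ```python
    from pyparsing import Word, alphas, Suppress

    # matches a single command of the form \name{plain text}
    text = Word(alphas + " ")
    command = (Suppress("\\") + Word(alphas)("name")
               + Suppress("{") + text("arg") + Suppress("}"))

    result = command.parseString(r"\emph{hello world}")
    print(result["name"] + " -> " + result["arg"])   # emph -> hello world
    ```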

    Read the article

  • Liferay: Customise the web.xml HeaderFilter added during portlet deployment

    - by gid
    I need to customise the deployment of my liferay portlet such that the GWT nocache.js files don't get a 'Expires' HTTP header set. My war file looks like this: view.jsp com.foobar.MyEntryPoint/com.foobar.MyEntryPoint.nocache.js com.foobar.MyEntryPoint/12312312313213123123123.cache.html WEB-INF/web.xml WEB-INF/portlet.xml WEB-INF/liferay-portlet.xml ... etc my web.xml is pretty much empty (only has the displayName) On deployment this is rewritten my liferay to have a series of filters in particalar: Header Filter com.liferay.portal.kernel.servlet.PortalClassLoaderFilter filter-class com.liferay.portal.servlet.filters.header.HeaderFilter Cache-Control max-age=315360000, public Expires 315360000 Header Filter *.js This filter adds an Expires header for about 2020 to the .nocache.js js files... the trouble is these files really shouldn't be cached (the hint is in the name :) For development purposes I have worked around this by disabling the filter using: com.liferay.portal.servlet.filters.header.HeaderFilter=false in portal-ext.properties globaly. What I what I would like to to is one of the following: Disable HeaderFilter only for this portlet or war file. I can always add my own expires Add an init-param to the HeaderFilter to match anything other than .nocache.js files Any ideas how either of these things could be achieved? Stack: liferay-6.0.1 CE, Windows 7, java 1.6.0_18, GWT 2.0.3

    Read the article

  • How to generate hibernate POJO classes programmatically?

    - by Vatsala
    Hi I am aware of the Hibernate Eclipse plugin that helps us (through a series of screens and button clicks) to generate the POJO and DAO classes for the underlying tables. But I would like to mimic this in a runtime environment, i.e. I would like to be able to do the exact same steps programmatically , where I should be able to supply the .cfg.xml file, the reveng.xml file, the database URL, the destination folder, via a command line/ parameters within main(String[] args).. Apparently there is no such tool available which works in a pure Hibernate scenario. There is one which is tuned to generate code for the spring framework - but thats not of direct use to me right now. I tried downloading hibernate-tools.jar's source code for the eclipse plugin, but right now the src code download link at hibernate.org(new design) has been disabled for some reason. Has anyone handled such a thing before? Or can you give me some clues to do this? I have tried a certain JDBCReader class's object, the rationale being read all tables using JDBCReader's methods and then figure out how to use hbm2POJO generator class....
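
    A rough sketch of driving hibernate-tools from plain Java (the class names come from the cfg/hbm2x packages of older hibernate-tools releases; treat them as assumptions and check them against the version on your classpath):

    ```java
    import java.io.File;
    import org.hibernate.cfg.JDBCMetaDataConfiguration;
    import org.hibernate.tool.hbm2x.POJOExporter;

    public class PojoGenerator {
        public static void main(String[] args) {
            JDBCMetaDataConfiguration cfg = new JDBCMetaDataConfiguration();
            cfg.configure("hibernate.cfg.xml");   // connection URL, dialect, reveng settings
            cfg.readFromJDBC();                   // reverse-engineer the schema

            POJOExporter exporter = new POJOExporter(cfg, new File("generated-src"));
            exporter.start();
        }
    }
    ```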

    Read the article

  • Making one table equal to another without a delete *

    - by Joshua Atkins
    Hey, I know this is bit of a strange one but if anyone had any help that would be greatly appreciated. The scenario is that we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer db and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote side (essentially delete *, insert) but we are looking for something a bit more elaborate as the downtime from the delete * is unacceptable. This is all reference data that controls menu items, functionality etc of a major website. I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data in to a tmp table, execute the diff, and drop anything from the destination table that is not in the source and then upsert everything else. The question is that is there an easy way to do this without using a cursor? To illustrate the sproc returns a recordset structured like this: TableName Col1 Col2 Col3 Dest Src Anything in the recordset with TableName = Dest should be deleted (as it does not exist in src) and anything in Src should be upserted in to dest. I cannot think of a way to do this purely set based but my DB-fu is weak. Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need anymore details.
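
    A sketch of the set-based version in T-SQL (table, key and column names are illustrative): one DELETE for rows missing from the staging table, one UPDATE for matches, one INSERT for new keys — no cursor needed.

    ```sql
    -- #tmp holds the expected data, keyed on MenuID
    DELETE d
    FROM   dbo.Menu d
    WHERE  NOT EXISTS (SELECT 1 FROM #tmp t WHERE t.MenuID = d.MenuID);

    UPDATE d
    SET    d.Name = t.Name, d.Url = t.Url
    FROM   dbo.Menu d
    JOIN   #tmp t ON t.MenuID = d.MenuID;

    INSERT INTO dbo.Menu (MenuID, Name, Url)
    SELECT t.MenuID, t.Name, t.Url
    FROM   #tmp t
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Menu d WHERE d.MenuID = t.MenuID);
    ```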

    Read the article

  • PHP incomplete code - scan dir, include only if name starts or end with x

    - by Adrian M.
    I posted a question before but I am yet limited to mix the code without getting errors.. I'm rather new to php :( ( the dirs are named in series like this "id_1_1" , "id_1_2", "id_1_3" and "id_2_1" , "id_2_2", "id_2_3" etc.) I have this code, that will scan a directory for all the files and then include a same known named file for each of the existing folders.. the problem is I want to modify a bit the code to only include certain directories which their names: ends with "_1" starts with "id_1_" I want to create a page that will load only the dirs that ends with "_1" and another file that will load only dirs that starts with "id_1_".. <?php include_once "$root/content/common/header.php"; include_once "$root/content/common/header_bc.php"; include_once "$root/content/" . $page_file . "/content.php"; $page_path = ("$root/content/" . $page_file); $includes = array(); $iterator = new RecursiveIteratorIterator( new RecursiveDirectoryIterator($page_path), RecursiveIteratorIterator::SELF_FIRST); foreach($iterator as $file) { if($file->isDir()) { $includes[] = strtoupper($file . '/template.php'); } } $includes = array_reverse($includes); foreach($includes as $file){ include $file; } include_once "$root/content/common/footer.php"; ?> Many Thanks!
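
    A sketch of the directory filter (it assumes the same $page_path layout as the code above; swap which test is used for each of the two pages):

    ```php
    <?php
    // keep only directories whose names end in "_1" (or start with "id_1_")
    $dirs = glob($page_path . '/*', GLOB_ONLYDIR);

    foreach ($dirs as $dir) {
        $name = basename($dir);

        $ends_with_1 = substr($name, -2) === '_1';
        $starts_id_1 = strpos($name, 'id_1_') === 0;

        if ($ends_with_1) {              // use $starts_id_1 here for the other page
            $template = $dir . '/template.php';
            if (is_file($template)) {
                include $template;
            }
        }
    }
    ?>
    ```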

    Read the article

  • Animate opacity and delay transition CSS

    - by user1876246
    I have a series of DIVS, I want DIV 1 & DIV 3 to fade out and then DIV 2 and DIV 4 to slide left to take their place, 1 second after the fade. So far I have gotten them to fade out but I cannot figure out how to delay the sliding. Follows is my CSS, ignore the lack of vendor prefixes for this question please. .slide-show{ -webkit-animation: fadeShow 0.25s 1 normal forwards ease-out; animation: fadeShow 0.25s 1 normal forwards ease-out; visibility: visible; } .slide-hide{ -webkit-animation: fadeHide 0.25s 1 normal forwards ease-out; animation: fadeHide 0.25s 1 normal forwards ease-out; //I need the following to be delayed for 1 second visibility: hidden; position: absolute; } @keyframes fadeHide{ 0% { opacity: 1; } 100% { opacity: 0; } } @keyframes fadeShow{ 0% { opacity: 0; } 100% { opacity: 1; } }
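
    A sketch of the missing piece (class name and the translate offset are illustrative, vendor prefixes omitted as above): give the sliding divs a separate class whose animation carries an animation-delay, so the slide starts after the fade has finished.

    ```css
    .slide-left {
      animation: slideLeft 0.5s 1 normal forwards ease-out;
      animation-delay: 1s;   /* waits out the 0.25s fade plus the extra pause */
    }

    @keyframes slideLeft {
      from { transform: translateX(0); }
      to   { transform: translateX(-100%); }
    }
    ```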

    Read the article

  • strange syntax error in python, version 2.6 and 3.1

    - by flow
    this may not be an earth-shattering deficiency of python, but i still wonder about the rationale behind the following behavior: when i run source = """ print( 'helo' ) if __name__ == '__main__': print( 'yeah!' ) #""" print( compile( source, '<whatever>', 'exec' ) ) i get :: File "<whatever>", line 6 # ^ SyntaxError: invalid syntax i can avoid this exception by (1) deleting the trailing #; (2) deleting or outcommenting the if __name__ == '__main__':\n print( 'yeah!' ) lines; (3) add a newline to very end of the source. moreover, if i have the source end without a trailing newline right behind the print( 'yeah!' ), the source will also compile without error. i could also reproduce this behavior with python 2.6, so it’s not new to the 3k series. i find this error to be highly irritating, all the more since when i put above source inside a file and execute it directly or have it imported, no error will occur—which is the expected behavior. a # (hash) outside a string literal should always represent the start of a (possibly empty) comment in a python source; moreover, the presence or absence of a if __name__ == '__main__' clause should not change the interpretation of a soure on a syntactical level. can anyone reproduce the above problem, and/or comment on the phenomenon? cheers
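
    A sketch of the usual workaround: compile() is happiest when the source string ends with a newline, exactly as a file on disk would, which is why reason (3) above makes the error disappear.

    ```python
    # ensure the string handed to compile() ends with a newline
    code = compile(source if source.endswith("\n") else source + "\n",
                   "<whatever>", "exec")
    print(code)
    ```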

    Read the article

  • Advantages/disadvantages of Python and Ruby

    - by Seburdis
    I know this is going to seem a little like all the other python vs ruby question out there, but I'm not looking specifically to pick one over the other all the time. My question is, essentially, why would you use one language over the other when you are starting a new project? What features does ruby have that python doesn't that would make you decide on it for a given project? What about python over ruby? I was just recently thinking about the differentiation between the two languages because of Jamis Buck's "There is no magic, only awesome" series of articles (4 parts, available here) when I realized I really don't know enough about the two languages to know when to choose one over the other. I'm hoping to get objective answers from people who have experience with both languages, rather than just "python is better, ruby sucks" kind of responses. If you know of a feature in one language that doesn't exist in the other and is great in a certain situation, feel free to chime in and say why you think it's awesome. If you have another language comparable to these that you'd like to suggest pros/cons for, like groovy for example, that would be appreciated too. Some thing I know each language has going for it: Ruby: Awesome metaprogramming Great community Wide selection of Gems Rails Great code readability, usually MacRuby is great for native development on Mac without objc Amazing testing tools (cucumber, rspec, shoulda, autotest, etc.) Python: Whitespace indentation List comprehensions Better functional programming support? Lots of support on linux Easy_install isn't far from gems Great variety of libraries available

    Read the article

  • Updating extra attributes in a has_many, :through relationship using Rails

    - by Robbie
    I've managed to set up a many-to-many relationship between the following models Characters Skills PlayerSkills PlayerSkills, right now, has an attribute that Skills don't normally have: a level. The models look something like this (edited for conciseness): class PlayerSkill < ActiveRecord::Base belongs_to :character belongs_to :skill end class Skill < ActiveRecord::Base has_many :player_skills has_many :characters, :through => :player_skills attr_accessible :name, :description end class Character < ActiveRecord::Base belongs_to :user has_many :player_skills has_many :skills, :through => :player_skills end So nothing too fancy in the models... The controller is also very basic at this point... it's pretty much a stock update action. The form I'm looking to modify is characters#edit. Right now it renders a series of checkboxes which add/remove skills from the characters. This is great, but the whole point of using has_many :through was to track a "level" as well. Here is what I have so far: - form_for @character do |f| = f.error_messages %p = f.label :name %br = f.text_field :name %p = f.label :race %br = f.text_field :race %p = f.label :char_class %br = f.text_field :char_class %p - @skills.each do |skill| = check_box_tag "character[skill_ids][]", skill.id, @character.skills.include?(skill) =h skill.name %br %p = f.submit After it renders "skill.name", I need it to print a text_field that updates player_skill. The problem, of course, is that player_skill may or may not exist! (Depending on if the box was already ticked when you loaded the form!) From everything I've read, has_many :through is great because it allows you to treat the relationship itself as an entity... but I'm completely at a loss as to how to handle the entity in this form. As always, thanks in advance for any and all help you can give me!
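
    One common direction, sketched under the assumption of Rails 2.3+ nested attributes (not from the original post): edit the PlayerSkill join rows themselves so each one can carry its level, and build the form with fields_for :player_skills rather than bare check_box_tag calls.

    ```ruby
    class Character < ActiveRecord::Base
      belongs_to :user
      has_many :player_skills
      has_many :skills, :through => :player_skills

      # lets the character form create, update and remove join records,
      # each with its own :level attribute
      accepts_nested_attributes_for :player_skills, :allow_destroy => true
    end
    ```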

    Read the article

  • Urgent: Sort HashSet() function data in sequence

    - by vincent low
    i am new to java, the function i like to perform is something like: i will load a series of data from a file, into my hashSet() function. the problem is, i able to enter all the data in sequence, but i cant retrieve it out in sequence base on the account name in the file. any 1 can help to give a comment? below is my code: public Set retrieveHistory(){ Set dataGroup = new HashSet(); try{ File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt"); BufferedReader br = new BufferedReader(new FileReader(file)); String data = br.readLine(); while(data != null){ System.out.println("This is all the record:"+data); Customer cust = new Customer(); //break the data based on the , String array[] = data.split(","); cust.setCustomerName(array[0]); cust.setpassword(array[1]); cust.setlocation(array[2]); cust.setday(array[3]); cust.setmonth(array[4]); cust.setyear(array[5]); cust.setAmount(Double.parseDouble(array[6])); cust.settransaction(Double.parseDouble(array[7])); dataGroup.add(cust); //then proced to read next customer. data = br.readLine(); } br.close(); }catch(Exception e){ System.out.println("error" +e); } return dataGroup; } public static void main(String[] args) { FileReadDataModel fr = new FileReadDataModel(); Set customerGroup = fr.retrieveHistory(); System.out.println(e); for(Object obj : customerGroup){ Customer cust = (Customer)obj; System.out.println("Cust name :" +cust.getCustomerName()); System.out.println("Cust amount :" +cust.getAmount()); }
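
    A sketch of the usual fix: HashSet makes no ordering promises, so swap it for a Set that does; either line below is a drop-in replacement for new HashSet() in retrieveHistory().

    ```java
    import java.util.Comparator;
    import java.util.LinkedHashSet;
    import java.util.Set;
    import java.util.TreeSet;

    // keeps customers in the order they were read from the file
    Set<Customer> dataGroup = new LinkedHashSet<Customer>();

    // or: keeps customers sorted by account/customer name
    Set<Customer> sortedGroup = new TreeSet<Customer>(new Comparator<Customer>() {
        public int compare(Customer a, Customer b) {
            return a.getCustomerName().compareTo(b.getCustomerName());
        }
    });
    ```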

    Read the article

  • UIView animation only animating some of the things I ask it to

    - by Ben
    I have a series of (say) boxes on the screen in a row, all subviews of my main view. Each is a UIView. I want to shift them all left and have a new view also enter the screen from the right in lockstep. Here's what I'm doing: // First add a dummy view offscreen UIView * stagingView = /* make this view, which sets up its width/height */ CGRect frame = [stagingView frame]; frame.origin.x = /* just off the right side of the screen */; [stagingView setFrame:frame]; [self stagingView]; And then I set up animations in one block for all of my subviews (which includes the one I just added): [UIView beginAnimations:@"shiftLeft" context:NULL]; [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut]; [UIView setAnimationDelegate:self]; [UIView setAnimationDidStopSelector:@selector(_animationDidStop:context:)]; [UIView setAnimationDuration:0.3]; for (UIView * view in [self subviews]) { CGRect frame = [view frame]; frame.origin.x -= (frame.size.width + viewPadding); [view setFrame:frame]; } [UIView commitAnimations]; Here's what I expect: The (three) views already on screen get shifted left and the newly staged view marches in from the right at the same time. Here's what happens: The newly staged view animates in exactly as expected, and the views already on the screen do not appear to move at all! (Or possibly they jump without animation to their end locations). And! If I comment out the whole business of creating the new subview offscreen... the ones onscreen do animate correctly! Huh? (Thanks!)

    Read the article

  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of my class is an instance of BarCollection which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple methods on each Bar object in the collection. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines: $stubs = array(); foreach ($array as $value) { $barStub->expects($this->any()) ->method('GetValue')) ->will($this->returnValue($value)); $stubs[] = $barStub; } // populate stubs into `Foo` // assert results from `Foo->someMethod()` So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one: There was 1 failure: 1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5)) Expectation failed for method name is equal to <string:GetValue> when invoked zero or more times. Mocked method does not exist. /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25 One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc, but I'm not sure exactly how to do this. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.
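
    A sketch of one pattern worth trying (PHPUnit 3.x getMock() style, matching the error message above; not a confirmed diagnosis): create a fresh stub inside the loop so each Bar carries its own canned value, or use onConsecutiveCalls() on a single stub when the call order is known.

    ```php
    <?php
    $stubs = array();
    foreach ($values as $value) {
        // a new stub per value, instead of reconfiguring one shared object
        $barStub = $this->getMock('Bar');
        $barStub->expects($this->any())
                ->method('GetValue')
                ->will($this->returnValue($value));
        $stubs[] = $barStub;
    }
    ```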

    Read the article

  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and AppStore comments, ratings and rankings data for a defined app. I'm also aware of solutions like: AppViz appsales-mobile iphone-stats Heartbeat.app I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the AppStore. Would be interested to hear any commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials). To be clear, I'm really looking for something that hits all of the areas mentioned: App Store (per store) Comments Ratings Category/store rankings iTunes Connect The contents of the sales reports Analysis/graphs of the data is not necessary (but would be a nice to have I guess). I'm not really looking for something like AppSales Mobile above, I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available or should I just go roll my own?

    Read the article
