Search Results

Search found 25585 results on 1024 pages for 'multiple variables'.


  • How can I retrieve multiple nutrients' information from the ESHA Research API? (apid.esha.com)

    - by user1833044
    I want to call the ESHA Research nutrient REST API, but I cannot figure out how to request multiple nutrients in one call. So far I am calling the following and am only able to retrieve calories, or protein, or one other type of nutrient at a time, so I was hoping someone has experience retrieving all the nutrient information with one call. Is this possible? This is how I call to retrieve the TWIX nutrient data: http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da (returns calories; note the API key is not xxxx but a key generated by ESHA once you sign up as a developer). The return is in JSON format. If I want to request fat, it would be the following: http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da&n=urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd How can I make one call and get back all the nutrients (Fat, Calories, Carbs, Vitamins, etc.) for a particular food ID? I have researched this for a while and cannot seem to find the answer. Thanks in advance for your help.
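
    If the API only returns one nutrient per request, a simple workaround is to issue one request per nutrient and merge the results client-side. Below is a minimal Python sketch assuming the third-party requests library; whether the ESHA API supports several nutrients in a single call is not confirmed here, and only the food ID and the fat URN are taken from the question, the rest is illustrative.

        import requests

        API_KEY = "xxxx"  # your ESHA developer key
        FOOD_ID = "urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da"  # TWIX, from the question
        # Hypothetical map of nutrient URNs you care about (only "fat" comes from the question)
        NUTRIENTS = {
            "fat": "urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd",
            # "protein": "urn:uuid:...", etc.
        }

        def fetch_all_nutrients():
            """Issue one analysis request per nutrient and merge the JSON results."""
            results = {}
            for name, urn in NUTRIENTS.items():
                resp = requests.get(
                    "http://api.esha.com/analysis",
                    params={"apikey": API_KEY, "fo": FOOD_ID, "n": urn},
                )
                resp.raise_for_status()
                results[name] = resp.json()
            return results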

    Read the article

  • Is there a way to manage browser [Chrome] session cookies with a fast toggle for multiple users?

    - by ResoluteHiker
    I feel I should go into more detail. My wife and I share a laptop and browse email. Currently we keep coming back to each other's Gmail accounts and having to log out and log back in. Are there any good extensions or add-ons that would let us toggle back and forth between accounts? This does not necessarily need to apply just to Gmail; it could cover any cookie, session, etc. I'd be willing to use Firefox if such an extension exists for it as well. Much appreciated!

    Read the article

  • Thoughts on my new template language?

    - by Ralph
    Let's start with an example: using "html5" using "extratags" html { head { title "Ordering Notice" jsinclude "jquery.js" } body { h1 "Ordering Notice" p "Dear @name," p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}." p "Here are the items you've ordered:" table { tr { th "name" th "price" } for(@item in @item_list) { tr { td @item.name td @item.price } } } if(@ordered_warranty) p "Your warranty information will be included in the packaging." p(class="footer") { "Sincerely," br @company } } } The "using" keyword indicates which tags to use. "html5" might include all the html5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want to. The "extratags" library for example might add an extra tag called "jsinclude", which gets replaced with something like <script type="text/javascript" src="@content"></script> Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter @variable|filter:@arg1,arg2="y" Attributes can be passed to tags by including them in (), like p(class="classname"). Some questions: Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else? Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? Do you like the attribute syntax? (round brackets) I'll add more questions in a few minutes, once I get some feedback.
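
    As an illustration of the proposed @variable|filter:args syntax (not the author's implementation), here is a rough Python sketch of how such expressions could be tokenized; the regular expression and names are hypothetical.

        import re

        # Matches e.g. "@ship_date|dateformat" or "@variable|filter:@arg1,arg2=\"y\""
        EXPR = re.compile(r"@(?P<var>\w+)(?:\|(?P<filter>\w+)(?::(?P<args>[^\s}]+))?)?")

        def parse_expressions(text):
            """Return (variable, filter, raw_args) tuples found in a template string."""
            return [(m.group("var"), m.group("filter"), m.group("args"))
                    for m in EXPR.finditer(text)]

        # Example: the string from the sample template
        print(parse_expressions("It's scheduled to ship on {@ship_date|dateformat}."))
        # -> [('ship_date', 'dateformat', None)]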

    Read the article

  • Types of quotes for an HTML templating language

    - by Ralph
    I'm developing a templating language, and now I'm trying to decide what I should do with quotes. I'm thinking about having 3 different types of quotes which are all handled differently: backtick ` (expand variables: ?, escape sequences: no, escape HTML: no), double quote " (expand variables: yes, escape sequences: yes, escape HTML: yes), single quote ' (expand variables: no, escape sequences: ?, escape HTML: yes). Backticks: Backticks are meant to be used for outputting JavaScript or unescaped HTML. It's often handy to be able to pass variables into JS, but it could also cause issues with things being treated as variables that shouldn't be. My variables are PHP-style ($var), so I'm thinking that might mess with jQuery pretty badly... but if I disable variable expansion with backticks, then I'm not sure how I would insert a variable into a JS code block? Single Quotes: Not sure if escape sequences like \n should be treated as literals or converted. I find it pretty rare that I want to disable escape sequences, but if you do, you could use backticks. So I'm leaning towards "yes" for this one, but that would be contrary to how PHP does it. Double Quotes: Pretty certain I want everything enabled for this one. Modifiers: I'm also thinking about adding modifiers like @ or r in front of the string that would change some of these options to enable a few more combinations. I would need 9 different quotes, or 3 quotes and 2 modifiers, to get every combination, wouldn't I? My language also supports "filters" which can be applied against any "term" (number, variable, string), so you could always write something like "blah blah $var blah"|expandvars or "my string"|escapehtml Thoughts? What would you prefer? What would be least confusing/most intuitive?
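
    A small Python sketch of the behaviour matrix above, just to make the combinations concrete; the open questions in the table (marked '?') are filled with placeholder choices here, not decisions.

        import html

        QUOTE_RULES = {
            # quote type: (expand_variables, process_escape_sequences, escape_html)
            "`": (False, False, False),  # backtick: '?' for variable expansion treated as "no" here
            '"': (True,  True,  True),   # double quote
            "'": (False, True,  True),   # single quote: '?' for escape sequences treated as "yes" here
        }

        def process_literal(raw, quote, variables):
            expand, escapes, esc_html = QUOTE_RULES[quote]
            text = raw
            if escapes:
                text = text.replace(r"\n", "\n").replace(r"\t", "\t")  # simplified escape handling
            if expand:
                for name, value in variables.items():
                    text = text.replace("$" + name, str(value))  # PHP-style $var substitution
            if esc_html:
                text = html.escape(text)
            return text

        print(process_literal("blah $var blah", '"', {"var": "<b>hi</b>"}))
        # -> blah &lt;b&gt;hi&lt;/b&gt; blah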

    Read the article

  • JavaScript Class Patterns Revisited: Endgame

    - by Liam McLennan
    I recently described some of the patterns used to simulate classes (types) in JavaScript. But I missed the best pattern of them all. I described a pattern I called constructor function with a prototype that looks like this: function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { toString: function() { return this.name + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); and I mentioned that the problem with this pattern is that it does not provide any encapsulation, that is, it does not allow private variables. Jan Van Ryswyck recently posted the solution, obvious in hindsight, of wrapping the constructor function in another function, thereby allowing private variables through closure. The above example becomes: var Person = (function() { // private variables go here var name,age; function constructor(n, a) { name = n; age = a; } constructor.prototype = { toString: function() { return name + " is " + age + " years old."; } }; return constructor; })(); var john = new Person("John Galt", 50); console.log(john.toString()); Now we have prototypal inheritance and encapsulation. The important thing to understand is that the constructor, and the toString function both have access to the name and age private variables because they are in an outer scope and they become part of the closure.

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to perform the world update either on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quota usage. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore), fetching as many as 20 static variables (from the datastore or persisted in server memory), and writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
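
    For the "update on GAE" option, the cron job boils down to a handler that reads, recomputes and batch-writes the entities. A rough sketch using the older App Engine Python runtime's ndb API; the model, property names, update rule and route are made up for illustration.

        from google.appengine.ext import ndb
        import webapp2

        class GameEntity(ndb.Model):
            # stand-ins for the "5 private entity variables" from the question
            hp = ndb.IntegerProperty()
            gold = ndb.IntegerProperty()
            # ... three more ...

        class WorldUpdateHandler(webapp2.RequestHandler):
            def get(self):
                # invoked by a cron.yaml entry once per minute
                entities = GameEntity.query().fetch()
                for e in entities:
                    e.gold += 1          # placeholder for the real world-update rule
                ndb.put_multi(entities)  # batched datastore write instead of 10,000 single puts

        app = webapp2.WSGIApplication([("/tasks/world_update", WorldUpdateHandler)])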

    Read the article

  • Data classes: getters and setters or different method design

    - by Frog
    I've been trying to design an interface for a data class I'm writing. This class stores styles for characters, for example whether the character is bold, italic or underlined, but also the font size and the font family. So it has different types of member variables. The easiest way to implement this would be to add getters and setters for every member variable, but this just feels wrong to me. It feels far more logical (and more OOP) to call style.format(BOLD, true) instead of style.setBold(true), so to use logical methods instead of getters/setters. But I am facing two problems while implementing these methods: I would need a big switch statement over all member variables, since you can't access a variable by the contents of a string in C++. Moreover, you can't overload by return type, which means you can't write one getter like style.getFormatting(BOLD) (I know there are some tricks to do this, but they don't allow for parameters, which I would obviously need). However, if I were to implement getters and setters, there are also issues. I would have to duplicate quite a lot of code because styles can also have parent styles, which means the getters have to look not only at the member variables of this style, but also at the variables of the parent styles. Because I wasn't able to figure out how to do this, I asked a question a couple of weeks ago. See Object Oriented Programming: getters/setters or logical names. But in that question I didn't stress that it would be just a data object and that I'm not making a text rendering engine, which is why one of the people who answered suggested I ask another question while making that clear (because his solution, the decorator pattern, isn't suitable for my problem). So please note that I'm not creating my own text rendering engine; I just use these classes to store data. Because I still haven't been able to find a solution to this problem I'd like to ask this question again: how would you design a styles class like this? And why would you do it that way? Thanks in advance!
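
    One language-agnostic way to avoid both the big switch and the duplicated parent lookup is to store the formatting options in a map keyed by an enum and walk the parent chain in a single getter. Sketched in Python rather than C++ purely for brevity; the names are illustrative, not the asker's design.

        from enum import Enum

        class Fmt(Enum):
            BOLD = 1
            ITALIC = 2
            FONT_SIZE = 3
            FONT_FAMILY = 4

        class Style:
            def __init__(self, parent=None):
                self.parent = parent
                self._values = {}          # Fmt -> value, only the keys set on this style

            def format(self, key, value):  # style.format(Fmt.BOLD, True)
                self._values[key] = value

            def get(self, key, default=None):
                # fall back to the parent style chain when this style doesn't set the key
                if key in self._values:
                    return self._values[key]
                if self.parent is not None:
                    return self.parent.get(key, default)
                return default

        base = Style()
        base.format(Fmt.FONT_SIZE, 12)
        child = Style(parent=base)
        child.format(Fmt.BOLD, True)
        print(child.get(Fmt.FONT_SIZE), child.get(Fmt.BOLD))  # 12 True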

    Read the article

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try to keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following: class User < ActiveRecord::Base has_many :books1, :books2, :books3, :books4, :books5 end class Books1 < ActiveRecord::Base belongs_to :user end class Books2 < ActiveRecord::Base belongs_to :user end class Books3 < ActiveRecord::Base belongs_to :user end I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. So, a few questions: 1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this? 2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this? 3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author'). Thank you!
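
    Whatever the framework, the "many tables behind one interface" part usually comes down to a small dispatcher that picks the right model class for a user, so only one association is ever touched. Since no Ruby is added here, this is a Python sketch of the idea with hypothetical classes, not a Rails recipe.

        # Hypothetical per-table model classes; in Rails these would be Books1..BooksN
        class Books1: pass
        class Books2: pass
        class Books3: pass

        BOOK_TABLES = {1: Books1, 2: Books2, 3: Books3}

        def books_for(user):
            """Return the one book model class that holds this user's rows.

            Assumes the user record stores which table it was assigned at creation time.
            """
            return BOOK_TABLES[user.book_table_no]

        # Callers never reference a numbered class directly, e.g.:
        #     books_for(user).find_by_author("Author")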

    Read the article

  • CakePHP sending multiple lines of the same cookie information in the response header. Why?

    - by Vicer
    Hi all, I am having this problem with one of my CakePHP applications. My application displays a big HTML table on the page. I noticed that when the table goes beyond a certain size limit, IE cannot display that page. While trying to figure out why this happens, I noticed that my HTML response headers contain a HUGE number of lines repeating the same thing. Response Headers ------------------------------ Date Thu, 20 May 2010 04:18:10 GMT Server Apache/2.0.63 (Win32) mod_ssl/2.0.63 OpenSSL/0.9.7m PHP/5.2.10 X-Powered-By PHP/5.2.10 P3P CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Set-Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp [... the same CAKEPHP Set-Cookie value repeated many more times, with expiry times of 10:18:12 and 10:18:13 ...] Keep-Alive timeout=15, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html ========================================================================== Request Headers -------------------------- Host localhost:8080 User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://localhost:8080/mywebapp/section2/bigtable/1 Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1 Authorization Basic cGt1bWFyYTpwa3VtYXJh Notice the huge set of repeated lines at 'Set-Cookie' in the response header. This only happens when I try to display large tables. Does anyone have any clue as to what might be causing this? Any help in finding the issue is appreciated. I am using CakePHP 1.2.5. As far as I know, I am not messing with any set-cookie functions, but yes, I am using session variables.

    Read the article

  • Writing an optimised and efficient search engine with MySQL and ColdFusion

    - by Mel
    I have a search page with the following scenarios listed below. I was told there was a better way to do it, but not how, and that I am using too many if statements, and that it's prone to causing an error through url manipulation: Search.cfm will processes a search made from a search bar present on all pages, with one search input (titleName). If search.cfm is accessed manually (through URL not through using the simple search bar on all pages) it displays an advanced search form with three inputs (titleName, genreID, platformID) or it evaluates searchResponse variable and decides what to do. If simple search query is blank, has no results, or less than 3 characters it displays an error If advanced search query is blank, has no results, or less than 3 characters it displays an error If any successful search returns results, they come back normally. The top-of-page logic is as follows: <!---SET DEFAULT VARIABLE---> <cfparam name="variables.searchResponse" default=""> <!---CHECK TO SEE IF SIMPLE SEARCH A FORM WAS SUBMITTED AND EXECUTE SEARCH IF IT WAS---> <cfif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) GTE 3> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="simpleSearch" searchString="#Form.titleName#" returnvariable="simpleSearchResult"> <cfset variables.searchResponse = "simpleSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.simpleSearchResult") AND simpleSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> <!---CHECK IF ADVANCED SEARCH FORM WAS SUBMITTED---> <cfif IsDefined("Form.AdvancedSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.advancedSearch") AND Len(Trim(Form.titleName)) GTE 2> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="advancedSearch" returnvariable="advancedSearchResult" titleName="#Form.titleName#" genreID="#Form.genreID#" platformID="#Form.platformID#"> <cfset variables.searchResponse = "advancedSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.advancedSearchResult") AND advancedSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> I'm using the searchResponse variable to decide what the the page displays, based on the following scenarios: <!---ALWAYS DISPLAY SIMPLE SEARCH BAR AS IT'S PART OF THE HEADER---> <form name="simpleSearch" action="search.cfm" method="post"> <input type="hidden" name="simpleSearch" /> <input type="text" name="titleName" /> <input type="button" value="Search" onclick="form.submit()" /> </form> <!---IF NO SEARCH WAS SUBMITTED DISPLAY DEFAULT FORM---> <cfif searchResponse IS ""> <h1>Advanced Search</h1> <!---DISPLAY FORM---> <form name="advancedSearch" action="search.cfm" method="post"> <input type="hidden" name="advancedSearch" /> <input type="text" name="titleName" /> <input type="text" name="genreID" /> <input type="text" name="platformID" /> <input type="button" value="Search" onclick="form.submit()" /> </form> </cfif> <!---IF SEARCH IS BLANK OR LESS THAN 3 CHARACTERS DISPLAY ERROR MESSAGE---> <cfif searchResponse IS "invalidString"> <cfoutput> <h1>INVALID SEARCH</h1> </cfoutput> </cfif> <!---IF SEARCH WAS MADE BUT NO RESULTS WERE FOUND---> <cfif searchResponse IS "noResult"> <cfoutput> <h1>NO RESULT FOUND</h1> 
</cfoutput> </cfif> <!---IF SIMPLE SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "simpleSearchResult"> <cfoutput> <h1>Search Results</h1> </cfoutput> <cfoutput query="simpleSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> <!---IF ADVANCED SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "advancedSearchResult"> <cfoutput> <h1>Search Results</h1> <p>Your search for "#Form.titleName#" returned #advancedSearchResult.RecordCount# result(s).</p> </cfoutput> <cfoutput query="advancedSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> Is my logic a) not efficient because my if statements/is there a better way to do this? And b) Can you see any scenarios where my code can break? I've tested it but I have not been able to find any issues with it. And I have no way of measuring performance. Any thoughts and ideas would be greatly appreciated. Many thanks

    Read the article

  • [R] Merge multiple data frames - Error in match.names(clabs, names(xi)) : names do not match previous names

    - by Jasmine
    Hi all- I'm getting some really bizarre stuff while trying to merge multiple data frames. Help! I need to merge a bunch of data frames by the columns 'RID' and 'VISCODE'. Here is an example of what it looks like: d1 = data.frame(ID = sample(9, 1:100), RID = c(2, 5, 7, 9, 12), VISCODE = rep('bl', 5), value1 = rep(16, 5)) d2 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 7, 7, 7), VISCODE = rep(c('bl', 'm06', 'm12'), 3), value2 = rep(100, 9)) d3 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 9,9,9), VISCODE = rep(c('bl', 'm06', 'm12'), 3), value3 = rep("a", 9), values3.5 = rep("c", 9)) d4 = data.frame(ID =sample(8, 1:100), RID = c(2, 2, 5, 5, 5, 7, 7, 7, 9), VISCODE = c(c('bl', 'm12'), rep(c('bl', 'm06', 'm12'), 2), 'bl'), value4 = rep("b", 9)) dataList = list(d1, d2, d3, d4) I looked at the answers to the question titled "Merge several data.frames into one data.frame with a loop." I used the reduce method suggested there as well as a loop I wrote: try1 = mymerge(dataList) try2 <- Reduce(function(x, y) merge(x, y, all= TRUE, by=c("RID", "VISCODE")), dataList, accumulate=F) where dataList is a list of data frames and mymerge is: mymerge = function(dataList){ L = length(dataList) mdat = dataList[[1]] for(i in 2:L){ mdat = merge(mdat, dataList[[i]], by.x = c("RID", "VISCODE"), by.y = c("RID", "VISCODE"), all = TRUE) } mdat } For my test data and subsets of my real data, both of these work fine and produce exactly the same results. However, when I use larger subsets of my data, they both break down and give me the following error: Error in match.names(clabs, names(xi)) : names do not match previous names. The really weird thing is that using this works: dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:47, ]) And using this fails: dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, ]) As far as I can tell, there is nothing special about row 48 of faq. Likewise, using this works: dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:47, ]) And using this fails: dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:48, ]) Row 48 in faq and row 48 in pdx have the same values for RID and VISCODE, the same value for EXAMDATE (something I'm not matching on) and different values for ID (another thing I'm not matching on). Besides the matching RID and VISCODE, I see anything special about them. They don't share any other variable names. This same scenario occurs elsewhere in the data without problems. To add icing on the complication cake, this doesn't even work: dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, 2:3]) where columns 2 and 3 are "RID" and "VISCODE". 48 isn't even the magic number because this works: dataList = list(demog[1:500,], neurobat[1:500,], apoe[1:500,], mmse[1:457,]) while using mmse[1:458, ] fails. I can't seem to come up with test data that causes the problem. Has anyone had this problem before? Any better ideas on how to merge? Thanks for your help! Jasmine

    Read the article

  • Can I query a DOM Document with an XPath expression from multiple threads safely?

    - by Dan
    I plan to use dom4j DOM Document as a static cache in an application where multiples threads can query the document. Taking into the account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually does prove that operation is safe? package test.concurrent_dom; import org.dom4j.Document; import org.dom4j.DocumentException; import org.dom4j.DocumentHelper; import org.dom4j.Element; import org.dom4j.Node; /** * Hello world! * */ public class App extends Thread { private static final String xml = "<Session>" + "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText1</child1>" + "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText2</child2>" + "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText3</child3>" + "</Session>"; private static Document document; private static Element root; public static void main( String[] args ) throws DocumentException { document = DocumentHelper.parseText(xml); root = document.getRootElement(); Thread t1 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child1"); if(!n1.getText().equals("ChildText1")){ System.out.println("WRONG!"); } } } }; Thread t2 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child2"); if(!n1.getText().equals("ChildText2")){ System.out.println("WRONG!"); } } } }; Thread t3 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child3"); if(!n1.getText().equals("ChildText3")){ System.out.println("WRONG!"); } } } }; t1.start(); t2.start(); t3.start(); System.out.println( "Hello World!" ); } }

    Read the article

  • Does .NET have a built-in IEnumerable for multiple collections?

    - by Bryce Wagner
    I need an easy way to iterate over multiple collections without actually merging them, and I couldn't find anything built into .NET that looks like it does that. It feels like this should be a somewhat common situation. I don't want to reinvent the wheel. Is there anything built in that does something like this: public class MultiCollectionEnumerable<T> : IEnumerable<T> { private MultiCollectionEnumerator<T> enumerator; public MultiCollectionEnumerable(params IEnumerable<T>[] collections) { enumerator = new MultiCollectionEnumerator<T>(collections); } public IEnumerator<T> GetEnumerator() { enumerator.Reset(); return enumerator; } IEnumerator IEnumerable.GetEnumerator() { enumerator.Reset(); return enumerator; } private class MultiCollectionEnumerator<T> : IEnumerator<T> { private IEnumerable<T>[] collections; private int currentIndex; private IEnumerator<T> currentEnumerator; public MultiCollectionEnumerator(IEnumerable<T>[] collections) { this.collections = collections; this.currentIndex = -1; } public T Current { get { if (currentEnumerator != null) return currentEnumerator.Current; else return default(T); } } public void Dispose() { if (currentEnumerator != null) currentEnumerator.Dispose(); } object IEnumerator.Current { get { return Current; } } public bool MoveNext() { if (currentIndex >= collections.Length) return false; if (currentIndex < 0) { currentIndex = 0; if (collections.Length > 0) currentEnumerator = collections[0].GetEnumerator(); else return false; } while (!currentEnumerator.MoveNext()) { currentEnumerator.Dispose(); currentEnumerator = null; currentIndex++; if (currentIndex >= collections.Length) return false; currentEnumerator = collections[currentIndex].GetEnumerator(); } return true; } public void Reset() { if (currentEnumerator != null) { currentEnumerator.Dispose(); currentEnumerator = null; } this.currentIndex = -1; } } }
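
    For comparison, the same "iterate several collections without merging them" behaviour is a one-liner in Python's standard library (in LINQ, Concat and SelectMany cover similar ground); this is shown only as an analogue of what the hand-rolled enumerator above does, not as a .NET answer.

        from itertools import chain

        a = [1, 2, 3]
        b = (4, 5)
        c = {6}

        # chain lazily walks each collection in turn, without building a merged list
        for item in chain(a, b, c):
            print(item)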

    Read the article

  • How to use multiple flatpages models in a Django app?

    - by the_drow
    I have multiple models that can be converted to flatpages but have to have some extra information (For example I have an about us page but I also have a blog). However I understand that there must be only one flatpages model since the middleware only returns the flatpages instance and does not resolve the child models. What do I have to do? EDIT: It seems I need to change the views. Here's the current code: from django.contrib.flatpages.models import FlatPage from django.template import loader, RequestContext from django.shortcuts import get_object_or_404 from django.http import HttpResponse, HttpResponseRedirect from django.conf import settings from django.core.xheaders import populate_xheaders from django.utils.safestring import mark_safe from django.views.decorators.csrf import csrf_protect DEFAULT_TEMPLATE = 'flatpages/default.html' # This view is called from FlatpageFallbackMiddleware.process_response # when a 404 is raised, which often means CsrfViewMiddleware.process_view # has not been called even if CsrfViewMiddleware is installed. So we need # to use @csrf_protect, in case the template needs {% csrf_token %}. # However, we can't just wrap this view; if no matching flatpage exists, # or a redirect is required for authentication, the 404 needs to be returned # without any CSRF checks. Therefore, we only # CSRF protect the internal implementation. def flatpage(request, url): """ Public interface to the flat page view. Models: `flatpages.flatpages` Templates: Uses the template defined by the ``template_name`` field, or `flatpages/default.html` if template_name is not defined. Context: flatpage `flatpages.flatpages` object """ if not url.endswith('/') and settings.APPEND_SLASH: return HttpResponseRedirect("%s/" % request.path) if not url.startswith('/'): url = "/" + url # Here instead of getting the flat page it needs to find if it has a page with a child model. f = get_object_or_404(FlatPage, url__exact=url, sites__id__exact=settings.SITE_ID) return render_flatpage(request, f) @csrf_protect def render_flatpage(request, f): """ Internal interface to the flat page view. """ # If registration is required for accessing this page, and the user isn't # logged in, redirect to the login page. if f.registration_required and not request.user.is_authenticated(): from django.contrib.auth.views import redirect_to_login return redirect_to_login(request.path) if f.template_name: t = loader.select_template((f.template_name, DEFAULT_TEMPLATE)) else: t = loader.get_template(DEFAULT_TEMPLATE) # To avoid having to always use the "|safe" filter in flatpage templates, # mark the title and content as already safe (since they are raw HTML # content in the first place). f.title = mark_safe(f.title) f.content = mark_safe(f.content) # Here I need to be able to configure what I am passing in the context c = RequestContext(request, { 'flatpage': f, }) response = HttpResponse(t.render(c)) populate_xheaders(request, response, FlatPage, f.id) return response
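
    One common approach for this kind of fallback view is to try the specialised child models first and only fall back to the plain FlatPage. A rough sketch of how the lookup in the view above could change, with the child model list treated as hypothetical (e.g. BlogPage, AboutPage as multi-table-inheritance subclasses of FlatPage):

        from django.conf import settings
        from django.contrib.flatpages.models import FlatPage
        from django.shortcuts import get_object_or_404

        # Hypothetical multi-table-inheritance children of FlatPage
        CHILD_MODELS = []  # e.g. [BlogPage, AboutPage]

        def get_page(url):
            """Return the most specific page object matching this URL."""
            for model in CHILD_MODELS:
                try:
                    return model.objects.get(url__exact=url,
                                             sites__id__exact=settings.SITE_ID)
                except model.DoesNotExist:
                    continue
            # nothing specialised matched; fall back to the plain flatpage
            return get_object_or_404(FlatPage, url__exact=url,
                                     sites__id__exact=settings.SITE_ID)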

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients I also use it for mobile clients (PDA with the J9 Foundation profile). Over time the library, which started as a single project, spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars. Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using include/exclude jar task features. This is mostly needed due to the generated jar size constraints for the mobile projects. Migrating now to Maven, I wonder if there are any best practices to arrive at something similar, so I consider the following scenarios: [1] - in addition to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar), using Maven Jar Plugin or Maven Assembly Plugin filtering/includes ... I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters). [2] - Create additional Maven projects for all the specialized clients/projects, which will "wrap" "my-lib" and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the "my-lib" project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it will leave the "my-lib" project intact (with no additional classifiers & artifacts), it will basically double the number of projects, since for every client project I'll have one just to collect the needed classes from the "my-util" library (the "my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency). [3] - Into every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin). What is the best practice for solving this kind of problem in the "Maven way"?

    Read the article

  • Maven-ear-plugin - excluding multiple modules i.e. jars, wars etc.

    - by James Murphy
    I've been using the Maven EAR plugin for creating my ear files for a new project. I noticed in the plugin documentation you can specify exclude statements for modules. For example the configuration for my plugin is as follows... <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> <version>2.4.1</version> <configuration> <jboss> <version>5</version> </jboss> <modules> <!-- Include the templatecontroller.jar inside the ear --> <jarModule> <groupId>com.kewill.kdm</groupId> <artifactId>templatecontroller</artifactId> <bundleFileName>templatecontroller.jar</bundleFileName> <includeInApplicationXml>true</includeInApplicationXml> </jarModule> <!-- Exclude the following classes from the ear --> <jarModule> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> <excluded>true</excluded> </jarModule> <jarModule> <groupId>antlr</groupId> <artifactId>antlr</artifactId> <excluded>true</excluded> </jarModule> ... declare multiple excludes <security> <security-role id="SecurityRole_1234"> <role-name>admin</role-name> </security-role> </security> </configuration> </plugin> </plugins> This approach is absolutely fine with small projects where you have say 4-5 modules to exclude. However, in my project I have 30+ and we've only just started the project so as it expands this is likely to grow. Besides explicitly declaring exclude statements per module is it possible to use wildcards or and exclude all maven dependencies flag to only include those modules i declare and exclude everything else? Is anyone aware of a more elegant solution?

    Read the article

  • Django : presenting a form very different from the model and with multiple field values in a Django-

    - by sebpiq
    Hi! I'm currently doing a firewall management application for Django, here's the (simplified) model: class Port(models.Model): number = models.PositiveIntegerField(primary_key=True) application = models.CharField(max_length=16, blank=True) class Rule(models.Model): port = models.ForeignKey(Port) ip_source = models.IPAddressField() ip_mask = models.IntegerField(validators=[MaxValueValidator(32)]) machine = models.ForeignKey("vmm.machine") What I would like to do, however, is to display to the user a form for entering rules, but with a very different organization from the model: Port 80 O Not open O Everywhere O Specific addresses: --------- delete field --------- delete field + add address field Port 443 ... etc Where Not open means that there is no rule for the given port, Everywhere means that there is only ONE rule (0.0.0.0/0) for the given port, and with specific addresses, you can add as many addresses as you want (I did this with jQuery), which will make as many rules. Now I did a version completely "handmade", meaning that I create the forms entirely in my templates, set input names with a prefix, and parse all the POSTed stuff in my view (which is quite painful, and means that there's no point in using a web framework). I also have a class which aggregates the rules together to easily pre-fill the forms with the information "not open, everywhere, ...". I'm passing a list of those to the template, therefore it acts as an interface between my model and my "handmade" form: class MachinePort(object): def __init__(self, machine, port): self.machine = machine self.port = port @property def fully_open(self): for rule in self.port.rule_set.filter(machine=self.machine): if ipaddr.IPv4Network("%s/%s" % (rule.ip_source, rule.ip_mask)) == ipaddr.IPv4Network("0.0.0.0/0"): return True else : return False @property def partly_open(self): return bool(self.port.rule_set.filter(machine=self.machine)) and not self.fully_open @property def not_open(self): return not self.partly_open and not self.fully_open But all this is rather ugly! Does anyone know if there is a classy way to do this? In particular with the form... I don't know how to have a form that can have an undefined number of fields, nor how to transform these fields into Rule objects (because all the rule fields would have to be gathered from the form), nor how to save multiple objects... Well I could try to hack into the Form class, but that seems like too much work for such a special case. Is there any nice feature I'm missing?
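
    Django's formsets are the usual answer to "an undefined number of fields of the same shape": one per-port formset of address rows plus a small mode field covers the not-open / everywhere / specific-addresses cases. A minimal sketch, leaving out the per-port grouping, templates and the actual Rule-saving logic; form class names are made up.

        from django import forms
        from django.forms import formset_factory

        class PortModeForm(forms.Form):
            MODES = [("closed", "Not open"),
                     ("everywhere", "Everywhere"),
                     ("specific", "Specific addresses")]
            mode = forms.ChoiceField(choices=MODES, widget=forms.RadioSelect)

        class AddressForm(forms.Form):
            ip_source = forms.IPAddressField()  # mirrors the model field in the question
            ip_mask = forms.IntegerField(min_value=0, max_value=32)

        # extra=1 renders one empty row; JavaScript can clone it for "+ add address field"
        AddressFormSet = formset_factory(AddressForm, extra=1, can_delete=True)

        def handle_port(request, port):
            mode_form = PortModeForm(request.POST or None, prefix="mode-%s" % port.number)
            addresses = AddressFormSet(request.POST or None, prefix="addr-%s" % port.number)
            if mode_form.is_valid() and addresses.is_valid():
                for row in addresses.cleaned_data:
                    if row and not row.get("DELETE"):
                        pass  # e.g. create Rule(port=port, **row) here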

    Read the article

  • using '.each' method: how do I get the indexes of multiple ordered lists to each begin at [0]?

    - by shecky
    I've got multiple divs, each with an ordered list (various lengths). I'm using jquery to add a class to each list item according to its index (for the purpose of columnizing portions of each list). What I have so far ... <script type="text/javascript"> /* Objective: columnize list items from a single ul or ol in a pre-determined number of columns 1. get the index of each list item 2. assign column class according to li's index */ $(document).ready(function() { $('ol li').each(function(index){ // assign class according to li's index ... index = li number -1: 1-6 = 0-5; 7-12 = 6-11, etc. if ( index <= 5 ) { $(this).addClass('column-1'); } if ( index > 5 && index < 12 ) { $(this).addClass('column-2'); } if ( index > 11 ) { $(this).addClass('column-3'); } // add another class to the first list item in each column $('ol li').filter(function(index) { return index != 0 && index % 6 == 0; }).addClass('reset'); }); // closes li .each func }); // closes doc.ready.func </script> ... succeeds if there's only one list; when there are additional lists, the last column class ('column-3') is added to all remaining list items on the page. In other words, the script is presently indexing continuously through all subsequent lists/list items, rather than being re-set to [0] for each ordered list. Can someone please show me the proper method/syntax to correct/amend this, so that the script addresses/indexes each ordered list anew? many thanks in advance. shecky p.s. the markup is pretty straight-up: <div class="tertiary"> <h1>header</h1> <ol> <li><a href="#" title="a link">a link</a></li> <li><a href="#" title="a link">a link</a></li> <li><a href="#" title="a link">a link</a></li> </ol> </div><!-- END div class="tertiary" -->

    Read the article

  • How to create multiple Repository objects inside a Repository class using Unit Of Work?

    - by Santosh
    I am newbie to MVC3 application development, currently, we need following Application technologies as requirement MVC3 framework IOC framework – Autofac to manage object creation dynamically Moq – Unit testing Entity Framework Repository and Unit Of Work Pattern of Model class I have gone through many article to explore an basic idea about the above points but still I am little bit confused on the “Repository and Unit Of Work Pattern “. Basically what I understand Unit Of Work is a pattern which will be followed along with Repository Pattern in order to share the single DB Context among all Repository object, So here is my design : IUnitOfWork.cs public interface IUnitOfWork : IDisposable { IPermitRepository Permit_Repository{ get; } IRebateRepository Rebate_Repository { get; } IBuildingTypeRepository BuildingType_Repository { get; } IEEProjectRepository EEProject_Repository { get; } IRebateLookupRepository RebateLookup_Repository { get; } IEEProjectTypeRepository EEProjectType_Repository { get; } void Save(); } UnitOfWork.cs public class UnitOfWork : IUnitOfWork { #region Private Members private readonly CEEPMSEntities context = new CEEPMSEntities(); private IPermitRepository permit_Repository; private IRebateRepository rebate_Repository; private IBuildingTypeRepository buildingType_Repository; private IEEProjectRepository eeProject_Repository; private IRebateLookupRepository rebateLookup_Repository; private IEEProjectTypeRepository eeProjectType_Repository; #endregion #region IUnitOfWork Implemenation public IPermitRepository Permit_Repository { get { if (this.permit_Repository == null) { this.permit_Repository = new PermitRepository(context); } return permit_Repository; } } public IRebateRepository Rebate_Repository { get { if (this.rebate_Repository == null) { this.rebate_Repository = new RebateRepository(context); } return rebate_Repository; } } } PermitRepository .cs public class PermitRepository : IPermitRepository { #region Private Members private CEEPMSEntities objectContext = null; private IObjectSet<Permit> objectSet = null; #endregion #region Constructors public PermitRepository() { } public PermitRepository(CEEPMSEntities _objectContext) { this.objectContext = _objectContext; this.objectSet = objectContext.CreateObjectSet<Permit>(); } #endregion public IEnumerable<RebateViewModel> GetRebatesByPermitId(int _permitId) { // need to implment } } PermitController .cs public class PermitController : Controller { #region Private Members IUnitOfWork CEEPMSContext = null; #endregion #region Constructors public PermitController(IUnitOfWork _CEEPMSContext) { if (_CEEPMSContext == null) { throw new ArgumentNullException("Object can not be null"); } CEEPMSContext = _CEEPMSContext; } #endregion } So here I am wondering how to generate a new Repository for example “TestRepository.cs” using same pattern where I can create more then one Repository object like RebateRepository rebateRepo = new RebateRepository () AddressRepository addressRepo = new AddressRepository() because , what ever Repository object I want to create I need an object of UnitOfWork first as implmented in the PermitController class. So if I would follow the same in each individual Repository class that would again break the priciple of Unit Of Work and create multiple instance of object context. So any idea or suggestion will be highly appreciated. Thank you
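
    The pattern question is language-neutral: repositories never construct a context themselves, they always receive the shared one from the unit of work, and anything that needs several repositories asks that same unit of work for them. A compact Python sketch of that shape (the names mirror the C# classes above, but the code is illustrative only):

        class PermitRepository:
            def __init__(self, context):
                self.context = context      # always injected, never created here

        class RebateRepository:
            def __init__(self, context):
                self.context = context

        class UnitOfWork:
            def __init__(self, context):
                self._context = context
                self._repos = {}

            def _repo(self, cls):
                # lazily create each repository, all sharing the one context
                if cls not in self._repos:
                    self._repos[cls] = cls(self._context)
                return self._repos[cls]

            @property
            def permits(self):
                return self._repo(PermitRepository)

            @property
            def rebates(self):
                return self._repo(RebateRepository)

        # A controller (or another repository) that needs several repositories
        # receives the UnitOfWork and never news up a context of its own:
        #     uow = UnitOfWork(context=object())
        #     uow.permits, uow.rebates   # same context underneath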

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2. tile_static memory You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array // each tile of threads has its own copy of locA, // shared among the threads of the tile tile_static float locA[16][16]; Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors. tile_barrier In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it, from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait. 15: // Given a tiled_index object named t_idx 16: t_idx.barrier.wait(); 17: // more code …in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal. Tiled Matrix Multiplication Example – part 2 So now that we added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code. 
01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: static const int TS = 16; 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M,N,vC); 07: parallel_for_each(c.grid.tile< TS, TS >(), 08: [=] (tiled_index< TS, TS> t_idx) restrict(direct3d) 09: { 10: int row = t_idx.local[0]; int col = t_idx.local[1]; 11: float sum = 0.0f; 12: for (int i = 0; i < W; i += TS) { 13: tile_static float locA[TS][TS], locB[TS][TS]; 14: locA[row][col] = a(t_idx.global[0], col + i); 15: locB[row][col] = b(row + i, t_idx.global[1]); 16: t_idx.barrier.wait(); 17: for (int k = 0; k < TS; k++) 18: sum += locA[row][k] * locB[k][col]; 19: t_idx.barrier.wait(); 20: } 21: c[t_idx.global] = sum; 22: }); 23: } Notice that all the code up to line 9 is the same as per the changes we made in part 1 of tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool. Why would I introduce this tiling complexity into my code? Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g. 
algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This posts assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation). Basic structure and parameters Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent) a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid So it looks something like this (with generous returns for more palatable formatting) assuming we are dealing with a 2-dimensional space: // some_code_A parallel_for_each( g, // g is of type grid<2> [ ](index<2> idx) restrict(direct3d) { // kernel code } ); // some_code_B The parallel_for_each will execute the body of the lambda (which must have the restrict modifier), on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (aka as the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next. You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was setup by some_code_A as follows: extent<2> e(2,3); grid<2> g(e); ...then given that: e.size()==6, e[0]==2, and e[1]=3 ...the six index<2> objects it generates (and hence the values that your lambda would receive) are:    (0,0) (1,0) (0,1) (1,1) (0,2) (1,2) So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?" Passing data in and out It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then typically return some results out. The way you pass data into the kernel, is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(direct3d), where the empty square brackets means that no variables were captured. If instead I write it like this [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error. 
This has to do with one of the direct3d restrictions, where only one type can be capture by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, that I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn’t be able to observe changes to it after the parallel_for_each call (e.g. in some_code_B region since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this [av, &ar](index<2> idx) restrict(direct3d) where av is a variable of type array_view and ar is a variable of type array - the point being you can be very specific about what variables you capture and how. So it looks like from a large data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array are ready to be used in the some_code_B region. We'll talk more about all this in future posts… (a)synchronous Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately by the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.   That's all for now, we'll revisit the parallel_for_each description, once we introduce properly array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How to save data from multiple views of an iPhone app?

    - by DownUnder
    Hi Everyone. I'm making an app where I need to save the text in multiple views in the app when the app quits. I also need to be able to remove all of the data from just one of those views, and when the app quits it's possible that not all of those views will have been created yet. After reading this post I thought perhaps it would be good to use a singleton that manages my app data, which loads the data when it is first requested and saves it when the app quits. Then in each view where I need to save data I can just set it on the singleton. I gave it a go but have run into some issues. At first I didn't synthesize the properties (as in the post I was using as a guide) but the compiler told me I needed to make getters and setters, so I did. Now when my applicationWillTerminate: gets called the app crashes and the console says "Program received signal: “EXC_BAD_ACCESS”. kill quit". Is anyone able to tell me what I'm doing wrong, or suggest a better approach to saving the data?

        //SavedData.h
        #import <Foundation/Foundation.h>

        #define kFileName @"appData.plist"

        @interface SavedData : NSObject {
            NSString *information;
            NSString *name;
            NSString *email;
            NSString *phone;
            NSString *mobile;
        }

        @property(assign) NSString *information;
        @property(assign) NSString *name;
        @property(assign) NSString *email;
        @property(assign) NSString *phone;
        @property(assign) NSString *mobile;

        + (SavedData *)singleton;
        + (NSString *)dataFilePath;
        + (void)applicationWillTerminate:(NSNotification *)notification;

        @end

        //SavedData.m
        #import "SavedData.h"

        @implementation SavedData

        @synthesize information;
        @synthesize name;
        @synthesize email;
        @synthesize phone;
        @synthesize mobile;

        static SavedData * SavedData_Singleton = nil;

        + (SavedData *)singleton {
            if (nil == SavedData_Singleton) {
                SavedData_Singleton = [[SavedData_Singleton alloc] init];
                NSString *filePath = [self dataFilePath];
                if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                    NSMutableArray *array = [[NSMutableArray alloc] initWithContentsOfFile:filePath];
                    information = [array objectAtIndex:0];
                    name = [array objectAtIndex:1];
                    email = [array objectAtIndex:2];
                    phone = [array objectAtIndex:3];
                    mobile = [array objectAtIndex:4];
                    [array release];
                }
                UIApplication *app = [UIApplication sharedApplication];
                [[NSNotificationCenter defaultCenter] addObserver:self
                                                         selector:@selector(applicationWillTerminate:)
                                                             name:UIApplicationWillTerminateNotification
                                                           object:app];
            }
            return SavedData_Singleton;
        }

        + (NSString *)dataFilePath {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *DocumentsDirectory = [paths objectAtIndex:0];
            return [DocumentsDirectory stringByAppendingPathComponent:kFileName];
        }

        + (void)applicationWillTerminate:(NSNotification *)notification {
            NSLog(@"Application will terminate received");
            NSMutableArray *array = [[NSMutableArray alloc] init];
            [array addObject:information];
            [array addObject:name];
            [array addObject:email];
            [array addObject:phone];
            [array addObject:mobile];
            [array writeToFile:[self dataFilePath] atomically:YES];
            [array release];
        }

        @end

    Then when I want to use it I do

        myLabel.text = [SavedData singleton].information;

    And when I change the field

        [SavedData singleton].information = @"my string";

    Any help will be very much appreciated!

    Read the article

  • Sending changes from multiple tables in disconnected dataset to SQLServer...

    - by Stecy
    We have a third party application that accepts calls using an XML-RPC mechanism for calling stored procs. We send a ZIP-compressed dataset containing multiple tables with a bunch of updates/deletes/inserts using this mechanism. On the other end, a CLR sproc decompresses the data and gets the dataset. Then the following code gets executed:

        using (var conn = new SqlConnection("context connection=true"))
        {
            if (conn.State == ConnectionState.Closed)
                conn.Open();

            try
            {
                foreach (DataTable table in ds.Tables)
                {
                    string columnList = "";
                    for (int i = 0; i < table.Columns.Count; i++)
                    {
                        if (i == 0)
                            columnList = table.Columns[0].ColumnName;
                        else
                            columnList += "," + table.Columns[i].ColumnName;
                    }

                    var da = new SqlDataAdapter("SELECT " + columnList + " FROM " + table.TableName, conn);
                    var builder = new SqlCommandBuilder(da);
                    builder.ConflictOption = ConflictOption.OverwriteChanges;
                    da.RowUpdating += onUpdatingRow;
                    da.Update(ds, table.TableName);
                }
            }
            catch (....)
            {
                .....
            }
        }

    Here's the event handler for the RowUpdating event:

        public static void onUpdatingRow(object sender, SqlRowUpdatingEventArgs e)
        {
            if ((e.StatementType == StatementType.Update) && (e.Command == null))
            {
                e.Command = CreateUpdateCommand(e.Row, sender as SqlDataAdapter);
                e.Status = UpdateStatus.Continue;
            }
        }

    and the CreateUpdateCommand method:

        private static SqlCommand CreateUpdateCommand(DataRow row, SqlDataAdapter da)
        {
            string whereClause = "";
            string setClause = "";
            SqlConnection conn = da.SelectCommand.Connection;

            for (int i = 0; i < row.Table.Columns.Count; i++)
            {
                char quoted;
                if ((row.Table.Columns[i].DataType == Type.GetType("System.String")) ||
                    (row.Table.Columns[i].DataType == Type.GetType("System.DateTime")))
                    quoted = '\'';
                else
                    quoted = ' ';

                string val = row[i].ToString();
                if (row.Table.Columns[i].DataType == Type.GetType("System.Boolean"))
                    val = (bool)row[i] ? "1" : "0";

                bool isPrimaryKey = false;
                for (int j = 0; j < row.Table.PrimaryKey.Length; j++)
                {
                    if (row.Table.PrimaryKey[j].ColumnName == row.Table.Columns[i].ColumnName)
                    {
                        if (whereClause != "")
                            whereClause += " AND ";
                        if (row[i] == DBNull.Value)
                            whereClause += row.Table.Columns[i].ColumnName + "=NULL";
                        else
                            whereClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                        isPrimaryKey = true;
                        break;
                    }
                }

                /* Only values for columns that are not part of the primary key can be modified */
                if (!isPrimaryKey)
                {
                    if (setClause != "")
                        setClause += ", ";
                    if (row[i] == DBNull.Value)
                        setClause += row.Table.Columns[i].ColumnName + "=NULL";
                    else
                        setClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                }
            }

            return new SqlCommand("UPDATE " + row.Table.TableName +
                                  " SET " + setClause +
                                  " WHERE " + whereClause, conn);
        }

    However, this is really slow when we have a lot of records. Is there a way to optimize this, or an entirely different way to send lots of updates/deletes on several tables? I would really like to use T-SQL for this but can't figure out a way to send a dataset to a regular sproc. Additional notes: we cannot directly access the SQL Server database, and we tried without compression and it was slower.

    Read the article

  • Manipulate multiple check boxes by ID using unobtrusive java script?

    - by Dean
    I want to be able to select multiple check boxes onmouseover, but instead of applying onmouseover to every individual box, I've been trying to work out how to do so by manipulating the check boxes by ID instead, although I'm not sure where to go beyond using getElementById. So instead of what you see below:

        <html>
        <head>
        <script>
        var Toggle = true;
        var Highlight = false;

        function handleKeyPress(evt) {
            var nbr;
            if (window.Event)
                nbr = evt.which;
            else
                nbr = event.keyCode;
            if (nbr == 16)
                Highlight = true;
            return true;
        }

        function MakeFalse() { Highlight = false; }

        function SelectIt(X) {
            if (X.checked && Toggle)
                X.checked = false;
            else
                X.checked = true;
        }

        function ChangeText() {
            var test1 = document.getElementById("1");
            test1.innerHTML = "onmouseover=SelectIt(this)";
        }
        </script>
        </head>
        <body>
        <form name="A">
        <input type="checkbox" name="C1" onmouseover="SelectIt(this)" id="1"><br>
        <input type="checkbox" name="C2" onmouseover="SelectIt(this)" id="2"><br>
        <input type="checkbox" name="C3" onmouseover="SelectIt(this)" id="3"><br>
        <input type="checkbox" name="C4" onmouseover="SelectIt(this)" checked="" disabled="disabled" id="4"><br>
        <input type="checkbox" name="C5" onmouseover="SelectIt(this)" id="5"><br>
        <input type="checkbox" name="C6" onmouseover="SelectIt(this)" id="6"><br>
        <input type="checkbox" name="C7" onmouseover="SelectIt(this)" id="7"><br>
        <input type="checkbox" name="C8" onmouseover="SelectIt(this)" id="8"><br>
        </form>
        </body>
        </html>

    I want to be able to apply the onmouseover effect to an array of check boxes like this:

        <form name="A">
        <input type="checkbox" name="C1" id="1"><br>
        <input type="checkbox" name="C2" id="2"><br>
        <input type="checkbox" name="C3" id="3"><br>
        <input type="checkbox" name="C4" checked="" disabled="disabled" id="4"><br>
        <input type="checkbox" name="C5" id="5"><br>
        <input type="checkbox" name="C6" id="6"><br>
        <input type="checkbox" name="C7" id="7"><br>
        <input type="checkbox" name="C8" id="8"><br>
        </form>

    After trying the search feature of stackoverflow.com and looking around on Google, I haven't been able to find a solution that makes sense to me so far, although I'm still in the process of learning, so I fear I might be trying to do something too advanced for my level of knowledge.

    Read the article

  • More localized, efficient Lowest Common Ancestor algorithm given multiple binary trees?

    - by mstksg
    I have multiple binary trees stored as an array. Each slot is either nil (or null; pick your language) or a fixed tuple storing two numbers: the indices of the two "children". No node will have only one child – it's either none or two. Think of each slot as a binary node that only stores pointers to its children, and no inherent value. Take this system of binary trees:

                0             1
               / \           / \
              2   3         4   5
             / \           / \
            6   7         8   9
               / \
              10  11

    The associated array would be:

         0      1      2      3    4    5      6    7        8    9    10   11
        [[2,3], [4,5], [6,7], nil, nil, [8,9], nil, [10,11], nil, nil, nil, nil]

    I've already written simple functions to find the direct parent of a node (simply by searching from the front until there is a node that contains the child). Furthermore, let us say that at relevant times all trees are anywhere between a few and a few thousand levels deep.

    I'd like to find a function P(m,n) to find the lowest common ancestor of m and n – to put it more formally, the LCA is defined as the "lowest", or deepest, node which has both m and n as descendants (children, or children of children, etc.). If there is none, a nil would be a valid return. Some examples, given our tree:

        P( 6,11)  # => 2
        P( 3,10)  # => 0
        P( 8, 6)  # => nil
        P( 2,11)  # => 2

    The main method I've been able to find is one that uses an Euler trace, which turns the given tree – with a node A as the invisible parent of 0 and 1, at a depth of -1 – into:

        A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A

    From that, simply find the node between your given m and n that has the lowest number. For example, to find P(6,11), look for a 6 and an 11 on the trace; the lowest number between them is 2, and that's your answer. If A is in between them, return nil.

        -- Calculating P(6,11) --
        A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A
              ^ ^        ^
              | |        |
              m lowest   n

    Unfortunately, I do believe that finding the Euler trace of a tree that can be several thousand levels deep is a bit machine-taxing... and because my tree is constantly being changed throughout the course of the program, every time I wanted to find the LCA I'd have to re-calculate the Euler trace and hold it in memory.

    Is there a more memory-efficient way, given the framework I'm using? One that maybe iterates upwards? One way I could think of would be to "count" the generation/depth of both nodes, climb the lower node until it matches the depth of the higher one, and then step both upwards until they meet (see the sketch below). But that'd involve climbing up from level, say, 3025 back to 0, twice, just to count the generations, using a terribly inefficient climbing-up algorithm in the first place, and then re-climbing back up. Are there any other better ways?
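    For reference, here is a minimal sketch (not from the original question) of the upward-climbing idea described above, assuming a hypothetical parent[] array is maintained alongside the child array, with -1 standing in for nil. It equalises the depths of the two nodes and then climbs both in lock step; the depth counting is still linear in the depth per query, so it only illustrates the approach rather than resolving the efficiency concern.

        #include <vector>

        // Assumed helper: parent[i] is the index of i's parent, or -1 for a root.
        // (The original layout only stores children; this presumes a parent array is kept too.)
        static int depth(const std::vector<int>& parent, int node)
        {
            int d = 0;
            while (parent[node] != -1) { node = parent[node]; ++d; }
            return d;
        }

        // P(m, n): lowest common ancestor, or -1 (nil) if m and n live in different trees.
        int lowest_common_ancestor(const std::vector<int>& parent, int m, int n)
        {
            int dm = depth(parent, m);
            int dn = depth(parent, n);

            while (dm > dn) { m = parent[m]; --dm; }   // climb the deeper node first
            while (dn > dm) { n = parent[n]; --dn; }

            // Climb both in lock step until they meet (or both fall off their roots).
            while (m != n)
            {
                m = parent[m];
                n = parent[n];
            }
            return m;   // -1 here means no common ancestor (different trees)
        }

    For the example trees above, the parent array would be {-1,-1,0,0,1,1,2,2,5,5,7,7}, and lowest_common_ancestor(parent, 6, 11) returns 2, while lowest_common_ancestor(parent, 8, 6) returns -1, matching the expected P values.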

    Read the article

< Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >