Search Results

Search found 4451 results on 179 pages for 'split brain'.

Page 163 of 179

  • Algorithm to retrieve every possible combination of sublists of two lists

    - by sgmoore
    Suppose I have two lists, how do I iterate through every possible combination of every sublist, such that each item appears once and only once? I guess an example could be if you have employees and jobs and you want to split them into teams, where each employee can only be in one team and each job can only be in one team. Eg List<string> employees = new List<string>() { "Adam", "Bob"} ; List<string> jobs = new List<string>() { "1", "2", "3"}; I want Adam : 1 Bob : 2 , 3 Adam : 1 , 2 Bob : 3 Adam : 1 , 3 Bob : 2 Adam : 2 Bob : 1 , 3 Adam : 2 , 3 Bob : 1 Adam : 3 Bob : 1 , 2 Adam, Bob : 1, 2, 3 I tried using the answer to this stackoverflow question to generate a list of every possible combination of employees and every possible combination of jobs and then select one item from each list, but that's about as far as I got. I don't know the maximum size of the lists, but it would certainly be less than 100 and there may be other limiting factors (such as each team can have no more than 5 employees). Update: Not sure whether this can be tidied up more and/or simplified, but this is what I have ended up with so far. It uses the Group algorithm supplied by Yorye (see his answer below), but I removed the orderby, which I don't need and which caused problems if the keys are not comparable. var employees = new List<string>() { "Adam", "Bob" } ; var jobs = new List<string>() { "1", "2", "3" }; int c= 0; foreach (int noOfTeams in Enumerable.Range(1, employees.Count)) { var hs = new HashSet<string>(); foreach( var grouping in Group(Enumerable.Range(1, noOfTeams).ToList(), employees)) { // Generate a unique key for each group to detect duplicates. var key = string.Join(":" , grouping.Select(sub => string.Join(",", sub))); if (!hs.Add(key)) continue; List<List<string>> teams = (from r in grouping select r.ToList()).ToList(); foreach (var group in Group(teams, jobs)) { foreach (var sub in group) { Console.WriteLine(String.Join(", " , sub.Key ) + " : " + string.Join(", ", sub)); } Console.WriteLine(); c++; } } } Console.WriteLine(String.Format("{0:n0} combinations for {1} employees and {2} jobs" , c , employees.Count, jobs.Count)); Since I'm not worried about the order of the results, this seems to give me what I need.
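
    For the second half of the problem, distributing the jobs across a grouping of employees that has already been chosen, one compact alternative to the Group call is to enumerate every assignment of each job to one of the teams directly. The sketch below is a minimal, hypothetical illustration (it is not the Group method from the referenced answer, and the class and method names are invented); empty teams are not filtered out here.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TeamAssignments
        {
            // Enumerate every way to put each job into exactly one of the given teams
            // by counting in base teams.Count (one "digit" per job).
            static IEnumerable<List<List<string>>> AssignJobs(List<List<string>> teams, List<string> jobs)
            {
                int n = teams.Count;
                long total = (long)Math.Pow(n, jobs.Count);
                for (long code = 0; code < total; code++)
                {
                    var buckets = teams.Select(t => new List<string>(t)).ToList();
                    long rest = code;
                    foreach (var job in jobs)
                    {
                        buckets[(int)(rest % n)].Add(job);
                        rest /= n;
                    }
                    yield return buckets;
                }
            }

            static void Main()
            {
                var teams = new List<List<string>> { new List<string> { "Adam" }, new List<string> { "Bob" } };
                var jobs = new List<string> { "1", "2", "3" };
                foreach (var combo in AssignJobs(teams, jobs))
                    Console.WriteLine(string.Join(" | ", combo.Select(b => b.Count == 0 ? "-" : string.Join(", ", b))));
            }
        }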

    Read the article

  • bin_at in dlmalloc

    - by chunhui
    In glibc's malloc.c (and in dlmalloc) the comment below talks about "repositioning tricks", and this trick is used in bin_at. bins is an array whose space is allocated when av (the struct malloc_state) is allocated, isn't it? So the space used for bins[i] is less than sizeof(struct malloc_chunk)? Can someone describe this trick for me? I can't understand the bin_at macro: why do they get the bin's address this way, and how does it work? Many thanks, and sorry for my poor English. /* To simplify use in double-linked lists, each bin header acts as a malloc_chunk. This avoids special-casing for headers. But to conserve space and improve locality, we allocate only the fd/bk pointers of bins, and then use repositioning tricks to treat these as the fields of a malloc_chunk*. */ typedef struct malloc_chunk* mbinptr; /* addressing -- note that bin_at(0) does not exist */ #define bin_at(m, i) \ (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2])) \ - offsetof (struct malloc_chunk, fd)) The malloc_chunk struct looks like this: struct malloc_chunk { INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */ INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */ struct malloc_chunk* fd; /* double links -- used only if free. */ struct malloc_chunk* bk; /* Only used for large blocks: pointer to next larger size. */ struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */ struct malloc_chunk* bk_nextsize; }; And the bin type looks like this: typedef struct malloc_chunk* mbinptr; struct malloc_state { /* Serialize access. */ mutex_t mutex; /* Flags (formerly in max_fast). */ int flags; #if THREAD_STATS /* Statistics for locking. Only used if THREAD_STATS is defined. */ long stat_lock_direct, stat_lock_loop, stat_lock_wait; #endif /* Fastbins */ mfastbinptr fastbinsY[NFASTBINS]; /* Base of the topmost chunk -- not otherwise kept in a bin */ mchunkptr top; /* The remainder from the most recent split of a small request */ mchunkptr last_remainder; /* Normal bins packed as described above */ mchunkptr bins[NBINS * 2 - 2]; /* Bitmap of bins */ unsigned int binmap[BINMAPSIZE]; /* Linked list */ struct malloc_state *next; #ifdef PER_THREAD /* Linked list for free arenas. */ struct malloc_state *next_free; #endif /* Memory allocated from the system in this arena. */ INTERNAL_SIZE_T system_mem; INTERNAL_SIZE_T max_system_mem; };
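
    The following standalone sketch (simplified field types, not glibc's real definitions) shows what the subtraction buys you: the fake chunk returned by bin_at overlays the bins array so that its fd and bk members land exactly on the two bins[] slots reserved for that bin, which is why only two pointers per bin ever need to be stored.

        #include <stddef.h>
        #include <stdio.h>

        struct malloc_chunk {
            size_t prev_size;
            size_t size;
            struct malloc_chunk *fd;
            struct malloc_chunk *bk;
        };
        typedef struct malloc_chunk *mbinptr;

        #define NBINS 4
        static struct malloc_chunk *bins[NBINS * 2 - 2];   /* only fd/bk pairs are stored */

        /* same shape as glibc's macro, minus the malloc_state argument */
        #define bin_at(i) \
            ((mbinptr)(((char *)&bins[((i) - 1) * 2]) - offsetof(struct malloc_chunk, fd)))

        int main(void)
        {
            for (int i = 1; i < NBINS; i++) {
                mbinptr b = bin_at(i);
                /* &b->fd aliases &bins[2*(i-1)] and &b->bk aliases &bins[2*(i-1)+1] */
                printf("bin %d: &fd=%p &bins[%d]=%p | &bk=%p &bins[%d]=%p\n",
                       i,
                       (void *)&b->fd, 2 * (i - 1), (void *)&bins[2 * (i - 1)],
                       (void *)&b->bk, 2 * (i - 1) + 1, (void *)&bins[2 * (i - 1) + 1]);
            }
            return 0;
        }

    So bins[2*(i-1)] acts as bin i's fd (the head of its free list) and bins[2*(i-1)+1] as its bk, and a full malloc_chunk header per bin never has to exist; the prev_size/size fields of the fake chunk simply point into memory that is never dereferenced as a chunk header.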

    Read the article

  • asp.net repeater returns weird html

    - by emre
    I have a repeater that is supposed to create div tags with their onclick functions from databind. Which is like onclick='<%# "functionname('" + Eval("somestring") +"');" %>' but the single quotes ' after the js func ( and before ) , they become something like ?,= .. I don't understand what's happening. .. the full code is below <div id="dvPager"> <asp:Repeater ID="pagerSorular" runat="server"> <ItemTemplate> <div class='<%# "pageritem pagertext" + ((this.PageNumber == int.Parse(Container.DataItem.ToString())) ? " pagerselected" : "") %>' onclick='<%# "gotopage('" + this.PageName + "', " + Container.DataItem + ");" %>'> <%# Container.DataItem %> </div> </ItemTemplate> </asp:Repeater> </div> and codebehind is: public int PageNumber { get { string strPgNum = Request.QueryString["pg"] as String; if (!String.IsNullOrEmpty(strPgNum)) { int pgnum; if (int.TryParse(strPgNum, out pgnum)) { if (pgnum <= 0) { return pgnum; } else { return 1; } } else { return 1; } } else { return 1; } } } public string PageName { get { string url = Request.Url.ToString(); string[] parts = url.Split(new char[]{'/'}); return parts[parts.Length - 1]; } } public void Doldur(List<SoruGridView> sorulistesi) { gridSorular.DataSource = sorulistesi; gridSorular.DataBind(); /// PagerYap(sorulistesi); } protected void PagerYap(List<SoruGridView> sorulistesi) { List<int> numpgs = new List<int>(); int count = 1; foreach (SoruGridView sgv in sorulistesi) { numpgs.Add(count); count++; } pagerSorular.DataSource = numpgs.ToArray(); pagerSorular.DataBind(); }

    Read the article

  • jQuery fadeIn after src changed, but fadeIn runs on the previous src anyway

    - by Anna
    Hello ! I have a jquery bug and I've been looking for hours now, I can't figure out what's wrong... I have this code : $(document).ready(function(){ $('#ulPhotos a').click(function() { var newSrc= $(this).find('img').attr('src').split("/"); bigPictureName = 'big'+newSrc[2]; $('#pho').hide(); $('#imageBig').attr("src", "images/photos/"+bigPictureName); $('#pho').fadeIn('slow'); var alt = $(this).find('img').attr('alt'); $('#legend').html(alt); }); }); and this in html : <ul id="ulPhotos"> <li><a href="#centre"><img src="images/photos/09.jpg" title="La Reine de la Nuit au Comedia" alt="<em>La Reine de la Nuit</em> au Comedia"/></a> <a href="#centre"><img src="images/photos/03.jpg" title="Manuelita, La Périchole à l&rsquo;Opéra Comique" alt="Manuelita, <em>La Périchole</em> à l&#8217;Opéra Comique" /></a></li> <li><a href="#centre" ><img src="images/photos/12.png" title="" alt="Marion Baglan Carnac Ré" /></a> and this in for bigImage : </div> <div id="pho" a name="centre"> <p id="legend"> La Reine de la Nuit</p> <img src="images/photos/big09.jpg" alt="Marion Baglan" id="imageBig"/> </div> It simply changes the source of my img in a div named pho... but sometimes when the new image is too heavy, the fadeIn executes on the previous src !! so we see the fadeIn first on the previous image, and then, the right picture appears without fadeIn.... am I missing something? ps : the page is here http://www.marion-baglan.net/photos.htm#centre if you click fast you can see it... and when I try to put some bigger photos, it's very obvious...
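
    One way to avoid fading in the old picture is to wait for the new image's load event before calling fadeIn. A minimal sketch, assuming the same #ulPhotos, #pho, #imageBig and #legend ids from the question (.one() is used so the handler does not stack up on repeated clicks):

        $(document).ready(function () {
            $('#ulPhotos a').click(function () {
                var file = $(this).find('img').attr('src').split('/').pop();   // e.g. "09.jpg"
                var bigSrc = 'images/photos/big' + file;

                $('#pho').hide();
                $('#legend').html($(this).find('img').attr('alt'));

                // bind the fade first, then swap the src, so heavy images
                // only appear once they have actually finished loading
                $('#imageBig').one('load', function () {
                    $('#pho').fadeIn('slow');
                }).attr('src', bigSrc);
            });
        });

    If the big image may already be cached, you may also need to check this.complete and trigger the handler manually, since some browsers do not fire load for cached images.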

    Read the article

  • Structs, strtok, segmentation fault

    - by FILIaS
    I'm trying to make a program with structs and files. The following is just a part of my code (it's not all of it). What I'm trying to do is ask the user to write his command, e.g. delete John, or enter John James 5000 ipad purchase. The problem is that I want to split the command in order to save its 'args' into a struct element. That's why I used strtok. But I'm facing another problem: how to 'put' these into the struct. #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX 100 char command[1500]; struct catalogue { char short_name[50]; char surname[50]; signed int amount; char description[1000]; }*catalog[MAX]; int main ( int argc, char *argv[] ) { int i,n; char choice[3]; printf(">sort1: Print savings sorted by surname\n"); printf(">sort2: Print savings sorted by amount\n"); printf(">search+name:Print savings of each name searched\n"); printf(">delete+full_name+amount: Erase saving\n"); printf(">enter+full_name+amount+description: Enter saving \n"); printf(">quit: Update + EXIT program.\n"); printf("Choose your selection:\n>"); gets(command); //it saves the whole command /* in choice only the first 2 letters are saved (needed for the menu choice again) */ strncpy(choice,command,2); choice[2]='\0'; char** args = (char**)malloc(strlen(command)*sizeof(char*)); memset(args, 0, sizeof(char*)*strlen(command)); char* curToken = strtok(command, " \t"); for (n = 0; curToken != NULL; ++n) { args[n] = strdup(curToken); curToken = strtok(NULL, " \t"); *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; catalog[n]->amount=atoi(args[3]); *catalog[n]->description=args[4]; } return 0; } I get a warning (warning: assignment makes integer from pointer without a cast) for the lines: *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; *catalog[n]->description=args[4]; As a result, after running the program I get a segmentation fault... Any help? Any ideas?
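
    The assignments are the core problem: short_name, surname and description are char arrays, so they need strncpy/strcpy rather than =, and catalog[n] is only a pointer that never points at allocated memory. Below is a minimal sketch of the fixed filling code, under the assumption that one whole command fills one catalogue entry (field names kept from the question); calloc is used so the copied strings stay NUL-terminated even when truncated.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX 100

        struct catalogue {
            char short_name[50];
            char surname[50];
            int  amount;
            char description[1000];
        };

        int main(void)
        {
            struct catalogue *catalog[MAX] = { 0 };
            char command[] = "enter John James 5000 ipad";

            char *args[8] = { 0 };
            int n = 0;
            for (char *tok = strtok(command, " \t"); tok != NULL && n < 8; tok = strtok(NULL, " \t"))
                args[n++] = tok;

            if (n >= 5) {
                catalog[0] = calloc(1, sizeof *catalog[0]);          /* allocate before writing */
                strncpy(catalog[0]->short_name, args[1], sizeof catalog[0]->short_name - 1);
                strncpy(catalog[0]->surname,    args[2], sizeof catalog[0]->surname - 1);
                catalog[0]->amount = atoi(args[3]);
                strncpy(catalog[0]->description, args[4], sizeof catalog[0]->description - 1);

                printf("%s %s %d %s\n", catalog[0]->short_name, catalog[0]->surname,
                       catalog[0]->amount, catalog[0]->description);
                free(catalog[0]);
            }
            return 0;
        }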

    Read the article

  • string categorization strategies

    - by Andrew Heath
    I'm the one-man dev team on a fledgling military history website. One aspect of the site is a catalog of ~1,200 individual battles, including the nations & formations (regiments, divisions, etc) which took part. The formation information (as well as the other battle info) was manually imported from a series of books by a 10-man volunteer team. The formations were listed in groups with varying formatting and abbreviation patterns. At the time I set up the data collection forms I couldn't think of a good way to process that data... and elected to store it all as strings in the MySQL database and sort it out later. Well, "later" - as it tends to happen - has arrived. :-) Each battle has 2+ records in the database - one for each nation that participated. Each record has a formations text string listing the formations present as the volunteer chose to add them. Some real examples: 39th Grenadier Rgmt, 26th Volksgrenadier Division 2nd Luftwaffe Field Division, 246th Infantry Division 247th Rifle Division, 255th Tank Brigade 2nd Luftwaffe Field Division, SS Cavalry Division 28th Tank Brigade, 158th Rifle Division, 135th Rifle Division, 81st Tank Brigade, 242nd Tank Brigade 78th Infantry Division 3rd Kure Special Naval Landing Force, Tulagi Seaplane Base personnel 1st Battalion 505th Infantry Regiment The ultimate goal is for each individual force to have an ID, so that its participation can be traced throughout the battle database. Formation hierarchy, such as the final item above 1st Battalion (of the) 505th Infantry Regiment also needs to be preserved. In that case, 1st Battalion and 505th Infantry Regiment would be split, but 1st Battalion would be flagged as belonging to the 505th. In database terms, I think I want to pull the formation field out of the current battle info table and create three new tables: FORMATION [id] [name] FORMATION_HIERARCHY [id] [parent] [child] FORMATION_BATTLE [f_id] [battle_id] It's simple to explain, but complicated to enact. What I'm looking for from the SO community is just some tips on how best to tackle this problem. Ideally there's some sort of method to solving this that I'm not aware of. However, as a last resort, I could always code a classification framework and call my volunteers back to sort through 2,500+ records...

    Read the article

  • Multiple Connection Types for one Designer Generated TableAdapter

    - by Tim
    I have a Windows Forms application with a DataSet (.xsd) that is currently set to connect to a Sql Ce database. Compact Edition is being used so the users can use this application in the field without an internet connection, and then sync their data at day's end. I have been given a new project to create a supplemental web interface for displaying some of the same reports as the Windows Forms application so certain users can obtain reports without installing the Windows app. What I've done so far is create a new Web Project and added it to my current Solution. I have split both the reports (.rdlc) and DataSets out of the Windows Forms project into their own projects so they can be accessed by both the Windows and Web applications. So far, this is working fine. Here's my dilemma: As I said before, the DataSets are currently set up to connect to a local Sql Ce database file. This is correct for the Windows app, but for the Web application I would like to use these same TableAdapters and queries to connect to the Sql Server 2005 database. I have found that the designer generated, strongly-typed TableAdapter classes have a ConnectionModifier property that allows you to make the TableAdapter's Connection public. This exposes the Connection property and allows me to set it, however it is strongly-typed as a SqlCeConnection, whereas I would like to set it to a SqlConnection for my Web project. I'm assuming the DataSet Designer strongly-types the Connection, Command, and DataAdapter objects based on the Provider of the ConnectionString as indicated in the app.config file. Is there any way I can use some generic provider so that the DataSet Designer will use object types that can connect to both a Sql Ce database file AND the actual Sql Server 2005 database? I know that SqlCeConnection and SqlConnection both inherit from DbConnection, which implements IDbConnection. Relatively, the same goes for SqlCeCommand/SqlCommand:DbCommand:IDbCommand. It would be nice if I could just figure out a way for the designer to use the Interface types rather than the strong types, but I'm hesitant that that is possible. I hope my problem and question are clear. Any help is much appreciated. Let me know if there's anything I can clarify.

    Read the article

  • javascript and css working on firefox but not working on IE

    - by Nirbhay saini
    Hi, I have this code which works on Firefox but not on IE; the last characters go missing on IE. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>wrapped</title> <script type="text/javascript" language="javascript"> function set_padd(){ var tt = document.getElementById("span_padding").innerHTML; var txt = new Array(); txt = tt.split(" "); var atxt = ''; var f_txt = ''; var wrd_pr_linr = 4; var cnt = 1; for(var i = 0; i < txt.length; i++){ if(txt[i].length > 0){ txt[i] = txt[i].replace(' ',''); if(cnt < wrd_pr_linr){ if(txt[i].length > 0){ atxt += ' '+txt[i].replace(' ',''); cnt++; } }else{ f_txt += '<a class="padd_txt" >'+atxt+'</a><br />'; atxt = ''; cnt = 1; } } } document.getElementById("span_padding").innerHTML = f_txt; } </script> <style type="text/css"> .padd_txt{padding:7px;background:#009;color:#FFF;line-height:26px;font-size:14px;} body{font-family:'Trebuchet MS', Arial, Helvetica, sans-serif; font-size:24px; line-height:1.2em;} span{background-color: #009; width:200px; color: #FFF;" class="blocktext;} </style> </head> <body onload="set_padd();"> <div style="width: 350px;"> <p> <span id="span_padding"> This is what I want to happen where one long string is wrapped and the text has this highlight color behind it. </span> </div> </body> </html> The output on Firefox is: "This is I want to happen where one string is wrapped and the text this highlight behind it." The output on IE is: "This is what want to happen one long string wrapped and the has this highlight"; the last two words are missing.
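
    Two things in set_padd() drop words on every browser: whatever is left in atxt when the loop ends is never appended, and the else branch flushes a line without adding the word that triggered the flush. The Firefox/IE difference then mostly comes down to how each browser's innerHTML returns whitespace, which splitting on a whitespace regular expression also smooths over. A minimal rewrite along those lines (same span id assumed):

        function set_padd() {
            var words = document.getElementById("span_padding").innerHTML.split(/\s+/);
            var wordsPerLine = 4;
            var line = [];
            var out = '';

            for (var i = 0; i < words.length; i++) {
                if (words[i].length === 0) continue;       // skip empty tokens
                line.push(words[i]);
                if (line.length === wordsPerLine) {
                    out += '<a class="padd_txt">' + line.join(' ') + '</a><br />';
                    line = [];
                }
            }
            if (line.length > 0) {                          // flush the leftover words
                out += '<a class="padd_txt">' + line.join(' ') + '</a><br />';
            }
            document.getElementById("span_padding").innerHTML = out;
        }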

    Read the article

  • Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking a

    - by surtyaarthoughts
    I unsuccessfully tried using txredis (the non blocking twisted api for redis) for a persisting message queue I'm trying to set up with a scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been because what should have been one event in the reactor loop was split up into thousands of steps. So instead, I tried making use of redis-py (the regular blocking twisted api) and wrapping the call in a deferred thread. It works great, however I want to perform an inner deferred when I make a call to redis as I would like to set up connection pooling in attempts to speed things up further. Below is my interpretation of some sample code taken from the twisted docs for a deferred thread to illustrate my use case: #!/usr/bin/env python from twisted.internet import reactor,threads from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(): print 'doing lookup... this may take a while' time.sleep(10) return 'results from redis' def result(res): print res def main(): lc = LoopingCall(main_loop) lc.start(2) d = threads.deferToThread(aBlockingRedisCall) d.addCallback(result) reactor.run() if __name__=='__main__': main() And here is my alteration for connection pooling that makes the code in the deferred thread blocking : #!/usr/bin/env python from twisted.internet import reactor,defer from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(x): if x<5: #all connections are busy, try later print '%s is less than 5, get a redis client later' % x x+=1 d = defer.Deferred() d.addCallback(aBlockingRedisCall) reactor.callLater(1.0,d.callback,x) return d else: print 'got a redis client; doing lookup.. this may take a while' time.sleep(10) # this is now blocking.. any ideas? d = defer.Deferred() d.addCallback(gotFinalResult) d.callback(x) return d def gotFinalResult(x): return 'final result is %s' % x def result(res): print res def aBlockingMethod(): print 'going to sleep...' time.sleep(10) print 'woke up' def main(): lc = LoopingCall(main_loop) lc.start(2) d = defer.Deferred() d.addCallback(aBlockingRedisCall) d.addCallback(result) reactor.callInThread(d.callback, 1) reactor.run() if __name__=='__main__': main() So my question is, does anyone know why my alteration causes the deferred thread to be blocking and/or can anyone suggest a better solution?
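
    The second version blocks because the callback added with d.addCallback ends up running in the reactor thread (reactor.callLater always fires there), so the time.sleep(10) inside it freezes the event loop. Below is a sketch of one way to keep the retry/pooling logic on the reactor while pushing only the blocking call into a thread; the pool check is a stand-in, not real redis-py code, and the function names are invented.

        #!/usr/bin/env python
        from twisted.internet import defer, reactor, threads
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def blocking_redis_call(x):
            # stands in for the real redis-py lookup; runs in a worker thread
            time.sleep(10)
            return 'final result is %s' % x

        def get_client_then_lookup(x):
            if x < 5:  # pretend no pooled connection is free yet; retry on the reactor
                d = defer.Deferred()
                d.addCallback(get_client_then_lookup)
                reactor.callLater(1.0, d.callback, x + 1)
                return d
            # got a "client": only now hop into a thread for the blocking part
            return threads.deferToThread(blocking_redis_call, x)

        def result(res):
            print res
            reactor.stop()

        def main():
            LoopingCall(main_loop).start(2)
            d = get_client_then_lookup(1)
            d.addCallback(result)
            reactor.run()

        if __name__ == '__main__':
            main()

    Returning the inner Deferred from the callback chains it automatically, so the reactor keeps spinning during the retries and only the worker thread ever sleeps.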

    Read the article

  • jQuery gallery turn over with next and previous buttons

    - by Ralf
    Hi, i'm trying to do some kind of Gallery-Turn Over Script with jQuery. Therefor i got an array with - let's say 13 - images: galleryImages = new Array( 'images/tb_01.jpg', 'images/tb_02.jpg', 'images/tb_03.jpg', 'images/tb_04.jpg', 'images/tb_05.jpg', 'images/tb_06.jpg', 'images/tb_07.jpg', 'images/tb_08.jpg', 'images/tb_09.jpg', 'images/tb_10.jpg', 'images/tb_11.jpg', 'images/tb_12.jpg', 'images/tb_13.jpg' ); My gallery looks like a grid showing only 9 images at once. My current script already counts the number of li-elements in #gallery, loads the first 9 images and displays them. The HTML looks like this: <ul id="gallery"> <li></li> <li></li> <li></li> <li></li> <li></li> <li></li> <li></li> <li></li> <li></li> </ul> <ul id="gallery-controls"> <li id="gallery-prev"><a href="#">Previous</a></li> <li id="gallery-next"><a href="#">Next</a></li> </ul> I'm pretty new to jQuery an my problem is that i can't figure out how to split the array in portions with 9 elements to attach it as a link on the control buttons. I need something like this: $('#gallery-next').click(function(){ $('ul#gallery li').children().remove(); $('ul#gallery li').each(function(index,el){ var img = new Image(); $(img).load(function () { $(this).css('display','none'); $(el).append(this); $(this).fadeIn(); }).attr('src', galleryImages[index]); //index for the next 9 images?!?! }); }); Thanks for help!
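
    One way to do the "next 9 / previous 9" arithmetic is to keep a page counter and index galleryImages at page * 9 plus the slot number. A minimal sketch reusing the markup and image-loading code from the question (the showPage name and the page/perPage variables are invented):

        var page = 0;
        var perPage = 9;

        function showPage(p) {
            var lastPage = Math.ceil(galleryImages.length / perPage) - 1;
            page = Math.max(0, Math.min(p, lastPage));           // clamp to a valid page

            $('ul#gallery li').each(function (index, el) {
                $(el).empty();
                var src = galleryImages[page * perPage + index]; // offset into the array
                if (!src) return;                                // last page may be short
                var img = new Image();
                $(img).load(function () {
                    $(this).css('display', 'none');
                    $(el).append(this);
                    $(this).fadeIn();
                }).attr('src', src);
            });
        }

        $(function () {
            showPage(0);
            $('#gallery-next a').click(function () { showPage(page + 1); return false; });
            $('#gallery-prev a').click(function () { showPage(page - 1); return false; });
        });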

    Read the article

  • MVVM Binding Orthogonal Aspects in Views e.g. Application Settings

    - by chibacity
    I have an application which I am developing using WPF\Prism\MVVM. All is going well and I have some pleasing MVVM implementations. However, in some of my views I would like to be able to bind application settings e.g. when a user reloads an application, the checkbox for auto-scrolling a grid should be checked in the state it was last time the user used the application. My view needs to bind to something that holds the "auto-scroll" setting state. I could put this on the view-model, but applications settings are orthogonal to the purpose of the view-model. The "auto-scroll" setting is controlling an aspect of the view. This setting is just an example. There will be quite a number of them and splattering my view-models with properties to represent application settings (so I can bind them) feels decidedly yucky. One view-model per view seems to be de rigeuer... What is best\usual practice here? Splatter my view-models with application settings? Have multiple view-models per view so settings can be represented in their own right? Split views so that controls can bind to an ApplicationSettingsViewModel? = too many views? Something else? Edit 1 To add a little more context, I am developing a UI with a tabbed interface. Each tab will host a single widget and there a variety of widgets. Each widget is a Prism composition of individual views. Some views are common amongst widgets e.g. a file picker view. Whilst each widget is composed of several views, as a whole, conceptually a widget has a single set of user settings e.g. last file selected, auto-scroll enabled, etc. These need to be persisted and retrieved\applied when the application starts again, and the widget views are created. My question is focused on the fact that conceptually a widget has a single set of user settings which is at right-angles to the fact that a widget consists of many views. Each view in the widget has it's own view-model (which works nicely and logically) but if I stick to a one view-model per view, I would have to splatter each view-model with user settings appropriate to it. This doesn't sound right ?!?

    Read the article

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried it these ways so far: 1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved in a file, with each flow separated by some special characters/line. Problem: Not enough memory for large traces. 2) Make a hash with the same key as above, but only keep in memory the file handles. Each packet is then put into the hash[key] that points to the right file. Problems: Too many flows/files (~200k) and it might run out of memory as well. 3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because I opened too many at the same time. Problems: WAY too slow, same number of files as 2 so also impractical. 4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: Suppose I have a 60 MB trace that has 200k flows. This way, I would process, say, a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution. 5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k). Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on even with a more efficient language if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or that I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.

    Read the article

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. So, first, let's copy their example data: mydata=data.frame(X1=rnorm(30), X2=rnorm(30,5,2), SNP1=c(rep("AA",10), rep("Aa",10), rep("aa",10)), SNP2=c(rep("BB",10), rep("Bb",10), rep("bb",10))) I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. So then, I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". Then if I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use: > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1=mean(df$X1), meanX2=mean(df$X2))) SNP1 meanX1 meanX2 1 aa 0.05178028 4.812302 2 Aa 0.30586206 4.820739 3 AA -0.26862500 4.856006 But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where the I have the covariance matrix for each group, and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results that have convinced me that I'm doing something wrong. > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) , , = 1 SNP1 1 2 aa 1.4961210 -0.9496134 Aa 0.8833190 -0.1640711 AA 0.9942357 -0.9955837 , , = 2 SNP1 1 2 aa -0.9496134 2.881515 Aa -0.1640711 2.466105 AA -0.9955837 4.938320 I was thinking that the dim() of the 3rd dimension would be 3, but instead, it is 2. Really this is a sliced up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get: [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 Using plyr, the following gives me what I want in list() form: > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) $aa [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 $Aa [,1] [,2] [1,] 0.8833190 -0.1640711 [2,] -0.1640711 2.4661046 $AA [,1] [,2] [1,] 0.9942357 -0.9955837 [2,] -0.9955837 4.9383196 attr(,"split_type") [1] "data.frame" attr(,"split_labels") SNP1 1 aa 2 Aa 3 AA But like I said earlier, I would really like this in a 3D array. Any thoughts on where I went wrong with daply() or suggestions? Of course, I could typecast the list from dlply() to a 3D array, but I'd rather not do this because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
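
    For comparison, the row-versioning route mentioned above is a two-statement change and leaves the query itself untouched. A sketch, with the database name assumed:

        -- turn on read-committed snapshot so the SELECT reads row versions
        -- instead of taking shared locks (MyDb is a placeholder name)
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;  -- needs no other active connections

        -- the query is unchanged and no longer needs NOLOCK:
        SELECT TOP 1 Date, Value
        FROM Data
        WHERE Id = @p0
        ORDER BY Date DESC;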

    Read the article

  • Replace HTML entities in a string avoiding <img> tags

    - by Xeos
    I have the following input: Hi! How are you? <script>//NOT EVIL!</script> Wassup? :P LOOOL!!! :D :D :D Which is then run through emoticon library and it become this: Hi! How are you? <script>//NOT EVIL!</script> Wassup? <img class="smiley" alt="" title="tongue, :P" src="ui/emoticons/15.gif"> LOOOL!!! <img class="smiley" alt="" title="big grin, :D" src="ui/emoticons/5.gif"> <img class="smiley" alt="" title="big grin, :P" src="ui/emoticons/5.gif"> <img class="smiley" alt="" title="big grin, :P" src="ui/emoticons/5.gif"> I have a function that escapes HTML entites to prevent XSS. So running it on raw input for the first line would produce: Hi! How are you? &lt;script&gt;//NOT EVIL!&lt;/script&gt; Now I need to escape all the input, but at the same time I need to preserve emoticons in their initial state. So when there is <:-P emoticon, it stays like that and does not become &lt;:-P. I was thinking of running a regex split on the emotified text. Then processing each part on its own and then concatenating the string together, but I am not sure how easily can Regex be bypassed? I know the format will always be this: [<img class="smiley" alt="] [empty string] [" title="] [one of the values from a big list] [, ] [another value from the list (may be matching original emoticon)] [" src="ui/emoticons/] [integer from Y to X] [.gif">] Using the list MAY be slow, since I need to run that regex on text that may have 20-30-40 emoticons. Plus there may be 5-10-15 text messages to process. What could be an elegant solution to this? I am ready to use third-party library or jQuery for this. PHP preprocessing is possible as well.
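
    A sketch of the split-escape-reassemble idea in PHP, with the pattern kept deliberately strict to the <img class="smiley" ...> markup shown above so that ordinary user text cannot match it; the function name is invented and the title character class is an assumption.

        <?php
        // split out the emoticon tags produced by the emoticon library,
        // escape every other segment, then stitch the pieces back together
        function escape_except_smileys($html) {
            $pattern = '#(<img class="smiley" alt="" title="[^"<>]*" src="ui/emoticons/\d+\.gif">)#';
            $parts = preg_split($pattern, $html, -1, PREG_SPLIT_DELIM_CAPTURE);

            foreach ($parts as $i => $part) {
                if (!preg_match($pattern, $part)) {
                    // plain text segment: neutralise any markup in it
                    $parts[$i] = htmlspecialchars($part, ENT_QUOTES, 'UTF-8');
                }
            }
            return implode('', $parts);
        }

        $input = 'Hi! <script>//NOT EVIL!</script> Wassup? '
               . '<img class="smiley" alt="" title="tongue, :P" src="ui/emoticons/15.gif">';
        echo escape_except_smileys($input);

    Because the delimiters are captured, the emoticon tags come back as their own array elements and are passed through untouched, while everything around them is escaped.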

    Read the article

  • dynamically schedule a lead sales agent

    - by Josh
    I have a website that I'm trying to migrate from classic asp to asp.net. It had a lead schedule, where each sales agent would be featured for the current day, or part of the day.The next day a new agent would be scheduled. It was driven off a database table that had a row for each day in it. So to figure out if a sales agent would show on a day, it was easy, just find today's date in the table. Problem was it ran out rows, and you had to run a script to update the lead days 6 months at a time. Plus if there was ever any change to the schedule, you had to delete all the rows and re-run the script. So I'm trying to code it where sql server figures that out for me, and no script has to be ran. I have a table like so CREATE TABLE [dbo].[LeadSchedule]( [leadid] [int] IDENTITY(1,1) NOT NULL, [userid] [int] NOT NULL, [sunday] [bit] NOT NULL, [monday] [bit] NOT NULL, [tuesday] [bit] NOT NULL, [wednesday] [bit] NOT NULL, [thursday] [bit] NOT NULL, [friday] [bit] NOT NULL, [saturday] [bit] NOT NULL, [StartDate] [smalldatetime] NULL, [EndDate] [smalldatetime] NULL, [StartTime] [time](0) NULL, [EndTime] [time](0) NULL, [order] [int] NULL, So the user can schedule a sales agent depending on their work schedule. Also if they wanted to they could split certain days, or sales agents by time, So from Midnight to 4 it was one agent, from 4-midnight it was another. So far I've tried using a numbers table, row numbers, goofy date math, and I'm at a loss. Any suggestions on how to handle this purely from sql code? If it helps, the table should always be small, like less than 20 never over 100. update After a few hours all I've managed to come up with is the below. It doesn't handle filling in days not available or times, just rotates through all the sales agents with leadTable as ( select leadid,userid,[order],StartDate, case DATEPART(dw,getdate()) when 1 then sunday when 2 then monday when 3 then tuesday when 4 then wednesday when 5 then thursday when 6 then friday when 7 then saturday end as DayAvailable , ROW_NUMBER() OVER (ORDER BY [order] ASC) AS ROWID from LeadSchedule where GETDATE()>=StartDate and (CONVERT(time(0),GETDATE())>= StartTime or StartTime is null) and (CONVERT(time(0),GETDATE())<= EndTime or EndTime is null) ) select userid, DATEADD(d,(number+ROWID-2)*totalUsers,startdate ) leadday from (select *, (select COUNT(1) from leadTable) totalUsers from leadTable inner join Numbers on 1=1 where DayAvailable =1 ) tb1 order by leadday asc

    Read the article

  • What exactly does this piece of JavaScript do?

    - by helpmarnie
    ok, so I saw this page growing in popularity amoung my social circles on facebook, what 98 percent bla bla... and it walks users through copying the bellow JavaScript (i added some indentation to make it more readable) into their address bar. Looks dodgy to me, but I only have a very basic knowledge of JavaScript. simply put, What dose this do? javascript:(function(){ a='app120668947950042_jop'; b='app120668947950042_jode'; ifc='app120668947950042_ifc'; ifo='app120668947950042_ifo'; mw='app120668947950042_mwrapper'; eval(function(p,a,c,k,e,r){ e=function(c){ return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))} ; if(!''.replace(/^/,String)){ while(c--)r[e(c)]=k[c]||e(c); k=[function(e){ return r[e]} ]; e=function(){ return'\\w+'} ; c=1} ; while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]); return p} ('J e=["\\n\\g\\j\\g\\F\\g\\i\\g\\h\\A","\\j\\h\\A\\i\\f","\\o\\f\\h\\q\\i\\f\\r\\f\\k\\h\\K\\A\\L\\t","\\w\\g\\t\\t\\f\\k","\\g\\k\\k\\f\\x\\M\\N\\G\\O","\\n\\l\\i\\y\\f","\\j\\y\\o\\o\\f\\j\\h","\\i\\g\\H\\f\\r\\f","\\G\\u\\y\\j\\f\\q\\n\\f\\k\\h\\j","\\p\\x\\f\\l\\h\\f\\q\\n\\f\\k\\h","\\p\\i\\g\\p\\H","\\g\\k\\g\\h\\q\\n\\f\\k\\h","\\t\\g\\j\\z\\l\\h\\p\\w\\q\\n\\f\\k\\h","\\j\\f\\i\\f\\p\\h\\v\\l\\i\\i","\\j\\o\\r\\v\\g\\k\\n\\g\\h\\f\\v\\P\\u\\x\\r","\\B\\l\\Q\\l\\R\\B\\j\\u\\p\\g\\l\\i\\v\\o\\x\\l\\z\\w\\B\\g\\k\\n\\g\\h\\f\\v\\t\\g\\l\\i\\u\\o\\S\\z\\w\\z","\\j\\y\\F\\r\\g\\h\\T\\g\\l\\i\\u\\o"]; d=U; d[e[2]](V)[e[1]][e[0]]=e[3]; d[e[2]](a)[e[4]]=d[e[2]](b)[e[5]]; s=d[e[2]](e[6]); m=d[e[2]](e[7]); c=d[e[9]](e[8]); c[e[11]](e[10],I,I); s[e[12]](c); C(D(){ W[e[13]]()} ,E); C(D(){ X[e[16]](e[14],e[15])} ,E); C(D(){ m[e[12]](c); d[e[2]](Y)[e[4]]=d[e[2]](Z)[e[5]]} ,E); ',62,69,'||||||||||||||_0x95ea|x65|x69|x74|x6C|x73|x6E|x61||x76|x67|x63|x45|x6D||x64|x6F|x5F|x68|x72|x75|x70|x79|x2F|setTimeout|function|5000|x62|x4D|x6B|true|var|x42|x49|x48|x54|x4C|x66|x6A|x78|x2E|x44|document|mw|fs|SocialGraphManager|ifo|ifc|||||||'.split('|'),0,{ } ))} )();

    Read the article

  • IE 8: Object Expected for Every Javascript Function

    - by Yetkin EREN
    Hi; Moz say's everything is ok! IE say's object expected everywhere.. for example this my make a box function (in all.js file); function kutuyap(Eid,iduzan,text,yer,ekle){ var div; if (document.createElement && (div = document.createElement('div'))) { div.name = div.id = Eid+iduzan; document.getElementById(yer).appendChild(div); } //$('#'+yer).append("<div id="+Eid+iduzan+"></div>") $('#'+Eid+iduzan).addClass("minikutu"); $('#'+Eid+iduzan).html("&nbsp;"+text+'<span id='+Eid+'y'+iduzan+' class="yokedici">X</span>'); $("#"+Eid+'y'+iduzan).attr("onclick","kutusil('"+Eid+"y"+iduzan+"','"+iduzan+"','"+ekle+"');"); $('#'+ekle).val($('#'+ekle).val()+Eid+'-'); } and after that i call function like this; HTML; <select name="Mturs" class="inputs" id="Mturs"> <option value="0" selected="selected">Choise One</option> <option value="4">Pop</option> <option value="3">Pop-Rock </option> <option value="5">Rock (Yabanci)</option> </select> <input name="secMtur" id="secMtur" value="" type="hidden"> <script> $('#Mturs').live('change', function() { $('#Mturs :selected').each(function (i) { if ( $('#Mturs :selected').val() != 0 ) { secMturde=$('#secMtur').val().indexOf($('#Mturs :selected').val()+'-'); splitter=$('#secMtur').val().split("-") if(splitter.length<=12){ if (secMturde<0) { kutuyap($('#Mturs :selected').val(),'mtur',$(this).html(),'divmtur','secMtur'); }else{ alert("Choisen before") } }else{ alert("Max limit is 12 !") } } }); }); </script> sory for my realy bad english.. edit: and i have this tags; <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"> </script> <script type="text/javascript" src="alljs.js"></script>
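
    One likely culprit is the line that writes an onclick attribute as a string: in IE (especially with the IE7 emulation meta tag above) attr("onclick", "...") is not reliably turned into a handler, and IE then reports "object expected" when the click fires. A sketch of kutuyap() that binds the handler with jQuery instead, keeping the same ids and arguments as in the question:

        function kutuyap(Eid, iduzan, text, yer, ekle) {
            var boxId = Eid + iduzan;
            var closeId = Eid + 'y' + iduzan;

            $('#' + yer).append('<div id="' + boxId + '" class="minikutu"></div>');
            $('#' + boxId).html('&nbsp;' + text + '<span id="' + closeId + '" class="yokedici">X</span>');

            // bind the remove handler directly instead of writing an onclick string
            $('#' + closeId).click(function () {
                kutusil(closeId, iduzan, ekle);
            });

            $('#' + ekle).val($('#' + ekle).val() + Eid + '-');
        }

    If IE really reports "object expected" for every function, it is also worth checking that the external script actually loads: the text mentions all.js while the script tag references alljs.js.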

    Read the article

  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to setup something for a movie store website (using ASP.NET, EF4, SQL Server 2008), and in my scenario, I want to allow a "Member" store to import their catalog of movies stored in a text file containing ActorName, MovieTitle, and CatalogNumber as follows: Actor, Movie, CatalogNumber John Wayne, True Grit, 4577-12 (repeated for each record) This data will be used to lookup an actor and movie, and create a "MemberMovie" record, and my import speed is terrible if I import more than 100 or so records using these tables: Actor Table: Fields = {ID, Name, etc.} Movie Table: Fields = {ID, Title, ActorID, etc.} MemberMovie Table: Fields = {ID, CatalogNumber, MovieID, etc.} My methodology to import data into the MemberMovie table from a text file is as follows (after the file has been uploaded successfully): Create a context. For each line in the file, lookup the artist in the Actor table. For each Movie in the Artist table, lookup the matching title. If a matching Movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges(). The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and I've got something that times out the browser. My question is this: What is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure? A snippet of my loop is roughly this (edited for brevity): while ((fline = file.ReadLine()) != null) { string [] token = fline.Split(separator); string Actor = token[0]; string Movie = token[1]; string CatNumber = token[2]; Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(actor)).FirstOrDefault(); if (found_actor == null) continue; Movie found_movie = found_actor.Movies.Where( s => s.Title.Equals(title, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault(); if (found_movie == null) continue; ctx.MemberMovies.AddObject(new MemberMovie() { MemberProfileID = profile_id, CatalogNumber = CatNumber, Movie = found_movie }); try { ctx.SaveChanges(); } catch { } } Any help is appreciated! Thanks, Dennis
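
    The two cheapest wins are usually to stop issuing a LINQ-to-Entities query per input line (load the actors and their movies into a dictionary once) and to call SaveChanges once after the loop instead of per row. A sketch along those lines, keeping the entity and property names from the question; it assumes actor names are unique, and whether Include("Movies") is practical depends on how large those tables are.

        // build an in-memory lookup once, instead of querying per input line
        var actorsByName = ctx.Actors
            .Include("Movies")
            .ToList()
            .ToDictionary(a => a.Name, StringComparer.CurrentCultureIgnoreCase);

        string fline;
        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);

            Actor actor;
            if (!actorsByName.TryGetValue(token[0].Trim(), out actor))
                continue;

            Movie movie = actor.Movies.FirstOrDefault(
                m => m.Title.Equals(token[1].Trim(), StringComparison.CurrentCultureIgnoreCase));
            if (movie == null)
                continue;

            ctx.MemberMovies.AddObject(new MemberMovie
            {
                MemberProfileID = profile_id,
                CatalogNumber = token[2].Trim(),
                Movie = movie
            });
        }

        ctx.SaveChanges();   // one call after the loop, not one per row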

    Read the article

  • Callback function in jQuery doesn't seem to work

    - by Pandiya Chendur
    I use the following jquery pagination plugin and i got the error a.parentNode is undefined when i executed it... <script type="text/javascript"> $(document).ready(function() { getRecordspage(1, 5); $(".pager").pagination(17, { callback: pagechange, current_page: '0', items_per_page: '5', num_display_entries : '5', next_text: 'Next', prev_text: 'Prev', num_edge_entries: '1' }); }); function pagechange() { $("#ResultsDiv").empty(); $("#ResultsDiv").css('display', 'none'); getRecordspage($(this).text(), 5); } function getRecordspage(curPage, pagSize) { $.ajax({ type: "POST", url: "Default.aspx/GetRecords", data: "{'currentPage':" + curPage + ",'pagesize':" + pagSize + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(jsonObj) { var strarr = jsonObj.d.split('##'); var jsob = jQuery.parseJSON(strarr[0]); var divs = ''; $.each(jsob.Table, function(i, employee) { divs += '<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>'; }); $("#ResultsDiv").append(divs).show('slow'); $(".resultsdiv:even").addClass("resultseven"); $(".resultsdiv").hover(function() { $(this).addClass("resultshover"); }, function() { $(this).removeClass("resultshover"); }); } }); } </script> and in my page, <div id="ResultsDiv" style="display:none;"> </div> <div id="pager" class="pager"> </div> Any suggestion....
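
    With that plugin the callback already receives the new zero-based page index as its first argument, so pagechange does not need $(this).text(), which is also a common source of the a.parentNode error when the handler is wired up this way. A sketch of the wiring, assuming the option names behave as shown in the question:

        $(document).ready(function () {
            getRecordspage(1, 5);

            $(".pager").pagination(17, {
                items_per_page: 5,
                num_display_entries: 5,
                num_edge_entries: 1,
                prev_text: 'Prev',
                next_text: 'Next',
                callback: function (pageIndex, container) {
                    $("#ResultsDiv").empty().hide();
                    getRecordspage(pageIndex + 1, 5);   // plugin pages are zero-based
                    return false;                       // suppress the default link click
                }
            });
        });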

    Read the article

  • modify this code .. please help me?

    - by Sam
    i wana modify this code from static choices to dynamic this for 3 choices var PollhttpObject=null; function DoVote() {if(document.getElementById('PollRadio1').checke d)DoVote_Submit(1);else if(document.getElementById('PollRadio2').checked)DoVote_Submit(2);else if(document.getElementById('PollRadio3').checked)DoVote_Submit(3);else alert('?????: ?????? ?????? ??? ?????????? ??????? ?? ????? ??? ?? ???????');return false;} function DisbalePoll(TheCase) {document.getElementById('VoteBttn').onclick=function(){alert('!?????? ??? ?? ??????? ??????');} document.getElementById('PollRadio1').disabled='true';document.getElementById('PollRadio2').disabled='true';document.getElementById('PollRadio3').disabled='true';if(TheCase=='EXPIRED') {document.getElementById('VoteBttn').src='images/design/VoteBttn_OFF.jpg';document.getElementById('ResultBttn').src='images/design/ResultsBttn_OFF.jpg';document.getElementById('VoteBttn').onclick='';document.getElementById('ResultBttn').onclick='';document.getElementById('ResultBttn').style.cursor='';document.getElementById('VoteBttn').style.cursor='';}} function DoVote_Submit(VoteID) {if(VoteID!=0)DisbalePoll();try{PollhttpObject=getHTTPObject();if(PollhttpObject!=null) {PollhttpObject.onreadystatechange=PollOutput;PollhttpObject.open("GET","Ajax.aspx?ACTION=POLL&VOTEID="+ VoteID+"&RND="+ Math.floor(Math.random()*10001),true);PollhttpObject.send(null);}} catch(e){} return false;} function PollOutput(){if(PollhttpObject.readyState==4) {var SearchResult=PollhttpObject.responseText;document.getElementById('PollProgress').style.display='none';document.getElementById('PollFormDiv').style.display='block';if(SearchResult.length=2&&SearchResult.substr(0,2)=='OK') {var ReturnedValue=SearchResult.split("#");document.getElementById('PollBar1').style.width=0+'px';document.getElementById('PollBar2').style.width=0+'px';document.getElementById('PollBar3').style.width=0+'px';document.getElementById('PollRate1').innerHTML="0 (0%)";document.getElementById('PollRate2').innerHTML="0 (0%)";document.getElementById('PollRate3').innerHTML="0 (0%)";window.setTimeout('DrawPollBars(0, '+ ReturnedValue[1]+', 0, '+ ReturnedValue[2]+', 0, '+ ReturnedValue[3]+')',150);} else if(SearchResult.length=2&&SearchResult.substr(0,2)=='NO') {alert("?????: ??? ??? ???????? 
?????");}} else {document.getElementById('PollProgress').style.display='block';document.getElementById('PollFormDiv').style.display='none';}} function DrawPollBars(Bar1Var,Bar1Width,Bar2Var,Bar2Width,Bar3Var,Bar3Width) {var TotalVotes=parseInt(Bar1Width)+parseInt(Bar2Width)+parseInt(Bar3Width);var IncVal=parseFloat(TotalVotes/10);var NewBar1Width=0;var NewBar2Width=0;var NewBar3Width=0;var Bar1NextVar;var Bar2NextVar;var Bar3NextVar;if(parseInt(parseInt(Bar1Var)*200/TotalVotes)0)NewBar1Width=parseInt(Bar1Var)*200/TotalVotes;else if(Bar1Var0)NewBar1Width=1;else NewBar1Width=0;if(parseInt(parseInt(Bar2Var)*200/TotalVotes)0)NewBar2Width=parseInt(Bar2Var)*200/TotalVotes;else if(Bar2Var0)NewBar2Width=1;else NewBar2Width=0;if(parseInt(parseInt(Bar3Var)*200/TotalVotes)0)NewBar3Width=parseInt(Bar3Var)*200/TotalVotes;else if(Bar3Var0)NewBar3Width=1;else NewBar3Width=0;document.getElementById('PollBar1').style.width=NewBar1Width+'px';document.getElementById('PollBar2').style.width=NewBar2Width+'px';document.getElementById('PollBar3').style.width=NewBar3Width+'px';document.getElementById('PollRate1').innerHTML=parseFloat(Bar1Var).toFixed(0)+" ("+ parseFloat(parseFloat(Bar1Var)/TotalVotes*100).toFixed(1)+"%)";document.getElementById('PollRate2').innerHTML=parseFloat(Bar2Var).toFixed(0)+" ("+ parseFloat(parseFloat(Bar2Var)/TotalVotes*100).toFixed(1)+"%)";document.getElementById('PollRate3').innerHTML=parseFloat(Bar3Var).toFixed(0)+" ("+ parseFloat(parseFloat(Bar3Var)/TotalVotes*100).toFixed(1)+"%)";if(Bar1Var!=Bar1Width||Bar2Var!=Bar2Width||Bar3Var!=Bar3Width) {if(parseFloat(Bar1Var)+IncVal<=parseInt(Bar1Width))Bar1NextVar=parseFloat(Bar1Var)+IncVal;else Bar1NextVar=Bar1Width;if(parseFloat(Bar2Var)+IncVal<=parseInt(Bar2Width))Bar2NextVar=parseFloat(Bar2Var)+IncVal;else Bar2NextVar=Bar2Width;if(parseFloat(Bar3Var)+IncVal<=parseInt(Bar3Width))Bar3NextVar=parseFloat(Bar3Var)+IncVal;else Bar3NextVar=Bar3Width;window.setTimeout('DrawPollBars('+ Bar1NextVar+', '+ Bar1Width+', '+ Bar2NextVar+', '+ Bar2Width+', '+ Bar3NextVar+', '+ Bar3Width+')',80); }}

    Read the article

  • When using delegates, need better way to do sequential processing

    - by Padawan
    I have a class WebServiceCaller that uses NSURLConnection to make asynchronous calls to a web service. The class provides a delegate property and when the web service call is done, it calls a method webServiceDoneWithXXX on the delegate. There are several web service methods that can be called, two of which are say GetSummary and GetList. The classes that use WebServiceCaller initially need both the summary and list so they are written like this: -(void)getAllData { [webServiceCaller getSummary]; } -(void)webServiceDoneWithGetSummary { [webServiceCaller getList]; } -(void)webServiceDoneWithGetList { ... } This works but there are at least two problems: The calls are split across delegate methods so it's hard to see the sequence at a glance but more important it's hard to control or modify the sequence. Sometimes I want to call just GetSummary and not also GetList so I would then have to use an ugly class-level state variable that tells webServiceDoneWithGetSummary whether to call GetList or not. Assume that GetList cannot be done until GetSummary completes and returns some data which is used as input to GetList. Is there a better way to handle this and still get asynchronous calls? Update based on Matt Long's answer: Using notifications instead of a delegate, it looks like I can solve problem #2 by setting a different selector depending on whether I want the full sequence (GetSummary+GetList) or just GetSummary. Both observers would still use the same notification name when calling GetSummary. I would have to write two separate methods to handle GetSummaryDone instead of using a single delegate method (where I would have needed some class-level variable to tell whether to then call GetList). -(void)getAllData { [[NSNotificationCenter defaultCenter] addObserver:self              selector:@selector(getSummaryDoneAndCallGetList:)                  name:kGetSummaryDidFinish object:nil];     [webServiceCaller getSummary]; } -(void)getSummaryDoneAndCallGetList { [NSNotificationCenter removeObserver] //process summary data [[NSNotificationCenter defaultCenter] addObserver:self              selector:@selector(getListDone:)                  name:kGetListDidFinish object:nil];     [webServiceCaller getList]; } -(void)getListDone { [NSNotificationCenter removeObserver] //process list data } -(void)getJustSummaryData { [[NSNotificationCenter defaultCenter] addObserver:self              selector:@selector(getJustSummaryDone:) //different selector but                  name:kGetSummaryDidFinish object:nil]; //same notification name     [webServiceCaller getSummary]; } -(void)getJustSummaryDone { [NSNotificationCenter removeObserver] //process summary data } I haven't actually tried this yet. It seems better than having state variables and if-then statements but you have to write more methods. I still don't see a solution for problem 1.

    Read the article

  • Strange results - I obtain the same value for all keys

    - by Pietro Luciani
    I have a problem with MapReduce. Given as input a list of songs ("Songname"#"UserID"#"boolean"), I must produce as a result a song list that specifies how many different users listened to each song, so an output of the form ("Songname","timelistening"). I used a Hashtable to allow only one pair per user. With short files it works well, but when I put in a list of about 1,000,000 records it returns the same value (20) for all records. This is my mapper: public static class CanzoniMapper extends Mapper<Object, Text, Text, IntWritable>{ private IntWritable userID = new IntWritable(0); private Text song = new Text(); public void map(Object key, Text value, Context context) throws IOException, InterruptedException { /*StringTokenizer itr = new StringTokenizer(value.toString()); while (itr.hasMoreTokens()) { word.set(itr.nextToken()); context.write(word, one); }*/ String[] caratteri = value.toString().split("#"); if(caratteri[2].equals("1")){ song.set(caratteri[0]); userID.set(Integer.parseInt(caratteri[1])); context.write(song,userID); } } } This is my reducer: public static class CanzoniReducer extends Reducer<Text,IntWritable,Text,IntWritable> { private IntWritable result = new IntWritable(); public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { Hashtable<IntWritable,Text> doppioni = new Hashtable<IntWritable,Text>(); for (IntWritable val : values) { doppioni.put(val,key); } result.set(doppioni.size()); //doppioni.clear(); context.write(key,result); } } and main: Configuration conf = new Configuration(); Job job = new Job(conf, "word count"); job.setJarByClass(Canzoni.class); job.setMapperClass(CanzoniMapper.class); //job.setCombinerClass(CanzoniReducer.class); //job.setNumReduceTasks(2); job.setReducerClass(CanzoniReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); System.exit(job.waitForCompletion(true) ? 0 : 1); Any ideas?
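
    A likely cause is that Hadoop reuses the same IntWritable instance for every value in the reducer's iterator, so the Hashtable ends up keyed on one object that keeps mutating; copying the primitive out into a plain set of Integers avoids that. A sketch of the reducer with only that change (java.util types are written fully qualified so no extra imports are needed):

        public static class CanzoniReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                // copy the int out of the reused IntWritable before storing it
                java.util.Set<Integer> listeners = new java.util.HashSet<Integer>();
                for (IntWritable val : values) {
                    listeners.add(val.get());
                }
                result.set(listeners.size());
                context.write(key, result);
            }
        }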

    Read the article

  • When I really need to use [NSThread sleepForTimeInterval:1];

    - by Timbo
    Hi there, I have a program that needs to use sleep. Like really needs to. In lieu of spending ages explaining why, suffice to say that it needs it. Now I'm told to split off my code into a separate thread if it requires sleep so I don't lose interface responsiveness, so I've started learning how to use NSThread. I've created a brand new program that is conceptual so to solve the issue for this example will help me in my real program. Short story is I have a class, it has instance variables and I need a loop with a sleep to be depended on the value of that instance variable. Here's what I've put together anyway, your help is very much appreciated :) Cheers Tim /// Start Test1ViewController.h /// #import <UIKit/UIKit.h> @interface Test1ViewController : UIViewController { UILabel* label; } @property (assign) IBOutlet UILabel *label; @end /// End Test1ViewController.h /// /// Start Test1ViewController.m /// #import "Test1ViewController.h" #import "MyClass.h" @implementation Test1ViewController @synthesize label; - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; label.text = @"1"; [NSThread detachNewThreadSelector:@selector(backgroundProcess) toTarget:self withObject:nil]; } - (void)backgroundProcess { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Create an instance of a class that will eventually store a whole load of variables MyClass *aMyClassInstance = [MyClass new]; [aMyClassInstance createMyClassInstance:(@"Timbo")]; while (aMyClassInstance.myVariable--) { NSLog(@"blah = %i",aMyClassInstance.myVariable); label.text = [NSString stringWithFormat:@"blah = %d", aMyClassInstance.myVariable]; //how do I pass the new value out to the updateLabel method, or reference aMyClassInstance.myVariable? [self performSelectorOnMainThread:@selector(updateLabel) withObject:nil waitUntilDone:NO]; //the sleeping of the thread is absolutely mandatory and must be worked around. The whole point of using NSThread is so I can have sleeps [NSThread sleepForTimeInterval:1]; } [pool release]; } - (void)updateLabel {label.text = [NSString stringWithFormat:@"blah = %d", aMyClassInstance.myVariable]; // be nice if i could } - (void)didReceiveMemoryWarning {[super didReceiveMemoryWarning];} - (void)viewDidUnload {} - (void)dealloc {[super dealloc];} @end /// End Test1ViewController.m /// /// Start MyClass.h /// #import <Foundation/Foundation.h> @interface MyClass : NSObject { NSString* name; int myVariable; } @property int myVariable; @property (assign) NSString *name; - (void) createMyClassInstance: (NSString*)withName; - (int) changeVariable: (int)toAmount; @end /// End MyClass.h /// /// Start MyClass.h /// #import "MyClass.h" @implementation MyClass @synthesize name, myVariable; - (void) createMyClassInstance: (NSString*)withName{ name = withName; myVariable = 10; } - (int) changeVariable: (int)toAmount{ myVariable = toAmount; return toAmount; } @end /// End MyClass.h ///
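
    One way to answer the "how do I pass the new value out" comment is to hand the already-formatted string to the main thread, instead of having updateLabel reach for aMyClassInstance (which is local to backgroundProcess). A sketch of the two pieces; updateLabelWithText: is a new selector added for this purpose:

        // inside backgroundProcess, replacing the existing while loop
        while (aMyClassInstance.myVariable-- > 0) {
            NSLog(@"blah = %i", aMyClassInstance.myVariable);
            NSString *text = [NSString stringWithFormat:@"blah = %d", aMyClassInstance.myVariable];
            [self performSelectorOnMainThread:@selector(updateLabelWithText:)
                                   withObject:text
                                waitUntilDone:NO];
            // the sleep stays in the background thread, so the UI keeps responding
            [NSThread sleepForTimeInterval:1];
        }

        // elsewhere in Test1ViewController; UIKit is only touched on the main thread
        - (void)updateLabelWithText:(NSString *)text {
            label.text = text;
        }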

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al(2006) The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude: import numpy as np from numpy import ma from matplotlib import pyplot as plt from Scientific.IO.NetCDF import NetCDFFile ### input directory indir = '/home/nicholas/data/' ### get the flux data which is in ### [time(5day ave for 10 years),latitude,longitude] nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r') cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:] cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land nc.close() cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s ### create an array that consists of the seasonal signal fro each pixel year_stack = np.split(cflux, 10, axis=0) year_stack = np.array(year_stack) signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1)) signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask ### average the array over latitude(or longitude) signal_time_lon = ma.mean(signal_array, axis=1) ### do a 2D Fourier Transform of the time/space image ft = np.fft.fft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log(mgft) log_mgft= np.log(mgft) Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a randomly small number to the signal to avoid this. signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05 EDIT: Adding images and clarify meaning The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason that I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal. So the ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here are is the starting image using following code: plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight') Here is result using following code: ft = np.fft.rfft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log1p(mgft) plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight') It may not be clear in the image but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc) have very low values.
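
    The zero rows are a direct consequence of building signal_array by tiling one mean year ten times: a signal that repeats exactly every 73 time steps inside a 730-step window can only have energy at time-frequencies that are multiples of 730/73 = 10. The remaining rows hold only rounding noise, which is why some show up as exact zeros and others as very small values, and taking the log of them then underflows (np.log1p or adding a constant only hides that, and adding random noise changes the spectrum you are trying to measure). A small synthetic check:

        import numpy as np

        # one synthetic "year" of 73 five-day steps over 182 longitudes
        one_year = np.random.rand(73, 182)
        ten_years = np.tile(one_year, (10, 1))        # shape (730, 182), like signal_time_lon

        ft = np.fft.fft2(ten_years)
        row_power = np.abs(ft).sum(axis=1)

        # rows with non-negligible energy are exactly the multiples of 10
        print(np.flatnonzero(row_power > 1e-6 * row_power.max())[:8])

    If the aim is to characterise the seasonal signal, the 2D-FT of a single 73-step year carries the same information without the empty rows between harmonics.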

    Read the article
