Search Results

Search found 2264 results on 91 pages for 'odd rationale'.


  • question about Batcher odd-even sort

    - by davit-datuashvili
    Hi, I have a question about Batcher's odd-even sort. I have the following code:

        public class Batcher {
            public static void batchsort(int a[], int l, int r) {
                int n = r - l + 1;
                for (int p = 1; p < n; p += p)
                    for (int k = p; k > 0; k /= 2)
                        for (int j = k % p; j + k < n; j += (k + k))
                            for (int i = 0; i < n - j - k; i++)
                                if ((j + i) / (p + p) == (j + i + k) / (p + p))
                                    exch(a, l + j + i, l + j + i + k);
            }

            public static void main(String[] args) {
                int a[] = new int[]{2, 4, 3, 4, 6, 5, 3};
                batchsort(a, 0, a.length - 1);
                for (int i = 0; i < a.length; i++) {
                    System.out.println(a[i]);
                }
            }

            public static void exch(int a[], int i, int j) {
                int t = a[i];
                a[i] = a[j];
                a[j] = t;
            }
        }

    The result is 3 3 4 4 5 2 6. What did I miss? What is wrong?

    Read the article

  • Odd background image resizing on animating UIView

    - by Woody
    I have a UIView in the middle of a view that I am using as a game playing area (in a 2D Cocoa view). This view has a background image of the same size as the view. When resizing the view I use animation to make it look smooth (and that works fine). However, when the animation starts, the background image immediately changes size, tiling or being clipped in such a way that, when the animation finishes, the background image ends up physically the same size as the view. I don't want this; I want the image to always fit the view, regardless of the view size.

        UIImage *bgImage = [UIImage imageNamed:@"head.png"];
        ... // resize the image returning another image
        self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:resizedImage];
        [UIView beginAnimations:@"resizeView" context:nil];
        [UIView setAnimationDuration:.5];
        int localViewSize = ... // work out view sizes
        self.view.frame = CGRectMake(... ,localViewSize,localViewSize);
        [UIView commitAnimations];

    It looks very odd as it jumps to a different size, then animates to the original size. I am guessing that maybe I would have to make a separate view underneath my main view, but is that the only way?

    Read the article

  • Odd DOM Problem with Firefox

    - by Bob
    Hello. I'm experiencing an odd problem when trying to navigate through a table's rows and cells in a while loop using javascript. I'm using Firefox 3.5.7 on Win7 with Firebug enabled. I have this markup: <table> <tbody> <tr id='firstRow'><td>a</td><td>b</td><td>c</td></tr> <tr><td>a</td><td>b</td><td>c</td></tr> <tr><td>a</td><td>b</td><td>c</td></tr> </tbody> </table> And this javascript: var row = document.getElementById('firstRow'); console.log(row); // Row call 1 while (row) { console.log(row); // Row call 2 row = row.nextSibling; } The problem I'm having is that on the line commented "Row call 1", Firebug is outputting <tr id='firstRow'> as expected. However, in the while loop, Firebug is giving me <tr id='firstRow'> <TextNode textContent="\n"> It is giving me different output for the exact same row, even immediately after the while loop begins executing and nothing else touched the row. For subsequent rows, it of course does not have id='firstRow' as an attribute. The bigger problem this is giving me is that if I'm in the while loop, and I want to access a particular cell of the current row using row.cells[0], Firebug will give me an error that row.cells is undefined. I want to know if someone could shed some light on this situation I am experiencing.

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to the other processes (keeping the first nlocal for itself). From here, process 0 should print out its results after running the CompareSplit algorithm. Then it should receive the results from the other processes' runs of the algorithm and, finally, print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints that people could give me. Here is my code: http://pastie.org/3742474 Right now I'm pretty sure that the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting an "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now, until I figure out what is going wrong. Any help is appreciated.

    Read the article

  • Int Showing as Long Odd Value

    - by Josh Kahane
    Hi I am trying to send an int in my iphone game for game center multiplayer. The integer is coming up and appearing as an odd long integer value rather than the expected one. I have this in my .h: typedef enum { kPacketTypeScore, } EPacketTypes; typedef struct { EPacketTypes type; size_t size; } SPacketInfo; typedef struct { SPacketInfo packetInfo; int score; } SScorePacket; Then .m: Sending data: scoreData *score = [scoreData sharedData]; SScorePacket packet; packet.packetInfo.type = kPacketTypeScore; packet.packetInfo.size = sizeof(SScorePacket); packet.score = score.score; NSData* dataToSend = [NSData dataWithBytes:&packet length:packet.packetInfo.size]; NSError *error; [self.myMatch sendDataToAllPlayers: dataToSend withDataMode: GKMatchSendDataUnreliable error:&error]; if (error != nil) { // handle the error } Receiving: SPacketInfo* packet = (SPacketInfo*)[data bytes]; switch (packet->type) { case kPacketTypeScore: { SScorePacket* scorePacket = (SScorePacket*)packet; scoreData *score = [scoreData sharedData]; [scoreLabel setString:[NSString stringWithFormat:@"You: %d Challenger: %d", score.score, scorePacket]]; break; } default: CCLOG(@"received unknown packet type %i (size: %u)", packet->type, packet->size); break; } Any ideas? Thanks.

    Read the article

  • Finding an odd perfect number

    - by Coin Bird
    I wrote these two methods to determine if a number is perfect. My prof wants me to combine them to find out if there is an odd perfect number. I know there isn't one (that is known), but I need to actually write the code to prove that. The issue is with my main method. I tested the two methods. I tried debugging and it gets stuck on the number 5, though I can't figure out why. Here is my code:

        public class Lab6 {
            public static void main (String[] args) {
                int testNum = 3;
                while (testNum != sum_of_divisors(testNum) && testNum % 2 != 0)
                    testNum++;
            }

            public static int sum_of_divisors(int numDiv) {
                int count = 1;
                int totalDivisors = 0;
                while (count < numDiv)
                    if (numDiv % count == 0) {
                        totalDivisors = totalDivisors + count;
                        count++;
                    }
                    else
                        count++;
                return totalDivisors;
            }

            public static boolean is_perfect(int numPerfect) {
                int count = 1;
                int totalPerfect = 0;
                while (totalPerfect < numPerfect) {
                    totalPerfect = totalPerfect + count;
                    count++;
                }
                if (numPerfect == totalPerfect)
                    return true;
                else
                    return false;
            }
        }

    Read the article

  • python variable scope

    - by Oscar Reyes
    I'm teaching myself Python and I was translating some sample code:

        class Student:
            def __init__(self, name, a, b, c):
                self.name = name
                self.a = a
                self.b = b
                self.c = c

            def average(self):
                return (a + b + c) / 3.0

    which is pretty much my intended class definition. Later, in the main block, I create an instance and call it a:

        if __name__ == "__main__":
            a = Student("Oscar", 10, 10, 10)

    That's how I found out that the variable a declared in main is available to the method average, and that to make that method work I have to type self.a + self.b + self.c instead. What's the rationale for this? I found related questions, but I don't really know if they are about the same thing.
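
    A quick illustration of the scoping rule at work here (my own sketch, not part of the original question): Python resolves a bare name like a through the local, enclosing, global and built-in scopes, never implicitly through the instance, so the a seen inside average is the module-level a created in the main block (and b and c are not found at all); instance attributes always need the explicit self. prefix:

        class Student:
            def __init__(self, name, a, b, c):
                self.name = name
                self.a = a
                self.b = b
                self.c = c

            def average(self):
                # bare a, b, c would be looked up as globals, not on the instance
                return (self.a + self.b + self.c) / 3.0

        if __name__ == "__main__":
            s = Student("Oscar", 10, 10, 10)  # no module-level a, b or c required
            print(s.average())                # 10.0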

    Read the article

  • Odd values/movement with UITouch and CGPoint.

    - by Joshua
    I'm getting odd numbers from UITouch and CGPoint, and one value is different; I also think this may be causing a flickering effect in my app when I try to move something by following a touch. This is the code I'm using:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"touchDown");
            UITouch *touch = [touches anyObject];
            firstTouch = [touch locationInView:self.view];
            if (CGRectContainsPoint(but.frame, firstTouch)) {
                butContains = YES;
                NSLog(@"butContains = %d", butContains);
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            currentTouch = [touch locationInView:self.view];
            NSInteger x = currentTouch.x;
            NSInteger y = currentTouch.y;
            CGFloat CGX = (CGFloat)x;
            CGFloat CGY = (CGFloat)y;
            if (butContains == YES) {
                NSLog(@"touch in subView/contentView");
                sub.frame = CGRectMake(CGX, CGY, 130.0, 21.0);
            }
            NSLog(@"touch moved");
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            currentTouch = [touch locationInView:self.view];
            NSLog(@"User tapped at %@", NSStringFromCGPoint(currentTouch));
            NSLog(@"Point %a, %a", currentTouch.x, currentTouch.y);
            NSInteger x = currentTouch.x;
            NSInteger y = currentTouch.y;
            NSLog(@"Point %a, %a", y, x);
            CGFloat CGX = (CGFloat)x;
            CGFloat CGY = (CGFloat)y;
            NSLog(@"Point %g, %g", CGX, CGY);
            if (butContains == YES) {
                NSLog(@"touch in subView/contentView");
                sub.frame = CGRectMake(CGX, CGY, 130.0, 21.0);
            }
            butContains = NO;
            NSLog(@"touch ended");
        }

        - (IBAction)add:(id)sender {
            InSightViewController *contentView = [[InSightViewController alloc] initWithNibName:@"SubView" bundle:[NSBundle mainBundle]];
            [contentView loadView];
            [self.view insertSubview:contentView.view atIndex:0];
        }

    This is what I get from the touchesEnded method in the debugger:

        2010-04-20 20:06:13.045 InSight[25042:207] User tapped at {50, 78}
        2010-04-20 20:06:13.047 InSight[25042:207] Point 0x1.9p+5, 0x1.38p+6
        2010-04-20 20:06:13.048 InSight[25042:207] Point 0x1.900000027p-1037, 0x1.38p+6
        2010-04-20 20:06:13.048 InSight[25042:207] Point 50, 78

    And this is what's happening in the Simulator: fwdr.org/file:y8bd As this is a complicated problem, this is the source code of my Xcode project as well: http://cl.ly/Qjj

    Read the article

  • Odd tcp deadlock under windows

    - by John Robertson
    We are moving large amounts of data on a LAN and it has to happen very rapidly and reliably. Currently we use Windows TCP as implemented in C++. Using large (synchronous) sends moves the data much faster than a bunch of smaller (synchronous) sends, but it will frequently deadlock for large gaps of time (.15 seconds), causing the overall transfer rate to plummet. This deadlock happens in very particular circumstances, which makes me believe it should be preventable altogether. More importantly, if we don't really know the cause, we don't really know it won't happen some time with smaller sends anyway. Can anyone explain this deadlock?

    Deadlock description (OK, zombie-locked, it isn't dead, but for .15 or so seconds it stops, then starts again):

    1. The receiving side sends an ACK.
    2. The sending side sends a packet containing the end of a message (push flag is set).
    3. The call to socket.recv takes about .15 seconds(!) to return.
    4. About the time the call returns, an ACK is sent by the receiving side.
    5. The next packet from the sender is finally sent (why is it waiting? the TCP window is plenty big).

    The odd thing about (3) is that typically that call doesn't take much time at all and receives exactly the same amount of data. On a 2 GHz machine that's 300 million instructions worth of time. I am assuming the call doesn't (heaven forbid) wait for the received data to be acked before it returns, so the ack must be waiting for the call to return, or both must be delayed by something else. The problem NEVER happens when there is a second packet of data (part of the same message) arriving between 1 and 2. That part very clearly makes it sound like it has to do with the fact that Windows TCP will not send back a no-data ACK until either a second packet arrives or a 200 ms timer expires. However the delay is less than 200 ms (it's more like 150 ms). The third unseemly character (and to my mind the real culprit) is (5). Send is definitely being called well before that .15 seconds is up, but the data NEVER hits the wire before that ack returns. That is the most bizarre part of this deadlock to me. It's not a TCP blockage, because the TCP window is plenty big since we set SO_RCVBUF to something like 500*1460 (which is still under a meg). The data is coming in very fast (basically there is a loop spinning out data via send) so the buffer should fill almost immediately. According to MSDN, the buffer being full and at least one pending send should cause the data to be sent (though in another place it mentions that there are various "heuristics" used in deciding when a send hits the wire). Anyway, why the sender doesn't actually send more data during that .15 second pause is the most bizarre part to me. The information above was captured on the receiving side via Wireshark (except of course the socket.recv return times, which were logged in a text file). We tried changing the send buffer to zero and turning off Nagle on the sender (yes, I know Nagle is about not sending small packets - but we tried turning Nagle off in case that was part of the unstated "heuristics" affecting whether the message would be posted to the wire. Technically Microsoft's Nagle is that a small packet isn't sent if the buffer is full and there is an outstanding ACK, so it seemed like a possibility).

    Read the article

  • NHibernate, and odd "Session is Closed!" errors

    - by Sekhat
    Note: Now that I've typed this out, I have to apologize for the super long question, however, I think all the code and information presented here is in some way relevant. Okay, I'm getting odd "Session Is Closed" errors, at random points in my ASP.NET webforms application. Today, however, it's finally happening in the same place over and over again. I am near certain that nothing is disposing or closing the session in my code, as the bits of code that use are well contained away from all other code as you'll see below. I'm also using ninject as my IOC, which may / may not be important. Okay, so, First my SessionFactoryProvider and SessionProvider classes: SessionFactoryProvider public class SessionFactoryProvider : IDisposable { ISessionFactory sessionFactory; public ISessionFactory GetSessionFactory() { if (sessionFactory == null) sessionFactory = Fluently.Configure() .Database( MsSqlConfiguration.MsSql2005.ConnectionString(p => p.FromConnectionStringWithKey("QoiSqlConnection"))) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<JobMapping>()) .BuildSessionFactory(); return sessionFactory; } public void Dispose() { if (sessionFactory != null) sessionFactory.Dispose(); } } SessionProvider public class SessionProvider : IDisposable { ISessionFactory sessionFactory; ISession session; public SessionProvider(SessionFactoryProvider sessionFactoryProvider) { this.sessionFactory = sessionFactoryProvider.GetSessionFactory(); } public ISession GetCurrentSession() { if (session == null) session = sessionFactory.OpenSession(); return session; } public void Dispose() { if (session != null) { session.Dispose(); } } } These two classes are wired up with Ninject as so: NHibernateModule public class NHibernateModule : StandardModule { public override void Load() { Bind<SessionFactoryProvider>().ToSelf().Using<SingletonBehavior>(); Bind<SessionProvider>().ToSelf().Using<OnePerRequestBehavior>(); } } and as far as I can tell work as expected. Now my BaseDao<T> class: BaseDao public class BaseDao<T> : IDao<T> where T : EntityBase { private SessionProvider sessionManager; protected ISession session { get { return sessionManager.GetCurrentSession(); } } public BaseDao(SessionProvider sessionManager) { this.sessionManager = sessionManager; } public T GetBy(int id) { return session.Get<T>(id); } public void Save(T item) { using (var transaction = session.BeginTransaction()) { session.SaveOrUpdate(item); transaction.Commit(); } } public void Delete(T item) { using (var transaction = session.BeginTransaction()) { session.Delete(item); transaction.Commit(); } } public IList<T> GetAll() { return session.CreateCriteria<T>().List<T>(); } public IQueryable<T> Query() { return session.Linq<T>(); } } Which is bound in Ninject like so: DaoModule public class DaoModule : StandardModule { public override void Load() { Bind(typeof(IDao<>)).To(typeof(BaseDao<>)) .Using<OnePerRequestBehavior>(); } } Now the web request that is causing this is when I'm saving an object, it didn't occur till I made some model changes today, however the changes to my model has not changed the data access code in anyway. Though it changed a few NHibernate mappings (I can post these too if anyone is interested) From as far as I can tell, BaseDao<SomeClass>.Get is called then BaseDao<SomeOtherClass>.Get is called then BaseDao<TypeImTryingToSave>.Save is called. it's the third call at the line in Save() using (var transaction = session.BeginTransaction()) that fails with "Session is Closed!" or rather the exception: Session is closed! 
Object name: 'ISession'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ObjectDisposedException: Session is closed! Object name: 'ISession'. And indeed following through on the Debugger shows the third time the session is requested from the SessionProvider it is indeed closed and not connected. I have verified that Dispose on my SessionFactoryProvider and on my SessionProvider are called at the end of the request and not before the Save call is made on my Dao. So now I'm a little stuck. A few things pop to mind. Am I doing anything obviously wrong? Does NHibernate ever close sessions without me asking to? Any workarounds or ideas on what I might do? Thanks in advance

    Read the article

  • Odd problem with IE8 and z-index CSS property

    - by DK39
    I have not been able to put one DIV over its parent DIV in Internet Explorer. With Firefox it works as it is supposed to. The odd part is that if I open the HTML file directly in IE, everything works fine. But if I upload it to the server and open it from there, the DIV is hidden underneath its parent. I've tried several z-index combinations and none works. Here's the code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head>
        <title>Test</title>
        <meta http-equiv="content-type" content="text-html; charset=utf-8" />
        <style type="text/css">
        .col { float:left; width:310px; margin-right:13px; }
        .art { position:relative; border-bottom: 1px solid #d0d0d0; font: normal normal bold 11px Arial,Verdana,Helvetica; color:#A0A0A0; width:310px; height:50px; top:0px; left: 0px; margin-right:10px; background-color:#F0F0F0; }
        .art a { padding:3px; display:block; width:304px; height:100%; color:#707070; }
        .art a:visited { color:#A0A0A0; }
        .art a:hover { background-color:#E0E0E0; }
        .box { z-index:1000; background-color:#A0A0A0; color:#404040; font: normal normal bold 11px Arial,Verdana,Helvetica; display:none; position:absolute; top:30px; left:10px; text-align:left; border:3px solid #707070; margin:5px 0px 5px 5px; font-size:10px; color:White; width:100%; }
        </style>
        <script type="text/javascript">
        function sh(obj) {
            var el = document.getElementById(obj);
            if ( el.style.display != 'block' ) { el.style.display = 'block'; }
            else { el.style.display = 'none'; }
        }
        </script>
        </head>
        <body>
        <div class="col">
            <div class="art">
                <a href="" target="_blank" onmouseover="javascript:sh('i0')" onmouseout="javascript:sh('i0')">Title 1</a>
                <div id="i0" class="box">
                    <div class="text">
                        Les "chemises rouges" manifestent depuis la mi-mars pour faire tomber le gouvernement et occupent depuis trois semaines un quartier touristique et commerçant autour duquel ils ont érigé des barricades.
                    </div>
                </div>
            </div>
            <div class="art">
                <a href="" target="_blank" onmouseover="javascript:sh('i1')" onmouseout="javascript:sh('i1')">Title2</a>
                <div id="i1" class="box">
                    <div class="text">
                        Une association ardéchoise accueillant des séminaires de "bien-être" et de "développement personnel" a refusé d'accueillir un stage de danse en invoquant l'homosexualité des participants, ont indiqué aujourd'hui les organisateurs.
                    </div>
                </div>
            </div>
        </div>
        </body>
        </html>

    What is going on here?

    Read the article

  • Why does std::map operator[] create an object if the key doesn't exist?

    - by n1ck
    Hi, I'm pretty sure I already saw this question somewhere (comp.lang.c++? Google doesn't seem to find it there either) but a quick search here doesn't seem to find it, so here it is: why does the std::map operator[] create an object if the key doesn't exist? I don't know, but to me this seems counter-intuitive if you compare it to most other operator[] implementations (like std::vector), where if you use it you must be sure that the index exists. I'm wondering what the rationale is for implementing this behavior in std::map. Like I said, wouldn't it be more intuitive to act more like an index into a vector and crash (well, undefined behavior I guess) when accessed with an invalid key?

    Refining my question after seeing the answers: OK, so far I got a lot of answers saying basically that it's cheap, so why not, or similar things. I totally agree with that, but why not use a dedicated function for it (I think one of the comments said that in Java there is no operator[] and the function is called put)? My point is, why doesn't map's operator[] work like a vector's? If I use operator[] on an out-of-range index on a vector I wouldn't want it to insert an element, even if it was cheap, because that probably means an error in my code. My point is, why isn't it the same thing with map? For me, using operator[] on a map would mean: I know this key already exists (for whatever reason; I just inserted it, I have redundancy somewhere, whatever). I think it would be more intuitive that way. That said, what are the advantages of the current behavior of operator[] (and only for that; I agree that a function with the current behavior should exist, just not operator[])? Maybe it gives clearer code that way? I don't know. Another answer was that it already existed that way, so why not keep it; but then, presumably, when they (the ones before the STL) chose to implement it that way, they found it provided an advantage or something? So my question is basically: why choose to implement it that way, meaning a certain lack of consistency with other operator[] implementations? What benefit does it give? Thanks
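
    As a cross-language aside (my own illustration, not taken from the question or its answers): both designs being contrasted exist elsewhere. std::map also offers find() and, since C++11, the bounds-checked at() for non-inserting access, while Python's dict raises KeyError on a missing key and only auto-creates entries when you opt in via collections.defaultdict. A minimal sketch of the latter pair:

        from collections import defaultdict

        plain = {"seen": 1}
        # plain["missing"]            # would raise KeyError -- the "vector-like" behaviour the poster expects
        print(plain.get("missing"))   # None: explicit, non-inserting lookup

        counts = defaultdict(int)     # opt-in auto-insertion, roughly what std::map::operator[] does
        counts["missing"] += 1        # silently creates the entry with a default value
        print(counts["missing"])      # 1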

    Read the article

  • What is the rationale behind snazzy Window Managers/Composers?

    - by Emanuele
    This is more of a generic question, based on trying out window managers like Awesome, Mate and others. To me it looks like other window managers like Gnome3 and/or Unity are heavy and pointless. I do understand that having all the composited UIs is more pleasant for the eye, but apart from that, what are the other major benefits? To give an example, when I run the game Heroes of Newerth (using nVidia drivers) under:

    Unity: the FPS drops sharply
    Gnome3: FPS is OK, but X and other processes use 15~20% of CPU and quite some additional memory
    Awesome: FPS is OK, and other processes use very little memory and CPU

    Below are some numbers regarding what I'm saying (please note my system is 64 bit, AMD Phenom II X4, 8 GB RAM, and nVidia 470 GTX, SSD disk). All data is sorted by mem usage (watch -d -n 10 "ps -e -o pcpu,pmem,pid,user,cmd --sort=-pmem | head -20"); again note that the CPU time of ./hon-x86_64 might differ because I can't take the snapshots of the system at exactly the same time.

    Awesome:

        %CPU %MEM  PID  USER CMD
        91.8 21.6  3579 ema  ./hon-x86_64
         2.4  0.9  3223 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.6  0.4  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinp
         0.3  0.2  3602 ema  gnome-terminal
         0.0  0.2  2698 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Gnome3:

        %CPU %MEM  PID  USER CMD
        82.7 21.0  5528 ema  ./hon-x86_64
        17.7  1.7  5315 ema  /usr/bin/gnome-shell
         5.8  1.2  5062 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.0  0.4  5657 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.7  0.3  5331 ema  nautilus -n
         1.6  0.3  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -
         0.9  0.2  5451 ema  gnome-terminal
         0.1  0.2  5400 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Unity 3D:

        %CPU %MEM  PID  USER CMD
        87.2 21.1  6554 ema  ./hon-x86_64
        10.7  2.6  6105 ema  compiz
        17.8  1.1  5842 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.3  0.9  6672 root /usr/bin/python /usr/sbin/aptd
         0.4  0.4  6606 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.5  0.3  6115 ema  nautilus -n
         1.5  0.3  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinput -sasl errl
         0.3  0.2  6180 ema  /usr/lib/unity/unity-panel-service

    So my point is: what's the rationale behind going towards such heavy WMs/Composers?

    Read the article

  • WPF DataGrid : CanContentScroll property causing odd behavior

    - by Sonic Soul
    I have a solution where I generate a DataGrid (or multiple instances) based on user criteria. Each grid keeps receiving data as it comes in via an ObservableCollection. The problem I had was that the scrolling acted weird: it was choppy, and the scrollbar would resize itself while scrolling. Then I found the CanContentScroll property! It completely fixes the weird scrolling behavior, bringing me temporary bliss and happiness. However, it causes two unfortunate side effects: 1) whenever I re-create grid instances and bind them to my observable collection, it freezes my entire window for 5 seconds, and when my grid grows to a big size this delay can last for 30 seconds; 2) when I call TradeGrid.ScrollIntoView(TradeGrid.Items(TradeGrid.Items.Count - 1)) to scroll to the bottom, it jumps to the bottom and then back to the top. Is there another way to achieve smooth scrolling, perhaps?

    Read the article

  • Odd Infragistics UltraComboEditor data binding non-bug

    - by Richard Dunlap
    Within an Infragistics 8.2 UltraComboEditor, we had the following properties set via C#: DataSource = dataSource; ValueMember = "Measure"; DisplayMember = "Name"; DataBindings.Add("Value", repository, "Measure"); DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged; where dataSource was an array of objects, each with a property Measure, and repository was an object with a property Measure. (Those strings are actually constructor parameters -- just using explicit strings to simplify the example.) In the course of some refactoring, the name of the property on the objects in the array was changed to BaseEnum (the objects are actually wrapped enumerations, for the curious), but the name of ValueMember above was not changed. And yet, the combo box binding continued to work through initial testing, beta testing, and even after release... until two customers emailed in noting that the combo box was no longer changing the underlying parameter. We were able to dig out the problem by careful study of the source code repository... despite being in the awkward position of not being able to replicate the buggy behavior internally. Two part question: What's happening under the hood that allowed the binding to continue to function, and/or what might be unique about those two users that caused the binding to (correctly) fail? (O/S version isn't alone the answer, and we get the unexpectedly functioning binding on machines that have never had a version of the software before, so we're not looking at rogue binaries). Are there tools that might have been able to warn us about the misbind, even if something was cleaning up behind?

    Read the article

  • [ASP.NET] Odd HttpRequest behaviour

    - by barguast
    I have a web service which runs with a HttpHandler class. In this class, I inspect the request stream for form / query string parameters. In some circumstances, it seemed as though these parameters weren't getting through. After a bit of digging around, I came across some behaviour I don't quite understand. See below: // The request contains 'a=1&b=2&c=3' // TEST ONLY: Read the entire request string contents; using (StreamReader sr = new StreamReader(context.Request.InputStream)) { contents = sr.ReadToEnd(); } // Here 'contents' is usually correct - containing 'a=1&b=2&c=3'. Sometimes it is empty. string a = context.Request["a"]; // Here, a = null, regardless of whether the 'contents' variable above is correct Can anyone explain to me why this might be happening? I'm using a .NET WebClient and UploadDataAsync to perform the request on the client if that makes any difference. If you need any more information, please let me know.

    Read the article

  • Odd SQL Results

    - by Ryan Burnham
    So I have the following query:

        Select id, [First], [Last], [Business] as contactbusiness,
               (Case When ([Business] != '' or [Business] is not null) Then [Business] Else 'No Phone Number' END)
        from contacts

    The results look like:

        id  First  Last   contactbusiness  (No column name)
        2   John   Smith
        3   Sarah  Jane   0411 111 222     0411 111 222
        6   John   Smith  0411 111 111     0411 111 111
        8                 NULL             No Phone Number
        11  Ryan   B      08 9999 9999     08 9999 9999
        14  David  F      NULL             No Phone Number

    I'd expect record 2 to also show No Phone Number. If I change the "[Business] is not null" to [Business] != null, then I get the correct results:

        id  First  Last   contactbusiness  (No column name)
        2   John   Smith                   No Phone Number
        3   Sarah  Jane   0411 111 222     0411 111 222
        6   John   Smith  0411 111 111     0411 111 111
        8                 NULL             No Phone Number
        11  Ryan   B      08 9999 9999     08 9999 9999
        14  David  F      NULL             No Phone Number

    Normally you need to use "is not null" rather than "!= null". What's going on here?

    Read the article

  • Passing html parameters to server odd problem

    - by StealthRT
    Hey all, I am having a weird problem with sending data back to my server. This is the code I am using:

        NSString *theURL = [NSString stringWithFormat:@"http://www.xxx.com/confirm.asp?theID=%@&theName=%@&empID=%@&theComp=%@", theConfirmNum, tmpNBUserRow.userName, labelTxt.text, theID];
        NSLog(@"%@,%@,%@,%@", theConfirmNum, tmpNBUserRow.userName, labelTxt.text, theID);
        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:theURL]];
        [request setHTTPMethod:@"POST"];
        NSError *error;
        NSURLResponse *response;
        NSData *urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSString *data = [[NSString alloc] initWithData:urlData encoding:NSUTF8StringEncoding];
        if ([data isEqualToString:@"Done"])

    I can run the code from the browser and it works just fine using the data I got from the NSLog output. The NSLog output for each value is correct. But for some reason, when I put a break on the if ([data isEqualToString:@"Done"]), it has no return value. I checked each value for what it was sending (and again, it was correct in the NSLog output) and I found that the value "theID" said "Out of scope", although, again, the NSLog had the value in it correctly. So I searched the forum and found a similar problem. I took their advice and added "retain" to the "theID" value like so:

        theID = [customObjInstance TID];
        [theID retain];

    However, that did not solve the issue... Here is the console NSLog output:

        [Session started at 2010-04-11 01:31:50 -0400.]
        wait_fences: failed to receive reply: 10004003
        wait_fences: failed to receive reply: 10004003
        nbTxt(5952,0xa0937500) malloc: *** error for object 0x3c0ebc0: double free
        *** set a breakpoint in malloc_error_break to debug
        2010-04-11 01:32:12.270 nbTxt[5952:207] 5122,Rob S.,5122,NB010203

    The NSLog values I am sending are in the last line: "5122,Rob S.,5122,NB010203". Any help would be great :o) David

    Read the article

  • Odd behavior in Django Form (readonly field/widget)

    - by jamida
    I'm having a problem with a test app I'm writing to verify some Django functionality. The test app is a small "grade book" application that is currently using Alex Gaynor's readonly field functionality http://lazypython.blogspot.com/2008/12/building-read-only-field-in-django.html There are 2 problems which may be related. First, when I flop the comment on these 2 lines below: # myform = GradeForm(data=request.POST, instance=mygrade) myform = GradeROForm(data=request.POST, instance=mygrade) it works like I expect, except of course that the student field is changeable. When the comments are the shown way, the "studentId" field is displayed as a number (not the name, problem 1) and when I hit submit I get an error saying that studentId needs to be a Student instance. I'm at a loss as to how to fix this. I'm not wedded to Alex Gaynor's code. ANY code will work. I'm relatively new to both Python and Django, so the hints I've seen on websites that say "making a read-only field is easy" are still beyond me. // models.py class Student(models.Model): name = models.CharField(max_length=50) parent = models.CharField(max_length=50) def __unicode__(self): return self.name class Grade(models.Model): studentId = models.ForeignKey(Student) finalGrade = models.CharField(max_length=3) # testbed.grades.readonly is alex gaynor's code from testbed.grades.readonly import ReadOnlyField class GradeROForm(ModelForm): studentId = ReadOnlyField() class Meta: model=Grade class GradeForm(ModelForm): class Meta: model=Grade // views.py def modifyGrade(request,student): student = Student.objects.get(name=student) mygrade = Grade.objects.get(studentId=student) if request.method == "POST": # myform = GradeForm(data=request.POST, instance=mygrade) myform = GradeROForm(data=request.POST, instance=mygrade) if myform.is_valid(): grade = myform.save() info = "successfully updated %s" % grade.studentId else: # myform=GradeForm(instance=mygrade) myform=GradeROForm(instance=mygrade) return render_to_response('grades/modifyGrade.html',locals()) // template <p>{{ info }}</p> <form method="POST" action=""> <table> {{ myform.as_table }} </table> <input type="submit" value="Submit"> </form> // Alex Gaynor's code from django import forms from django.utils.html import escape from django.utils.safestring import mark_safe from django.forms.util import flatatt class ReadOnlyWidget(forms.Widget): def render(self, name, value, attrs): final_attrs = self.build_attrs(attrs, name=name) if hasattr(self, 'initial'): value = self.initial return mark_safe("<span %s>%s</span>" % (flatatt(final_attrs), escape(value) or '')) def _has_changed(self, initial, data): return False class ReadOnlyField(forms.FileField): widget = ReadOnlyWidget def __init__(self, widget=None, label=None, initial=None, help_text=None): forms.Field.__init__(self, label=label, initial=initial, help_text=help_text, widget=widget) def clean(self, value, initial): self.widget.initial = initial return initial
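
    For comparison only, here is one commonly suggested alternative to a custom read-only field, sketched under the assumption of a Django 1.x-era ModelForm and not taken from the question or from Alex Gaynor's code: keep the normal ModelChoiceField but refuse to let POSTed data change it, so the cleaned value is always the existing Student instance. (Newer Django versions expose this directly as the field's disabled option.)

        from django.forms import ModelForm

        class GradeROForm(ModelForm):
            class Meta:
                model = Grade

            def __init__(self, *args, **kwargs):
                super(GradeROForm, self).__init__(*args, **kwargs)
                field = self.fields['studentId']
                field.required = False                       # a disabled control is not submitted by the browser
                field.widget.attrs['disabled'] = 'disabled'  # cosmetic: grey it out in the page

            def clean_studentId(self):
                # ignore whatever came back in the POST and keep the original Student
                return self.instance.studentId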

    Read the article

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process given its PID. The command-line utility lsof does something similar. Here's the source to the python script: import ctypes from ctypes import util import sys PROC_PIDVNODEPATHINFO = 9 proc = ctypes.cdll.LoadLibrary(util.find_library("libproc")) print(proc.proc_pidinfo) class vnode_info(ctypes.Structure): _fields_ = [('data', ctypes.c_ubyte * 152)] class vnode_info_path(ctypes.Structure): _fields_ = [('vip_vi', vnode_info), ('vip_path', ctypes.c_char * 1024)] class proc_vnodepathinfo(ctypes.Structure): _fields_ = [('pvi_cdir', vnode_info_path), ('pvi_rdir', vnode_info_path)] inst = proc_vnodepathinfo() pid = int(sys.argv[1]) ret = proc.proc_pidinfo( pid, PROC_PIDVNODEPATHINFO, 0, ctypes.byref(inst), ctypes.sizeof(inst) ) print(ret, inst.pvi_cdir.vip_path) However, even though this script behaves as expected on Python 2.6, it does not work in Python 2.5: host:dir user$ sudo /usr/bin/python2.6 script.py 2698 <_FuncPtr object at 0x100419ae0> (2352, '/') host:dir user$ sudo /usr/bin/python2.5 script.py 2698 <_FuncPtr object at 0x19fdc0> (0, '') (PID 2698 is "Activity Monitor.app"). Note the different return values. Since this program strongly based on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness -- or even come up with a solution for 2.5 -- but here's some stuff: host:dir user$ otool -L /usr/bin/python2.6 /usr/bin/python2.6: /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0) host:dir user$ otool -L /usr/bin/python2.5 /usr/bin/python2.5 (architecture i386): /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0) /usr/bin/python2.5 (architecture ppc7400): /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0) host:dir user$ uname -a Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 Thanks to anyone that has a clue about what's going on here:)
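
    One plausible explanation, offered here as an assumption rather than a verified diagnosis: proc_pidinfo's prototype is int proc_pidinfo(int pid, int flavor, uint64_t arg, void *buffer, int buffersize), and without explicit argtypes ctypes passes the literal 0 as a plain C int, which a 32-bit interpreter (the i386/ppc7400 Python 2.5 build shown by otool) and a 64-bit one may marshal differently. Declaring the prototype up front removes that guesswork:

        import ctypes
        from ctypes import util

        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))

        # int proc_pidinfo(int pid, int flavor, uint64_t arg, void *buffer, int buffersize)
        proc.proc_pidinfo.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_uint64,
                                      ctypes.c_void_p, ctypes.c_int]
        proc.proc_pidinfo.restype = ctypes.c_int
        # the existing call with ctypes.byref(inst) and ctypes.sizeof(inst) can then stay as it is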

    Read the article

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past-tense, who decided to put svn branches in the same folder as trunk. Normally, this wouldn't affect me that much but since I'm using git-svn things are going so well. After I did a fetch it created a folder for each branch in my root folder so I have three folders, drupal, trunk, and client. The drupal folder is git's master branch, client and trunk are the svn branches. Merging and committing works great, in fact everything git related is working superb. However dcommit is totally hosed, it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on. So my question is, what have I done wrong in my .git/config and is there anything I can do to fix this or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy. [svn-remote "svn"] url = https://svn.mydomain.com/svn/project_name fetch = trunk:refs/remotes/trunk branches = *:refs/remotes/* tags = tags/*:refs/remotes/tags/* Normally the branches line would look like this (when using --stdlayout): branches = branches/*:refs/remotes/branches/* ls output is thus: $ ls client/ docs/ drupal/ sql/ trunk/

    Read the article

  • python raw_input odd behavior with accents containing strings

    - by Ryan
    I'm writing a program that asks the user for input that contains accents. The user input string is tested to see if it matches a string declared in the program. As you can see below, my code is not working: code # -*- coding: utf-8 -*- testList = ['má'] myInput = raw_input('enter something here: ') print myInput, repr(myInput) print testList[0], repr(testList[0]) print myInput in testList output in eclipse with pydev enter something here: má mv° 'm\xe2\x88\x9a\xc2\xb0' má 'm\xc3\xa1' False output in IDLE enter something here: má má u'm\xe1' má 'm\xc3\xa1' Warning (from warnings module): File "/Users/ryanculkin/Desktop/delete.py", line 8 print myInput in testList UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal False How can I get my code to print True when comparing the two strings? Additionally, I note that the result of running this code on the same input is different depending on whether I use eclipse or IDLE. Why is this? My eventual goal is to put my program on the web; is there anything that I need to be aware of, since the result seems to be so volatile?
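
    A sketch of the usual remedy, assuming Python 2.x and a console whose encoding is reported correctly by sys.stdin.encoding (this will not fix a console that itself mangles the bytes, as the Eclipse/PyDev output above suggests): compare unicode with unicode by decoding the keyboard input and using a unicode literal for the test value.

        # -*- coding: utf-8 -*-
        import sys

        test_list = [u'má']                      # unicode literal, not a utf-8 byte string

        raw = raw_input('enter something here: ')
        if isinstance(raw, unicode):             # IDLE may hand back unicode already
            my_input = raw
        else:
            my_input = raw.decode(sys.stdin.encoding or 'utf-8')

        print my_input in test_list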

    Read the article

  • Vector of pointers to base class, odd behaviour calling virtual functions

    - by Ink-Jet
    I have the following code #include <iostream> #include <vector> class Entity { public: virtual void func() = 0; }; class Monster : public Entity { public: void func(); }; void Monster::func() { std::cout << "I AM A MONSTER" << std::endl; } class Buddha : public Entity { public: void func(); }; void Buddha::func() { std::cout << "OHMM" << std::endl; } int main() { const int num = 5; // How many of each to make std::vector<Entity*> t; for(int i = 0; i < num; i++) { Monster m; Entity * e; e = &m; t.push_back(e); } for(int i = 0; i < num; i++) { Buddha b; Entity * e; e = &b; t.push_back(e); } for(int i = 0; i < t.size(); i++) { t[i]->func(); } return 0; } However, when I run it, instead of each class printing out its own message, they all print the "Buddha" message. I want each object to print its own message: Monsters print the monster message, Buddhas print the Buddha message. What have I done wrong?

    Read the article

  • Odd ActiveRecord model dynamic initialization bug in production

    - by qfinder
    I've got an ActiveRecord (2.3.5) model that occasionally exhibits incorrect behavior that appears to be related to a problem in its dynamic initialization. Here's the code: class Widget < ActiveRecord::Base extend ActiveSupport::Memoizable serialize :settings VALID_SETTINGS = %w(show_on_sale show_upcoming show_current show_past) VALID_SETTINGS.each do |setting| class_eval %{ def #{setting}=(val); self.settings[:#{setting}] = (val == "1"); end def #{setting}; self.settings[:#{setting}]; end } end def initialize_settings self.settings ||= { :show_on_sale => true, :show_upcoming => true } end after_initialize :initialize_settings # All the other stuff the model does end The idea was to use a single record field (settings) to persist a bunch of configuration data for this object, but allow all the settings to seamlessly work with form helpers and the like. (Why this approach makes sense here is a little out of scope, but let's assume that it does.) Net-net, Widget should end up with instance methods (eg #show_on_sale= #show_on_sale) for all the entires in the VALID_SETTINGS array. Any default values should be specified in initialize_settings. And indeed this works, mostly. In dev and staging, no problems at all. But in production, the app sometimes ends up in a state where a) any writes to the dynamically generated setters fail and b) none of the default values appear to be set - although my leading theory is that the dynamically generated reader methods are just broken. The code, db, and environment is otherwise identical between the three. A typical error message / backtrace on the fail looks like: IndexError: index 141145 out of string (eval):2:in []=' (eval):2:inshow_on_sale=' [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in send' [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:inattributes=' [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in each' [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:inattributes=' [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2634:in `update_attributes!' ...(then controller and all the way down) Ideas or theories as to what might be going on? My leading theory is that something is going wrong in instance initialization wherein the class instance variable settings is ending up as a string rather than a hash. This explains both the above setter failure (:show_on_sale is being used to index into the string) and the fact that getters don't work (an out of bounds [] call on a string just returns nil). But then how and why might settings occasionally end up as a string rather than hash?

    Read the article

  • Odd "Object reference not set to an instance of an object" involving xWinForms

    - by Kyle
    Hey, I've been trying to get the xWinForms 3.0 library (a library with forms support in xna) working with my C# XNA Game project but I keep getting the same problem. I add the reference to my project, put in the using statement, declare a formCollection variable and then I try to initialize it. whenever I run the project I get stopped on this line: formCollection = new FormCollection(this.Window, Services, ref graphics); it gives me the error: " System.NullReferenceException was unhandled Message="Object reference not set to an instance of an object." Source="Microsoft.Xna.Framework" StackTrace: at Microsoft.Xna.Framework.Graphics.VertexShader..ctor(GraphicsDevice graphicsDevice, Byte[] shaderCode) at Microsoft.Xna.Framework.Graphics.SpriteBatch.ConstructPlatformData() at Microsoft.Xna.Framework.Graphics.SpriteBatch..ctor(GraphicsDevice graphicsDevice) at xWinFormsLib.FormCollection..ctor(GameWindow window, IServiceProvider services, GraphicsDeviceManager& graphics) at GameSolution.Game2.LoadContent() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 45 at Microsoft.Xna.Framework.Game.Initialize() at GameSolution.Game2.Initialize() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 37 at Microsoft.Xna.Framework.Game.Run() at GameSolution.Program.Main(String[] args) in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Program.cs:line 14 InnerException: " In a project I downloaded that used the xWinForms, I put the following code in and it compiled and ran no error. but when I put it in my project I get the error. Am I making some stupid mistake about including dlls or something? I've been at this for hours and I can't seem to find anything that would cause this. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework.Net; using Microsoft.Xna.Framework.Storage; using xWinFormsLib; namespace GameSolution { public class Game2 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; FormCollection formCollection; public Game2() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); formCollection = new FormCollection(this.Window, Services, ref graphics); } protected override void Update(GameTime gameTime) { base.Update(gameTime); } protected override void Draw(GameTime gameTime) { base.Draw(gameTime); } } } Any help would be greatly appreciated ._.

    Read the article
