Search Results

Search found 5287 results on 212 pages for 'pseudo frame'.

Page 70/212 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Are lambda expressions/delegates in C# "pure", or can they be?

    - by Bob
    I recently asked about functional programs having no side effects, and learned what this means for making parallelized tasks trivial. Specifically, that "pure" functions make this trivial as they have no side effects. I've also recently been looking into LINQ and lambda expressions, as I've run across examples many times here on StackOverflow involving enumeration. That got me wondering whether parallelizing an enumeration or loop can be "easier" in C# now. Are lambda expressions "pure" enough to pull off trivial parallelizing? Maybe it depends on what you're doing with the expression, but can they be pure enough? Would something like this be theoretically possible/trivial in C#? (1) Break the loop into chunks, (2) run a thread to loop through each chunk, and (3) run a function that does something with the value at the current loop position of each thread. For instance, say I had a bunch of objects in a game loop (I am developing a game and was thinking about the possibility of multiple threads) and had to do something with each of them every frame; would the above be trivial to pull off? Looking at IEnumerable, it seems it only keeps track of the current position, so I'm not sure I could use the normal generic collections to break the enumeration into "chunks". Sorry about this question. I used the numbered steps above instead of pseudo-code because I don't even know enough to write pseudo-code off the top of my head. My .NET knowledge has been purely simple business stuff and I'm new to delegates and threads, etc. I mainly want to know whether the above approach is worth pursuing, and whether delegates/lambdas introduce any extra concerns when it comes to parallelizing them.
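    A minimal sketch of the "chunking" idea, assuming .NET 4's Parallel.ForEach and a made-up GameObject type; this is only safe when the per-object work touches no shared state:

    ```csharp
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class GameObject
    {
        // Hypothetical per-frame work; parallelizable only because it
        // touches no state shared with other objects.
        public void Update(double dt) { /* ... */ }
    }

    class GameLoop
    {
        public void UpdateAll(List<GameObject> objects, double dt)
        {
            // The runtime partitions the list into chunks and runs the lambda
            // on worker threads from the thread pool.
            Parallel.ForEach(objects, obj => obj.Update(dt));
        }
    }
    ```

    Parallel.ForEach handles the chunk partitioning and thread management itself, which is why the "purity" of the per-item lambda is what actually matters.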

    Read the article

  • How can I serialize and communicate ActiveRecord instances across identical Rails apps?

    - by Blaine LaFreniere
    The main idea is that I have several worker instances of a Rails app, and then a main aggregate app. I want to do something like the following rough pseudo-code: posts = Post.all.to_json( :include => { :comments => { :include => :blah } }) # send data to another, identical, exactly the same Rails app # ... # Fast forward to the separate but identical Rails app: # ... # remote_posts is the posts result from the first Rails app posts = JSON.parse(remote_posts) posts.each do |post| p = Post.new p = post p.save end I'm shying away from Active Resource because I have thousands of records to create, which would mean thousands of requests, one for each record. Unless there is a simple way to do it all in one request with Active Resource, I'd like to avoid it. Format doesn't matter; whatever makes it convenient. The IDs don't need to be sent, because the other app will just be creating records and assigning new IDs in the "aggregate" system. The hierarchy would need to be preserved (e.g. "Hey other Rails app, I have genres, and each genre has an artist, and each artist has an album, and each album has songs", etc.)
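    A rough sketch of the receiving side, assuming the JSON shape produced by the to_json call above, that Post has_many :comments, and that ActiveSupport's Hash#except is available; remote_posts is a placeholder name for the received string:

    ```ruby
    require 'json'

    records = JSON.parse(remote_posts)        # remote_posts: JSON string from a worker app

    Post.transaction do
      records.each do |attrs|
        comments = attrs.delete("comments") || []
        post = Post.create!(attrs.except("id"))             # let the aggregate assign new ids
        comments.each { |c| post.comments.create!(c.except("id", "post_id")) }
      end
    end
    ```

    Wrapping the whole import in one transaction keeps it to a single round of database work per record rather than one HTTP request per record.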

    Read the article

  • Dragging a copy of all selected elements from a select box--possible?

    - by Sean
    I have a picklist web interface: a pair of select elements with a pair of buttons (a left-pointing arrow and a right-pointing arrow) between them. Users can move items between the two columns by, e.g., selecting one of the options in the left column and clicking on the right-pointing arrow. Now I have an enhancement request: someone wants to be able to drag-and-drop items between the two columns instead of clicking a button. The problem with my initial two-select-box setup is that as soon as I click one of the highlighted options to initiate a drag, all of the other selected options are deselected. Using jQuery, I've attached mousedown event handlers to both the select boxes and each individual option that just call preventDefault() on the event object, but this isn't sufficient. On Firefox 3 the clicked-on option loses focus immediately, but all other options are still deselected, and on IE6 (which I regrettably still have to support) it makes no difference at all. So I thought I could maybe create a reasonable facsimile of a select box using list elements or divs or something. I can create something reasonable-looking that works on Firefox, but on IE6, if I shift-click on an element of my pseudo-select object (in order to select a range of options), IE selects all of the text between where I click and the last place I clicked. Again, I've attached preventDefault-ing mousedown, mouseup, and click handlers to all of the elements involved, but it doesn't help. I've even tried overlaying transparent divs over both my original select boxes and my pseudo-select objects, thinking to intercept mouse clicks and manage the selections manually, but I can't make it work on IE. If I use a select box, I can't prevent clicks from changing the selection, and if I use text that just looks like a select element, I can't prevent it from selecting a range of text on a shift-click. Is there some general solution, or am I just out of luck?

    Read the article

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first thread1 occasionally calls the following pseudo function: void waitForThread2() { if (thread2 is not idle) { return; } notifyThread2IamReady(); while (thread2IsExclusive) { } } The second thread2 is forever in the following pseudo loop: for (;;) { Notify thread1 I am idle. while (!thread1IsReady()) { } Notify thread1 I am exclusive. Do some work while thread1 is blocked. Notify thread1 I am busy. Do some work in parallel with thread1. } What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables but found the delay between thread2 doing 'notify thread1 I am busy' and the loop in waitForThread2() on thread2IsExclusive() can be almost a full second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
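    A sketch of how the same handshake can be written with a mutex and condition variable so that neither thread spins and wake-ups happen promptly; the "thread2 is idle" check from the question is omitted and all names are invented:

    ```c
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int thread1_ready     = 0;   /* set by thread1, consumed by thread2 */
    static int thread2_exclusive = 0;   /* set while thread2 runs its exclusive phase */

    void waitForThread2(void)
    {
        pthread_mutex_lock(&lock);
        thread1_ready = 1;
        pthread_cond_broadcast(&cond);          /* wake thread2 */
        /* Sleep (no spinning) until thread2 has taken and finished its exclusive phase. */
        while (thread1_ready || thread2_exclusive)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    void *thread2_main(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!thread1_ready)              /* wait for thread1 without burning CPU */
                pthread_cond_wait(&cond, &lock);
            thread1_ready = 0;
            thread2_exclusive = 1;
            pthread_mutex_unlock(&lock);

            /* ... exclusive work while thread1 is blocked ... */

            pthread_mutex_lock(&lock);
            thread2_exclusive = 0;
            pthread_cond_broadcast(&cond);      /* unblock thread1 promptly */
            pthread_mutex_unlock(&lock);

            /* ... work that runs in parallel with thread1 ... */
        }
        return NULL;
    }
    ```

    Every state change happens under the mutex and is followed by a broadcast, and every wait re-checks its predicate in a while loop; that combination is what removes both the busy-waiting and the long notification-to-detection latency.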

    Read the article

  • Hibernate query for multiple items in a collection

    - by aarestad
    I have a data model that looks something like this: public class Item { private List<ItemAttribute> attributes; // other stuff } public class ItemAttribute { private String name; private String value; } (this obviously simplifies away a lot of the extraneous stuff) What I want to do is create a query to ask for all Items with one OR MORE particular attributes, ideally joined with arbitrary ANDs and ORs. Right now I'm keeping it simple and just trying to implement the AND case. In pseudo-SQL (or pseudo-HQL if you would), it would be something like: select all items where attributes contains(ItemAttribute(name="foo1", value="bar1")) AND attributes contains(ItemAttribute(name="foo2", value="bar2")) The examples in the Hibernate docs didn't seem to address this particular use case, but it seems like a fairly common one. The disjunction case would also be useful, especially so I could specify a list of possible values, i.e. where attributes contains(ItemAttribute(name="foo", value="bar1")) OR attributes contains(ItemAttribute(name="foo", value="bar2")) -- etc. Here's an example that works OK for a single attribute: return getSession().createCriteria(Item.class) .createAlias("itemAttributes", "ia") .add(Restrictions.conjunction() .add(Restrictions.eq("ia.name", "foo")) .add(Restrictions.eq("ia.attributeValue", "bar"))) .list(); Learning how to do this would go a long ways towards expanding my understanding of Hibernate's potential. :)
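    Not from the Hibernate docs, just a sketch of one commonly used shape for the AND case: unlike the Criteria API, HQL lets you join the same collection twice under different aliases, so each required attribute gets its own join (property names follow the class definition in the question):

    ```java
    // A sketch under the assumptions above; returns Items having BOTH attributes.
    // Uses org.hibernate.Query from getSession().
    List<?> items = getSession().createQuery(
            "select distinct i from Item i " +
            "join i.attributes a1 " +
            "join i.attributes a2 " +
            "where a1.name = :n1 and a1.value = :v1 " +
            "  and a2.name = :n2 and a2.value = :v2")
        .setParameter("n1", "foo1").setParameter("v1", "bar1")
        .setParameter("n2", "foo2").setParameter("v2", "bar2")
        .list();
    ```

    The OR (disjunction) case needs only a single join with an or over the (name, value) pairs.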

    Read the article

  • I'm about to learn x86 assembly on OS X 10.6 - please let me know how to compile it

    - by kevin choung
    Hello! I'm about to learn x86 assembly language on Mac OS X. I'm using the as command to assemble my file from the terminal, but I get several errors and I don't know how to get past them. Here are the errors and my assembly code, which is quite simple. **ung-mi-lims-macbook-pro:pa2 ungmi$ as swap.s swap.s:16:Unknown pseudo-op: .type swap.s:16:Rest of line ignored. 1st junk character valued 115 (s). swap.s:19:suffix or operands invalid for `push' swap.s:46:suffix or operands invalid for `pop' ung-mi-lims-macbook-pro:pa2 ungmi$** And the source is: .text .align 4 .globl swap .type swap,@function swap: pushl %ebp movl %esp, %ebp movl %ebp, %esp popl %ebp ret I searched for a solution and found that I have to pass -arch i386, but then: **ung-mi-lims-macbook-pro:pa2 ungmi$ as -arch i386 swap.s swap.s:16:Unknown pseudo-op: .type swap.s:16:Rest of line ignored. 1st junk character valued 115 (s). ung-mi-lims-macbook-pro:pa2 ungmi$** Could you help me out? Just let me know what I need in order to assemble an assembly file. I already have Xcode, but I'd rather do this from the command line with the vi editor. I will be waiting for your answer, please help.
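    For readers hitting the same errors: Apple's as targets Mach-O, which does not understand the ELF-specific .type directive, and without -arch i386 it assembles for x86_64, where pushl/popl are invalid. Something along these lines should assemble (the leading underscore is the Mach-O convention for C-callable symbols):

    ```asm
    # swap.s - assemble with: as -arch i386 -o swap.o swap.s
    .text
    .align 4
    .globl _swap
    _swap:
            pushl   %ebp
            movl    %esp, %ebp
            movl    %ebp, %esp
            popl    %ebp
            ret
    ```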

    Read the article

  • Long running method causing race condition

    - by keeleyt83
    Hi, I'm relatively new with hibernate so please be gentle. I'm having an issue with a long running method (~2 min long) and changing the value of a status field on an object stored in the DB. The pseudo-code below should help explain my issue. public foo(thing) { if (thing.getStatus() == "ready") { thing.setStatus("finished"); doSomethingAndTakeALongTime(); } else { // Thing already has a status of finished. Send the user back a message. } } The pseudo-code shouldn't take much explanation. I want doSomethingAndTakeALongTime() to run, but only if it has a status of "ready". My issue arises whenever it takes 2 minutes for doSomethingAndTakeALongTime() to finish and the change to thing's status field doesn't get persisted to the database until it leaves foo(). So another user can put in a request during those 2 minutes and the if statement will evaluate to true. I've already tried updating the field and flushing the session manually, but it didn't seem to work. I'm not sure what to do from here and would appreciate any help. PS: My hibernate session is managed by spring.
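    Not a confirmed fix, only a sketch of the pattern often suggested for this situation: take a pessimistic lock so the check-and-set on status is atomic, and flush the change before the long-running call. Entity and method names come from the pseudo-code above; the thingId parameter and sessionFactory are assumptions.

    ```java
    // A sketch (Hibernate 3 API); uses org.hibernate.Session and org.hibernate.LockMode.
    public void foo(Long thingId) {
        Session session = sessionFactory.getCurrentSession();

        // SELECT ... FOR UPDATE: a concurrent request blocks here until this transaction ends.
        Thing thing = (Thing) session.get(Thing.class, thingId, LockMode.UPGRADE);

        if ("ready".equals(thing.getStatus())) {
            thing.setStatus("finished");
            session.flush();              // push the UPDATE before the slow work starts
            doSomethingAndTakeALongTime();
        } else {
            // already finished: report back to the user
        }
    }
    ```

    Because the lock is held until the surrounding (Spring-managed) transaction commits, a second request waits instead of double-running; if blocking that long is unacceptable, the other common approach is to commit the status change in its own short transaction (e.g. REQUIRES_NEW) before starting the slow work.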

    Read the article

  • JavaScript: how to create a JS event that requires 2 separate JS files to be loaded first while downloading asynchronously

    - by Teddyk
    I want to perform asynchronous JavaScript downloads of two files that have dependencies attached to them. // async download of jquery and gmaps function addScript(url) { var script = document.createElement('script'); script.src = url; document.getElementsByTagName('head')[0].appendChild(script); } addScript('http://google.com/gmaps.js'); addScript('http://jquery.com/jquery.js'); // define some function dependencies function requiresJQuery() { ... } function requiresGmaps() { ... } function requiresBothJQueryGmaps() { ... } // do some work that has no dependencies on either JQuery or Google maps ... // QUESTION - Pseudo code below // now call a function that requires Gmaps to be loaded if (GmapsIsLoaded) { requiresGmaps(); } // QUESTION - Pseudo code below // then do something that requires both JQuery & Gmaps (or wait until they are loaded) if (JQueryAndGmapsIsLoaded) { requiresBothJQueryGmaps(); } Question: How can I create an event to indicate when (1) JQuery is loaded, (2) Google Maps is loaded, and (3) JQuery & Google Maps are both loaded?
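    One possible shape for those events (not from the question): give addScript a callback wired to the script element's load handlers, and have a small checker run each dependent function once its prerequisites have arrived. onreadystatechange is included only for older IE; the requires* function names are reused from the question:

    ```javascript
    var loaded = { jquery: false, gmaps: false };
    var fired  = { gmaps: false, both: false };

    function addScript(url, onLoad) {
      var script = document.createElement('script');
      script.src = url;
      script.onload = onLoad;                        // most browsers
      script.onreadystatechange = function () {      // older IE
        if (this.readyState === 'loaded' || this.readyState === 'complete') {
          this.onreadystatechange = null;
          onLoad();
        }
      };
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    function check() {
      if (loaded.gmaps && !fired.gmaps) { fired.gmaps = true; requiresGmaps(); }
      if (loaded.jquery && loaded.gmaps && !fired.both) { fired.both = true; requiresBothJQueryGmaps(); }
    }

    addScript('http://google.com/gmaps.js', function () { loaded.gmaps = true; check(); });
    addScript('http://jquery.com/jquery.js', function () { loaded.jquery = true; check(); });
    ```

    The fired flags keep each dependent function from running more than once when the scripts finish in either order.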

    Read the article

  • NHibernate: Root collection with a root object

    - by Daniel
    Hi, I want to track a list of root objects which are not contained by any element. I want the following pseudo code to work: IList<FavoriteItem> list = session.Linq<FavoriteItem>().ToList(); list.Add(item1); list.Add(item2); list.Remove(item3); list.Remove(item4); var item5 = list.First(i => i.Name = "Foo"); item5.Name = "Bar"; session.Save(list); This should automatically insert item1 and item2, delete item3 and item4, and update item5 (i.e. I don't want to call session.SaveOrUpdate() for each item separately). Is it possible to define a pseudo entity that is not associated with a table? For example I want to define the class Favorites and map 2 collection properties of it, and then I want to write code like this: var favs = session.Linq<Favorites>(); favs.FavoriteColors.Add(new FavoriteColor(...)); favs.FavoriteMovies.Add(new FavoriteMovie(...)); session.SaveOrUpdate(favs); FavoriteColors and FavoriteMovies are the only properties of the Favorites class and are of type IList<FavoriteColor> and IList<FavoriteMovie>. I only want to persist these two collection properties, but not the Favorites class itself. Any help is much appreciated.

    Read the article

  • Alter a function as a parameter before evaluating it in R?

    - by Shane
    Is there any way, given a function passed as a parameter, to alter its input parameter string before evaluating it? Here's pseudo-code for what I'm hoping to achieve: test.func <- function(a, b) { # here I want to alter the b expression before evaluating it: b(..., val1=a) } Given the function call passed to b, I want to add in a as another parameter without needing to always specify ... in the b call. So the output from this test.func call should be: test.func(a="a", b=paste(1, 2)) "1" "2" "a" Edit: Another way I could see doing something like this would be if I could assign the additional parameter within the scope of the parent function (again, as pseudo-code); in this case a would be within the scope of t1 and hence t2, but not globally assigned: t2 <- function(...) { paste(a=a, ...) } t1 <- function(a, b) { local( { a <<- a; b } ) } t1(a="a", b=t2(1, 2)) This is somewhat akin to currying in that I'm nesting the parameter within the function itself. Edit 2: Just to add one more comment to this: I realize that one related approach could be to use "prototype-based programming" such that things would be inherited (which could be achieved with the proto package). But I was hoping for an easier way to simply alter the input parameters before evaluating in R.
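    One way to get this effect (a sketch, not the only approach): capture b unevaluated with substitute(), splice the extra argument into the captured call, and evaluate it in the caller's frame:

    ```r
    test.func <- function(a, b) {
      call <- substitute(b)          # the unevaluated call, e.g. paste(1, 2)
      call[["val1"]] <- a            # add (or overwrite) the val1 argument
      eval(call, parent.frame())     # evaluate the modified call
    }

    test.func(a = "a", b = paste(1, 2))
    # [1] "1 2 a"    (paste() folds the extra named argument into its ...)
    ```

    Because b is captured before evaluation, the caller never has to write ... in the b expression; the extra argument is added to whatever call was supplied.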

    Read the article

  • question about partition

    - by davit-datuashvili
    I have a question about the Hoare partition method. Here is the pseudo-code and also my code; please correct the pseudo-code if something is wrong. The pseudo-code: HOARE-PARTITION(A, p, r) 1 x ← A[p] 2 i ← p - 1 3 j ← r + 1 4 while TRUE 5 do repeat j ← j - 1 6 until A[j] ≤ x 7 do repeat i ← i + 1 8 until A[i] ≥ x 9 if i < j 10 then exchange A[i] ↔ A[j] 11 else return j And my code: public class Hoare { public static int partition(int a[],int p,int r) { int x=a[p]; int i=p-1; int j=r+1; while (true) { do { j=j-1; } while(a[j]>=x); do { i=i+1; } while(a[i]<=x); if (i<j) { int t=a[i]; a[i]=a[j]; a[j]=t; } else { return j; } } } public static void main(String[]args){ int a[]=new int[]{13,19,9,5,12,8,7,4,11,2,6,21}; partition(a,0,a.length-1); } } And the mistake is this error: "Class names, 'Hoare', are only accepted if annotation processing is explicitly requested. 1 error" Any ideas?
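    On the reported error itself: that message is what javac prints when it is given a bare class name rather than a file name, so the likely fix (a guess from the error text alone) is simply:

    ```
    javac Hoare.java    # compile the source file (not "javac Hoare")
    java Hoare          # then run the compiled class by name
    ```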

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design. But I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition: employee id INT AUTO_INCREMENT ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2 with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through any table that has a FK relationship to employee every night, and update those FK to reference the newly-effective employee entry. I have seen various postings in both pure SQL and Hibernate forums that I should have a separate employee_versions table, which is where I would have all effective-dated data stored, resulting in the updated pseudo-definition below: employee id INT AUTO_INCREMENT employee_versions id INT AUTO_INCREMENT employee_id INT FK employee.id ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Then to get any actual data, one would have to actually select from employee_versions with the proper employee_id and date range. This feels rather unnatural to have this secondary "versions" table for each versioned entity. Anyone have any opinions, suggestions from your own prior work, etc? Like I said, I'm taking this purely from a general SQL design standpoint first before layering in Hibernate on top. Thanks!
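    For what it's worth, reading from the separate versions table is not especially painful. A sketch of "the employee data in effect today", using the column names from the pseudo-definition above:

    ```sql
    -- The currently effective row for each employee under the employee/employee_versions split.
    SELECT e.id AS employee_id, v.*
    FROM employee e
    JOIN employee_versions v
      ON  v.employee_id = e.id
      AND CURRENT_DATE BETWEEN v.effectiveFrom AND v.effectiveTo;
    ```

    Replacing CURRENT_DATE with any other date gives an as-of view, which is the usual payoff of keeping the versions in their own table while the FKs keep pointing at the stable employee row.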

    Read the article

  • Querying for a unique value based on the aggregate of another value while grouping on a third value

    - by Justin Swartsel
    So I know this problem isn't a new one, but I'm trying to wrap my head around it and understand the best way to deal with scenarios like this. Say I have a hypothetical table 'X' that looks like this:

        GroupID    ID (identity)    SomeDateTime
        --------------------------------------------
        1          1000             1/1/01
        1          1001             2/2/02
        1          1002             3/3/03
        2          1003             4/4/04
        2          1004             5/5/05

    I want to query it so the result set looks like this:

        ----------------------------------------
        1          1002             3/3/03
        2          1004             5/5/05

    Basically what I want is the MAX SomeDateTime value grouped by my GroupID column. The kicker is that I DON'T want to group by the ID column, I just want to know the 'ID' that corresponds to the MAX SomeDateTime. I know one pseudo-solution would be: ;WITH X1 as ( SELECT MAX(SomeDateTime) as SomeDateTime, GroupID FROM X GROUP BY GroupID ) SELECT X1.SomeDateTime, X1.GroupID, X2.ID FROM X1 INNER JOIN X as X2 ON X.DateTime = X2.DateTime But this doesn't solve the fact that a DateTime might not be unique. And it seems sloppy to join on a DateTime like that. Another pseudo-solution could be: SELECT X.GroupID, MAX(X.ID) as ID, MAX(X.SomeDateTime) as SomeDateTime FROM X GROUP BY X.GroupID But there are no guarantees that ID will actually match the row that SomeDateTime comes from. A third less useful option might be: SELECT TOP 1 X.GroupID, X.ID, X.SomeDateTime FROM X WHERE X.GroupID = 1 ORDER BY X.SomeDateTime DESC But obviously that only works with a single, known, GroupID. I want to be able to join this result set on GroupID and/or ID. Does anyone know of any clever solutions? Any good uses of windowing functions? Thanks!
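    Since the question explicitly asks about windowing functions: a sketch using ROW_NUMBER() (SQL Server 2005 and later, and standard SQL), which sidesteps both the duplicate-DateTime join and the mismatched MAX(ID) problem:

    ```sql
    SELECT GroupID, ID, SomeDateTime
    FROM (
        SELECT GroupID, ID, SomeDateTime,
               ROW_NUMBER() OVER (PARTITION BY GroupID
                                  ORDER BY SomeDateTime DESC, ID DESC) AS rn
        FROM X
    ) AS ranked
    WHERE rn = 1;
    ```

    The ID DESC tie-breaker makes the result deterministic when two rows in a group share the same SomeDateTime, and the derived table joins cleanly on GroupID and/or ID.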

    Read the article

  • Should I use an interface or factory (and interface) for a cross-platform implementation?

    - by nbolton
    Example A: // pseudo code interface IFoo { void bar(); } class FooPlatformA : IFoo { void bar() { /* ... */ } } class FooPlatformB : IFoo { void bar() { /* ... */ } } class Foo : IFoo { IFoo m_foo; public Foo() { if (detectPlatformA()} { m_foo = new FooPlatformA(); } else { m_foo = new FooPlatformB(); } } // wrapper function - downside is we'd have to create one // of these for each function, which doesn't seem right. void bar() { m_foo.bar(); } } Main() { Foo foo = new Foo(); foo.bar(); } Example B: // pseudo code interface IFoo { void bar(); } class FooPlatformA : IFoo { void bar() { /* ... */ } } class FooPlatformB : IFoo { void bar() { /* ... */ } } class FooFactory { IFoo newFoo() { if (detectPlatformA()} { return new FooPlatformA(); } else { return new FooPlatformB(); } } } Main() { FooFactory factory = new FooFactory(); IFoo foo = factory.newFoo(); foo.bar(); } Which is the better option, example A, B, neither, or "it depends"?

    Read the article

  • Problem with video playback on iPad with MPMoviePlayerViewController

    - by Symo
    Hello everybody... I have been fighting some code for about a week, and am hoping that someone else may have experienced this problem and can point me in the right direction. I am using the MPMoviePlayerViewController to play a video on the iPad. The primary problem is that it works FLAWLESSLY on the iPad Simulator, but will not play at all on the iPad. I have tried re-encoding the video to make sure that isn't an issue. The video I'm using is currently a 480x360 video encoded with H.264 Basline 3.0 with AAC/LC audio. The video plays fine on the iPhone, and also does play through Safari on the iPad. The video actually loads, and you can scrub through the video with the scrubber bar and see that it is there. The frames actually display, but just will not play. If you click play, it just immediately stops. Even when I have mp.moviePlayer.shouldAutoplay=YES set, you can see the player attempt to play, but only for a split second (maybe 1 frame?). I have tried just adding view with the following code: in .h ------ MPMoviePlayerViewController *vidViewController; @property (readwrite, retain) MPMoviePlayerViewController *vidViewController; in .m ------ MPMoviePlayerViewController *mp=[[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL URLWithString:videoURL]]; [mp shouldAutorotateToInterfaceOrientation:YES]; mp.moviePlayer.scalingMode=MPMovieScalingModeAspectFit; mp.moviePlayer.shouldAutoplay=YES; mp.moviePlayer.controlStyle=MPMovieControlStyleFullscreen; [videoURL release]; self.vidViewController = mp; [mp release]; [self.view addSubview:vidViewController.view]; float w = self.view.frame.size.width; float h = w * 0.75; self.vidViewController.view.frame = CGRectMake(0, 0, w, h); I have also just tried to do a: [self presentMoviePlayerViewControllerAnimated:self.vidViewController]; which I actually can not get to orient properly...always shows up in Portrait and almost completely off the screen on the bottom, and the app is only intended to run in either of the Landscape views... If anybody needs more info, just let me know. I'm about at my wits end on this. ANY help will be GREATLY appreciated.

    Read the article

  • UIView inside UIView with Shadow?

    - by Rnegi
    Hi, I've been trying to figure out how to draw a shadow for an UIView that was added inside a UIView with addSubview. I searched online and read the docs, but the Apple docs simply draw new shapes as shown below. I want to use the Core Graphics to add a shadow to the UIView, but am unsure how to do this to a UIView directly. CGContextRef myContext = UIGraphicsGetCurrentContext(); //CGContextRef myContext = myCGREF; CGSize myShadowOffset = CGSizeMake (10, 10);// 2 CGContextSetShadow (myContext, myShadowOffset, 0); // 3 CGContextBeginTransparencyLayer (myContext, NULL);// 4 // Your drawing code here// 5 CGContextSetRGBFillColor (myContext, 0, 1, 0, 1); CGContextFillRect (myContext, CGRectMake (a_view.frame.origin.x, a_view.frame.origin.y , wd, ht)); CGContextEndTransparencyLayer (myContext);// 6 I know I should put this in the SuperView drawRect method, but I don't know how to make it so it adds a shadow to the views I add in addSubView. Thanks!
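    For what it's worth, a shadow can also be attached to the subview itself instead of drawn in the superview's drawRect:. This is only a sketch with made-up view names (innerView, containerView), using CALayer's shadow properties from QuartzCore:

    ```objc
    #import <QuartzCore/QuartzCore.h>

    // innerView is the subview added with addSubview:; its layer draws the shadow.
    innerView.layer.shadowColor   = [UIColor blackColor].CGColor;
    innerView.layer.shadowOffset  = CGSizeMake(10.0f, 10.0f);
    innerView.layer.shadowOpacity = 0.8f;
    innerView.layer.shadowRadius  = 3.0f;
    innerView.layer.masksToBounds = NO;   // otherwise the shadow is clipped away
    [containerView addSubview:innerView];
    ```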

    Read the article

  • An offscreen MKMapView behaves differently in 3.2, 4.0

    - by Duane Fields
    In 3.1 I've been using an "offscreen" MKMapView to create map images that I can rotate, crop and so forth before presenting them the user. In 3.2 and 4.0 this technique no longer works quite right. Here's some code that illustrates the problem, followed by my theory. // create map view _mapView = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, MAP_FRAME_SIZE, MAP_FRAME_SIZE)]; _mapView.zoomEnabled = NO; _mapView.scrollEnabled = NO; _mapView.delegate = self; _mapView.mapType = MKMapTypeSatellite; // zoom in to something enough to fill the screen MKCoordinateRegion region; CLLocationCoordinate2D center = {30.267222, -97.763889}; region.center = center; MKCoordinateSpan span = {0.1, 0.1 }; region.span = span; _mapView.region = region; // set scrollview content size to full the imageView _scrollView.contentSize = _imageView.frame.size; // force it to load #ifndef __IPHONE_3_2 // in 3.1 we can render to an offscreen context to force a load UIGraphicsBeginImageContext(_mapView.frame.size); [_mapView.layer renderInContext:UIGraphicsGetCurrentContext()]; UIGraphicsEndImageContext(); #else // in 3.2 and above, the renderInContext trick doesn't work... // this at least causes the map to render, but it's clipped to what appears to be // the viewPort size, plus some padding [self.view addSubview:_mapView]; #endif when the map is done loading, I snap picture of it and stuff it in my scrollview - (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView { NSLog(@"[MapBuilder] mapViewDidFinishLoadingMap"); // render the map to a UIImage UIGraphicsBeginImageContext(mapView.bounds.size); // the first sub layer is just the map, the second is the google layer, this sublayer structure might change of course [[[mapView.layer sublayers] objectAtIndex:0] renderInContext:UIGraphicsGetCurrentContext()]; // we are done with the mapView at this point, we need its ram! _mapView.delegate = nil; [_mapView release]; [_mapView removeFromSuperview]; _mapView = nil; UIImage* mapImage = [UIGraphicsGetImageFromCurrentImageContext() retain]; UIGraphicsEndImageContext(); _imageView.image = mapImage; [mapImage release], mapImage = nil; } The first problem is that in 3.1 rendering to a context would trigger the map to begin loading. This no longer works in 3.2, 4.0. The only thing I have found would trigger the load is to temporarily add the map to the view (i.e. make it visible). The problem being that the map only renders to the visible area of the screen, plus a little padding. The frame/bounds are fine, but it appears to be "helpfully" optimizes the loading to limit the tiles to those visible on the screen or close to it. Any ideas how to force the map to load at full size? Anyone else have this issue?

    Read the article

  • iphone uiscrollview and uiimageview - setting initial zoom

    - by Brodie4598
    I use the following code to load an image into an scroll view. The image always loads at 100% zoom. Is there a way to set it to load to another zoom level, say .37? UIImageView *tempImage = [[UIImageView alloc]initWithImage:[UIImage imageWithData:data]]; self.imageView = tempImage; scrollView.contentSize = CGSizeMake(imageView.frame.size.width , imageView.frame.size.height); scrollView.maximumZoomScale = 1; scrollView.minimumZoomScale = .37; scrollView.clipsToBounds = YES; scrollView.delegate = self; [scrollView addSubview:imageView];

    Read the article

  • UIWebView EXC_BAD_ACCESS crash

    - by HARDWARRIOR
    I'm experiencing crashes of an app that uses UIWebView. Usually it's when page is not fully loaded and UIWebView is sent stopLoading selector. Or when UIWebView fully loaded page. I've got EXC_BAD_ACCESS. Stack looks like this: #0 0x95bb7688 in objc_msgSend #1 0x30a671db in -[UIWebView webView:decidePolicyForNavigationAction:request:frame:decisionListener:] #2 0x3024a10d in __invoking___ #3 0x30249ff8 in -[NSInvocation invoke] #4 0x358ab160 in HandleDelegateSource #5 0x302452c1 in CFRunLoopRunSpecific #6 0x30244628 in CFRunLoopRunInMode #7 0x32044c31 in GSEventRunModal #8 0x32044cf6 in GSEventRun #9 0x309021ee in UIApplicationMain #10 0x0000239c in main at main.m:13 for me most strange thing here is webView:decidePolicyForNavigationAction:request:frame:decisionListener: selector sent to UIWebView, because there is no such selector in UIWebView documentation! Only for Cocoa (not cocoa touch) WebView. I suspect that there is something wrong with UIWebView or its delegate. But I can't set breakpoint to watch them. Please advise how I can get more info in this situation.
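    One frequent cause of exactly this kind of crash (an educated guess, not something the stack trace proves): the web view's delegate is deallocated while a load is still in flight. Apple's documentation says the delegate must be set to nil before the delegate goes away, so the usual cleanup looks roughly like this (pre-ARC style to match the era):

    ```objc
    // In the view controller that owns the UIWebView (ivar: webView).
    - (void)viewWillDisappear:(BOOL)animated {
        [super viewWillDisappear:animated];
        [webView stopLoading];       // cancel any in-flight page load
    }

    - (void)dealloc {
        webView.delegate = nil;      // required before the delegate is released
        [webView release];
        [super dealloc];
    }
    ```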

    Read the article

  • How to draw an UIImageView in -drawRect:?

    - by mystify
    I've been trying for 5 hours now: - (void)drawRect:(CGRect)rect { // flip the wrong coordinate system CGContextTranslateCTM(context, 0.0f, rect.size.height); //shift the origin up CGContextScaleCTM(context, 1.0f, -1.0f); //flip the y-axis CGContextDrawImage(context, myImgView.frame, myImageView.image.CGImage); } The problem: While the image draws correctly, the coordinates specified by the UIImageView frame are completely useless. The image appears placed completely wrong on screen. I guess I must also flip the CGRect of the UIImageView? But how?
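    If the goal is just to draw the image (plus extra drawing) without fighting the coordinate system, one simpler route (a sketch, not the asker's code, using myImageView for the inconsistently named image view above) is UIImage's own drawing method, which already works in UIKit's top-left-origin coordinates:

    ```objc
    - (void)drawRect:(CGRect)rect {
        // Draws the image into the image view's frame using UIKit coordinates,
        // so no CTM flipping is needed. Assumes myImageView's frame is expressed
        // in this view's coordinate space.
        [myImageView.image drawInRect:myImageView.frame];
        // ...additional Quartz drawing can follow here...
    }
    ```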

    Read the article

  • drawRect not being called in my subclass of UIImageView

    - by Ben Collins
    I have subclassed UIImageView and tried to override drawRect so I could draw on top of the image using Quartz 2D. I know this is a dumb newbie question, but I'm not seeing what I did wrong. Here's the interface: #import <UIKit/UIKit.h> @interface UIImageViewCustom : UIImageView { } - (void)drawRect:(CGRect)rect; And the implementation: #import "UIImageViewCustom.h" @implementation UIImageViewCustom - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { } return self; } - (void)drawRect:(CGRect)rect { // do stuff } I set a breakpoint on drawRect and it never hits, leading me to think it never gets called at all. Isn't it supposed to be called when the view first loads? Have I incorrectly overridden it?

    Read the article

  • h264 RTP timestamp

    - by user269090
    Hi guys, I'm confused about the timestamp of an H.264 RTP packet. I know the wall clock rate of video is 90 kHz, which I defined in the SIP SDP. The frame rate of my encoder is not exactly 30 FPS; it is variable, varying from 15 FPS to 30 FPS on the fly. So I cannot use any fixed timestamp increment. Could anyone tell me the timestamps of the following encoded packets? After 0 milliseconds, encoded RTP timestamp = 0 (let the starting timestamp be 0). After 50 milliseconds, encoded RTP timestamp = ? After 40 milliseconds, encoded RTP timestamp = ? After 33 milliseconds, encoded RTP timestamp = ? What is the formula when the encoded frame rate is variable? Thank you in advance.
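    For a variable frame rate the usual rule is that the RTP timestamp tracks capture time, not a fixed per-frame step: increment = 90000 * seconds elapsed since the previous frame. Applied to the intervals above (assuming each interval is the gap to the previous packet):

    ```
    increment = 90000 * seconds_since_previous_frame

    after  0 ms: ts = 0
    after 50 ms: ts = 0    + 90000 * 0.050 = 4500
    after 40 ms: ts = 4500 + 90000 * 0.040 = 8100
    after 33 ms: ts = 8100 + 90000 * 0.033 = 11070
    ```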

    Read the article

  • Problem with Eclipse 3.5 on Ubuntu 9.10

    - by ki0
    Does anyone know why Eclipse closes when I click any button? Whenever I try to do something, such as updating Eclipse, it closes and gives me this log: # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007fc329864f7a, pid=9392, tid=140476827293968 # # JRE version: 6.0_16-b01 # Java VM: Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode linux-amd64 ) # Problematic frame: # C [libpango-1.0.so.0+0x24f7a] pango_layout_new+0x2a # # If you would like to submit a bug report, please visit: # http://java.sun.com/webapps/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # I think it is something related to Java. Does anyone have a solution? Thanks

    Read the article

  • UIScrollView setZoomScale not working?

    - by iFloh
    Hi, I am struggling with my UIScrollView to get it to zoom in on the underlying UIImageView. In my view controller I set - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { return myImageView; } In the viewDidLoad method I try to set the zoomScale as follows (note the UIImageView and Image is set in Interface Builder): - (void)viewDidLoad { [super viewDidLoad]; myScrollView.contentSize = CGSizeMake(myImageView.frame.size.width, myImageView.frame.size.height); myScrollView.contentOffset = CGPointMake(941.0, 990.0); myScrollView.minimumZoomScale = 0.1; myScrollView.maximumZoomScale = 10.0; myScrollView.zoomScale = 0.7; myScrollView.clipsToBounds = YES; myScrollView.delegate = self; NSLog(@"zoomScale: %.1f, minZoolScale: %.3f", myScrollView.zoomScale, myScrollView.minimumZoomScale); } I tried a few variations of this, but the NSLog always shows a zoomScale of 1.0. Any ideas where I screwed this one up?

    Read the article
