Search Results

Search found 6135 results on 246 pages for 'init d'.

Page 207/246

  • Zend Routes conflict

    - by meder
    I have defined 2 custom routes. One for threads/:id/:name and the other for threads/tags/:tagName however the second one conflicts with the first because if I enable both then the first breaks and treats :id literally as an action, not obeying the \d+ requirement ( I also tried using pure regex routes, see bottom ). Action "1" does not exist and was not trapped in __call() I tried re-arranging the order of the routes but if I do that then the threads/tags/:tagName doesnt correctly capture the tagName. I also tried disabling default routes but the routes still don't properly work after that. Here's my route init function: protected function _initRoutes() { $fc = Zend_Controller_Front::getInstance(); $router = $fc->getRouter(); $router->addRoute( 'threads', new Zend_Controller_Router_Route('threads/:id/:name', array( 'controller' => 'threads', 'action' => 'thread', ), array( 'id' => '\d+' ) ) ); $router->addRoute( 'threads', new Zend_Controller_Router_Route('threads/tags/:tagName', array( 'controller' => 'threads', 'action' => 'tags', ), array( 'tagName' => '[a-zA-Z]+' ) ) ); } I also tried using a pure regex route but was unsuccessful, most likely because I did it wrong: $router->addRoute( 'threads', new Zend_Controller_Router_Route_Regex( 'threads/(\d+)/([a-zA-Z]+)', array( 'controller' => 'threads', 'action' => 'thread', ), array( 1 => 'tagName', 2 => 'name' ) ) );
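
    A likely culprit is that both addRoute() calls use the same name, 'threads', so the second registration simply replaces the first. Giving the routes distinct names (the names below are illustrative) and remembering that Zend Framework 1 tries routes in reverse order of registration (last added is matched first) lets both patterns coexist. A minimal sketch, not the asker's actual bootstrap:

      <?php
      class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
      {
          protected function _initRoutes()
          {
              $router = Zend_Controller_Front::getInstance()->getRouter();

              // More generic route first; the tag route added after it is tried first.
              $router->addRoute('thread-view', new Zend_Controller_Router_Route(
                  'threads/:id/:name',
                  array('controller' => 'threads', 'action' => 'thread'),
                  array('id' => '\d+')
              ));

              $router->addRoute('thread-tags', new Zend_Controller_Router_Route(
                  'threads/tags/:tagName',
                  array('controller' => 'threads', 'action' => 'tags'),
                  array('tagName' => '[a-zA-Z]+')
              ));
          }
      }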

    Read the article

  • iOS - Passing variable to view controller

    - by gj15987
    I have a view with a view controller and when I show this view on screen, I want to be able to pass variables to it from the calling class, so that I can set the values of labels etc. First, I just tried creating a property for one of the labels, and calling that from the calling class. For example: SetTeamsViewController *vc = [[SetTeamsViewController alloc] init]; vc.myLabel.text = self.teamCount; [self presentModalViewController:vc animated:YES]; [vc release]; However, this didn't work. So I tried creating a convenience initializer. SetTeamsViewController *vc = [[SetTeamsViewController alloc] initWithTeamCount:self.teamCount]; And then in the SetTeamsViewController I had - (id)initWithTeamCount:(int)teamCount { self = [super initWithNibName:nil bundle:nil]; if (self) { // Custom initialization self.teamCountLabel.text = [NSString stringWithFormat:@"%d",teamCount]; } return self; } However, this didn't work either. It's just loading whatever value I've given the label in the nib file. I've littered the code with NSLog()s and it is passing the correct variable values around, it's just not setting the label. Any help would be greatly appreciated. EDIT: I've just tried setting an instance variable in my designated initializer, and then setting the label in viewDidLoad and that works! Is this the best way to do this? Also, when dismissing this modal view controller, I update the text of a button in the view of the calling ViewController too. However, if I press this button again (to show the modal view again) whilst the other view is animating on screen, the button temporarily has it's original value again (from the nib). Does anyone know why this is?
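
    The approach the asker arrived at in the edit is the standard one: outlets like teamCountLabel are nil until the nib loads, so any value set on them from an initializer is lost. Stash the value in an instance variable and apply it in viewDidLoad. A minimal sketch (names assumed from the question):

      // SetTeamsViewController.m (sketch)
      - (id)initWithTeamCount:(int)count {
          self = [super initWithNibName:nil bundle:nil];
          if (self) {
              teamCount = count;   // an int ivar; the label outlet is still nil here
          }
          return self;
      }

      - (void)viewDidLoad {
          [super viewDidLoad];     // the nib is loaded, so teamCountLabel is connected now
          self.teamCountLabel.text = [NSString stringWithFormat:@"%d", teamCount];
      }

    The stale button title on the presenting controller has the same flavour: update it in viewWillAppear: of the presenting view controller rather than only at the moment the modal is dismissed.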

    Read the article

  • Find Exact difference between two dates

    - by iPhone Fun
    Hi all , I want some changes in the date comparison. In my application I am comparing two dates and getting difference as number of Days, but if there is only one day difference the system shows me 0 as a difference of days. I do use following code NSDateFormatter *date_formater=[[NSDateFormatter alloc]init]; [date_formater setDateFormat:@"MMM dd,YYYY"]; NSString *now=[NSString stringWithFormat:@"%@",[date_formater stringFromDate:[NSDate date]]]; LblTodayDate.text = [NSString stringWithFormat:@"%@",[NSString stringWithFormat:@"%@",now]]; NSDate *dateofevent = [[NSUserDefaults standardUserDefaults] valueForKey:@"CeremonyDate_"]; NSDate *endDate =dateofevent; NSDate *startDate = [NSDate date]; gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; unsigned int unitFlags = NSDayCalendarUnit; NSDateComponents *components = [gregorian components:unitFlags fromDate:startDate toDate:endDate options:0]; int days = [components day]; I found some solutions that If we make the time as 00:00:00 for comparision then it will show me proper answer , I am right or wrong i don't know. Please help me to solve the issue
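
    The hunch at the end is right: [NSDate date] includes the current time of day, so an event stored at midnight that is "tomorrow" can be less than 24 hours away and the day component rounds down to 0. One hedged fix is to strip the time from both dates before diffing, roughly like this (reusing the dateofevent and gregorian variables from the question):

      unsigned int ymd = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit;

      // Rebuild both dates at 00:00:00 of their own day.
      NSDate *startDay = [gregorian dateFromComponents:[gregorian components:ymd fromDate:[NSDate date]]];
      NSDate *endDay   = [gregorian dateFromComponents:[gregorian components:ymd fromDate:dateofevent]];

      NSDateComponents *diff = [gregorian components:NSDayCalendarUnit
                                            fromDate:startDay
                                              toDate:endDay
                                             options:0];
      int days = [diff day];   // now a calendar-day difference, so "tomorrow" gives 1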

    Read the article

  • Set Renderbuffer Width and Height (OpenGL ES)

    - by Josh Elsasser
    I'm currently experiencing an issue with an OpenGL ES renderbuffer where the backing width and height are both set to 15. Is there any way to set them to 320 and 480? My project is built on Apple's EAGLView class and ES1Renderer, but I've moved it from the app delegate to a controller. I also moved the CADisplayLink outside of it (I update my game logic with its timestamp). Any help would be greatly appreciated. I add the glview to the window as follows: CGRect applicationFrame = [[UIScreen mainScreen] applicationFrame]; [window addSubview:gameController.glview]; [window makeKeyAndVisible]; I synthesize the controller and the glview within it. The EAGLView and Renderer are otherwise unmodified. Renderer initialization: // Get the layer CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; eaglLayer.opaque = TRUE; eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; renderer = [[ES1Renderer alloc] init]; Renderer "resize from layer" method: - (BOOL)resizeFromLayer:(CAEAGLLayer *)layer { // Allocate color buffer backing based on the current layer size glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer); [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer]; glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); NSLog(@"Backing Width:%i and Height: %i", backingWidth, backingHeight); if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)); return NO; } return YES; }
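
    A tiny backing size like this usually means renderbufferStorage:fromDrawable: was handed a layer it could not size itself from: either the view is not actually backed by a CAEAGLLayer, or it does not yet have its final frame when the resize runs. Two things worth re-checking after moving the code out of the app delegate (this mirrors Apple's template and is not specific to the asker's project):

      // In the EAGLView subclass: without this override the view is backed by a
      // plain CALayer and the renderbuffer cannot take its size from it.
      + (Class)layerClass {
          return [CAEAGLLayer class];
      }

      // Resize only once the view has been laid out with its on-screen frame
      // (e.g. 320x480), not at init time.
      - (void)layoutSubviews {
          [renderer resizeFromLayer:(CAEAGLLayer *)self.layer];
      }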

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem, I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to dvd weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate a log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this: USE dbname GO CHECKPOINT GO BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD GO DBCC SHRINKFILE('dbname_Log', 2048) GO But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My Question (TL;DR) How can I shrink my transaction log file without disabling the mirror?
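
    For what it's worth, a mirrored database must stay in the FULL recovery model, and full backups alone never truncate the log; only log backups do. Backing up TO DISK='NULL' just writes a file literally named NULL (and any throwaway log backup breaks the restore chain anyway), so the usual routine is to schedule real log backups and then shrink once. A hedged sketch, with the path and logical file name as placeholders:

      -- Take log backups on a schedule (this is what lets the log space be reused).
      BACKUP LOG dbname
          TO DISK = N'D:\Backups\dbname_log.trn'
          WITH INIT, NAME = N'dbname log backup';
      GO

      -- One-off shrink after the log has been backed up a couple of times.
      USE dbname;
      GO
      DBCC SHRINKFILE (N'dbname_Log', 2048);   -- target size in MB
      GO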

    Read the article

  • C++/CLI : Interop window is not properly configured

    - by raytaller
    Hi, I'm trying to load a WPF control in a C++/CLI application, using the HwndSource class. Here is my code: UBOOL MyWindowWrapper::Init(const HWND InParentWindowHandle) { Interop::HwndSourceParameters sourceParams( "WindowName" ); sourceParams.PositionX = 0; sourceParams.PositionY = 0; sourceParams.ParentWindow = (IntPtr)InParentWindowHandle; sourceParams.WindowStyle = (WS_VISIBLE | WS_CHILD); sourceParams.HwndSourceHook = nullptr; InteropWindow = gcnew Interop::HwndSource(sourceParams); Control = gcnew MyWPFUserControl(); InteropWindow->RootVisual = Control; InteropWindow->AddHook( gcnew Interop::HwndSourceHook( this, &MyWindowWrapper::MessageHookFunction ) ); return TRUE; } And I define a hook function so the keyboard events are passed to the window: IntPtr MyWindowWrapper::MessageHookFunction( IntPtr HWnd, int Msg, IntPtr WParam, IntPtr LParam, bool% OutHandled ) { IntPtr Result = (IntPtr)0; OutHandled = false; if( Msg == WM_GETDLGCODE ) { OutHandled = true; // This tells Windows that we'll need keyboard events for this control Result = IntPtr( DLGC_WANTALLKEYS | DLGC_WANTCHARS | DLGC_WANTMESSAGE ); } return Result; } And here are my problems: the window title is empty (so the "WindowName" parameter is not taken into account), and only some keyboard events are transferred: space, control and the arrows are OK, but I can't type any characters into the text boxes. What am I doing wrong? Thanks!

    Read the article

  • Javascript storing properties and functions in variables

    - by richard
    Hello, I'm having trouble with my programming style and I hope to get some feedback here. I recently bought Javascript: The Good Parts and while I find this a big help, I'm still having trouble designing this application. Especially when it comes to writing function and methods. Example: I have a function that let's the user switches games in my app. This function updates game-specific information in the current view. var games = { active: Titanium.App.Properties.getString('active_game'), gameswitcher_positions: { 'Game 1': 0, 'Game 2': 1, 'Game 3': 2, 'Game 4': 3, 'Game 5': 4 }, change: function(game) { if (active_game !== game) { gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage = gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage.replace('_selected', ''); gameswitcher.children[this.gameswitcher_positions[game]].backgroundImage = gameswitcher.children[this.gameswitcher_positions[game]].backgroundImage.replace('.png', '_selected.png'); events.update(game); this.active = game; } }, init: function() { gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage = gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage.replace('.png', '_selected.png'); events.update(this.active); } }; gameswitcher is a container view which contains buttons to switch games. I am not satisfied with this approach but I cannot think of a better one. Should I place the gameswitcher_positions outside of the variable in a seperate variable instead of as a property? And what about the active game? Please give me feedback, what am I doing wrong?

    Read the article

  • Why does grails use hsqldb when I ask for mysql?

    - by John
    I'm following the racetrack example from Jason Rudolph's book at InfoQ, using grails-1.2.1. I got up to the part where I was to switch from hsqldb to mysql. I think I've deleted every reference to hsqldb in the DataSource.groovy file, but I get an exception and the stack trace shows it's still using hsqldb. DataSource.groovy dataSource { boolean pooled = true String driverClassName = "com.mysql.jdbc.Driver" String url = "jdbc:mysql://localhost/dfpc2" String dbCreate = "create" String username = "dfpc2" String password = "dfpc2" dialect = org.hibernate.dialect.MySQL5InnoDBDialect } hibernate { cache.use_second_level_cache=true cache.use_query_cache=true cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider' } // environment specific settings environments { development { } test { } production { } } When I grails run-app it all starts up with no errors. I can navigate to the home page. But when I click on one of the links, I get a stack trace: java.sql.SQLException: Table not found in statement [select this_.id as id0_0_, this_.version as version0_0_, this_.name as name0_0_, this_.variant as variant0_0_ from domainObject this_ limit ?] at org.hsqldb.jdbc.Util.throwError(Unknown Source) at org.hsqldb.jdbc.jdbcPreparedStatement.<init>(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.prepareStatement(Unknown Source) at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy:13) at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy) at java.lang.Thread.run(Thread.java:619) My mysql database shows no tables created. (I don't think groovy's connected to mysql yet.) Things I've checked: mysql-connector-java-5.1.6.jar is in lib directory. I've tried grails clean I tried putting the dataSource info in the development environment (I haven't graduated to test or prod yet), but it seemed to make no difference. The stdout shows I'm using development env. I've googled for solutions, but the only solution I've found is when people don't change the test or production environments.
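
    One thing that stands out in the posted DataSource.groovy: declaring the entries with types ("String url = ...", "boolean pooled = ...") turns them into local variables inside the config closure, so Grails never sees them as configuration properties and quietly falls back to the default in-memory HSQLDB. A sketch of the same block written as plain property assignments:

      // grails-app/conf/DataSource.groovy (sketch: no local-variable types)
      dataSource {
          pooled = true
          driverClassName = "com.mysql.jdbc.Driver"
          url = "jdbc:mysql://localhost/dfpc2"
          dbCreate = "create"          // consider "update" once the schema exists
          username = "dfpc2"
          password = "dfpc2"
          dialect = org.hibernate.dialect.MySQL5InnoDBDialect
      }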

    Read the article

  • Multiple Facebook scripts?

    - by J Set
    This may be kind of a dumb question, but the script for the facebook button on the like button page is different from the script on the javascript sdk page, but similar. Did facebook just forget to update the documentation or do I need both scripts? The like button page gives: <div id="fb-root"></div> <script> (function(d, s, id) { var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/en_US/all.js#xfbml=1"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk'));</script> But on the javascript sdk page: <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : 'YOUR_APP_ID', // App ID channelUrl : '//WWW.YOUR_DOMAIN.COM/channel.html', // Channel File status : true, // check login status cookie : true, // enable cookies to allow the server to access the session xfbml : true // parse XFBML }); // Additional initialization code here }; // Load the SDK Asynchronously (function(d){ var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0]; if (d.getElementById(id)) {return;} js = d.createElement('script'); js.id = id; js.async = true; js.src = "//connect.facebook.net/en_US/all.js"; ref.parentNode.insertBefore(js, ref); }(document)); </script>

    Read the article

  • Pymedia video encoding failed

    - by user1474837
    I am using Python 2.5 with Windows XP. I am trying to make a list of pygame images into a video file using this function. I found the function on the internet and edited it. It worked at first, than it stopped working. This is what it printed out: Making video... Formating 114 Frames... starting loop making encoder Frame 1 process 1 Frame 1 process 2 Frame 1 process 2.5 This is the error: Traceback (most recent call last): File "ScreenCapture.py", line 202, in <module> makeVideoUpdated(record_files, video_file) File "ScreenCapture.py", line 151, in makeVideoUpdated d = enc.encode(da) pymedia.video.vcodec.VCodecError: Failed to encode frame( error code is 0 ) This is my code: def makeVideoUpdated(files, outFile, outCodec='mpeg1video', info1=0.1): fw = open(outFile, 'wb') if (fw == None) : print "Cannot open file " + outFile return if outCodec == 'mpeg1video' : bitrate= 2700000 else: bitrate= 9800000 start = time.time() enc = None frame = 1 print "Formating "+str(len(files))+" Frames..." print "starting loop" for img in files: if enc == None: print "making encoder" params= {'type': 0, 'gop_size': 12, 'frame_rate_base': 125, 'max_b_frames': 90, 'height': img.get_height(), 'width': img.get_width(), 'frame_rate': 90, 'deinterlace': 0, 'bitrate': bitrate, 'id': vcodec.getCodecID(outCodec) } enc = vcodec.Encoder(params) # Create VFrame print "Frame "+str(frame)+" process 1" bmpFrame= vcodec.VFrame(vcodec.formats.PIX_FMT_RGB24, img.get_size(), # Covert image to 24bit RGB (pygame.image.tostring(img, "RGB"), None, None) ) print "Frame "+str(frame)+" process 2" # Convert to YUV, then codec da = bmpFrame.convert(vcodec.formats.PIX_FMT_YUV420P) print "Frame "+str(frame)+" process 2.5" d = enc.encode(da) #THIS IS WHERE IT STOPS print "Frame "+str(frame)+" process 3" fw.write(d.data) print "Frame "+str(frame)+" process 4" frame += 1 print "savng file" fw.close() Could somebody tell me why I have this error and possibly how to fix it? The files argument is a list of pygame images, outFile is a path, outCodec is default, and info1 is not used anymore. UPDATE 1 This is the code I used to make that list of pygame images. from PIL import ImageGrab import time, pygame pygame.init() f = [] #This is the list that contains the images fps = 1 for n in range(1, 100): info = ImageGrab.grab() size = info.size mode = info.mode data = info.tostring() info = pygame.image.fromstring(data, size, mode) f.append(info) time.sleep(fps)

    Read the article

  • UILabel + IRR, KRW and KHR currencies with wrong symbol

    - by serb
    Hi, I'm experiencing issues when converting decimal to currency for Korean Won, Cambodian Riel and Iranian Rial and showing the result to the UILabel text. Conversion itself passes just fine and I can see correct currency symbol at the debugger, even the NSLog prints the symbol well. If I assign this NSString instance to the UILabel text, the currency symbol is shown as a crossed box instead of the correct symbol. There is no other code between, does not matter what font I use. I tried to print ₩ (Korean Won) using the unicode value (0x20A9) or even using UTF8 representation (\xe2\x82\xa9), but all I get is the crossed box on the label. Any other supported currency in iPhone SDK and NSLocale (nearly 170 currencies) works perfectly fine no matter how exotic the currency is. Anyone else experiencing the same problem? Is there a "cure" for this? Thanks EDIT: -(NSString *)decimalToCurrency:(NSDecimalNumber *)value byLocale:(NSLocale *)locale { NSNumberFormatter *fmt = [[NSNumberFormatter alloc] init]; [fmt setLocale: locale]; [fmt setNumberStyle: NSNumberFormatterCurrencyStyle]; NSString *res = [fmt stringFromNumber: value]; [fmt release]; return res; } lbValue.text = [self decimalToCurrency: price byLocale: koreanLocale];

    Read the article

  • Can get members, but not count of NSMutableArray

    - by Curyous
    I'm filling an NSMutableArray from a CoreData call. I can get the first object, but when I try to get the count, the app crashes with Program received signal: “EXC_BAD_ACCESS”. How can I get the count? Here's the relevant code - I've put a comment on the line where it crashes. - (void)viewDidLoad { [super viewDidLoad]; managedObjectContext = [[MySingleton sharedInstance] managedObjectContext]; if (managedObjectContext != nil) { charactersRequest = [[NSFetchRequest alloc] init]; charactersEntity = [NSEntityDescription entityForName:@"Character" inManagedObjectContext:managedObjectContext]; [charactersEntity retain]; [charactersRequest setEntity:charactersEntity]; [charactersRequest retain]; NSError *error; characters = [[managedObjectContext executeFetchRequest:charactersRequest error:&error] mutableCopy]; if (characters == nil) { NSLog(@"Did not get results for characters: %@", error.localizedDescription); } else { [characters retain]; NSLog(@"Found some character(s)."); Character* character = (Character *)[characters objectAtIndex:0]; NSLog(@"Name of first one: %@", character.name); NSLog(@"Found %@ character(s).", characters.count); // Crashes on this line with - Program received signal: “EXC_BAD_ACCESS”. } } } And previous declarations from the header file: @interface CrowdViewController : UITableViewController { NSManagedObjectContext *managedObjectContext; NSFetchRequest *charactersRequest; NSEntityDescription *charactersEntity; NSMutableArray *characters; } I'm a bit perplexed and would really appreciate finding out what is going on.
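
    The crash is almost certainly the last NSLog itself: %@ tells NSLog to treat the argument as an object and send it -description, but characters.count is a plain NSUInteger, so the bogus "pointer" blows up with EXC_BAD_ACCESS. Log it with an integer specifier, or box it:

      // Scalar, so use an integer format specifier...
      NSLog(@"Found %lu character(s).", (unsigned long)[characters count]);

      // ...or box it if %@ is really wanted.
      NSLog(@"Found %@ character(s).", [NSNumber numberWithUnsignedInteger:[characters count]]);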

    Read the article

  • How do casts actually work at the CLR level?

    - by devoured elysium
    When doing an upcast or downcast, what does really happen behind the scenes? I had the idea that when doing something as: string myString = "abc"; object myObject = myString; string myStringBack = (string)myObject; the cast in the last line would have as only purpose tell the compiler we are safe we are not doing anything wrong. So, I had the idea that actually no casting code would be embedded in the code itself. It seems I was wrong: .maxstack 1 .locals init ( [0] string myString, [1] object myObject, [2] string myStringBack) L_0000: nop L_0001: ldstr "abc" L_0006: stloc.0 L_0007: ldloc.0 L_0008: stloc.1 L_0009: ldloc.1 L_000a: castclass string L_000f: stloc.2 L_0010: ret Why does the CLR need something like castclass string? There are two possible implementations for a downcast: You require a castclass something. When you get to the line of code that does an castclass, the CLR tries to make the cast. But then, what would happen had I ommited the castclass string line and tried to run the code? You don't require a castclass. As all reference types have a similar internal structure, if you try to use a string on an Form instance, it will throw an exception of wrong usage (because it detects a Form is not a string or any of its subtypes). Also, is the following statamente from C# 4.0 in a Nutshell correct? Upcasting and downcasting between compatible reference types performs reference conversions: a new reference is created that points to the same object. Does it really create a new reference? I thought it'd be the same reference, only stored in a different type of variable. Thanks
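
    A small illustration of what the emitted castclass buys at runtime: the compiler only checks that a conversion from object to string is possible in principle, and castclass is what turns an incompatible downcast into a clean InvalidCastException instead of letting a mistyped reference escape (an "as" cast compiles to isinst and yields null instead). Sketch:

      using System;

      class CastDemo
      {
          static void Main()
          {
              object boxed = new object();          // not actually a string

              try
              {
                  string s = (string)boxed;         // castclass: throws at runtime
              }
              catch (InvalidCastException)
              {
                  Console.WriteLine("castclass rejected the downcast");
              }

              string maybe = boxed as string;       // isinst: no throw, just null
              Console.WriteLine(maybe == null);     // True
          }
      }

    As for the book's wording, no new object is created by a reference conversion; the same reference value is copied into a differently typed variable, and object identity is unchanged.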

    Read the article

  • Encrypt string with public key only

    - by vlahovic
    i'm currently working on a android project where i need to encrypt a string using 128 bit AES, padding PKCS7 and CBC. I don't want to use any salt for this. I've tried loads of different variations including PBEKey but i can't come up with working code. This is what i currently have: String plainText = "24124124123"; String pwd = "BobsPublicPassword"; byte[] key = pwd.getBytes(); key = cutArray(key, 16); byte[] input = plainText.getBytes(); byte[] output = null; SecretKeySpec keySpec = null; keySpec = new SecretKeySpec(key, "AES"); Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding"); cipher.init(Cipher.ENCRYPT_MODE, keySpec); output = cipher.doFinal(input); private static byte[] cutArray(byte[] arr, int length){ byte[] resultArr = new byte[length]; for(int i = 0; i < length; i++ ){ resultArr[i] = arr[i]; } return resultArr; } Any help appreciated //Vlahovic
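
    Two things commonly bite with AES/CBC: the cipher needs an IV (if init is allowed to pick a random one, it has to be read back with cipher.getIV() and stored next to the ciphertext, or decryption can never match), and truncating raw password bytes is a fragile way to get a 128-bit key. A hedged, salt-free sketch that hashes the password down to 16 bytes and uses an explicit IV (the fixed IV and missing salt weaken the scheme; they are only here because the question rules salt out):

      import java.security.MessageDigest;
      import java.util.Arrays;
      import javax.crypto.Cipher;
      import javax.crypto.spec.IvParameterSpec;
      import javax.crypto.spec.SecretKeySpec;

      public class AesSketch {
          public static void main(String[] args) throws Exception {
              String plainText = "24124124123";
              String pwd = "BobsPublicPassword";

              // Derive exactly 128 bits from the password (no salt, as requested).
              byte[] key = Arrays.copyOf(
                      MessageDigest.getInstance("SHA-256").digest(pwd.getBytes("UTF-8")), 16);

              byte[] iv = new byte[16];   // fixed all-zero IV, for illustration only

              Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding"); // PKCS5Padding on a stock JDK
              cipher.init(Cipher.ENCRYPT_MODE,
                          new SecretKeySpec(key, "AES"),
                          new IvParameterSpec(iv));

              byte[] output = cipher.doFinal(plainText.getBytes("UTF-8"));
              System.out.println(output.length + " bytes of ciphertext");
          }
      }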

    Read the article

  • How can I remove the current view when I have added more than one subview?

    - by srikanth rongali
    How can I remove the present view when I touched the close button. I did [self.view removeFromSuperView] when I added only one subView. It worked. But, I have now added another subView to the previous subView. I need to to get the parent view . How can I do it. My code of adding the subView is here. When I did the following only one view is removed. //FirstViewController: UIViewController -(void)libraryFunction:(id)sender { LibraryController *libraryController = [[LibraryController alloc]init]; [self.view addSubview:libraryController.view]; } //LibraryController: UIViewController -(void)viewDidLoad{ RootViewController *rootViewController = [[RootViewController alloc] initWithStyle:UITableViewStylePlain]; UINavigationController *aNavigationController = [[UINavigationController alloc] initWithRootViewController:rootViewController]; self.navController = aNavigationController; [aNavigationController release]; [rootViewController release]; [self.view addSubview:navController.view]; } //RootViewController: UITableViewController - (void)viewDidLoad { [super viewDidLoad]; self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Close" style:UIBarButtonItemStyleBordered target:self action:@selector(close:)]; } -(void)close:(id)sender { [self.view removeFromSuperview]; } Thank You.

    Read the article

  • Python: How to close a UDP socket while it is waiting for data in recv?

    - by alexroat
    Hello, let's consider this code in python: import socket import threading import sys import select class UDPServer: def __init__(self): self.s=None self.t=None def start(self,port=8888): if not self.s: self.s=socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.s.bind(("",port)) self.t=threading.Thread(target=self.run) self.t.start() def stop(self): if self.s: self.s.close() self.t.join() self.t=None def run(self): while True: try: #receive data data,addr=self.s.recvfrom(1024) self.onPacket(addr,data) except: break self.s=None def onPacket(self,addr,data): print addr,data us=UDPServer() while True: sys.stdout.write("UDP server> ") cmd=sys.stdin.readline() if cmd=="start\n": print "starting server..." us.start(8888) print "done" elif cmd=="stop\n": print "stopping server..." us.stop() print "done" elif cmd=="quit\n": print "Quitting ..." us.stop() break; print "bye bye" It runs an interactive shell with which I can start and stop an UDP server. The server is implemented through a class which launches a thread in which there's a infinite loop of recv/*onPacket* callback inside a try/except block which should detect the error and the exits from the loop. What I expect is that when I type "stop" on the shell the socket is closed and an exception is raised by the recvfrom function because of the invalidation of the file descriptor. Instead, it seems that recvfrom still to block the thread waiting for data even after the close call. Why this strange behavior ? I've always used this patter to implements an UDP server in C++ and JAVA and it always worked. I've tried also with a "select" passing a list with the socket to the xread argument, in order to get an event of file descriptor disruption from select instead that from recvfrom, but select seems to be "insensible" to the close too. I need to have a unique code which maintain the same behavior on Linux and Windows with python 2.5 - 2.6. Thanks.
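
    A portable way out (same behaviour on Linux and Windows with Python 2.5/2.6) is to stop relying on close() interrupting recvfrom at all: give the socket a short timeout and re-check a "keep running" flag on every pass, closing the socket only after the thread has exited. (Sending a one-byte datagram to the socket's own port before closing is another common wake-up trick.) A sketch of the timeout approach:

      import socket
      import threading

      class UDPServer:
          def __init__(self):
              self.s = None
              self.t = None
              self.running = False

          def start(self, port=8888):
              if not self.s:
                  self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                  self.s.bind(("", port))
                  self.s.settimeout(0.5)      # recvfrom now returns at least twice a second
                  self.running = True
                  self.t = threading.Thread(target=self.run)
                  self.t.start()

          def stop(self):
              if self.s:
                  self.running = False        # ask the loop to stop...
                  self.t.join()               # ...wait for it, then close the socket
                  self.s.close()
                  self.s = None
                  self.t = None

          def run(self):
              while self.running:
                  try:
                      data, addr = self.s.recvfrom(1024)
                  except socket.timeout:
                      continue                # nothing arrived; re-check the flag
                  except socket.error:
                      break                   # socket closed underneath us
                  self.onPacket(addr, data)

          def onPacket(self, addr, data):
              print addr, data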

    Read the article

  • iPhone code for UIImagePickerControllerSourceTypeCamera

    - by aman-gupta
    - (IBAction)pickAndDecode:(id) sender { UIImagePickerControllerSourceType sourceType; int i = [sender tag]; switch (i) { case 0: sourceType = UIImagePickerControllerSourceTypeCamera; break; case 1: sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum; break; case 2: sourceType = UIImagePickerControllerSourceTypePhotoLibrary; break; default: sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum; } [self pickAndDecodeFromSource:sourceType]; } - (void) updateToolbar { self.cameraBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]; self.savedPhotosBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum]; self.libraryBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypePhotoLibrary]; self.archiveBarItem.enabled = true; self.actionBarItem.enabled = (self.result != nil) && ([self.result actions] != nil) && ([self.result actions].count > 0); } - (void)pickAndDecodeFromSource:(UIImagePickerControllerSourceType) sourceType { [self reset]; // Create the Image Picker if ([UIImagePickerController isSourceTypeAvailable:sourceType]) { UIImagePickerController* picker = [[UIImagePickerController alloc] init]; picker.sourceType = sourceType; picker.delegate = self; picker.allowsImageEditing = YES; // [[NSUserDefaults standardUserDefaults] boolForKey:@"allowEditing"]; // Picker is displayed asynchronously. [self presentModalViewController:picker animated:YES]; } else { NSLog(@"Attempted to pick an image with illegal source type '%d'", sourceType); } } Where I Put this line in my above codes; [picker setShowsCameraControls:FALSE]; please help me so that i can change the real view of iPhone camera according to my view which i have designed
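
    setShowsCameraControls: goes on the picker instance itself, right after it is created in pickAndDecodeFromSource:, and it is only meaningful when the source type is the camera. A sketch of that spot (the overlay view is a hypothetical stand-in for the custom camera UI):

      UIImagePickerController *picker = [[UIImagePickerController alloc] init];
      picker.sourceType = sourceType;
      picker.delegate = self;

      if (sourceType == UIImagePickerControllerSourceTypeCamera) {
          picker.showsCameraControls = NO;                // hide the default shutter bar
          picker.cameraOverlayView = self.myOverlayView;  // hypothetical custom view on top of the preview
      }

      [self presentModalViewController:picker animated:YES];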

    Read the article

  • iPhone: Helpful Classes or extended Subclasses which should have been in the SDK

    - by disp
    This is more a community sharing post than a real question. In my iPhone OS projects I'm always importing a helper class with helpful methods which I can use for about every project. So I thought it might be a good idea, if everyone shares some of their favorite methods, which should have been in everyones toolcase. I'll start with an extension of the NSString class, so I can make strings with dates on the fly providing format and locale. Maybe someone can find some need in this. @implementation NSString (DateHelper) +(NSString *) stringWithDate:(NSDate*)date withFormat:(NSString *)format withLocaleIdent:(NSString*)localeString{ NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; //For example @"de-DE", or @"en-US" NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:localeString]; [dateFormatter setLocale:locale]; // For example @"HH:mm" [dateFormatter setDateFormat:format]; NSString *string = [dateFormatter stringFromDate:date]; [dateFormatter release]; [locale release]; return string; } @end I'd love to see some of your tools.

    Read the article

  • Mail function won't send email (error)

    - by Peter
    I think I've tried to fix this issue for 3 days now and can't seem to find the problem. I use XAMPP and this code: <?php $to = "[email protected]"; $subject = "Test mail"; $message = "Hello! This is a simple email message."; $from = "[email protected]"; $headers = "From: $from"; $res= mail($to,$subject,$message,$headers); echo " $res Mail Sent."; ?> When I open that page I get an error that says: Warning: mail() [function.mail]: Failed to connect to mailserver at "localhost" port 25, verify your "SMTP" and "smtp_port" setting in php.ini or use ini_set( My php.ini file in XAMPP is as follows: [mail function] ; For Win32 only. ; http://php.net/smtp SMTP = smpt.gmail.com ; http://php.net/smtp-port smtp_port = 25 That is all my code.
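
    Two separate problems are tangled here: php.ini points at "smpt.gmail.com" (a typo), and even spelled correctly, Gmail's SMTP servers require authentication and TLS on port 465/587, which PHP's bare mail() cannot do. For local XAMPP testing the simpler route is to point mail() at a mail server running on the machine (the Mercury server bundled with XAMPP, for instance); real Gmail delivery needs an SMTP library that supports authentication. A hedged php.ini sketch for the local-server route:

      [mail function]
      ; For Win32 only: assumes a local mail server (e.g. Mercury) is running.
      ; Gmail will not accept unauthenticated connections from mail() like this.
      SMTP = localhost
      smtp_port = 25
      sendmail_from = postmaster@localhost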

    Read the article

  • How to design app to be modular / support plugins

    - by Lee
    I'm currently in the process of refactoring my webplayer so that we'll be more easily able to run it on our other internet radio stations. Much of the setup between these players will be very similar, however, some will need to have different UI plugins / other plugins. Currently in the webplayer I do something like this in it's init(): _this.ui = new UI(); _this.ui.playlist = new Playlist(); _this.ui.channelDropdown = new ChannelDropdown(); _this.ui.timecode = ne Timecode(); etc etc This works fine but that blocks me into requiring those objects at run time. What I'd like to do is be able to add those based on the stations needs. Basically my question is, do I need to add some kind of "addPlugin()" functionality here? And if I do that, do I need to constantly check from my WebPlayer object if that plugin exists before it attempts to use it? Like... if (_hasPlugin('playlist')) this.plugins.playlist.add(track); I apologize if some of this might not be clear... really trying to get my head wrapped around all of this. I feel I'm closer but I'm still stuck. Any advice on how I should proceed with this would be greatly appreciated. Thanks in advance, Lee

    Read the article

  • Strange behaviour of NSScanner on simple whitespace removal

    - by Michael Waterfall
    I'm trying to replace all multiple whitespace in some text with a single space. This should be a very simple task, however for some reason it's returning a different result than expected. I've read the docs on the NSScanner and it seems like it's not working properly! NSScanner *scanner = [[NSScanner alloc] initWithString:@"This is a test of NSScanner !"]; NSMutableString *result = [[NSMutableString alloc] init]; NSString *temp; NSCharacterSet *whitespace = [NSCharacterSet whitespaceCharacterSet]; while (![scanner isAtEnd]) { // Scan upto and stop before any whitespace [scanner scanUpToCharactersFromSet:whitespace intoString:&temp]; // Add all non whotespace characters to string [result appendString:temp]; // Scan past all whitespace and replace with a single space if ([scanner scanCharactersFromSet:whitespace intoString:NULL]) { [result appendString:@" "]; } } But for some reason the result is @"ThisisatestofNSScanner!" instead of @"This is a test of NSScanner !". If you read through the comments and what each line should achieve it seems simple enough!? scanUpToCharactersFromSet should stop the scanner just as it encounters whitespace. scanCharactersFromSet should then progress the scanner past the whitespace up to the non-whitespace characters. And then the loop continues to the end. What am I missing or not understanding?
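
    Nothing is wrong with the loop itself; NSScanner skips whitespace and newlines by default (its charactersToBeSkipped set), so the explicit whitespace scans never see anything to match and no spaces get appended. Telling the scanner not to skip anything makes the code behave as written:

      NSScanner *scanner = [[NSScanner alloc] initWithString:@"This is   a test  of NSScanner    !"];
      [scanner setCharactersToBeSkipped:nil];   // default is the whitespace-and-newline set,
                                                // which silently ate the spaces before each scan

    With that one line added, the loop above produces "This is a test of NSScanner !" with single spaces, as intended.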

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine: """Lightweight implementation of cookie-based sessions for Google App Engine. Classes: Session """ import os import random import Cookie from google.appengine.api import memcache _COOKIE_NAME = 'app-sid' _COOKIE_PATH = '/' _SESSION_EXPIRE_TIME = 180 * 60 class Session(object): """Cookie-based session implementation using Memcached.""" def __init__(self): self.sid = None self.key = None self.session = None cookie_str = os.environ.get('HTTP_COOKIE', '') self.cookie = Cookie.SimpleCookie() self.cookie.load(cookie_str) if self.cookie.get(_COOKIE_NAME): self.sid = self.cookie[_COOKIE_NAME].value self.key = 'session-' + self.sid self.session = memcache.get(self.key) if self.session: self._update_memcache() else: self.sid = str(random.random())[5:] + str(random.random())[5:] self.key = 'session-' + self.sid self.session = dict() memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME) self.cookie[_COOKIE_NAME] = self.sid self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH print self.cookie def __len__(self): return len(self.session) def __getitem__(self, key): if key in self.session: return self.session[key] raise KeyError(str(key)) def __setitem__(self, key, value): self.session[key] = value self._update_memcache() def __delitem__(self, key): if key in self.session: del self.session[key] self._update_memcache() return None raise KeyError(str(key)) def __contains__(self, item): try: i = self.__getitem__(item) except KeyError: return False return True def _update_memcache(self): memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME) I would like some advices on how to improve the code for better security. Note: In the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online though I would like to learn more about this subject so please don't answer the question with "use that" or "use the other" library.
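
    On the security side, the most obvious hardening target is the session id: concatenated random.random() digits are predictable, while os.urandom gives unguessable ids for almost no extra code. (Marking the cookie HttpOnly and, under HTTPS, Secure is the natural follow-up; the Cookie module accepts 'httponly' from Python 2.6 on.) A sketch of the id change, assuming the rest of the class stays as is:

      import binascii
      import os

      def _new_sid():
          # 128 bits from the OS CSPRNG, hex-encoded: not guessable the way
          # concatenated random.random() digits are.
          return binascii.hexlify(os.urandom(16))

      # inside Session.__init__, replacing the random.random() line:
      #     self.sid = _new_sid()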

    Read the article

  • Are there any guarantees in the JLS about the execution order of static initialization blocks?

    - by Roman
    I wonder if it's reliable to use a construction like: private static final Map<String, String> engMessages; private static final Map<String, String> rusMessages; static { engMessages = new HashMap<String, String> () {{ put ("msgname", "value"); }}; rusMessages = new HashMap<String, String> () {{ put ("msgname", "????????"); }}; } private static Map<String, String> msgSource; static { msgSource = engMessages; } public static String msg (String msgName) { return msgSource.get (msgName); } Is there a possibility that I'll get NullPointerException because msgSource initialization block will be executed before the block which initializes engMessages? (about why don't I do msgSource initialization at the end of upper init. block: just the matter of taste; I'll do so if the described construction is unreliable)

    Read the article

  • Zend Framework Form Element Validators - validate a field even if not required

    - by Jeremy Hicks
    Is there a way to get a validator to fire even if the form element isn't required? I have a form where I want to validate the contents of a texbox (make sure not empty) if the value of another form element, which is a couple of radio buttons, has a specific value selected. Right now I'm doing this by overriding the isValid() function of my form class and it works great. However, I'd like to move this to either its on validator or use the Callback validator. Here's what I have so far, but it never seems to get called unless I change the field to setRequired(true) which I don't want to do at all times, only if the value of the other form element is set to a specific value. // In my form class's init function $budget = new Zend_Form_Element_Radio('budget'); $budget->setLabel('Budget') ->setRequired(true) ->setMultiOptions($options); $budgetAmount = new Zend_Form_Element_Text('budget_amount'); $budgetAmount->setLabel('Budget Amount') ->setRequired(false) ->addFilter('StringTrim') ->addValidator(new App_Validate_BudgetAmount()); //Here is my custom validator (incomplete) but just testing to see if it even gets called. class App_Validate_BudgetAmount extends Zend_Validate_Abstract { const STRING_EMPTY = 'stringEmpty'; protected $_messageTemplates = array( self::STRING_EMPTY => 'please provide a budget amount' ); public function isValid($value) { echo 'validating...'; var_dump($value); return true; } }
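
    Zend_Form skips an element's validator chain when the element is optional and the submitted value is empty, which is exactly why App_Validate_BudgetAmount never runs. Setting allowEmpty to false keeps the element optional but forces the validators to run even on an empty value; a sketch of the changed element setup:

      $budgetAmount = new Zend_Form_Element_Text('budget_amount');
      $budgetAmount->setLabel('Budget Amount')
                   ->setRequired(false)       // still optional at the form level
                   ->setAllowEmpty(false)     // but run the validators even when empty
                   ->addFilter('StringTrim')
                   ->addValidator(new App_Validate_BudgetAmount());

    Inside the validator, isValid() can also be declared as isValid($value, $context = null); Zend_Form passes the other submitted values in $context, which is the usual hook for "only require this when the budget radio is set to X".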

    Read the article
