Search Results

Search found 9611 results on 385 pages for 'low power'.


  • Altruistic network connection bandwidth estimation

    - by datenwolf
    Assume two peers, Alice and Bob, connected over an IP network. Alice and Bob are exchanging packets of lossy compressed data which are generated and consumed in real time (think of a VoIP or video chat application). The service is designed to cope with as little bandwidth as is available, but relies on low latencies. Alice and Bob would mark their connection with an appropriate QoS profile. Alice and Bob want to use variable bitrate compression and would like to consume all of the leftover bandwidth available on the connection between them, but would voluntarily reduce the consumed bitrate depending on the state of the network. However, they'd like to retain a stable link, i.e. avoid interruptions in their decoded data stream caused by congestion and the delay until the bandwidth gets adjusted. It is, however, perfectly acceptable for them to lose a few packets. TL;DR: Alice and Bob want to implement a VoIP protocol from scratch, and are curious about bandwidth and congestion control. What papers and resources do you suggest for Alice and Bob to read? Mainly in the area of bandwidth estimation and congestion control.
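    A note on the shape of the usual solution (an editor's sketch, not part of the original post): the classic starting point in this literature is additive-increase/multiplicative-decrease (AIMD) driven by receiver loss reports, which TFRC (RFC 5348) refines into an equation-based rate. A minimal controller, assuming loss feedback arrives once per interval and with placeholder constants:

        /* Minimal AIMD-style bitrate controller sketch; an editor's
         * illustration, not from the original post. Assumes the application
         * can measure a loss fraction per feedback interval (e.g. from
         * RTCP-like receiver reports). Constants are placeholders. */
        typedef struct {
            double bitrate;      /* current target bitrate, bits/s        */
            double min_bitrate;  /* floor the codec can sustain           */
            double max_bitrate;  /* cap, e.g. from the QoS profile        */
        } rate_ctl;

        /* Call once per feedback interval with the observed loss fraction. */
        static void rate_update(rate_ctl *rc, double loss_fraction)
        {
            if (loss_fraction > 0.02) {
                rc->bitrate *= 0.85;      /* congestion: back off multiplicatively */
            } else {
                rc->bitrate += 5000.0;    /* no congestion: probe up additively    */
            }
            if (rc->bitrate < rc->min_bitrate) rc->bitrate = rc->min_bitrate;
            if (rc->bitrate > rc->max_bitrate) rc->bitrate = rc->max_bitrate;
        }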

    Read the article

  • How to create a CGBitmapContext which works for Retina display without wasting space on regular displays?

    - by ????
    Is it true that if it is in UIKit, including drawRect, the HD aspect of the Retina display is automatically handled? So does that mean that in drawRect, the current graphics context for a 1024 x 768 view is actually a 2048 x 1536 pixel bitmap context? (Is there a way to print this size out to verify it?) We actually enjoy the luxury of 1 point = 4 pixels being automatically handled for us. However, if we use CGBitmapContextCreate, then those will really be pixels, not points? (At least if we provide a data buffer for that bitmap, the size is not for the higher resolution but for the standard resolution, and even if we pass NULL as the buffer so that CGBitmapContextCreate handles the buffer for us, the size is probably the same as if we pass in a data buffer: standard resolution, not Retina resolution.) We can always create 2048 x 1536 for iPad 1 and iPad 2 as well as the new iPad, but it will waste memory, processor and GPU power, as it is only needed for the new iPad. So do we have to use an if () { } else { } to create such a bitmap context, and how do we actually do so? And all our CGContextMoveToPoint code has to be adjusted for the Retina display to use x * 2 and y * 2, vs. non-Retina displays just using x, y as well? That can be quite messy for the code. (Or maybe we can define a local variable scaleFactor and set it to 1 for standard resolution and 2 if it is Retina, so our x and y will always be x * scaleFactor, y * scaleFactor instead of just x and y.) It seems that UIGraphicsBeginImageContextWithOptions can create one for Retina automatically if a scale of 0.0 is passed in, but I don't think it can be used if I need to create the context and keep it (using an ivar or property of a UIViewController to hold it). If I don't release it using UIGraphicsEndImageContext, then it stays on the graphics context stack, so it seems like I have to use CGBitmapContextCreate instead. (Or do we just let it stay at the bottom of the stack and not worry about it?)
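    One pattern worth noting here (an editor's sketch, not the poster's code): CGBitmapContextCreate is a plain C API that deals in pixels, so the usual approach is to multiply the pixel dimensions by the screen scale once at creation time (obtained e.g. via [UIScreen mainScreen].scale in Objective-C) and then apply CGContextScaleCTM, so that every subsequent CGContextMoveToPoint call stays in points and no x * 2 / y * 2 spreads through the drawing code:

        /* Sketch: create a point-addressable bitmap context at Retina
         * resolution. `scale` is assumed to come from the screen (1.0 or
         * 2.0); widthPts/heightPts are in points. Editor's illustration. */
        #include <CoreGraphics/CoreGraphics.h>

        CGContextRef CreateScaledBitmapContext(size_t widthPts, size_t heightPts,
                                               CGFloat scale)
        {
            size_t widthPx  = (size_t)(widthPts  * scale);
            size_t heightPx = (size_t)(heightPts * scale);
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(
                NULL,                /* let CG manage the buffer        */
                widthPx, heightPx,
                8,                   /* bits per component              */
                widthPx * 4,         /* bytes per row, RGBA8888         */
                space,
                kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);
            if (ctx != NULL) {
                /* One scale applied at creation; drawing calls such as
                 * CGContextMoveToPoint now take point coordinates. */
                CGContextScaleCTM(ctx, scale, scale);
            }
            return ctx;
        }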

    Read the article

  • Overriding Node Paths

    - by fighella
    Can a PAGE override the priority of a NODE? I want to use the power of "Pages" to override my links from the nodes... For example: a node's link is /content/202/hello-world. I want to use my Panels to use the URL to create a "PAGE"... The panels can use the arguments from the URL to create a pretty cool page around the content of a node... So the PAGE path I've made is /content/%argument1/%title... I need the links created to the node, from the node itself, to go to this panel page and not to the content created by the node on its own... so I make the path alias do the same as the PAGE settings... /content/nid/title... A hand-typed link to this PAGE works fine when I don't have this as the path alias... It does exactly what I need... but as soon as I make this the path alias, it's like it goes to the single NODE before it goes to the PAGE... Anyone have any clues? Is there an order in which Drupal looks for the correct URL? I'm sure I have done this before. JD

    Read the article

  • android & libgdx - disable blurry image rendering

    - by android developer
    I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering: for some reason, all images (textures) on the Android device look a little blurred using libgdx. This also includes text (fonts). However, for normal images, even though I show the entire image, I expect it to look as sharp as I see it on a computer, especially since I have such a good screen on the device (it's a Galaxy Nexus). I've tried to turn anti-aliasing off using the following code: final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration(); androidApplicationConfiguration.numSamples=0; //tried the value of 1 too. ... I've also tried to set the scaling method to various methods, but with no luck. Example: texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest); As a test, I've found a sharp image that is exactly the same as the seen resolution on the device (720x1184 for the Galaxy Nexus, because of the buttons bar), and I've put it as the background of the libgdx app. Of course, I had to add extra blank space in order for the texture to be loaded, so the final size of the image (which includes content and empty space) is still a power of 2 for both width and height (1024x2048 in my case). On the desktop app, it looks OK. On the device, it looks blurred. A weird thing I've noticed is that when I change the device's orientation (horizontal <-> vertical), for the very short time before the rotation animation starts, I see both the image and the text very sharp. Can anyone please help me?

    Read the article

  • Handling XMLHttpRequest to call external application

    - by Ian
    I need a simple way to let a web client use XMLHttpRequest to access applications on an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls. The situation: The web client using Ajax (ExtJS specifically) needs to send to and receive from an existing embedded application asynchronously. This isn't just to have a thick client/thin server; the client needs to run background checks on the application status. The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database. In between the client and app is a lighttpd web server running something that somehow handles the translation. This something is the problem. What I think I want: Lighttpd can use FastCGI to route all XMLHttpRequests to an external process. This process will understand HTML/XML, and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive XMLHttpRequest, don't respond until the next notification is available). C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need a more low-level understanding. How do I do this? Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing and can just deal with the request/response contents? Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces? Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch? Thanks for any help! I am really having trouble learning about the server side of these technologies.
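    For a sense of scale (an editor's sketch, not from the post): with the open-source libfcgi library, the C side of a FastCGI responder behind lighttpd stays quite small. A minimal loop, assuming lighttpd's mod_fastcgi forwards the XMLHttpRequest URLs to this process:

        /* Minimal FastCGI responder sketch using libfcgi (fcgiapp.h).
         * Editor's illustration, assuming lighttpd forwards a URL prefix
         * to this process over the socket it inherits on fd 0. */
        #include <fcgiapp.h>

        int main(void)
        {
            FCGX_Request req;

            FCGX_Init();
            FCGX_InitRequest(&req, 0, 0);   /* fd 0: socket from lighttpd */

            while (FCGX_Accept_r(&req) == 0) {
                const char *uri = FCGX_GetParam("REQUEST_URI", req.envp);

                /* Here: parse the request, talk to the application socket,
                 * or block until a notification arrives (long-polling). */
                FCGX_FPrintF(req.out,
                             "Content-Type: text/xml\r\n\r\n"
                             "<status uri=\"%s\">ok</status>",
                             uri ? uri : "");
                FCGX_Finish_r(&req);
            }
            return 0;
        }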

    Read the article

  • Drawing only part of a texture

    - by Ben Reeves
    Continuing on from my previous question: I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture in 512*512 the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine: int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } Which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
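    For what it's worth (an editor's sketch, not the poster's code): the described distortion is consistent with ignoring row stride. Assuming rows are 480 pixels wide, as the crop rect {0, 0, 480, 320} suggests, each source row is 480 * 2 = 960 bytes, but each row of the 512-wide texture occupies 512 * 2 = 1024 bytes, so the copy must go row by row with the destination advanced by the texture pitch:

        /* Sketch: copy a 480x320 RGB565 frame into the top-left corner of a
         * 512x512 texture buffer, honouring the destination row pitch.
         * Assumes 480-pixel-wide rows, as the crop rect implies; an
         * editor's illustration, not the original code. */
        #include <stdint.h>
        #include <string.h>

        #define SRC_W 480
        #define SRC_H 320
        #define TEX_W 512

        static void copy_frame_to_texture(const uint16_t *src, uint16_t *dst)
        {
            for (int y = 0; y < SRC_H; ++y) {
                /* 480 pixels * 2 bytes per RGB565 pixel = 960 bytes per row */
                memcpy(dst + y * TEX_W, src + y * SRC_W,
                       SRC_W * sizeof(uint16_t));
            }
        }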

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
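    One cheap alternative sometimes used for exactly this purpose (an editor's sketch, not from the post): plain frame differencing on a heavily downsampled grayscale frame. A sum of absolute differences yields a single global-motion magnitude for only a few thousand subtractions per frame; the threshold is a placeholder to tune empirically:

        /* Sketch: global motion magnitude via sum of absolute differences on
         * a downsampled grayscale frame (e.g. 40x30). Editor's illustration;
         * the 4.0 threshold below is a hypothetical starting point. */
        #include <stdint.h>
        #include <stdlib.h>

        #define W 40
        #define H 30

        /* Returns the mean absolute per-pixel difference between frames. */
        static double motion_magnitude(const uint8_t *prev, const uint8_t *cur)
        {
            long sad = 0;
            for (int i = 0; i < W * H; ++i)
                sad += abs((int)cur[i] - (int)prev[i]);
            return (double)sad / (W * H);
        }

        /* Usage: run the expensive recognizer only while the camera moves:
         *   if (motion_magnitude(prevFrame, curFrame) > 4.0) run_recognizer();
         */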

    Read the article

  • How to generate random numbers of lognormal distribution within specific range in Matlab

    - by Harpreet
    My grain sizes are defined as D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]. The problem is described below in steps. Grain sizes should follow a lognormal distribution. The mean of the grain sizes is fixed at 0.84, and the standard deviation should be as low as possible but not zero. 90% of the grains (by weight %) fall in the size range of 1.19 to 0.59, and the remaining 10% fall in the size range of 0.50 to 0.42. Now I want to find the probabilities (weight percentages) of the grains falling in each grain size. It is allowable to split this grain size distribution into finer sizes, but it must always stay in the range of 1.19 to 0.42, i.e. 'D' can be continuous but 0.42 < D < 1.19. I need it fast. I tried on my own but I am not able to get the correct result: I am getting negative probabilities (weight percentages). Thanks to anyone who helps. I didn't incorporate point 3 as I learned about that condition later. Here are the simple steps I tried: %% D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]; s=0.30; % std dev of the lognormal distribution m=0.84; % mean of the lognormal distribution mu=log(m^2/sqrt(s^2+m^2)); % mean of the associated normal dist. sigma=sqrt(log((s^2/m^2)+1)); % std dev of the associated normal dist. [r,c]=size(D); for i=1:c D(i)=mu+(sigma.*randn(1)); w(i)=(log(D(i))-mu)/sigma; % the probability or the wt. percentage of the grain sizes end grain_size=exp(D); %%
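    For reference (an editor's note, not part of the post): the weight fraction in a size bin of a lognormal distribution comes from the CDF of the associated normal distribution, not from z-scores of random draws. With the post's own parameters, the probability of a grain falling between sizes L and U is

        P(L < D \le U) = \Phi\!\left(\frac{\ln U - \mu}{\sigma}\right) - \Phi\!\left(\frac{\ln L - \mu}{\sigma}\right),
        \qquad \mu = \ln\frac{m^2}{\sqrt{s^2 + m^2}}, \quad \sigma^2 = \ln\!\left(1 + \frac{s^2}{m^2}\right)

    where \Phi is the standard normal CDF. Summing these bin probabilities over non-overlapping bins covering [0.42, 1.19] always gives non-negative weights.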

    Read the article

  • Simplest way to debug an android bluetooth application

    - by intiha
    I am trying to test and build a sample Android application that can simply connect to a BT server to send and receive a few packets. Since the emulator has no BT support, what is the next best thing for testing BT communication? Can I just run code that acts as a server on my laptop and dumps the BT connection onto a console? Do I have to write this code, or is there a simple tool that saves me that hassle? One more thing: I have Windows 7, and currently I attach an Anycom USB BT adapter to my PC. It is able to see my Nexus One as a BT device. I can pair it with my laptop, but connecting always fails. When it tries to pair, I have tried both 0000 and 1234 as suggested on my Nexus, but my connection list still shows my laptop as paired but not connected. Any idea why? I searched for this problem, and apparently people talk about rebooting and power-cycling BT, but these solutions don't work for me. Thanks. Affan.

    Read the article

  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines, they would prefer not to. What DVCS systems can host their central repository on Windows while providing: push access to the repository; basic authentication (mostly just a way to allow or deny access to the whole repository, no need for fine-grained access); and a server process, so users don't need write access to the repository, reducing the risk of accidentally messing with it. On the client side, a GUI such as Tortoise would be more or less a requirement (sorry, the Windows shell sucks. :|). Ease of installation would be a huge plus, as our IT department is already quite low on resources. And using Windows credentials for authentication would be an advantage, but not a requirement as long as the client is able to store the credentials. I have had a (really) quick look at Git, Mercurial and Bazaar. Git seemed to use ssh or simple WebDAV for repository access, requiring write permission for the users. Mercurial had a built-in HTTP server, but this seemed to be for pull purposes only. Update: Mercurial supports push as well. Bazaar seemed to use sftp for repository access, again requiring write permission for the users. Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in Windows land? Apologies if this is a duplicate question; I couldn't find one. Update: Got Mercurial working for push purposes! A detailed list of what was required can be found in an answer below.

    Read the article

  • Payment Processors - What do I need to know if I want to accept credit cards on my website?

    - by Michael Pryor
    This question talks about different payment processors and what they cost, but I'm looking for the answer to: what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even, what if someone got their hands on the backup files with the SQL Server data files on them? What about backups? Are there other physical copies of that data around? Tip: if you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- i.e. they charge you more for cards with big rewards attached to them. Interchange-plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range.) This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.) This blog post gives a complete rundown of handling credit cards (specifically for the UK).

    Read the article

  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    OK, so I'm trying to build a stat tracker for my app, and I have built a data model object called statTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the selectors, but if I try to use KVC (i.e. setValue:forKey:), everything goes bad and it says my StatTracker class is not KVC compliant: valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".' Here's the code that triggers it: NSArray *statTrackerArray = [[NSArray alloc] init]; statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker]; NSNumber *number1 = [[NSNumber alloc] init]; number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])]; [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched" ]; NSError *error; if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) { NSLog(@"error writing to db"); } Not sure if this is enough code for you folks; let me know what you need if you do need more. This would be so sweet if I could use KVC, because I could then abstract all this stat tracking stuff into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now understanding the power of KVC; now I'm just trying to figure out how to make it work. Thanks! Nick

    Read the article

  • Can iPad/iPhone Touch Points be Wrong Due to Calibration?

    - by Kristopher Johnson
    I have an iPad application that uses the whole screen (that is, UIStatusBarHidden is set true in the Info.plist file). The main window's frame is set to (0, 0, 768, 1024), as is the main view in that frame. The main view has multitouch enabled. The view has code to handle touches: - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { for (UITouch *touch in touches) { CGPoint location = [touch locationInView:nil]; NSLog(@"touchesMoved at location %@", NSStringFromCGPoint(location)); } } When I run the app in the simulator, it works pretty much as expected. As I move the mouse from one edge of the screen to the other, reported X values go from 0 to 767. Reported Y values go from 20 to 1023, but it is a known issue that the simulator doesn't report touches in the top 20 pixels of the screen, even when there is no status bar. Here's what's weird: When I run the app on an actual iPad, the X values go from 0 to 767 as expected, but reported Y values go from -6 to 1017. The fact that it seems to work properly on the simulator leads me to suspect that real devices' touchscreens are not perfectly calibrated, and mine is simply reporting values six pixels too low. Can anyone verify that this is the case? Otherwise, is there anything else that could account for the Y values being six pixels off from what I expect? (In a few days, I should have a second iPad, so I can test this with another device and compare the results.)

    Read the article

  • Splitting a build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: we are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#)

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a webtest which makes a simple call to a WebService and looks like this: MyWebService webService = new MyWebService(); webService.Timeout = 180000; webService.myMethod(); I am not using ThinkTimes; the Run Duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this: Tests Total: 4500, Network Interface\Bytes sent (agent machine): 35,500. Then I ran the same test, but this time simulating 2 users, and got something like this: Tests Total: 2225, Network Interface\Bytes sent (agent machine): 30,500. So when I increased the number of users, the tests/sec was half of what it was with only 1 user, and the bytes sent by the agent were also lower. I find this strange, because it doesn't seem I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5 GB of RAM free, and my network utilization is about 0.5% of its capacity. To troubleshoot this, I ran a test using a Step Pattern; the simulated users went from 20 to 800. The requests/sec were practically constant through the whole test, so clearly something in my test or my environment is preventing the number of requests from getting higher. It would be expected behavior if the response time were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is the response time is practically constant the whole time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

    Read the article

  • Working with large sprite sheets on iPhone

    - by lukya
    Hi all, I am trying to use sprite sheet animation in my application. The first POC with a small sprite sheet worked fine, but when I changed to a bigger sprite sheet, I get a "check_safe_call: could not restore current frame" warning and the application quits. A quick search revealed that this warning means my app is taking too much memory or the image is too huge in dimensions. My image is 4.9 MB with dimensions of 6720 * 10080 (oops!!). I read that the iPhone allows at most a 3 MB image with dimensions up to 1024 * 1024. Also, the sprite sheet's dimensions should be a power of two. So please let me know how I can use a sprite sheet this big. One approach could be to cut the sprite sheet into many smaller sprite sheets and use them one at a time. Please suggest if you know any other/better approach to accommodate bigger sprite sheets, and whether the problem with my sprite sheet is size (4.9 MB) or dimensions (6720 * 10080). (Just FYI, I am not trying to play a movie, so using an MP4 file instead is not an option for me. I need to animate the sprite sheet based on accelerometer input, and I was able to achieve that in my POC with the smaller sprite sheet.) Thanks, Swapnil
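    As an illustration of the splitting approach (an editor's sketch with hypothetical frame sizes, not from the post): if frames are stored left to right, top to bottom across several 1024 x 1024 sheets, a frame index maps to a sheet and sub-rectangle with a little arithmetic:

        /* Sketch: locate animation frame `i` across multiple 1024x1024
         * sprite sheets laid out row-major. FRAME_W/FRAME_H are hypothetical
         * placeholders; editor's illustration. */
        #define SHEET_SIZE 1024
        #define FRAME_W    128
        #define FRAME_H    128

        typedef struct { int sheet, x, y; } frame_loc;

        static frame_loc locate_frame(int i)
        {
            const int cols = SHEET_SIZE / FRAME_W;     /* frames per row   */
            const int rows = SHEET_SIZE / FRAME_H;     /* rows per sheet   */
            const int per_sheet = cols * rows;         /* frames per sheet */
            frame_loc loc;
            loc.sheet = i / per_sheet;
            loc.x = ((i % per_sheet) % cols) * FRAME_W;
            loc.y = ((i % per_sheet) / cols) * FRAME_H;
            return loc;
        }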

    Read the article

  • Hibernate JDBCConnectionException: Communications link failure and java.io.EOFException: Can not read response from server

    - by Marc
    I get a well-known exception using the MySql JDBC driver: JDBCConnectionException: Communications link failure, java.io.EOFException: Can not read response from server. This is caused by the wait_timeout parameter in my.cnf. So I decided to use the c3p0 connection pool along with Hibernate. Here is what I added to hibernate.cfg.xml: <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property> <property name="c3p0.min_size">10</property> <property name="c3p0.max_size">100</property> <property name="c3p0.timeout">1000</property> <property name="c3p0.preferredTestQuery">SELECT 1</property> <property name="c3p0.acquire_increment">1</property> <property name="c3p0.idle_test_period">2</property> <property name="c3p0.max_statements">50</property> idle_test_period is deliberately low for test purposes. Looking at the MySQL logs I can see the "SELECT 1" query being sent regularly to the MySQL server, so that part works. Unfortunately I still get this EOF exception within my app if I wait longer than wait_timeout seconds (set to 10 for test purposes). I'm using Hibernate 4.1.1 and mysql-jdbc-connector 5.1.18. So what am I doing wrong? Thanks, Marc.

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up into the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items), I exceed the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side; this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications, i.e. .NET console apps and maybe some other non-UI apps. I would like to keep the UI applications responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.

    Read the article

  • Cross-Platform Language + GUI Toolkit for Prototyping Multimedia Applications

    - by msutherl
    I'm looking for a language + GUI toolkit for rapidly prototyping utility applications for multimedia installations. I've been working with Max/MSP/Jitter for many years, but I'd like to add a text-based language to my 'arsenal' for tasks apart from 'content production'. (When it comes to actual media synthesis, my choices are clear: SuperCollider + MSP for audio, Jitter + Quartz + openFrameworks for video.) I'm looking for something that maintains some of the advantages of Max, but is lower-level, faster, more cross-platform (Linux support), and text-based. Integration with powerful sound/video libraries is not a requirement. Some requirements: Cross-platform (at least OSX and Linux; Windows is a plus). Fast and easy cross-platform GUIs with no platform-specific modification. GUI code separated from backend code as much as possible. Good for interfacing with external serial devices (microcontrollers). Good network support (UDP/TCP). Good libraries for multimedia (video, sound, OSC) are a plus. Asynchronous/synchronous UNIX integration is a plus. The options that come to mind: AS3/Flex (not a fan of AS3 or of the idea of running in the Flash Player); openFrameworks (a C++ framework, perhaps a bit too low-level [I'm looking for fast development time] and biased toward video work); Java w/ Processing libraries (like openFrameworks, just slower); Python + Qt (is Qt appropriate for rapid prototyping?); Python + another GUI toolkit; SuperCollider + Swing (yucky GUI development); Java w/ SWT. Any other options? What do you recommend?

    Read the article

  • Load Spikes on a Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated machine with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor count on the site, that triggers this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes till it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands of) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
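    One way to picture the hashmap-of-streams idea (an editor's sketch, written in C for brevity; the same shape maps onto a Java HashMap with LRU eviction): since many operating systems cap the number of simultaneously open files well below 6,000, a small cache of handles opened in append mode does the routing:

        /* Sketch: route messages to per-type output files through a small
         * cache of open FILE* handles with LRU eviction. Editor's
         * illustration; message framing/parsing is assumed done elsewhere. */
        #include <stdio.h>
        #include <string.h>

        #define KEY_LEN   6
        #define CACHE_CAP 64            /* well under typical fd limits */

        typedef struct { char key[KEY_LEN + 1]; FILE *fp; long stamp; } slot;
        static slot cache[CACHE_CAP];
        static long tick = 0;

        static FILE *stream_for(const char *key)
        {
            int free_i = -1, lru_i = 0;
            for (int i = 0; i < CACHE_CAP; ++i) {
                if (cache[i].fp && strcmp(cache[i].key, key) == 0) {
                    cache[i].stamp = ++tick;
                    return cache[i].fp;              /* cache hit */
                }
                if (!cache[i].fp) free_i = i;
                else if (cache[i].stamp < cache[lru_i].stamp) lru_i = i;
            }
            int i = (free_i >= 0) ? free_i : lru_i;
            if (cache[i].fp) fclose(cache[i].fp);    /* evict least recent */
            char name[KEY_LEN + 5];
            snprintf(name, sizeof name, "%s.dat", key);
            strcpy(cache[i].key, key);
            cache[i].stamp = ++tick;
            cache[i].fp = fopen(name, "ab");         /* append mode */
            return cache[i].fp;
        }

        static void write_message(const char *key, const void *payload, size_t len)
        {
            FILE *fp = stream_for(key);
            if (fp) fwrite(payload, 1, len, fp);
        }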

    Read the article

  • What do you use to play sound in iPhone games?

    - by zoul
    Hello! I have a performance-intensive iPhone game I would like to add sounds to. There seem to be about three main choices: (1) AVAudioPlayer, (2) Audio Queues and (3) OpenAL. I’d hate to write pages of low-level code just to play a sample, so I would like to use AVAudioPlayer. The problem is that it seems to kill the performance – I’ve done a simple measurement using CFAbsoluteTimeGetCurrent and the play message seems to take somewhere from 9 to 30 ms to finish. That’s quite miserable, considering that 25 ms == 40 fps. Of course there is the prepareToPlay method that should speed things up. That’s why I wrote a simple class that keeps several AVAudioPlayers at its disposal, prepares them beforehand and then plays the sample using the prepared player. No cigar; it still takes the ~20 ms I mentioned above. Such performance is unusable for games, so what do you use to play sounds with decent performance on iPhone? Am I doing something wrong with the AVAudioPlayer? Do you play sounds with Audio Queues? (I’ve written something akin to AVAudioPlayer before 2.2 came out and I would love to be spared that experience.) Do you use OpenAL? If yes, is there a simple way to play sounds with OpenAL, or do you have to write pages of code? Update: Yes, playing sounds with OpenAL is fairly simple.
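    Echoing that update (an editor's sketch, not the poster's code): once a device and context exist, playing a preloaded PCM buffer through OpenAL takes only a handful of calls, and alSourcePlay itself returns immediately:

        /* Sketch: minimal OpenAL playback of a mono 16-bit PCM buffer.
         * `pcm`, `pcm_bytes` and the 44100 Hz rate are placeholders; on
         * iPhone the headers live in the OpenAL framework. Editor's
         * illustration; error checking omitted for brevity. */
        #include <OpenAL/al.h>
        #include <OpenAL/alc.h>

        void play_sample(const void *pcm, ALsizei pcm_bytes)
        {
            static ALCdevice  *dev = NULL;
            static ALCcontext *ctx = NULL;
            if (!dev) {                         /* one-time setup */
                dev = alcOpenDevice(NULL);
                ctx = alcCreateContext(dev, NULL);
                alcMakeContextCurrent(ctx);
            }

            ALuint buf, src;
            alGenBuffers(1, &buf);
            alBufferData(buf, AL_FORMAT_MONO16, pcm, pcm_bytes, 44100);
            alGenSources(1, &src);
            alSourcei(src, AL_BUFFER, buf);
            alSourcePlay(src);                  /* returns immediately */
            /* In a real game, keep src/buf around and reuse per effect. */
        }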

    Read the article

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE 754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and C++ code by using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit-manipulation algorithm if necessary. Thanks.
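    For reference (an editor's sketch from the commonly documented layout; verify against known sample values): the pre-IEEE format in question is usually called Microsoft Binary Format (MBF). A 32-bit MBF value stores an 8-bit exponent (bias 128, mantissa in [0.5, 1)) in the high byte, then the sign bit, then 23 mantissa bits, so conversion to IEEE 754 (bias 127, implicit leading 1) is mostly a matter of rebiasing the exponent by 2 and moving fields. The same bit-twiddling translates directly to C#:

        /* Sketch: convert a 32-bit MBF float (taken as a little-endian
         * word) to IEEE 754 single precision, and back. No handling of
         * IEEE infinities/NaNs (MBF cannot represent them) or of IEEE
         * exponents above 253, which would overflow MBF. Editor's
         * illustration; test against known sample values. */
        #include <stdint.h>

        uint32_t mbf_to_ieee(uint32_t mbf)
        {
            uint32_t exponent = mbf >> 24;       /* bias 128              */
            uint32_t sign     = (mbf >> 23) & 1;
            uint32_t mantissa = mbf & 0x007FFFFF;
            if (exponent == 0) return 0;         /* MBF zero              */
            return (sign << 31) | ((exponent - 2) << 23) | mantissa;
        }

        uint32_t ieee_to_mbf(uint32_t ieee)
        {
            uint32_t sign     = ieee >> 31;
            uint32_t exponent = (ieee >> 23) & 0xFF;
            uint32_t mantissa = ieee & 0x007FFFFF;
            if (exponent == 0) return 0;         /* zero/denormal -> zero */
            return ((exponent + 2) << 24) | (sign << 23) | mantissa;
        }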

    Read the article

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form: (...something not so complex...)(...ditto...)(...ditto...)...lots... When I try to use Simplify, Mathematica grinds to a halt, I am assuming because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time? Edit: Some additional info and progress. Using the advice from you guys, I have now started using something in the vein of In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]]; In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form],{3}] Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)] Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem/question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g. In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}] Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2] simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this would clearly result in a vast amount of repeated work by Simplify and ultimately in the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to certain levels? I realize that this sort of restriction may not produce optimal results, but the idea here is to get something that is "good enough".

    Read the article

  • undefined reference to function, despite giving reference in C

    - by Jamie Edwards
    I'm following a tutorial, but when it comes to compiling and linking the code I get the following error: /tmp/cc8gRrVZ.o: In function `main': main.c:(.text+0xa): undefined reference to `monitor_clear' main.c:(.text+0x16): undefined reference to `monitor_write' collect2: ld returned 1 exit status make: *** [obj/main.o] Error 1 What that is telling me is that I haven't defined both 'monitor_clear' and 'monitor_write'. But I have, in both the header and source files. They are as follows: monitor.c: // monitor.c -- Defines functions for writing to the monitor. // heavily based on Bran's kernel development tutorials, // but rewritten for JamesM's kernel tutorials. #include "monitor.h" // The VGA framebuffer starts at 0xB8000. u16int *video_memory = (u16int *)0xB8000; // Stores the cursor position. u8int cursor_x = 0; u8int cursor_y = 0; // Updates the hardware cursor. static void move_cursor() { // The screen is 80 characters wide... u16int cursorLocation = cursor_y * 80 + cursor_x; outb(0x3D4, 14); // Tell the VGA board we are setting the high cursor byte. outb(0x3D5, cursorLocation >> 8); // Send the high cursor byte. outb(0x3D4, 15); // Tell the VGA board we are setting the low cursor byte. outb(0x3D5, cursorLocation); // Send the low cursor byte. } // Scrolls the text on the screen up by one line. static void scroll() { // Get a space character with the default colour attributes. u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F); u16int blank = 0x20 /* space */ | (attributeByte << 8); // Row 25 is the end, this means we need to scroll up if(cursor_y >= 25) { // Move the current text chunk that makes up the screen // back in the buffer by a line int i; for (i = 0*80; i < 24*80; i++) { video_memory[i] = video_memory[i+80]; } // The last line should now be blank. Do this by writing // 80 spaces to it. for (i = 24*80; i < 25*80; i++) { video_memory[i] = blank; } // The cursor should now be on the last line. cursor_y = 24; } } // Writes a single character out to the screen. void monitor_put(char c) { // The background colour is black (0), the foreground is white (15). u8int backColour = 0; u8int foreColour = 15; // The attribute byte is made up of two nibbles - the lower being the // foreground colour, and the upper the background colour. u8int attributeByte = (backColour << 4) | (foreColour & 0x0F); // The attribute byte is the top 8 bits of the word we have to send to the // VGA board. u16int attribute = attributeByte << 8; u16int *location; // Handle a backspace, by moving the cursor back one space if (c == 0x08 && cursor_x) { cursor_x--; } // Handle a tab by increasing the cursor's X, but only to a point // where it is divisible by 8. else if (c == 0x09) { cursor_x = (cursor_x+8) & ~(8-1); } // Handle carriage return else if (c == '\r') { cursor_x = 0; } // Handle newline by moving cursor back to left and increasing the row else if (c == '\n') { cursor_x = 0; cursor_y++; } // Handle any other printable character. else if(c >= ' ') { location = video_memory + (cursor_y*80 + cursor_x); *location = c | attribute; cursor_x++; } // Check if we need to insert a new line because we have reached the end // of the screen. if (cursor_x >= 80) { cursor_x = 0; cursor_y ++; } // Scroll the screen if needed. scroll(); // Move the hardware cursor. move_cursor(); } // Clears the screen, by copying lots of spaces to the framebuffer. 
void monitor_clear() { // Make an attribute byte for the default colours u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F); u16int blank = 0x20 /* space */ | (attributeByte << 8); int i; for (i = 0; i < 80*25; i++) { video_memory[i] = blank; } // Move the hardware cursor back to the start. cursor_x = 0; cursor_y = 0; move_cursor(); } // Outputs a null-terminated ASCII string to the monitor. void monitor_write(char *c) { int i = 0; while (c[i]) { monitor_put(c[i++]); } } void monitor_write_hex(u32int n) { s32int tmp; monitor_write("0x"); char noZeroes = 1; int i; for (i = 28; i > 0; i -= 4) { tmp = (n >> i) & 0xF; if (tmp == 0 && noZeroes != 0) { continue; } if (tmp >= 0xA) { noZeroes = 0; monitor_put (tmp-0xA+'a' ); } else { noZeroes = 0; monitor_put( tmp+'0' ); } } tmp = n & 0xF; if (tmp >= 0xA) { monitor_put (tmp-0xA+'a'); } else { monitor_put (tmp+'0'); } } void monitor_write_dec(u32int n) { if (n == 0) { monitor_put('0'); return; } s32int acc = n; char c[32]; int i = 0; while (acc > 0) { c[i] = '0' + acc%10; acc /= 10; i++; } c[i] = 0; char c2[32]; c2[i--] = 0; int j = 0; while(i >= 0) { c2[i--] = c[j++]; } monitor_write(c2); } monitor.h: // monitor.h -- Defines the interface for monitor.c. // From JamesM's kernel development tutorials. #ifndef MONITOR_H #define MONITOR_H #include "common.h" // Write a single character out to the screen. void monitor_put(char c); // Clear the screen to all black. void monitor_clear(); // Output a null-terminated ASCII string to the monitor. void monitor_write(char *c); #endif // MONITOR_H common.c: // common.c -- Defines some global functions. // From JamesM's kernel development tutorials. #include "common.h" // Write a byte out to the specified port. void outb ( u16int port, u8int value ) { asm volatile ( "outb %1, %0" : : "dN" ( port ), "a" ( value ) ); } u8int inb ( u16int port ) { u8int ret; asm volatile ( "inb %1, %0" : "=a" ( ret ) : "dN" ( port ) ); return ret; } u16int inw ( u16int port ) { u16int ret; asm volatile ( "inw %1, %0" : "=a" ( ret ) : "dN" ( port ) ); return ret; } // Copy len bytes from src to dest. void memcpy(u8int *dest, const u8int *src, u32int len) { const u8int *sp = ( const u8int * ) src; u8int *dp = ( u8int * ) dest; for ( ; len != 0; len-- ) *dp++ = *sp++; } // Write len copies of val into dest. void memset(u8int *dest, u8int val, u32int len) { u8int *temp = ( u8int * ) dest; for ( ; len != 0; len-- ) *temp++ = val; } // Compare two strings. Should return -1 if // str1 < str2, 0 if they are equal or 1 otherwise. int strcmp(char *str1, char *str2) { int i = 0; int failed = 0; while ( str1[i] != '\0' && str2[i] != '\0' ) { if ( str1[i] != str2[i] ) { failed = 1; break; } i++; } // Why did the loop exit? if ( ( str1[i] == '\0' && str2[i] != '\0' ) || ( str1[i] != '\0' && str2[i] == '\0' ) ) failed = 1; return failed; } // Copy the NULL-terminated string src into dest, and // return dest. char *strcpy(char *dest, const char *src) { char *ret = dest; do { *dest++ = *src++; } while ( *src != 0 ); return ret; } // Concatenate the NULL-terminated string src onto // the end of dest, and return dest. char *strcat(char *dest, const char *src) { while ( *dest != 0 ) { dest++; } do { *dest++ = *src++; } while ( *src != 0 ); return dest; } common.h: // common.h -- Defines typedefs and some global functions. // From JamesM's kernel development tutorials. #ifndef COMMON_H #define COMMON_H // Some nice typedefs, to standardise sizes across platforms. // These typedefs are written for 32-bit x86.
typedef unsigned int u32int; typedef int s32int; typedef unsigned short u16int; typedef short s16int; typedef unsigned char u8int; typedef char s8int; void outb ( u16int port, u8int value ); u8int inb ( u16int port ); u16int inw ( u16int port ); #endif //COMMON_H main.c: // main.c -- Defines the C-code kernel entry point, calls initialisation routines. // Made for JamesM's tutorials <www.jamesmolloy.co.uk> #include "monitor.h" int main(struct multiboot *mboot_ptr) { monitor_clear(); monitor_write ( "hello, world!" ); return 0; } here is my makefile: C_SOURCES= main.c monitor.c common.c S_SOURCES= boot.s C_OBJECTS=$(patsubst %.c, obj/%.o, $(C_SOURCES)) S_OBJECTS=$(patsubst %.s, obj/%.o, $(S_SOURCES)) CFLAGS=-nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders LDFLAGS=-Tlink.ld -melf_i386 --oformat=elf32-i386 ASFLAGS=-felf all: kern/kernel .PHONY: clean clean: -rm -f kern/kernel kern/kernel: $(S_OBJECTS) $(C_OBJECTS) ld $(LDFLAGS) -o $@ $^ $(C_OBJECTS): obj/%.o : %.c gcc $(CFLAGS) $< -o $@ vpath %.c source $(S_OBJECTS): obj/%.o : %.s nasm $(ASFLAGS) $< -o $@ vpath %.s asem Hopefully this will help you understand what is going wrong and how to fix it :L Thanks in advance. Jamie.

    Read the article
