Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 38/151

  • What is the proper way to code a read-while loop in Scala?

    - by ARKBAN
    What is the "proper" of writing the standard read-while loop in Scala? By proper I mean written in a Scala-like way as opposed to a Java-like way. Here is the code I have in Java: MessageDigest md = MessageDigest.getInstance( "MD5" ); InputStream input = new FileInputStream( "file" ); byte[] buffer = new byte[1024]; int readLen; while( ( readLen = input.read( buffer ) ) != -1 ) md.update( buffer, 0, readLen ); return md.digest(); Here is the code I have in Scala: val md = MessageDigest.getInstance( hashInfo.algorithm ) val input = new FileInputStream( "file" ) val buffer = new Array[ Byte ]( 1024 ) var readLen = 0 while( readLen != -1 ) { readLen = input.read( buffer ) if( readLen != -1 ) md.update( buffer, 0, readLen ) } md.digest The Scala code is correct and works, but feels very un-Scala-ish. For one it is a literal translation of the Java code, taking advantage of none of the advantages of Scala. Further it is actually longer than the Java code! I really feel like I'm missing something, but I can't figure out what. I'm fairly new to Scala, and so I'm asking the question to avoid falling into the pitfall of writing Java-style code in Scala. I'm more interested in the Scala way to solve this kind of problem than in any specific helper method that might be provided by the Scala API to hash a file. (I apologize in advance for my ad hoc Scala adjectives throughout this question.)

    Read the article

  • Handling user interface in a multi-threaded application (or being forced to have a UI-only main thread)

    - by Patrick
    In my application, I have a 'logging' window, which shows all the logging, warnings, and errors of the application. Last year my application was still single-threaded, so this worked quite well. Now I am introducing multithreading. I quickly noticed that it's not a good idea to update the logging window from different threads. Reading some articles on keeping the UI in the main thread, I made a communication buffer, in which the other threads add their logging messages, and from which the main thread takes the messages and shows them in the logging window (this is done in the message loop). Now, in a part of my application, the memory usage increases dramatically, because the separate threads are generating lots of logging messages and the main thread cannot empty the communication buffer quickly enough. After a while the memory decreases again (once the other threads have finished their work and the main thread gradually empties the communication buffer). I solved this problem by putting a maximum size on the communication buffer, but then I run into a problem in the following situation: the main thread has to perform a complex action; it takes some parts of the action and lets separate threads execute them; while the separate threads are executing their logic, the main thread processes the results from the other threads and continues with its work once the other threads are finished. The problem is that in this situation, if the other threads perform logging, there is no UI message loop, and so the communication buffer is filled but not emptied. I see two solutions to this problem: require the main thread to poll the communication buffer regularly, or perform only user-interface logic in the main thread (no other logic). I think the second solution seems the best, but it may not be that easy to introduce in a big application (in my case it performs mathematical simulations). Are there any other solutions or tips? Or is one of the two proposed the best, easiest, most pragmatic solution? Thanks, Patrick
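    A bounded buffer with blocking producers is a third option worth considering: worker threads stall briefly whenever the UI falls behind, which caps memory without dropping messages, and it works even while the main thread is busy computing (it drains the queue whenever it gets a chance). A minimal sketch of the idea, assuming C++ and illustrative names rather than the poster's actual code:

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        // Hypothetical bounded log queue: push() blocks once the buffer is
        // full, so producers apply backpressure instead of growing memory.
        class BoundedLogQueue {
        public:
            explicit BoundedLogQueue(std::size_t capacity) : capacity_(capacity) {}

            void push(std::string msg) {
                std::unique_lock<std::mutex> lock(mutex_);
                notFull_.wait(lock, [this] { return queue_.size() < capacity_; });
                queue_.push(std::move(msg));
            }

            // Called from the UI thread (message loop or a timer) to drain.
            bool tryPop(std::string& msg) {
                std::lock_guard<std::mutex> lock(mutex_);
                if (queue_.empty()) return false;
                msg = std::move(queue_.front());
                queue_.pop();
                notFull_.notify_one();
                return true;
            }

        private:
            std::queue<std::string> queue_;
            std::size_t capacity_;
            std::mutex mutex_;
            std::condition_variable notFull_;
        };

    During the "main thread computes too" phase, a periodic timer that calls tryPop() a bounded number of times per tick keeps the buffer drained without requiring a full UI-only redesign.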

    Read the article

  • How to handle image/gif type response on client side using GWT

    - by user200340
    Hi all, I have a question about how to handle an image/gif type response on the client side; any suggestion would be great. There is a service which is responsible for retrieving an image (only one at a time at the moment) from the database. The code is something like, JDBC Connection Construct MYSQL query. Execute query If has ResultSet, retrieve first one { //save image into Blob image, “img” is the only entity in the image table. image = rs.getBlob("img"); } response.setContentType("image/gif"); //set response type InputStream in = image.getBinaryStream(); //output Blob image to InputStream int bufferSize = 1024; //buffer size byte[] buffer = new byte[bufferSize]; //initial buffer int length =0; //read length data from inputstream and store into buffer while ((length = in.read(buffer)) != -1) { out.write(buffer, 0, length); //write into ServletOutputStream } in.close(); out.flush(); //write out The code on client side .... imgform.setAction(GWT.getModuleBaseURL() + "serviceexample/ImgRetrieve"); .... ClickListener { OnClick, then imgform.submit(); } formHandler { onSubmit, form validation onSubmitComplete ??????? //handle response, and display image **Here is my question, I had tried Image img = new Image(GWT.getHostPageBaseURL() +"serviceexample/ImgRetrieve"); img.setSize("300", "300"); imgpanel.add(img); but I only got a non-displayed image of 300x300 size.** } So, how should I handle the response in this case? Thanks,

    Read the article

  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64 bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem - any clues? #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/mman.h> #include <sys/types.h> #include <unistd.h> int main(int argc, char *argv[]) { char * filename = argv[1]; int fd; off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666); ftruncate(fd, size); printf("Created %ld byte sparse file\n", size); char * buffer = (char *)mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); if ( buffer == MAP_FAILED ) { perror("mmap"); exit(1); } printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer); strcpy( buffer, "cafebabe" ); printf("Wrote to start\n"); strcpy( buffer + (size - 9), "deadbeef" ); printf("Wrote to end\n"); if ( munmap(buffer, (size_t)size) < 0 ) { perror("munmap"); exit(1); } close(fd); return 0; }
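    Nothing here looks wrong on a 64-bit machine, so the limits worth checking are administrative rather than architectural: a finite address-space limit (ulimit -v, i.e. RLIMIT_AS) or the kernel overcommit policy (/proc/sys/vm/overcommit_memory) can both surface as ENOMEM long before the address space runs out. A hedged snippet for checking the first from inside the program:

        #include <stdio.h>
        #include <sys/resource.h>

        /* If RLIMIT_AS is finite and below the mapping size, mmap() of the
         * sparse file fails with ENOMEM no matter how much RAM is free. */
        void print_as_limit(void) {
            struct rlimit rl;
            if (getrlimit(RLIMIT_AS, &rl) == 0) {
                if (rl.rlim_cur == RLIM_INFINITY)
                    printf("RLIMIT_AS: unlimited\n");
                else
                    printf("RLIMIT_AS: %llu bytes\n",
                           (unsigned long long)rl.rlim_cur);
            }
        }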

    Read the article

  • Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file

    - by badgerman1
    I'm trying to make a very large file editor (where the editor only stores a part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to be able to update the text view buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of the file. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and ScrollBars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the filesize (which I try to set via Gtk.Adjustment parameters) when I load into the textview. I need to have a widget that "knows" that it is looking at a part of a file, and can load/unload buffers as necessary to view different parts of the file. So far, I believe I'll respond to the "change_view" event to calculate when I'm off, or about to be off, the current buffer and need to load the next, but I don't know how to get the scrollbars to have the top relate to the beginning of the file and the bottom relate to the end of the file, rather than to the loaded buffer in the textview. Any help would be greatly appreciated, thanks!

    Read the article

  • Sha256 is giving junk output

    - by user1746617
    Hey, can you be more specific on how to convert the binary digest to hex? bool HashStatus::calculate_digest_value(char * path,unsigned char * output) { FILE* file = fopen(path, "rb"); if(!file) { g_message("SignatureValidator::VerifyReferences,file not opened"); return -1; } unsigned char hash[SHA256_DIGEST_LENGTH]; SHA256_CTX sha256; SHA256_Init(&sha256); const int bufSize = 32768; unsigned char* buffer = malloc(bufSize); int bytesRead = 0; if(!buffer) return NULL; while((bytesRead = fread(buffer, 1, bufSize, file))) { g_message("calculate digest value,verify.cpp::%s",buffer); SHA256_Update(&sha256, buffer, bytesRead); } SHA256_Final(hash, &sha256); g_message("verify.cpp,after final"); sha256_hash_string(hash, output); g_message("verify.cpp,after sha256_hash_string %s",hash); fclose(file); free(buffer); return true; } This is my code to convert file data into a hash using the SHA256 OpenSSL functions. My output is: 1d54e12333988471354907a760b9cde861423615bb5255ee837e3b27b32366 but the expected output is: HVThAjM5iEcTVJB6dgC5zehhQjYVu1JV7oN+OyezI2Y= Can you guys please tell me what's wrong with this code? I'm new to this, so please guide me step by step and in detail.
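    For what it's worth, the two outputs shown appear to be the same digest in different encodings: 1d54e1... is hex, while HVThAjM5iEcTVJB6dgC5zehhQjYVu1JV7oN+OyezI2Y= is Base64 of the same bytes (its prefix HVTh decodes to 1d 54 e1). If the hex form is what's wanted, a conversion helper along these lines works; the name and signature of sha256_hash_string are assumed from the call above:

        #include <openssl/sha.h>
        #include <stdio.h>

        /* Assumed helper: format the 32-byte digest as 64 lowercase hex
         * characters plus a terminating NUL (output needs >= 65 bytes). */
        static void sha256_hash_string(const unsigned char hash[SHA256_DIGEST_LENGTH],
                                       char *output) {
            for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
                sprintf(output + i * 2, "%02x", hash[i]);
            output[SHA256_DIGEST_LENGTH * 2] = '\0';
        }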

    Read the article

  • Problem with reading and writing to binary file in C++

    - by Reem
    I need to make a file that contains "name", which is a string -array of char- and "data", which is an array of bytes -array of char in C++- but the first problem I faced is how to separate the "name" from the "data". A newline character could work in this case (assuming that I don't have "\n" in the name), but I could have special characters in the "data" part so there's no way to know when it ends. So I'm putting an int value in the file before the data which holds the size of the "data"! I tried to do this with code as follows: if((fp = fopen("file.bin","wb")) == NULL) { return false; } char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int number[1]; number[0]=10; fwrite( number ,1,1, fp ); char data[] = "1234567890"; fwrite( data , 1, number[0], fp ); fclose(fp); but I didn't know if the "int" part was right, so I tried many other codes including this one: char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int size=10; fwrite( &size ,sizeof size,1, fp ); char data[] = "1234567890"; fwrite( data , 1, size, fp ); I see 4 "NULL" characters in the file when I open it instead of seeing an integer. Is that normal? The other problem I'm facing is reading that again from the file! The code I tried for reading didn't work at all :( I tried it with "fread", but I'm not sure if I should use "fseek" with it or whether it just reads the next characters after it. Forgive me, but I'm a beginner :(
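    Seeing mostly NUL characters is expected: fwrite(&size, sizeof size, 1, fp) stores the int 10 as its four raw bytes (0A 00 00 00 on a little-endian machine), not as the text "10" — binary files look like that in a text editor. Note that the first variant, fwrite(number, 1, 1, fp), writes only one byte of the int; the &size version is the right one. Reading is symmetric and needs no fseek, since the file position advances with each read. A hedged round-trip sketch:

        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main() {
            const char name[] = "first data\n";
            const char data[] = "1234567890";
            int size = (int)strlen(data);

            FILE* fp = fopen("file.bin", "wb");
            if (!fp) return 1;
            fwrite(name, 1, strlen(name), fp);
            fwrite(&size, sizeof size, 1, fp);   // raw bytes, not text
            fwrite(data, 1, size, fp);
            fclose(fp);

            fp = fopen("file.bin", "rb");
            if (!fp) return 1;
            char nameBuf[64];
            fgets(nameBuf, sizeof nameBuf, fp);  // reads up to and including '\n'
            int readSize = 0;
            fread(&readSize, sizeof readSize, 1, fp);
            std::vector<char> dataBuf(readSize);
            fread(dataBuf.data(), 1, readSize, fp);
            fclose(fp);

            printf("name=%s size=%d data=%.*s\n", nameBuf, readSize,
                   readSize, dataBuf.data());
            return 0;
        }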

    Read the article

  • iPhone AES encryption issue

    - by Dilshan
    Hi, I use the following code to encrypt using AES. - (NSData*)AES256EncryptWithKey:(NSString*)key theMsg:(NSData *)myMessage { // 'key' should be 32 bytes for AES256, will be null-padded otherwise char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [myMessage length]; //See the doc: For block ciphers, the output size will always be less than or //equal to the input size plus the size of one block. //That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void* buffer = malloc(bufferSize); size_t numBytesEncrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES256, NULL /* initialization vector (optional) */, [myMessage bytes], dataLength, /* input */ buffer, bufferSize, /* output */ &numBytesEncrypted); if (cryptStatus == kCCSuccess) { //the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted]; } free(buffer); //free the buffer; return nil; } However, the encryptmessage variable in the following code chunk is null when I try to print it. The same thing applies to decryption as well. What am I doing wrong here? NSData *encrData = [self AES256EncryptWithKey:theKey theMsg:myMessage]; NSString *encryptmessage = [[NSString alloc] initWithData:encrData encoding:NSUTF8StringEncoding]; Thank you

    Read the article

  • Storing "binary" data type in C program

    - by puchu
    I need to create a program that converts one number system to other number systems. I used itoa in Windows (Dev C++) and my only problem is that I do not know how to convert binary numbers to the other number systems. All the other number-system conversions work accordingly. Does this involve something like storing the input to be converted using %? Here is a snippet of my work: case 2: { printf("\nEnter a binary number: "); scanf("%d", &num); itoa(num,buffer,8); printf("\nOctal %s",buffer); itoa(num,buffer,10); printf("\nDecimal %s",buffer); itoa(num,buffer,16); printf("\nHexadecimal %s \n",buffer); break; } For decimal I used %d, for octal I used %o, and for hexadecimal I used %x. What could be the correct one for binary? Thanks for future answers!
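    One caveat in the snippet: scanf("%d", &num) parses the typed digits as decimal, so entering 1010 yields one thousand and ten, not binary ten, and there is no binary conversion specifier in the classic printf/scanf family. A common approach is to read the digits as a string and convert with strtol, which accepts an explicit base:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            char bin[65];
            printf("Enter a binary number: ");
            scanf("%64s", bin);

            long num = strtol(bin, NULL, 2);  /* parse the digits in base 2 */
            printf("Octal %lo\nDecimal %ld\nHexadecimal %lx\n", num, num, num);
            return 0;
        }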

    Read the article

  • Canvas draw calls are rendering out of sequence

    - by Tom Murray
    I have the following code for writing draw calls to a "back buffer" canvas, then placing those in a main canvas using drawImage. This is for optimization purposes and to ensure all images get placed in sequence. Before placing the buffer canvas on top of the main one, I'm using fillRect to create a dark-blue background on the main canvas. However, the blue background is rendering after the sprites. This is unexpected, as I am making its fillRect call first. Here is my code: render: function() { this.buffer.clearRect(0,0,this.w,this.h); this.context.fillStyle = "#000044"; this.context.fillRect(0,0,this.w,this.h); for (var i in this.renderQueue) { for (var ii = 0; ii < this.renderQueue[i].length; ii++) { sprite = this.renderQueue[i][ii]; // Draw it! this.buffer.fillStyle = "green"; this.buffer.fillRect(sprite.x, sprite.y, sprite.w, sprite.h); } } this.context.drawImage(this.bufferCanvas,0,0); } This also happens when I use fillRect on the buffer canvas, instead of the main one. Changing the globalCompositeOperation between 'source-over' and 'destination-over' (for both contexts) does nothing to change this. Paradoxically, if I instead place the blue fillRect inside the nested for loops with the other draw calls, it works as expected... Thanks in advance!

    Read the article

  • Pixel Perfect Collision Detection in Cocos2dx

    - by Happybirthday
    I am trying to port the pixel perfect collision detection to Cocos2d-x; the original version was made for Cocos2D and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/ Here is my code for the Cocos2d-x version bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2, bool pp, CCRenderTexture* _rt) { bool isColliding = false; CCRect intersection; CCRect r1 = spr1->boundingBox(); CCRect r2 = spr2->boundingBox(); intersection = CCRectMake(fmax(r1.getMinX(),r2.getMinX()), fmax( r1.getMinY(), r2.getMinY()) ,0,0); intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX()) - intersection.getMinX(); intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY()) - intersection.getMinY(); // Look for simple bounding box collision if ( (intersection.size.width > 0) && (intersection.size.height > 0) ) { // If we're not checking for pixel perfect collisions, return true if (!pp) { return true; } unsigned int x = intersection.origin.x; unsigned int y = intersection.origin.y; unsigned int w = intersection.size.width; unsigned int h = intersection.size.height; unsigned int numPixels = w * h; //CCLog("Intersection X and Y %d, %d", x, y); //CCLog("Number of pixels %d", numPixels); // Draw into the RenderTexture _rt->beginWithClear( 0, 0, 0, 0); // Render both sprites: first one in RED and second one in GREEN glColorMask(1, 0, 0, 1); spr1->visit(); glColorMask(0, 1, 0, 1); spr2->visit(); glColorMask(1, 1, 1, 1); // Get color values of intersection area ccColor4B *buffer = (ccColor4B *)malloc( sizeof(ccColor4B) * numPixels ); glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer); _rt->end(); // Read buffer unsigned int step = 1; for(unsigned int i=0; i < numPixels; i += step) { ccColor4B color = buffer[i]; if (color.r > 0 && color.g > 0) { isColliding = true; break; } } // Free buffer memory free(buffer); } return isColliding; } My code is working perfectly if I send the "pp" parameter as false, that is, if I do only a bounding-box collision, but I am not able to get it working correctly for the case when I need pixel-perfect collision. I think the OpenGL masking code is not working as I intended. Here is the code for "_rt": _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height); _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f, origin.y + visibleSize.height * 0.5f)); this->addChild(_rt, 1000000); _rt->setVisible(true); //For testing I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone guide me with what I am doing wrong? Thank you for your time :)

    Read the article

  • Finding the Twins when Implementing Catmull-Clark subdivision using Half-Edge mesh [migrated]

    - by Ailurus
    Note: The description became a little longer than expected. Do you know a readable implementation of this algorithm using this mesh? Please let me know! I'm trying to implement Catmull-Clark subdivision using Matlab (because later on the results have to be compared with some other stuff already implemented in Matlab). The first try was with a Vertex-Face mesh; the algorithm works but it is of course not very efficient (since you need neighbouring information for edges and faces). Therefore, I'm now using a Half-Edge mesh (info), see also the paper of Lutz Kettner. Wikipedia link to the idea behind Catmull-Clark SDV: Wiki. My problem lies in finding the Twin HalfEdges; I'm just not sure how to do this. Below I'm describing my thoughts on the implementation, trying to keep it concise. Half-Edge mesh (using indices to Vertices/HalfEdges/Faces): Vertex (x,y,z,Outgoing_HalfEdge) HalfEdge (HeadVertex (or TailVertex, which one should I use), Next, Face, Twin). Face (HalfEdge) To keep it simple for now, assume that every face is a quadrilateral. The actual mesh is a list of Vertices, HalfEdges and Faces. The new mesh will consist of NewVertices, NewHalfEdges and NewFaces, like this (note: Number_... is the number of ...): NumberNewVertices: Number_Faces + Number_HalfEdges/2 + Number_Vertices NumberNewHalfEdges: 4 * 4 * NumberFaces NumberNewFaces: 4 * NumberFaces Catmull-Clark: Find the FacePoint (centroid) of each Face: --> Just average the x,y,z values of the vertices, save as a NewVertex. Find the EdgePoint of each HalfEdge: --> To prevent duplicates (each HalfEdge has a Twin which would result in the same EdgePoint) --> Only calculate EdgePoints of the HalfEdge which has the lowest index of the pair. Update old Vertices. Ok, now all the new Vertices are calculated (however, their Outgoing_HalfEdge is still unknown). The next step is to save the new HalfEdges and Faces. This is the part causing me problems! Loop through each old Face; there are 4 new Faces to be created (because of the quadrilateral assumption). First create the 4 new HalfEdges per new Face, starting with one from the FacePoint to an EdgePoint. Next a new HalfEdge from the EdgePoint to an updated Vertex. Another new one from the updated Vertex to the next EdgePoint. Finally the fourth new HalfEdge from that EdgePoint back to the FacePoint. The HeadVertex of each new HalfEdge is known, and the Next HalfEdge too. The Face is also known (since it is the new face you're creating!). Only the Twin HalfEdge is unknown; how should I find it? By the way, while looping through the Vertices of the new Face, assign the Outgoing_HalfEdge to the Vertices. This is probably the place to find out which HalfEdge is the Twin. Finally, after the 4 new HalfEdges are created, save the Face with its HalfEdge index pointing to the last newly created HalfEdge. I hope this is clear; if needed I can post my (obviously not-yet-finished) Matlab code.
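    For the twins, one common trick is to index every half-edge by its directed (tail, head) vertex pair as it is created; the twin, if it exists yet, is whatever is stored under the reversed pair (head, tail). In Matlab the same idea works with containers.Map or a sparse matrix indexed by (tail, head). Sketched here in C++ for brevity, with illustrative names:

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct HalfEdge { int head = -1, next = -1, face = -1, twin = -1; };

        // Pack the directed pair (tail, head) into one map key.
        static std::uint64_t key(int tail, int head) {
            return (std::uint64_t(std::uint32_t(tail)) << 32) | std::uint32_t(head);
        }

        // Call once per newly created half-edge; twins link up as soon as
        // the second edge of a pair appears.
        void registerEdge(std::vector<HalfEdge>& edges,
                          std::unordered_map<std::uint64_t, int>& edgeMap,
                          int he, int tail, int head) {
            auto it = edgeMap.find(key(head, tail));   // opposite direction
            if (it != edgeMap.end()) {
                edges[he].twin = it->second;
                edges[it->second].twin = he;
            }
            edgeMap[key(tail, head)] = he;
        }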

    Read the article

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3d engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z But I'm trying to use this perspective-correct triangle rasteriser, which requires a W, per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: The code comments refer to W as the "homogenous coordinate" ... does that mean anything?
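    For a standard perspective projection, the "homogeneous coordinate" w of a vertex is the value you divide by when projecting — effectively the view-space depth — so with T = perspective / Z, T already plays the role of 1/w up to a constant factor. A perspective-correct rasteriser interpolates u/w, v/w and 1/w linearly in screen space (those quantities are affine there) and divides per pixel to recover u and v. A hedged sketch of the per-vertex setup, with assumed field names:

        struct Vertex {
            float sx, sy;   // projected screen position
            float u, v;     // texture coordinates in [0,1]
            float w;        // homogeneous coordinate: view-space depth (Z)
        };

        struct PerspVertex { float sx, sy, uOverW, vOverW, oneOverW; };

        PerspVertex prepare(const Vertex& in) {
            PerspVertex out;
            out.sx = in.sx;
            out.sy = in.sy;
            out.oneOverW = 1.0f / in.w;          // linear across the triangle
            out.uOverW   = in.u * out.oneOverW;  // so are u/w and v/w
            out.vOverW   = in.v * out.oneOverW;
            // per pixel: u = uOverW / oneOverW, v = vOverW / oneOverW
            return out;
        }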

    Read the article

  • Engine Rendering pipeline : Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the application layer in Objective-C and a separate self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer. It is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do: void RendererGLES20::render(Model * model) { // Set a bunch of uniforms glUniformMatrix4fv(.......); // Enable specific attributes, can be many glEnableVertexAttribArray(......); // Set a bunch of vertex attribute pointers: glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords); // Now actually Draw the geometry glDrawArrays(GL_TRIANGLES, 0, m->vertexCount); // After drawing, disable any vertex attributes: glDisableVertexAttribArray(.......); } As you can see, this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attribs, etc. In other words, I would have to change the RendererGLES20 render source code just to incorporate the new shader. Is there any way to make the shader object totally generic? Like: what if I just want to change the shader object and not worry about recompiling the game source? Any way to make the renderer agnostic of uniforms and attributes etc.? Even though we need to pass data to uniforms, what is the best place to do that? The Model class? Is the model class aware of shader-specific uniforms and attributes? The following shows the Actor class: class Actor : public ISceneNode { ModelController * model; AIController * AI; }; Model controller class: class ModelController { class IShader * shader; int textureId; vec4 tint; float alpha; struct Vertex * vertexArray; }; The Shader class just contains the shader object, compiling and linking sub-routines, etc. In the GameLogic class I am actually rendering the object: void GameLogic::update(float dt) { IRenderer * renderer = g_application->GetRenderer(); Actor * a = GetActor(id); renderer->render(a->model); } Please note that even though Actor extends ISceneNode, I haven't started implementing SceneGraph yet. I will do that as soon as I resolve this issue. Any ideas how to improve this? Related design patterns etc.? Thank you for reading the question.
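    One widely used pattern is to turn shader parameters into data instead of code: each material carries a table of named uniform values, and the renderer resolves names against whatever program the material references, so render() never mentions a particular shader. A rough sketch with illustrative names (in practice the location lookups are cached per program rather than done every frame):

        #include <GLES2/gl2.h>   // adjust the GL header for your platform
        #include <map>
        #include <string>

        struct Mat4 { float m[16]; };

        // Generic material: named values, resolved at draw time.
        class Material {
        public:
            GLuint program = 0;
            std::map<std::string, float> floats;
            std::map<std::string, Mat4>  matrices;

            void apply() const {
                glUseProgram(program);
                for (const auto& p : floats) {
                    GLint loc = glGetUniformLocation(program, p.first.c_str());
                    if (loc >= 0) glUniform1f(loc, p.second);
                }
                for (const auto& p : matrices) {
                    GLint loc = glGetUniformLocation(program, p.first.c_str());
                    if (loc >= 0) glUniformMatrix4fv(loc, 1, GL_FALSE, p.second.m);
                }
            }
        };

    With this shape, ModelController owns a Material, render() reduces to "apply material, bind vertex layout, draw", and a new shader becomes a new program plus a few named values — no renderer recompile.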

    Read the article

  • loading 3d model data into buffers

    - by mulletdevil
    I am using assimp to load 3d model data. I have noticed that each loaded model is made up of different meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? From looking through the index data that is loaded, it seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX9. Thank you, Mark
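    Both layouts work; per-mesh buffers are the simplest to get going. A common middle ground, since D3D9's DrawIndexedPrimitive already takes a BaseVertexIndex and a start index, is one shared vertex/index buffer per model with a recorded draw range per mesh. A sketch using assimp's types (the surrounding buffer-fill code is assumed):

        #include <assimp/scene.h>
        #include <vector>

        struct MeshRange {
            unsigned baseVertex;  // BaseVertexIndex for DrawIndexedPrimitive
            unsigned startIndex;  // first index in the shared index buffer
            unsigned primCount;   // triangle count
        };

        // Append each mesh's indices to one shared array and record its range;
        // vertex data would be appended to a shared vertex array the same way.
        std::vector<MeshRange> buildRanges(const aiScene* scene,
                                           std::vector<unsigned>& indices) {
            std::vector<MeshRange> ranges;
            unsigned baseVertex = 0;
            for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
                const aiMesh* mesh = scene->mMeshes[m];
                MeshRange r = { baseVertex, (unsigned)indices.size(),
                                mesh->mNumFaces };   // assumes aiProcess_Triangulate
                for (unsigned f = 0; f < mesh->mNumFaces; ++f)
                    for (unsigned i = 0; i < 3; ++i)
                        indices.push_back(mesh->mFaces[f].mIndices[i]);
                ranges.push_back(r);
                baseVertex += mesh->mNumVertices;
            }
            return ranges;
        }

    Each mesh then draws from the single buffer pair, e.g. device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, r.baseVertex, 0, vertexCount, r.startIndex, r.primCount).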

    Read the article

  • Why is my card blacklisted by Unity even though all the requirements are fulfilled?

    - by Oxwivi
    The following is the Unity test output: OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce FX 5500/AGP/SSE2 OpenGL version string: 2.1.2 NVIDIA 173.14.30 Not software rendered: yes Not blacklisted: no GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity supported: no As you can see, all requirements are fulfilled but my GPU is blacklisted. What can I do about it?

    Read the article

  • XNA ModelMesh.Draw vs GraphicsDevice.DrawIndexedPrimitives

    - by cubrman
    I am using XNA 4.0 and I wonder if drawing models with multiple meshes is better by filling the vertex and index buffers first and calling GraphicsDevice.DrawIndexedPrimitives() or by simply using good ol' foreach(...) {ModelMesh.Draw()}. Is it possible to add data to vertex/index buffers at all in order to pack all the models on the scene in them and then call Draw only once per frame? I would appreciate a link to an in-depth explanation. Thanks.

    Read the article

  • iPhone SDK: Audio Queue control

    - by codemercenary
    Hi all, I am new to the Audio Queue Services, so I have taken an example from a book called iPhone Cool Projects where it describes how to stream audio. I want to extend this to being able to play a continuous playlist of links to mp3 files, like an internet radio. The problem with the example code is that it does not detect when a stream ends and does not call AudioQueueStop at any point, so I added a counter of the number of buffers added to the queue, and then decrement this counter each time audioQueueOutputCallback is called by the queue. This works fine, except that when the buffer count goes to 0 and I then call AudioQueueFlush(audioQueue) followed by AudioQueueStop(audioQueue, false), I get an error. If I only call AudioQueueReset, it continues to load the buffers again, but plays them out faster than it loads them... getting stuck in a loop and then crashing. 2010-04-14 13:56:29.745 AudioPlayer[2269:207] init player with URL 2010-04-14 13:56:29.941 AudioPlayer[2269:207] did recieve data 2010-04-14 13:56:29.942 AudioPlayer[2269:207] audio request didReceiveData 2010-04-14 13:56:29.944 AudioPlayer[2269:207] >>> start audio queue 2010-04-14 13:56:29.960 AudioPlayer[2269:207] packetCallback count 2 2010-04-14 13:56:29.961 AudioPlayer[2269:207] add buffer: 1 2010-04-14 13:56:29.962 AudioPlayer[2269:207] did recieve data 2010-04-14 13:56:29.963 AudioPlayer[2269:207] audio request didReceiveData 2010-04-14 13:56:29.963 AudioPlayer[2269:207] packetCallback count 1 2010-04-14 13:56:29.964 AudioPlayer[2269:207] add buffer: 2 2010-04-14 13:56:29.965 AudioPlayer[2269:207] packetCallback count 13 2010-04-14 13:56:29.967 AudioPlayer[2269:207] add buffer: 3 2010-04-14 13:56:29.968 AudioPlayer[2269:207] done with buffer: 3 2010-04-14 13:56:29.969 AudioPlayer[2269:207] done with buffer: 2 2010-04-14 13:56:29.974 AudioPlayer[2269:207] done with buffer: 1 So this loop continues some 20 - 30 times and then it crashes. The first time it plays an audio file it queues up the buffers and then plays sound, but doesn't call back to delete them until some 100 or more have been played. Can anyone explain this behavior? I read that there was a limit of 1 audio queue for MP3 playback on the iPhone. Is that still true? If not, then I suppose I should use another audio queue for the next mp3 stream. I've had a look through the Apple docs but it doesn't explain this in any particular detail. A better insight into this would be great. TIA.
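    One thing worth trying: treat "no more data to enqueue" (end of the HTTP stream) rather than "buffer count hit zero" as the end-of-stream signal, then ask the queue to drain and stop asynchronously, and watch its running property to know when it is safe to start the next stream. A hedged sketch of that sequence:

        #include <AudioToolbox/AudioToolbox.h>

        // After the last packets have been enqueued: flush any partially
        // filled data, then request an asynchronous stop -- with false,
        // AudioQueueStop waits for the queued buffers to play out first.
        static void finishAfterLastBuffer(AudioQueueRef queue) {
            AudioQueueFlush(queue);
            AudioQueueStop(queue, false);
        }

    AudioQueueAddPropertyListener with kAudioQueueProperty_IsRunning then reports when playback has actually ended; at that point a fresh queue for the next mp3 (or reusing this one after the stop completes) avoids the reset-and-requeue loop.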

    Read the article

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/logs/mysqld but still I am not sure how to fix this: 121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended 121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled. 121114 21:55:11 InnoDB: The InnoDB memory heap is disabled 121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3 121114 21:55:11 InnoDB: Using Linux native AIO 121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M 121114 21:55:11 InnoDB: Completed initialization of buffer pool 121114 21:55:11 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 121114 21:55:11 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 121114 21:55:12 InnoDB: Waiting for the background threads to start 121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262 121114 21:55:13 [Note] Event Scheduler: Loaded 0 events 121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.5.12' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) by Remi 121115 00:19:44 mysqld_safe Number of processes running now: 0 121115 00:19:44 mysqld_safe mysqld restarted 121115 0:19:47 [Note] Plugin 'FEDERATED' is disabled. 121115 0:19:47 InnoDB: The InnoDB memory heap is disabled 121115 0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121115 0:19:47 InnoDB: Compressed tables use zlib 1.2.3 121115 0:19:47 InnoDB: Using Linux native AIO 121115 0:19:47 InnoDB: Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 121115 0:19:47 InnoDB: Completed initialization of buffer pool 121115 0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool 121115 0:19:47 [ERROR] Plugin 'InnoDB' init function returned error. 121115 0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 121115 0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB 121115 0:19:47 [ERROR] Aborting Edit #1 total used free shared buffers cached Mem: 496 370 126 0 24 110 -/+ buffers/cache: 234 261 Swap: 1023 9 1014 Edit #2 Also, the largest table in my MySQL is 20MB, so the memory used should be pretty moderate. SELECT CONCAT(table_schema, '.', table_name), CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows, CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA, CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx, CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size, ROUND(index_length / data_length, 2) idxfrac FROM information_schema.TABLES ORDER BY data_length + index_length DESC LIMIT 10;
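    errno 12 in the mmap line is ENOMEM: with roughly 496 MB of RAM and little free, the box almost certainly ran out of memory — classically the kernel OOM killer takes mysqld down (worth confirming in dmesg or /var/log/messages), and on the automatic restart InnoDB cannot reallocate its 128 MB buffer pool. Shrinking the pool, or adding RAM/swap, usually breaks the cycle; an illustrative my.cnf adjustment:

        [mysqld]
        innodb_buffer_pool_size = 64M

    With only ~20 MB of table data, even a much smaller pool would hold the working set.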

    Read the article

  • sending mail using mutt + emacs

    - by laks
    How to send mail from emacs? I found this: There are two ways to send the message. C-c C-s (mail-send) sends the message and marks the mail buffer unmodified, but leaves that buffer selected so that you can modify the message (perhaps with new recipients) and send it again. C-c C-c (mail-send-and-exit) sends and then deletes the window or switches to another buffer. But both (ctrl+c ctrl+s) and (ctrl+c ctrl+c) are not working.

    Read the article

  • calling Graphics.Draw and new Bitmap from memory concurrently in threads takes a long time

    - by Abdul jalil
    Example 1: public partial class Form1 : Form { public Form1() { InitializeComponent(); pro = new Thread(new ThreadStart(Producer)); con = new Thread(new ThreadStart(Consumer)); } private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false); Queue<Bitmap> queue = new Queue<Bitmap>(); Thread pro; Thread con ; public void Producer() { MemoryStream[] ms = new MemoryStream[3]; for (int y = 0; y < 3; y++) { StreamReader reader = new StreamReader("image"+(y+1)+".JPG"); BinaryReader breader = new BinaryReader(reader.BaseStream); byte[] buffer=new byte[reader.BaseStream.Length]; breader.Read(buffer,0,buffer.Length); ms[y] = new MemoryStream(buffer); } while (true) { for (int x = 0; x < 3; x++) { Bitmap bmp = new Bitmap(ms[x]); queue.Enqueue(bmp); m_DataAvailableEvent.Set(); Thread.Sleep(6); } } } public void Consumer() { Graphics g= pictureBox1.CreateGraphics(); while (true) { m_DataAvailableEvent.WaitOne(); Bitmap bmp = queue.Dequeue(); if (bmp != null) { // Bitmap bmp = new Bitmap(ms); g.DrawImage(bmp,new Point(0,0)); bmp.Dispose(); } } } private void pictureBox1_Click(object sender, EventArgs e) { con.Start(); pro.Start(); } } When creating the bitmap and drawing to the picture box happen in separate threads, Bitmap bmp = new Bitmap(ms[x]) takes 45.591 milliseconds and g.DrawImage(bmp,new Point(0,0)) takes 41.430 milliseconds; when I make the bitmap from the MemoryStream and draw it to the picture box in one thread, Bitmap bmp = new Bitmap(ms[x]) takes 29.619 and g.DrawImage(bmp,new Point(0,0)) takes 35.540. The code below is Example 2. Why do the bitmap creation and the draw take more time in separate threads, and how can I reduce the time when processing in a separate thread? I am using ANTS Performance Profiler 4.3. Example 2: public Form1() { InitializeComponent(); pro = new Thread(new ThreadStart(Producer)); con = new Thread(new ThreadStart(Consumer)); } private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false); Queue<MemoryStream> queue = new Queue<MemoryStream>(); Thread pro; Thread con ; public void Producer() { MemoryStream[] ms = new MemoryStream[3]; for (int y = 0; y < 3; y++) { StreamReader reader = new StreamReader("image"+(y+1)+".JPG"); BinaryReader breader = new BinaryReader(reader.BaseStream); byte[] buffer=new byte[reader.BaseStream.Length]; breader.Read(buffer,0,buffer.Length); ms[y] = new MemoryStream(buffer); } while (true) { for (int x = 0; x < 3; x++) { // Bitmap bmp = new Bitmap(ms[x]); queue.Enqueue(ms[x]); m_DataAvailableEvent.Set(); Thread.Sleep(6); } } } public void Consumer() { Graphics g= pictureBox1.CreateGraphics(); while (true) { m_DataAvailableEvent.WaitOne(); //Bitmap bmp = queue.Dequeue(); MemoryStream ms= queue.Dequeue(); if (ms != null) { Bitmap bmp = new Bitmap(ms); g.DrawImage(bmp,new Point(0,0)); bmp.Dispose(); } } } private void pictureBox1_Click(object sender, EventArgs e) { con.Start(); pro.Start(); }

    Read the article

  • Help with Silverlight Sockets and Message delivery

    - by pixel3cs
    It has been 4 months since I stopped developing my Silverlight multiplayer chess game. The problem was a bug which I couldn't reproduce. Since I got some free time this week, I managed to discover the problem and I am now able to reproduce the bug. It seems that if I send 10 messages from the client, one after another, with no delay between them, just like in the example below // when I press Enter, the client will send 10 messages with no delay between them private void textBox_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter && textBox.Text.Length > 0) { for (int i = 0; i < 10; i++) { MessageBuilder mb = new MessageBuilder(); mb.Writer.Write((byte)GameCommands.NewChatMessageInTable); mb.Writer.Write(string.Format("{0}{2}: {1}", ClientVars.PlayerNickname, textBox.Text, i)); SendChatMessageEvent(mb.GetMessage()); //System.Threading.Thread.Sleep(100); } textBox.Text = string.Empty; } } // the method used by client to send a message to server public void SendData(Message message) { if (socket.Connected) { SocketAsyncEventArgs myMsg = new SocketAsyncEventArgs(); myMsg.RemoteEndPoint = socket.RemoteEndPoint; byte[] buffer = message.Buffer; myMsg.SetBuffer(buffer, 0, buffer.Length); socket.SendAsync(myMsg); } else { string err = "Server does not respond. You are disconnected."; socket.Close(); uiContext.Post(this.uiClient.ProcessOnErrorData, err); } } // the method used by server to receive data from client private void OnDataReceived(IAsyncResult async) { ClientSocketPacket client = async.AsyncState as ClientSocketPacket; int count = 0; try { if (client.Socket.Connected) count = client.Socket.EndReceive(async); // THE PROBLEM IS HERE // IF THE SERVER RECEIVED ALL MESSAGES SEPARATELY, ONE BY ONE, THE COUNT // WOULD ALWAYS BE 15, BUT BECAUSE THE SERVER RECEIVES 3 MESSAGES IN 1, THE COUNT // IS SOMETIMES 45 } catch { HandleException(client); } client.MessageStream.Write(client.Buffer, 0, count); Message message; while (client.MessageStream.Read(out message)) { message.Tag = client; ThreadPool.QueueUserWorkItem(new WaitCallback(this.processingThreadEvent.ServerGotData), message); totalReceivedBytes += message.Buffer.Length; } try { if (client.Socket.Connected) client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length, 0, new AsyncCallback(OnDataReceived), client); } catch { HandleException(client); } } then only 3 big messages are sent, and every big message contains 3 or 4 small messages. This is not the behavior I want. If I put a 100-millisecond delay between message deliveries, everything works fine, but in a real-world scenario users can send messages to the server even 1 millisecond apart. Are there any settings to make the client send only one message at a time? And even if I receive 3 messages in 1, are they full messages all the time (I don't want to receive 2.5 messages in one big message)? Because if they are, I can read them and handle this new situation.

    Read the article

  • TCP client in C and server in Java

    - by faldren
    I would like to make 2 applications communicate: a client in C which sends a message to the server over TCP, and a server in Java which receives it and sends an acknowledgement. Here is the client (the code is a thread): static void *tcp_client(void *p_data) { if (p_data != NULL) { char const *message = p_data; int sockfd, n; struct sockaddr_in serv_addr; struct hostent *server; char buffer[256]; sockfd = socket(AF_INET, SOCK_STREAM, 0); if (sockfd < 0) { error("ERROR opening socket"); } server = gethostbyname(ALARM_PC_IP); if (server == NULL) { fprintf(stderr,"ERROR, no such host\n"); exit(0); } bzero((char *) &serv_addr, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length); serv_addr.sin_port = htons(TCP_PORT); if (connect(sockfd,(struct sockaddr *) &serv_addr,sizeof(serv_addr)) < 0) { error("ERROR connecting"); } n = write(sockfd,message,strlen(message)); if (n < 0) { error("ERROR writing to socket"); } bzero(buffer,256); n = read(sockfd,buffer,255); if (n < 0) { error("ERROR reading from socket"); } printf("Message from the server : %s\n",buffer); close(sockfd); } return 0; } And the Java server: try { int port = 9015; ServerSocket server=new ServerSocket(port); System.out.println("Server binded at "+((server.getInetAddress()).getLocalHost()).getHostAddress()+":"+port); System.out.println("Run the Client"); while (true) { Socket socket=server.accept(); BufferedReader in= new BufferedReader(new InputStreamReader(socket.getInputStream())); System.out.println(in.readLine()); PrintStream out=new PrintStream(socket.getOutputStream()); out.print("Welcome by server\n"); out.flush(); out.close(); in.close(); System.out.println("finished"); } } catch(Exception err) { System.err.println("* err"+err); } With n = read(sockfd,buffer,255); the client waits for a response, and for the server the message never ends, so it doesn't send a response with PrintStream. If I remove these lines: bzero(buffer,256); n = read(sockfd,buffer,255); if (n < 0) { error("ERROR reading from socket"); } printf("Message from the server : %s\n",buffer); the server knows that the message is finished, but the client can't receive the response. How do I solve that? Thank you
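    readLine() on the Java side blocks until it sees a newline (or the stream ends), and the client never sends one — so the server waits for the end of a line while the client waits for the reply. Terminating the message should unblock both sides; a hedged adjustment to the client, assuming the message itself contains no newline:

        /* Append '\n' so the server's readLine() returns, then read the ack. */
        char line[512];
        snprintf(line, sizeof line, "%s\n", message);
        n = write(sockfd, line, strlen(line));
        if (n < 0)
            error("ERROR writing to socket");

    Alternatively, shutdown(sockfd, SHUT_WR) after the write signals end-of-stream to the server while still letting the client read the acknowledgement.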

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series   Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.   Part 1: Dynamic Memory Introduction   In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature “Dynamic Memory” to improve VM densities on Hyper-V hosts. Dynamic Memory increases the memory utilization in virtualized environments by enabling VM memory to be changed dynamically when the VM is running.   This brings up the question of how to utilize this feature with SQL Server VMs as SQL Server performance is very sensitive to the memory being used. In the next three posts we’ll discuss the internals of Dynamic Memory, SQL Server Memory Management and how to use Dynamic Memory with SQL Server VMs.   Memory Utilization Efficiency in Virtualized Environments   The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we’ve heard from customers:   ·         I assign 4 GB of memory to my VMs. I don’t know if all of it is being used by the applications but no one complains. ·         I take the minimum system requirements and add 50% more. ·         I go with the recommendations provided by my software vendor.   In reality correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.   How does Dynamic Memory help?   Dynamic Memory improves the memory utilization by removing the requirement to determine the memory need for an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and they can be assigned more memory dynamically based on the workload of applications running inside.   Overview of Dynamic Memory Concepts   ·         Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VMs by default.   ·         Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory. ·         Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM. 
·         Memory Buffer: Memory Buffer is the amount of memory assigned to the VMs in addition to their memory demand to satisfy immediate memory requirements and file cache needs.   Once Dynamic Memory is enabled for a VM, it will start with the “Startup Memory”. After the boot process Dynamic Memory will determine the “Memory Demand” of the VM. Based on this memory demand it will determine the amount of “Memory Buffer” that needs to be assigned to the VM. Dynamic Memory will assign the total of “Memory Demand” and “Memory Buffer” to the VM as long as this value is less than “Maximum Memory” and as long as physical memory is available on the host.   What happens when there is not enough physical memory available on the host?   Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than needed amount of memory to the VMs based on their importance. A concept known as “Memory Weight” is used to determine how much VMs should be penalized based on their needed amount of memory. “Memory Weight” is a configuration setting on the VM. It can be configured to be higher for the VMs with high performance requirements. Under high memory pressure on the host, the “Memory Weight” of the VMs are evaluated in a relative manner and the VMs with lower relative “Memory Weight” will be penalized more than the ones with higher “Memory Weight”.   Dynamic Memory Configuration   Based on these concepts “Startup Memory”, “Maximum Memory”, “Memory Buffer” and “Memory Weight” can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.     Dynamic Memory Monitoring    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:         ·         Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of “Memory Demand” and “Memory Buffer” assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM. ·         Memory Demand displays the current “Memory Demand” determined for the VM. ·         Memory Status displays the current memory status of the VM. This column can represent three values for a VM: o   OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs. o   Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs. o   Warning: In this condition the VM is assigned a lower memory than its Memory Demand. When VMs are running in this condition, it’s likely that they will exhibit performance problems due to internal paging happening in the VM.    So far so good! But how does it work with SQL Server?   SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: How do SQL Server and Dynamic Memory work together? To understand the full story, we’ll first need to understand how SQL Server Memory Management works. This will be covered in our second post in “SQL and Dynamic Memory” series. 
Meanwhile if you want to dive deeper into Dynamic Memory you can check the below posts from the Windows Virtualization Team Blog:   http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx   http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx   http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx   http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx   http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx   http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx   - Serdar Sutay   Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article
