Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes efficiently, I need to get rid of the numerous draw calls I have, and what has been suggested is that I create a "mesh" of my cubes. I already have them stored in a single vertex buffer, but the issue lies in my draw method, where I am still looping through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate terribly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles, drawing each cube individually. The goal here is a draw method that draws my chunk as a whole, with a single draw call over the shared vertex buffer, rather than drawing individual cubes as I've been doing.

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked frag and vert shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definition differs. struct Light { mediump vec3 _position; lowp vec4 _ambient; lowp vec4 _diffuse; lowp vec4 _specular; bool _isDirectional; mediump vec3 _attenuation; // constant, linear, and quadratic components }; uniform Light u_light; I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;" The exact error is: Uniform variable u_light type/precision does not match in vertex and fragment shader Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than filing a bug report)?
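
    One way to debug this on both devices is to enumerate the uniforms as the driver sees them after linking. ES 2.0 gives no way to query a uniform's precision directly, but comparing names and types across devices can show whether the struct is being treated the same way at all. A minimal sketch, assuming a linked `program` handle (struct members are reported as entries like "u_light._position"):

    ```cpp
    #include <cstdio>

    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
    for (GLint i = 0; i < count; ++i) {
        char name[128];
        GLsizei length = 0;
        GLint size = 0;
        GLenum type = 0;
        glGetActiveUniform(program, (GLuint)i, (GLsizei)sizeof(name),
                           &length, &size, &type, name);
        printf("uniform %d: %s (type 0x%04x, size %d)\n",
               i, name, (unsigned)type, size);
    }
    ```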

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for getting the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video author uses the negative position vector AO as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a normal vector to this edge, lying in the plane that the edge and AO form. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the negated position vector of the newest simplex point would always make a good direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (yet unfinished) implementation, I wanted to ask whether this method of forming the direction vector is usable or not.
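
    For reference, a sketch of the two direction rules being compared, with a minimal stand-in Vec3 type (this is not the video's code):

    ```cpp
    struct Vec3 { float x, y, z; };
    Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 operator-(const Vec3& a) { return { -a.x, -a.y, -a.z }; }
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // Same rule every time: search from the newest point A toward the origin.
    // Caveat: the support function can then return a point the simplex already
    // contains, so the loop can stall instead of making progress.
    Vec3 directionFromPoint(const Vec3& A) { return -A; }

    // Per-feature rule for an edge AB: perpendicular to AB, in the plane that
    // AB and AO span, pointing toward the origin's side of the edge.
    Vec3 directionFromEdge(const Vec3& A, const Vec3& B) {
        Vec3 AB = B - A;
        Vec3 AO = -A;
        return cross(cross(AB, AO), AB); // triple product (AB x AO) x AB
    }
    ```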

    Read the article

  • Having trouble understanding the NIF model file format?

    - by NoobScratcher
    I'm attempting to develop a 3rd-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but since I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, I'm at a loss what to do. I'm very good at C++ but not at these super-overcomplicated file formats; I'd much prefer .obj over the NIF file format. The specification is here -- http://niftools.sourceforge.net/doc/nif/index.html If someone could help me understand the file format in a natural and simple way, with the exact parsing needed to create the 3D model in the frustum and an explanation of how you figured that out, I would be happy to know. I use Cygwin, Notepad++, and Windows 7 (Win32).

    Read the article

  • OpenGL ES 2.0 Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to make this work in OpenGL 1 & 1.1, but I can't find one for 2.0. I would work it out by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do what I'm proposing you are constantly changing the VBOs, and that that is not good. Edit: Been doing some research myself. Came upon two threads: one on updating textures, and, referring to the one before, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs. But what I still don't get is whether I should create a sprite atlas/batch and make an FBO/load a texture for each frame, or whether I should load every frame into the buffer and change just the texture coordinates.
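
    For what it's worth, a common pattern that avoids touching the VBO or the texture entirely: keep one static quad and one atlas texture, and pick the frame with a per-draw uniform. A GLES2-style sketch (u_frame is a hypothetical vec4 uniform, xy = UV offset, zw = UV scale; the program must already be in use):

    ```cpp
    void selectFrame(GLuint program, int frame, int framesPerRow, int framesPerCol)
    {
        float uScale = 1.0f / framesPerRow;
        float vScale = 1.0f / framesPerCol;
        float uOff = (frame % framesPerRow) * uScale;
        float vOff = (frame / framesPerRow) * vScale;
        glUniform4f(glGetUniformLocation(program, "u_frame"),
                    uOff, vOff, uScale, vScale);
        // vertex shader side, for reference:
        //   v_uv = a_uv * u_frame.zw + u_frame.xy;
    }
    ```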

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (I promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in a development project is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead shader complexity, fill rate, and lighting complexity all come into play. What the content team want are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.

    Read the article

  • Why does this exported cube have too many vertices?

    - by Joewsh
    I'm trying to export md5mesh models. Just as a test I decided to export a simple cube (i.e. one with 8 corner vertices). When I opened the .md5mesh file it listed the following: numverts 24 numtris 12 numweights 24 Obviously the number of triangles makes sense: 6 faces * 2 to triangulate = 12. The model only has one bone, so again it makes sense that there is one weight for each vertex. The question is, though: why is the file listing 24 vertices? Is the problem the exporter, or is this normal for md5mesh files? Is it something that you have to rectify when you come to parsing the file in the engine? I don't want to be parsing or drawing duplicated vertices without reason. I'm guessing it's something to do with shading and normals. Is it a case of listing each vert 3 times, once for each facing normal? (The arithmetic would fit: each of the 8 corners is shared by 3 faces with different normals, and 8 * 3 = 24, the same count as 6 faces * 4 corners.)

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, like a tiger mesh. At first it is located at position A; then if I press the left key, it moves to position B. The thing is, if I press the left key one more time, it should move from position B to position C. That means the amount I want to move the mesh is based on the current position instead of the position at first render. I could do it if I had an array of vertices, since then I would just update the vertex buffer, but a mesh loaded from a file does not expose an array of vertices, so how do I do it? Can anybody help me, please?
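
    A sketch of the usual approach, with hypothetical vector/matrix types: never touch the mesh's vertices at all; accumulate a position across key presses and rebuild a world (translation) matrix from it every frame:

    ```cpp
    Vec3 position(0.0f, 0.0f, 0.0f); // persists across frames
    const float step = 1.0f;         // units moved per key press

    void onKeyPressed(Key key)
    {
        if (key == Key::Left)  position.x -= step; // A -> B -> C -> ...
        if (key == Key::Right) position.x += step;
    }

    void render(const Mesh& tiger)
    {
        Mat4 world = Mat4::translation(position); // built from the running total
        setWorldMatrix(world);                    // e.g. a shader constant
        drawMesh(tiger);                          // the file's vertex data is untouched
    }
    ```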

    Read the article

  • How do I connect the seams between my terrain chunks?

    - by gnomgrol
    I'm using C++ and D3D11 and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation down and have already split it up into chunks. But when I'm rendering them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I read a lot about LOD (level of detail) and GMM (geometry mipmapping), but I can't really implement the theory I read. At the moment, it looks like this: I could really use some help; everything is welcome. If you have some good tutorials on any of this, please share them.
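
    One common cause of hairline gaps like these is that adjacent chunks sample separate heightmaps, so their border vertices end up with slightly different heights. A hedged sketch of the usual fix: sample every chunk from one shared heightmap, and let each chunk extend one extra row/column so neighbors literally share their edge vertices (globalHeight is a hypothetical sampler):

    ```cpp
    #include <vector>

    struct Vertex { float x, y, z; };
    const int CHUNK = 64; // quads per side; vertices span CHUNK + 1 samples

    float globalHeight(int gx, int gz); // reads the ONE shared heightmap

    void buildChunk(int chunkX, int chunkZ, std::vector<Vertex>& verts)
    {
        for (int z = 0; z <= CHUNK; ++z) {      // <= : include the neighbor's
            for (int x = 0; x <= CHUNK; ++x) {  // first row/column
                int gx = chunkX * CHUNK + x;    // global coordinates, so both
                int gz = chunkZ * CHUNK + z;    // neighbors sample identically
                verts.push_back({ (float)gx, globalHeight(gx, gz), (float)gz });
            }
        }
    }
    ```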

    Read the article

  • Rotate an object given only by its points?

    - by d33tah
    I was recently writing a simple 3D maze FPP game. Once I was done fiddling with planes in OpenGL, I wanted to add support for importing Blender objects. The approach I used was triangulation of the object, then using Three.js to export the points to plaintext and then parsing the resulting JSON in my app. The example file can be seen here: https://github.com/d33tah/tinyfpp/blob/master/Data/Models/cross.txt The numbers represent x,y,z,u,v of a single vertex, and three of them combined make a triangle. Then I rendered such an object triangle-by-triangle and played with it. I can move it back and forth and sideways, but I still have no idea how to rotate it about some axis. Let's say I'd like to rotate all the points by five degrees to the left; what would code doing that look like?
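
    A minimal sketch, assuming "to the left" means a rotation about the vertical (Y) axis with a right-handed convention; apply it to the x and z of every vertex (u and v stay as they are):

    ```cpp
    #include <cmath>

    // Rotate the point (x, y, z) about the Y axis by `degrees`; y is unchanged.
    void rotateY(float& x, float& z, float degrees)
    {
        const float r = degrees * 3.14159265f / 180.0f;
        const float c = std::cos(r);
        const float s = std::sin(r);
        const float nx =  x * c + z * s;
        const float nz = -x * s + z * c;
        x = nx;
        z = nz;
    }

    // e.g. for every vertex v in the model: rotateY(v.x, v.z, 5.0f);
    ```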

    Read the article

  • does glBindAttribLocation silently ignore names not found in a shader?

    - by rwols
    Does glBindAttribLocation silently ignore names that are not found? For example, in a shader: // Some vertex shader in vec3 position; in vec3 normal; // ... And in some setup code: // While setting up shader GLuint program = glCreateProgram(); glBindAttribLocation(program, 0, "position"); glBindAttribLocation(program, 1, "normal"); glBindAttribLocation(program, 2, "color"); // What about this one? glLinkProgram(program);
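
    Per the OpenGL spec, yes: it is permissible to bind an attribute index to a name that is never used in a vertex shader, and such bindings simply have no effect at link time (glBindAttribLocation itself only raises errors for things like an out-of-range index or a reserved "gl_" name). A quick way to confirm the behavior on a given driver is to query the locations after linking:

    ```cpp
    glLinkProgram(program);

    GLint pos    = glGetAttribLocation(program, "position"); // expect 0
    GLint normal = glGetAttribLocation(program, "normal");   // expect 1
    GLint color  = glGetAttribLocation(program, "color");    // expect -1 (not active)
    ```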

    Read the article

  • Reading in a 5000 line text file on the iPhone

    - by howsyourface
    Gday, I am trying to create a tiled map for my game, i have had this previously working using other xml methods but i had memory leaks and all sorts of errors. However i had a map load time of about 2.5 - 3 seconds. So i rewrote all of the code using NSMutableStrings and NSStrings. After my best attempt at optomizing it i had a map load time of 10 - 11 seconds, which is far too slow. So i have now rewritten the code using char* arrays, only to now have a load time of 18 seconds -_-. Here is the latest code, i don't know much c so i could have easily botched the whole thing up. FILE* file = fopen(a, "r"); fseek(file, 0L, SEEK_END); length = ftell(file); fseek(file,0L, SEEK_SET); char fileText[length +1]; char buffer[1024];// = malloc(1024); while(fgets(buffer, 1024, file) != NULL) { strncat(fileText, buffer, strlen(buffer)); } fclose(file); [self parseMapFile:fileText]; - (void)parseMapFile:(char*)tiledXML { currentLayerID = 0; currentTileSetID = 0; tileX = 0; tileY = 0; int tmpGid; NSString* tmpName; int tmpTileWidth; int tmpTileHeight; int tilesetCounter = 0; NSString* tmpLayerName; int tmpLayerHeight; int tmpLayerWidth; int layerCounter = 0; tileX = 0; tileY = 0; int tmpFirstGid = 0; int x; int index; char* r; int counter = 0; while ((x = [self findSubstring:tiledXML substring:"\n"]) != 0) { counter ++; char result[x + 1]; r = &result[0]; [self substringIndex:tiledXML index:x newArray:result]; tiledXML += x+2; index = 0; if (counter == 1) { continue; } else if (counter == 2) { char result1[5]; index = [self getStringBetweenStrings:r substring1:"th=\"" substring2:"\"" newArray:result1]; if (r != 0); mapWidth = atoi(result1); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"ht=\"" substring2:"\"" newArray:result1]; if (r != 0); mapHeight = atoi(result1); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"th=\"" substring2:"\"" newArray:result1]; if (r != 0); tileWidth = atoi(result1); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"ht=\"" substring2:"\"" newArray:result1]; if (r != 0); tileHeight = atoi(result1); continue; } char result2[50]; char result3[3]; if ((index = [self getStringBetweenStrings:r substring1:" gid=\"" substring2:"\"" newArray:result3]) != 0) { tmpGid = atoi(result3); free(result2); if(tmpGid == 0) { [currentLayer addTileAtX:tileX y:tileY tileSetID:-1 tileID:0 globalID:0]; } else { [currentLayer addTileAtX:tileX y:tileY tileSetID:[currentTileSet tileSetID] tileID:tmpGid - [currentTileSet firstGID] globalID:tmpGid]; } tileX ++; if (tileX > [currentLayer layerWidth]-1) { tileY ++; tileX = 0; } } else if ((index = [self getStringBetweenStrings:r substring1:"tgid=\"" substring2:"\"" newArray:result2]) != 0) { tmpFirstGid = atoi(result2); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"me=\"" substring2:"\"" newArray:result2]; if (r != 0); tmpName = [NSString stringWithUTF8String:result2]; r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"th=\"" substring2:"\"" newArray:result2]; if (r != 0); tmpTileWidth = atoi(result2); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"ht=\"" substring2:"\"" newArray:result2]; if (r != 0); tmpTileHeight = atoi(result2); } else if ((index = [self getStringBetweenStrings:r substring1:"rce=\"" substring2:"\"" newArray:result2]) != 0) { currentTileSet = [[TileSet alloc] initWithImageNamed:[NSString stringWithUTF8String:result2] name:tmpName tileSetID:tilesetCounter 
firstGID:tmpFirstGid tileWidth:tmpTileWidth tileHeight:tmpTileHeight spacing:0]; [tileSets addObject:currentTileSet]; [currentTileSet release]; tilesetCounter ++; } else if ((index = [self getStringBetweenStrings:r substring1:"r name=\"" substring2:"\"" newArray:result2]) != 0) { tileX = 0; tileY = 0; tmpLayerName = [NSString stringWithUTF8String:result2]; r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"th=\"" substring2:"\"" newArray:result2]; if (r != 0); tmpLayerWidth = atoi(result2); r += index +1; index = 0; index = [self getStringBetweenStrings:r substring1:"ht=\"" substring2:"\"" newArray:result2]; if (r != 0); tmpLayerHeight = atoi(result2); currentLayer = [[Layer alloc] initWithName:tmpLayerName layerID:layerCounter layerWidth:tmpLayerWidth layerHeight:tmpLayerHeight]; [layers addObject:currentLayer]; [currentLayer release]; layerCounter ++; } } } -(void)substringIndex:(char*)c index:(int)x newArray:(char*)result { result[0] = 0; for (int i = 0; i < strlen(c); i++) { result[i] = c[i]; if (i == x) { result[i+1] = '\0'; break; } } } -(int)findSubstring:(char*)c substring:(char*)s { int sCounter = 0; int index = 0; int d; for (int i = 0; i < strlen(c); i ++) { if (i > 500)//max line size break; if (c[i] == s[sCounter]) { d = strlen(s); sCounter ++; if (d > sCounter) { } else { index = i - (d); break; } } else sCounter = 0; } return index; } -(int)getStringBetweenStrings:(char*)c substring1:(char*)s substring2:(char*)s2 newArray:(char*)result { int sCounter = 0; int sCounter2 = 0; int index = 0; int index2 = 0; int d; for (int i = 0; i < strlen(c); i ++) { if (index != 0) { if (c[i] == s2[sCounter2]) { d = strlen(s2); sCounter2 ++; if (d > sCounter2) { } else { index2 = i - (d); break; } } else sCounter2 = 0; } else { if (c[i] == s[sCounter]) { d = strlen(s); sCounter ++; if (d > sCounter) { } else { index = i; } } else sCounter = 0; } } if (index != 0 && index2 != 0) [self substringIndex:(c + index+1) index:index2-index-1 newArray:result]; return index; } (I know it's a lot of code to be putting in here) I thought the by using basic char arrays i could drastically increase the performance, at least over the initial node based code that i was replacing. Thanks for all your efforts.
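
    A note on the slowdown, offered as a guess from reading rather than profiling: the fgets/strncat loop at the top has two classic problems. fileText is never initialized, so the first strncat appends after whatever garbage happens to be in the array (undefined behavior), and strncat rescans the whole growing string on every call, which makes the loop O(n^2) in the file size. A minimal sketch of reading the file in one call instead (assuming it fits in memory):

    ```cpp
    FILE* file = fopen(a, "r");
    fseek(file, 0L, SEEK_END);
    long length = ftell(file);
    fseek(file, 0L, SEEK_SET);

    char* fileText = (char*)malloc(length + 1);
    size_t got = fread(fileText, 1, (size_t)length, file); // one read, no strncat
    fileText[got] = '\0';
    fclose(file);

    // hand fileText to the parser, then free(fileText) once parsing is done
    ```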

    Read the article

  • c# Unable to open file for reading

    - by Maks
    I'm writing a program that uses FileSystemWatcher to monitor changes to a given directory, and when it recieves OnCreated or OnChanged event, it copies those created/changed files to a specified directorie(s). At first I had problems with the fact that OnChanged/OnCreated events can be sent twice (not acceptable in case it needed to process 500MB file) but I made a way around this and with what I'm REALLY STUCKED with is getting the following IOException: The process cannot access the file 'C:\Where are Photos\bookmarks (11).html' because it is being used by another process. Thus, preventing the program from copying all the files it should. So as I mentioned, when user uses this program he/she specifes monitored directory, when user copies/creates/changes file in that directory, program should get OnCreated/OnChanged event and then copy that file to few other directories. Above error happens in all casess, if user copies few files that needs to owerwrite other ones in folder being monitored or when copying bulk of several files or even sometimes when copying one file in a monitored directory. Whole program is quite big so I'm sending the most important parts. OnCreated: private void OnCreated(object source, FileSystemEventArgs e) { AddLogEntry(e.FullPath, "created", ""); // Update last access data if it's file so the same file doesn't // get processed twice because of sending another event. if (fileType(e.FullPath) == 2) { lastPath = e.FullPath; lastTime = DateTime.Now; } // serves no purpose now, it will be remove soon string fileName = GetFileName(e.FullPath); // copies file from source to few other directories Copy(e.FullPath, fileName); Console.WriteLine("OnCreated: " + e.FullPath); } OnChanged: private void OnChanged(object source, FileSystemEventArgs e) { // is it directory if (fileType(e.FullPath) == 1) return; // don't mind directory changes itself // Only if enough time has passed or if it's some other file // because two events can be generated int timeDiff = ((TimeSpan)(DateTime.Now - lastTime)).Seconds; if ((timeDiff < minSecsDiff) && (e.FullPath.Equals(lastPath))) { Console.WriteLine("-- skipped -- {0}, timediff: {1}", e.FullPath, timeDiff); return; } // Update last access data for above to work lastPath = e.FullPath; lastTime = DateTime.Now; // Only if size is changed, the rest will handle other handlers if (e.ChangeType == WatcherChangeTypes.Changed) { AddLogEntry(e.FullPath, "changed", ""); string fileName = GetFileName(e.FullPath); Copy(e.FullPath, fileName); Console.WriteLine("OnChanged: " + e.FullPath); } } fileType: private int fileType(string path) { if (Directory.Exists(path)) return 1; // directory else if (File.Exists(path)) return 2; // file else return 0; } Copy: private void Copy(string srcPath, string fileName) { foreach (string dstDirectoy in paths) { string eventType = "copied"; string error = "noerror"; string path = ""; string dirPortion = ""; // in case directory needs to be made if (srcPath.Length > fsw.Path.Length) { path = srcPath.Substring(fsw.Path.Length, srcPath.Length - fsw.Path.Length); int pos = path.LastIndexOf('\\'); if (pos != -1) dirPortion = path.Substring(0, pos); } if (fileType(srcPath) == 1) { try { Directory.CreateDirectory(dstDirectoy + path); //Directory.CreateDirectory(dstDirectoy + fileName); eventType = "created"; } catch (IOException e) { eventType = "error"; error = e.Message; } } else { try { if (!overwriteFile && File.Exists(dstDirectoy + path)) continue; // create new dir anyway even if it exists just to be sure 
Directory.CreateDirectory(dstDirectoy + dirPortion); // copy file from where event occured to all specified directories using (FileStream fsin = new FileStream(srcPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (FileStream fsout = new FileStream(dstDirectoy + path, FileMode.Create, FileAccess.Write)) { byte[] buffer = new byte[32768]; int bytesRead = -1; while ((bytesRead = fsin.Read(buffer, 0, buffer.Length)) > 0) fsout.Write(buffer, 0, bytesRead); } } } catch (Exception e) { if ((e is IOException) && (overwriteFile == false)) { eventType = "skipped"; } else { eventType = "error"; error = e.Message; // attempt to find and kill the process locking the file. // failed, miserably System.Diagnostics.Process tool = new System.Diagnostics.Process(); tool.StartInfo.FileName = "handle.exe"; tool.StartInfo.Arguments = "\"" + srcPath + "\""; tool.StartInfo.UseShellExecute = false; tool.StartInfo.RedirectStandardOutput = true; tool.Start(); tool.WaitForExit(); string outputTool = tool.StandardOutput.ReadToEnd(); string matchPattern = @"(?<=\s+pid:\s+)\b(\d+)\b(?=\s+)"; foreach (Match match in Regex.Matches(outputTool, matchPattern)) { System.Diagnostics.Process.GetProcessById(int.Parse(match.Value)).Kill(); } Console.WriteLine("ERROR: {0}: [ {1} ]", e.Message, srcPath); } } } AddLogEntry(dstDirectoy + path, eventType, error); } } I checked everywhere in my program and whenever I use some file I use it in using block so even writing event to log (class for what I ommited since there is probably too much code already in post) wont lock the file, that is it shouldn't since all operations are using using statement block. I simply have no clue who's locking the file if not my program "copy" process from user through Windows or something else. Right now I have two possible "solutions" (I can't say they are clean solutions since they are hacks and as such not desireable). Since probably the problem is with fileType method (what else could lock the file?) I tried changing it to this, to simulate "blocking-until-ready-to-open" operation: fileType: private int fileType(string path) { FileStream fs = null; int ret = 0; bool run = true; if (Directory.Exists(path)) ret = 1; else { while (run) { try { fs = new FileStream(path, FileMode.Open); ret = 2; run = false; } catch (IOException) { } finally { if (fs != null) { fs.Close(); fs.Dispose(); } } } } return ret; } This is working as much as I could tell (test), but... it's hack, not to mention other deficients. The other "solution" I could try (I didn't test it yet) is using GC.Collect() somewhere at the end of fileType() method. Maybe even worse "solution" than previous one. Can someone pleas tell me, what on earth is locking the file, preventing it from opening and how can I fix that? What am I missing to see? Thanks in advance.

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2vCPUs 4G RAM 160GB Disk Space Network 400Mb/s System Image: Ubuntu 12.04 LTS I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually, the server has a loading time of 4-5 seconds. Recently, this has dropped to over 30 seconds and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running top command and sorting them returns the following: top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91 Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2 32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2 32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2 32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2 32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2 32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2 32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2 32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2 32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2 30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2 32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2 32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2 32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2 32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2 32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2 32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2 32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2 32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2 32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2 32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2 32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2 32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2 32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2 32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2 32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2 32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2 31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld 32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2 32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2 32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2 32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2 32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2 32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2 32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2 31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2 32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2 32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2 1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init Moreover, running free -m returns the following: total used free shared buffers cached Mem: 4003 3919 83 0 118 901 -/+ buffers/cache: 2899 1103 Swap: 0 0 0 To investigate this further, I have installed apache buddy, it recommeneded that I need to reduce the maxclient connections. Which I did. I also installed MysqlTuner and it suggests that I need to set my innodb_buffer_pool_size to = 3.0G. However, I cannot do that, since the whole memory is 4G. 
Here is the output from apache buddy: ### GENERAL REPORT ### Settings considered for this report: Your server's physical RAM: 4003MB Apache's MaxClients directive: 40 Apache MPM Model: prefork Largest Apache process (by memory): 73.77MB [ OK ] Your MaxClients setting is within an acceptable range. Max potential memory usage: 2950.8 MB Percentage of RAM allocated to Apache 73.72 % And this is the output of MySQLTuner: -------- Performance Metrics ------------------------------------------------- [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M) [--] Reads / Writes: 45% / 55% [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 2.5G (64% of installed RAM) [OK] Slow queries: 0% (0/675K) [OK] Highest usage of available connections: 26% (40/151) [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads) [OK] Query cache efficiency: 92.5% (500K cached / 541K selects) [!!] Query cache prunes per day: 302886 [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts) [!!] Joins performed without indexes: 12135 [OK] Temporary tables created on disk: 25% (8K on disk / 32K total) [OK] Thread cache hit rate: 90% (1K created / 12K connections) [!!] Table cache hit rate: 17% (400 open / 2K opened) [OK] Open file limit used: 12% (123/1K) [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks) [!!] InnoDB buffer pool / data size: 2.0G/3.5G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: query_cache_size ( 64M) join_buffer_size ( 128.0K, or always use indexes with joins) table_cache ( 400) innodb_buffer_pool_size (= 3G) Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
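
    For what it's worth, the numbers above suggest the 4 GB is simply oversubscribed: the tuner's innodb_buffer_pool_size = 3G cannot fit alongside Apache's potential ~2.9 GB. A hedged starting point (illustrative values, not tuned for this exact shop) would be to keep MaxClients modest and split the RAM explicitly in my.cnf:

    ```
    # /etc/mysql/my.cnf -- illustrative values for a 4 GB host shared with Apache
    innodb_buffer_pool_size = 2G   # as large as fits after Apache's share
    query_cache_size        = 64M  # per the MySQLTuner suggestion
    join_buffer_size        = 256K # or add indexes to the offending joins
    table_open_cache        = 800  # called table_cache on older MySQL versions
    ```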

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations). How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling? When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up. How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning? If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above). This raises another question: can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into. What Memory Management changes are there in SQL Server “Denali”? In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration). In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • boost::asio::async_read_until problem

    - by user368831
    I'm trying to modify the echo server example from boost asio and I'm running into problem when I try to use boost::asio::async_read_until. Here's the code: #include <cstdlib> #include <iostream> #include <boost/bind.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; class session { public: session(boost::asio::io_service& io_service) : socket_(io_service) { } tcp::socket& socket() { return socket_; } void start() { std::cout<<"starting"<<std::endl; boost::asio::async_read_until(socket_, boost::asio::buffer(data_, max_length), ' ', boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void handle_read(const boost::system::error_code& error, size_t bytes_transferred) { std::cout<<"handling read"<<std::endl; if (!error) { boost::asio::async_write(socket_, boost::asio::buffer(data_, bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error)); } else { delete this; } } void handle_write(const boost::system::error_code& error) { if (!error) { /* socket_.async_read_some(boost::asio::buffer(data_, max_length), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); */ } else { delete this; } } private: tcp::socket socket_; enum { max_length = 1024 }; char data_[max_length]; }; class server { public: server(boost::asio::io_service& io_service, short port) : io_service_(io_service), acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) { session* new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } void handle_accept(session* new_session, const boost::system::error_code& error) { if (!error) { new_session->start(); new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } else { delete new_session; } } private: boost::asio::io_service& io_service_; tcp::acceptor acceptor_; }; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: async_tcp_echo_server <port>\n"; return 1; } boost::asio::io_service io_service; using namespace std; // For atoi. server s(io_service, atoi(argv[1])); io_service.run(); } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } The problem is when I try to compile I get this weird error: server.cpp: In member function ‘void session::start()’: server.cpp:27: error: no matching function for call to ‘async_read_until(boost::asio::basic_stream_socket &, boost::asio::mutable_buffers_1, char, boost::_bi::bind_t, boost::_bi::list3, boost::arg<1 ()(), boost::arg<2 ()() )’ Can someone please explain what's going on? From what I can tell the arguments to async_read_until are correct. Thanks!
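
    In case it helps with the error itself: the async_read_until overloads in this version of Boost take a boost::asio::basic_streambuf, not a raw buffer, which would explain the "no matching function" message. A hedged sketch of the minimal change (needs <istream> and <string>):

    ```cpp
    // In class session: use a streambuf member instead of the char array
    // for the delimited read.
    boost::asio::streambuf buf_;

    void start()
    {
        boost::asio::async_read_until(socket_, buf_, ' ',
            boost::bind(&session::handle_read, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
    {
        if (!error)
        {
            std::istream is(&buf_);   // drain the data that was committed
            std::string token;
            std::getline(is, token, ' ');
            // ... echo `token` back with async_write as before ...
        }
        else
        {
            delete this;
        }
    }
    ```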

    Read the article

  • Optimizing Jaro-Winkler algorithm

    - by Pentium10
    I have this code for Jaro-Winkler algorithm taken from this website. I need to run 150,000 times to get distance between differences. It takes a long time, as I run on an Android mobile device. Can it be optimized more? public class Jaro { /** * gets the similarity of the two strings using Jaro distance. * * @param string1 the first input string * @param string2 the second input string * @return a value between 0-1 of the similarity */ public float getSimilarity(final String string1, final String string2) { //get half the length of the string rounded up - (this is the distance used for acceptable transpositions) final int halflen = ((Math.min(string1.length(), string2.length())) / 2) + ((Math.min(string1.length(), string2.length())) % 2); //get common characters final StringBuffer common1 = getCommonCharacters(string1, string2, halflen); final StringBuffer common2 = getCommonCharacters(string2, string1, halflen); //check for zero in common if (common1.length() == 0 || common2.length() == 0) { return 0.0f; } //check for same length common strings returning 0.0f is not the same if (common1.length() != common2.length()) { return 0.0f; } //get the number of transpositions int transpositions = 0; int n=common1.length(); for (int i = 0; i < n; i++) { if (common1.charAt(i) != common2.charAt(i)) transpositions++; } transpositions /= 2.0f; //calculate jaro metric return (common1.length() / ((float) string1.length()) + common2.length() / ((float) string2.length()) + (common1.length() - transpositions) / ((float) common1.length())) / 3.0f; } /** * returns a string buffer of characters from string1 within string2 if they are of a given * distance seperation from the position in string1. * * @param string1 * @param string2 * @param distanceSep * @return a string buffer of characters from string1 within string2 if they are of a given * distance seperation from the position in string1 */ private static StringBuffer getCommonCharacters(final String string1, final String string2, final int distanceSep) { //create a return buffer of characters final StringBuffer returnCommons = new StringBuffer(); //create a copy of string2 for processing final StringBuffer copy = new StringBuffer(string2); //iterate over string1 int n=string1.length(); int m=string2.length(); for (int i = 0; i < n; i++) { final char ch = string1.charAt(i); //set boolean for quick loop exit if found boolean foundIt = false; //compare char with range of characters to either side for (int j = Math.max(0, i - distanceSep); !foundIt && j < Math.min(i + distanceSep, m - 1); j++) { //check if found if (copy.charAt(j) == ch) { foundIt = true; //append character found returnCommons.append(ch); //alter copied string2 for processing copy.setCharAt(j, (char)0); } } } return returnCommons; } } I mention that in the whole process I make just instance of the script, so only once jaro= new Jaro(); If you are going to test and need examples so not break the script, you will find it here, in another thread for python optimization.

    Read the article

  • Is Multicast broken for Android 2.0.1 (currently on the DROID) or am I missing something?

    - by Gubatron
    This code works perfectly in Ubuntu, in Windows and MacOSX, it also works fine with a Nexus-One currently running firmware 2.1.1. I start sending and listening multicast datagrams, and all the computers and the Nexus-One will see each other perfectly. Then I run the same code on a Droid (Firmware 2.0.1), and everybody will get the packets sent by the Droid, but the droid will listen only to it's own packets. This is the run() method of a thread that's constantly listening on a Multicast group for incoming packets sent to that group. I'm running my tests on a local network where I have multicast support enabled in the router. My goal is to have devices meet each other as they come on line by broadcasting packages to a multicast group. public void run() { byte[] buffer = new byte[65535]; DatagramPacket dp = new DatagramPacket(buffer, buffer.length); try{ MulticastSocket ms = new MulticastSocket(_port); ms.setNetworkInterface(_ni); //non loopback network interface passed ms.joinGroup(_ia); //the multicast address, currently 224.0.1.16 Log.v(TAG,"Joined Group " + _ia); while (true) { ms.receive(dp); String s = new String(dp.getData(),0,dp.getLength()); Log.v(TAG,"Received Package on "+ _ni.getName() +": " + s); Message m = new Message(); Bundle b = new Bundle(); b.putString("event", "Listener ("+_ni.getName()+"): \"" + s + "\""); m.setData(b); dispatchMessage(m); //send to ui thread } } catch (SocketException se) { System.err.println(se); } catch (IOException ie) { System.err.println(ie); } } Over here, is the code that sends the Multicast Datagram out of every valid network interface available (that's not the loopback interface). public void sendPing() { MulticastSocket ms = null; try { ms = new MulticastSocket(_port); ms.setTimeToLive(TTL_GLOBAL); List<NetworkInterface> interfaces = getMulticastNonLoopbackNetworkInterfaces(); for (NetworkInterface iface : interfaces) { //skip loopback if (iface.getName().equals("lo")) continue; ms.setNetworkInterface(iface); _buffer = ("FW-"+ _name +" PING ("+iface.getName()+":"+iface.getInetAddresses().nextElement()+")").getBytes(); DatagramPacket dp = new DatagramPacket(_buffer, _buffer.length,_ia,_port); ms.send(dp); Log.v(TAG,"Announcer: Sent packet - " + new String(_buffer) + " from " + iface.getDisplayName()); } } catch (IOException e) { e.printStackTrace(); } catch (Exception e2) { e2.printStackTrace(); } } Update (April 2nd 2010) I found a way to have the Droid's network interface to communicate using Multicast! _wifiMulticastLock = ((WifiManager) context.getSystemService(Context.WIFI_SERVICE)).createMulticastLock("multicastLockNameHere"); _wifiMulticastLock.acquire(); Then when you're done... if (_wifiMulticastLock != null && _wifiMulticastLock.isHeld()) _wifiMulticastLock.release(); After I did this, the Droid started sending and receiving UDP Datagrams on a Multicast group. gubatron

    Read the article

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play nsf files (NES console music) in FMOD. It didn't get any results, but since then I made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right, and making sure all my paremeters are correct. Here are the facts so far: The nsf dll is dealing with shorts, so the data is PCM16. The sample nsf I'm using has a playback rate of 60 Hz. Just for playing around now, I'm using a frequency of 48000. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60hz = 800. This means it will render 800 shorts worth of buffer for every simulated NES frame. I've so far got my C# code to play the nsf, at the correct pitch and tempo, but it's very grainy / fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code: uint channels = 1, frequency = 48000; FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL); FMOD.Sound sound = new FMOD.Sound(); FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO(); ex.cbsize = Marshal.SizeOf(ex); ex.fileoffset = 0; ex.format = FMOD.SOUND_FORMAT.PCM16; // does this even matter? It doesn't change my results as long as it's long enough for one update ex.length = frequency; ex.numchannels = (int)channels; ex.defaultfrequency = (int)frequency; ex.pcmreadcallback = pcmreadcallback; ex.dlsname = null; // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now ex.decodebuffersize = 800; // from the dll load_nsf_file("file.nsf", 8, (int)frequency); // 8 is the track number to play var result = system.createSound( (string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound); channel = new FMOD.Channel(); result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel); private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen) { // from the dll process_buffer(data, (int)800); // if I use datalen, it usually crashes (I can't get datalen to = 800 safely) return FMOD.RESULT.OK; } So here are some of my questions: What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from. Is datalen in the callback referring to number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect fmod is talking about shorts as well because I told it PCM16. Maybe my playback quality is bad for some totally different reason. If so I have no idea where to begin solving that. Any ideas there?
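
    On the first two questions, a hedged reading of those numbers: FMOD reports datalen to the PCM read callback in bytes, while decodebuffersize is specified in PCM samples, so 800 samples of PCM16 mono per update = 1600 bytes (a stereo path would double that again, which may be where the factor of 4 comes from). If process_buffer takes a count of shorts, deriving it from datalen instead of hard-coding 800 would look roughly like this (sketched with the C API signature; the C# callback can do the same arithmetic with datalen / 2):

    ```cpp
    // datalen is in BYTES; PCM16 means two bytes per sample.
    FMOD_RESULT F_CALLBACK pcmreadcallback(FMOD_SOUND* sound, void* data,
                                           unsigned int datalen)
    {
        process_buffer((short*)data, (int)(datalen / sizeof(short)));
        return FMOD_OK;
    }
    ```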

    Read the article

  • Does boost::asio make excessive small heap allocations, or am I wrong?

    - by Poni
    #include <cstdlib> #include <iostream> #include <boost/bind.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; class session { public: session(boost::asio::io_service& io_service) : socket_(io_service) { } tcp::socket& socket() { return socket_; } void start() { socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void handle_read(const boost::system::error_code& error, size_t bytes_transferred) { if (!error) { data_[bytes_transferred] = '\0'; if(NULL != strstr(data_, "quit")) { this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both); this->socket().close(); // how to make this dispatch "handle_read()" with a "disconnected" flag? } else { boost::asio::async_write(socket_, boost::asio::buffer(data_, bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error)); socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } } else { delete this; } } void handle_write(const boost::system::error_code& error) { if (!error) { // } else { delete this; } } private: tcp::socket socket_; enum { max_length = 1024 }; char data_[max_length]; }; class server { public: server(boost::asio::io_service& io_service, short port) : io_service_(io_service), acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) { session* new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } void handle_accept(session* new_session, const boost::system::error_code& error) { if (!error) { new_session->start(); new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } else { delete new_session; } } private: boost::asio::io_service& io_service_; tcp::acceptor acceptor_; }; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: async_tcp_echo_server <port>\n"; return 1; } boost::asio::io_service io_service; using namespace std; // For atoi. server s(io_service, atoi(argv[1])); io_service.run(); } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } While experimenting with boost::asio I've noticed that within the calls to async_write()/async_read_some() there is a usage of the C++ "new" keyword. Also, when stressing this echo server with a client (1 connection) that sends for example 100,000 times some data, the memory usage of this program is getting higher and higher. What's going on? Will it allocate memory for every call? Or am I wrong? Asking because it doesn't seem right that a server app will allocate, anything. Can I handle it, say with a memory pool? Another side-question: See the "this-socket().close();" ? I want it, as the comment right to it says, to dispatch that same function one last time, with a disconnection error. Need that to do some clean-up. How do I do that? Thank you all gurus (:

    Read the article

  • Getting EOFException while trying to read from SSLSocket

    - by Isac
    Hi, I am developing a SSL client that will do a simple request to a SSL server and wait for the response. The SSL handshake and the writing goes OK but I can't READ data from the socket. I turned on the debug of java.net.ssl and got the following: [..] main, READ: TLSv1 Change Cipher Spec, length = 1 [Raw read]: length = 5 0000: 16 03 01 00 20 .... [Raw read]: length = 32 [..] main, READ: TLSv1 Handshake, length = 32 Padded plaintext after DECRYPTION: len = 32 [..] * Finished verify_data: { 29, 1, 139, 226, 25, 1, 96, 254, 176, 51, 206, 35 } %% Didn't cache non-resumable client session: [Session-1, SSL_RSA_WITH_RC4_128_MD5] [read] MD5 and SHA1 hashes: len = 16 0000: 14 00 00 0C 1D 01 8B E2 19 01 60 FE B0 33 CE 23 ..........`..3.# Padded plaintext before ENCRYPTION: len = 70 [..] a.j.y. main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] Padded plaintext before ENCRYPTION: len = 70 [..] main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] main, received EOFException: ignored main, called closeInternal(false) main, SEND TLSv1 ALERT: warning, description = close_notify Padded plaintext before ENCRYPTION: len = 18 [..] main, WRITE: TLSv1 Alert, length = 18 [Raw write]: length = 23 [..] main, called close() main, called closeInternal(true) main, called close() main, called closeInternal(true) The [..] are the certificate chain. Here is a code snippet: try { System.setProperty("javax.net.debug","all"); /* * Set up a key manager for client authentication * if asked by the server. Use the implementation's * default TrustStore and secureRandom routines. */ SSLSocketFactory factory = null; try { SSLContext ctx; KeyManagerFactory kmf; KeyStore ks; char[] passphrase = "importkey".toCharArray(); ctx = SSLContext.getInstance("TLS"); kmf = KeyManagerFactory.getInstance("SunX509"); ks = KeyStore.getInstance("JKS"); ks.load(new FileInputStream("keystore.jks"), passphrase); kmf.init(ks, passphrase); ctx.init(kmf.getKeyManagers(), null, null); factory = ctx.getSocketFactory(); } catch (Exception e) { throw new IOException(e.getMessage()); } SSLSocket socket = (SSLSocket)factory.createSocket("server ip", 9999); /* * send http request * * See SSLSocketClient.java for more information about why * there is a forced handshake here when using PrintWriters. */ SSLSession session = socket.getSession(); [build query] byte[] buff = query.toWire(); out.write(buff); out.flush(); InputStream input = socket.getInputStream(); int readBytes = -1; int randomLength = 1024; byte[] buffer = new byte[randomLength]; while((readBytes = input.read(buffer, 0, randomLength)) != -1) { LOG.debug("Read: " + new String(buffer)); } input.close(); socket.close(); } catch (Exception e) { e.printStackTrace(); } I can write multiple times and I don't get any error but the EOFException happens on the first read. Am I doing something wrong with the socket or with the SSL authentication? Thank you.

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to location in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach on accomplishing this goal. I have come up with several designs. Manual location tracking Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read. Advantages Very simple. No class bloat. No need to rewrite the TextReader methods. Disadvantages Couples location tracking logic to tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code. Plain TextReader wrapper Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example: public class LocatedTextReaderWrapper : TextReader { private TextReader source; public Location Location { get; set; } public LocatedTextReaderWrapper(TextReader source) : this(source, new Location()) { } public LocatedTextReaderWrapper(TextReader source, Location location) { this.Location = location; this.source = source; } public override int Read(char[] buffer, int index, int count) { int ret = this.source.Read(buffer, index, count); if(ret >= 0) { this.location.AdvanceString(string.Concat(buffer.Skip(index).Take(count))); } return ret; } // etc. } Advantages Tokenization doesn't know about Location tracking. Disadvantages User needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking or different location trackers to be added without layers of wrappers. Event-based TextReader wrapper Like LocatedTextReaderWrapper, but decouples it from the Location object raising an event whenever data is read. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once. Disadvantages Requires boilerplate code to enable location tracking. User needs to create and dispose the wrapper instance, in addition to their TextReader instance. Aspect-orientated approach Use AOP to perform like the event-based wrapper approach. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods. Disadvantages Requires external dependencies, which I want to avoid. I am looking for the best approach in my situation. I would like to: Not bloat the tokenizer methods with location tracking. Not require heavy initialization in user code. Not have any/much boilerplate/duplicated code. (Perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments are welcome. Thanks! (For those who want a specific question: What is the best way to wrap the functionality of a TextReader?) I have implemented the "Plain TextReader wrapper" and "Event-based TextReader wrapper" approaches and am displeased with both, for reasons mentioned in their disadvantages.

    Read the article

  • processing an audio wav file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct header
        {
            char chunk_id[4];
            int chunk_size;
            char format[4];
            char subchunk1_id[4];
            int subchunk1_size;
            short int audio_format;
            short int num_channels;
            int sample_rate;
            int byte_rate;
            short int block_align;
            short int bits_per_sample;
            short int extra_param_size;
            char subchunk2_id[4];
            int subchunk2_size;
        } header;

        typedef struct header* header_p;

        void scale_wav_file(char * input, float factor, int is_8bit)
        {
            FILE * infile  = fopen(input, "rb");
            FILE * outfile = fopen("outfile.wav", "wb");

            int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678;

            // used for processing 8-bit file
            unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE];

            // used for processing 16-bit file
            short int inbuff16[BUFSIZE], outbuff16[BUFSIZE];

            // header_p points to a header struct that contains the file's metadata fields
            header_p meta = (header_p)malloc(sizeof(header));

            if (infile)
            {
                // read and write header data
                fread(meta, 1, sizeof(header), infile);
                fwrite(meta, 1, sizeof(meta), outfile);

                while (!feof(infile))
                {
                    if (is_8bit) {
                        fread(inbuff8, 1, BUFSIZE, infile);
                    } else {
                        fread(inbuff16, 1, BUFSIZE, infile);
                    }

                    // scale amplitude for 8/16 bits
                    for (i = 0; i < BUFSIZE; ++i)
                    {
                        if (is_8bit) {
                            outbuff8[i] = factor * inbuff8[i];
                            if ((int)outbuff8[i] > MAX_8BIT_AMP) {
                                outbuff8[i] = MAX_8BIT_AMP;
                            }
                        } else {
                            outbuff16[i] = factor * inbuff16[i];
                            if ((int)outbuff16[i] > MAX_16BIT_AMP) {
                                outbuff16[i] = MAX_16BIT_AMP;
                            } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) {
                                outbuff16[i] = -MAX_16BIT_AMP;
                            }
                        }
                    }

                    // write to output file for 8/16 bit
                    if (is_8bit) {
                        fwrite(outbuff8, 1, BUFSIZE, outfile);
                    } else {
                        fwrite(outbuff16, 1, BUFSIZE, outfile);
                    }
                }
            }

            // cleanup
            if (infile)  { fclose(infile); }
            if (outfile) { fclose(outfile); }
            if (meta)    { free(meta); }
        }

        int main(int argc, char const *argv[])
        {
            char infile[] = "file.wav";
            float factor = 0.5;
            scale_wav_file(infile, factor, 0);
            return 0;
        }

    I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
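    For reference, here is a sketch of the header write and read/write loop with the most likely culprits addressed - an educated guess based on the code above, not a verified fix. The suspects: sizeof(meta) is the size of the pointer (4 or 8 bytes), not of the header struct, so the output header is truncated; the loop writes a full BUFSIZE even for the final partial chunk; the 16-bit branch reads BUFSIZE bytes but scales BUFSIZE samples; 32678 looks like a typo for 32767; and struct padding can make sizeof(header) differ from the 44-byte on-disk PCM header (a plain PCM file usually has no extra_param_size field at all). The fragment below would slot into scale_wav_file in place of the header write and while loop:

        /* Sketch: 8-bit branch only; the 16-bit branch is analogous with
           fread(inbuff16, sizeof(short), BUFSIZE, infile). */
        fwrite(meta, 1, sizeof(header), outfile);  /* not sizeof(meta) */

        size_t n, i;
        /* fread returns the number of items actually read; looping on that
           handles the final short chunk and avoids feof()'s extra pass. */
        while ((n = fread(inbuff8, 1, BUFSIZE, infile)) > 0)
        {
            for (i = 0; i < n; ++i)
            {
                /* scale in int first: once stored in an unsigned char the
                   value can never exceed 255, so clipping after the
                   assignment never triggers */
                int sample = (int)(factor * inbuff8[i]);
                outbuff8[i] = sample > MAX_8BIT_AMP ? MAX_8BIT_AMP : (unsigned char)sample;
            }
            fwrite(outbuff8, 1, n, outfile);  /* write only what was read */
        }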

    Read the article

  • Implement OAuth in Java

    - by phineas
    I made an attempt to implement OAuth for my programming idea in Java, but I failed miserably. I don't know why, but my code doesn't work. Every time I run my program, an IOException is thrown with the reason "java.io.IOException: Server returned HTTP response code: 401" (401 means Unauthorized). I had a close look at the docs, but I really don't understand why it doesn't work. The OAuth provider I want to use is Twitter, where I've registered my app, too. Thanks in advance, phineas.

    References: OAuth docs, Twitter API wiki, class Base64Coder (used for the Base64 step below). The Request class:

        import java.io.InputStreamReader;
        import java.io.BufferedReader;
        import java.io.OutputStream;
        import java.io.IOException;
        import java.io.UnsupportedEncodingException;
        import java.net.URL;
        import java.net.URLEncoder;
        import java.net.URLConnection;
        import java.net.MalformedURLException;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.NoSuchAlgorithmException;
        import java.security.InvalidKeyException;

        public class Request
        {
            public static String read(String url)
            {
                StringBuffer buffer = new StringBuffer();
                try {
                    /**
                     * get the time - note: value below zero
                     * the millisecond value is used for oauth_nonce later on
                     */
                    int millis = (int) System.currentTimeMillis() * -1;
                    int time = (int) millis / 1000;

                    /**
                     * Listing of all parameters necessary to retrieve a token
                     * (sorted lexicographically as demanded)
                     */
                    String[][] data = {
                        {"oauth_callback", "SOME_URL"},
                        {"oauth_consumer_key", "MY_CONSUMER_KEY"},
                        {"oauth_nonce", String.valueOf(millis)},
                        {"oauth_signature", ""},
                        {"oauth_signature_method", "HMAC-SHA1"},
                        {"oauth_timestamp", String.valueOf(time)},
                        {"oauth_version", "1.0"}
                    };

                    /**
                     * Generation of the signature base string
                     */
                    String signature_base_string = "POST&" + URLEncoder.encode(url, "UTF-8") + "&";
                    for (int i = 0; i < data.length; i++) {
                        // ignore the empty oauth_signature field
                        if (i != 3) {
                            signature_base_string += URLEncoder.encode(data[i][0], "UTF-8") + "%3D"
                                    + URLEncoder.encode(data[i][1], "UTF-8") + "%26";
                        }
                    }
                    // cut the last appended %26
                    signature_base_string = signature_base_string.substring(0, signature_base_string.length() - 3);

                    /**
                     * Sign the request
                     */
                    Mac m = Mac.getInstance("HmacSHA1");
                    m.init(new SecretKeySpec("CONSUMER_SECRET".getBytes(), "HmacSHA1"));
                    m.update(signature_base_string.getBytes());
                    byte[] res = m.doFinal();
                    String sig = String.valueOf(Base64Coder.encode(res));
                    data[3][1] = sig;

                    /**
                     * Create the header for the request
                     */
                    String header = "OAuth ";
                    for (String[] item : data) {
                        header += item[0] + "=\"" + item[1] + "\", ";
                    }
                    // cut off last appended comma
                    header = header.substring(0, header.length() - 2);

                    System.out.println("Signature Base String: " + signature_base_string);
                    System.out.println("Authorization Header: " + header);
                    System.out.println("Signature: " + sig);

                    String charset = "UTF-8";
                    URLConnection connection = new URL(url).openConnection();
                    connection.setDoInput(true);
                    connection.setDoOutput(true);
                    connection.setRequestProperty("Accept-Charset", charset);
                    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + charset);
                    connection.setRequestProperty("Authorization", header);
                    connection.setRequestProperty("User-Agent", "XXXX");

                    OutputStream output = connection.getOutputStream();
                    output.write(header.getBytes(charset));

                    BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                    String read;
                    while ((read = reader.readLine()) != null) {
                        buffer.append(read);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return buffer.toString();
            }

            public static void main(String[] args)
            {
                System.out.println(Request.read("http://api.twitter.com/oauth/request_token"));
            }
        }
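    For comparison, a sketch of how the OAuth 1.0 signature base string is normally assembled (my illustration, not a confirmed fix): each key and value is percent-encoded individually, the pairs are joined with plain "=" and "&", and the resulting parameter string is percent-encoded once as a whole - rather than splicing literal "%3D"/"%26" between already-encoded pieces. Two further suspects in the code above: the HMAC-SHA1 key should usually be consumer_secret + "&" + token_secret (the token secret being empty at the request-token step, so "CONSUMER_SECRET&"), and the Authorization header should not also be written into the request body.

        import java.net.URLEncoder;

        public class BaseStringSketch
        {
            /**
             * Builds an OAuth 1.0 signature base string from key/value pairs.
             * Sketch only - assumes 'data' is already sorted and that the
             * empty oauth_signature entry has been left out.
             */
            public static String baseString(String method, String url, String[][] data)
                    throws java.io.UnsupportedEncodingException
            {
                StringBuilder params = new StringBuilder();
                for (String[] pair : data) {
                    if (params.length() > 0) params.append('&');
                    // each key and value is percent-encoded individually...
                    params.append(URLEncoder.encode(pair[0], "UTF-8"))
                          .append('=')
                          .append(URLEncoder.encode(pair[1], "UTF-8"));
                }
                // ...and the whole parameter string is encoded once more here
                return method + "&" + URLEncoder.encode(url, "UTF-8")
                              + "&" + URLEncoder.encode(params.toString(), "UTF-8");
            }
        }

    One caveat: URLEncoder is not exactly RFC 3986 percent-encoding (it turns spaces into '+', leaves '*' alone, and encodes '~'), which can still break the signature for some parameter values.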

    Read the article

  • bind() fails with windows socket error 10038

    - by herrturtur
    I'm trying to write a simple program that will receive a string of max 20 characters and print that string to the screen. The code compiles, but I get "bind() failed: 10038". After looking up the error number on MSDN (socket operation on non-socket), I changed some code from int sock; to SOCKET sock, which shouldn't make a difference, but one never knows. Here's the code:

        #include <iostream>
        #include <winsock2.h>
        #include <cstdlib>

        using namespace std;

        const int MAXPENDING = 5;
        const int MAX_LENGTH = 20;

        void DieWithError(char *errorMessage);

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                cerr << "Usage: " << argv[0] << " <Port>" << endl;
                exit(1);
            }

            // start winsock2 library
            WSAData wsaData;
            if (WSAStartup(MAKEWORD(2,0), &wsaData) != 0) {
                cerr << "WSAStartup() failed" << endl;
                exit(1);
            }

            // create socket for incoming connections
            SOCKET servSock;
            if (servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) == INVALID_SOCKET)
                DieWithError("socket() failed");

            // construct local address structure
            struct sockaddr_in servAddr;
            memset(&servAddr, 0, sizeof(servAddr));
            servAddr.sin_family = AF_INET;
            servAddr.sin_addr.s_addr = INADDR_ANY;
            servAddr.sin_port = htons(atoi(argv[1]));

            // bind to the local address
            int servAddrLen = sizeof(servAddr);
            if (bind(servSock, (SOCKADDR*)&servAddr, servAddrLen) == SOCKET_ERROR)
                DieWithError("bind() failed");

            // mark the socket to listen for incoming connections
            if (listen(servSock, MAXPENDING) < 0)
                DieWithError("listen() failed");

            // accept incoming connections
            int clientSock;
            struct sockaddr_in clientAddr;
            char buffer[MAX_LENGTH];
            int recvMsgSize;
            int clientAddrLen = sizeof(clientAddr);

            for (;;) {
                // wait for a client to connect
                if ((clientSock = accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen)) < 0)
                    DieWithError("accept() failed");

                // clientSock is connected to a client
                // BEGIN Handle client
                cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl;

                if ((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) < 0)
                    DieWithError("recv() failed");

                cout << "Word in the tubes: " << buffer << endl;
                closesocket(clientSock);
                // END Handle client
            }
        }

        void DieWithError(char *errorMessage)
        {
            fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError());
            exit(1);
        }
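    One observation worth checking (an educated guess, not a confirmed fix): in C++, == binds more tightly than =, so the socket() check above stores the result of the comparison (0 or 1) in servSock rather than the socket handle. Every later call - bind() included - then operates on a non-socket, which is exactly what error 10038 (WSAENOTSOCK) means. Note that the accept() line already parenthesizes its assignment. A sketch of the corrected check:

        // Parenthesize the assignment so servSock receives the socket handle,
        // not the result of the == comparison.
        SOCKET servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (servSock == INVALID_SOCKET)
            DieWithError("socket() failed");

        // Equivalent one-liner, matching the style used for accept():
        // if ((servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
        //     DieWithError("socket() failed");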

    Read the article
