Search Results

Search found 3117 results on 125 pages for 'z buffer'.


  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size If it was that simple I wouldn’t be writing this post. I’ll take “boundary” for 64K Alex For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs: Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (Seem my previous post referenced above for the math behind buffer count.) input_memory_kb total_regular_buffers regular_buffer_size total_buffer_size 637 5 130867 654335 638 5 130867 654335 639 5 130867 654335 640 5 196403 982015 641 5 196403 982015 642 5 196403 982015 This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes 65536 / 1024 = 64 KB The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
start_memory_range_kb end_memory_range_kb total_regular_buffers regular_buffer_size total_buffer_size 1 191 NULL NULL NULL 192 383 3 130867 392601 384 575 3 196403 589209 576 767 3 261939 785817 768 959 3 327475 982425 960 1151 3 393011 1179033 1152 1343 3 458547 1375641 1344 1535 3 524083 1572249 1536 1727 3 589619 1768857 1728 1919 3 655155 1965465 1920 2111 3 720691 2162073 2112 2303 3 786227 2358681 2304 2495 3 851763 2555289 2496 2687 3 917299 2751897 2688 2879 3 982835 2948505 2880 3071 3 1048371 3145113 3072 3263 3 1113907 3341721 3264 3455 3 1179443 3538329 3456 3647 3 1244979 3734937 3648 3839 3 1310515 3931545 3840 4031 3 1376051 4128153 4032 4096 3 1441587 4324761 As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them. Max approximates True as memory approaches 64K The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than max_memory value you might think you’re getting. Using per_cpu partitioning on large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode. 
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint) DECLARE @buf_size int, @part_mode varchar(8) SET @buf_size = 1 -- Set to the begining of your max_memory range (KB) SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB) BEGIN     BEGIN TRY         IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')             DROP EVENT SESSION buffer_size_test ON SERVER         DECLARE @session nvarchar(max)         SET @session = 'create event session buffer_size_test on server                         add event sql_statement_completed                         add target ring_buffer                         with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'         EXEC sp_executesql @session         SET @session = 'alter event session buffer_size_test on server                         state = start'         EXEC sp_executesql @session         INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)             SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'     END TRY     BEGIN CATCH         INSERT @buf_size_output (input_memory_kb)             SELECT @buf_size     END CATCH     SET @buf_size = @buf_size + 1 END DROP EVENT SESSION buffer_size_test ON SERVER SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size from @buf_size_output group by total_regular_buffers, regular_buffer_size, total_buffer_size Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike
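    The staircase described above can be reproduced with a few lines of arithmetic. Below is a rough C++ sketch that regenerates the 637–642 KB rows of the first table; the ~205-byte per-buffer overhead is inferred from the DMV figures above (the reported sizes sit 205 bytes below each 64 KB multiple) and is not a documented constant, so treat this as an illustration rather than the engine's actual algorithm.

        #include <cstdint>
        #include <cstdio>

        int main() {
            const std::int64_t chunk    = 64 * 1024; // event buffers are carved out of 64 KB chunks
            const std::int64_t overhead = 205;       // per-buffer overhead inferred from the table above
            const std::int64_t buffers  = 5;         // per_cpu partitioning on the 2-core test machine

            for (std::int64_t max_memory_kb = 637; max_memory_kb <= 642; ++max_memory_kb) {
                std::int64_t per_buffer = max_memory_kb * 1024 / buffers;                    // naive formula
                std::int64_t stepped = (per_buffer + overhead + chunk - 1) / chunk * chunk;  // next 64 KB boundary
                std::int64_t actual  = stepped - overhead;                                   // ~regular_buffer_size
                std::printf("%lld KB -> regular_buffer_size %lld, total %lld\n",
                            (long long)max_memory_kb, (long long)actual,
                            (long long)(actual * buffers));
            }
            return 0;
        }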


  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code: // vertices is a Vertex[] // indices is a ushort[] // VertexDefs stores the vertex size (sizeof(float) * 5) // vertex data numVertices = vertices.Length; DataStream data = new DataStream(VertexDefs.size * numVertices, true, true); data.WriteRange<Vertex>(vertices); data.Position = 0; // vertex buffer parameters BufferDescription vbDesc = new BufferDescription() { BindFlags = BindFlags.VertexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = VertexDefs.size * numVertices, StructureByteStride = VertexDefs.size, Usage = ResourceUsage.Default }; // create vertex buffer vertexBuffer = new Buffer(Graphics.device, data, vbDesc); vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0); data.Dispose(); // index data numIndices = indices.Length; data = new DataStream(sizeof(ushort) * numIndices, true, true); data.WriteRange<ushort>(indices); data.Position = 0; // index buffer parameters BufferDescription ibDesc = new BufferDescription() { BindFlags = BindFlags.IndexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = sizeof(ushort) * numIndices, StructureByteStride = sizeof(ushort), Usage = ResourceUsage.Default }; // create index buffer indexBuffer = new Buffer(Graphics.device, data, ibDesc); data.Dispose(); Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices)); And my drawing code: // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data // e is of type ShaderEffect // get the technique ShaderTechnique t; if(!e.techniques.TryGetValue(techniqueName, out t)) return; // effect variables e.SetMatrix("worldView", worldView); e.SetMatrix("projection", projection); e.SetResource("diffuseMap", texture); e.SetSampler("textureSampler", sampler); // set per-mesh/technique settings Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding); Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0); Graphics.context.PixelShader.SetSampler(sampler, 0); // render for each pass foreach(ShaderPass p in t.passes) { Graphics.context.InputAssembler.InputLayout = p.layout; p.pass.Apply(Graphics.context); Graphics.context.DrawIndexed(numIndices, 0, 0); } How can I do this?


  • DVD RW+ Not showing up

    - by Manywa R.
    I'm running Ubuntu 12.10 on a Toshiba Satellite Pro A120 and my built-in DVD drive is not opening any CD/DVD/DVD-RW that I try to play in it. The drive seems to be mounted and recognized. Output of sudo lshw:

        ...
        *-cdrom
             description: DVD-RAM writer
             product: DVD-RAM UJ-841S
             vendor: MATSHITA
             physical id: 1
             bus info: scsi@1:0.0.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/sr0
             version: 1.40
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 status=ready
           *-medium
                physical id: 0
                logical name: /dev/cdrom

    The disc seems to start but then hangs, with the DVD drive LED solid amber. The output of jun@jun-Satellite-Pro-A120:~$ dmesg | grep "sr0":

        [679396.184901] sr 1:0:0:0: [sr0] Unhandled sense code
        [679396.184910] sr 1:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [679396.184920] sr 1:0:0:0: [sr0] Sense Key : Hardware Error [current]
        [679396.184931] sr 1:0:0:0: [sr0] Add. Sense: Id CRC or ECC error
        [679396.184942] sr 1:0:0:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        [679396.184965] end_request: I/O error, dev sr0, sector 0
        [679396.184975] Buffer I/O error on device sr0, logical block 0
        [679396.184984] Buffer I/O error on device sr0, logical block 1
        [679396.184990] Buffer I/O error on device sr0, logical block 2
        [679396.184996] Buffer I/O error on device sr0, logical block 3
        [679396.185002] Buffer I/O error on device sr0, logical block 4
        [679396.185008] Buffer I/O error on device sr0, logical block 5
        [679396.185014] Buffer I/O error on device sr0, logical block 6
        [679396.185020] Buffer I/O error on device sr0, logical block 7
        [679396.185031] Buffer I/O error on device sr0, logical block 8
        [679396.185038] Buffer I/O error on device sr0, logical block 9
        [679396.185070] sr 1:0:0:0: [sr0] unaligned transfer
        [679396.185108] sr 1:0:0:0: [sr0] unaligned transfer

    Can someone help me through this? I'm tired of moving around with an external DVD drive. Thanks


  • TCP window size won't go above 130048

    - by Roger
    I have two servers set up with about 80 ms of latency between them. Both are CentOS 6 and run a Java app that transfers data from one location to another, and both are on 1 Gbps connections. I have been trying different sysctl settings and different send and receive buffer settings in Java, but no matter what I set them to, I cannot get the TCP window size to go above 130048 in the tcp dumps. This equates to roughly 13 Mbps, which is the actual throughput I am getting.
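    For reference, the 13 Mbps figure follows directly from window-size/round-trip arithmetic (a rough calculation that ignores protocol overhead):

        throughput ≈ window / RTT = 130,048 bytes / 0.080 s = 1,625,600 B/s ≈ 13.0 Mbit/s

    So raising the ceiling means getting a larger effective window in flight, and the advertised window is ultimately bounded by the receiver's socket buffer rather than by anything the sender sets.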


  • Write a signal handler to catch SIGSEGV

    - by Adi
    Hi all, I want to write a signal handler to catch SIGSEGV. First , I would protect a block of memory for read or writes using char *buffer; char *p; char a; int pagesize = 4096; " mprotect(buffer,pagesize,PROT_NONE) " What this will do is , it will protect the memory starting from buffer till pagesize for any reads or writes. Second , I will try to read the memory by doing something like p = buffer; a = *p This will generate a SIGSEGV and i have initialized a handler for this. The handler will be called . So far so good. Now the problem I am facing is , once the handler is called, I want to change the access write of the memory by doing mprotect(buffer, pagesize,PROT_READ); and continue my normal functioning of the code. I do not want to exit the function. On future writes to the same memory, I want again catch the signal and modify the write rights and then take account of that event. Here is the code I am trying : #include <signal.h> #include <stdio.h> #include <malloc.h> #include <stdlib.h> #include <errno.h> #include <sys/mman.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) char *buffer; int flag=0; static void handler(int sig, siginfo_t *si, void *unused) { printf("Got SIGSEGV at address: 0x%lx\n",(long) si->si_addr); printf("Implements the handler only\n"); flag=1; //exit(EXIT_FAILURE); } int main(int argc, char *argv[]) { char *p; char a; int pagesize; struct sigaction sa; sa.sa_flags = SA_SIGINFO; sigemptyset(&sa.sa_mask); sa.sa_sigaction = handler; if (sigaction(SIGSEGV, &sa, NULL) == -1) handle_error("sigaction"); pagesize=4096; /* Allocate a buffer aligned on a page boundary; initial protection is PROT_READ | PROT_WRITE */ buffer = memalign(pagesize, 4 * pagesize); if (buffer == NULL) handle_error("memalign"); printf("Start of region: 0x%lx\n", (long) buffer); printf("Start of region: 0x%lx\n", (long) buffer+pagesize); printf("Start of region: 0x%lx\n", (long) buffer+2*pagesize); printf("Start of region: 0x%lx\n", (long) buffer+3*pagesize); //if (mprotect(buffer + pagesize * 0, pagesize,PROT_NONE) == -1) if (mprotect(buffer + pagesize * 0, pagesize,PROT_NONE) == -1) handle_error("mprotect"); //for (p = buffer ; ; ) if(flag==0) { p = buffer+pagesize/2; printf("It comes here before reading memory\n"); a = *p; //trying to read the memory printf("It comes here after reading memory\n"); } else { if (mprotect(buffer + pagesize * 0, pagesize,PROT_READ) == -1) handle_error("mprotect"); a = *p; printf("Now i can read the memory\n"); } /* for (p = buffer;p<=buffer+4*pagesize ;p++ ) { //a = *(p); *(p) = 'a'; printf("Writing at address %p\n",p); }*/ printf("Loop completed\n"); /* Should never happen */ exit(EXIT_SUCCESS); } The problem I am facing with this is ,only the signal handler is running and I am not able to return to the main function after catching the signal.. Any help in this will be greatly appreciated. Thanks in advance Aditya
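    For reference, a minimal sketch of the usual Linux approach (not the poster's exact code): restore access inside the handler itself, using the faulting address from si_addr, so that returning from the handler lets the faulting instruction re-execute and succeed. Most error checks are omitted for brevity.

        #include <csignal>
        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>
        #include <sys/mman.h>
        #include <unistd.h>

        static long g_pagesize;

        static void handler(int, siginfo_t *si, void *) {
            // Round the faulting address down to its page and make that page accessible again.
            std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(si->si_addr);
            void *page = reinterpret_cast<void *>(addr & ~static_cast<std::uintptr_t>(g_pagesize - 1));
            if (mprotect(page, g_pagesize, PROT_READ | PROT_WRITE) == -1)
                _exit(EXIT_FAILURE);          // only async-signal-safe calls belong in a handler
            // Returning re-runs the instruction that faulted; it now succeeds.
        }

        int main() {
            g_pagesize = sysconf(_SC_PAGESIZE);

            struct sigaction sa = {};
            sa.sa_flags = SA_SIGINFO;
            sigemptyset(&sa.sa_mask);
            sa.sa_sigaction = handler;
            sigaction(SIGSEGV, &sa, nullptr);

            // Page-aligned region that starts out inaccessible.
            char *buffer = static_cast<char *>(mmap(nullptr, 4 * g_pagesize, PROT_NONE,
                                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));

            buffer[10] = 'x';                 // faults once, handler unprotects, the write then succeeds
            std::printf("back in main, buffer[10] = %c\n", buffer[10]);
            return 0;
        }

    In the question's code the handler returns without changing the protection, so the same instruction faults again immediately; that is why it looks as if only the handler is running.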


  • Create links programmatically inside an EmberJS view

    - by Michael Gallego
    I have a pretty complex view to render which involves some kind of recursion (the typical folder/file nested list). The fact that it contains heterogeneous objects (folders and files) make it even harder to write Handlebars templates. Therefore, the only solution I've found is to create a view, and manually fill the render buffer. I came with the following solution: App.LibraryContentList = Ember.View.extend({ tagName: 'ol', classNames: ['project-list', 'dd-list'], nameChanged: function() { this.rerender(); }.observes('[email protected]'), render: function(buffer) { // We only start with depth of zero var content = this.get('content').filterProperty('depth', 0); content.forEach(function(item) { this.renderItem(buffer, item); }, this); }, renderItem: function(buffer, item) { switch (item.constructor.toString()) { case 'Photo.Folder': this.renderFolder(buffer, item); break; case 'Photo.File': this.renderFile(buffer, item); break; } }, renderFolder: function(buffer, folder) { buffer.push('<li class="folder dd-item">'); buffer.push('<span class="dd-handle">' + folder.get('name') + '</span>'); // Merge sub folders and files, and sort them by sort order var content = this.mergeAndSort(); if (content.get('length') > 0) { buffer.push('<ol>'); content.forEach(function(item) { this.renderItem(buffer, item); }, this); buffer.push('</ol>'); } buffer.push('</li>'); }, renderFile: function(buffer, album) { buffer.push('<li class="album dd-item">'); buffer.push('<span class="dd-handle">' + file.get('name') + '</span>'); buffer.push('</li>'); } }); Now, what I'd like is to be able to add links so that each folder and each file is clickable and redirect to another route. But how am I supposed to do that, as I don't have access to the linkTo helper? I've tried to play with the LinkView view, but without any success. Should I register handlers manually for each item? I've also thought about breaking that with a CollectionView instead, and splitting the content by depth so that I could render it using templates, but it seems more complicated. Any thoughts?


  • Fastest way to read data from a lot of ASCII files

    - by Alsenes
    Hi guys, for a college exercise that I've already submitted, I needed to read a .txt file which contained a lot of image file names (one per line). Then I needed to open each image as an ASCII file, read its data (images were in PPM format), and do a series of things with them. The thing is, I noticed my program was spending 70% of its time in the part that reads the data from the files, instead of in the other calculations I was doing (finding the number of repetitions of each pixel with a hash table, finding different pixels between 2 images, etc.), which I found quite odd to say the least. This is what the PPM format looks like:

        P3        //This value can be ignored when reading the file, because all images will be correctly formatted
        4 4
        255       //This value can also be ignored, it will always be 255.
        0  0  0    0  0  0    0  0  0   15  0 15
        0  0  0    0 15  7    0  0  0    0  0  0
        0  0  0    0  0  0    0 15  7    0  0  0
       15  0 15    0  0  0    0  0  0    0  0  0

    This is how I was reading the data from the files:

        ifstream fdatos;
        fdatos.open(argv[1]);          //Open file with the names of all the images
        const int size = 128;
        char file[size];               //Where I'll get the image name
        Image *img;
        while (fdatos >> file) {       //While there are still image names left, continue
            ifstream fimagen;
            fimagen.open(file);        //Open image file
            img = new Image(fimagen);  //Create new image object with its data file
            ………                        //Rest of the calculations with that image
            ………
            delete img;                //Delete image object after done
            fimagen.close();           //Close image file after done
        }
        fdatos.close();

    And inside the image object I read the data like this:

        const int tallafirma = 100;
        char firma[tallafirma];
        fich_in >> std::setw(100) >> firma;        // Read the P3 part, can be ignored
        int maxvalue, numpixels;
        fich_in >> height >> width >> maxvalue;    // Read the next three values
        numpixels = height*width;
        datos = new Pixel[numpixels];
        int r,g,b;    //Don't need to be ints, max value is 256, so an unsigned char would be ok.
        for (int i=0; i<numpixels; i++) {
            fich_in >> r >> g >> b;
            datos[i] = Pixel( r, g ,b);
        }
        //This last part is the slow one.
        //I think I should be able to read all this data in one single read
        //to a buffer or something, which would be stored in an array of unsigned chars,
        //and then I'd only need to do:
        //buffer[0] -> //Pixel 1 - Red data
        //buffer[1] -> //Pixel 1 - Green data
        //buffer[2] -> //Pixel 1 - Blue data

    So, any ideas? I think I can improve it quite a bit by reading everything into an array in one single call, I just don't know how that is done. Also, is it possible to know how many images will be in the "index file"? Is it possible to know the number of lines a file has? (Because there's one file name per line.) Thanks!!
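    Since the files are plain-text (P3) PPMs, the pixel values are ASCII integers rather than raw bytes, so a single read gives you text that still has to be parsed. A hedged sketch of that approach (header assumed well formed, no comment lines):

        #include <cstddef>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Parse the header with formatted extraction, then slurp the rest of the
        // file in one go and convert the integers from memory.
        std::vector<unsigned char> read_p3_pixels(const std::string &path, int &width, int &height) {
            std::ifstream in(path, std::ios::binary);
            std::string magic;
            int maxval = 0;
            in >> magic >> width >> height >> maxval;      // "P3", dimensions, max value

            std::ostringstream rest;
            rest << in.rdbuf();                            // one bulk read of the remaining text
            std::istringstream values(rest.str());

            std::vector<unsigned char> pixels;
            pixels.reserve(static_cast<std::size_t>(width) * height * 3);
            int v;
            while (values >> v)
                pixels.push_back(static_cast<unsigned char>(v));
            return pixels;
        }

    (If the images were binary P6 files instead, the pixel block really could be read straight into the buffer with a single istream::read of width*height*3 bytes. As for counting the images up front: a text file does not record its own line count, so the index file has to be scanned once, e.g. with std::getline.)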


  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has a unsigned char* buffer that contains the data to write. I have got somewhat working save it disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); // must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); // as *.tga void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } // as *.bmp array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; } Update FT_Bitmap *bitmap = &face->glyph->bitmap; // stride must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat); array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); target->UnlockBits(bitmapData);
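    The key detail is usually the row copy: glyph->pitch bytes per source row versus a 4-byte-aligned stride per destination row, and an 8bpp destination also needs a grayscale palette. A plain-C++ sketch of the stride-respecting copy (the managed Bitmap/LockBits part left aside):

        #include <cstddef>
        #include <cstring>
        #include <vector>

        // Copy an 8-bit FreeType glyph bitmap (rows x pitch) into a buffer whose rows
        // are padded to a 4-byte-aligned stride, which is what LockBits/BMP expect.
        std::vector<unsigned char> to_aligned_rows(const unsigned char *src,
                                                   int width, int rows, int pitch) {
            const int stride = ((width * 8 + 31) & ~31) >> 3;   // 8 bpp rows rounded up to 4 bytes
            std::vector<unsigned char> dst(static_cast<std::size_t>(stride) * rows, 0);
            for (int y = 0; y < rows; ++y)
                std::memcpy(&dst[static_cast<std::size_t>(y) * stride],
                            src + static_cast<std::size_t>(y) * pitch,
                            static_cast<std::size_t>(width));
            return dst;
        }

    With a buffer laid out like this, copying into bitmapData->Scan0 row by row (using bitmapData->Stride on the managed side) should line up; copying 8-bit glyph data into a Format24bppRgb bitmap, as in the first attempt, is one likely reason for the garbled output.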


  • Steganography: encoded audio and video files are not being played and get corrupted. What is the issue?

    - by Shantanu Gupta
    I have made a steganography program to encrypt/Decrypt some text under image audio and video. I used image as bmp(54 byte header) file, audio as wav(44 byte header) file and video as avi(56 byte header) file formats. When I tries to encrypt text under all these file then it gets encrypted successfully and are also getting decrypted correctly. But it is creating a problem with audio and video i.e these files are not being played after encrypted result. What can be the problem. I am working on Turbo C++ compiler. I know it is super outdated compiler but I have to do it in this only. Here is my code to encrypt. int Binary_encode(char *txtSourceFileName, char *binarySourceFileName, char *binaryTargetFileName,const short headerSize) { long BinarySourceSize=0,TextSourceSize=0; char *Buffer; long BlockSize=10240, i=0; ifstream ReadTxt, ReadBinary; //reads ReadTxt.open(txtSourceFileName,ios::binary|ios::in);//file name, mode of open, here input mode i.e. read only if(!ReadTxt) { cprintf("\nFile can not be opened."); return 0; } ReadBinary.open(binarySourceFileName,ios::binary|ios::in);//file name, mode of open, here input mode i.e. read only if(!ReadBinary) { ReadTxt.close();//closing opened file cprintf("\nFile can not be opened."); return 0; } ReadBinary.seekg(0,ios::end);//setting pointer to a file at the end of file. ReadTxt.seekg(0,ios::end); BinarySourceSize=(long )ReadBinary.tellg(); //returns the position of pointer TextSourceSize=(long )ReadTxt.tellg(); //returns the position of pointer ReadBinary.seekg(0,ios::beg); //sets the pointer to the begining of file ReadTxt.seekg(0,ios::beg); //sets the pointer to the begining of file if(BinarySourceSize<TextSourceSize*50) //Minimum size of an image should be 50 times the size of file to be encrypted { cout<<"\n\n"; cprintf("Binary File size should be bigger than text file size."); ReadBinary.close(); ReadTxt.close(); return 0; } cout<<"\n"; cprintf("\n\nSize of Source Image/Audio File is : "); cout<<(float)BinarySourceSize/1024; cprintf("KB"); cout<<"\n"; cprintf("Size of Text File is "); cout<<TextSourceSize; cprintf(" Bytes"); cout<<"\n"; getch(); //write header to file without changing else file will not open //bmp image's header size is 53 bytes Buffer=new char[headerSize]; ofstream WriteBinary; // writes to file WriteBinary.open(binaryTargetFileName,ios::binary|ios::out|ios::trunc);//file will be created or truncated if already exists ReadBinary.read(Buffer,headerSize);//reads no of bytes and stores them into mem, size contains no of bytes in a file WriteBinary.write(Buffer,headerSize);//writes header to 2nd image delete[] Buffer;//deallocate memory /* Buffer = new char[sizeof(long)]; Buffer = (char *)(&TextSourceSize); cout<<Buffer; */ WriteBinary.write((char *)(&TextSourceSize),sizeof(long)); //writes no of byte to be written in image immediate after header ends //to decrypt file if(!(Buffer=new char[TextSourceSize])) { cprintf("Enough Memory could not be assigned."); return 0; } ReadTxt.read(Buffer,TextSourceSize);//read all data from text file ReadTxt.close();//file no more needed WriteBinary.write(Buffer,TextSourceSize);//writes all text file data into image delete[] Buffer;//deallocate memory //replace Tsize+1 below with Tsize and run the program to see the change //this is due to the reason that 50-54 byte no are of colors which we will be changing ReadBinary.seekg(TextSourceSize+1,ios::cur);//move pointer to the location-current loc i.e. 
53+content of text file //write remaining image content to image file while(i<BinarySourceSize-headerSize-TextSourceSize+1) { i=i+BlockSize; Buffer=new char[BlockSize]; ReadBinary.read(Buffer,BlockSize);//reads no of bytes and stores them into mem, size contains no of bytes in a file WriteBinary.write(Buffer,BlockSize); delete[] Buffer; //clear memory, else program can fail giving correct output } ReadBinary.close(); WriteBinary.close(); //Encoding Completed return 0; } Code to decrypt int Binary_decode(char *binarySourceFileName, char *txtTargetFileName, const short headerSize) { long TextDestinationSize=0; char *Buffer; long BlockSize=10240; ifstream ReadBinary; ofstream WriteText; ReadBinary.open(binarySourceFileName,ios::binary|ios::in);//file will be appended if(!ReadBinary) { cprintf("File can not be opened"); return 0; } ReadBinary.seekg(headerSize,ios::beg); Buffer=new char[4]; ReadBinary.read(Buffer,4); TextDestinationSize=*((long *)Buffer); delete[] Buffer; cout<<"\n\n"; cprintf("Size of the File that will be created is : "); cout<<TextDestinationSize; cprintf(" Bytes"); cout<<"\n\n"; sleep(1); WriteText.open(txtTargetFileName,ios::binary|ios::out|ios::trunc);//file will be created if not exists else truncate its data while(TextDestinationSize>0) { if(TextDestinationSize<BlockSize) BlockSize=TextDestinationSize; Buffer= new char[BlockSize]; ReadBinary.read(Buffer,BlockSize); WriteText.write(Buffer,BlockSize); delete[] Buffer; TextDestinationSize=TextDestinationSize-BlockSize; } ReadBinary.close(); WriteText.close(); return 0; } int text_encode(char *SourcefileName, char *DestinationfileName) { ifstream fr; //reads ofstream fw; // writes to file char c; int random; clrscr(); fr.open(SourcefileName,ios::binary);//file name, mode of open, here input mode i.e. read only if(!fr) { cprintf("File can not be opened."); getch(); return 0; } fw.open(DestinationfileName,ios::binary|ios::out|ios::trunc);//file will be created or truncated if already exists while(fr) { int i; while(fr!=0) { fr.get(c); //reads a character from file and increments its pointer char ch; ch=c; ch=ch+1; fw<<ch; //appends character in c to a file } } fr.close(); fw.close(); return 0; } int text_decode(char *SourcefileName, char *DestinationName) { ifstream fr; //reads ofstream fw; // wrrites to file char c; int random; clrscr(); fr.open(SourcefileName,ios::binary);//file name, mode of open, here input mode i.e. read only if(!fr) { cprintf("File can not be opened."); return 0; } fw.open(DestinationName,ios::binary|ios::out|ios::trunc);//file will be created or truncated if already exists while(fr) { int i; while(fr!=0) { fr.get(c); //reads a character from file and increments its pointer char ch; ch=c; ch=ch-1; fw<<ch; //appends character in c to a file } } fr.close(); fw.close(); return 0; }


  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!


  • How do I get Google protocol buffer messages over a socket connection without disconnecting the client?

    - by Dan
    Hi there, I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However so far I'm running into an issue when it comes to the server receiving the data; it only seems to process it after the client connection has been terminated. This points to me that the data is getting sent, but the server is keeping its inputstream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows: iPhone: Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build]; RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build]; NSData *data = [request data]; AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self]; if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){ [self updateLabel:@"Problem connecting to socket!"]; } else { [self updateLabel:@"Sending data to server..."]; [socket writeData:data withTimeout:-1 tag:0]; [self updateLabel:@"Data sent, disconnecting"]; //[socket disconnect]; } Java: try { RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream()); Person person = wrapper.getPerson(); if (person != null) { System.out.println("Persons name is " + person.getName()); socket.close(); } On running this, it seems to hang on the line where the RequestWrapper is processing the inputStream. I did try replacing the socket writedata method with [request writeToOutputStream:[socket getCFWriteStream]]; Which I thought might work, however I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain that it doesn't contain an invalid tag as the message works when sending it via the writedata method. Any help on the matter would be greatly appreciated! Cheers! Dan (EDIT: I should mention, I am using the metasyntactic gpb code; and the cocoaasyncsocket implementation)
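    The described behaviour (data only processed after the client disconnects) is what you get when the receiver reads to end-of-stream: parseFrom(InputStream) consumes the stream until EOF, so the message has to be framed, either with the delimited variants on the Java side (writeDelimitedTo / parseDelimitedFrom) or with an explicit length prefix. A language-neutral sketch of the length-prefix idea, written here in C++ over a plain POSIX socket with hypothetical helper names; short reads/writes on the 4-byte prefix are ignored for brevity:

        #include <arpa/inet.h>   // htonl / ntohl
        #include <unistd.h>      // read / write
        #include <cstdint>
        #include <string>

        // Send one length-prefixed frame; 'payload' would be the serialized protobuf bytes.
        bool send_framed(int fd, const std::string &payload) {
            std::uint32_t len = htonl(static_cast<std::uint32_t>(payload.size()));
            if (write(fd, &len, sizeof(len)) != static_cast<ssize_t>(sizeof(len))) return false;
            return write(fd, payload.data(), payload.size()) ==
                   static_cast<ssize_t>(payload.size());
        }

        // Receive exactly one frame: read the length, then loop until that many bytes arrive.
        bool recv_framed(int fd, std::string &payload) {
            std::uint32_t len_be = 0;
            if (read(fd, &len_be, sizeof(len_be)) != static_cast<ssize_t>(sizeof(len_be))) return false;
            payload.resize(ntohl(len_be));
            std::size_t got = 0;
            while (got < payload.size()) {
                ssize_t n = read(fd, &payload[got], payload.size() - got);
                if (n <= 0) return false;
                got += static_cast<std::size_t>(n);
            }
            return true;
        }

    With framing like this, the server can parse the message as soon as the whole frame has arrived, without waiting for the iPhone client to close its socket.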


  • Implementing a continuous "revert-buffer" aka Textpad

    - by vedang
    One of my colleagues uses TextPad, and one feature I found really useful is the Auto-Reload. (The feature has been described in this SO question: http://stackoverflow.com/questions/1246083/alternative-to-textpads-prompt-to-reload-file). Basically, it keeps reloading the file without any prompt from the user, which is really helpful when monitoring log files that are updated in real time. Is there something similar available for Emacs? If not, can anyone whip up the required elisp magic?


  • Qt 5.3 OpenGL - vertex buffer object drawing using the core profile

    - by user3700881
    Im using Qt 5.3 to create a QWindow to do some basic rendering stuff. The QWindow is declared like this: class OpenGLWindow : public QWindow, protected QOpenGLFunctions_3_3_Core { Q_OBJECT ... } It is initialized in the constructor: OpenGLWindow::OpenGLWindow(QWindow *parent) : QWindow(parent) { QSurfaceFormat format; format.setVersion(3,3); format.setProfile(QSurfaceFormat::CoreProfile); this->setSurfaceType(OpenGLSurface); this->setFormat(format); this->create(); _context = new QOpenGLContext; _context->setFormat(format); _context->create(); _context->makeCurrent(this); this->initializeOpenGLFunctions(); ... } And that's the rendering code: void OpenGLWindow::render() { if(!isExposed()) return; _context->makeCurrent(this); glClear(GL_COLOR_BUFFER_BIT); glUseProgram(_shaderProgram); glBindBuffer(GL_ARRAY_BUFFER, _positionBufferObject); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glUseProgram(0); _context->swapBuffers(this); } I am trying to draw a simple triangle using a vertex and fragment shader. The problem is that the triangle is not showing up when the core profile is set. Only when I set the OpenGL version to 2.0 or when I use the compatibility profile, it shows up. From my point of view that doesn't make any sense because I am not using fixed functionality at all. What am I missing?
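    One frequent culprit with a 3.3 core context (a guess, but consistent with the symptoms) is the missing vertex array object: the core profile has no default VAO, so attribute setup and glDrawArrays draw nothing until one is bound. A sketch using Qt's wrapper, reusing the member names from the question:

        // #include <QOpenGLVertexArrayObject>
        // Member added to OpenGLWindow:   QOpenGLVertexArrayObject _vao;

        // Once, after the context is current and _positionBufferObject exists:
        _vao.create();
        _vao.bind();
        glBindBuffer(GL_ARRAY_BUFFER, _positionBufferObject);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        _vao.release();

        // In render(), bind the VAO instead of re-specifying the attribute each frame:
        _vao.bind();
        glUseProgram(_shaderProgram);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glUseProgram(0);
        _vao.release();

    The compatibility profile (and GL 2.x) tolerates drawing without a VAO, which would explain why the triangle shows up there but not under core.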


  • Pipe overwrites buffer, don't know how to overcome

    - by Kalec
    I use a simple pipe. I read in a while loop, one char at a time, and I think every time I read a char I overwrite something.

        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <string.h>

        int main () {
            int pipefd[2];
            int cpid;
            char buf[31];
            if (pipe(pipefd) == -1) {
                perror("pipe");
                exit(EXIT_FAILURE);
            }
            cpid = fork();
            if (cpid == -1) {
                perror("cpid");
                exit(EXIT_FAILURE);
            }
            if (cpid == 0) {                    // child reads from pipe
                close (pipefd[1]);              // close unused write end
                while (read (pipefd[0], &buf, 1)>0);
                printf ("Server receives: %s", buf);
                close (pipefd[0]);
                exit (EXIT_SUCCESS);
            } else {                            // parent writes to pipe
                close (pipefd[0]);              // closing unused read end
                char buf2[30];
                printf("Server transmits: ");
                scanf ("%s", buf2);
                write (pipefd[1], buf2, strlen(buf2)+1);
                close(pipefd[1]);
                wait(NULL);
                exit(EXIT_SUCCESS);
            }
            return 0;
        }

    For example, if I input "Flowers" it prints F and then ~6 unprintable characters.
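    The while loop reads every byte into the same spot: &buf points at the start of the array, so each one-byte read overwrites the previous one, and the loop body (the lone semicolon) discards everything. A hedged sketch of the child's read loop with an index, so the bytes accumulate and the string is terminated before printing:

        // Child side: accumulate bytes until the writer closes its end (read returns 0).
        char buf[31];
        int used = 0;
        ssize_t n;
        while (used < (int)sizeof(buf) - 1 &&
               (n = read(pipefd[0], buf + used, sizeof(buf) - 1 - used)) > 0)
            used += (int)n;
        buf[used] = '\0';                 // make sure it is a valid C string
        printf("Server receives: %s\n", buf);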


  • AS3: Showing bufferlength of NetStream

    - by Tinelise
    I am trying to show the buffered amount of a video that is playing. I am using netstream.bufferLength to do this and it kinda seems to be right, except for the fact that it is almost constantly the same amount that is buffered. This can't be right? I want it to be like YouTube, where you can press pause and the buffer will continue to rise. When I click pause it just stays the same.. Does anybody know how to show the buffer length?


  • Rendering Swing Components to an Offscreen buffer

    - by Nick C
    I have a Java (Swing) application, running on a 32-bit Windows 2008 Server, which needs to render it's output to an off-screen image (which is then picked up by another C++ application for rendering elsewhere). Most of the components render correctly, except in the odd case where a component which has just lost focus is occluded by another component, for example where there are two JComboBoxes close to each other, if the user interacts with the lower one, then clicks on the upper one so it's pull-down overlaps the other box. In this situation, the component which has lost focus is rendered after the one occluding it, and so appears on top in the output. It renders correctly in the normal Java display (running full-screen on the primary display), and attempting to change the layers of the components in question does not help. I am using a custom RepaintManager to paint the components to the offscreen image, and I assume the problem lies with the order in which addDirtyRegion() is called for each of the components in question, but I can't think of a good way of identifying when this particular state occurs in order to prevent it. Hacking it so that the object which has just lost focus is not repainted stops the problem, but obviously causes the bigger problem that it is not repainted in all other, normal, circumstances. Is there any way of programmatically identifying this state, or of reordering things so that it does not occur? Many thanks, Nick


  • Multiple frame buffer in android

    - by user332158
    Hi All, I am working on creating multiple displays on a single screen, i.e., I want to run two different activities simultaneously. I came to know that, to achieve this requirement we need to change the surfaceflinger code and some hardware properties in the android source. Can anybody help me in finding the exact procedure in modifying the surfaceflinger and other parts of the android source in order to get two displays Thanks in advance.


  • Default input and output buffering for fopen'd files?

    - by Evan Teran
    So a FILE stream can have both input and output buffers. You can adjust the output stream using setvbuf (I am unaware of any method to play with the input buffer size and behavior). Also, by default the buffer is BUFSIZ (not sure if this is a POSIX or C thing). It is very clear what this means for stdin/stdout/stderr, but what are the defaults for newly opened files? Are they buffered for both input and output? Or perhaps just one? If it is buffered, does output default to block or line mode?
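    For what it's worth, the buffering of a newly opened stream can be pinned down explicitly with setvbuf before the first operation on it; a small sketch (the file name is just an example):

        #include <cstdio>

        int main() {
            std::FILE *f = std::fopen("example.log", "w");   // hypothetical file
            if (!f) return 1;

            static char buf[1 << 16];                        // must outlive all I/O on f
            // _IOFBF = fully buffered, _IOLBF = line buffered, _IONBF = unbuffered.
            std::setvbuf(f, buf, _IOFBF, sizeof(buf));       // call before the first read or write

            std::fprintf(f, "hello\n");
            std::fclose(f);
            return 0;
        }

    As for the default: the C standard's fopen wording says a stream is fully buffered when opened if and only if it can be determined not to refer to an interactive device, so streams on regular files normally default to full (block) buffering for reads as well as writes, with an implementation-chosen size such as BUFSIZ.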


  • Using read() directly into a C++ std::vector

    - by Joe
    I'm wrapping up user space linux socket functionality in some C++ for an embedded system (yes, this is probably reinventing the wheel again). I want to offer a read and write implementation using a vector. Doing the write is pretty easy, I can just pass &myvec[0] and avoid unnecessary copying. I'd like to do the same and read directly into a vector, rather than reading into a char buffer then copying all that into a newly created vector. Now, I know how much data I want to read, and I can allocate appropriately (vec.reserve). I can also read into &myvec[0], though this is probably a VERY BAD IDEA. Obviously doing this doesn't allow myvec.size to return anything sensible. Is there any way of doing this that 1) Doesn't completely feel yucky from a safety/C++ perspective and 2) Doesn't involve two copies of the data block - once from kernel to user space and once from a C char * style buffer into a C++ vector. Any thoughts collective?
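    A commonly used pattern (a sketch, assuming POSIX read() and that the byte count is known up front) is to size the vector with resize rather than reserve, read into its contiguous storage, and then shrink to what actually arrived, so that size() stays meaningful:

        #include <unistd.h>
        #include <cstddef>
        #include <vector>

        // Read up to n bytes from fd into a vector whose size() reflects what was read.
        std::vector<char> read_into_vector(int fd, std::size_t n) {
            std::vector<char> buf(n);                        // resize, not reserve: the elements must exist
            ssize_t got = read(fd, buf.data(), buf.size());  // contiguous storage is guaranteed
            buf.resize(got > 0 ? static_cast<std::size_t>(got) : 0);
            return buf;
        }

    The construction does value-initialize the bytes first, which costs one extra pass over the buffer; avoiding that requires uglier tricks and is rarely worth it for socket-sized reads.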


  • [PHP] - Output buffer based progress bar

    - by KPL
    Hello people, I have been trying to get the following code working. It's a progress bar trick which uses ob_get_clean() function. Don't know why but this script just don't work! Only the initial percent - 1% comes up and nothing after that. <?php error_reporting(8191); function flush_buffers(){ @ob_end_flush(); @ob_flush(); @flush(); @ob_start(); } $ini = 2; echo '<script>document.getElementById(\'lpt\').style.width=\'1%\';</script><br>'; for($i=1;$i<=100;$i++) { $k=$ini-1; $str=str_replace("width=\'$k%\'","width=\'$i%\'",ob_get_clean()); $ini++; echo $str; flush_buffers(); } ?>


  • displaying a saved buffer in OpenGL ES

    - by Adam
    Hi everyone, So basically I have a screenshot that I've saved that I want to later display on the screen. I've saved it with: glReadPixels(0, 0, self.bounds.size.width, self.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); And later I'm trying to write it to the screen with: GLuint RenderedTex; glGenTextures(1, &RenderedTex); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, RenderedTex); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, self.bounds.size.width, self.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); glDisable(GL_TEXTURE_2D); I'm pretty new to OpenGL so I don't know if I'm doing things right... actually I know I'm not, because nothing shows up. Also not sure how to dispose of the texture when I'm done with it. Anyone know the correct way to do this? Edit: I think I might be having a problem loading the texture because it's not a power of 2, it's 320x480... also, I think this code is just loading the texture, but not drawing it, I'd need a call to glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) in there somewhere...
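    A sketch of the likely missing pieces, assuming OpenGL ES 1.x fixed-function as in the snippet above: the texture needs a non-mipmap minification filter (the default filter expects mipmaps, so the texture stays incomplete and draws as if texturing were off), and it still has to be drawn, for example as a textured quad. The 320x480 size is a further problem, since ES 1.x only guarantees power-of-two textures; a common workaround is a 512x512 texture filled with glTexSubImage2D and texture coordinates scaled to 320/512 and 480/512.

        // After the glTexImage2D(...) upload of RenderedTex:
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps were uploaded
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Draw the texture as a full-screen quad.
        static const GLfloat verts[]  = { -1,-1,   1,-1,   -1,1,   1,1 };
        static const GLfloat coords[] = {  0, 0,   1, 0,    0,1,   1,1 };

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, RenderedTex);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, coords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisable(GL_TEXTURE_2D);

        // When the texture is no longer needed:
        glDeleteTextures(1, &RenderedTex);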


  • vim plugin to show current Perl subroutine

    - by Andrew
    I'm trying to make a vim plugin that will split the window on load and simulate a info bar at the top of my terminal. I've got it sorta working but I think I've either reached limits of my knowledge of vim syntax or there's a logic problem in my code. The desired effect would be to do a reverse search for any declaration of a Perl subroutine form my current location in the active buffer and display the line in the top buffer. I'm also trying to make it skip that buffer when I switch buffers with <C-R>. My attempt at that so far can be seen in the mess of nested if statements. Anyway, here's the code. I would greatly appreciate feedback from anyone. (pastebin pastebin.com/8cuMPn1Q) let s:current_function_bufname = 'Current\ Function\/Subroutine' function! s:get_current_function_name(no_echo) let lnum = line(".") let col = col(".") if a:no_echo let s:current_function_name = getline(search("^[^s]sub .$", 'bW')) else echohl ModeMsg echo getline(search("^[^s]sub .$", 'bW')) "echo getline(search("^[^ \t#/]\{2}.[^:]\s$", 'bW')) echohl None endif endfunction let s:previous_winbufnr = 1 let s:current_function_name = '' let s:current_function_buffer_created = 0 let s:current_function_bufnr = 2 function! s:show_current_function() let total_buffers = winnr('$') let current_winbufnr = winnr() if s:previous_winbufnr != current_winbufnr if bufname(current_winbufnr) == s:current_function_bufname if s:previous_winbufnr < current_winbufnr let i = current_winbufnr + 1 if i total_buffers let i = 1 endif if i == s:current_function_bufnr let i = i + 1 endif if i total buffers let i = 1 endif exec i.'wincmd w' else let i = current_winbufnr - 1 if i < 1 let i = total_buffers endif if i == s:current_function_bufnr let i = i - 1 endif if i < 1 let i = total_buffers endif try exec i.'wincmd w' finally exec total_buffers.'wincmd w' endtry endif endif let s:previous_winbufnr = current_winbufnr return 1 endif if s:current_function_buffer_created == 0 exec 'top 1 split '.s:current_function_bufname call s:set_as_scratch_buffer() let s:current_function_buffer_created = 1 let s:current_function_bufnr = winnr() endif call s:activate_buffer_by_name(s:current_function_bufname) setlocal modifiable call s:get_current_function_name(1) call setline(1, s:current_function_name) setlocal nomodifiable call s:activate_buffer_by_name(bufname(current_winbufnr)) endfunction function! s:set_as_scratch_buffer() setlocal noswapfile setlocal nomodifiable setlocal bufhidden=delete setlocal buftype=nofile setlocal nobuflisted setlocal nonumber setlocal nowrap setlocal cursorline endfunction function! s:activate_buffer_by_name(name) for i in range(1, winnr('$')) let name = bufname(winbufnr(i)) let full_name = fnamemodify(bufname(winbufnr(i)), ':p') if name == a:name || full_name == a:name exec i.'wincmd w' return 1 endif endfor return 0 endfunction set laststatus=2 autocmd! CursorMoved,CursorMovedI,BufWinEnter * call s:show_current_function() (pastebin pastebin.com/8cuMPn1Q) similar to VIM: display custom reference bar on top of window and http://vim.wikia.com/wiki/Show_current_function_name_in_C_programs


  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using pg_buffercache module for finding hogs eating up my RAM cache. For example when I run this query:

        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        INNER JOIN pg_class c
            ON b.relfilenode = c.relfilenode
            AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
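    Assuming the default 8 kB PostgreSQL block size (each shared buffer holds one block; SHOW block_size reports the value actually in use), the conversion is just:

        120 buffers x 8 kB = 960 kB = 983,040 bytes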


  • Protocol buffer deserialization and a dynamically loaded DLL in Compact Framework

    - by cloudraven
    I saw a question related to this on the full framework here. Since it seems to have stayed unresolved for quite a while and this is for the compact framework, I though it would be better to create a new question for it. I want to deserialize types for which I am loading assemblies dynamically (with Assembly.LoadFrom) and I am getting a "Unable to identify known-type for ProtoIncludeAttribute" error. In the related question I mentioned, it was hinted that hooking AppDomain.AssemblyResolve event would help solving the problem. It makes sense for the full framework, but that event is not available in the CF. I wonder if there is a way to do this with CF. The structures I am using look a lot like this and all the classes required for deserialization are loaded from the same Assembly. If the assembly is referenced instead of dynamically loaded it works fine, but fails if done dynamically.

