Search Results

Search found 3117 results on 125 pages for 'buffer'.

Page 28/125

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management.

    Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server

    Now that we understand SQL Server memory management and Hyper-V Dynamic Memory basics, let's take a look at general configuration guidelines for getting the benefits of Hyper-V Dynamic Memory in your SQL Server VMs.

    Requirements

    Host operating system requirements: The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, your Hyper-V host needs to run Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1.

    Guest operating system requirements: In addition, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements for each operating system, see "Dynamic Memory Configuration Guidelines" here.

    SQL Server requirements: All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs, make sure that you are running one of the SQL Server editions listed below:
    · SQL Server 2005 Enterprise
    · SQL Server 2008 Enterprise / Datacenter editions
    · SQL Server 2008 R2 Enterprise / Datacenter editions
    Configuration guidelines for other editions of SQL Server are covered below in the FAQ section.

    Guidelines for configuring Dynamic Memory parameters

    Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:
    · Startup RAM: 1 GB + SQL Min Server Memory
    · Maximum RAM: > SQL Max Server Memory
    · Memory Buffer %: 5
    · Memory Weight: based on performance needs

    Startup RAM: To ensure that your SQL Server VMs can start correctly, make sure Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will have to page in order to start, since it will not see enough memory during startup. Also note that Startup RAM is always reserved for your VMs. This guarantees a certain level of performance for your SQL Servers; however, setting it too high will limit the consolidation benefits you get out of your virtualization environment.

    Maximum RAM: This one is obvious. If you have configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM setting is higher than this value. Otherwise your SQL Server will never grow beyond the value configured for Dynamic Memory.

    Memory Buffer %: The memory buffer setting is used to provision file cache to virtual machines in order to improve performance. Because SQL Server manages its own buffer pool, Memory Buffer should be set to the lowest possible value, 5%. Configuring a higher memory buffer will prevent low-resource notifications from the Windows Memory Manager and will prevent memory from being reclaimed from SQL Server VMs.

    Memory Weight: Memory weight defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. VMs with a higher memory weight will retain more memory under high memory pressure conditions on your host.

    Questions and Answers

    Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the "Locked Pages" memory model. This memory model ensures that SQL Server memory is never paged out, and it is also adaptive to dynamically changing memory in the system. This is extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, since it ensures no SQL Server memory is paged out. You can find instructions on configuring the "Locked Pages" memory model for your SQL Servers here.

    Q2 – What about other SQL Server editions? How should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory to allocate during startup and do not change this value afterwards. Therefore make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum RAM and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using static memory for these editions.

    Q3 – What if I have multiple SQL Server instances in a VM? Running multiple SQL Server instances in a VM is not a general recommendation, for reasons of predictable performance, manageability and isolation. To achieve predictable behavior, configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM, and make sure that:
    · Dynamic Memory Startup RAM is greater than the sum of the SQL Min Server Memory values for the instances in the VM
    · Dynamic Memory Maximum RAM is greater than the sum of the SQL Max Server Memory values for the instances in the VM

    Q4 – I'm using the Large Pages memory model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Pages memory model. In virtualized environments Hyper-V provides large page support by default, so most of the time the Large Pages memory model does not bring any benefit to a SQL Server running in a virtualized environment.

    Q5 – How do I monitor SQL performance when I'm trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server (a C# sketch of reading these two counters follows at the end of this post):
    · Process – Working Set: Available in the VM via the Process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM.
    · SQL Server – Buffer Cache Hit Ratio: Available in the VM via the SQL Server counters. It represents the paging being done by SQL Server. A rate of 90% or higher is desirable.

    Conclusion

    These blog posts are a quick start to a story that will be developing more in the near future. We're still continuing our testing and investigations in order to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it's time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment, see what you think, and let us know about your experiences.

    - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
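
    A minimal C# sketch of the counter reading described in Q5, using System.Diagnostics.PerformanceCounter. The process instance name "sqlservr" and the default-instance category "SQLServer:Buffer Manager" are assumptions; a named instance exposes a category like "MSSQL$InstanceName:Buffer Manager".

        using System;
        using System.Diagnostics;
        using System.Threading;

        class DynamicMemoryMonitor
        {
            static void Main()
            {
                // Physical memory used by the SQL Server process inside the VM.
                // "sqlservr" is the default process instance name; adjust if needed.
                var workingSet = new PerformanceCounter("Process", "Working Set", "sqlservr");

                // Buffer cache hit ratio for a default SQL Server instance.
                var hitRatio = new PerformanceCounter("SQLServer:Buffer Manager",
                                                      "Buffer cache hit ratio");

                // Sample both counters once per second for ten seconds.
                for (int i = 0; i < 10; i++)
                {
                    Console.WriteLine("Working Set: {0:N0} bytes, Buffer Cache Hit Ratio: {1:N1}",
                                      workingSet.NextValue(), hitRatio.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }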


  • Protecting Cookies: Once and For All

    - by Your DisplayName here!
    Every once in a while you run into a situation where you need to temporarily store data for a user in a web app. You typically have two options here – either store it server-side or put the data into a cookie (if size permits). When you also need web farm compatibility, things become a little more complicated because the data needs to be available on all nodes. In my case I went for a cookie – but I had some requirements:

    · The cookie must be protected from eavesdropping (sent only over SSL) and from client script
    · The cookie must be encrypted and signed to protect it from tampering
    · The cookie might become bigger than 4KB – some sort of overflow mechanism would be nice

    I really didn't want to implement yet another cookie protection mechanism – this feels wrong and, by the way, can go wrong as well. WIF to the rescue. The session management feature already implements the above requirements, but it is built around de/serializing IClaimsPrincipals into cookies and back. If you go one level deeper, though, you will find the CookieHandler and CookieTransform classes, which contain all the needed functionality.

        public class ProtectedCookie
        {
            private List<CookieTransform> _transforms;
            private ChunkedCookieHandler _handler = new ChunkedCookieHandler();

            // DPAPI protection (single server)
            public ProtectedCookie()
            {
                _transforms = new List<CookieTransform>
                    {
                        new DeflateCookieTransform(),
                        new ProtectedDataCookieTransform()
                    };
            }

            // RSA protection (load balanced)
            public ProtectedCookie(X509Certificate2 protectionCertificate)
            {
                _transforms = new List<CookieTransform>
                    {
                        new DeflateCookieTransform(),
                        new RsaSignatureCookieTransform(protectionCertificate),
                        new RsaEncryptionCookieTransform(protectionCertificate)
                    };
            }

            // custom transform pipeline
            public ProtectedCookie(List<CookieTransform> transforms)
            {
                _transforms = transforms;
            }

            public void Write(string name, string value, DateTime expirationTime)
            {
                byte[] encodedBytes = EncodeCookieValue(value);
                _handler.Write(encodedBytes, name, expirationTime);
            }

            public void Write(string name, string value, DateTime expirationTime, string domain, string path)
            {
                byte[] encodedBytes = EncodeCookieValue(value);
                _handler.Write(encodedBytes, name, path, domain, expirationTime, true, true, HttpContext.Current);
            }

            public string Read(string name)
            {
                var bytes = _handler.Read(name);

                if (bytes == null || bytes.Length == 0)
                {
                    return null;
                }

                return DecodeCookieValue(bytes);
            }

            public void Delete(string name)
            {
                _handler.Delete(name);
            }

            // run the value forward through the transform pipeline (compress, then protect)
            protected virtual byte[] EncodeCookieValue(string value)
            {
                var bytes = Encoding.UTF8.GetBytes(value);
                byte[] buffer = bytes;

                foreach (var transform in _transforms)
                {
                    buffer = transform.Encode(buffer);
                }

                return buffer;
            }

            // run the raw cookie bytes backward through the pipeline (unprotect, then decompress)
            protected virtual string DecodeCookieValue(byte[] bytes)
            {
                var buffer = bytes;

                for (int i = _transforms.Count; i > 0; i--)
                {
                    buffer = _transforms[i - 1].Decode(buffer);
                }

                return Encoding.UTF8.GetString(buffer);
            }
        }

    HTH
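
    A minimal usage sketch (not from the original post), assuming the single-server DPAPI constructor and an active ASP.NET request so HttpContext.Current is available:

        // somewhere in a page or handler, over SSL
        var cookie = new ProtectedCookie();

        // write a protected value that expires in one hour
        cookie.Write("temp-data", "some per-user state", DateTime.UtcNow.AddHours(1));

        // read it back on a later request (returns null if missing)
        string value = cookie.Read("temp-data");

        // remove it when done
        cookie.Delete("temp-data");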


  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people confuse the PAGEIOLATCH_X and PAGELATCH_X wait types. There is actually a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue.

    Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight, short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment by Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache, and on the physical disk, when data is moved in either direction. As soon as the data is moved, the latch is released.

    Now, let us understand the wait stat types related to latches. From Books Online:
    PAGELATCH_DT – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode.
    PAGELATCH_EX – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode.
    PAGELATCH_KP – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode.
    PAGELATCH_SH – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode.
    PAGELATCH_UP – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode.

    PAGELATCH_X explanation: This wait type shows up when there is contention for access to in-memory pages. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it has to wait. These wait types are usually created at the same time. Additionally, this wait is commonly visible when TempDB has high contention. Heavily used indexes can create contention as well, leading to this wait type.

    Reducing PAGELATCH_X waits: The following counters are useful to understand the status of PAGELATCH (a C# sketch for querying the wait stats themselves follows at the end of this excerpt):
    Average Latch Wait Time (ms) – The wait time for latch requests that had to wait.
    Latch Waits/sec – The number of latch requests that could not be granted immediately.
    Total Latch Wait Time (ms) – The total latch wait time for latch requests in the last second.

    If there is TempDB contention, I suggest that you read the blog post by Robert Davis right away. He has written an excellent blog post on how to find TempDB contention; the same post explains the terms related to GAM, SGAM and PFS allocation. If there is TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully.

    I totally understand that this blog post is not as clear as my other blog posts. If this wait stat is one of your highest wait types, do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point out the real bottleneck in your system. Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and I do not claim it to be accurate. I suggest reading Books Online for further clarification. All of the discussions of wait stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
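
    A minimal C# sketch (not from the original post) for pulling the PAGELATCH waits out of sys.dm_os_wait_stats, which is one quick way to see whether this wait type ranks high on your system. The connection string is a placeholder:

        using System;
        using System.Data.SqlClient;

        class WaitStats
        {
            static void Main()
            {
                // Placeholder connection string; point it at your own instance.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                const string query =
                    @"SELECT wait_type, waiting_tasks_count, wait_time_ms
                      FROM sys.dm_os_wait_stats
                      WHERE wait_type LIKE 'PAGELATCH%'
                      ORDER BY wait_time_ms DESC";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,-15} tasks={1,10} wait_ms={2,12}",
                                reader.GetString(0), reader.GetInt64(1), reader.GetInt64(2));
                        }
                    }
                }
            }
        }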


  • sprintf an LPCWSTR variable

    - by Julio
    Hello everyone. I'm trying to debug-print an LPCWSTR string, but I get a problem when sprintf pushes it into the buffer: it retrieves only the first character from the string. Here is the code:

        HANDLE WINAPI hookedCreateFileW(LPCWSTR lpFileName, DWORD dwDesiredAccess, DWORD dwShareMode,
            LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition,
            DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
        {
            char buffer[1024];
            sprintf_s(buffer, 1024, "CreateFileW: %s", lpFileName);
            OutputDebugString(buffer);
            return trueCreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes,
                dwFlagsAndAttributes, dwCreationDisposition, hTemplateFile);
        }

    For example I get "CreateFileW: C" or "CreateFileW: \". How do I properly push it into the buffer? Thank you.


  • Vsync in Flex/Flash/AS3?

    - by oshyshko
    I work on a 2D shooter game with lots of moving objects on the screen (bullets etc). I use BitmapData.copyPixels(...) to render the entire screen to a buffer:BitmapData. Then I "copyPixels" from "buffer" to screen:BitmapData. The framerate is 60.

        private var bitmap:Bitmap = new Bitmap();
        private var buffer:Bitmap = new Bitmap();

        private function start():void {
            addChild(bitmap);
        }

        private function onEnterFrame():void {
            // render into "buffer"
            // copy "buffer" -> "bitmap"
        }

    The problem is that the sprites are tearing apart: parts of a sprite get shifted horizontally. It looks like a PC game with VSYNC turned off. Has anyone solved this problem?

    UPDATE: the question is not about performance, but about getting rid of screen tearing.

    UPDATE: I've created another question and there you may try both implementations: using the Flash way or BitmapData + copyPixels()


  • how to get the size of a C global array into an assembly program written for the avr architecture co

    - by johannes
    I have a .c file with the following:

        uint8_t buffer[32];

    I have a .S file where I want to do the following:

        cpi r29, buffer+sizeof(buffer)

    The second argument of cpi must be an immediate value, not a location. But unfortunately sizeof() is a C operator. Both files are compiled to separate object files and linked afterwards. If I run avr-objdump -x on the C file's object file, amongst other things I get the size of the buffer, so the size is already available in the object file. How do I access the size of the buffer in my assembly file at compile time?


  • Help writing emacs lisp for emacs etags search

    - by user535707
    I'm looking for some help developing what I think should be an easy program. I want something similar to Emacs' tags-search command, but I want to collect all search results into a buffer. (I want to see all results of M-,) I'm thinking this Python-style pseudocode should work, but I have no idea how to do this in Emacs Lisp. Any help would be greatly appreciated.

        def myTagsGrep(searchValue):
            for aFile in the tag list:
                result = grep aFile searchValue
                if len(result) > 0:
                    print aFile    # to the buffer
                    print result   # to the buffer

    I would like to be able to browse through the buffer with the same features tags-apropos does. Note that a similar question has been asked before: Is there a way to get emacs tag-search command to output all results to a buffer?


  • LearnBoost's Socket.IO-Node: why does onClientMessage not work?

    - by KingPin
    Hi all, I tried out the "LearnBoost's Socket.IO-Node" module. Everything works except the 'onClientMessage' event. Any idea what the problem could be? Thanks! (Sorry for my English.)

        io.listen(server, {
            onClientConnect: function(client){
                client.send(json({ buffer: buffer }));
                client.broadcast(json({ announcement: client.sessionId + ' connected' }));
            },
            onClientDisconnect: function(client){
                client.broadcast(json({ announcement: client.sessionId + ' disconnected' }));
            },
            onClientMessage: function(message, client){
                var msg = { mess: [client.sessionId, message] };
                buffer.push(msg);
                if (buffer.length > 15) { buffer.shift(); }
                client.broadcast(json(msg));
            }


  • what is wrong in this code (OpenAL in VC++)

    - by maiajam
    Hi, how are you all? I need your help. I have this code:

        #include <conio.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <al.h>
        #include <alc.h>
        #include <alut.h>

        #pragma comment(lib, "openal32.lib")
        #pragma comment(lib, "alut.lib")

        /*
         * These are OpenAL "names" (or "objects"). They store an id of a buffer
         * or a source object. Generally you would expect to see the implementation
         * use values that scale up from '1', but don't count on it. The spec does
         * not make this mandatory (as it is in OpenGL). The id's can easily be memory
         * pointers as well. It will depend on the implementation.
         */

        // Buffers to hold sound data.
        ALuint Buffer;

        // Sources are points of emitting sound.
        ALuint Source;

        /*
         * These are 3D cartesian vector coordinates. A structure or class would be
         * a more flexible way of handling these, but for the sake of simplicity we
         * will just leave it as is.
         */

        // Position of the source sound.
        ALfloat SourcePos[] = { 0.0, 0.0, 0.0 };

        // Velocity of the source sound.
        ALfloat SourceVel[] = { 0.0, 0.0, 0.0 };

        // Position of the Listener.
        ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };

        // Velocity of the Listener.
        ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };

        // Orientation of the Listener. (first 3 elements are "at", second 3 are "up")
        // Also note that these should be units of '1'.
        ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };

        /*
         * ALboolean LoadALData()
         *
         * This function will load our sample data from the disk using the Alut
         * utility and send the data into OpenAL as a buffer. A source is then
         * also created to play that buffer.
         */
        ALboolean LoadALData()
        {
            // Variables to load into.
            ALenum format;
            ALsizei size;
            ALvoid* data;
            ALsizei freq;
            ALboolean loop;

            // Load wav data into a buffer.
            alGenBuffers(1, &Buffer);
            if (alGetError() != AL_NO_ERROR)
                return AL_FALSE;

            alutLoadWAVFile((ALbyte *)"C:\Users\Toshiba\Desktop\Graduation Project\OpenAL\open AL test\wavdata\FancyPants.wav",
                            &format, &data, &size, &freq, &loop);
            alBufferData(Buffer, format, data, size, freq);
            alutUnloadWAV(format, data, size, freq);

            // Bind the buffer with the source.
            alGenSources(1, &Source);
            if (alGetError() != AL_NO_ERROR)
                return AL_FALSE;

            alSourcei (Source, AL_BUFFER,   Buffer);
            alSourcef (Source, AL_PITCH,    1.0);
            alSourcef (Source, AL_GAIN,     1.0);
            alSourcefv(Source, AL_POSITION, SourcePos);
            alSourcefv(Source, AL_VELOCITY, SourceVel);
            alSourcei (Source, AL_LOOPING,  loop);

            // Do another error check and return.
            if (alGetError() == AL_NO_ERROR)
                return AL_TRUE;

            return AL_FALSE;
        }

        /*
         * void SetListenerValues()
         *
         * We already defined certain values for the Listener, but we need
         * to tell OpenAL to use that data. This function does just that.
         */
        void SetListenerValues()
        {
            alListenerfv(AL_POSITION,    ListenerPos);
            alListenerfv(AL_VELOCITY,    ListenerVel);
            alListenerfv(AL_ORIENTATION, ListenerOri);
        }

        /*
         * void KillALData()
         *
         * We have allocated memory for our buffers and sources which needs
         * to be returned to the system. This function frees that memory.
         */
        void KillALData()
        {
            alDeleteBuffers(1, &Buffer);
            alDeleteSources(1, &Source);
            alutExit();
        }

        int main(int argc, char *argv[])
        {
            printf("MindCode's OpenAL Lesson 1: Single Static Source\n\n");
            printf("Controls:\n");
            printf("p) Play\n");
            printf("s) Stop\n");
            printf("h) Hold (pause)\n");
            printf("q) Quit\n\n");

            // Initialize OpenAL and clear the error bit.
            alutInit(NULL, 0);
            alGetError();

            // Load the wav data.
            if (LoadALData() == AL_FALSE)
            {
                printf("Error loading data.");
                return 0;
            }

            SetListenerValues();

            // Setup an exit procedure.
            atexit(KillALData);

            // Loop.
            ALubyte c = ' ';
            while (c != 'q')
            {
                c = getche();
                switch (c)
                {
                    // Pressing 'p' will begin playing the sample.
                    case 'p': alSourcePlay(Source); break;

                    // Pressing 's' will stop the sample from playing.
                    case 's': alSourceStop(Source); break;

                    // Pressing 'h' will pause the sample.
                    case 'h': alSourcePause(Source); break;
                };
            }

            return 0;
        }

    It builds and runs, but I can't hear anything. I am also new to programming and want to program virtual reality sound for my graduation project, so I have started learning OpenAL and VC++, but I don't know how to start or where to begin. I also want to ask: do I need to learn the Windows API, and if so, how can I learn it? Thank you a lot, and I am sorry for my English.


  • reading a file that doesn't exist

    - by John
    Hi, I have got a small program that prints the contents of files using the system call read:

        unsigned char buffer[8];
        size_t offset = 0;
        size_t bytes_read;
        int i;
        int fd = open(argv[1], O_RDONLY);

        do {
            bytes_read = read(fd, buffer, sizeof(buffer));
            printf("0x%06x : ", offset);
            for (i = 0; i < bytes_read; ++i) {
                printf("%c ", buffer[i]);
            }
            printf("\n");
            offset = offset + bytes_read;
        } while (bytes_read == sizeof(buffer));

    Now when running it, I give a file name that doesn't exist. It prints some kind of data mixed with environment variables, and a segmentation fault at the end. How is this possible? What is the program printing? Thanks, John


  • Download Large Files using java

    - by angelina
    Dear all, I am building an application in which I want to download large files to a handset (mobile), but if the size of the file is large I get an exception: SocketException - broken pipe.

        inputStream = new FileInputStream(path);
        byte[] buffer = new byte[1024];
        int bytesRead = 0;
        do {
            bytesRead = inputStream.read(buffer, offset, buffer.length);
            resp.getOutputStream().write(buffer, 0, bytesRead);
        } while (bytesRead == buffer.length);
        resp.getOutputStream().flush();
        }


  • What is the most efficient way to read many bytes from SQL Server using SqlDataReader (C#)

    - by eccentric
    Hi everybody! What is the most efficient way to read bytes (8-16 K) from SQL Server using SqlDataReader? It seems I know 2 ways:

        byte[] buffer = new byte[4096];
        MemoryStream stream = new MemoryStream();
        long l, dataOffset = 0;

        while ((l = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
        {
            stream.Write(buffer, 0, buffer.Length);
            dataOffset += l;
        }

    and

        reader.GetSqlBinary(columnIndex).Value

    The data type is IMAGE.
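
    One thing worth noting (an editor's observation, not the asker's): the first variant writes buffer.Length bytes on every pass, so a final partial chunk would append garbage. A corrected sketch of that loop, also assuming the reader was opened with CommandBehavior.SequentialAccess so the BLOB streams instead of being buffered whole:

        // Assumes: using System.Data.SqlClient; and a reader obtained via
        // command.ExecuteReader(CommandBehavior.SequentialAccess)
        byte[] buffer = new byte[4096];
        var stream = new System.IO.MemoryStream();
        long bytesRead, dataOffset = 0;

        while ((bytesRead = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
        {
            // write only the bytes actually read, not the whole buffer
            stream.Write(buffer, 0, (int)bytesRead);
            dataOffset += bytesRead;
        }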


  • zeroing out memory

    - by robUK
    Hello, gcc 4.4.4, C89. I am just wondering what most C programmers do when they want to zero out memory. For example, say I have a buffer of 1024 bytes. Sometimes I do this:

        char buffer[1024] = {0};

    which will zero all the bytes. However, should I instead declare it like this and use memset?

        char buffer[1024];
        ...
        memset(buffer, 0, sizeof(buffer));

    Is there any real reason you have to zero the memory? What is the worst that can happen by not doing it? Many thanks for any suggestions.


  • Using sizeof with a dynamically allocated array

    - by robUK
    Hello, gcc 4.4.1, C89. I have the following code snippet:

        #include <stdlib.h>
        #include <stdio.h>

        char *buffer = malloc(10240);

        /* Check for memory error */
        if (!buffer) {
            fprintf(stderr, "Memory error\n");
            return 1;
        }

        printf("sizeof(buffer) [ %d ]\n", sizeof(buffer));

    However, sizeof(buffer) always prints 4. I know that a char* is only 4 bytes. However, I have allocated 10kb of memory, so shouldn't the size be 10240? Am I thinking about this the right way? Many thanks for any suggestions.


  • How can I optimize this code?

    - by loop0
    Hi, I'm developing a logger daemon for squid that stores the logs in a mongodb database, but I'm experiencing too much CPU utilization. How can I optimize this code?

        from sys import stdin
        from pymongo import Connection

        connection = Connection()
        db = connection.squid
        logs = db.logs

        buffer = []
        a = 'timestamp'
        b = 'resp_time'
        c = 'src_ip'
        d = 'cache_status'
        e = 'reply_size'
        f = 'req_method'
        g = 'req_url'
        h = 'username'
        i = 'dst_ip'
        j = 'mime_type'
        L = 'L'

        while True:
            l = stdin.readline()
            if l[0] == L:
                l = l[1:].split()
                buffer.append({
                    a: float(l[0]),
                    b: int(l[1]),
                    c: l[2],
                    d: l[3],
                    e: int(l[4]),
                    f: l[5],
                    g: l[6],
                    h: l[7],
                    i: l[8],
                    j: l[9]
                })
                if len(buffer) == 1000:
                    logs.insert(buffer)
                    buffer = []
            if not l:
                break

        connection.disconnect()


  • Access violation C++ (Deleting items in a vector)

    - by Gio Borje
    I'm trying to remove non-matching results from a memory scanner I'm writing in C++ as practice. When the memory is initially scanned, all results are stored in the _results vector. Later, _results is scanned again and items that no longer match should be erased. The error:

        Unhandled exception at 0x004016f4 in .exe: 0xC0000005: Access violation reading location 0x0090c000.

        // Receives data
        DWORD buffer;

        for (vector<memblock>::iterator it = MemoryScanner::_results.begin();
             it != MemoryScanner::_results.end(); ++it)
        {
            // Reads data from an area of memory into buffer
            ReadProcessMemory(MemoryScanner::_hProc, (LPVOID)(*it).address,
                              &buffer, sizeof(buffer), NULL);

            if (value != buffer)
            {
                MemoryScanner::_results.erase(it); // where the program breaks
            }
        }


  • how to divide a header from binary data

    - by fixo2020
    Hi, I have this code:

        ofstream dest("test.txt", ios::binary);

        while (true) {
            size_t retval = recv(sd, buffer, sizeof(buffer), 0);
            dest.write(buffer, retval);
            if (retval <= 0) {
                delete[] buffer;
                break;
            }
        }

    Now, the recv() function returns 4 bytes each loop, right? And buffer contains them. This returns all the data, both the pseudo-header and the binary data (an image), but I want to capture only the binary data. I know that the end of the header is "\n\r", right? But which solution is better: should I write a function that detects the "\n\r" and then captures the binary data after it? Or should I put all the data in memory and parse it afterwards? But how? I'm desperate :(


  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, everything renders blue, but the triangle doesn't show up. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO. I've pasted the setup code at the end.

    Here is my rendering code:

        void FrameBufferObject::Render(unsigned int elapsedGameTime)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
            glClearColor(0.0, 0.6, 0.5, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // adjust viewport and projection matrices to texture dimensions
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, m_FBOWidth, m_FBOHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            DrawTriangle();

            glPopAttrib();

            // setting FrameBuffer back to window-specified Framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

            // back to normal viewport and projection matrix
            //glViewport(0, 0, 1280, 768);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 1.33, 1.0, 1000.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            render(elapsedGameTime);
        }

        void FrameBufferObject::DrawTriangle()
        {
            glPushMatrix();
            glBegin(GL_TRIANGLES);
                glColor3f(1, 0, 0);
                glVertex2d(0, 0);
                glVertex2d(m_FBOWidth, 0);
                glVertex2d(m_FBOWidth, m_FBOHeight);
            glEnd();
            glPopMatrix();
        }

        void FrameBufferObject::render(unsigned int elapsedTime)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);

            glPushMatrix();
            glTranslated(0, 0, -20);
            glBegin(GL_QUADS);
                glColor4f(1, 1, 1, 1);
                glTexCoord2f(1, 1); glVertex3f(1, 1, 1);
                glTexCoord2f(0, 1); glVertex3f(-1, 1, 1);
                glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
                glTexCoord2f(1, 0); glVertex3f(1, -1, 1);
            glEnd();
            glPopMatrix();

            glBindTexture(GL_TEXTURE_2D, 0);
            glDisable(GL_TEXTURE_2D);
        }

        void FrameBufferObject::Initialize()
        {
            // Generate FBO
            glGenFramebuffers(1, &m_FBO);
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

            // Add depth buffer as a renderbuffer to the FBO
            // create depth buffer id
            glGenRenderbuffers(1, &m_DepthBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);

            // allocate space in the renderbuffer for the depth buffer
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);

            // attach depth buffer to the FBO at the depth attachment point
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

            // Adding a texture to the FBO
            // Create a texture
            glGenTextures(1, &m_TextureID);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
            glBindTexture(GL_TEXTURE_2D, 0);

            // attach texture to FBO
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

            // Check FBO status
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

            // switch back to window-system framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Thanks!


  • Vimdiff with git mergetool error: "More than two buffers in diff mode"

    - by Elizabeth Buckwalter
    I've read Vimdiff and Viewing differences with Vimdiff, plus done various Google searches using things like "vimdiff multiple", "vimdiff git", "vimdiff commands", etc. When using do or diffg I get the error "More than two buffers in diff mode, don't know which one to use". When using diffg v:fname_in I get "No matching buffer for v:fname_in".

    From the vimdiff documentation:

        :[range]diffg[et] [bufspec]
            Modify the current buffer to undo difference with another buffer. If [bufspec] is given, that buffer is used. If [bufspec] refers to the current buffer then nothing happens. Otherwise this only works if there is one other buffer in diff mode.

    and more:

        When 'diffexpr' is not empty, Vim evaluates to obtain a diff file in the format mentioned. These variables are set to the file names used:
            v:fname_in    original file
            v:fname_new   new version of the same file
            v:fname_out   resulting diff file

    So, I need to get the name of bufspec, but the default variables (fname_in, fname_new, and fname_out) aren't set. I ran the command git mergetool on a Linux box through a terminal.


  • How to find the cause of a SocketException with the message that an established connection was aborted

    - by cdpnet
    Hi all, I know that similar questions may have been asked many times, but I want to describe the behavior I'm seeing and find out if somebody can help predict the cause of it. I am writing a Windows service which connects to another Windows service over TCP. There are 100 user entities, with 5 connections each. These users perform their tasks using their individual connections. The application goes on without seeing this problem for 1 or 2 days, or sometimes shows the problem right after starting (rarely). The best run I had was 4 to 5 days without this exception, and after that the application died or I had to stop it for various reasons. I want to know what can be causing this. Here is the stack trace:

        System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
           at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           --- End of inner exception stack trace ---
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)
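
    The stack trace alone doesn't say which side aborted. One way to narrow such aborts down (a sketch, not from the original question) is to log the underlying SocketError code whenever a write fails, since the SocketException usually hides inside the IOException:

        using System;
        using System.IO;
        using System.Net.Sockets;

        static void WriteWithDiagnostics(Stream stream, byte[] payload)
        {
            try
            {
                stream.Write(payload, 0, payload.Length);
            }
            catch (IOException ex)
            {
                var se = ex.InnerException as SocketException;
                if (se != null)
                {
                    // e.g. ConnectionAborted (10053) or ConnectionReset (10054)
                    Console.Error.WriteLine("Socket error {0} ({1})",
                                            se.SocketErrorCode, se.ErrorCode);
                }
                throw; // let the caller decide whether to reconnect
            }
        }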


  • Reading binary data from serial port using Dejan TComport Delphi component

    - by johnma
    Apologies for this question, but I am a bit of a noob with Delphi. I am using the Dejan TComPort component to get data from a serial port. A box of equipment connected to the port sends about 100 bytes of binary data to the serial port. What I want to do is extract the bytes as numerical values into an array so that I can perform calculations on them.

    TComPort has a method Read(Buffer, Count) which reads data from the input buffer:

        function Read(var Buffer; Count: Integer): Integer;

    The help says the Buffer variable must be large enough to hold Count bytes, but does not provide any example of how to use this function. I can see that the Count variable holds the number of bytes received, but I can't find a way to access the bytes in Buffer.

    TComPort also has a method ReadStr which reads data from the input buffer into a STRING variable:

        function ReadStr(var Str: String; Count: Integer): Integer;

    Again the Count variable shows the number of bytes received, and I can use Memo1.Text := str to display some information, but obviously Memo1 has problems displaying the control characters. I have tried various ways to extract the byte data from Str, but so far without success. I am sure it must be easy. Here's hoping.


  • Why is setting HTML5's CanvasPixelArray values ridiculously slow and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example, if I have code like:

        imageData = ctx.getImageData(0, 0, 500, 500);

        for (var i = 0; i < imageData.data.length; i += 4) {
            imageData.data[i]     = buffer[i];
            imageData.data[i + 1] = buffer[i + 1];
            imageData.data[i + 2] = buffer[i + 2];
        }

        ctx.putImageData(imageData, 0, 0);

    profiling with Chrome reveals it runs 44% slower than the following code, where CanvasPixelArray is not used:

        tempArray = new Array(500 * 500 * 4);
        imageData = ctx.getImageData(0, 0, 500, 500);

        for (var i = 0; i < imageData.data.length; i += 4) {
            tempArray[i]     = buffer[i];
            tempArray[i + 1] = buffer[i + 1];
            tempArray[i + 2] = buffer[i + 2];
        }

        ctx.putImageData(imageData, 0, 0);

    My guess is that the reason for this slowdown is the conversion between the JavaScript doubles and the internal unsigned 8-bit integers used by the CanvasPixelArray. Is this guess correct? Is there any way to reduce the time spent setting values in the CanvasPixelArray?


  • C# PInvoke VerQueryValue returns back OutOfMemoryException?

    - by Bopha
    Hi, below is a code sample which I got from an online resource. It is supposed to work on the full framework, but when I try to build it using C# for smart devices, it throws an exception saying it's out of memory. Does anybody know how I can fix it to work on the Compact Framework? The OutOfMemoryException occurs when I make the second call to VerQueryValue, which is the last one. Thanks.

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] buffer, string subblock, out IntPtr blockbuffer, out uint len);

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] pBlock, string pSubBlock, out string pValue, out uint len);

        private static void GetAssemblyVersion()
        {
            string filename = @"\Windows\MyLibrary.dll";
            if (File.Exists(filename))
            {
                try
                {
                    int handle = 0;
                    Int32 size = 0;
                    size = GetFileVersionInfoSize(filename, out handle);
                    if (size > 0)
                    {
                        bool retValue;
                        byte[] buffer = new byte[size];
                        retValue = GetFileVersionInfo(filename, handle, size, buffer);
                        if (retValue == true)
                        {
                            bool success = false;
                            IntPtr blockbuffer = IntPtr.Zero;
                            uint len = 0;

                            //success = VerQueryValue(buffer, "\\", out blockbuffer, out len);
                            success = VerQueryValue(buffer, @"\VarFileInfo\Translation", out blockbuffer, out len);
                            if (success)
                            {
                                int p = (int)blockbuffer;

                                // Reads a 16-bit signed integer from unmanaged memory
                                int j = Marshal.ReadInt16((IntPtr)p);
                                p += 2;

                                // Reads a 16-bit signed integer from unmanaged memory
                                int k = Marshal.ReadInt16((IntPtr)p);

                                string sb = string.Format("{0:X4}{1:X4}", j, k);
                                string spv = @"\StringFileInfo\" + sb + @"\ProductVersion";

                                string versionInfo;
                                VerQueryValue(buffer, spv, out versionInfo, out len);
                            }
                        }
                    }
                }
                catch (Exception err)
                {
                    string error = err.Message;
                }
            }
        }


  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this; I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious!

    When I ran ildasm on mscorlib.dll I received this for Array.CopyTo:

        .method public hidebysig newslot virtual final
                instance void CopyTo(class System.Array 'array', int32 index) cil managed
        {
            // Code size 0 (0x0)
        } // end of method Array::CopyTo

    and this for Buffer.BlockCopy:

        .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset,
                class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall
        {
            .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 )
        } // end of method Buffer::BlockCopy

    Which, frankly, baffles me. I've never run ildasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question, where Marc Gravell said that [Buffer.BlockCopy] is basically a wrapper over a raw mem-copy. While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented and, if I have future questions about the internals of C#, whether it is possible for me to investigate them. Or am I out of luck?
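
    Whatever the internals turn out to be, the two calls also differ at the API surface, which a small snippet makes concrete (an illustrative sketch, not part of the original question): Buffer.BlockCopy counts in bytes, Array.CopyTo in elements.

        using System;

        class CopyComparison
        {
            static void Main()
            {
                int[] src = { 1, 2, 3, 4 };
                int[] viaBlockCopy = new int[4];
                int[] viaCopyTo = new int[4];

                // Buffer.BlockCopy measures offsets and count in BYTES...
                Buffer.BlockCopy(src, 0, viaBlockCopy, 0, src.Length * sizeof(int));

                // ...while Array.CopyTo takes a destination ELEMENT index.
                src.CopyTo(viaCopyTo, 0);

                Console.WriteLine(string.Join(",", viaBlockCopy)); // 1,2,3,4
                Console.WriteLine(string.Join(",", viaCopyTo));    // 1,2,3,4
            }
        }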


  • How to read LARGE Sqlite file to be copied into Android emulator, or device from assets folder?

    - by Peter SHINe
    I guess many people have already read this article: Using your own SQLite database in Android applications: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368

    However, it keeps throwing an IOException at:

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file. It's as big as 8MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (for I am using Korean), and added an android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code using a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but no place gave me a clear answer, or a good solution (good meaning efficient or easy). I will try to use BufferedInput(Output)Stream, but if the simpler one cannot work, I don't think that will work either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate anyone's considerate answer. Thank you.

    WITH MORE DETAIL:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

