Search Results

Search found 13889 results on 556 pages for 'scratch memory'.

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:

    - C++ -- The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C -- igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java -- I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python -- igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, in 2005, looks stale now.

    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).
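
    For calibration on the C++ option, a minimal sketch of single-threaded BGL usage (the size is illustrative only, not a scalability claim):

        #include <boost/graph/adjacency_list.hpp>

        int main() {
            // vecS/vecS stores vertices as integer indices, edges in per-vertex vectors
            typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                          boost::undirectedS> Graph;
            Graph g(1000000);          // pre-size for 1M vertices
            boost::add_edge(0, 1, g);  // vertices are referenced by index
            return 0;
        }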

  • android compile error: could not reserve enough space for object heap

    - by moonlightcheese
    I'm getting this error during compilation: "Error occurred during initialization of VM. Could not create the Java virtual machine. Could not reserve enough space for object heap." What's worse, the error occurs intermittently. Sometimes it happens, sometimes it doesn't. It seems to be dependent on the amount of code in the application. If I get rid of some variables or drop some imported libraries, it will compile. Then when I add more to it, I get the error again. I've included the following sources into the application in the [project_root]/src/ directory:

    - org.apache.httpclient (I've stripped all references to log4j from the sources, so it doesn't need it)
    - org.apache.codec (as a dependency)
    - org.apache.httpcore (dependency of httpclient)

    and my own activity code, consisting of nothing more than an instance of HttpClient. I know this has something to do with the amount of memory necessary at compile time or some compiler options, and I'm not really stressing my system while I'm coding. I've got 2GB of memory on this Core Duo laptop, and Windows reports only 860MB page file usage (I haven't used any other memory tools). I should have plenty of memory and processing power for this... and I'm only compiling some common HTTP libs -- a total of 406 source files. What gives? Android API level: 5. Android SDK rel 5. JDK version: 1.6.0_12.
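
    The wording of this error usually means the JVM could not reserve a contiguous block of address space for its heap, which fits the intermittent behavior. If the build runs from the command line or Ant, one thing to try (an assumption to verify, not a confirmed fix) is capping the heap of every spawned JVM via the _JAVA_OPTIONS environment variable, which Sun/HotSpot JVMs read:

        rem Windows: applies the heap cap to javac/dx and any other JVM the build launches
        set _JAVA_OPTIONS=-Xmx512m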

  • C# PInvoke VerQueryValue returns back OutOfMemoryException?

    - by Bopha
    Hi, below is a code sample which I got from an online resource; it's supposed to work with the full framework, but when I try to build it using C# Smart Device it throws an exception saying it's out of memory. Does anybody know how I can fix it to use on Compact? The out-of-memory exception happens when I make the second call to VerQueryValue, which is the last one. Thanks.

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] buffer, string subblock, out IntPtr blockbuffer, out uint len);

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] pBlock, string pSubBlock, out string pValue, out uint len);

        //
        private static void GetAssemblyVersion()
        {
            string filename = @"\Windows\MyLibrary.dll";
            if (File.Exists(filename))
            {
                try
                {
                    int handle = 0;
                    Int32 size = 0;
                    size = GetFileVersionInfoSize(filename, out handle);
                    if (size > 0)
                    {
                        bool retValue;
                        byte[] buffer = new byte[size];
                        retValue = GetFileVersionInfo(filename, handle, size, buffer);
                        if (retValue == true)
                        {
                            bool success = false;
                            IntPtr blockbuffer = IntPtr.Zero;
                            uint len = 0;
                            //success = VerQueryValue(buffer, "\\", out blockbuffer, out len);
                            success = VerQueryValue(buffer, @"\VarFileInfo\Translation", out blockbuffer, out len);
                            if (success)
                            {
                                int p = (int)blockbuffer;
                                // Reads a 16-bit signed integer from unmanaged memory
                                int j = Marshal.ReadInt16((IntPtr)p);
                                p += 2;
                                // Reads a 16-bit signed integer from unmanaged memory
                                int k = Marshal.ReadInt16((IntPtr)p);
                                string sb = string.Format("{0:X4}{1:X4}", j, k);
                                string spv = @"\StringFileInfo\" + sb + @"\ProductVersion";
                                string versionInfo;
                                VerQueryValue(buffer, spv, out versionInfo, out len);
                            }
                        }
                    }
                }
                catch (Exception err)
                {
                    string error = err.Message;
                }
            }
        }
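
    One direction worth testing (an assumption, not a confirmed Compact Framework fix): marshalling the result as out string asks the runtime to copy from a buffer whose length it may mis-measure; using the IntPtr overload for the final call too, and converting manually, sidesteps that:

        IntPtr pValue;
        if (VerQueryValue(buffer, spv, out pValue, out len))
        {
            // len is in characters for version strings and may include the terminator
            string versionInfo = Marshal.PtrToStringUni(pValue, (int)len).TrimEnd('\0');
        }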

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all, just reading them and querying the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
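
    A minimal sketch of the fragment-at-a-time idea (the <record> element name is hypothetical; the point is that only one subtree is ever in memory):

        using System.Xml;
        using System.Xml.Linq;

        using (XmlReader reader = XmlReader.Create(@"C:\data\big.xml"))
        {
            while (reader.ReadToFollowing("record"))
            {
                // Materializes just this subtree, not the whole document
                XElement fragment = (XElement)XNode.ReadFrom(reader);
                // run the involved queries against the small fragment here
            }
        }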

  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello, I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET. So far I have used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemoryException was thrown. (I assume this is because I cannot load the whole table into my memory.) :) Now I am looking for a good way to perform this extract/transfer/load without storing the whole table in memory. I would like to use a DataReader and read the single data records into a DataTable. If there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to have free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory. My code so far:

        Dim dt As New DataTable
        Dim count As Int32
        count = 0
        ' reads data records from oracle database table
        While rdr.Read()
            ' read n records and add them to a dataTable
            While count < 10000
                dt.Rows.Add(????)
                count = count + 1
            End While
            ' transform data in the dataTable, and insert it to the destination
            ' flush the dataTable after insertion
            count = 0
        End While

    Thank you very much for your response!
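
    A minimal sketch of the two missing pieces, assuming rdr is an IDataReader (the surrounding loop is kept as in the question and would still need its own fix to advance the reader once per row):

        ' Copy the current record into the table
        Dim values(rdr.FieldCount - 1) As Object
        rdr.GetValues(values)
        dt.Rows.Add(values)

        ' After the batch is processed and inserted, drop the rows
        dt.Clear()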

  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3GB of RAM. During the startup of the application I see that a large amount of memory is allocated. So far so good, but as I add more and more vision operations, the memory allocation increases and a part of the application (only the Cognex OCX) stops working well. The rest of the application still works (worker threads, COM on socket...). I did whatever I could to save memory, but when the allocated memory reaches about 700MB I begin to have the problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported. Anyway, I'm thinking of trying to migrate my app to Win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application - which would remain a 32-bit application - and still take advantage of the 64 bits? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64-bit Cognex library? If I have to recompile my application, is it possible to cross-compile it so that my development suite stays on a 32-bit PC? Any help will be very much appreciated!! Thanks in advance.

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download). From the info I have gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.

    - Is this the right way to do this?
    - What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
    - When sending the HTTP requests, should I specify in the range the entire split size? Or maybe ask for smaller pieces, say 1024k per request?
    - When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
    - Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory wise? What if I have several downloads simultaneously?
    - If I'm not using a memory mapped file, should I open the file per allowed connection? Or, when needing to write to the file, simply seek? (If I did use a memory mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
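
    For reference, a sketch of the exchange for one chunk (URL and sizes are made up): a server that honors ranges replies 206 Partial Content with a Content-Range header, while one that doesn't simply replies 200 with the whole file, which also answers the detection question:

        GET /files/archive.bin HTTP/1.1
        Host: example.com
        Range: bytes=0-1048575

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 0-1048575/4194304
        Content-Length: 1048576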

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and insert the result into a CSV file; it has around 90000 rows of data. My question is: if I use PDOStatement->fetch, as I am doing here:

        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);

        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    will the result of every fetch from the database be kept in memory? Meaning, when I do the second fetch, will memory hold the data of the first fetch as well as the second? And so, if I have 90000 rows of data and fetch in a loop, is memory updated with each new fetch result without releasing the previous results, so that by the last fetch memory already holds 89999 rows of data? Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?
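
    A side note offered as an assumption to verify, not a definitive answer: with the MySQL driver, what accumulates client-side is governed by buffered vs. unbuffered queries rather than by fetch vs. fetchAll, e.g.:

        // Unbuffered: rows stream from the server as fetch() is called,
        // instead of the entire result set being copied into client memory first.
        $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);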

  • what exactly is the danger of an uninitialized pointer in C

    - by akh2103
    I am trying to get a handle on C as I work my way through Trevor Jim's "Cyclone: A safe dialect of C" for a PL class. Jim and his co-authors are trying to make a safe version of C, so they eliminate uninitialized pointers in their language. Googling around a bit on uninitialized pointers, it seems like they point to random locations in memory. It seems like this alone makes them unsafe: if you dereference an uninitialized pointer, you jump to an unsafe part of memory, period. But the way the paper talks about them seems to imply that it is more complex. It cites the following code, and explains that when the function FrmGetObjectIndex dereferences f, it isn't accessing a valid pointer, but rather an unpredictable address - whatever was on the stack when the space for f was allocated. What does the paper mean by "whatever was on the stack when the space for f was allocated"? Are "uninitialized" pointers initialized to random locations in memory by default? Or does their "random" behavior have to do with the memory allocated for these pointers getting filled with strange values (that are then referenced) because of unexpected behavior on the stack?

        Form *f;
        switch (event->eType) {
            case frmOpenEvent:
                f = FrmGetActiveForm();
                ...
            case ctlSelectEvent:
                i = FrmGetObjectIndex(f, field);
                ...
        }
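
    A small illustration of "whatever was on the stack" (this is undefined behavior, so the output is merely typical, not guaranteed, and an optimizing compiler may well change it):

        #include <stdio.h>

        void fill_stack(void) {
            void *q = (void *)0xDEADBEEF; /* leaves this value in a stack slot */
            (void)q;
        }

        void use_uninitialized(void) {
            void *p;            /* never assigned; reuses whatever the slot held */
            printf("%p\n", p);  /* frequently prints 0xdeadbeef at -O0 */
        }

        int main(void) {
            fill_stack();
            use_uninitialized();
            return 0;
        }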

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5GB, and most of it goes to one particular column, which is a text column for PDF files. We expect to have a 20-30GB, or maybe 50GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows to store the text version of these PDF files. However, we will retrieve fewer pages; so again, instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to solve the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks.

  • help me improve my sse yuv to rgb ssse3 code

    - by David McPaul
    Hello, I am looking to optimise some sse code I wrote for converting yuv to rgb (both planar and packed yuv functions). i am using SSSE3 at the moment but if there are useful functions from later sse versions thats ok. I am mainly interested in how I would work out processor stalls and the like. Anyone know of any tools that do static analysis of sse code? ; ; Copyright (C) 2009-2010 David McPaul ; ; All rights reserved. Distributed under the terms of the MIT License. ; ; A rather unoptimised set of ssse3 yuv to rgb converters ; does 8 pixels per loop ; inputer: ; reads 128 bits of yuv 8 bit data and puts ; the y values converted to 16 bit in xmm0 ; the u values converted to 16 bit and duplicated into xmm1 ; the v values converted to 16 bit and duplicated into xmm2 ; conversion: ; does the yuv to rgb conversion using 16 bit integer and the ; results are placed into the following registers as 8 bit clamped values ; r values in xmm3 ; g values in xmm4 ; b values in xmm5 ; outputer: ; writes out the rgba pixels as 8 bit values with 0 for alpha ; xmm6 used for scratch ; xmm7 used for scratch %macro cglobal 1 global _%1 %define %1 _%1 align 16 %1: %endmacro ; conversion code %macro yuv2rgbsse2 0 ; u = u - 128 ; v = v - 128 ; r = y + v + v >> 2 + v >> 3 + v >> 5 ; g = y - (u >> 2 + u >> 4 + u >> 5) - (v >> 1 + v >> 3 + v >> 4 + v >> 5) ; b = y + u + u >> 1 + u >> 2 + u >> 6 ; subtract 16 from y movdqa xmm7, [Const16] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm0,xmm7 ; y = y - 16 ; subtract 128 from u and v movdqa xmm7, [Const128] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm1,xmm7 ; u = u - 128 psubsw xmm2,xmm7 ; v = v - 128 ; load r,b with y movdqa xmm3,xmm0 ; r = y pshufd xmm5,xmm0, 0xE4 ; b = y ; r = y + v + v >> 2 + v >> 3 + v >> 5 paddsw xmm3, xmm2 ; add v to r movdqa xmm7, xmm1 ; move u to scratch pshufd xmm6, xmm2, 0xE4 ; move v to scratch psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r psraw xmm6,1 ; divide v by 2 paddsw xmm3, xmm6 ; and add to r psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r ; b = y + u + u >> 1 + u >> 2 + u >> 6 paddsw xmm5, xmm1 ; add u to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,4 ; divide u by 32 paddsw xmm5, xmm7 ; and add to b ; g = y - u >> 2 - u >> 4 - u >> 5 - v >> 1 - v >> 3 - v >> 4 - v >> 5 movdqa xmm7,xmm2 ; move v to scratch pshufd xmm6,xmm1, 0xE4 ; move u to scratch movdqa xmm4,xmm0 ; g = y psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,1 ; divide u by 2 psubsw xmm4,xmm6 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,2 ; divide v by 4 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g %endmacro ; outputer %macro rgba32sse2output 0 ; clamp values pxor xmm7,xmm7 packuswb xmm3,xmm7 ; clamp to 0,255 and pack R to 8 bit per pixel packuswb xmm4,xmm7 ; clamp to 0,255 and pack G to 8 bit per pixel packuswb xmm5,xmm7 ; clamp to 0,255 and pack B to 8 bit per pixel ; convert to bgra32 packed punpcklbw xmm5,xmm4 ; bgbgbgbgbgbgbgbg movdqa xmm0, xmm5 ; save bg values punpcklbw xmm3,xmm7 ; r0r0r0r0r0r0r0r0 punpcklwd xmm5,xmm3 ; lower half bgr0bgr0bgr0bgr0 punpckhwd xmm0,xmm3 ; upper half bgr0bgr0bgr0bgr0 ; write to 
output ptr movntdq [edi], xmm5 ; output first 4 pixels bypassing cache movntdq [edi+16], xmm0 ; output second 4 pixels bypassing cache %endmacro SECTION .data align=16 Const16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 Const128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 UMask db 0x01 db 0x80 db 0x01 db 0x80 db 0x05 db 0x80 db 0x05 db 0x80 db 0x09 db 0x80 db 0x09 db 0x80 db 0x0d db 0x80 db 0x0d db 0x80 VMask db 0x03 db 0x80 db 0x03 db 0x80 db 0x07 db 0x80 db 0x07 db 0x80 db 0x0b db 0x80 db 0x0b db 0x80 db 0x0f db 0x80 db 0x0f db 0x80 YMask db 0x00 db 0x80 db 0x02 db 0x80 db 0x04 db 0x80 db 0x06 db 0x80 db 0x08 db 0x80 db 0x0a db 0x80 db 0x0c db 0x80 db 0x0e db 0x80 ; void Convert_YUV422_RGBA32_SSSE3(void *fromPtr, void *toPtr, int width) width equ ebp+16 toPtr equ ebp+12 fromPtr equ ebp+8 ; void Convert_YUV420P_RGBA32_SSSE3(void *fromYPtr, void *fromUPtr, void *fromVPtr, void *toPtr, int width) width1 equ ebp+24 toPtr1 equ ebp+20 fromVPtr equ ebp+16 fromUPtr equ ebp+12 fromYPtr equ ebp+8 SECTION .text align=16 cglobal Convert_YUV422_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx mov esi, [fromPtr] mov edi, [toPtr] mov ecx, [width] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP REPEATLOOP: ; loop over width / 8 ; YUV422 packed inputer movdqa xmm0, [esi] ; should have yuyv yuyv yuyv yuyv pshufd xmm1, xmm0, 0xE4 ; copy to xmm1 movdqa xmm2, xmm0 ; copy to xmm2 ; extract both y giving y0y0 pshufb xmm0, [YMask] ; extract u and duplicate so each u in yuyv becomes u0u0 pshufb xmm1, [UMask] ; extract v and duplicate so each v in yuyv becomes v0v0 pshufb xmm2, [VMask] yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,16 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP ENDLOOP: ; Cleanup pop ecx pop esi pop edi mov esp, ebp pop ebp ret cglobal Convert_YUV420P_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx push eax push ebx mov esi, [fromYPtr] mov eax, [fromUPtr] mov ebx, [fromVPtr] mov edi, [toPtr1] mov ecx, [width1] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP1 REPEATLOOP1: ; loop over width / 8 ; YUV420 Planar inputer movq xmm0, [esi] ; fetch 8 y values (8 bit) yyyyyyyy00000000 movd xmm1, [eax] ; fetch 4 u values (8 bit) uuuu000000000000 movd xmm2, [ebx] ; fetch 4 v values (8 bit) vvvv000000000000 ; extract y pxor xmm7,xmm7 ; 00000000000000000000000000000000 punpcklbw xmm0,xmm7 ; interleave xmm7 into xmm0 y0y0y0y0y0y0y0y0 ; extract u and duplicate so each becomes 0u0u punpcklbw xmm1,xmm7 ; interleave xmm7 into xmm1 u0u0u0u000000000 punpcklwd xmm1,xmm7 ; interleave again u000u000u000u000 pshuflw xmm1,xmm1, 0xA0 ; copy u values pshufhw xmm1,xmm1, 0xA0 ; to get u0u0 ; extract v punpcklbw xmm2,xmm7 ; interleave xmm7 into xmm1 v0v0v0v000000000 punpcklwd xmm2,xmm7 ; interleave again v000v000v000v000 pshuflw xmm2,xmm2, 0xA0 ; copy v values pshufhw xmm2,xmm2, 0xA0 ; to get v0v0 yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,8 add eax,4 add ebx,4 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP1 ENDLOOP1: ; Cleanup pop ebx pop eax pop ecx pop esi pop edi mov esp, ebp pop ebp ret SECTION .note.GNU-stack noalloc noexec nowrite progbits
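
    For readers following the arithmetic, a scalar C sketch of the same shift-and-add fixed-point approximation the macros implement (my reading of the comments above, not validated against the assembly; on the tooling question, Intel's Architecture Code Analyzer is one tool that does the kind of static stall analysis asked about):

        /* integer YUV -> RGB, per the comments in yuv2rgbsse2 */
        void yuv2rgb_scalar(int y, int u, int v, int *r, int *g, int *b)
        {
            y -= 16; u -= 128; v -= 128;
            *r = y + v + (v >> 2) + (v >> 3) + (v >> 5);
            *g = y - ((u >> 2) + (u >> 4) + (u >> 5))
                   - ((v >> 1) + (v >> 3) + (v >> 4) + (v >> 5));
            *b = y + u + (u >> 1) + (u >> 2) + (u >> 6);
            /* caller clamps each channel to 0..255, as the packuswb step does */
        }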

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After a long time programming on ASP.NET, I discovered the very big speed difference between string and StringBuilder. I know it is very common and well known, but I just mention it as a starting point. The second thing I have found to speed up code is to use const, and not static, to declare my configuration constant values (especially the strings). With const, the compiler does not create a new object but just places the value at the point where you ask for it, while with a static declaration a new object is created and kept in memory. My third trick: when I search for strings, I use hash values and not the string itself. For example, if I need a List<string> SomeValues holding strings that I need to search, I prefer to use List<int> SomeHashValues, and I use the hash value to locate the strings. My fourth thought was whether it is better to place big strings on one line, or to separate them onto different lines with the + symbol so they are easier to read. I ran some tests and saw that the compiler does a good job if you split the string across many lines using the + symbol. What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is on speed. Because Speed Counts.
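
    A small sketch making the first two tricks concrete (illustrative only):

        using System.Text;

        class Tricks
        {
            // const is baked into each call site at compile time;
            // a static field is an object the runtime keeps alive.
            public const string SiteName = "MySite";
            public static readonly string SiteNameStatic = "MySite";

            static string JoinNumbers(int n)
            {
                var sb = new StringBuilder();
                for (int i = 0; i < n; i++)
                    sb.Append(i);      // grows one buffer in place, no per-step copies
                return sb.ToString();  // single final allocation
            }
        }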

  • Is a call to the following method considered late binding?

    - by AspOnMyNet
    1) Assume:
    • B1 defines methods virtualM() and nonvirtualM(), where the former method is virtual while the latter is non-virtual
    • B2 derives from B1
    • B2 overrides virtualM()
    • B2 is defined inside assembly A
    • Application app doesn't have a reference to assembly A

    In the following code, application app dynamically loads assembly A, creates an instance of type B2, and calls the methods virtualM() and nonvirtualM():

        Assembly asm = Assembly.Load("A");
        Type t = asm.GetType("B2");
        B1 b = (B1)Activator.CreateInstance(t);
        b.virtualM();
        b.nonvirtualM();

    a) Is the call to b.virtualM() considered early binding or late binding?
    b) I assume a call to b.nonvirtualM() is resolved at compile time?

    2) Does the term late binding refer only to looking up the target method at run time, or does it also refer to creating an instance of a given type at runtime? Thanks.

    EDIT:
    1)

        A a = new A();
        a.M();

    As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created during runtime. Now, with early binding the function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap object a will be created during runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)?

    2) "The method slot is determined at compile time" - I assume that by method slot you're referring to the entry point in the v-table?

  • c# string interning

    - by CodingThunder
    I am trying to understand string interning and why it doesn't seem to work in my example. The point of the example is to show that Example 1 uses less (a lot less) memory, as it should only have 10 strings in memory. However, in the code below both examples use roughly the same amount of memory (virtual size and working set). Please advise why Example 1 isn't using a lot less memory. Thanks.

    Example 1:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++)
        {
            for (int k = 0; k < 10; k++)
            {
                list.Add(string.Intern(k.ToString()));
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

    Example 2:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++)
        {
            for (int k = 0; k < 10; k++)
            {
                list.Add(k.ToString());
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

  • What are the skills a Drupal Developer needs?

    - by hfidgen
    I'm trying to write out a list of key Drupal competencies, mainly so I can confirm what I know, don't know and don't know I don't know. (Thanks D. Rumsfeld for that quote!) I think some of these are really broad, for instance there's quite a difference between making a functional theme and creating a theme with good SEO, load times and so on, but I'm hoping you could assume that a half decent web developer would look after that anyway. Just interested to see what people here feel is also important.

    - Able to install Drupal on a server (pretty obvious).
    - Able to research and install modules to meet project requirements.
    - Able to configure all the basic modules and core settings to get a site running.
    - Able to create a custom theme from scratch which validates with good HTML/CSS and also pays attention to usability and accessibility. (Whilst still looking kick-ass.)
    - Able to use hooks in the theme template.php to alter forms, page layout and other core functionality.
    - Can make forms from scratch using the API - with validation and posting back to the DB/email.
    - Can use Views to create blocks or pages, use PHP snippets as arguments, etc.
    - Can create custom modules from scratch utilising core hooks and other hooks.

  • How do I patch a Windows API at runtime so that it returns 0 in x64?

    - by Jorge Vasquez
    In x86, I get the function address using GetProcAddress() and write a simple XOR EAX,EAX; RET; over it. Simple and effective. How do I do the same in x64?

        bool DisableSetUnhandledExceptionFilter()
        {
            const BYTE PatchBytes[5] = { 0x33, 0xC0, 0xC2, 0x04, 0x00 }; // xor eax,eax / ret 4

            // Obtain the address of SetUnhandledExceptionFilter
            HMODULE hLib = GetModuleHandle( _T("kernel32.dll") );
            if( hLib == NULL )
                return false;

            BYTE* pTarget = (BYTE*)GetProcAddress( hLib, "SetUnhandledExceptionFilter" );
            if( pTarget == 0 )
                return false;

            // Patch SetUnhandledExceptionFilter
            if( !WriteMemory( pTarget, PatchBytes, sizeof(PatchBytes) ) )
                return false;

            // Ensure the patched bytes are not stale in the instruction cache
            FlushInstructionCache( GetCurrentProcess(), pTarget, sizeof(PatchBytes) );

            // Success
            return true;
        }

        static bool WriteMemory( BYTE* pTarget, const BYTE* pSource, DWORD Size )
        {
            // Check parameters
            if( pTarget == 0 ) return false;
            if( pSource == 0 ) return false;
            if( Size == 0 ) return false;
            if( IsBadReadPtr( pSource, Size ) ) return false;

            // Modify protection attributes of the target memory page
            DWORD OldProtect = 0;
            if( !VirtualProtect( pTarget, Size, PAGE_EXECUTE_READWRITE, &OldProtect ) )
                return false;

            // Write memory
            memcpy( pTarget, pSource, Size );

            // Restore memory protection attributes of the target memory page
            DWORD Temp = 0;
            if( !VirtualProtect( pTarget, Size, OldProtect, &Temp ) )
                return false;

            // Success
            return true;
        }

    This example is adapted from code found here: http://www.debuginfo.com/articles/debugfilters.html#overwrite
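
    A sketch of the x64 stub, reasoning from the x64 calling convention (the caller owns stack cleanup, so a plain ret replaces ret 4, and writing EAX zero-extends into RAX) - worth verifying with a disassembler before relying on it:

        // xor eax,eax  -> 33 C0   (zeroes RAX via zero-extension)
        // ret          -> C3      (no ret imm16 needed in x64)
        const BYTE PatchBytes64[3] = { 0x33, 0xC0, 0xC3 };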

  • How Can I Compress Texture in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture. Both textures are screenshots:

        // Capture an image of the screen
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Allocate some memory for the texture
        GLubyte *textureData = (GLubyte*)calloc(maxTextureSize*4, maxTextureSize);

        // Create a drawing context to draw image into texture memory
        CGContextRef textureContext = CGBitmapContextCreate(textureData,
            maxTextureSize, maxTextureSize, 8, maxTextureSize*4,
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(textureContext,
            CGRectMake(0, maxTextureSize-size.height, size.width, size.height),
            image.CGImage);
        CGContextRelease(textureContext);
        // ...done creating the texture data

        [EAGLContext setCurrentContext:context];
        glGenTextures(1, &textureToView);
        glBindTexture(GL_TEXTURE_2D, textureToView);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize,
            0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

        // free texture data which is by now copied into the GL context
        free(textureData);

    Each texture takes up about 8MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce the memory? I'm a complete newbie to OpenGL. Any help would be appreciated!
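
    One route, with caveats: PVRTC compression is normally produced offline by the SDK's texturetool, and PVRTC textures must be square powers of two, so it may not suit runtime screenshots. iOS GPUs expose it through the GL_IMG_texture_compression_pvrtc extension (a sketch, with the size formula as documented for that extension):

        // 4 bits per pixel instead of 32: a 1024x1024 texture drops from 4MB to 512KB.
        // pvrtcData would come from texturetool output bundled with the app.
        GLsizei dataSize = MAX(width, 8) * MAX(height, 8) * 4 / 8;
        glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               width, height, 0, dataSize, pvrtcData);

    If the textures must be built at runtime from screenshots, a cheaper fallback is uploading 16-bit pixel data (GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_6_5), which halves the footprint with no offline step.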

  • Do entity collections and object sets implement IQueryable<T>?

    - by Chevex
    I am using Entity Framework for the first time and noticed that the entities object returns entity collections.

        DBEntities db = new DBEntities();
        db.Users; // Users is an ObjectSet<User>
        User user = db.Users.Where(x => x.Username == "test").First(); // Is this getting executed in SQL or in memory?
        user.Posts; // Posts is an EntityCollection<Post>
        Post post = user.Posts.Where(x => x.PostID == "123").First(); // Is this getting executed in SQL or in memory?

    Do both ObjectSet and EntityCollection implement IQueryable? I am hoping they do, so that I know the queries are getting executed at the data source and not in memory.

    EDIT: So apparently EntityCollection does not, while ObjectSet does. Does that mean I would be better off using this code?

        DBEntities db = new DBEntities();
        User user = db.Users.Where(x => x.Username == "test").First(); // Is this getting executed in SQL or in memory?
        Post post = db.Posts.Where(x => (x.PostID == "123") && (x.Username == user.Username)).First(); // Querying the object set instead of the entity collection.

    Also, what is the difference between ObjectSet and EntityCollection? Shouldn't they be the same? Thanks in advance!

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of:

    - Have a List<byte> and a buffer; after the buffer is full, add it to the list, and at the end use list.ToArray() to get the byte[].
    - Write the buffer to a memory stream; after it's complete, construct a byte[] of stream.Length and read it all into it in order to get the byte[] output.

    Is there a more efficient/better way of doing this?
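
    A minimal sketch of the MemoryStream option (the buffer size is illustrative):

        using System.IO;
        using System.Net.Sockets;

        static byte[] ReceiveAll(Socket socket)
        {
            byte[] buffer = new byte[8192];   // reused for every Receive call
            using (var ms = new MemoryStream())
            {
                int read;
                // Receive returns 0 once the remote side has shut down the connection
                while ((read = socket.Receive(buffer)) > 0)
                    ms.Write(buffer, 0, read);
                return ms.ToArray();          // one final copy into the byte[]
            }
        }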

  • Unusual heap size limitations in VS2003 C++

    - by Shane MacLaughlin
    I have a C++ app that uses large arrays of data, and have noticed while testing that it is running out of memory while there is still plenty of memory available. I have reduced the code to a sample test case as follows:

        void MemTest()
        {
            size_t Size = 500*1024*1024; // 500mb
            if (Size > _HEAP_MAXREQ)
                TRACE("Invalid Size");
            void * mem = malloc(Size);
            if (mem == NULL)
                TRACE("allocation failed");
        }

    If I create a new MFC project, include this function, and run it from InitInstance, it works fine in debug mode (memory allocated as expected), yet fails in release mode (malloc returns NULL). Single-stepping through release into the C runtimes, my function gets inlined and I get the following:

        // malloc.c
        void * __cdecl _malloc_base (size_t size)
        {
            void *res = _nh_malloc_base(size, _newmode);
            RTCCALLBACK(_RTC_Allocate_hook, (res, size, 0));
            return res;
        }

    Calling _nh_malloc_base:

        void * __cdecl _nh_malloc_base (size_t size, int nhFlag)
        {
            void * pvReturn;

            // validate size
            if (size > _HEAP_MAXREQ)
                return NULL;
            ...

    And (size > _HEAP_MAXREQ) returns true, and hence my memory doesn't get allocated. Putting a watch on size comes back with the expected 500MB, which suggests the program is linking into a different run-time library with a much smaller _HEAP_MAXREQ. Grepping the VC++ folders for _HEAP_MAXREQ shows the expected 0xFFFFFFE0, so I can't figure out what is happening here. Anyone know of any CRT changes or versions that would cause this problem, or am I missing something way more obvious?

  • Efficient (basic) regular expression implementation for streaming data

    - by Brendan Dolan-Gavitt
    I'm looking for an implementation of regular expression matching that operates on a stream of data -- i.e., it has an API that allows a user to pass in one character at a time and reports when a match is found on the stream of characters seen so far. Only very basic (classic) regular expressions are needed, so a DFA/NFA based implementation seems like it would be well-suited to the problem. Based on the fact that it's possible to do regular expression matching using a DFA/NFA in a single linear sweep, it seems like a streaming implementation should be possible. Requirements:

    - The library should not wait until the full string has been read before performing the match. The data I have really is streaming; there is no way to know how much data will arrive, and it's not possible to seek forward or backward.
    - Implementing specific stream matching for a couple of special cases is not an option, as I don't know in advance what patterns a user might want to look for.

    For the curious, my use case is the following: I have a system which intercepts memory writes inside a full-system emulator, and I would like to have a way to identify memory writes that match a regular expression (e.g., one could use this to find the point in the system where a URL is written to memory). I have found (links de-linkified because I don't have enough reputation):

    - stackoverflow.com/questions/1962220/apply-a-regex-on-stream
    - stackoverflow.com/questions/716927/applying-a-regular-expression-to-a-java-i-o-stream
    - www.codeguru.com/csharp/csharp/cs_data/searching/article.php/c14689/Building-a-Regular-Expression-Stream-Search-with-the-NET-Framework.htm

    But all of these attempt to convert the stream to a string first and then use a stock regular expression library. Another thought I had was to modify the RE2 library, but according to the author it is architected around the assumption that the entire string is in memory at the same time. If nothing's available, then I can start down the unhappy path of reinventing this wheel to fit my own needs, but I'd really rather not if I can avoid it. Any help would be greatly appreciated!
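
    The pass-one-character-at-a-time API described is essentially a DFA step function; a minimal sketch of the shape (the transition table would be compiled from the regex, which is the part a library needs to supply):

        typedef struct {
            int state;                       /* current DFA state */
            const int (*next)[256];          /* next[state][byte], built from the regex */
            const unsigned char *accepting;  /* accepting[state] != 0 on a match */
        } StreamMatcher;

        /* Feed one byte; returns nonzero as soon as the input seen so far matches. */
        int sm_feed(StreamMatcher *m, unsigned char c)
        {
            m->state = m->next[m->state][c];
            return m->accepting[m->state];
        }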

  • How to lazy process an xml document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000MB) XML files I came across hexpat. There is an example in the Haskell Wiki that claims to

        -- Process document before handling error, so we get lazy processing.

    For testing purposes I have redirected the output to /dev/null and thrown a 300MB file at it. Memory consumption kept rising until I had to kill the process. Now I removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml
            hClose hFile
            return ()

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand it, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/

  • Whether to put method code in a VB.Net data storage class, or put it in a separate class?

    - by Alan K
    TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing? ........ I don't know whether there's a good answer to this but if there is I haven't found it yet by searching in the usual places. In my VB.Net (2010 if it matters) WinForms project I have about a dozen or so class objects in an object model. Some of these are pretty simple and do little more than act as data storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher level objects in use though the exact number will be runtime dependent so I can't be more precise than that. As I was writing the method code for one of the top level ones I noticed that it was starting to get quite lengthy. Memory optimisation is something of a lost art given how much memory the average PC has these days but I don't want to make my application a resource hog. So my questions for anyone who knows .Net way better than I do (of which there will be many) are: Is the code loaded into memory with each instance of the class that's created? Alternatively is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.) If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods? Any thoughts appreciated.

  • How does an ASP.NET programmer go from working on/developing existing sites, to creating one from scratch?

    - by SLC
    I've been an ASP.NET developer for some time, always working on existing ASP.NET pages, modifying functionality, adding features, tweaking things etc., but I have never built a site up from scratch. I've read books on ASP.NET, and they generally talk you through the various features of ASP.NET with a mock-up site, but it's always very basic and they jump straight in. The time has come, however, to write a site from scratch for a client. I've never done this before. There are design considerations, but like a lot of ASP.NET sites, the basic idea is: you have a site where users can log in and save some information like their name, password and address. The site has some functionality, but that's the basic design of a majority of (business-related) ASP.NET websites, I would wager. I know how to program in ASP.NET already on an existing site, but I don't know how to design my own from scratch that meets the criteria above. I guess the main worry is security. I don't know the best way to handle a simple log-in system that stores user information like names and passwords. I understand there are a few approaches to this, but the catch with this project is that it has to be absolutely bulletproof. Maximum security. All those good practices for security - it needs to have them all. I'm not asking what they are, but I am asking where to begin. What should be the first steps after I do File > New Project? Where can I look for information about setting up a secure ASP.NET website? I'll figure out the content and page layout later; it's the framework that is the big thing. Any and all advice would be welcome. I really want to get my first from-scratch project right from the beginning. Just to confuse things, it's possible I will be using MVC; I am not sure if this has any impact.
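
    As one concrete starting point (a sketch of the stock approach, not a complete security story): ASP.NET ships with forms authentication and the Membership provider, which handle the login plumbing and store salted password hashes, configured in web.config along these lines:

        <system.web>
          <authentication mode="Forms">
            <forms loginUrl="~/Login.aspx" protection="All" timeout="30" />
          </authentication>
          <authorization>
            <deny users="?" /> <!-- anonymous users must log in first -->
          </authorization>
        </system.web>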

  • How to debug properly and find causes for crashes?

    - by Newbie
    I don't know what to do anymore... it's hopeless. I'm getting tired of guessing what's causing the crashes. Recently I noticed that some OpenGL calls crash programs randomly on some gfx cards, so now I am getting really paranoid about what can cause crashes. The bad thing about this crash is that it happens only after a long time of using the program, so I can only guess what the problem is. I can't remember what changes I made to the program that may cause the crashes; it's been so long. But luckily the previous version doesn't crash, so I could just copy-paste some code and waste 10 hours to see at which point it starts crashing... I don't think I want to do that yet. The program crashes after I make it process the same files about 5 times in a row; each time it uses about 200 megabytes of memory in the process. It crashes at random times during and after the reading process. I have created a "safe" free() function: it checks that the pointer is not NULL, then frees the memory, and then sets the pointer to NULL. Isn't this how it should be done? I watched the task manager memory usage, and just before it crashed it started to eat 2 times more memory than usual. Also, loading became exponentially slower every time I loaded the files; the first few loads didn't seem much slower than each other, but then the load times started rapidly doubling. What should this tell me about the crash? Also, do I have to manually free C++ vectors by using clear()? Or are they freed automatically - for example, if I allocate a vector inside a function, will it be freed every time the function ends? I am not storing pointers in the vectors. -- In short: I want to learn to catch the damn bugs as fast as possible. How do I do that? Using Visual Studio 2008.
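
    On the safe-free question, a sketch of the usual patterns (note that free(NULL) is already defined to do nothing, so the NULL check adds nothing; the value is in nulling the pointer afterwards), plus the vector point: a vector local to a function is destroyed automatically when the function returns, but clear() on a long-lived vector destroys the elements without necessarily releasing the capacity:

        #include <cstdlib>
        #include <vector>

        void safe_free(void*& p)
        {
            std::free(p);  // free(NULL) is a no-op per the C standard
            p = NULL;      // a later double-free through this pointer is now harmless
        }

        void release_capacity(std::vector<int>& v)
        {
            v.clear();                    // destroys elements; capacity may remain
            std::vector<int>().swap(v);   // swap idiom actually frees the storage
        }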
