Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 432/555

  • Caching issue with CentOS forwarding DNS server

    - by Paddington
    I installed a forwarding DNS server on CentOS 5.10 and it is resolving addresses, e.g. google.com. When I stopped named (service named stop) and tried to dig (dig @localhost A google.com), it failed to resolve the address. I checked and saw that the caching daemon nscd is running. Does this mean the server is not caching at all? How can I get it to cache? named.conf:

        options {
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;

            // Put files that named is allowed to write in the data/ directory:
            listen-on port 53 {127.0.0.1; 10.0.0.4;};
            directory "/var/named"; // the default
            dump-file "/var/named/chroot/var/named/data/cache_dump.db";
            statistics-file "/var/named/chroot/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/chroot/var/named/data/named_mem_stats.txt";
            // allow-query {localhost; 192.168.0.0/24; 10.0.0.0/8;};
            recursion yes;
            //allow-query { localhost; 10.0.0.0/8;};
            allow-query { localhost; any; };
            allow-query-cache { localhost; any; };
            forward only;
            forwarders {8.8.8.8; 8.8.4.4;};
            dnssec-enable yes;
            // dnssec-lookaside auto;
            /* Path to ISC DLV key */
            // bindkeys-file "/etc/named.iscdlv.key";
            // managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

    Read the article

  • Creating additional Window by a thread - C

    - by Jamie Keeling
    Hello, I have a Windows form that has a simple menu and performs a simple operation. I want to be able to create another Windows form, with all the functionality of a menu bar, message pump, etc., on a separate thread, so I can then share the results of the operation with the second window. I.e.:

        1) Form A opens; Form B opens on a separate thread
        2) Form A performs the operation
        3) Form A passes the results via memory to Form B
        4) Form B displays the results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I hope it makes sense. Thanks!
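
    Not from the article, and since the title says C, here is a plain Win32-style sketch of the usual pattern: the second window runs on its own thread with its own message pump, and Form A hands it results later (e.g. via PostMessage or shared state). Names and window details are illustrative; if this is actually Windows Forms/.NET, the same idea applies with Application.Run on a second thread.

        #include <windows.h>

        // Window procedure for Form B (illustrative).
        LRESULT CALLBACK FormBProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
            if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
            return DefWindowProc(hwnd, msg, wp, lp);
        }

        // Thread entry point: creates Form B and runs its own message loop.
        DWORD WINAPI FormBThread(LPVOID param) {
            WNDCLASS wc = {0};
            wc.lpfnWndProc   = FormBProc;
            wc.hInstance     = GetModuleHandle(NULL);
            wc.lpszClassName = TEXT("FormBClass");
            RegisterClass(&wc);

            HWND hwnd = CreateWindow(TEXT("FormBClass"), TEXT("Form B - results"),
                                     WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                     400, 300, NULL, NULL, wc.hInstance, NULL);
            ShowWindow(hwnd, SW_SHOW);

            // Every thread that owns windows needs its own message loop.
            MSG msg;
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            return 0;
        }

        // Called from Form A: start Form B on its own thread.
        void LaunchFormB(void) {
            CreateThread(NULL, 0, FormBThread, NULL, 0, NULL);
        }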

    Read the article

  • Homemade fstat to get file size, always returns 0 length

    - by Fred
    Hello, I am trying to use my own function to get the file size from a file. I'll use this to allocate memory for a data structure to hold the information on the file. The file size function looks like this:

        long fileSize(FILE *fp){
            long start;
            fflush(fp);
            rewind(fp);
            start = ftell(fp);
            return (fseek(fp, 0L, SEEK_END) - start);
        }

    Any ideas what I'm doing wrong here?
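
    An observation, not from the original thread: fseek() reports only success or failure (0 or -1), not the new offset, so the expression above always returns 0 minus the start position. A minimal corrected sketch under that reading:

        #include <stdio.h>

        /* Returns the file size in bytes, or -1 on error. The offset has to be
           read back with ftell() after seeking, because fseek() itself does not
           return a position. */
        long fileSize(FILE *fp) {
            long current = ftell(fp);          /* remember the caller's position */
            if (fseek(fp, 0L, SEEK_END) != 0)
                return -1;
            long size = ftell(fp);             /* offset at end == size in bytes */
            fseek(fp, current, SEEK_SET);      /* restore the original position */
            return size;
        }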

    Read the article

  • POST data disappearing on large file upload

    - by DfKimera
    I'm having issues with a file uploading utility in my PHP application. When sending large files (9MB+) over the form, I get very odd behaviour: the POST data I've included in the form disappears, including the file information. I've already increased all the PHP limits I could (time limit, max input time, post max size, memory limit and upload max filesize) and I still can't get the proper behaviour. I've tried replacing the regular HTTP forms with a Flash-based solution (SWFUpload, www.swfupload.org); still the same behaviour. I've tried multiple files of similar sizes and it's definitely not a particular-file issue. I've debugged the POST vars sent using Firebug, and the correct variables are still there in the header, together with the file. What could be going on here?

    Read the article

  • Does a static object within a function introduce a potential race condition?

    - by Jeremy Friesner
    I'm curious about the following code:

        class MyClass
        {
        public:
            MyClass() : _myArray(new int[1024]) {}
            ~MyClass() { delete [] _myArray; }

        private:
            int * _myArray;
        };

        // This function may be called by different threads in an unsynchronized manner
        void MyFunction()
        {
            static const MyClass _myClassObject;
            [...]
        }

    Is there a possible race condition in the above code? Specifically, is the compiler likely to generate code equivalent to the following, "behind the scenes"?

        void MyFunction()
        {
            static bool _myClassObjectInitialized = false;
            if (_myClassObjectInitialized == false)
            {
                _myClassObjectInitialized = true;
                _myClassObject.MyClass();   // call constructor to set up object
            }
            [...]
        }

    ... in which case, if two threads were to call MyFunction() nearly simultaneously, then _myArray might get allocated twice, causing a memory leak? Or is this handled correctly somehow?
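
    For context (not from the original answers): C++11's "magic statics" require the initialization of a function-local static to be thread-safe, but compilers predating that guarantee may indeed emit the unguarded check sketched above. A sketch of making the initialization explicit with std::call_once (C++11; older code would use boost::call_once or a mutex the same way):

        #include <mutex>

        class MyClass {
        public:
            MyClass() : _myArray(new int[1024]) {}
            ~MyClass() { delete[] _myArray; }
        private:
            int* _myArray;
        };

        void MyFunction() {
            static std::once_flag initFlag;      // constant-initialized, no race
            static MyClass* instance = nullptr;  // zero-initialized, no race
            // call_once guarantees exactly one thread runs the lambda, even if
            // MyFunction() is entered concurrently. The instance is intentionally
            // never destroyed (the usual leaky-singleton pattern).
            std::call_once(initFlag, [] { instance = new MyClass(); });
            // ... use *instance ...
        }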

    Read the article

  • How to generate a large image in Compact Framework

    - by Buthrakaur
    I need to generate large images (an A4 page at 200 DPI; PNG format would be fine) in my Compact Framework application. This is impossible to do the standard way due to memory limitations (such a big image will throw an OutOfMemoryException). Is there any library that offers file-backed stream image generation? Alternatively, I could generate many smaller stripes of the image (each stripe representing a row of the large image) using the standard Bitmap approach, but I would need to merge them together afterwards - is there any way to merge many smaller images into one large image without having to instantiate a large Bitmap instance (which would again cause OOM)?

    Read the article

  • Memcachedb Versus MongoDB Versus CouchDB in terms of file based caching solution?

    - by Scott Faisal
    We need a caching solution that essentially caches data (text files) for anywhere from 3 days up to a week, based on user preferences and criteria. In this case memory-based caching does not make sense to us. We were referred to MemcacheDB; however, I also thought of some NoSQL solutions. Our current application uses an RDBMS (MySQL), and I guess it makes sense to use MemcacheDB; however, NoSQL does appeal as it is something more on the horizon. We have not deployed a production-level application on NoSQL, though, and the beta stuff does not sit well with management/investors. Anyhow, what are your thoughts, and how would you address it? Thank you

    Read the article

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to make complex queries on. For example: "How many points have a y-coordinate of 10" and "Return all points with an X-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I am just cycling through each element of an NSArray and checking to see if each element matches my query. It's a pain to write the queries though. SQLite would be much nicer. I'm not sure which would be more efficient though since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it other than these methods that I haven't thought of? I would need to perform the multiple queries with multiple sets of points thousands of times, so the best performance is important.
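
    Not from the article, but one point worth weighing: SQLite can be opened entirely in memory with the ":memory:" filename, so the on-disk concern largely disappears, and an index on (x, y) makes these queries cheap. A minimal sketch of the C API (callable from Objective-C on the iPhone); the table and column names are illustrative:

        #include <cstdio>
        #include <sqlite3.h>

        int main() {
            sqlite3 *db = NULL;
            sqlite3_open(":memory:", &db);   // in-memory database, nothing touches disk

            sqlite3_exec(db,
                "CREATE TABLE points (x INTEGER, y INTEGER);"
                "CREATE INDEX idx_xy ON points (x, y);"
                "INSERT INTO points VALUES (3, 7);"
                "INSERT INTO points VALUES (4, 7);"
                "INSERT INTO points VALUES (9, 10);",
                NULL, NULL, NULL);

            // "All points with an x-coordinate between 3 and 5 and a y-coordinate of 7"
            sqlite3_stmt *stmt = NULL;
            sqlite3_prepare_v2(db,
                "SELECT x, y FROM points WHERE x BETWEEN 3 AND 5 AND y = 7;",
                -1, &stmt, NULL);
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                std::printf("(%d, %d)\n", sqlite3_column_int(stmt, 0),
                                          sqlite3_column_int(stmt, 1));
            }
            sqlite3_finalize(stmt);
            sqlite3_close(db);
            return 0;
        }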

    Read the article

  • Which external DSLs do you like to use?

    - by Max Toro
    The reason I'm asking is that right now there seems to be a tendency to make DSLs internal. One example is LINQ in C# and VB: you can use it against in-memory objects, or you can use it as a replacement for SQL or another external DSL. Another example is HTML5 vs. XHTML2. XHTML2 supported decentralized extensibility through namespaces; in other words, you embed external DSL code (XForms, SVG, MathML, etc.) in your XHTML code. Sadly, HTML5 doesn't seem to have such a mechanism; instead, new features are internal (e.g. <canvas> instead of SVG). I'd like to know what other developers think about this. Do you like using external DSLs? Which ones? If not, why?

    Read the article

  • string manipulations in C

    - by Vivek27
    Following are some basic questions that I have with respect to strings in C.

    If string literals are stored in the read-only data segment and cannot be changed after initialisation, then what is the difference between the following two initialisations?

        char *string = "Hello world";
        const char *string = "Hello world";

    When we dynamically allocate memory for strings, I see that the following allocation seems capable of holding a string of arbitrary length. Though this allocation works, I understand/believe that it is always good practice to allocate the actual size of the string rather than the size of the data type. Please advise on the proper use of dynamic allocation for strings.

        char *string = (char *)malloc(sizeof(char));
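
    Not from the original answers, but on the second point: sizeof(char) is always 1, so that malloc reserves a single byte; writing a longer string into it only appears to work until something else reuses the adjacent memory. A minimal sketch of sizing the allocation from the string itself (the cast is kept so it compiles as C or C++):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            const char *source = "Hello world";

            /* strlen() gives the character count; the extra byte holds the
               terminating '\0'. */
            char *copy = (char *)malloc(strlen(source) + 1);
            if (copy == NULL)
                return 1;

            strcpy(copy, source);
            printf("%s\n", copy);
            free(copy);
            return 0;
        }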

    Read the article

  • C++ program runs slow in VS2008

    - by Nima
    I have a program written in C++ that opens a binary file (test.bin), reads it object by object, and puts each object into a new file (it opens the new file, writes into it (append), and closes it). I use fopen/fclose, fread and fwrite. test.bin contains 20,000 objects. This program runs under Linux with g++ in 1 sec, but in VS2008, in both debug and release mode, it takes 1 min! There are reasons why I can't do the writes in batches, keep the objects in memory, or apply other optimizations of that kind. I just wonder why it is that much slower under Windows. Thanks,
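
    Not from the article, but a quick way to narrow this down is to time the three stdio calls separately; on Windows the usual suspects are per-open overhead (antivirus real-time scanning, the debug CRT's heap and parameter checks) rather than fwrite itself. A rough instrumentation sketch, with a dummy record standing in for the real objects:

        #include <cstdio>
        #include <ctime>

        int main() {
            std::clock_t tOpen = 0, tWrite = 0, tClose = 0;
            char record[512] = {0};   // stand-in for one serialized object

            for (int i = 0; i < 20000; ++i) {
                std::clock_t t0 = std::clock();
                FILE *out = std::fopen("object.out", "ab");
                std::clock_t t1 = std::clock();
                std::fwrite(record, sizeof record, 1, out);
                std::clock_t t2 = std::clock();
                std::fclose(out);
                std::clock_t t3 = std::clock();
                tOpen += t1 - t0; tWrite += t2 - t1; tClose += t3 - t2;
            }

            // Whichever bucket dominates points at the layer to investigate.
            std::printf("open: %.2fs  write: %.2fs  close: %.2fs\n",
                        tOpen  / (double)CLOCKS_PER_SEC,
                        tWrite / (double)CLOCKS_PER_SEC,
                        tClose / (double)CLOCKS_PER_SEC);
            return 0;
        }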

    Read the article

  • How to make a program like Fraps

    - by blood
    I want to make a program that will capture video. What is the best way to capture the video? I know C++ and I'm learning assembly, and I found in my assembly book that I can get data from the video card, I think - would that be the best way? I know Fraps hooks into programs, but I want my program to capture the full screen. So I want something fast, with low memory usage if possible, and something I can use on other computers with the same hardware.
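
    Not from the article: Fraps itself hooks the Direct3D/OpenGL present calls inside the target program, but for a plain full-screen grab on Windows a GDI BitBlt is the simplest hardware-independent starting point (slower than a graphics-API hook). A rough sketch of capturing one frame; turning frames into video (encoding, timing) is a separate problem:

        #include <windows.h>

        // Capture the primary screen into a bitmap.
        // The caller must DeleteObject() the returned HBITMAP.
        HBITMAP CaptureScreen() {
            int w = GetSystemMetrics(SM_CXSCREEN);
            int h = GetSystemMetrics(SM_CYSCREEN);

            HDC screenDC = GetDC(NULL);                   // DC for the whole screen
            HDC memDC    = CreateCompatibleDC(screenDC);  // off-screen DC
            HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);

            HGDIOBJ old = SelectObject(memDC, bmp);
            BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);   // copy the pixels
            SelectObject(memDC, old);

            DeleteDC(memDC);
            ReleaseDC(NULL, screenDC);
            return bmp;   // one frame; call repeatedly and feed an encoder for video
        }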

    Read the article

  • How do I write Push and Pop in Scheme?

    - by kunjaan
    Right now I have:

        (define (push x a-list)
          (set! a-list (cons a-list x)))

        (define (pop a-list)
          (let ((result (first a-list)))
            (set! a-list (rest a-list))
            result))

    But I get this result:

        Welcome to DrScheme, version 4.2 [3m].
        Language: Module; memory limit: 256 megabytes.
        > (define my-list (list 1 2 3))
        > (push 4 my-list)
        > my-list
        (1 2 3)
        > (pop my-list)
        1
        > my-list
        (1 2 3)

    What am I doing wrong? Is there a better way to write push so that the element is added at the end and pop so that the element gets deleted from the first?

    Read the article

  • Java Web App: Passing form parameters across multiple pages

    - by digiarnie
    Hi, what is the best practice or best way of passing form parameters from page to page in a flow? If I have a flow where a user enters data in a form, hits next, and repeats this process until they get to an approval page, how could I approach this problem to make the retention of data as simple as possible over the flow? I guess you could put all the information in the session as you go, but could you get into memory issues if a lot of people are using your app and going through the flow at the same time?

    Read the article

  • Remove from a std::set<shared_ptr<T>> by T*

    - by Autopulated
    I have a set of shared pointers:

        std::set<boost::shared_ptr<T>> set;

    And a pointer:

        T* p;

    I would like to efficiently remove the element of set equal to p, but I can't do this with any of the members of set, or any of the standard algorithms, since T* is a completely different type to boost::shared_ptr<T>. A few approaches I can think of are:

        - somehow constructing a new shared_ptr from the pointer that won't take ownership of the pointed-to memory (ideal solution, but I can't see how to do this)
        - wrapping / re-implementing shared_ptr so that I can do the above
        - just doing my own binary search over the set

    Help!
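
    Not from the original answers, and treating the container choice as given: a linear std::find_if comparing .get() against the raw pointer sidesteps the type mismatch entirely - O(n), but correct no matter how the set orders its shared_ptrs. A sketch:

        #include <algorithm>
        #include <set>
        #include <boost/shared_ptr.hpp>

        // Erase the element of 's' whose stored pointer equals 'p'.
        // Linear scan: safe regardless of the set's ordering of shared_ptrs.
        template <typename T>
        void eraseByRawPointer(std::set<boost::shared_ptr<T>>& s, T* p) {
            auto it = std::find_if(s.begin(), s.end(),
                                   [p](const boost::shared_ptr<T>& sp) { return sp.get() == p; });
            if (it != s.end())
                s.erase(it);
        }

    If logarithmic lookup matters, a common alternative is to keep a parallel std::map<T*, boost::shared_ptr<T>> keyed on the raw pointer.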

    Read the article

  • CUDA driver installation on a laptop with nVidia NVS140M card

    - by stanigator
    I'm trying to figure out, first, whether my computer contains a CUDA-enabled card. It has an nVidia NVS 140M card, but I can't seem to figure out if it is the 128 MB version or the 256 MB version. On the laptop purchase receipt, I found that I ordered the 128 MB version, but the control panel description of the card said otherwise. When I ran the CUDA driver from nVidia's site, it could not find hardware compatible with CUDA (even though the product series is CUDA-enabled, the card does not have the 256 MB minimum of memory needed). What would you recommend for trying to use CUDA on this computer (I'm not sure whether anything can be done at this point)? Thanks in advance.
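
    Not from the article, but once any CUDA toolkit is installed, the runtime API can settle the memory-size question directly (the official deviceQuery sample does exactly this). A minimal sketch:

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                std::printf("No CUDA-capable device detected.\n");
                return 1;
            }
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                std::printf("Device %d: %s, %.0f MB global memory, compute capability %d.%d\n",
                            i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0),
                            prop.major, prop.minor);
            }
            return 0;
        }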

    Read the article

  • Bitmap size exceeds VM budget after second load

    - by jonny
    This is driving me crazy. I have a game which has a bitmap as the background; this is big, so I scale it down and this works fine. However, when I navigate to another activity and then reload the game screen, it crashes on drawing the background. I am calling recycle on all the bitmaps and setting them to null in onDestroy(), but this doesn't help. Any ideas? If not, how can I debug the memory to see at which step it's growing? I looked at getting the heap, but nothing of any size is on there really. Thanks.

    Read the article

  • Is it possible to find out what FlashBuilder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:

        - Moving embedded resources into externally linked projects.
        - Using -incremental.
        - Tweaking the .ini jvm settings including memory and -server.
        - Turning off automatic build (I'd prefer not to have to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
        - Deleting the project and re-checking out from the repository.

    While some of these may help a bit, the performance is still annoyingly slow. I feel if I knew what was taking so long I could refactor my projects to build faster. Is there some setting that tells FlashBuilder to let me see what parts of the build process take so much time?

    Read the article

  • Double indirection and structures passed into a function

    - by ZPS
    I am curious why this code works:

        typedef struct test_struct {
            int id;
        } test_struct;

        void test_func(test_struct ** my_struct) {
            test_struct my_test_struct;
            my_test_struct.id = 267;
            *my_struct = &my_test_struct;
        }

        int main () {
            test_struct * main_struct;
            test_func(&main_struct);
            printf("%d\n", main_struct->id);
        }

    This works, but pointing to the memory address of a function's local variable is a big no-no, right? But if I used a structure pointer and malloc, that would be the correct way, right?

        void test_func(test_struct ** my_struct) {
            test_struct *my_test_struct;
            my_test_struct = malloc(sizeof(test_struct));
            my_test_struct->id = 267;
            *my_struct = my_test_struct;
        }

        int main () {
            test_struct * main_struct;
            test_func(&main_struct);
            printf("%d\n", main_struct->id);
        }
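
    An observation, not from the original answers: the first version only appears to work because nothing has reused that stack frame before the printf; it is still undefined behavior. The malloc version is the right shape, provided the caller eventually releases the memory. A sketch of the complete heap-based flow (the cast is kept so it compiles as C or C++):

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct test_struct { int id; } test_struct;

        /* Allocates on the heap, so the object outlives the function call. */
        void test_func(test_struct **my_struct) {
            test_struct *my_test_struct = (test_struct *)malloc(sizeof(test_struct));
            my_test_struct->id = 267;
            *my_struct = my_test_struct;
        }

        int main(void) {
            test_struct *main_struct = NULL;
            test_func(&main_struct);
            printf("%d\n", main_struct->id);
            free(main_struct);   /* ownership was handed to the caller */
            return 0;
        }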

    Read the article

  • Is it possible to give a Python dict an initial capacity (and is it useful)

    - by Peter Smit
    I am filling a Python dict with around 10,000,000 items. My understanding of dicts (or hashtables) is that when too many elements get into them, they need to resize, an operation that costs quite some time. Is there a way to tell a Python dict that you will be storing at least n items in it, so that it can allocate memory from the start? Or will this optimization not do any good to my running speed? (And no, I have not checked that the slowness of my small script is because of this; I actually wouldn't know how to do that. This is, however, something I would do in Java - set the initial capacity of the HashSet right.)

    Read the article

  • How to check if an object is connected to another in Hibernate

    - by codevourer
    Imagine two domain object classes, A and B. A has a bidirectional one-to-many relationship to B. A is related to thousands of B. The relations must be unique; it's not possible to have a duplicate. To check whether an instance of B is already connected to a given instance of A, we could perform an easy INNER JOIN, but this will only cover the already-persisted relations. What about the currently transient relations?

        class A {
            @OneToMany
            private List<B> listOfB;
        }

    If we access the listOfB and perform a contains() check, this will lazily fetch all the connected instances of B from the datasource. I only want to validate them by their primary key. Is there an easy solution that lets me ask things like "Is this instance of A connected with this instance of B?" without loading all of this data into memory and performing the check on collections?

    Read the article

  • OpenGL Vertex Array/Buffer Objects

    - by sadanjon
    Question 1: Are vertex buffer objects created under a certain VAO deleted once that VAO is deleted? An example:

        glGenBuffers(1, &bufferObject);
        glGenVertexArrays(1, &VAO);
        glBindVertexArray(VAO);
        glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
        glBufferData(GL_ARRAY_BUFFER, sizeof(someVertices), someVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(positionAttrib);
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, NULL);

    When later calling glDeleteVertexArrays(1, &VAO);, will bufferObject be deleted as well? The reason I'm asking is that I saw a few examples over the web that didn't delete those buffer objects.

    Question 2: What is the maximum amount of memory that I can allocate for buffer objects? It must be system-dependent of course, but I can't seem to find an estimate for it. What happens when video RAM isn't big enough? How would I know?
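
    Not from the article, but for reference on question 1: a VAO only stores references to buffer objects, so glDeleteVertexArrays does not free the VBOs themselves - they need their own glDeleteBuffers call. A tear-down sketch for the setup above (assumes a current GL context and loader are already in place):

        void cleanup(GLuint vao, GLuint bufferObject) {
            glBindVertexArray(0);                // make sure the VAO is not bound
            glDeleteBuffers(1, &bufferObject);   // the VBO must be deleted explicitly
            glDeleteVertexArrays(1, &vao);       // this frees only the VAO's own state
        }

    For question 2 there is no portable query for total buffer memory; the usual approach is to check glGetError() for GL_OUT_OF_MEMORY after glBufferData, and vendor extensions such as GL_NVX_gpu_memory_info or GL_ATI_meminfo expose the numbers where available.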

    Read the article

  • Output of System.out.println(object)

    - by Shaarad Dalvi
    I want to know what exactly the output tells me when I do the following:

        class data {
            int a = 5;
        }

        class main {
            public static void main(String[] args) {
                data dObj = new data();
                System.out.println(dObj);
            }
        }

    I know it gives something related to the object, as the output in my case is data@1ae73783. I guess the '1ae73783' is a hex number. I also did some work around it and printed System.out.println(dObj.hashCode()); and got the number 415360643, an integer value. I don't know what hashCode() returns; still, out of curiosity, when I converted 1ae73783 to decimal, I got 415360643! That's why I am curious: what exactly is this number? Is it some memory location in Java's sandbox or something else? Any light on this matter will be helpful. Thanks! :)

    Read the article

  • C: pass awk syntax as argument to execl

    - by Skuja
    I want to run the following command in C to read the system's CPU and memory usage:

        ps aux|awk 'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'

    I am trying to pass it to the execl command and after that read its output:

        execl("/bin/ps", "/bin/ps", "aux|awk", "'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'", (char *) 0);

    but in the terminal I am getting the following error:

        ERROR: Unsupported option (BSD syntax)

    I would like to know how to properly pass awk as an argument to execl.
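
    Not from the article, but the error comes from ps receiving "aux|awk" as one literal argument: execl runs a single program and never involves a shell, so the pipe and the single quotes are not interpreted. A sketch using popen(), which does hand the command line to /bin/sh (the equivalent with exec would be execl("/bin/sh", "sh", "-c", pipeline, (char *) 0)):

        #include <stdio.h>

        int main(void) {
            /* popen() runs the whole pipeline through /bin/sh, so the '|' and the
               awk program are interpreted exactly as they are in a terminal. */
            FILE *fp = popen("ps aux | awk 'NR > 0 { cpu += $3; ram += $4 }; END { print cpu, ram }'", "r");
            if (fp == NULL)
                return 1;

            double cpu = 0.0, ram = 0.0;
            if (fscanf(fp, "%lf %lf", &cpu, &ram) == 2)
                printf("cpu: %.1f%%  ram: %.1f%%\n", cpu, ram);

            pclose(fp);
            return 0;
        }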

    Read the article
