Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.


  • Open Folder and Select multiple files

    - by Vytas999
    In C# I want to open an Explorer window in which certain files are already selected. I do it like this:

        string fPath = newShabonFilePath;
        string arg = @"/select, ";
        int cnt = filePathes.Count;
        foreach (string s in filePathes)
        {
            if (cnt == 1)
                arg = arg + s;
            else
                arg = arg + s + ",";
            cnt--;
        }
        System.Diagnostics.Process.Start("explorer.exe", arg);

    But only the last file in arg is selected. How can I get all the files in arg selected when the Explorer window opens?
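
    As it happens, the /select switch of explorer.exe only honors a single file, so no comma-joined argument string will select several. One route that can work is the shell API SHOpenFolderAndSelectItems via P/Invoke; a minimal sketch (error handling omitted, and passing absolute PIDLs for the items is an assumption that holds in practice):

        using System;
        using System.Runtime.InteropServices;

        static class ShellSelect
        {
            [DllImport("shell32.dll", CharSet = CharSet.Unicode)]
            static extern int SHParseDisplayName(string name, IntPtr bindCtx,
                out IntPtr pidl, uint sfgaoIn, out uint sfgaoOut);

            [DllImport("shell32.dll")]
            static extern int SHOpenFolderAndSelectItems(IntPtr folderPidl,
                uint count, IntPtr[] itemPidls, uint flags);

            [DllImport("ole32.dll")]
            static extern void CoTaskMemFree(IntPtr pv);

            // Open `folder` in Explorer with every path in `files` selected.
            public static void Select(string folder, string[] files)
            {
                uint attrs;
                IntPtr folderPidl;
                SHParseDisplayName(folder, IntPtr.Zero, out folderPidl, 0, out attrs);

                IntPtr[] items = new IntPtr[files.Length];
                for (int i = 0; i < files.Length; i++)
                    SHParseDisplayName(files[i], IntPtr.Zero, out items[i], 0, out attrs);

                SHOpenFolderAndSelectItems(folderPidl, (uint)items.Length, items, 0);

                foreach (IntPtr p in items) CoTaskMemFree(p);
                CoTaskMemFree(folderPidl);
            }
        }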

    Read the article

  • using macro defined in header files

    - by Neeraj
    I have a macro definition in a header file like this:

        // header.h
        #define ARRAY_SZ(a) ((int)(sizeof(a) / sizeof((a)[0])))

    This is defined in some header file, which includes some more header files. Now I need to use this macro in a source file that has no other reason to include header.h or any of the headers it pulls in. Should I redefine the macro in my source file, or simply include header.h? Will the latter approach affect code size or compile time (I think yes), or the runtime (I think no)? Your advice on this!
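
    Including header.h costs compile time only: macros are expanded by the preprocessor, so the generated code and the runtime are identical either way, while redefining the macro risks the two copies drifting apart. If the transitive includes bother you, a third option is to move the macro into its own tiny header; a sketch, where array_sz.h is a made-up name:

        /* array_sz.h: holds just the macro, includes nothing else */
        #ifndef ARRAY_SZ_H
        #define ARRAY_SZ_H

        #define ARRAY_SZ(a) ((int)(sizeof(a) / sizeof((a)[0])))

        #endif

    Both header.h and any source file that only needs the macro can then include array_sz.h.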

    Read the article

  • Is there a way to log when a particular memory location gets written and by which function?

    - by rusbi
    I'm having a bug in my C++ program which happens very rarely but crashes it. It seems I have a buffer overflow problem or something similar, and I find these kinds of bugs the most difficult to track down. My program always crashes because of the same corrupted memory location. I'm wondering if there is a debugging tool which can detect when a particular memory location gets written to and log the function that did it. I'm using VLD (Visual Leak Detector) for my memory-leak hunting and it works great: it substitutes the original mallocs with its own and logs every allocation. I was wondering if there is something similar for memory writes? I know that something like that would cripple a program, but it could be really helpful.
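
    Visual Studio's data breakpoints (Debug > New Breakpoint > New Data Breakpoint) do exactly this using hardware watchpoints, at almost no runtime cost, and are the first thing to try. If it has to happen programmatically, one known technique is a guard-page trap; a rough sketch for 32-bit Windows (it reports any access to the watched page, not just one address, and re-arms via the trap flag; substitute Rip for Eip on x64):

        // Sketch: log accesses to a watched page with PAGE_GUARD and a
        // vectored exception handler.
        #include <windows.h>
        #include <stdio.h>

        static void* g_watched;   // first byte of the watched region

        static LONG CALLBACK Watcher(EXCEPTION_POINTERS* x)
        {
            DWORD code = x->ExceptionRecord->ExceptionCode;
            if (code == EXCEPTION_GUARD_PAGE) {
                // the guard fired once: log who touched the page, then
                // single-step so the instruction completes before re-arming
                fprintf(stderr, "access at %p from code %p\n",
                        (void*)x->ExceptionRecord->ExceptionInformation[1],
                        (void*)x->ContextRecord->Eip);
                x->ContextRecord->EFlags |= 0x100;   // set trap flag
                return EXCEPTION_CONTINUE_EXECUTION;
            }
            if (code == EXCEPTION_SINGLE_STEP) {
                DWORD old;
                VirtualProtect(g_watched, 1, PAGE_READWRITE | PAGE_GUARD, &old);
                return EXCEPTION_CONTINUE_EXECUTION;
            }
            return EXCEPTION_CONTINUE_SEARCH;
        }

        void watch(void* p)
        {
            g_watched = p;
            AddVectoredExceptionHandler(1, Watcher);
            DWORD old;
            VirtualProtect(p, 1, PAGE_READWRITE | PAGE_GUARD, &old);
        }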

    Read the article

  • Are large include files like iostream efficient? (C++)

    - by Keand64
    iostream, together with all the files it includes, the files those include, and so on, adds up to about 3000 lines. Consider the hello-world program, which needs no more functionality than printing something to the screen:

        #include <iostream> // +3000 lines right there

        int main()
        {
            std::cout << "Hello, World!";
            return 0;
        }

    This should be a very simple piece of code, yet iostream adds 3000+ lines to a marginal program. So, are those 3000+ lines really needed just to display a single line on the screen, and if not, do they create a less efficient program than if I simply copied the relevant lines into the code?
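
    Header size costs compile time, not runtime: declarations that are never used generate no code, and the linker only pulls in the library routines the program actually calls, so the executable is essentially the same either way. To see what the preprocessor really ingests (counts vary a lot by compiler and standard-library version):

        g++ -E hello.cpp | wc -l    # lines remaining after preprocessing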

    Read the article

  • Bad Intel DQ965GF motherboard? Fails memtest, but memory is good.

    - by Boden
    I've got a machine with a DQ965GF motherboard. Two days ago it started locking up hard. Ran memtest 3.3, 1.7, and TestMem 4. TestMem just freezes, memtest failed on moving 8 bit inversions. Letting memtest run eventually causes the system to restart. I pulled memory sticks one by one, and then replaced the memory with a couple of known good sticks. No luck. I switched power supplies, didn't help. Swapped video cards just to be safe. No help. When I start the machine I get a single beep before it POSTs. According to the manual, a single beep means: 1 beep - Refresh Error (with nothing on the screen and it is not a video problem) I'm assuming that the motherboard has failed since it's obviously not a RAM or power issue. Do you agree? NOTE: I also tried resetting BIOS defaults, and even flashed the BIOS to the latest version. I also ran the Mersenne Prime Test and the CPU seems to click along just fine. (Tried logging in to superuser with openid but it's not working for me today. Hope this gets through)

    Read the article

  • Split user.config into different files for faster saving

    - by HorstWalter
    In my C# Windows Forms application (.NET 3.5 / VS 2008) I have two settings files resulting in one user.config file. One settings file holds larger data but rarely changes; the frequently changed values are very few. However, since saving the settings always rewrites the whole (XML) file, it is always "slow":

        SettingsSmall.Default.Save(); // slow, even though SettingsSmall holds little data

    Could I configure the settings somehow to end up in two files, so that:

        SettingsSmall.Default.Save(); // should be fast
        SettingsBig.Default.Save();   // may be slow, is seldom saved

    I have seen that I can use the SectionInformation class for further customizing, but what would be the easiest approach for me? Is this possible just by changing app.config (configSections)?
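
    The stock LocalFileSettingsProvider persists every settings class into the one user.config, so app.config tweaks alone will not split the file; the documented extension point is a custom SettingsProvider attached to one of the classes. A minimal, persistence-stubbed sketch (SmallFileSettingsProvider and its target file are invented names; attach it with [SettingsProvider(typeof(SmallFileSettingsProvider))] on the small settings class):

        using System.Collections.Specialized;
        using System.Configuration;

        public sealed class SmallFileSettingsProvider : SettingsProvider
        {
            public override string ApplicationName
            {
                get { return "MyApp"; }   // hypothetical application name
                set { }
            }

            public override void Initialize(string name, NameValueCollection config)
            {
                base.Initialize(name ?? GetType().Name, config);
            }

            public override SettingsPropertyValueCollection GetPropertyValues(
                SettingsContext context, SettingsPropertyCollection props)
            {
                var values = new SettingsPropertyValueCollection();
                foreach (SettingsProperty p in props)
                    values.Add(new SettingsPropertyValue(p)); // TODO: read small.config
                return values;
            }

            public override void SetPropertyValues(
                SettingsContext context, SettingsPropertyValueCollection values)
            {
                // TODO: write only these few values to small.config;
                // the big user.config is never touched on this path
            }
        }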

    Read the article

  • How can I get valgrind to tell me the address of each non-freed block of memory?

    - by James
    Valgrind tells me function xxx allocated memory which was not freed. Fine. It's proving more difficult than usual to trace, however. To this end I have created numerous:

        #ifdef DEBUG
        fprintf(stderr, "something happening: %lx\n", (unsigned long)ptr);
        #endif

    So I just need to match these displayed ptr addresses with the addresses of the non-freed memory. How can I get valgrind to tell me the address of each non-freed block?
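
    Newer Valgrind releases (3.7 and later) can report this through their embedded gdbserver: the memcheck monitor command block_list prints the address of every block belonging to a loss record. A sketch of such a session (the loss-record number comes from the leak_check output):

        valgrind --vgdb=yes --vgdb-error=0 --leak-check=full ./myprog
        # in a second terminal:
        gdb ./myprog
        (gdb) target remote | vgdb
        (gdb) monitor leak_check full any
        (gdb) monitor block_list 1    # addresses of the blocks in loss record 1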

    Read the article

  • [PHP] Using cURL to download large XML files

    - by ndg
    I'm working with PHP and need to parse a number of fairly large XML files (50-75MB uncompressed). The issue, however, is that these XML files are stored remotely and will need to be downloaded before I can parse them. Having thought about the issue, I think using a system() call in PHP in order to initiate a cURL transfer is probably the best way to avoid timeouts and PHP memory limits. Has anyone done anything like this before? Specifically, what should I pass to cURL to download the remote file and ensure it's saved to a local folder of my choice?
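
    Shelling out works (for instance system('curl -o /local/big.xml ' . escapeshellarg($url)) with a URL and path of your choice), but PHP's own cURL binding can already stream straight to disk, so the document never has to fit in PHP's memory limit. A sketch, with placeholder URL and paths:

        <?php
        $src  = 'https://example.com/big.xml';   // remote file (placeholder)
        $dest = '/tmp/big.xml';                  // local target (placeholder)

        $fp = fopen($dest, 'w');
        $ch = curl_init($src);
        curl_setopt($ch, CURLOPT_FILE, $fp);            // write body straight to $fp
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 600);         // generous limit for 75MB
        curl_exec($ch);
        curl_close($ch);
        fclose($fp);
        ?>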

    Read the article

  • python list directory, subdirectory and files

    - by thomytheyon
    Hi all, I'm trying to make a script that lists all directories, subdirectories, and files in a given directory. I tried this:

        import sys, os

        root = "/home/patate/directory/"
        path = os.path.join(root, "targetdirectory")
        for r, d, f in os.walk(path):
            for file in f:
                print os.path.join(root, file)

    Unfortunately it doesn't work properly: I get all the files, but not their complete paths. For instance, if the directory structure were:

        /home/patate/directory/targetdirectory/123/456/789/file.txt

    it would print:

        /home/patate/directory/targetdirectory/file.txt

    What I need is the first result. Any help would be greatly appreciated! Thanks
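
    The join uses the fixed top-level root instead of the directory os.walk is currently visiting, so every file name gets glued onto the same prefix. Joining with the walk variable fixes it; a sketch in the same Python 2 style:

        import os

        root = "/home/patate/directory/"
        path = os.path.join(root, "targetdirectory")

        for dirpath, dirnames, filenames in os.walk(path):
            for name in filenames:
                # join with the directory being walked, not the top-level root
                print os.path.join(dirpath, name)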

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The table is small and does not contain any data that needs to be stored on disk. So I ask which is faster: InnoDB or MEMORY (HEAP)? My thoughts are: Both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row lock vs. MEMORY table lock. Will this present the bottleneck in the MEMORY implementation? Thanks for your thoughts!
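
    Since the answer depends heavily on the exact workload, benchmarking both engines on identical schemas is cheap to set up. A sketch (the table layout is invented, and the flush setting is an assumption: it trades up to about a second of transactions in a crash for much cheaper commits):

        -- identical tables, one per engine, for a side-by-side test
        CREATE TABLE t_innodb (
          id  INT PRIMARY KEY,
          val INT NOT NULL,
          KEY idx_val (val)
        ) ENGINE=InnoDB;

        CREATE TABLE t_memory LIKE t_innodb;
        ALTER TABLE t_memory ENGINE=MEMORY;

        -- relax InnoDB commit flushing for the UPDATE-heavy run
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;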

    Read the article

  • Is Java serialization a tool to shrink the memory footprint?

    - by Pentius
    Hey folks, does serialization in Java always shrink the memory used to hold an object structure, or can it just as well cost more? In other words: is serialization a tool to shrink the memory footprint of object structures in Java? Edit: I'm totally aware of what serialization was intended for, but thanks anyway :-) But you know, tools can be misused. My question is whether it is a good tool to decrease memory usage. So what reasons can you imagine why memory usage should increase or decrease? What will happen in most cases?
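
    One way to get a feel for it empirically is to serialize a sample structure into a byte array and compare the byte count against a rough in-heap estimate. A sketch (the 16-byte array-header figure is a typical 32-bit HotSpot assumption; real per-object overhead varies by VM):

        import java.io.*;

        public class SerializedSize {
            public static void main(String[] args) throws IOException {
                int[] data = new int[100000];   // sample object graph
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bytes);
                out.writeObject(data);
                out.close();
                System.out.println("serialized:       " + bytes.size() + " bytes");
                // rough in-heap size: 4 bytes per int plus array header
                System.out.println("in-heap (approx): " + (4 * data.length + 16) + " bytes");
            }
        }

    Graphs of many small objects often serialize smaller than they sit on the heap, while the stream's class descriptors can make tiny structures serialize larger; either way the serialized form is unusable until deserialized, so it is storage, not working memory.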

    Read the article

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default and the process is killed once run, causing an internal error. The code is utilising curl multi and must be able to loop with more than 150 iterations whilst still minimising the virtual memory. The code below is set to only 150 iterations and still causes the internal server error; at 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks!

        <?php
        function udate($format, $utimestamp = null) {
            if ($utimestamp === null)
                $utimestamp = microtime(true);
            $timestamp = floor($utimestamp);
            $milliseconds = round(($utimestamp - $timestamp) * 1000);
            return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp);
        }

        $url = 'https://www.testdomain.com/';
        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < 150; $i++) {
            $curl_arr[$i] = curl_init();
            curl_setopt($curl_arr[$i], CURLOPT_URL, $url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        for ($i = 0; $i < 150; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            $results = explode("<br>", $results);
            echo $results[0];
            echo "<br>";
            echo $results[1];
            echo "<br>";
            echo udate('H:i:s:u');
            echo "<br><br>";
            usleep(100000);
        }
        ?>

    Server details:

        CPUs:   8 x Intel(R) Xeon(R) CPU E5405 @ 2.00GHz (1995.120 MHz, 6144 KB cache)
        Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem)
        System: Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux
        Disks:  SCSI sda, 1952448512 512-byte sectors (999654 MB), write-back cache; removable sdb
        Usage:  Mem total 8306672k, used 7847384k, free 459288k, buffers 487912k, cached 6444548k; swap 4095992k total, 496k used
        Mounts: / 898G (36% used) on /dev/mapper/VolGroup00-LogVol00; /boot 99M (20% used) on /dev/sda1; /dev/shm 4.0G (0% used); /tmp 4.0G (48% used) on /var/tmpMnt
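
    Two things dominate the footprint here: all 150 handles exist at once, and each keeps its full response buffer alive until the second loop finishes. Processing in smaller batches and releasing every handle as soon as its body has been read keeps the peak bounded. A sketch (the batch size of 30 is an arbitrary assumption to tune):

        <?php
        $url   = 'https://www.testdomain.com/';
        $total = 150;
        $batch = 30;   // concurrent transfers per round (tune to taste)

        for ($done = 0; $done < $total; $done += $batch) {
            $master  = curl_multi_init();
            $handles = array();
            for ($i = 0; $i < $batch; $i++) {
                $ch = curl_init($url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_multi_add_handle($master, $ch);
                $handles[] = $ch;
            }
            do {
                curl_multi_exec($master, $running);
                curl_multi_select($master);          // wait instead of busy-looping
            } while ($running > 0);
            foreach ($handles as $ch) {
                $body = curl_multi_getcontent($ch);
                // ... use $body here ...
                curl_multi_remove_handle($master, $ch);
                curl_close($ch);                     // frees this handle's buffer now
            }
            curl_multi_close($master);
        }
        ?>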

    Read the article

  • C# xml Class to substitute ini files

    - by Eduardo
    Hi guys, I am learning Windows Forms in C#.NET 2008 and I want to build a class to work with simple XML files (a config file, like an INI file); I just need a simple class (open, getValue, setValue, createGroup, save and close functions) to substitute for INI files. I already did something and it is working, but I am having trouble when I need to read from different groups. The file looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <CONFIG>
          <General>
            <Field1>192.168.0.2</Field1>
          </General>
          <Data>
            <Field1>Joseph</Field1>
            <Field2>Locked</Field2>
          </Data>
        </CONFIG>

    How can I specify that I want to read Field1 of the [Data] group? Note that I have the same field name in both groups (Field1)! I am using XmlDocument, something like this. To open the document:

        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.Load(FilePath);

    To save it:

        xmlDoc.Save(FilePath);

    To get a value:

        public string getValue(string Field)
        {
            string result = "";
            try
            {
                XmlNodeList xmlComum = xmlDoc.GetElementsByTagName(Field);
                if (xmlComum.Item(0) == null)
                    result = "";
                else
                    result = xmlComum.Item(0).InnerText;
            }
            catch (Exception ex)
            {
                return "";
            }
            return result;
        }

    To set a value:

        public void setValue(string Group, string Field, string FieldValue)
        {
            try
            {
                XmlNodeList xmlComum = xmlDoc.GetElementsByTagName(Field);
                if (xmlComum.Item(0) == null)
                {
                    xmlComum = xmlDoc.GetElementsByTagName(Group);
                    if (xmlComum.Item(0) == null)
                    {
                        // create group
                        createGroup(Group);
                        xmlComum = xmlDoc.GetElementsByTagName(Group);
                    }
                    XmlElement xmlE = xmlDoc.CreateElement(Field);
                    XmlText xmlT = xmlDoc.CreateTextNode(FieldValue);
                    xmlE.AppendChild(xmlT);
                    xmlComum.Item(0).AppendChild(xmlE);
                }
                else
                {
                    // item already exists, just change its value
                    xmlComum.Item(0).InnerText = FieldValue;
                }
                xmlDoc.Save(FilePath);
            }
            catch (Exception ex)
            {
            }
        }

    The createGroup code:

        public void createGroup(string Group)
        {
            try
            {
                XmlElement xmlComum = xmlDoc.CreateElement(Group);
                xmlDoc.DocumentElement.AppendChild(xmlComum);
                xmlDoc.Save(FilePath);
            }
            catch (Exception ex)
            {
            }
        }

    Thank you!
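
    GetElementsByTagName searches the whole document, which is why the two Field1 elements collide. Scoping the lookup with an XPath expression distinguishes the groups, for both reading and writing; a sketch:

        // read one field from one specific group
        public string getValue(string Group, string Field)
        {
            XmlNode node = xmlDoc.SelectSingleNode("/CONFIG/" + Group + "/" + Field);
            return node == null ? "" : node.InnerText;
        }

        // getValue("Data", "Field1")    -> "Joseph"
        // getValue("General", "Field1") -> "192.168.0.2"

    The same SelectSingleNode call can replace the GetElementsByTagName lookups in setValue, so a field is only created or updated inside its own group.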

    Read the article

  • Which RAM is faster (or, is Crucial's Memory Advisor giving non-optimal advice)?

    - by adpe
    In general, if a PC's motherboard is only specified for RAM up to a given core speed x, will that PC be faster with: RAM of latency y capable of running at a maximum core speed >x, or RAM of latency <y capable of running at a maximum core speed of exactly x? I would have thought the latter, but Crucial's Memory Advisor tool advises the former. So, which of us is correct: me, or the machine? (Here is a concrete example: I wish to upgrade a Toshiba Satellite Pro L300-155 laptop from its current 1GB RAM to 2GB of Crucial RAM. The laptop's specifications are given here. I see from those specifications that the laptop is designed for DDR2-667 RAM. Crucial sells two compatible 2GB kits, priced exactly the same as each other: DDR2-667, CL=5; DDR2-800, CL=6. It seems to me that of these two upgrade kits, the first would run slightly faster on the L300-155 than the second, because both will presumably be capped at the DDR2-667 core speed (see laptop specs), but the second kit has more latency. However, Crucial's Memory Advisor tool recommends the second kit.)

    Read the article

  • iPhone OS Memory Warnings. What Do The Different Levels Mean?

    - by dugla
    Regarding the black art of managing memory on iPhone OS devices: what do the different levels of memory warning mean? Level 1? Level 2? Does the dial go to 11? Context: after an extensive memory stress-testing period, including running my iPad app with the iPod music player app playing, I am inclined to ignore the random yet infrequent memory warnings I am receiving. My app never crashes. Ever. My app is leak free. And, well, the memory warnings just don't seem to matter. Thanks, Doug

    Read the article

  • is it possible to load a shared library into shared memory?

    - by quimm2003
    I have a server and a client written in C. I'm trying to load a shared library in the server and then pass the library's function pointers to the client. This way I can change the library without having to recompile the client. Because every process has its own separate address space, I wonder if it is possible to load a shared library into shared memory, pass the function pointers, map the shared memory in the client, and then have the client execute the library code that the server loaded.
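
    For what it's worth, the read-only text pages of a shared library are already shared between processes by the dynamic loader; what cannot be shared are the raw pointers, since the library may be mapped at a different address in each process. The usual pattern is therefore to send the symbol name rather than the pointer, and let the client resolve it itself with dlopen/dlsym. A sketch (library and symbol names are placeholders):

        #include <dlfcn.h>
        #include <stdio.h>

        /* Each process resolves the symbol itself: only the *name* travels
           over the wire, because addresses are not portable across address
           spaces. */
        int main(void)
        {
            const char *libname = "libplugin.so";   /* placeholder library */
            const char *symname = "do_work";        /* name received from server */

            void *lib = dlopen(libname, RTLD_NOW);
            if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

            void (*fn)(void) = (void (*)(void))dlsym(lib, symname);
            if (fn) fn();

            dlclose(lib);
            return 0;
        }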

    Read the article

  • Shell script to count files, then remove oldest files

    - by Nic Hubbard
    I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest ones, so that only the 10 newest are left. So far I know how to count the files, which seems easy enough, but how do I then remove the oldest files if the count is over 10?

        if [ls /backups | wc -l > 10]
        then
            echo "More than 10"
        fi
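
    As written, the condition can't work; the shell needs command substitution and a numeric -gt test. The deletion itself can ride on ls -t (newest first). A sketch, assuming backup file names contain no spaces or newlines:

        #!/bin/sh
        cd /backups || exit 1

        count=$(ls | wc -l)
        if [ "$count" -gt 10 ]; then
            # everything after the 10th newest entry is an old backup
            ls -t | tail -n +11 | xargs rm --
        fi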

    Read the article

  • Using Python, How to copy files in 'temporary internet files' folder in Windows

    - by pythBegin
    I am using this code to find files recursively in a folder, with size greater than 500000 bytes:

        def listall(parent):
            lis = []
            for root, dirs, files in os.walk(parent):
                for name in files:
                    if os.path.getsize(os.path.join(root, name)) > 500000:
                        lis.append(os.path.join(root, name))
            return lis

    This is working fine. But when I use it on the 'Temporary Internet Files' folder in Windows, I get this error:

        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            listall(a)
          File "<pyshell#2>", line 5, in listall
            if os.path.getsize(os.path.join(root,name))>500000:
          File "C:\Python26\lib\genericpath.py", line 49, in getsize
            return os.stat(filename).st_size
        WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Documents and Settings\\khedarnatha\\Local Settings\\Temporary Internet Files\\Content.IE5\\EDS8C2V7\\??????+1[1].jpg'

    I think this is because Windows gives names with special characters in this specific folder... Please help me sort out this issue.
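
    The '??????' in the failing path suggests the IE cache holds file names with non-ANSI characters that get mangled when the tree is walked with a byte-string path. On Python 2, passing a unicode path makes os.walk return unicode names, which stat correctly; a try/except guards against entries that still fail. A sketch:

        import os

        def listall(parent):
            big = []
            # a unicode argument makes os.walk yield unicode names
            for root, dirs, files in os.walk(unicode(parent)):
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        if os.path.getsize(path) > 500000:
                            big.append(path)
                    except WindowsError:
                        pass   # skip entries that still refuse to stat
            return big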

    Read the article

  • Viewing large XML files in eclipse?

    - by Paul Wicks
    I'm working on a project involving some large XML files (from 50MB to over 1GB) and it would be nice if I could view them in eclipse (simple text view is fine) without Java running out of heap space. I've tried tweaking the amount of memory available to the jvm in eclipse.ini but haven't had much success. Any ideas?
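
    For reference, the heap settings live in eclipse.ini and must come after the -vmargs line; a sketch with illustrative values (even a 2GB heap will struggle to hold a 1GB file in an ordinary editor buffer, so for the largest files an external pager or a streaming viewer is the realistic fallback):

        -vmargs
        -Xms256m
        -Xmx2048m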

    Read the article

  • My Android game runs out of memory when it is closed and opened a couple of times.

    - by sirconnorstack
    I have an Android game with one activity for the menu and another for the game itself, which creates a SurfaceView and a Thread to handle canvas drawing and game logic. When you exit the game and start it up again too many times, or if you open and close the keyboard (thus restarting the activity), the game runs out of memory, usually when loading a bitmap: java.lang.OutOfMemoryError: bitmap size exceeds VM budget. How can I keep all my images in memory without loading them again when the game changes state, or how can I release them from memory and let them reload when the game is restarted?
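
    Each restarted activity decodes a fresh set of bitmaps while the old set is still pinned by the previous instance, so the native bitmap budget fills up. One common pattern is to recycle every bitmap when the activity is torn down; a sketch (GameActivity and loadedBitmaps are invented names for the game's activity and its bitmap registry):

        import java.util.ArrayList;
        import java.util.List;

        import android.app.Activity;
        import android.graphics.Bitmap;

        public class GameActivity extends Activity {
            // hypothetical registry: every bitmap the game decodes is added here
            private final List<Bitmap> loadedBitmaps = new ArrayList<Bitmap>();

            @Override
            protected void onDestroy() {
                super.onDestroy();
                for (Bitmap b : loadedBitmaps) {
                    if (b != null && !b.isRecycled()) {
                        b.recycle();   // release the native pixel memory immediately
                    }
                }
                loadedBitmaps.clear();
            }
        }

    The mirror-image approach, keeping bitmaps alive across restarts in a static cache or a subclassed Application object, also works, as long as exactly one copy ever exists.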

    Read the article

  • How can I find out how much memory an instance of a C++ class consumes?

    - by Shadow
    Hi, I am developing a Graph class based on the Boost Graph Library. A Graph object contains a boost graph (an adjacency_list) and a map. When monitoring the total memory usage of my program, it consumes quite a lot (checked with pmap). Now I would like to know exactly how much of that memory is consumed by a filled object of this Graph class. By filled I mean that the adjacency_list is full of vertices and edges. I found out that using sizeof() doesn't get me far. Valgrind is not an alternative either, as quite some memory allocation is done beforehand, which makes it impractical for this purpose. I'm also not interested in what other parts of the program cost in memory; I want to focus on this one single object. Thank you.
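
    sizeof() only sees the fixed-size header of the object, never what its containers allocate. One self-contained approach is a counting allocator threaded through the containers you control (the map, and the adjacency_list storage where its selectors allow it), so only this object's allocations are tallied. A sketch of the counting machinery, in the C++03 style of the era:

        #include <cstddef>
        #include <memory>

        // running total of bytes held by containers using this allocator
        static std::size_t g_graph_bytes = 0;

        template <class T>
        struct counting_allocator : std::allocator<T> {
            template <class U> struct rebind { typedef counting_allocator<U> other; };

            counting_allocator() {}
            template <class U> counting_allocator(const counting_allocator<U>&) {}

            T* allocate(std::size_t n) {
                g_graph_bytes += n * sizeof(T);
                return std::allocator<T>::allocate(n);
            }
            void deallocate(T* p, std::size_t n) {
                g_graph_bytes -= n * sizeof(T);
                std::allocator<T>::deallocate(p, n);
            }
        };

        // e.g. std::map<K, V, std::less<K>, counting_allocator<std::pair<const K, V> > >

    After filling the graph, g_graph_bytes plus sizeof(Graph) gives a decent lower bound; the heap's own per-block bookkeeping overhead is not counted.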

    Read the article
