Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 425/555

  • vector<vector<largeObject>> vs. vector<vector<largeObject>*> in c++

    - by Leif Andersen
    Obviously it will vary depending on the compiler you use, but I'm curious about the performance issues of vector<vector<largeObject>> vs. vector<vector<largeObject>*>, especially in C++. Specifically: let's say the outer vector is full and you want to start inserting elements into the first inner vector. How will that be stored in memory if the outer vector is just storing pointers, as opposed to storing the whole inner vector? Will the whole outer vector have to be moved to gain more space, or will the inner vector be moved (assuming that space wasn't pre-allocated), causing problems with the outer vector? Thank you
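
    A minimal sketch (LargeObject is a hypothetical stand-in for the real type) of what actually moves: the outer vector's buffer holds the inner vector headers (pointer/size/capacity), so growing an inner vector only reallocates that inner vector's own heap block, while growing the outer vector relocates just the small headers, not every element:

        #include <iostream>
        #include <vector>

        struct LargeObject { char payload[256]; };   // hypothetical stand-in

        int main() {
            // The outer buffer stores the inner vector *objects* (small headers),
            // not the LargeObjects themselves.
            std::vector<std::vector<LargeObject>> outer(4);
            std::vector<LargeObject>* outerBuf = outer.data();

            // Grow the first inner vector far past any pre-allocated capacity.
            for (int i = 0; i < 10000; ++i)
                outer[0].push_back(LargeObject());

            // Typically prints false: only the inner vector's own allocation moved.
            std::cout << std::boolalpha
                      << "outer buffer moved? " << (outer.data() != outerBuf) << '\n';

            // Growing the *outer* vector is what relocates the headers (still cheap
            // compared to copying every LargeObject).
            outer.push_back(std::vector<LargeObject>());
            std::cout << "outer buffer moved after outer push_back? "
                      << (outer.data() != outerBuf) << '\n';
        }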

    Read the article

  • Parsing XML stream in ASP.NET 3.5

    - by Ranjit
    Hi All, I am trying to build an ASP.NET 3.5 application based on XML streams from a legacy system. My issue is that once I get the XML, I have to build menus and sub-menus from it, as well as filter the data (XML stream) based on the menu selection, without making a round trip to the data store (the legacy system). Right now I have a DAL which gets the XML stream in the form of an XDocument. I was able to build the first-level menu items, but not the sub-menu items based on the selection in the main menu, and then the final content based on the sub-menu selection, all without making a round trip. Is there a way to do this in memory? Please suggest. Thank you. Ranjit
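
    A minimal LINQ to XML sketch (the element names Menus/Menu/SubMenu are invented here; the legacy feed will differ) showing how the same in-memory XDocument can be re-queried for sub-menu items on each selection, with no further round trip:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class MenuDemo
        {
            static void Main()
            {
                // Hypothetical shape of the legacy XML.
                XDocument doc = XDocument.Parse(@"
                    <Menus>
                      <Menu name='Reports'>
                        <SubMenu name='Sales'><Item>Q1</Item><Item>Q2</Item></SubMenu>
                        <SubMenu name='Inventory'><Item>Stock</Item></SubMenu>
                      </Menu>
                    </Menus>");

                // First-level menu items (bind these to the main menu control).
                var menus = doc.Root.Elements("Menu")
                               .Select(m => (string)m.Attribute("name"));

                // On selection, filter the cached document again -- no round trip.
                string selectedMenu = "Reports";
                var subMenus = doc.Root.Elements("Menu")
                                  .Where(m => (string)m.Attribute("name") == selectedMenu)
                                  .Elements("SubMenu")
                                  .Select(s => (string)s.Attribute("name"));

                Console.WriteLine(string.Join(", ", subMenus.ToArray()));  // Sales, Inventory
            }
        }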

    Read the article

  • Good reasons not to use XIB files?

    - by mystify
    Are there any good reasons why I should not use XIB / NIB files with a highly customized UI, extensive animations, and super-low memory footprint needs? As a beginner I started with XIB. Then I figured out I couldn't do everything I needed in them. It started to get really hard to customize things the way I wanted them to be. So in the end, I threw all my XIBs away and did it all programmatically. So when someone asks me if XIB is good, I generally say: yeah, if you want to make crappy boring interfaces and don't care too much about performance, go ahead. But what else could be a reason not to use XIB? Am I the only iPhone developer who prefers doing everything programmatically for these reasons?

    Read the article

  • PHP: how to stop ignore_user_abort; is it a good solution for a long-running program?

    - by user192344
    Let's say I have an email-sending program which needs to run for around 7 hours, but I can't keep the browser open for 7 hours. Besides a cron job, would ignore_user_abort() be a solution? Will the script stop when all the emails have been sent and the program has finished the loop, or will it keep eating the server's memory? Some people said you may need to add some output at the end of the program to keep it from running forever, and some people also said that echoing a little bit of string will not stop the script and that you have to use ob_flush. Any example for this?
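
    A minimal sketch of the usual pattern (get_recipients() and send_one_email() are placeholders, not a real library). With ignore_user_abort(true) the script keeps running after the browser goes away, and it simply exits when the loop finishes -- it does not run forever or keep holding memory; the echo/ob_flush part only matters if you want to detect the disconnect yourself:

        <?php
        ignore_user_abort(true);   // keep running after the browser/connection goes away
        set_time_limit(0);         // lift max_execution_time for this run

        foreach (get_recipients() as $address) {
            send_one_email($address);

            // Optional: flush a byte so connection_aborted() can notice a disconnect.
            echo ".";
            if (ob_get_level()) { ob_flush(); }
            flush();

            if (connection_aborted()) {
                // Log it if you like; with ignore_user_abort(true) the loop continues anyway.
            }
        }
        // End of loop: the script exits normally here. A cron-launched CLI run
        // avoids the browser (and these settings) entirely.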

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs up at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid as soon as I try to back that data up again I will be back at square one. So let me sum things up: Here is what I've done so far to troubleshoot this server: Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install. Confirmed with Dell that I have installed the latest, Dell approved BIOS and NIC drivers. Uninstalled / reinstalled the Backup Exec Remote Agent. Uninstalled the Trend Micro A/V client. Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up. Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but did not help my problem. Help confirm or deny the following assumptions: There are two problems at work here. Why the server is locking up in the first place, and why the server won't boot cleanly after a lockup. This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a Repair installation. This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here. Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.

    Read the article

  • Paging in ActiveRecord

    - by Alex
    Does Castle Project ActiveRecord support paging? I need to load only the data that is currently visible on the screen. If I use [HasMany], the collection will be loaded as a whole either immediately or at the first call (if the lazy attribute is true). However, I only need something like the first 100 records (then maybe the next 100 records). Another question is how to load only 100 items. If the collection is too big, memory can reach its limit if we constantly load more and more items.
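
    If memory serves, Castle ActiveRecord exposes paging through SlicedFindAll (an offset plus a page size) rather than through [HasMany]; a hedged sketch with illustrative class and property names -- the exact overloads and namespaces should be checked against your ActiveRecord/NHibernate version:

        using Castle.ActiveRecord;
        using NHibernate.Criterion;   // NHibernate.Expression on older versions

        // Assumes an existing mapped class, e.g.
        // [ActiveRecord] public class OrderLine : ActiveRecordBase<OrderLine> { ... }

        public static class OrderLinePaging
        {
            public static OrderLine[] GetPage(int pageIndex, int pageSize)
            {
                // SlicedFindAll(firstResult, maxResults, orders, ...criteria)
                return OrderLine.SlicedFindAll(
                    pageIndex * pageSize,            // first row to return
                    pageSize,                        // rows per page
                    new[] { Order.Asc("Id") });
            }
        }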

    Read the article

  • Java threadpool functionality

    - by cpf
    Hi stackoverflow, I need to make a program with a limited number of threads (currently using newFixedThreadPool), but I have the problem that all threads get created from the start, filling up memory at an alarming rate. I wish to prevent this. Threads should only be created shortly before they are executed. E.g.: I call the program and instruct it to use 2 threads in the pool. The program should create and launch the first 2 threads immediately (obviously), create the next 2 to wait for the previous 2, and at that point wait until one or both of the first 2 have finished executing. I thought about extending Executor or FixedThreadPool or such, however I have no clue where to start and doubt it is the best solution. The easiest would be to have my main thread sleeping at intervals, which is not really good either... Thanks in advance!
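
    A hedged sketch of one standard way to get this behaviour: newFixedThreadPool(2) itself only ever creates 2 worker threads, but its unbounded queue happily accepts every submitted task up front; giving ThreadPoolExecutor a small bounded queue plus CallerRunsPolicy makes the submitting thread slow down to the pool's pace, so tasks are only created and queued shortly before they run:

        import java.util.concurrent.*;

        public class BoundedPoolDemo {
            public static void main(String[] args) throws InterruptedException {
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        2, 2,                                   // exactly 2 worker threads
                        0L, TimeUnit.MILLISECONDS,
                        new ArrayBlockingQueue<Runnable>(2),    // at most 2 tasks waiting
                        new ThreadPoolExecutor.CallerRunsPolicy());

                for (int i = 0; i < 1000; i++) {
                    final int taskId = i;
                    pool.execute(new Runnable() {
                        public void run() {
                            // placeholder for the real work
                            System.out.println("task " + taskId + " on "
                                    + Thread.currentThread().getName());
                        }
                    });
                }

                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }
        }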

    Read the article

  • How can I catch runtime errors in C++?

    - by Yan Cheng CHEOK
    Referring to http://stackoverflow.com/questions/315948/c-catching-all-exceptions : try { int i = 0; int j = 0/i; /* Division by 0 */ int *k = 0; std::cout << *k << std::endl; /* De-reference invalid memory location. */ } catch (...) { std::cout << "Opps!" << std::endl; } The above run-time errors are not caught. Or am I having the wrong expectations of C++'s exception handling feature?
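
    Both of those are undefined behaviour in C++, not C++ exceptions, so catch (...) never sees them (they surface as structured exceptions on Windows or signals on Unix, outside the C++ exception machinery). A minimal sketch of the portable alternative -- check yourself and throw:

        #include <iostream>
        #include <stdexcept>

        int safe_divide(int numerator, int denominator) {
            if (denominator == 0)
                throw std::runtime_error("division by zero");
            return numerator / denominator;
        }

        int deref_checked(const int* p) {
            if (p == 0)   // nullptr on C++11 and later
                throw std::runtime_error("null pointer dereference");
            return *p;
        }

        int main() {
            try {
                int i = 0;
                int j = safe_divide(1, i);          // throws instead of undefined behaviour
                int* k = 0;
                std::cout << deref_checked(k) << " " << j << std::endl;
            } catch (const std::exception& e) {
                std::cout << "Oops: " << e.what() << std::endl;
            }
            // MSVC-specific: with /EHa, _set_se_translator() can convert hardware
            // faults into C++ exceptions, but that is not portable.
            return 0;
        }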

    Read the article

  • Cloning a LINQ to SQL object with an extension method throws an ObjectDisposedException

    - by gtas
    Hello all, I have this extension method for cloning my LINQ to SQL objects: public static T CloneObjectGraph<T>(this T obj) where T : class { var serializer = new DataContractSerializer(typeof(T), null, int.MaxValue, false, true, null); using (var ms = new System.IO.MemoryStream()) { serializer.WriteObject(ms, obj); ms.Position = 0; return (T)serializer.ReadObject(ms); } } But when I carry objects that don't have all references loaded (querying with DataLoadOptions), it sometimes throws an ObjectDisposedException, even though I don't ask for references that are not loaded (null). E.g. I have a Customer with many references and I just need to carry the Address reference (EntityRef<>) in memory, and I don't load anything else. But when I clone the object, this exception forces me to load all the EntitySet<> references along with the Customer object, which might be too much and slows the application down. Any suggestions?
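
    One likely explanation (worth verifying against your code): DataContractSerializer walks the whole graph, so it touches the deferred EntityRef/EntitySet members, and if the originating DataContext has already been disposed, that touch is what raises the ObjectDisposedException. A hedged sketch of the usual workaround -- fetch the object to be cloned from a context with deferred loading switched off, eagerly loading only what you need (MyDataContext, Customer and Address are placeholders for your model):

        using System.Data.Linq;
        using System.Linq;

        public static class CustomerLoader
        {
            public static Customer LoadForCloning(int customerId)
            {
                using (var db = new MyDataContext())
                {
                    // Untouched deferred members now serialize as null/empty instead of
                    // lazy-loading from a disposed context inside CloneObjectGraph().
                    db.DeferredLoadingEnabled = false;

                    var options = new DataLoadOptions();
                    options.LoadWith<Customer>(c => c.Address);   // load only what you need
                    db.LoadOptions = options;

                    return db.Customers.Single(c => c.CustomerID == customerId);
                }
            }
        }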

    Read the article

  • Clear HTML webpage?

    - by noname
    I am currently developing a crawler that crawls all links on the web and displays them in the web browser (and saves them, of course). But after some hours there will be a huge list displayed in the browser, and I want to display only, let's say, 1000 links at a time; then I clear the HTML and display another 1000 links. This is also better for RAM, otherwise it will eat up all the memory. How do I clear the web browser screen? EDIT: I have seen some scripts using some flush-buffer functions. Does that have anything to do with my case?
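
    Flush-buffer functions (ob_flush and friends) only control when the server sends bytes; they cannot take back HTML the browser has already rendered, so trimming the list is a client-side job. A hedged JavaScript sketch, assuming the crawler appends each link into a container element the page controls:

        <ul id="links"></ul>
        <script>
        // Hypothetical helper: call it for every link the crawler reports.
        function addLink(url) {
            var list = document.getElementById('links');
            var li = document.createElement('li');
            li.appendChild(document.createTextNode(url));
            list.appendChild(li);

            // Keep only the newest 1000 entries so the page (and its memory) stays bounded.
            while (list.children.length > 1000) {
                list.removeChild(list.firstChild);
            }
        }
        </script>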

    Read the article

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are: 1) How would you create multiple managed modules when building a project such as a class library or a console app, etc.? 2) Is there a way to tell the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so? 3) Can managed modules span assemblies? 4) Are separate files created on disk when the source code is compiled, or are these created in memory and directly embedded in an assembly?
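
    For question 1: the Visual Studio project types only ever emit single-module assemblies, so multi-module assemblies are built with the command-line tools. A hedged sketch (switch spellings are worth double-checking against your SDK) -- each .netmodule is a separate file on disk, and the assembly whose manifest references them becomes a multi-file assembly:

        rem Compile source files into separate managed modules (.netmodule files on disk):
        csc /target:module /out:Parsing.netmodule Parsing.cs
        csc /target:module /out:Reporting.netmodule Reporting.cs

        rem Build the assembly that carries the manifest and references both modules:
        csc /target:library /addmodule:Parsing.netmodule /addmodule:Reporting.netmodule /out:Tools.dll Facade.cs

        rem (al.exe can do the final link step as well, e.g.
        rem  al /target:library /out:Tools.dll Parsing.netmodule Reporting.netmodule)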

    Read the article

  • Change connection on the fly

    - by aron
    Hello, I have a SQL Server with 50 databases. Each one has the exact same schema. I used the awesome SubSonic 2.2 to create the DAL based on one of them. I need to loop through a list of database names, connect to each one, and perform an update one at a time. Is there a way to alter how SubSonic uses the connection string? I believe I would need to store the connection string in memory so that it can keep changing. Is this possible? I tried doing ConfigurationManager.ConnectionStrings["theConnStrName"].ConnectionString = updated-connection-string-here; ...but that did not work. Thanks!

    Read the article

  • Application crashing on the iPad but working fine in the iPad Simulator

    - by srikanth rongali
    Hi, I am writing a game in cocos2d. In the iPad Simulator the application runs fine, but when I run it on the iPad it crashes, giving the following message in the terminal. I am using 2048x2048 CCSpriteSheets in my code. Using the Instruments tool, I see a sudden increase in memory to 32MB before the crash. It is crashing at CCSpriteFrameCache. Program loaded. target remote-mobile /tmp/.XcodeGDBRemote-6258-64 Switching to remote-macosx protocol mem 0x1000 0x3fffffff cache mem 0x40000000 0xffffffff none mem 0x00000000 0x0fff none continue The program is not being run. The program is not being run. Thank you.

    Read the article

  • Are there any Python reference counting/garbage collection gotchas when dealing with C code?

    - by Jason Baker
    Just for the sheer heck of it, I've decided to create a Scheme binding to libpython so you can embed Python in Scheme programs. I'm already able to call into Python's C API, but I haven't really thought about memory management. The way mzscheme's FFI works is that I can call a function, and if that function returns a pointer to a PyObject, then I can have it automatically increment the reference count. Then, I can register a finalizer that will decrement the reference count when the Scheme object gets garbage collected. I've looked at the documentation for reference counting, and don't see any problems with this at first glance (although it may be sub-optimal in some cases). Are there any gotchas I'm missing? Also, I'm having trouble making heads or tails of the cyclic garbage collector documentation. What things will I need to bear in mind here? In particular, how do I make Python aware that I have a reference to something so it doesn't collect it while I'm still using it?
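
    For reference, the pattern described maps onto the CPython C API roughly like this (a minimal sketch; the scheme_* names are placeholders for whatever hooks mzscheme's FFI provides). The main gotcha is knowing, per Python API call, whether the returned reference is new or borrowed -- only borrowed ones need the extra increment. And as long as every reference Scheme holds is reflected in the refcount this way, the cyclic collector will not reclaim the object, since it only frees cycles that are unreachable from counted references:

        #include <Python.h>

        /* Call right after handing a PyObject* to Scheme. For a *borrowed*
         * reference this INCREF is what keeps Python from collecting the object
         * while Scheme still holds it; for a *new* reference the producing call
         * already incremented the count, so skip it there. */
        static void scheme_took_reference(PyObject *obj)
        {
            Py_INCREF(obj);
        }

        /* Register as the finalizer that runs when the wrapping Scheme object
         * is garbage collected. */
        static void scheme_dropped_reference(PyObject *obj)
        {
            Py_DECREF(obj);
        }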

    Read the article

  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up. SO references: autorelease-iphone Why does this create a memory leak (iPhone)? Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?
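
    The cost usually cited is not the autorelease call itself but lifetime: autoreleased objects stay alive until the surrounding pool drains, so in a tight loop peak memory grows with the loop. A hedged, pre-ARC sketch of the standard mitigation (count and processLine: are placeholder names):

        // Drain a local pool periodically so autoreleased temporaries don't pile up
        // until the end of the run-loop iteration.
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        for (NSUInteger i = 0; i < count; i++) {
            NSString *line = [NSString stringWithFormat:@"item %lu", (unsigned long)i]; // autoreleased

            [self processLine:line];   // hypothetical work

            if (i % 500 == 499) {      // drain every 500 iterations
                [pool drain];
                pool = [[NSAutoreleasePool alloc] init];
            }
        }
        [pool drain];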

    Read the article

  • Changes to the User Permissions Not saving

    - by anru
    I am using Drupal 6. It seems like the permissions page cannot save too many settings. I have tried to save the permission settings, but they are just not saved into the DB. I have found out this is due to "too many fields" (I use the Content Permissions module). If I uncheck some fields and then check fewer fields, the permissions will be saved. For example, if I uncheck 2 checkboxes and then check one checkbox, the permissions will be saved. Does anyone know which function the permissions page uses to insert the result into the DB? My PHP memory limit is 256M.
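
    One hedged guess worth checking before digging into Drupal's form code: a permissions grid with very many checkboxes can exceed PHP's (or the Suhosin patch's) cap on the number of POST variables, in which case the extra fields are silently dropped before Drupal ever sees them. The relevant php.ini directives, availability depending on your PHP/Suhosin versions:

        ; php.ini -- number of accepted input variables (directive exists from PHP 5.3.9)
        max_input_vars = 5000

        ; If the Suhosin patch/extension is installed, it has its own caps:
        suhosin.post.max_vars = 5000
        suhosin.request.max_vars = 5000

        ; Very large forms can also hit this:
        post_max_size = 32M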

    Read the article

  • Path vs GeometryDrawing

    - by Carlo
    Just wondering what's lighter, I'm going to have a control that draws 280 * 4 my SegmentControl, which is a quarter of a circle, and I'm just wondering what's the way that takes least memory to draw said segment. GeometryDrawing: <Image> <Image.Source> <DrawingImage> <DrawingImage.Drawing> <GeometryDrawing Brush="LightBlue" Geometry="M24.612317,0.14044853 C24.612317,0.14044853 33.499971,-0.60608719 41,7.0179795 48.37642,14.516393 47.877537,23.404541 47.877537,23.404541 L24.60978,23.401991 z" /> </DrawingImage.Drawing> </DrawingImage> </Image.Source> </Image> Or Path: <Path Fill="LightBlue" Stretch="Fill" Stroke="#FF0DA17D" Data="M24.612317,0.14044853 C24.612317,0.14044853 33.499971,-0.60608719 41,7.0179795 48.37642,14.516393 47.877537,23.404541 47.877537,23.404541 L24.60978,23.401991 z" /> Or if you know of an even better way, it'll be much appreciated. Thanks!

    Read the article

  • Why does SQL Server keep throwing exceptions?

    - by Augusto Càzares
    I have a .NET project that uses a SQL Server database. I'm using LINQ to SQL, and sometimes when one part of the project throws an exception (a constraint violation), the same error keeps showing up in other parts of the project when I do something else with the database. For example, when I do an insertion and I previously had an exception on a delete, the insertion throws me the delete exception, and it stays that way until I close and reopen the project. My major problem is when this happens in my online project: the error causes me problems in the project I'm testing online (I use the same database). I don't know if this exception is kept in memory or something, but it has been causing me a lot of headaches.
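
    A hedged guess at the cause: this is the classic symptom of one long-lived, shared DataContext -- after a failed SubmitChanges it still holds the failed change set, so every later submit retries (and re-throws) the old error until the app restarts. A sketch of the usual fix, a short-lived context per unit of work (MyDataContext, Order and Orders are placeholders for your model):

        using System.Data.Linq;
        using System.Linq;

        public static class OrderService
        {
            public static void InsertOrder(Order order)
            {
                // Fresh context per operation: if SubmitChanges fails here, the
                // pending (bad) changes die with the context instead of poisoning
                // every later database call in the app.
                using (var db = new MyDataContext())
                {
                    db.Orders.InsertOnSubmit(order);
                    db.SubmitChanges();
                }
            }

            public static void DeleteOrder(int orderId)
            {
                using (var db = new MyDataContext())
                {
                    var order = db.Orders.Single(o => o.OrderID == orderId);
                    db.Orders.DeleteOnSubmit(order);
                    db.SubmitChanges();
                }
            }
        }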

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics: all data put into it is persistent; all reads are delivered out of memory; data is universally available; data lives where it is most needed; data is versioned (nice to have); updates are transactional (I'd like ACID characteristics); data is potentially replicated, but always in sync; works on Windows; is based on or has bindings for .NET; is really fast; is really robust; is redundant; is scalable. I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • How to solve the performance decay of a VB.NET 1.1 application?

    - by marco.ragogna
    I have a single-threaded Windows Forms application written in VB.NET and targeting Framework 1.1. The software communicates with external boards through a serial interface, and it mainly consists of a state machine that runs some tests, driven in a loop by a Timer with an Interval of 50 ms. The feedback on the user interface is provided through some custom events raised during the tests. The problem that is driving me crazy is that the performance slightly decreases over time, in particular after 1200/1300 test operations. The memory occupied does not increase over time; it is only the CPU that seems affected by this problem. The strange thing is that, targeting Framework 2.0 and using identical code, I do not have this problem. I know it is difficult without looking at the code, but do you have suggestions on how I can approach the problem?

    Read the article

  • CPU/GPU monitoring (temperature, current speed, etc.) in C#

    - by Tommy
    I'm looking for a way to monitor system statistics. Here are my main points of interest: CPU temperature, CPU speed (cycles per second), CPU load (idle percent), and GPU temperature. Some other points of interest: memory usage and network load (traffic up/down). My ultimate goal is to write an application that can easily run in the background and allow setting many events for certain actions, for instance: when processor temp gets to 56C -> do _Blank_, etc. So this leaves me with two main points: 1) Is there a framework already out there for this sort of thing? 2) If no to #1, how can I go about doing this?
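
    A hedged sketch of the two Windows building blocks usually combined for this: PerformanceCounter for CPU load and similar counters, and a WMI query against the ACPI thermal zone for temperature (many machines expose no real per-core CPU temperature there, and GPU temperature generally needs the vendor's own SDK, e.g. NVAPI or AMD's ADL):

        using System;
        using System.Diagnostics;
        using System.Management;   // add a reference to System.Management.dll

        class MonitorDemo
        {
            static void Main()
            {
                var cpuLoad = new PerformanceCounter("Processor", "% Processor Time", "_Total");
                cpuLoad.NextValue();                  // first sample is always 0 -- prime it
                System.Threading.Thread.Sleep(1000);
                Console.WriteLine("CPU load: {0:0.0}%", cpuLoad.NextValue());

                // ACPI thermal zone, reported in tenths of a Kelvin; availability
                // depends on the BIOS/driver and usually requires admin rights.
                var searcher = new ManagementObjectSearcher(
                    @"root\WMI", "SELECT CurrentTemperature FROM MSAcpi_ThermalZoneTemperature");
                foreach (ManagementObject zone in searcher.Get())
                {
                    double kelvin = Convert.ToDouble(zone["CurrentTemperature"]) / 10.0;
                    Console.WriteLine("Thermal zone: {0:0.0} C", kelvin - 273.15);
                }
            }
        }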

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run: import cPickle as pickle f = open("bigNetworkXGraph.pickle","rb") binary_data = f.read() # This part doesn't take long graph = pickle.loads(binary_data) # This takes ages How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.
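
    Two cheap things worth trying before hunting for a buffer setting, sketched below: let cPickle read straight from the file object instead of materialising a 1 GB string first, and disable the cyclic garbage collector while loading -- with millions of freshly created objects, the collector's periodic passes are often what dominates the load time:

        import cPickle as pickle
        import gc

        f = open("bigNetworkXGraph.pickle", "rb")
        gc.disable()                  # GC passes over millions of new objects are the usual culprit
        try:
            graph = pickle.load(f)    # stream from the file; no 1 GB intermediate string
        finally:
            gc.enable()
            f.close()

        # When re-saving, the highest binary protocol also tends to unpickle fastest:
        # pickle.dump(graph, open("bigNetworkXGraph.pickle", "wb"), pickle.HIGHEST_PROTOCOL)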

    Read the article

  • How does implementing multiple COM interfaces work in C++?

    - by Martin
    I am trying to understand this example code regarding Browser Helper Objects. Inside, the author implements a single class which exposes multiple interfaces (IObjectWithSite, IDispatch). His QueryInterface function performs the following: if(riid == IID_IUnknown) *ppv = static_cast<BHO*>(this); else if(riid == IID_IObjectWithSite) *ppv = static_cast<IObjectWithSite*>(this); else if (riid == IID_IDispatch) *ppv = static_cast<IDispatch*>(this); I have learned that from a C perspective, interface pointers are just pointers to VTables. So I take it to mean that C++ is capable of returning the VTable of any implemented interface using static_cast. Does this mean that a class constructed in this way has a bunch of VTables in memory (IObjectWithSite, IDispatch, etc)? What does C++ do with the name collisions on the different interfaces (they each have a QueryInterface, AddRef and Release function), can I implement different methods for each of these?
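
    Roughly, yes: with multiple inheritance the single object carries one vtable pointer per interface base (which is why QueryInterface uses static_cast to pick the right branch), while QueryInterface/AddRef/Release are written once in the derived class and fill the IUnknown slots of every branch. Plain multiple inheritance does not let you give each interface its own copy of those three methods; that needs an extra layer (nested helper classes, so-called tear-offs). A compressed sketch of such a class, with the interface-specific method bodies left out:

        #include <windows.h>
        #include <ocidl.h>    // IObjectWithSite
        #include <oaidl.h>    // IDispatch

        class BHO : public IObjectWithSite, public IDispatch
        {
            LONG m_ref;
        public:
            BHO() : m_ref(1) {}

            // One implementation serves the IUnknown slots of *both* vtable branches.
            STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
            {
                // For IUnknown, any single branch works; casting through one base
                // avoids the ambiguity of static_cast<IUnknown*>(this).
                if (riid == IID_IUnknown)             *ppv = static_cast<IObjectWithSite*>(this);
                else if (riid == IID_IObjectWithSite) *ppv = static_cast<IObjectWithSite*>(this);
                else if (riid == IID_IDispatch)       *ppv = static_cast<IDispatch*>(this);
                else { *ppv = NULL; return E_NOINTERFACE; }
                AddRef();
                return S_OK;
            }
            STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_ref); }
            STDMETHODIMP_(ULONG) Release()
            {
                ULONG n = InterlockedDecrement(&m_ref);
                if (n == 0) delete this;
                return n;
            }

            // The remaining IObjectWithSite and IDispatch members (SetSite, GetSite,
            // GetTypeInfoCount, Invoke, ...) keep distinct names, so only the
            // IUnknown trio is shared; they must be implemented before this class
            // can be instantiated.
        };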

    Read the article

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I then perform the following statement: ALTER TABLE x ORDER BY PK DESC i.e. I order the table in question by the primary key field in descending order. This has not given me any problems on my development machine (Windows with 3GB memory), but, even though I have run it successfully 3 times on the production Linux server (with 512MB RAM, producing the sorted table in about 6 minutes each time), the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement on such a large table, as I have read that a temporary table is created to perform the ALTER TABLE command. And, if it can be safely run, what should be the expected time for the alteration of the table? Thanks in advance, Tim
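
    For what it's worth, ALTER TABLE ... ORDER BY does rebuild the whole table into a temporary copy (sort-buffer- and disk-heavy, which is a plausible thing for a 512MB box to choke on), and the physical order is not guaranteed to survive later inserts and deletes anyway. A hedged sketch of the usual alternative -- leave the rows where they are and let the index supply the order at query time:

        -- The primary key is already an index, so a sorted read is cheap without
        -- physically reordering 4 million rows every week:
        SELECT *
        FROM x
        ORDER BY PK DESC
        LIMIT 100;

        -- If the weekly rebuild really is required, the sort/temporary-table work is
        -- governed by (MyISAM) buffers such as this one -- tune with care on 512MB:
        -- SET SESSION myisam_sort_buffer_size = 64 * 1024 * 1024;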

    Read the article

  • Why might different computers calculate different arithmetic results in VB.NET?

    - by Eyal
    I have some software written in VB.NET that performs a lot of calculations, mostly extracting JPEGs to bitmaps and computing calculations on the pixels like convolutions and matrix multiplication. Different computers are giving me different results despite having identical inputs. What might be the reason? Edit: I can't provide the algorithm because it's proprietary, but I can provide all the relevant operations: ULong \ ULong (truncating division); Bitmap.Load("filename.bmp") (load a bitmap into memory); Bitmap.GetPixel(Integer, Integer) (get a pixel's brightness); Double + Double; Double * Double; Math.Sqrt(Double); Math.PI; Math.Cos(Double); ULong - ULong; ULong * ULong; ULong << ULong; List.OrderBy(Of Double)(Func). Hmm... Is it possible that OrderBy is using a non-stable QuickSort and that QuickSort is using a random pivot? Edit: Just tested, nope. The sort is stable.

    Read the article
