Search Results

Search found 17140 results on 686 pages for 'records management'.


  • SQL 2008 Select Top 1000 and update the selected database drop-down

    - by CWinKY
    When you right-click a table in SQL Server 2008 and choose Select Top 1000 Rows, it opens a tab, writes the SQL and then executes it. This is okay; however, I often erase the SQL and reuse the same tab for other SQL statements. What annoys me is that I have to go to the database drop-down at the top of the window and change it to the database I'm actually in, because it says Master. How can I make SQL Server 2008 update the selected database for this tab automatically when I right-click a table and do Select Top 1000? On a side note, can I automatically hide the SELECT statement that it generates and just show the grid of results?


  • Autorelease vs. Release

    - by Sheehan Alam
    Given the two scenarios, which code is best practice and why?

    Autorelease:

      loginButton = [[[UIBarButtonItem alloc] initWithTitle:@"Login" style:UIBarButtonItemStylePlain target:self action:@selector(loginButtonClicked:)] autorelease];
      self.navigationItem.rightBarButtonItem = loginButton;

    or Release:

      loginButton = [[UIBarButtonItem alloc] initWithTitle:@"Login" style:UIBarButtonItemStylePlain target:self action:@selector(loginButtonClicked:)];
      self.navigationItem.rightBarButtonItem = loginButton;
      [loginButton release];


  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes, and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is

      void processData(vector<T> &vec) {
          vector<T> &aux = new vector<T>(vec.size()); // dynamically allocate memory
          // process data
      }
      // usage: processData(v)

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once at startup. I want that whenever I allocate a vector, an auxiliary buffer for my processData function is allocated automatically. I can do something similar with a template function:

      static void _processData(vector<T> &vec, vector<T> &aux) {
          // process data
      }

      template<size_t sz>
      void processData(vector<T> &vec) {
          static aux_buffer[sz];
          vector aux(vec.size(), aux_buffer); // use aux_buffer for the vector
          _processData(vec, aux);
      }
      // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?
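
    One pattern that fits this requirement is a scratch arena that is allocated once at startup, with processData borrowing its auxiliary buffer from it, so nothing is allocated on the real-time path. The sketch below is only an illustration under names of my own choosing (ScratchArena, and a processData that takes the arena explicitly); it is not code from the question, and real code would also round the offset up to alignof(T).

      #include <cassert>
      #include <cstddef>
      #include <vector>

      // Scratch arena: all memory is reserved once at startup, and
      // processData borrows its auxiliary buffer from it at run time
      // without touching the heap.
      class ScratchArena {
      public:
          explicit ScratchArena(std::size_t bytes) : storage_(bytes), offset_(0) {}

          void* allocate(std::size_t bytes) {
              assert(offset_ + bytes <= storage_.size() && "arena exhausted");
              void* p = storage_.data() + offset_;
              offset_ += bytes;                 // no alignment handling in this sketch
              return p;
          }
          void reset() { offset_ = 0; }         // recycle between frames/iterations

      private:
          std::vector<unsigned char> storage_;  // allocated once, at startup
          std::size_t offset_;
      };

      template <typename T>
      void processData(const std::vector<T>& input, ScratchArena& arena) {
          // Borrow an auxiliary buffer of input.size() elements from the arena
          // (T is assumed trivially constructible here).
          T* aux = static_cast<T*>(arena.allocate(input.size() * sizeof(T)));
          for (std::size_t i = 0; i < input.size(); ++i) {
              aux[i] = input[i];                // placeholder for the real processing
          }
      }

      int main() {
          ScratchArena arena(1 << 20);          // 1 MiB reserved up front
          std::vector<int> samples(1024, 42);
          processData(samples, arena);
          arena.reset();                        // ready for the next call
      }

    Compared with the template version, the maximum size lives in one place (the arena's constructor argument), so callers need no bookkeeping and nothing depends on a compile-time size parameter.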


  • overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, long-running server) that shows a constantly growing resident memory usage. I've been digging with the usual tools for the leak (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python and/or libc managed memory. Anyway, my question is more specific right now: Is there a tool to visualize resident memory usage and ideally show how it develops over time? I think the raw data is in /proc/$PID/smaps but I was hoping there's some tool that shows me a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time so that it's easier to see (literally) what's changing. I couldn't find anything though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?
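
    For reference, the raw numbers mentioned above are easy to sample directly: the sketch below (a standalone C++ snippet of my own, with an ad-hoc bucketing scheme) reads /proc/<pid>/smaps once and splits the resident set into heap, file-backed and anonymous mappings; running it periodically and plotting the three totals gives a rough picture of how usage develops over time.

      #include <fstream>
      #include <iostream>
      #include <sstream>
      #include <string>

      int main(int argc, char** argv) {
          std::string pid = argc > 1 ? argv[1] : "self";
          std::ifstream smaps("/proc/" + pid + "/smaps");
          std::string line, path;
          long heapKb = 0, fileKb = 0, anonKb = 0;

          while (std::getline(smaps, line)) {
              std::istringstream iss(line);
              std::string first;
              iss >> first;
              if (first.find('-') != std::string::npos) {
                  // Mapping header: "start-end perms offset dev inode [pathname]"
                  std::string perms, offset, dev, inode;
                  iss >> perms >> offset >> dev >> inode;
                  path.clear();
                  iss >> path;                         // empty for anonymous mappings
              } else if (first == "Rss:") {
                  long kb = 0;
                  iss >> kb;                           // resident kB of the current mapping
                  if (path == "[heap]")                     heapKb += kb;
                  else if (!path.empty() && path[0] == '/') fileKb += kb;
                  else                                      anonKb += kb;
              }
          }
          std::cout << "heap: " << heapKb << " kB, file-backed: " << fileKb
                    << " kB, anonymous/other: " << anonKb << " kB" << std::endl;
      }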


  • MMGR Questions, code use and thread-safety

    - by chadb
    1) Is MMGR thread safe?

    2) I was hoping someone could help me understand some code. I am looking at something where a macro is used, but I don't understand the macro. I know it contains a function call and an if check; however, the function is a void function. How does wrapping "(m_setOwner(__FILE__, __LINE__, __FUNCTION__), false)" ever change return types?

      #define someMacro (m_setOwner(__FILE__,__LINE__,__FUNCTION__),false) ? NULL : new ...
      void m_setOwner(const char *file, const unsigned int line, const char *func);

    3) What is the point of the reservoir?

    4) On line 770 ("void *operator new(size_t reportedSize)") there is the line "// ANSI says: allocation requests of 0 bytes will still return a valid value". Who/what is ANSI in this context? Do they mean the standards?

    5) This is more of a C++ standards question, but where does "reportedSize" come from for "void *operator new(size_t reportedSize)"?

    6) Is this the code that is actually doing the allocation needed? "au->actualAddress = malloc(au->actualSize);"
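
    On question 2, the comma operator is what makes that macro work, and it never changes any return type: the left operand (the void call) is evaluated for its side effect and discarded, the expression takes the value of the right operand (false), and the ?: therefore always selects the new-expression, whose pointer is the value of the whole macro. A self-contained sketch of the same trick (recordOwner and TRACKED_NEW are illustrative names, not MMGR's actual internals):

      #include <cstdio>

      // The left operand of the comma may be a void call; its value is
      // discarded and the expression takes the value of the right operand,
      // here the constant false. The ?: then always chooses the
      // new-expression, so the side effect (recording the call site) runs
      // first and the macro as a whole still evaluates to the pointer
      // returned by new.
      static void recordOwner(const char* file, unsigned line, const char* func) {
          std::printf("allocation requested at %s:%u (%s)\n", file, line, func);
      }

      #define TRACKED_NEW (recordOwner(__FILE__, __LINE__, __FUNCTION__), false) ? nullptr : new

      int main() {
          // Expands to: (recordOwner(...), false) ? nullptr : new int(42)
          int* p = TRACKED_NEW int(42);
          std::printf("*p = %d\n", *p);
          delete p;
      }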


  • SystemInformation.PowerStatus.BatteryLifeRemaining returns -1?

    - by dsrdakota
    In my code I have:

      Dim ps As PowerStatus = SystemInformation.PowerStatus
      Dim batteryTimeLeft As Integer = ps.BatteryLifeRemaining 'I have a problem here
      MsgBox("Time left on battery: " & CStr(batteryTimeLeft), vbInformation, "Info")

    PowerStatus.BatteryLifeRemaining always returns -1 with a battery present, whether the battery is being used or not. Why does this always return -1? I am currently using the MS .NET 4.0 Client Profile on VB.NET 2010 Express. I unplugged the power for my laptop to see if it makes a difference, and it doesn't; I also tried it plugged in. Help please?


  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving its processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed an adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses. So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as six-sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.


  • How do I free objects in C#?

    - by assassin
    Hi, can anyone please tell me how I can free objects in C#? For example, I have an object:

      Object obj1 = new Object();
      // Some code using obj1
      // Here I would like to free obj1, after it is no longer required; more importantly, its scope is the full run time of the program.

    Thanks for all your help.


  • Keeping Track of Dependent Third-party Library Releases

    - by Sonny
    I am building a web application that is dependent upon several third-party libraries. What is a good strategy for making sure that you're always using the most fully patched versions? A simple method would be to keep the versions written down and visit the websites at regular intervals, but I am looking for some way to get the information 'pushed' to me if possible. I figured that there might be others out there who have needed to do the same thing and have worked out a good solution. Here are a few libraries I am using: Zend Framework, jQuery, HTMLPurifier, Markdownify, InnovaStudio WYSIWYG Editor, Fancybox, MojoZoom.


  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system, where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Now, Event is a base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has 2 fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

      Event evt = EventKey(15, true);

    and I ship it to the handlers:

      EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to poll the events in a queue of type Event, how do I do that?

    - Dynamic pointers with reference counting sound like too big of a solution.
    - Making copies is more difficult than it sounds, since I'm only receiving a reference to a base type, therefore each time I would need to check the type of event, upcast to EventKey and then make a copy to store in a queue. Sounds like the only solution, but is unpleasant since I would need to know every single type of event and would have to check that for every event received - sounds like a bad plan.
    - I could allocate the events dynamically and then send around pointers to those events, and enqueue them in the array if wanted - but other than having reference counting, how would I be able to keep track of that memory?

    Do you know any way to implement a very light reference counter that wouldn't interfere with the user? What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa
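
    One approach that is often used for exactly this situation, sketched below with names of my own choosing (QueueingHandler) rather than the asker's classes: give the Event hierarchy a virtual clone(), so a handler can copy any event it receives through a base reference into its own queue without a chain of type checks, and let std::unique_ptr own the copies so no reference counting is needed.

      #include <iostream>
      #include <memory>
      #include <vector>

      struct Event {
          virtual ~Event() = default;
          virtual std::unique_ptr<Event> clone() const = 0;
      };

      struct EventKey : Event {
          unsigned char keyCode;
          bool isDown;
          EventKey(unsigned char key, bool down) : keyCode(key), isDown(down) {}
          std::unique_ptr<Event> clone() const override {
              return std::make_unique<EventKey>(*this);   // copies the derived object
          }
      };

      class QueueingHandler {
      public:
          void onEvent(const Event& event) {
              queue_.push_back(event.clone());            // no type checks, no slicing
          }
          std::size_t pending() const { return queue_.size(); }
      private:
          std::vector<std::unique_ptr<Event>> queue_;
      };

      int main() {
          QueueingHandler handler;
          EventKey key(15, true);
          handler.onEvent(key);                           // sent by reference, stored by copy
          std::cout << "queued events: " << handler.pending() << "\n";
      }

    The price is one virtual call and one allocation per queued event; if allocating on the event path is a concern, clone() could draw from a preallocated pool instead.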


  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory use didn't seem to fit with how much memory we should need "after" we're done initializing), and I end up with this report. Now what we can gather from this report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialisation, but now that it's free, I'd like for it to return to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialisation process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET, which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help!

    A few notes I forgot:
    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I convert it to .NET 4, recompile and run it in a .NET 4 pool.
    2) These are the results of a release build, so a debug build can not be the issue.
    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS; in webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!).


  • Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

    - by bdbaddog
    Greetings, I'm able to walk a process's memory map using logic like this:

      MEMORY_BASIC_INFORMATION mbi;
      void *lpAddress = (void*)0;
      while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
          fprintf(fptr, "Mem base:%-10x start:%-10x Size:%-10x Type:%-10x State:%-10x\n",
                  mbi.AllocationBase, mbi.BaseAddress, mbi.RegionSize, mbi.Type, mbi.State);
          lpAddress = (void *)((unsigned int)mbi.BaseAddress + (unsigned int)mbi.RegionSize);
      }

    I'd like to know if a given segment is used for static allocation, stack, and/or heap and/or something else. Is there any way to determine that?
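
    There is no single flag in MEMORY_BASIC_INFORMATION that says "stack" or "heap", but regions can be classified heuristically. The sketch below (my own illustration, not a complete answer) treats MEM_IMAGE regions as module code/static data, matches the current thread's stack by VirtualQuery-ing a local variable and comparing AllocationBase, and matches process heaps against the handles returned by GetProcessHeaps, relying on the implementation-specific fact that a heap handle is the base address of the heap:

      #include <windows.h>
      #include <cstdio>

      int main() {
          // Handles (and, in practice, base addresses) of the process heaps.
          HANDLE heaps[64];
          DWORD heapCount = GetProcessHeaps(64, heaps);

          // Allocation base of this thread's stack, found via a local variable.
          MEMORY_BASIC_INFORMATION stackInfo;
          int localVar;
          VirtualQuery(&localVar, &stackInfo, sizeof(stackInfo));
          void* stackBase = stackInfo.AllocationBase;

          MEMORY_BASIC_INFORMATION mbi;
          unsigned char* address = nullptr;
          while (VirtualQuery(address, &mbi, sizeof(mbi))) {
              if (mbi.State == MEM_COMMIT) {
                  const char* kind = "other/private";
                  char moduleName[MAX_PATH] = "";
                  if (mbi.Type == MEM_IMAGE &&
                      GetModuleFileNameA((HMODULE)mbi.AllocationBase, moduleName, MAX_PATH)) {
                      kind = moduleName;                  // code/static data of a module
                  } else if (mbi.AllocationBase == stackBase) {
                      kind = "current thread stack";
                  } else {
                      for (DWORD i = 0; i < heapCount; ++i)
                          if (mbi.AllocationBase == (void*)heaps[i]) kind = "process heap";
                  }
                  std::printf("%p  size %8zu  %s\n", mbi.BaseAddress, mbi.RegionSize, kind);
              }
              address = (unsigned char*)mbi.BaseAddress + mbi.RegionSize;
          }
      }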


  • If my application doesn't use a lot of memory, can I ignore viewDidUnload:?

    - by iPhoneToucher
    My iPhone app generally uses under 5MB of living memory and even in the most extreme conditions stays under 8MB. The iPhone 2G has 128MB of RAM and from what I've read an app should only expect to have 20-30MB to use. Given that I never expect to get anywhere near the memory limit, do I need to care about memory warnings and setting objects to nil in viewDidUnload:? The only way I see my app getting memory warnings is if something else on the phone is screwing with the memory, in which case the entire phone would be acting silly. I built my app without ever using viewDidUnload:, so there's more than a hundred classes that I'd need to inspect and add code to if I did need to implement it.


  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs. MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. what is the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
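
    The hybrid idea in the question is essentially a spill-to-disk buffer. The sketch below shows the shape of that design in plain C++ rather than .NET (SpillBuffer and the tiny threshold are illustrative names and values, and error handling is omitted): append to an in-memory buffer until a threshold is crossed, then move what has accumulated to a temporary file and keep appending there.

      #include <cstddef>
      #include <cstdio>
      #include <cstring>
      #include <vector>

      class SpillBuffer {
      public:
          explicit SpillBuffer(std::size_t thresholdBytes)
              : threshold_(thresholdBytes), file_(nullptr) {}

          ~SpillBuffer() { if (file_) std::fclose(file_); }

          void write(const unsigned char* data, std::size_t count) {
              if (!file_ && memory_.size() + count > threshold_) {
                  // Crossing the threshold: move what we have to disk.
                  file_ = std::tmpfile();
                  if (!memory_.empty())
                      std::fwrite(memory_.data(), 1, memory_.size(), file_);
                  memory_.clear();
                  memory_.shrink_to_fit();
              }
              if (file_)
                  std::fwrite(data, 1, count, file_);
              else
                  memory_.insert(memory_.end(), data, data + count);
          }

          bool spilledToDisk() const { return file_ != nullptr; }

      private:
          std::size_t threshold_;
          std::vector<unsigned char> memory_;
          std::FILE* file_;
      };

      int main() {
          SpillBuffer buffer(1024);                 // tiny threshold for the demo
          unsigned char packet[512];
          std::memset(packet, 0xAB, sizeof(packet));
          for (int i = 0; i < 4; ++i)
              buffer.write(packet, sizeof(packet)); // third write triggers the spill
          std::printf("spilled to disk: %s\n", buffer.spilledToDisk() ? "yes" : "no");
      }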


  • logic before dispatcher + controller?

    - by Spoonface
    I believe for a typical MVC web application the router / dispatcher routine is used to decide which controller is loaded, based primarily on the area requested in the URL by the user. However, in addition to checking the URL query string, I also like to use the dispatcher to check whether the user is currently logged in or not to decide which controller is loaded. For example, if they are logged in and request the login page, the dispatcher would load their account instead. But is this a fairly non-standard design? Would it violate MVC in any way? I only ask as the examples I've read through this weekend have had no major calculations performed before the dispatcher routine, and commonly check whether the user is logged in or not per controller, and then redirect where necessary. But to me it seems odd to redirect a logged-in user from the login area to the account area if you could just load the account controller in the first place? I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged-in users and similar session data?


  • Does committed memory go to physical RAM or reserve space in the paging file?

    - by Sil
    When I do VirtualAlloc with MEM_COMMIT, this "allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT:

    1) The description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)."
    2) I also read "Windows via C/C++", 5th edition, and this book says that committing memory means reserving space in the page file...

    The last two cases don't make sense to me. If you commit memory, doesn't that mean that you commit to physical storage (RAM), with the page file being there for swapping out currently unused pages of memory in case memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, then that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory, so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file, that gives 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described in point 2). What am I missing? Thanks.
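
    A short sketch of the reserve/commit distinction, which is what the two quotes are really describing: committing charges the system commit limit (which is backed by RAM plus the paging file combined), but a physical page frame is normally only assigned when a committed page is first touched, so a committed page does not occupy RAM and page-file space at the same time.

      #include <windows.h>
      #include <cstdio>

      int main() {
          const SIZE_T size = 64 * 1024 * 1024;     // 64 MiB

          // 1. Reserve: no commit charge, no physical pages, just address space.
          void* region = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);

          // 2. Commit part of it: the commit charge goes up, but no page frames are used yet.
          unsigned char* committed = (unsigned char*)VirtualAlloc(
              region, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);

          // 3. Touch the pages: only now does the working set (physical RAM) grow.
          for (SIZE_T i = 0; i < 1024 * 1024; i += 4096)
              committed[i] = 1;

          std::printf("reserved at %p, committed first MiB at %p\n", region, committed);
          VirtualFree(region, 0, MEM_RELEASE);
      }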


  • NHibernate auditing in disconnected mode

    - by Ciaran
    I'm developing an app with a Silverlight UI, transferring my domain objects over WCF and persisting them via NHibernate. I'm therefore working with NHibernate in a disconnected mode. I'm already using the NHibernate PreUpdate and PreInsert EventListeners to perform some metadata operations (updating Create/Update date, created/updated by etc) and they are working fine. I now have a requirement to perform data logging on some of my domain objects. So I will need to have an audit table that has a before-save and after-save state of certain entities. I had wanted to use the @event.Persister.OldState and @event.Persister.NewState to perform this logging, but because I am in a disconnected scenario (using different Sessions from when data is retrieved to when it is persisted), @event.Persister.OldState is null when I am saving my changes back to the database. How is anyone else doing data logging in a disconnected scenario with NHibernate?


  • EXC_BAD_ACCESS when simply casting a pointer in Obj-C

    - by AlexChilcott
    Hi all, frequent visitor but first post here on StackOverflow; I'm hoping that you guys might be able to help me out with this. I'm fairly new to Obj-C and Xcode, and I'm faced with this really... weird... problem. Googling hasn't turned up anything whatsoever. Basically, I get an EXC_BAD_ACCESS signal on a line that doesn't do any dereferencing or anything like that that I can see. Wondering if you guys have any idea where to look for this. I've found a workaround, but no idea why it works... The line the broken version barfs out on is the line:

      LevelEntity *le = entity;

    where I get my bad access signal. Here goes:

    THIS VERSION WORKS

      NSArray *contacts = [self.body getContacts];
      for (PhysicsContact *contact in contacts) {
          PhysicsBody *otherBody;
          if (contact.bodyA == self.body) {
              otherBody = contact.bodyB;
          }
          if (contact.bodyB == self.body) {
              otherBody = contact.bodyA;
          }
          id entity = [otherBody userData];
          if (entity != nil) {
              LevelEntity *le = entity;
              CGPoint point = [contact contactPointOnBody:otherBody];
          }
      }

    THIS VERSION DOESN'T WORK

      NSArray *contacts = [self.body getContacts];
      for (NSUInteger i = 0; i < [contacts count]; i++) {
          PhysicsContact *contact = [contacts objectAtIndex:i];
          PhysicsBody *otherBody;
          if (contact.bodyA == self.body) {
              otherBody = contact.bodyB;
          }
          if (contact.bodyB == self.body) {
              otherBody = contact.bodyA;
          }
          id entity = [otherBody userData];
          if (entity != nil) {
              LevelEntity *le = entity;
              CGPoint point = [contact contactPointOnBody:otherBody];
          }
      }

    Here, the only difference between the two examples is the way I enumerate through my array. In the first version (which works) I use for (... in ...), whereas in the second I use for (...; ...; ...). As far as I can see, these should be the same. This is seriously weirding me out. Does anyone have any similar experience or an idea what's going on here? Would be really great :) Cheers, Alex


  • NSXMLParser's delegate and memory leak

    - by dizzy_fingers
    Hello, I am using an NSXMLParser class in my program and I assign a delegate to it. This delegate, though, gets retained by the setDelegate: method, resulting in a minor, yet annoying :-), memory leak. I cannot release the delegate class after the setDelegate: call because the program will crash. Here is my code:

      self.parserDelegate = [[ParserDelegate alloc] init]; // retainCount: 1
      self.xmlParser = [[NSXMLParser alloc] initWithData:self.xmlData];
      [self.xmlParser setDelegate:self.parserDelegate]; // retainCount: 2
      [self.xmlParser parse];
      [self.xmlParser release];

    ParserDelegate is the delegate class. Of course, if I set 'self' as the delegate I will have no problem, but I would like to know if there is a way to use a different class as the delegate with no leaks. Thank you in advance.


  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data database, and this is the code I deemed suitable for this task:

      NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
      NSEntityDescription *entity;
      entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext];
      [fetchRequest setEntity:entity];
      NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
      [fetchRequest setPredicate:pred];
      NSError *err;
      NSArray *icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
      [fetchRequest release];
      if ([icd9s count] > 0) {
          for (int i = 0; i < [icd9s count]; i++) {
              NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
              NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
              if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) {
                  [pool release];
                  return [icd9s objectAtIndex:i];
              }
              [pool release];
          }
      }
      return nil;

    After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient way? Thanks in advance!


  • Is there any project estimator tool to give an estimate for web design/development work?

    - by jitendra
    Is there any project estimator tool to give an estimate for web design/development work? I don't have to calculate the price, I just want to calculate the estimated time for things like, for example:

    - Page creation (layout in XHTML)
    - CSS creation
    - Content creation (Word to HTML, including images on some pages)
    - Bulk PDF upload
    - PHP script for a form
    - Testing all pages

    I need something like:

      Items               Quantity   Time for each task (min.)   Estimated total
      PDF upload          x 30       2 min                       60 min
      Pages with images   x 30       15 min each                 60 min

    Is there any simple calculator powered by jQuery where we can add/remove custom items to calculate time, or any other free online/offline tool?


  • Simulate memory warnings from the code, possible?

    - by krasnyk
    I know I can simulate a memory warning on the simulator by selecting 'Simulate Memory Warning' from the drop-down menu of the iPhone Simulator. I can even make a hotkey for that. But this is not what I'd like to achieve. I'd like to do it from the code, by simply, let's say, doing it every 5 seconds. Is that possible?

