Search Results

Search found 23610 results on 945 pages for 'management studio'.

Page 420/945

  • Read the "Human friendly" text of a WebPage

    - by oidfrosty
    I need to read the page text as the end user sees it. For example, "&#x73;&#x74;&#x61;&#x63;&#x6B;&#x6F;&#x76;&#x65;&#x72;&#x66;&#x6C;&#x6F;&#x77;&#x2E;&#x63;&#x6F;&#x6D;" is displayed as "stackoverflow.com". The aim is to stop scripts and encoding tricks from evading my filters; the filtering will be done with a content filtering proxy. I was thinking about injecting a script into the page.
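
    As an illustration of the decoding step such a proxy would need (this sketch is not from the question, and it handles only numeric character references for ASCII code points, not named entities or full HTML parsing):

        #include <cstdlib>
        #include <iostream>
        #include <string>

        // Decode numeric character references such as "&#x73;" into plain characters.
        std::string decodeNumericEntities(const std::string& in) {
            std::string out;
            for (std::size_t i = 0; i < in.size();) {
                if (in.compare(i, 2, "&#") == 0 && i + 2 < in.size()) {
                    std::size_t end = in.find(';', i);
                    if (end != std::string::npos) {
                        bool hex = (in[i + 2] == 'x' || in[i + 2] == 'X');
                        std::size_t start = i + (hex ? 3 : 2);
                        std::string digits = in.substr(start, end - start);
                        long code = std::strtol(digits.c_str(), nullptr, hex ? 16 : 10);
                        if (code > 0 && code < 128) {   // ASCII only in this sketch
                            out += static_cast<char>(code);
                            i = end + 1;
                            continue;
                        }
                    }
                }
                out += in[i++];   // pass anything else through untouched
            }
            return out;
        }

        int main() {
            std::cout << decodeNumericEntities("&#x73;&#x74;&#x61;&#x63;&#x6B;") << "\n";  // prints "stack"
        }

    A real proxy would also have to handle text assembled by JavaScript at runtime, which is presumably why the question considers injecting a script into the page and reading the rendered result.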

    Read the article

  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions about office layout. At the moment, our company is basically one big office. When our developers really can't afford to be disturbed, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are undoubtedly the way to go. From Joel's article Private Offices Redux: Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive. Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide open spaces and still maintain good productivity. Or so it seems. (I should mention that many of them use cubicles, though.) What is your opinion on this? What does your company do? Is there some middle ground here? Some more related information on this matter: Private Offices Redux, The new Fog Creek office, A Field Guide to Developers, and the Gmail recruitment page. I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.

    Read the article

  • Does variable = null set it for garbage collection

    - by manyxcxi
    Help me settle a dispute with a coworker: Does setting a variable or collection to null in Java aid in garbage collection and reducing memory usage? If I have a long running program and each function may be iteratively called (potentially thousands of times): Does setting all the variables in it to null before returning a value to the parent function help reduce heap size/memory usage?

    Read the article

  • Objective-C ref count and autorelease

    - by turbovince
    Hey guys, suppose the following code:

        int main (int argc, const char * argv[]) {
            //[...]
            Rectangle* myRect = [[Rectangle alloc] init];
            Vector2* newOrigin = [[[Vector2 alloc] init] autorelease]; // ref count 1
            [newOrigin setX: 50.0f];
            [myRect setOrigin: newOrigin];        // ref count 2
            [myRect.origin setXY: 25.0f :100.0f]; // ref count goes to 3... why?
            [myRect release];
            [pool drain];
            return 0;
        }

    Rectangle's origin is declared as a (retain) synthesized property. Just wondering two things: Why does the ref count go to 3 when using the getter accessor of Rectangle's origin? Am I doing something wrong? With a ref count of 3, I don't understand how this snippet of code can avoid leaking. Calling release on myRect will make it go down to 2, since I call release on the origin in dealloc(). But then, when does autorelease take effect? Thanks!

    Read the article

  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer; I was paid good money and flown at the customer's expense to meet key employees. Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown out to meet him, and made sure everything worked and that he could safely be left in charge of the project. We finished on good terms after I complied with all my obligations, and they paid me everything they owed me. Some days ago, just out of curiosity, I visited their website to see how the data was continuing to be updated, and much to my dismay I discovered that the day after my contract finished, my system was "turned off" and ceased to feed data to the public website. Let's be clear: there is no issue of money or a broken contract here. They are fully within their rights to do whatever they want with my software. But it is an issue of bruised "programmer's ego". Should I feel bad about it? (I do.) Should I care, and check in with my customer to see if they need some help? Or is it none of my business?

    Read the article

  • UIViewController not released when view is dismissed

    - by Nelson Ko
    I have a main view, mainWindow, which presents a couple of buttons. Both buttons create a new UIViewController (mapViewController): one starts a game and the other resumes it. Both buttons are linked via the storyboard to the same view. They are segued to modal views, as I'm not using a navigation controller. So in a typical game, if a person starts a game but then goes back to the main menu, he triggers

        [self dismissViewControllerAnimated:YES completion:nil];

    to return to the main menu. I would assume the view controller is released at this point. The user then resumes the game with the second button, which opens another instance of mapViewController. What is happening, though, is that some touch events trigger methods on the original instance (and write status updates to it, which are therefore invisible in the current view). When I put a breakpoint in the mapViewController code, I can see the instance will be one or the other (one of which should have been released). I have tried calling a delegate so that mainWindow clears the view:

        [self.delegate clearMapView];

    where in mainWindow:

        - (void)clearMapView {
            gameWindow = nil;
        }

    I have also tried self.view = nil; in mapViewController. The mapViewController code contains MVC code where the model is static, and I wonder if this may prevent ARC from releasing the view. The model.m contains:

        static CanShieldModel *sharedInstance;

        + (CanShieldModel *)sharedModel {
            @synchronized(self) {
                if (!sharedInstance) sharedInstance = [[CanShieldModel alloc] init];
                return sharedInstance;
            }
            return sharedInstance;
        }

    Another post which may have a lead, but which hasn't helped so far, is "UIViewController not being released when popped". I have this in viewDidLoad:

        // checks to see if the app goes inactive - saves
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(resignActive)
                                                     name:UIApplicationWillResignActiveNotification
                                                   object:nil];

    with the corresponding call in viewDidUnload:

        [[NSNotificationCenter defaultCenter] removeObserver:self
                                                        name:UIApplicationWillResignActiveNotification
                                                      object:nil];

    Does anyone have any suggestions? EDIT:

        - (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
            NSString *identifier = segue.identifier;
            if ([identifier isEqualToString:@"Start Game"]) {
                gameWindow = (ViewController *)[segue destinationViewController];
                gameWindow.newgame = -1;
                gameWindow.delegate = self;
            } else if ([identifier isEqualToString:@"Resume Game"]) {
                gameWindow = (ViewController *)[segue destinationViewController];
                gameWindow.newgame = 0;
                gameWindow.delegate = self;
            }
        }

    Read the article

  • folder structure for project documentation

    - by Qiulang
    Hi all, I have seen some questions raised about the folder structure for source code, but I have never seen the same question asked about the folder structure for project documentation. I googled it and still did not find many articles that talk about it. Here is one: http://www.projectperfect.com.au/downloads/Info/info_project_folder_structure.pdf To quote some of its words: "There are two broad approaches: Organize by phase so that each top directory is a phase. For example, you might have directories for Feasibility, Business Analysis, Design etc. or whatever your phases are called. Organize by function so that the top directory level are functions. For example, Risks, Requirements, Scope, Change Control, Development. Most times a mix of both are used..." So, any thoughts on this? I believe this is also an important issue!

    Read the article

  • Will it use more and more memory if I keep drawing on the UIView?

    - by Tattat
    This is my drawRect: method:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
        CGContextMoveToPoint(context, x1, y1);
        CGContextAddLineToPoint(context, x2, y2);
        CGContextStrokePath(context);

    If I run this code a thousand times or more, my UIView will have many lines on it. Will it use more memory than having just one line on it? Er... I mean, will the program remember the lines I draw, or once the lines are drawn, will it keep no information about them in memory? Thanks.

    Read the article

  • release viewcontroller after presenting modally

    - by Jonathan
    I was watching the CS193P Stanford course on iTunes, and in one of the lectures a demo was given where it was said you could present a view controller modally and then release it. Roughly like this (I know this isn't exact, but I'm on my PC at the moment):

        [self.view presentcontentmodally:myVC];
        [myVC release];

    However, this seems to produce problems. If I put an NSLog(@"%d", [myVC retainCount]) between those two lines, it returns 2, implying it is OK to release. However, when I dismiss myVC, the app crashes. There is nothing in the NSLog output, and the debugger won't show where it stopped. But I used malloc-history, or something that some blog said would help, and found that the crash involved myVC. So should I be releasing myVC? (Also, when the modal VC has been dismissed, should the app's memory usage go back to what it was before the modal VC was presented?)

    Read the article

  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this:

        // Baseclass.m
        @implementation Baseclass
        ...
        - (void)dealloc {
            [self _removeAllData];
            [aVariableThatBelongsToMe release];
            [anotherVariableThatBelongsToMe release];
            [super dealloc];
        }
        ...
        @end

    This works great. My problem is that when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc], I had zombies running through the code that were activated when I called the [self _removeAllData] method.

        // Subclass.m
        @implementation Subclass
        ...
        - (void)dealloc {
            [super dealloc];
            [someObjectUsedInTheRemoveAllDataMethod release];
        }
        ...
        @end

    This works great, and it didn't require me to refactor any code. My question is this: is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone, if that matters.

    Read the article

  • WithEvents LinkedList: is it impossible?

    - by serhio
    What is the optimal approach to a WithEvents collection in VB.NET? Do you have any remarks on the code below (skipping the Nothing checks)? The problem is that when I obtain the LinkedListNode(Of Foo) in a For Each block, I can set myNode.Value = something, and that is a handler leak... Could I override FooCollection's GetEnumerator in this case? No :( , because LinkedListNode(Of T) is a NotInheritable class.

        Class Foo
            Public Event SelectedChanged As EventHandler
        End Class

        Class FooCollection
            Inherits LinkedList(Of Foo)

            Public Event SelectedChanged As EventHandler

            Protected Overloads Sub AddFirst(ByVal item As Foo)
                AddHandler item.SelectedChanged, AddressOf OnSelectedChanged
                MyBase.AddFirst(item)
            End Sub

            Protected Overloads Sub AddLast(ByVal item As Foo)
                AddHandler item.SelectedChanged, AddressOf OnSelectedChanged
                MyBase.AddLast(item)
            End Sub

            ' ------------------- '
            Protected Overloads Sub RemoveFirst()
                RemoveHandler MyBase.First.Value.SelectedChanged, _
                    AddressOf OnSelectedChanged
                MyBase.RemoveFirst()
            End Sub

            Protected Overloads Sub RemoveLast(ByVal item As Foo)
                RemoveHandler MyBase.Last.Value.SelectedChanged, _
                    AddressOf OnSelectedChanged
                MyBase.RemoveLast()
            End Sub

            ' ------------------- '
            Protected Sub OnSelectedChanged(ByVal sender As Object, ByVal e As EventArgs)
                RaiseEvent SelectedChanged(sender, e)
            End Sub
        End Class

    Read the article

  • Making sense of the Notification Watcher source (Objective-C)

    - by Chris
    From the Notification Watcher source:

        - (void)selectNotification:(NSNotification *)aNotification
        {
            id sender = [aNotification object];

            [selectedDistNotification release];
            selectedDistNotification = nil;
            [selectedWSNotification release];
            selectedWSNotification = nil;

            NSNotification **targetVar;
            NSArray **targetList;
            if (sender == distNotificationList) {
                targetVar = &selectedDistNotification;
                targetList = &distNotifications;
            } else {
                targetVar = &selectedWSNotification;
                targetList = &wsNotifications;
            }

            if ([sender selectedRow] != -1) {
                [*targetVar autorelease];
                *targetVar = [[*targetList objectAtIndex:[sender selectedRow]] retain];
            }

            if (*targetVar == nil) {
                [objectText setStringValue:@""];
            } else {
                id obj = [*targetVar object];
                NSMutableAttributedString *objStr = nil;
                if (obj == nil) {
                    NSFont *aFont = [objectText font];
                    NSDictionary *attrDict = italicAttributesForFont(aFont);
                    objStr = [[NSMutableAttributedString alloc] initWithString:@"(null)"
                                                                    attributes:attrDict];
                } else {
                    /* Line 1 */
                    objStr = [[NSMutableAttributedString alloc] initWithString:
                                 [NSString stringWithFormat:@" (%@)", [obj className]]];
                    [objStr addAttributes:italicAttributesForFont([objectText font])
                                    range:NSMakeRange(1, [[obj className] length] + 2)];
                    if ([obj isKindOfClass:[NSString class]]) {
                        [objStr replaceCharactersInRange:NSMakeRange(0, 0) withString:obj];
                    } else if ([obj respondsToSelector:@selector(stringValue)]) {
                        [objStr replaceCharactersInRange:NSMakeRange(0, 0)
                                              withString:[obj performSelector:@selector(stringValue)]];
                    } else {
                        // Remove the space since we have no value to display
                        [objStr replaceCharactersInRange:NSMakeRange(0, 1) withString:@""];
                    }
                }
                [objectText setObjectValue:objStr];
                /* LINE 2 */
                [objStr release];
            }

            [userInfoList reloadData];
        }

    Over at /* LINE 2 */, objStr is being released. Is this because we assigned it with alloc at /* Line 1 */? Also, why is /* Line 1 */ not written as objStr = [NSMutableAttributedString* initWithString:@"(null)" attributes:attrDict]? If I create a new string like (NSString*) str = [NSString initWithString:@"test"]; ... str = @"another string"; would I have to release str? Or is this wrong, and if I do that, do I have to use [[NSString alloc] initWithString:@"test"]? Why isn't the pointer symbol used, as in [[NSString* alloc] ...]? Thanks

    Read the article

  • P-invoke call fails if too much memory is assigned beforehand

    - by RandomEngy
    I've got a P/Invoke call to an unmanaged DLL that was failing in my WPF app but not in a simple starter WPF app. I tried to figure out what the problem was, but eventually came to the conclusion that if I allocate too much memory before making the call, the call fails. I had two separate blocks of code, both of which would succeed on their own, but which would cause failure if both were run. (They had nothing to do with what the P/Invoke call is trying to do.) What kind of problem in the unmanaged library would cause an issue like this? I thought the managed and unmanaged heaps were supposed to be automatically separated. As far as I can tell, the crash is happening in a secondary DLL that is dynamically loaded by the one I P/Invoke into. Could that have something to do with it?

    Read the article

  • Question about unions and heap-allocated memory

    - by Dennis Miller
    I was trying to use a union so I could update the fields in one thread and then read allFields in another thread. In the actual system, I have mutexes to make sure everything is safe. The problem is with fieldB: before I had to change it, fieldB was declared like fieldA and fieldC. However, due to a third-party driver, fieldB must be aligned on a page boundary. When I changed fieldB to be allocated with valloc, I ran into problems. Questions: 1) Is there a way to statically declare fieldB aligned on a page boundary, basically doing the same thing as valloc but on the stack? 2) Is it possible to have a union when fieldB, or any field, is allocated on the heap? I'm not sure if that is even legal. Here's a simple test program I was experimenting with. It doesn't work unless you declare fieldB like fieldA and fieldC and make the obvious changes to the public methods.

        #include <iostream>
        #include <stdlib.h>
        #include <string.h>
        #include <stdio.h>

        class Test
        {
        public:
            Test(void)
            {
                // fieldB must be aligned to a page boundary.
                // Is there a way to do this on the stack???
                this->field.fieldB = (unsigned char*) valloc(10);
            };

            // I know this is bad; this class is being treated like
            // a global structure. It is self-contained in another class.
            unsigned char* PointerToFieldA(void) { return &this->field.fieldA[0]; }
            unsigned char* PointerToFieldB(void) { return this->field.fieldB; }
            unsigned char* PointerToFieldC(void) { return &this->field.fieldC[0]; }
            unsigned char* PointerToAllFields(void) { return &this->allFields[0]; }

        private:
            // Is this union possible with fieldB being allocated on the heap?
            union {
                struct {
                    unsigned char fieldA[10];
                    // This field has to be aligned to a page boundary.
                    // Is there a way to declare it on the stack?
                    unsigned char* fieldB;
                    unsigned char fieldC[10];
                } field;
                unsigned char allFields[30];
            };
        };

        int main()
        {
            Test test;

            strncpy((char*) test.PointerToFieldA(), "0123456789", 10);
            strncpy((char*) test.PointerToFieldB(), "1234567890", 10);
            strncpy((char*) test.PointerToFieldC(), "2345678901", 10);

            char dummy[11];
            dummy[10] = '\0';
            strncpy(dummy, (char*) test.PointerToFieldA(), 10);
            printf("%s\n", dummy);
            strncpy(dummy, (char*) test.PointerToFieldB(), 10);
            printf("%s\n", dummy);
            strncpy(dummy, (char*) test.PointerToFieldC(), 10);
            printf("%s\n", dummy);

            char allFields[31];
            allFields[30] = '\0';
            strncpy(allFields, (char*) test.PointerToAllFields(), 30);
            printf("%s\n", allFields);

            return 0;
        }
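
    A hedged note on question 1: since C++11, alignas can give a statically declared array page alignment, so valloc is not needed for that part (the 4096-byte page size below is an assumption). Be aware, though, that giving one member stricter alignment inside the struct inserts padding, so the allFields overlay would no longer map onto all three fields the way the packed version does.

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>

        constexpr std::size_t kPageSize = 4096;                     // assumed page size
        alignas(kPageSize) static unsigned char fieldBStorage[10];  // page-aligned, no valloc

        int main() {
            // Prints 0, confirming the buffer starts on a page boundary.
            std::printf("offset into page = %zu\n",
                        static_cast<std::size_t>(
                            reinterpret_cast<std::uintptr_t>(fieldBStorage) % kPageSize));
            return 0;
        }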

    Read the article

  • overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, a long-running server) that shows constantly growing resident memory usage. I've been digging for the leak with the usual tools (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python- and/or libc-managed memory. Anyway, my question is more specific right now: is there a tool to visualize resident memory usage and, ideally, show how it develops over time? I think the raw data is in /proc/$PID/smaps, but I was hoping there's some tool that shows me a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time, so that it's easier to see (literally) what's changing. I couldn't find anything, though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?
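
    For what it's worth, the raw numbers really are in /proc/$PID/smaps, and a rough sketch of pulling the file-backed vs. anonymous Rss split out of it is below (illustrative only, shown in C++, with no attempt at complete smaps parsing). Sampling this periodically and plotting the two totals gives the over-time view the question asks about.

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main(int argc, char** argv) {
            std::string pid = (argc > 1) ? argv[1] : "self";
            std::ifstream smaps("/proc/" + pid + "/smaps");
            if (!smaps) { std::cerr << "cannot open smaps\n"; return 1; }

            long fileRssKb = 0, anonRssKb = 0;
            bool fileBacked = false;
            std::string line;
            while (std::getline(smaps, line)) {
                std::istringstream iss(line);
                std::string first;
                iss >> first;
                if (!first.empty() && first.find('-') != std::string::npos && first.back() != ':') {
                    // Mapping header, e.g. "7f2c...-7f2d... r-xp 00000000 08:01 123 /usr/lib/libc.so"
                    std::string perms, offset, dev, inode, path;
                    iss >> perms >> offset >> dev >> inode;
                    std::getline(iss, path);
                    fileBacked = path.find('/') != std::string::npos;
                } else if (first == "Rss:") {
                    long kb = 0;
                    iss >> kb;
                    (fileBacked ? fileRssKb : anonRssKb) += kb;  // attribute Rss to the current mapping type
                }
            }
            std::cout << "file-backed Rss: " << fileRssKb << " kB, "
                      << "anonymous Rss: " << anonRssKb << " kB\n";
            return 0;
        }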

    Read the article

  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode, built with VS2008/C++. For performance reasons we will need to cache hundreds of 2-3 MB blobs of data in RAM, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it all stays on disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of doing it. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references implying that memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
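
    One hedged sketch of the direct route (not from the question; sizes are illustrative): VirtualLock pins committed pages into the process working set so the memory manager will not page them out, but for locks of this scale the default working set quota is far too small, so it has to be raised first with SetProcessWorkingSetSize. A memory-mapped file by itself, by contrast, is still pageable; its pages just go back to the backing file rather than the pagefile.

        #include <windows.h>
        #include <cstdio>

        int main() {
            const SIZE_T size = 512ull * 1024 * 1024;  // amount of cache to pin (illustrative)

            // Raise the working set limits so the lock can succeed.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          size + 64ull * 1024 * 1024,
                                          size + 128ull * 1024 * 1024)) {
                std::printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
                return 1;
            }

            void* block = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (block != NULL && VirtualLock(block, size)) {
                std::printf("Locked %llu bytes of physical memory\n",
                            static_cast<unsigned long long>(size));
                // ... fill 'block' with the cached blobs and use it here ...
                VirtualUnlock(block, size);
            } else {
                std::printf("Allocation or lock failed: %lu\n", GetLastError());
            }
            if (block != NULL) VirtualFree(block, 0, MEM_RELEASE);
            return 0;
        }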

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (the total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process the 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require contiguous address space for its internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
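
    For what it's worth, a sketch of the hybrid threshold idea described above, written here in C++ rather than C# (the class name and threshold are illustrative, and only the write path is shown): buffer packets in memory until a size threshold is crossed, then spill everything to a temporary file and keep appending there.

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        class SpillBuffer {
        public:
            explicit SpillBuffer(std::size_t thresholdBytes) : threshold_(thresholdBytes) {}
            ~SpillBuffer() { if (file_) std::fclose(file_); }

            void write(const unsigned char* data, std::size_t count) {
                if (!file_ && memory_.size() + count > threshold_) {
                    // Crossed the threshold: move what we have to disk and keep going there.
                    file_ = std::tmpfile();
                    if (!memory_.empty())
                        std::fwrite(memory_.data(), 1, memory_.size(), file_);
                    memory_.clear();
                    memory_.shrink_to_fit();   // hand the RAM back
                }
                if (file_)
                    std::fwrite(data, 1, count, file_);
                else
                    memory_.insert(memory_.end(), data, data + count);
            }

        private:
            std::size_t threshold_;
            std::vector<unsigned char> memory_;
            std::FILE* file_ = nullptr;
        };

        // usage: SpillBuffer buf(500u * 1024 * 1024); buf.write(packet, packetLen);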

    Read the article

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function that gets an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say I'm using a vector that uses statically preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec) {
            vector<T> aux(vec.size()); // dynamically allocates memory
            // process data
        }
        // usage: processData(v)

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once, at startup. I want that whenever I allocate a vector, the auxiliary buffer for my processData function is automatically allocated as well. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux) {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec) {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer); // use aux_buffer as the vector's storage
            _processData(vec, aux);
        }
        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything because I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs for this problem?
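
    One hedged alternative sketch (the names are illustrative, not from the question): move the auxiliary buffer into a small processor object that is sized once at startup and reused for every call, so the real-time path performs no allocation and no template parameter is needed at each call site.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        template <typename T>
        class DataProcessor {
        public:
            explicit DataProcessor(std::size_t maxSize) : aux_(maxSize) {}  // one-time allocation

            void processData(std::vector<T>& vec) {
                // Stage the input in the preallocated buffer; nothing is allocated
                // here as long as vec.size() <= aux_.size().
                std::copy(vec.begin(), vec.end(), aux_.begin());
                // ... the real processing would work on aux_ and vec here ...
            }

        private:
            std::vector<T> aux_;  // preallocated auxiliary buffer
        };

        // usage:
        //   DataProcessor<float> proc(maxElementCount);  // at startup
        //   proc.processData(v);                         // in the real-time path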

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Event is a base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to poll the events in a queue of type Event, how do I do that?

    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is more difficult than it sounds, since I only receive a reference to the base type; each time, I would need to check the type of the event, downcast to EventKey, and then make a copy to store in a queue. That sounds like the only solution, but it is unpleasant, since I would need to know every single type of event and check it for every event received - sounds like a bad plan.
    - I could allocate the events dynamically and then send around pointers to those events, enqueuing them if wanted - but other than using reference counting, how would I be able to keep track of that memory?

    Do you know any way to implement a very light reference counter that wouldn't interfere with the user? What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa
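
    A hedged sketch of the "very light reference counter" (illustrative names, not the asker's real classes): std::shared_ptr (boost::shared_ptr or std::tr1::shared_ptr on pre-C++11 toolchains) already does exactly this. Allocating each event once and passing it around as shared_ptr<const Event> lets any handler queue it for later polling, frees it automatically when the last reference is dropped, and also avoids the slicing that Event evt = EventKey(15, true); performs.

        #include <cstdint>
        #include <iostream>
        #include <memory>
        #include <queue>

        struct Event {
            virtual ~Event() = default;
        };

        struct EventKey : Event {
            EventKey(std::uint8_t key, bool down) : keyCode(key), isDown(down) {}
            std::uint8_t keyCode;
            bool isDown;
        };

        using EventPtr = std::shared_ptr<const Event>;  // the reference-counted handle

        struct IHandler {
            virtual ~IHandler() = default;
            virtual void onEvent(const EventPtr& event) = 0;
        };

        struct QueueingHandler : IHandler {
            std::queue<EventPtr> pending;  // polled later; keeps the events alive
            void onEvent(const EventPtr& event) override { pending.push(event); }
        };

        int main() {
            QueueingHandler handler;
            handler.onEvent(std::make_shared<EventKey>(15, true));  // no manual ref counting

            // Later, in the handler's own update loop:
            while (!handler.pending.empty()) {
                auto key = std::dynamic_pointer_cast<const EventKey>(handler.pending.front());
                if (key)
                    std::cout << "key " << int(key->keyCode)
                              << (key->isDown ? " down\n" : " up\n");
                handler.pending.pop();
            }
            return 0;
        }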

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid round trips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much memory we should need after we're done initializing) and ended up with this report. Now, what we can gather from this report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it to return to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, and to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large object heap, I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help! A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and the same if I run it in a .NET 4 pool, convert it to .NET 4, and recompile.
    2) These are the results of a release build, so a debug build cannot be the issue.
    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).

    Read the article
