Search Results

Search found 22546 results on 902 pages for 'mode management'.


  • How to hide/show controls in UITableViewCell when it goes in/out of editing mode?

    - by Aleksandar Vacic
    I have a few custom controls (image views) added programmatically to a table cell. I want to hide them when the table view goes into editing mode and show them again when the view comes out of editing mode. I'm not using UITableViewCell subclasses; the controls are added in the tableView:cellForRowAtIndexPath: method. When and where should I do the hide/show? I'm also wondering whether this is even possible without subclassing (where I could do this in layoutSubviews)...

    Read the article

  • Can Hudson branch promotion be based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool, and it allows one project's build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project? Specifically, project "stable_deploy" should only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times in a row. Until then, it's still in integration mode. IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.

    Here's some background: the TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch. Our Hudson projects look like this:

      - test - Only for testing a build; zero user visibility.
      - integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
      - integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
      - stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
      - stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".
      - And so on... it continues with "release candidate" and then "release".

    Read the article

  • How to make Flash 'play well with others'?

    - by Sensei James
    What up fam. So this isn't a question asking about memory management schemes; for those of you who may not know, the Flash Virtual Machine relies on garbage collection, using reference counting plus mark and sweep (for good coverage of these topics, check out Grant Skinner's article and presentation). And yes, Flash also provides the "delete" operator, which can (unfortunately only) be used to remove the properties of dynamic objects. What I want to know is how to make it so that Flash programs don't continue to consume CPU and memory while running in the background (save loading content or communicating remotely, for example). The motivation for this question comes in part from Apple's ban on cross-compiled applications (in its SDK 4) on the grounds that they do not behave as predicted with the multitasking feature central to iPhone OS 4. My intention is not only to make Flash programs that will 'pass muster' as far as multitasking in iPhone OS 4, but also to simply make better (behaving) Flash programs. Put another way, how might a Flash application mimic the multitasking feature of iPhone OS 4? Does the Flash API provide the means for a developer to put their applications to 'sleep' while other programs run, and then to 'awaken' them just as quickly? In our own program, we might do something as crude as detecting when the user has been idle (no mouse motion or key press) for, say, four seconds:

        var idle_id:uint = setInterval(pause_program, 4000); // setInterval takes the closure first, then the delay
        var current_movie_clip:MovieClip;
        var current_frame:uint;
        ...
        // on mouse move or key press...
        clearInterval(idle_id);
        idle_id = setInterval(pause_program, 4000);
        ...
        function pause_program():void {
            // a timer callback gets no event argument, so remember the main timeline directly
            current_movie_clip = MovieClip(root);
            current_frame = current_movie_clip.currentFrame;
            MovieClip(root).gotoAndStop("program_pause_screen");
        }

        // on the program pause screen
        resume_button.addEventListener(MouseEvent.CLICK, resume_program);
        function resume_program(event:MouseEvent):void {
            current_movie_clip.gotoAndPlay(current_frame);
        }

    If that's the right idea, what's the best way to detect that an application should be shelved? And, more importantly, is it possible for Flash Player to detect that some of its running programs are idle, and to similarly shelve them until the user performs an action to resume them? (Please feel free to answer as much or as little of the many questions I've posed.)

    Read the article

  • Cannot access drive in Windows 7 after scandisk lockup, but can in safe mode....

    - by Matt Thompson
    I ran scandisk on my external USB drive due to the inability to delete a few files. Windows asked me if I wanted to unmount the drive before the scan, warning me that it would be unusable until the scan was finished, and I said yes. During the scan, my machine locked up, and I was forced to reboot. When it came back up, I was unable to access the drive, getting an error that "L: is not accessible, access is denied". Computer Management sees the drive, and shows the proper amount of disk space filled. I booted into safe mode, and can access the drive with no problems, and I noticed that in Explorer all the folders have locks on them. I booted back into Windows, but still could not access the drive, getting the same error as above. However, if I right click on the drive, select Properties, and go to Customize, in the folder pictures area, I select Choose File, and a window opens up that shows the root of the directory, with all the folders able to be accessed, but again, the icon is the folder icon with a lock on it. I can even copy files from the drive to another. So the files are not gone, and Windows can obviously access the drive no matter what it thinks, so there has to be a problem with the flag Windows put on the drive when it ran the original scan that failed. I was able to run a scan in safe mode with no problems, and also in Windows; in Windows, I received the cannot-access error the first time I ran scandisk on it, but if I try again, it works fine. Any ideas on how to clear the flag that Windows set, so I can access the drive normally again?

    Read the article

  • DOM memory issue with IE8 (inserting lots of JSON data)

    - by okie.floyd
    I am developing a small web utility that displays some data from some database tables. I have the utility running fine on FF, Safari, Chrome..., but the memory management on IE8 is horrendous. The largest JSON request I do will return information to create around 5,000 or so rows in a table within the browser (3 columns in the table). I'm using jQuery to get the data (via getJSON). To remove the old/existing table, I'm just doing a $('#my_table_tbody').empty(). To add the new info to the table, within the getJSON callback, I am just appending each table row that I am creating to a variable, and then once I have them all, I am using $('#my_table_tbody').append(myVar) to add it to the existing tbody. I don't add the table rows as they are created because that seems to be a lot slower than just adding them all at once. Does anyone have any recommendation on what someone should do who is trying to add thousands of rows of data to the DOM? I would like to stay away from pagination, but I'm wondering if I don't have a choice.

    Update 1: Here is the code I was trying after the innerHTML suggestion:

        /* Assuming a div called 'main_area' holds the table */
        document.getElementById('main_area').innerHTML = '';
        $.getJSON("my_server", {my: JSON, args: are, in: here}, function(j) {
            var mylength = j.length;
            var k = 0;
            var tmpText = '';
            tmpText += ''; /* Add the table, thead stuff, and opening tbody tag here */
            for (k = mylength - 1; k >= 0; k--) { // the loop condition must be k >= 0, not k = 0
                tmpText += '<tr class="' + j[k].row_class + '">' +
                           '<td class="col1_class">' + j[k].col1 + '</td>' +
                           '<td class="col2_class">' + j[k].col2 + '</td>' +
                           '<td class="col3_class">' + j[k].col3 + '</td>' +
                           '</tr>';
            }
            document.getElementById('main_area').innerHTML = tmpText;
        });

    That is the gist of it. I've also tried using just a $.get request, and having the server send the formatted HTML, and just setting that in the innerHTML (i.e. document.getElementById('main_area').innerHTML = j;). Thanks for all of the replies. I'm floored with the fact that you all are willing to help.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of Sql Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of Sql Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states, hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream-like class that works with Sql Server 2005?

    Read the article

  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Has Lost His Passion?

    - by skyflyer
    It seems this question is not programming related, but there are a lot of similar questions here, so please bear with me! By the way, I am a programmer, and my team is in charge of a software project. And SO is the only place that has solved a lot of thorny troubles for me. THANK YOU GUYS! I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us a lot of useful suggestions concerning his work, like design, and I believed he was a smart guy; I believe he still is smart, by the way. One year later, however, he seemed to lose his passion: he was fooling around every day, did not care about his work any more, and produced poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. For me this is horrible; we have to keep abreast of new technology, otherwise we will be left behind. Since we were just coworkers, I did not care about it too much except to mention my thoughts several times. But last month we assembled a new group and were assigned a very important project. And I am the team leader, sadly! My boss gave me a lot of support and expectation as well. I did a pretty good job before and I am very optimistic about our future. But as a team, if my team does not work hard, we will be doomed to failure no matter how hard I work and push. In order to revitalize his passion, I tried a couple of things, like talking to him about my concerns and my boss's anger. I offered him a task which is quite new to him. I even persuaded my boss to give him a new incentive package. But all of them hit a wall; his reaction was just that he did not care. Even worse, he did not want to talk about his situation. I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it works, I cannot so quickly change myself from friend and coworker into manager. As a novice in management, I am really overwhelmed! I do not want to get him fired; we are friends and I do not want to see a member of my team fired. What can I do? Thank you guys!

    Read the article

  • SplitViewController portrait mode - top button keeps shifting to the right.

    - by nishantcm
    Hi, I am using SplitViewController on the iPad. On a button click from the detail view, I open a modal view which is in full-screen mode. Whenever I dismiss the modal view, the button which displays the table view in portrait mode shifts to the right. If I continue the process of opening the modal view and dismissing it, it keeps moving to the right until it disappears off the right of the screen. Any idea why this is happening?

    Read the article

  • Testing for existence using SELECT WHERE HAVING and NOT HAVING in a grouped subset

    - by IanC
    I have data on which I need to count +1 if a particular condition exists or another condition doesn't exist. I'm using SQL Server 2008. I shred the following simplified sample XML into a temp table and validate it:

        <product type="1">
          <param type="1">
            <item mode="0" weight="1" />
          </param>
          <param type="2">
            <item mode="1" weight="1" />
            <item mode="0" weight="0.1" />
          </param>
          <param type="3">
            <item mode="1" weight="0.75" />
            <item mode="1" weight="0.25" />
          </param>
        </product>

    The validation of concern is the following rule: for each product type, for each param type, mode may be 0 & (1 || 2). In other words, there may be 0(s), but then 1s or 2s are required, or there may be only 1(s) or 2(s). There cannot be only 0s, and there cannot be both 1s and 2s. The only part I haven't figured out is how to detect if there are only 0s. This seems like a "not having" problem. The validation code (for this part):

        WITH t1 AS (
            SELECT SUM(t.ParamWeight) AS S, COUNT(1) AS C,
                   t.ProductTypeID, t.ParamTypeID, t.Mode
            FROM @t AS t
            GROUP BY t.ProductTypeID, t.ParamTypeID, t.Mode
        ),
        ...
        UNION ALL
        SELECT TOP (1) 1 -- only mode 0 & (1 || 2) is allowed
        FROM t1
        WHERE t1.Mode IN (1, 2)
        GROUP BY t1.ProductTypeID, t1.ParamTypeID
        HAVING COUNT(1) > 1
        UNION ALL
        ...
        )
        SELECT @C = COUNT(1) FROM t2

    This will show if any mode 1s & 2s are mixed, but not if the group contains only a 0. I'm sure there is a simple solution, but it's evading me right now.

    EDIT: I thought of a "cheat" that works perfectly. I added the following to the above:

        SELECT TOP (1) 1 -- only mode 0 & (null || 1 || 2) is allowed
        FROM t1
        GROUP BY t1.ProductTypeID, t1.ParamTypeID
        HAVING SUM(t1.Mode) = 0

    However, I'd still like to know how to do this without cheating.

    Read the article

  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing an entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. To start, I have an Entity class which stores a vector of Components:

        class Entity {
        private:
            std::vector<std::unique_ptr<Component> > components;
        public:
            Entity() { };
            void AddComponent(Component* component) {
                this->components.push_back(std::unique_ptr<Component>(component));
            }
            ~Entity();
        };

    Which, if I am not mistaken, means that when the destructor is called (even the default, compiler-created one), the destructor for the Entity will call ~components, which will call ~std::unique_ptr for each element in the vector, and lead to the destruction of each Component, which is what I want. The Component class has virtual methods, but the important part is its constructor:

        Component::Component(Entity parent) {
            parent.AddComponent(this); // I am not sure if this would work like I expect
            // Other things here
        }

    As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of:

        std::shared_ptr<Entity> createEntity() {
            std::shared_ptr<Entity> entityPtr(new Entity());
            new Component(*entityPtr); // Initialize more, and other types of Components
            return entityPtr;
        }

    Now, I believe that this setup will leave the ownership of the Component in the hands of its parent Entity, which is what I want. First a small question: do I need to pass the entity into the Component constructor by reference or pointer or something? If I understand C++, it would pass by value, which means it gets copied, and the copied entity would die at the end of the constructor. The second, and main, question is that code based on this sample will not compile. The complete error is too large to print here, however I think I know somewhat of what is going on. The compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation:

        inline Component::~Component() { };

    at the end of the header. However, since the whole point is that Component is actually an interface, I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference I am using gcc 4.4.6.
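
    A minimal sketch of one way this kind of error is commonly resolved, reusing the names from the question (Entity, Component, AddComponent) and inventing only the file names: keep the destructors declared in the headers but define them in source files where both types are complete, and take the parent by reference so no temporary Entity is created and destroyed. This is only a sketch under those assumptions, not a drop-in fix:

        // Component.h
        class Entity; // a forward declaration is enough for a reference parameter

        class Component {
        public:
            explicit Component(Entity& parent); // by reference, so nothing is copied
            virtual ~Component() = 0;           // still pure virtual; defined in Component.cpp
        };

        // Entity.h
        #include <memory>
        #include <vector>
        #include "Component.h"                  // Component is complete wherever Entity is used

        class Entity {
        public:
            Entity() { }
            void AddComponent(Component* component) {
                components.push_back(std::unique_ptr<Component>(component));
            }
            ~Entity();                          // only declared; defined in Entity.cpp
        private:
            std::vector<std::unique_ptr<Component> > components;
        };

        // Component.cpp
        #include "Component.h"
        #include "Entity.h"
        Component::Component(Entity& parent) { parent.AddComponent(this); }
        Component::~Component() { }             // the type is complete here, so deleting one is legal

        // Entity.cpp
        #include "Entity.h"
        Entity::~Entity() { }                   // ~unique_ptr<Component> is instantiated here with a complete type

    With the parent taken by reference, constructing a concrete component with new SomeConcreteComponent(*entityPtr) (name hypothetical) hands the raw pointer straight to AddComponent, and the Entity then owns it through the unique_ptr; the by-value Entity copy that would die at the end of the constructor never exists.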

    Read the article

  • How to overwrite an array of char pointers with a larger list of char pointers?

    - by Casey
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, then I want to replace the passed argument array with the new expanded one. The globbing is working fine; that is, g.gl_pathv is populated with the list of expected files. However, I am having trouble copying this array into the struct I was given.

        #include <glob.h>
        #include <stdlib.h>
        #include <string.h>

        struct command {
            char **argv;
            // other fields...
        };

        void myFunction(struct command *cmd)
        {
            char **p = cmd->argv;
            char *program = *p++; // save the program name (e.g. 'ls') and advance to the first argument

            glob_t g;
            memset(&g, 0, sizeof(g));
            g.gl_offs = 1;
            int res = glob(*p++, GLOB_DOOFFS, NULL, &g);
            glob_handle_res(res);
            while (*p) {
                res = glob(*p++, GLOB_DOOFFS | GLOB_APPEND, NULL, &g); // advance p, or this loops forever
                glob_handle_res(res);
            }
            if (g.gl_pathc <= 0) {
                globfree(&g);
            }

            cmd->argv = malloc((g.gl_pathc + g.gl_offs + 1) * sizeof *cmd->argv); // +1 for the terminating NULL
            if (cmd->argv == NULL) { sys_fatal_error("pattern_expand: malloc failed\n"); }

            // copy over the arguments
            size_t i = g.gl_offs;
            for (; i < g.gl_pathc + g.gl_offs; ++i)
                cmd->argv[i] = strdup(g.gl_pathv[i]);

            // insert the original program name
            cmd->argv[0] = strdup(program);
            cmd->argv[g.gl_pathc + g.gl_offs] = 0; // keep the array NULL terminated

            globfree(&g);
        }

        void command_free(struct command *cmd)
        {
            char **p = cmd->argv;
            while (*p) {
                free(*p++); // Segfaults here, was it already freed?
            }
            free(cmd->argv);
            free(cmd);
        }

    Edit 1: Also, I realized I need to stick program back in there as cmd->argv[0]. Edit 2: Added call to calloc. Edit 3: Edited memory management with tips from Alok. Edit 4: More tips from Alok. Edit 5: Almost working... the app segfaults when freeing the command struct. Finally: Seems like I was missing the terminating NULL, so adding the line cmd->argv[g.gl_pathc + g.gl_offs] = 0; seemed to make it work.

    Read the article

  • Linux Live CD only works when Windows is in Legacy mode?

    - by Vee
    I have asked a similar question before and no one was able to help me, but I think it was because I wasn't phrasing it properly. This is a better restatement of the question. I have Windows 8 and Linux Mint dual-booted on my PC. When I tried to boot Linux from a CD-ROM only, it would give me the following error:

        error: failure reading sector 0x0 from 'hd1'
        error: you need to load the kernel first.
        Press any key to continue...

    Linux Mint otherwise works fine, but it gives this error when I try to boot from CD. Booting Linux from CD only worked when I changed Windows to Legacy mode in the BIOS settings. When I changed it back to UEFI, it would give the same error. Why is this? How can I fix it? I am somewhat new, so is there anything else I should know about all of this? NOTE: I changed the Linux install to UEFI mode using boot-repair, but that still did not solve the problem when I tried to boot from CD-ROM.

    Read the article

  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale) and they use the traversal/visitor pattern to allow you to add new functionality to their classes. I've finally figured out how it works, and I've got a Visitor that just returns the properties from one of the objects (since I don't know how they're allocated). So, my little code does this:

        VisitorResult AGLContextVisitor::visit( Channel* channel )
        {
            // Search through Nodes, Pipes until we get to the right window.
            // Add some code to make sure we find the right one?
            // Not executing the following code as C++ in gdb?
            eq::Window* w = channel->getWindow();
            OSWindow* osw = w->getOSWindow();
            AGLWindow* aw = (AGLWindow *)osw;
            AGLContext agl_ctx = aw->getAGLContext();
            this->setContext(agl_ctx);
            return TRAVERSE_PRUNE;
        }

    So here's the problem. For the line eq::Window* w = channel->getWindow();

        (gdb) print w
        0x0

    BUT if I do this:

        (gdb) set objc-non-blocking-mode off
        (gdb) print w=channel->getWindow()
        0x300effb9 // an honest memory location

    and it sets w, as verified in the Debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in (gdb) but not in the code? The file is completely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing some memory-management basic thing here, either with C++ or Obj-C.

    [edit] channel->getWindow() is supposed to do this:

        /** @return the parent window. @version 1.0 */
        Window* getWindow() { return _window; }

    The code also executes fine if I run it from a C++-only application.

    [edit] No... I tried creating a simple stand-alone program since I was tired of running it as a plugin. Messy to debug. And no, it doesn't run in the C++ program either. So I'm really at a loss as to what I'm doing wrong. Thanks, -- Stephen Furlani
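
    Not part of the original question, just a defensive sketch using the accessors named above: since the visitor is called for every Channel in the configuration, it can help to treat a null getWindow()/getOSWindow() result as "not the one I want" and keep traversing, and to let dynamic_cast confirm the window really is an AGLWindow. It assumes the library defines a TRAVERSE_CONTINUE result alongside TRAVERSE_PRUNE:

        VisitorResult AGLContextVisitor::visit( Channel* channel )
        {
            eq::Window* w = channel->getWindow();
            if( !w )
                return TRAVERSE_CONTINUE;          // this channel has no window (yet); keep looking

            AGLWindow* aw = dynamic_cast< AGLWindow* >( w->getOSWindow() );
            if( !aw )
                return TRAVERSE_CONTINUE;          // no OS window, or not an AGL window

            this->setContext( aw->getAGLContext() );
            return TRAVERSE_PRUNE;                 // found it, stop descending
        }

    If w is genuinely null in the code but non-null when gdb re-evaluates the call, that usually points at visiting a different Channel instance (or visiting before the window has been initialized) rather than at Objective-C++ memory management, so the guards also act as a probe for which case is being hit.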

    Read the article

  • how to refactor user-permission system?

    - by John
    Sorry for the lengthy question. I can't tell if this should be a programming question or a project management question; any advice will help. I inherited a reasonably large web project (1 year old) from a solo freelancer who architected it and then abandoned it. The project was a mess, but I cleaned up what I could, and now the system is more maintainable. I need suggestions on how to extend the user-permission system. As it is now, the database has a t_user table with the column t_user.membership_type. Currently, there are 4 membership types with the following properties: 3 of the membership types are almost functionally the same, except for the different monthly fees each must pay; 1 of the membership types is a "fake-user" type which has limited access (different business logic also applies). With regard to the fake-user type, if you look in the system's business logic files, you will see a lot of hard-coded IF statements that do something like:

        if (fake-user) {
            // do something
        } else {
            // a paid member of type 1, 2 or 3
            // proceed normally
        }

    My client asked me to add 3 more membership types to the system, each of them with unique features to be implemented this month, and substantive "to-be-determined" features next month. My first reaction is that I need to refactor the user-permission system. But it concerns me that I don't have enough information on the "to-be-determined" membership type features for next month. Refactoring the user-permission system will take a substantive amount of time, and I don't want to refactor something and throw it out the following month. I get substantive feature requests on a monthly basis that come out of the blue. There is no project road map. I've asked my client to provide me with a roadmap of what they intend to do with the new membership types, but their answer is along the lines of "We just want to do [feature here] this month. We'll think of something new next month." So the questions that come to mind are:

      1) Is it dangerous for me to refactor the user permission system not knowing what membership type features exist beyond a month from now?
      2) Should I refactor the user permission system regardless? Or just continue adding IF statements as needed in all my controller files? Or can you recommend a different approach to user permission systems? Maybe role-based?
      3) Should this project have a road map? For a 1-year-old project like mine, how far into the future should this roadmap project?
      4) Any general advice on the best way to add 3 new membership types?
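
    On question 2, one common alternative to branching on membership_type everywhere is to map each membership type to a set of named permissions and have the business logic ask about the permission, not the type. The project's language isn't stated in the excerpt, so this is only a shape sketch in C++ with made-up permission names:

        #include <set>
        #include <string>
        #include <unordered_map>

        // Hypothetical permission names; in practice they would mirror the real business rules.
        using Permission = std::string;
        using PermissionSet = std::set<Permission>;

        // One place that knows what each membership type may do.
        const PermissionSet& permissionsFor(int membershipType) {
            static const std::unordered_map<int, PermissionSet> table = {
                {1, {"view_content", "post_content", "billing_monthly"}},
                {2, {"view_content", "post_content", "billing_monthly"}},
                {3, {"view_content", "post_content", "billing_monthly"}},
                {4, {"view_content"}},   // the limited "fake-user" type
            };
            static const PermissionSet empty;
            auto it = table.find(membershipType);
            return it != table.end() ? it->second : empty;
        }

        bool can(int membershipType, const Permission& p) {
            return permissionsFor(membershipType).count(p) > 0;
        }

        // Business logic then reads as a capability check instead of a type check:
        //   if (can(user.membership_type, "post_content")) { ... } else { ... }

    New membership types then become new entries in one mapping (or new database rows) rather than new branches scattered through the controllers, which also leaves room for next month's to-be-determined features.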

    Read the article

  • How can I get Pinch to Zoom back in Desktop mode?

    - by Ben Brocka
    Windows 7 had an old implementation of Pinch to Zoom where bringing your fingers apart/together would act like Ctrl + +/-, the standard zoom. It's not as nice as granular zoom (like iOS/Android use), but it worked. Most notably it doesn't work in Chrome (it did before), but I haven't noticed it working in any other apps either. In Windows 8 desktop mode, pinch to zoom doesn't seem to work at all. It doesn't even work in OneNote 2010, which, if I recall correctly, had granular zoom in Windows 7. I have an (older) 2-touch-point multi-touch monitor, and I can see the visual feedback that the two touch points are coming closer/farther apart, but it doesn't zoom. Note I'm using the touchscreen, not a touchpad or the Arc mouse or other peripherals. Can I enable this somehow, or is it gone from desktop mode? It works fine in Metro apps. Additionally, I get weird visual feedback when placing my second finger on the screen: a shrinking transparent square appears somewhere between the two fingers, visually similar to the right-click visual cue when long-pressing. It's not a right click though; I can't tell what, if anything, it's doing.

    Read the article

  • Vim: Smart indent when entering insert mode on blank line?

    - by TheDeeno
    When I open a new line (via 'o') my cursor jumps to a correctly indented position on the next line. On the other hand, entering insert mode while my cursor is on a blank line doesn't move my cursor to the correctly indented location. How do I make vim correctly indent my cursor when entering insert mode (via i) on a blank line?

    Read the article

  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 Macbook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this leads to running my LCD at non-native res and a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution. I am aware of closed lid mode, but the MBP was disassembled while in storage for about 6 months (took it apart to pull the HDD) and the cable to the touchpad, which controls the sleep sensor was damaged, meaning closed lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer as it also has a completely dead battery and a few other minor problems. I've found the app SwitchResX which appears to allow you to do what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)

    Read the article

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a CSV file. This calls a service that reads the information from the CSV, puts it into DynamicEntity objects, and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity, this kicks off other plugins to apply certain business rules. These rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    -- EDIT Okay, well the handle count goes up when I call the Service.Create(DynamicEntity) method, so I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing or something CRM takes care of (or doesn't take care of, but I can't do anything about it).

    -- Another EDIT Right, this is how it works:

      1) We have CRM and its related services.
      2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on CSV info passed into it.
      3) We have a website that allows a user to upload a CSV, and calls service 2 above to create/update entities in CRM.
      4) We have plugins fired by CRM which use service 1 above to create/update entities.

    So the user uploads a CSV to the website (3), and this fires a service (2). When service 2 creates an entity using service 1, the plugins (4) fire. The plugins also use service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the method/class/service finishes, the handle count remains the same, and so when the whole process occurs again the handle count increases again.

    Read the article

  • How do I configure emacs speedbar for C# mode?

    - by Alex B
    I'm using emacs with C# Mode and when I turn on the speedbar, no files show up by default. I can choose "show all files" on the speedbar mode, but then every .cs file shows up with a '[?]' next to the name. How do I properly configure speedbar so it shows up with .cs files by default? How do I get the '[+]' next to each file so I can navigate inside the file?

    Read the article

  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm making a new question as I don't think the software I'm using has anything to do with the overall problem and I'm trying to understand a little more about how to properly manage things. This is ASP.NET. I've been trying to understand the need for Dispose/Finalize over the past few days and believe that I've got to a stage where I'm pretty happy with when I should/shouldn't implement Dispose/Finalize. 'If I have members that implement IDisposable, put explicit calls to their dispose in my dispose method' seems to be my understanding. So, now I'm thinking maybe my understanding of object lifetimes and what holds on to what is just wrong!

    Rather than come up with some sample code that I think will illustrate my point, I'm going to describe the actual code as best I can and see if someone can talk me through it. So, I have a repository class; in it I have a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository and explicitly call DataContext.Dispose(). Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end. Front End - Controller - Repository - Controller - Front End. (Using Redgate Memory Profiler, I take a snapshot of my software when the page is first loaded.)

    My front end creates a controller object on page load and then makes a request to the repository, which sends back a list of items. When the page is finished loading, I call Dispose on the controller, which in turn calls Dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class. If I then refresh the page, it jumps to two 'Live' instances of the controller class. If I look at the object retention graph, the objects I created in my call to the list are being held onto ultimately by what looks like Linq.

    The controller/repository aside, if I create a list of objects somewhere, or I create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The 'Live' instances suggest to me that these are still in-memory, active objects; the fact that RMP apparently forces GC doesn't mean anything?

    Read the article
