Search Results

Search found 42115 results on 1685 pages for 'access management'.


  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA. Basically, I'm looking for a piece of software that facilitates developer-client communication/collaboration and does time tracking.

    Requirements: records time estimates and does time tracking; clients must be able to create/edit their own tickets/cases; clients must not see developer-created (internal) tickets/cases; affordable (price-wise) with multiple clients.

    Nice-to-haves: supports multiple projects in one installation; free Eclipse integration (Mylyn); easy time tracking without using the web UI (Trac's post-commit hook or Redmine's commit-message scanning); clients can access the wiki; exports the data to standard formats.

    My evaluation: Trac can fulfill most of the above requirements, but only with so many customizations and plug-ins that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more, and I still haven't seen much sign of any upgrade coming. Redmine has the cleanest web UI. Its design philosophy seems to be the most elegant, with its innovative commit-message scanning and such. However, the current version doesn't seem to be very mature and stable yet: it doesn't support internal (private) tickets, and the time-tracking commit-message patch doesn't support the trunk version. The good thing is that the main trunk still seems to be actively developed. FogBugz is actually a very well written piece of software, but the idea of paying $25/month just so a client can log in to the system seems a little too much for an individual developer. The free version lets clients create/view their own cases using email, which is a sub-optimal alternative to a full-fledged list of the user's own cases; that also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though, and the built-in report functions are excellent (e.g. evidence-based scheduling). However, all the Eclipse integrations (Bugclipse, Foglyn) are commercial: yet more investment before I can use my bug tracker. And if I revert to the web UI, it's not a particularly fast-rendering web service. JIRA is something I have zero experience with. Can someone with JIRA experience recommend why it might be a good fit for this particular situation?

    Question: Can we share experience on this? Which specific plugins/customizations would best suit the requirements for this case?
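
    As a concrete illustration of the commit-message time tracking mentioned above: Redmine can log time straight from a commit message when its repository time-logging setting is enabled. A hedged sketch (the issue number and hours are hypothetical):

        svn commit -m "Implement CSV export. refs #123 @2h"

    Here "refs #123" associates the changeset with issue 123 and "@2h" logs two hours of work against it, with no trip to the web UI.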


  • Why do many software projects fail today?

    - by TomTom
    As long as there are software projects, the world wonders why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existent", which also covers "No (real) customer/user involved".

    EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion, 10% plus or minus is a good range for an offer/tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it also comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is unsuccessful, but the "lazy, incompetent developer teams"? Reading the posts, I can also hear that there is a big gap between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time/budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes...). I'm asking myself: how can we improve it? Do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic, iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new community called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.


  • Does ActiveCollab subversion integration work with subversion over ssh?

    - by executor21
    I'm trying to set up a repository in an ActiveCollab project. During setup, it reports that the connection tests successfully. However, when I try to actually update the repository, I get the following message:

        Could not obrain the highest revision number for the given repository.

    If I try to browse the repository, the following error comes up:

        Fatal error: Call to a member function getRevision() on a non-object in /u/sites/activecollab/webroot/shared/activecollab/activecollab/application/modules/source/controllers/RepositoryController.class.php on line 357

    Is this because I'm accessing the repository via svn+ssh rather than http? Or did something happen on the ActiveCollab end? The repository is accessed fine via other means; only ActiveCollab has the problem.


  • How to release MPMoviePlayerController?

    - by 4thSpace
    I have a couple of views that access the movie player. I've put the following code in a method in the AppDelegate for these views; they send in the filename to play. The code works fine, but I know a release is required somewhere. If I add the last line as a release or autorelease, the app will crash once the user presses Done on the movie player.

        MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:[NSURL fileURLWithPath:moviePath]];
        moviePlayer.movieControlMode = MPMovieControlModeDefault;
        [moviePlayer play];
        //[moviePlayer release];

    I get this error:

        objc[51051]: FREED(id): message videoViewController sent to freed object=0x1069b30
        Program received signal: "EXC_BAD_INSTRUCTION".

    How should I be releasing the player?
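
    One commonly suggested pattern for this (a sketch, not necessarily the asker's final fix; the method name playMovieAtPath: is made up): keep ownership of the player until it reports that playback finished, then release it in the notification handler. Pressing Done posts the same notification.

        - (void)playMovieAtPath:(NSString *)moviePath {
            MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc]
                initWithContentURL:[NSURL fileURLWithPath:moviePath]];
            // Watch for playback to finish before giving up ownership.
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(moviePlaybackDidFinish:)
                                                         name:MPMoviePlayerPlaybackDidFinishNotification
                                                       object:moviePlayer];
            [moviePlayer play]; // stays retained until the notification fires
        }

        - (void)moviePlaybackDidFinish:(NSNotification *)notification {
            MPMoviePlayerController *moviePlayer = [notification object];
            [[NSNotificationCenter defaultCenter] removeObserver:self
                                                            name:MPMoviePlayerPlaybackDidFinishNotification
                                                          object:moviePlayer];
            [moviePlayer release]; // balances the alloc above
        }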


  • SugarCRM - how to make users see only child users?

    - by John
    Hello, I want to change the behaviour of SugarCRM Community Edition. Here's what Sugar currently does:

    1) Admin logs into Sugar.
    2) Admin clicks on the Admin tab.
    3) Admin creates a new user named George with admin access.
    4) Under the user information section, Admin makes George report to Admin (in the database, users.report_to_id will hold the admin's user_id).
    5) Admin saves and logs out.
    6) George logs in with his password.
    7) George goes to the Admin tab.
    8) George goes to the list-users page and sees all users, including Admin, the person he is supposed to report to.

    I want to change step 8 such that George is not allowed to see the user he reports to. Additionally, he is not allowed to see his grandparent user, great-grandparent user, great-great-grandparent user, etc. George should only be able to see users that he creates. How can I achieve this? Is this even possible?
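
    One way to picture the desired restriction (a sketch of the idea only, not a drop-in patch; SugarCRM builds its list views from module queries, and the exact hook point varies by version): user rows record who created them, so the Users list query could be narrowed for non-admins along these lines, with the placeholder filled from the logged-in user's id.

        -- Hypothetical filter for the Users list view:
        -- show only users the current user created.
        SELECT id, user_name, first_name, last_name
        FROM users
        WHERE deleted = 0
          AND created_by = '<current_user_id>';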


  • How do I use logback-access in combination with Tomcat 5.5?

    - by mcduke
    I've got several web apps running on a Tomcat 5.5 server, and I'm working on improving/updating the overall logging system used throughout the system. I've already had some success with logback-classic. However, when I try to use logback-access (i.e. access the lbAccessStatus servlet), I get this exception:

        exception
        javax.servlet.ServletException: Wrapper cannot find servlet class ch.qos.logback.access.ViewStatusMessagesServlet or a class it depends on
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            ...
        root cause
        java.lang.ClassNotFoundException: ch.qos.logback.access.ViewStatusMessagesServlet
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1386)
            ...

    I have everything set up according to the docs:

        common/lib: logback-classic-0.9.15.jar, logback-core-0.9.15.jar
        server/lib: logback-access-0.9.15.jar

    Moving the libraries around doesn't seem to help. logback-classic seems to work fine; it's just logback-access that causes problems.
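
    One thing worth checking, stated as an assumption rather than a confirmed fix: in Tomcat 5.5, classes under server/lib are visible only to Catalina internals, not to web applications, so a servlet declared in a webapp's web.xml cannot be loaded from there; the jar would need to be somewhere the webapp's classloader can see (common/lib or the app's WEB-INF/lib). For reference, a typical declaration of the status servlet in web.xml looks like this (names follow the usual logback docs):

        <servlet>
          <servlet-name>ViewStatusMessages</servlet-name>
          <servlet-class>ch.qos.logback.access.ViewStatusMessagesServlet</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>ViewStatusMessages</servlet-name>
          <url-pattern>/lbAccessStatus</url-pattern>
        </servlet-mapping>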


  • Opera Mobile, offline web app development, and memory

    - by Jake Krohn
    I'm developing a data collection app for use on an HP iPAQ 211. I'm doing it as an offline web app (go with what you know) using Opera Mobile 9.7 and Google Gears. Being an offline app, it is very dependent on JavaScript for much of its behavior. I'm using the LocalServer, Database, and Geolocation components of Gears, as well as the jQuery core and a couple of plugins for form validation and other usability tweaks (no jQuery UI). I've tried to be conservative with my programming style and free up or close resources whenever possible, but Opera just slowly dies after about 10-20 minutes of use: the JavaScript engine stops responding, pages only half-load, and eventually they stop loading completely. I'm guessing it's a resource issue. Quitting and relaunching the browser solves the problem, but only temporarily. The iPAQ ships with 128 MB of RAM, about 85-87 MB of which is available immediately after a reset. With only Opera running, about 50 MB is still left unused. My questions are thus:

    1. Is it possible to get Opera to address this unused RAM?
    2. Are there configuration settings in Opera, or in the Windows registry itself, that will help improve performance? I know where to tweak, but the descriptions of the opera:config variables that I've found are less than helpful.
    3. Is it laughable to ask about memory management and jQuery in the same sentence? If not, does anyone have any suggestions?
    4. Finally, are my plans too ambitious given the platform I have to work with? I know that Gears and Windows Mobile 6 are on their way out, but they (theoretically) suffice for what I need to do. I could ditch them in favor of an iPhone/iPod Touch, Mobile Safari, and HTML5, but I'd like to try to make this work first. I didn't think that Opera was a dog when it comes to JS performance, but perhaps it's worse than I thought. That this motley collection of technologies works at all is a minor miracle, but it needs to be faster and more stable. I appreciate any suggestions.
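
    On the resource-management point, one pattern worth auditing in a Gears app (a sketch; the database name, query, and process() helper are placeholders, and gears_init.js is assumed loaded): Gears result sets hold native resources until explicitly closed, so every execute() should be paired with a close(), even when an error is thrown mid-loop.

        var db = google.gears.factory.create('beta.database');
        db.open('collection-db');
        var rs = db.execute('SELECT id, value FROM readings');
        try {
          while (rs.isValidRow()) {
            process(rs.field(0), rs.field(1)); // process() is a placeholder
            rs.next();
          }
        } finally {
          rs.close();  // release the native cursor even if process() throws
        }
        db.close();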


  • How to make Flash 'play well with others'?

    - by Sensei James
    What up fam. So this isn't a question asking about memory management schemes; for those of you who may not know, the Flash virtual machine relies on garbage collection, using reference counting plus mark-and-sweep (for good coverage of these topics, check out Grant Skinner's article and presentation). And yes, Flash also provides the "delete" operator, which can (unfortunately only) be used to remove the properties of dynamic objects. What I want to know is how to make it so that Flash programs don't continue to consume CPU and memory while running in the background (save loading content or communicating remotely, for example). The motivation for this question comes in part from Apple's ban on cross-compiled applications (in its SDK 4) on the grounds that they do not behave as predicted with the multitasking feature central to iPhone OS 4. My intention is not only to make Flash programs that will 'pass muster' as far as multitasking in iPhone OS 4, but also simply to make better (behaving) Flash programs. Put another way, how might a Flash application mimic the multitasking feature of iPhone OS 4? Does the Flash API provide the means for a developer to put their applications to 'sleep' while other programs run, and then to 'awaken' them just as quickly? In our own program, we might do something as crude as detecting when the user has been idle (no mouse motion or key press) for, say, four seconds:

        var idle_id:uint = setInterval(pause_program, 4000); // note: callback first, then the delay
        var current_movie_clip:MovieClip; // set wherever the app switches clips
        var current_frame:uint;
        ...
        // on mouse move or key press...
        clearInterval(idle_id);
        idle_id = setInterval(pause_program, 4000);
        ...
        function pause_program():void {
            current_frame = current_movie_clip.currentFrame;
            MovieClip(root).gotoAndStop("program_pause_screen");
        }

        // (on the program pause screen)
        resume_button.addEventListener(MouseEvent.CLICK, resume_program);
        function resume_program(event:MouseEvent):void {
            current_movie_clip.gotoAndPlay(current_frame);
        }

    If that's the right idea, what's the best way to detect that an application should be shelved? And, more importantly, is it possible for Flash Player to detect that some of its running programs are idle, and to similarly shelve them until the user performs an action to resume them? (Please feel free to answer as much or as little of the many questions I've posed.)
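
    On the 'sleep' question specifically: AS3 does expose focus-loss hooks that a program can use to throttle itself. A hedged sketch (handler names and frame rates are made up; assumes timeline or document-class scope where stage is available):

        import flash.events.Event;

        stage.addEventListener(Event.DEACTIVATE, onDeactivate);
        stage.addEventListener(Event.ACTIVATE, onActivate);

        function onDeactivate(e:Event):void {
            stage.frameRate = 1;  // throttle rendering while backgrounded
            // also pause timers, sounds, and network polling here
        }

        function onActivate(e:Event):void {
            stage.frameRate = 30; // restore the normal frame rate
        }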


  • How do you determine with CLLocationManager when a user denies access to location services?

    - by Brennan
    With CLLocationManager I can use the following code to determine if I can access location services on the device. This is the master setting for all apps and can be turned on and off:

        if (self.locationManager.locationServicesEnabled) {
            [self.locationManager startUpdatingLocation];
        }

    But a user can also deny access to an individual app, and in order not to execute the location-manager code I need to know whether the user approved access to location services for this specific app. I saw that at one point there was a property called locationServicesApproved which appeared to indicate whether the user approved access to location services in this app, but it was removed in 2008. Source: http://trailsinthesand.com/apple-removes-notifications-from-iphone-sdk-beta-4/

    It appears that there is no way to determine if the user approved access to location services, which seems like a big hole in the SDK. Is this feature elsewhere in the SDK? What can I do to determine if the user has approved access to location services for the current app?
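
    A sketch of the workaround commonly used at the time (not an official per-app query): start updates and watch the delegate's failure callback, where a per-app denial arrives as kCLErrorDenied.

        - (void)locationManager:(CLLocationManager *)manager
               didFailWithError:(NSError *)error {
            if ([error.domain isEqualToString:kCLErrorDomain] &&
                error.code == kCLErrorDenied) {
                // The user refused location access for this app;
                // stop updates and fall back to manual entry.
                [manager stopUpdatingLocation];
            }
        }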


  • DOM memory issue with IE8 (inserting lots of JSON data)

    - by okie.floyd
    I am developing a small web utility that displays some data from some database tables. I have the utility running fine on FF, Safari, Chrome..., but the memory management on IE8 is horrendous. The largest JSON request I do will return information to create around 5,000 or so rows in a table within the browser (3 columns in the table). I'm using jQuery to get the data (via getJSON). To remove the old/existing table, I'm just doing a $('#my_table_tbody').empty(). To add the new info to the table, within the getJSON callback, I am appending each table row that I create to a variable, and then once I have them all, using $('#my_table_tbody').append(myVar) to add it to the existing tbody. I don't add the table rows as they are created because that seems to be a lot slower than adding them all at once. Does anyone have any recommendation on what someone should do who is trying to add thousands of rows of data to the DOM? I would like to stay away from pagination, but I'm wondering if I don't have a choice.

    Update 1: Here is the code I was trying after the innerHTML suggestion (the original post had to spell out the markup because of tag-escaping issues; reconstructed here):

        /* Assuming a div called 'main_area' holds the table */
        document.getElementById('main_area').innerHTML = '';
        $.getJSON("my_server", {my: JSON, args: are, in: here}, function(j) {
            var mylength = j.length;
            var k = 0;
            var tmpText = '';
            tmpText += ''; /* the table, thead stuff, and tbody open tags go here */
            for (k = mylength - 1; k >= 0; k--) {
                tmpText += '<tr class="' + j[k].row_class + '">' +
                           '<td class="col1_class">' + j[k].col1 + '</td>' +
                           '<td class="col2_class">' + j[k].col2 + '</td>' +
                           '<td class="col3_class">' + j[k].col3 + '</td>' +
                           '</tr>';
            }
            document.getElementById('main_area').innerHTML = tmpText;
        });

    That is the gist of it. I've also tried using just a $.get request, having the server send the formatted HTML, and setting that in the innerHTML (i.e. document.getElementById('main_area').innerHTML = j;).

    Thanks for all of the replies. I'm floored with the fact that you all are willing to help.
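
    One technique often suggested for exactly this situation on older IE (a sketch using the same placeholder field names as above): push the row fragments into an array and join once, which avoids repeated string reallocation, then assign innerHTML a single time.

        var parts = [];
        for (var k = j.length - 1; k >= 0; k--) {
            parts.push('<tr class="', j[k].row_class,
                       '"><td class="col1_class">', j[k].col1,
                       '</td><td class="col2_class">', j[k].col2,
                       '</td><td class="col3_class">', j[k].col3,
                       '</td></tr>');
        }
        // one string build, one DOM write
        document.getElementById('main_area').innerHTML =
            '<table><tbody>' + parts.join('') + '</tbody></table>';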


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc. The serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small, "more normal" objects. We are pushing the memory limit on 32-bit systems and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large-object-heap fragmentation when we de-serialize the objects, and I expect there are other problems with large-object-heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database blob rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    How to Stream data from/to SQL Server BLOB fields?
    Is there a SqlFileStream like class that works with Sql Server 2005?
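
    One direction that fits these constraints, sketched in C# (not production code; table and column names are placeholders, the connection is assumed open, and the row's varbinary(max) column must first be initialized to an empty 0x value because .WRITE cannot be applied to NULL): SQL Server 2005+ can append to a varbinary(max) column in chunks with UPDATE ... .WRITE, so a small custom Stream can push each buffer to the row as the BinaryFormatter produces it. SQL Server 2000 lacks .WRITE, so that version would still need UPDATETEXT or a different path.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        // Hypothetical write-only Stream: each Write() appends its buffer to a
        // varbinary(max) column via UPDATE ... .WRITE (SQL Server 2005+).
        class SqlBlobWriteStream : Stream
        {
            private readonly SqlCommand cmd;

            public SqlBlobWriteStream(SqlConnection conn, int stateId)
            {
                cmd = new SqlCommand(
                    "UPDATE SavedStates SET StateBlob.WRITE(@chunk, NULL, NULL) " +
                    "WHERE Id = @id", conn); // placeholder table/column names
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = stateId;
                cmd.Parameters.Add("@chunk", SqlDbType.VarBinary, -1);
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                byte[] chunk = new byte[count];
                Array.Copy(buffer, offset, chunk, 0, count);
                cmd.Parameters["@chunk"].Value = chunk;
                cmd.ExecuteNonQuery(); // NULL offset means append to the end
            }

            // Minimal plumbing for a forward-only writer.
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override void Flush() { }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] buffer, int offset, int count)
            { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin)
            { throw new NotSupportedException(); }
            public override void SetLength(long value)
            { throw new NotSupportedException(); }
        }

    Serialization would then look like bin.Serialize(new BufferedStream(blobStream, 8192), largeGraphOfObjects), so only one buffered chunk sits in memory at a time; whether one UPDATE per chunk is fast enough for the near-real-time requirement is something to measure.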


  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Lost His Passion?

    - by skyflyer
    It seems this question is not programming related, but there are lots of similar questions here, so please bear with me! By the way, I am a programmer, and my team is in charge of a software project. And SO is the only place that has solved a lot of thorny troubles for me! THANK YOU GUYS!

    I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us lots of useful suggestions concerning his work, like design, and I believed he was a smart guy. I believe he still is smart, by the way. A year later, however, he seemed to lose his passion: he fooled around every day, did not care about his work any more, and produced poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. To me that is horrible; we have to keep abreast of new technology developments, otherwise we will be thrown out.

    Since we were just coworkers, I did not worry about it too much, beyond mentioning my thoughts several times. But last month we assembled a new group and were assigned a very important project, and I am the team leader, sadly! My boss gave me a lot of support and expectation as well. I did a pretty good job before, and I am very optimistic about our future. But as a team, if my team does not work hard, we will be doomed to fail no matter how hard I work and push.

    In order to revive his passion, I tried a couple of approaches, like talking to him about my concerns and my boss's anger. I offered him a new task which was quite new to him. I even persuaded my boss to give him a new incentive package. But all of them hit a wall: his reaction was simply that he did not care. Even worse, he did not want to talk about his situation. I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it worked, I could not so quickly change myself from friend and coworker into manager. As a novice in management, I am really overwhelmed! I do not want to get him fired; we are friends, and I do not want to see him fired as my team member. What can I do? Thank you guys!


  • How do you protect code from leaking outside?

    - by cubex
    Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize, the damage of code leaking outside your company/group? We obviously can't block Internet access (to prevent emailing the code) because programmers need their references. We also can't block peripheral devices (USB, FireWire, etc.). The code matters most when it contains proprietary algorithms and in-house developed knowledge (as opposed to regular routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but would take years to write from scratch. This is where leaked code comes in handy to competitors. As far as I can see, preventing leakage relies almost entirely on human processes. What do you think? What precautions and measures are you taking? And has code leakage affected you before?


  • Best Version control for lone developer

    - by Stephen
    I'm a lone developer at the moment; please share your experiences on what makes a good VC setup for a lone developer. My constraints are: I work on multiple machines and need to keep them synced up, and sometimes I work offline. I'm currently using Subversion (just the client, against a remote server), and that is working OK. I'm interested in the Mercurial and Git DVCSs, but none of their use cases seem to fit my situation.

    EDIT: I've migrated my active development to Fossil (http://www.fossil-scm.org/) after trialing it with a client. I really like the features: autosync for my repositories (reducing accidental forks), the documentation support (both wiki and embedded/versioned) that covers my need to document the code and the project in different spaces, the easy-to-configure issue tracker, nice access control, a skinnable web interface, and a helpful community.
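
    For reference, the multi-machine workflow described above looks roughly like this in Fossil (a sketch; the URL and paths are placeholders):

        # one-time, per machine: clone the self-hosted repository
        fossil clone https://example.com/myproject myproject.fossil
        mkdir myproject && cd myproject
        fossil open ../myproject.fossil

        # day to day: commits auto-push/pull whenever the server is reachable
        fossil settings autosync on
        fossil commit -m "Work from the laptop"

        # offline commits get pushed later with:
        fossil sync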


  • Cisco PIX: how to add an additional block of static IP addresses for NAT?

    - by Scott Szretter
    I have a PIX 501 with 5 static IP addresses. My ISP just gave me 5 more. I am trying to figure out how to add the new block, and then how to NAT/open at least one of them to an inside machine. So far, I named a new interface "intf2"; the IP range is 71.11.11.58-62 (the gateway should be 71.11.11.57). imgsvr is the machine I want to NAT to one of the new IP addresses (71.11.11.59). mail (.123) is an example of a machine that is mapped to the current existing 5-IP block (96.11.11.121 gateway / 96.11.11.122-127) and is working fine.

        Building configuration...
        : Saved
        :
        PIX Version 6.3(4)
        interface ethernet0 auto
        interface ethernet0 vlan1 logical
        interface ethernet1 auto
        nameif ethernet0 outside security0
        nameif ethernet1 inside security100
        nameif vlan1 intf2 security1
        enable password xxxxxxxxx encrypted
        passwd xxxxxxxxx encrypted
        hostname xxxxxxxPIX
        domain-name xxxxxxxxxxx
        no fixup protocol dns
        fixup protocol ftp 21
        fixup protocol h323 h225 1720
        fixup protocol h323 ras 1718-1719
        fixup protocol http 80
        fixup protocol rsh 514
        fixup protocol rtsp 554
        fixup protocol sip 5060
        fixup protocol sip udp 5060
        fixup protocol skinny 2000
        no fixup protocol smtp 25
        fixup protocol sqlnet 1521
        fixup protocol tftp 69
        names
        ...snip...
        name 192.168.10.13 mail
        name 192.168.10.29 imgsvr
        object-group network vpn1
          network-object mail 255.255.255.255
        access-list outside_access_in permit tcp any host 96.11.11.124 eq www
        access-list outside_access_in permit tcp any host 96.11.11.124 eq https
        access-list outside_access_in permit tcp any host 96.11.11.124 eq 3389
        access-list outside_access_in permit tcp any host 96.11.11.123 eq https
        access-list outside_access_in permit tcp any host 96.11.11.123 eq www
        access-list outside_access_in permit tcp any host 96.11.11.125 eq smtp
        access-list outside_access_in permit tcp any host 96.11.11.125 eq https
        access-list outside_access_in permit tcp any host 96.11.11.125 eq 10443
        access-list outside_access_in permit tcp any host 96.11.11.126 eq smtp
        access-list outside_access_in permit tcp any host 96.11.11.126 eq https
        access-list outside_access_in permit tcp any host 96.11.11.126 eq 10443
        access-list outside_access_in deny ip any any
        access-list inside_nat0_outbound permit ip 192.168.0.0 255.255.0.0 IPPool2 255.255.255.0
        access-list inside_nat0_outbound permit ip 172.17.0.0 255.255.0.0 IPPool2 255.255.255.0
        access-list inside_nat0_outbound permit ip 172.16.0.0 255.255.0.0 IPPool2 255.255.255.0
        ...snip...
        access-list inside_access_in deny tcp any any eq smtp
        access-list inside_access_in permit ip any any
        pager lines 24
        logging on
        logging buffered notifications
        mtu outside 1500
        mtu inside 1500
        ip address outside 96.11.11.122 255.255.255.248
        ip address inside 192.168.10.15 255.255.255.0
        ip address intf2 71.11.11.58 255.255.255.248
        ip audit info action alarm
        ip audit attack action alarm
        pdm location exchange 255.255.255.255 inside
        pdm location mail 255.255.255.255 inside
        pdm location IPPool2 255.255.255.0 outside
        pdm location 96.11.11.122 255.255.255.255 inside
        pdm location 192.168.10.1 255.255.255.255 inside
        pdm location 192.168.10.6 255.255.255.255 inside
        pdm location mail-gate1 255.255.255.255 inside
        pdm location mail-gate2 255.255.255.255 inside
        pdm location imgsvr 255.255.255.255 inside
        pdm location 71.11.11.59 255.255.255.255 intf2
        pdm logging informational 100
        pdm history enable
        arp timeout 14400
        global (outside) 1 interface
        global (outside) 2 96.11.11.123
        global (intf2) 3 interface
        global (intf2) 4 71.11.11.59
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 2 mail 255.255.255.255 0 0
        nat (inside) 1 0.0.0.0 0.0.0.0 0 0
        static (inside,outside) tcp 96.11.11.123 smtp mail smtp netmask 255.255.255.255 0 0
        static (inside,outside) tcp 96.11.11.123 https mail https netmask 255.255.255.255 0 0
        static (inside,outside) tcp 96.11.11.123 www mail www netmask 255.255.255.255 0 0
        static (inside,outside) 96.11.11.124 ts netmask 255.255.255.255 0 0
        static (inside,outside) 96.11.11.126 mail-gate2 netmask 255.255.255.255 0 0
        static (inside,outside) 96.11.11.125 mail-gate1 netmask 255.255.255.255 0 0
        access-group outside_access_in in interface outside
        access-group inside_access_in in interface inside
        route outside 0.0.0.0 0.0.0.0 96.11.11.121 1
        route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2
        timeout xlate 0:05:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00
        timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00
        timeout uauth 0:05:00 absolute
        floodguard enable
        ...snip...
        : end
        [OK]

    Thanks!
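
    For the specific goal of exposing imgsvr on 71.11.11.59, a hedged sketch of the usual PIX 6.3 shape (assuming the ISP actually routes the new /29 toward this firewall; the access-list name and www port are illustrative): a one-to-one static plus an access-list applied to the interface that holds the new block.

        static (inside,intf2) 71.11.11.59 imgsvr netmask 255.255.255.255 0 0
        access-list intf2_access_in permit tcp any host 71.11.11.59 eq www
        access-list intf2_access_in deny ip any any
        access-group intf2_access_in in interface intf2

    Two things in the posted config look likely to conflict: the existing "global (intf2) 4 71.11.11.59" line uses the same public IP that the static would claim (a given address is normally used for either PAT/global or a static, not both), and PIX 6.3 honors only one default route, so the second "route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2" line is suspect. If the ISP delivers the new block over the existing outside wire, the extra interface and route may not be needed at all.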


  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing an entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. To start, I have an Entity class which stores a vector of Components:

        class Entity {
        private:
            std::vector<std::unique_ptr<Component> > components;
        public:
            Entity() { };
            void AddComponent(Component* component) {
                this->components.push_back(std::unique_ptr<Component>(component));
            }
            ~Entity();
        };

    If I am not mistaken, this means that when the destructor is called (even the default, compiler-created one), the destructor for the Entity will call ~components, which will call ~std::unique_ptr for each element in the vector, leading to the destruction of each Component, which is what I want. The Component class has virtual methods, but the important part is its constructor:

        Component::Component(Entity parent) {
            parent.AddComponent(this); // I am not sure if this would work like I expect
            // Other things here
        }

    As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of:

        std::shared_ptr<Entity> createEntity() {
            std::shared_ptr<Entity> entityPtr(new Entity());
            new Component(*entityPtr); // initialize more, and other types of Components
            return entityPtr;
        }

    Now, I believe that this setup will leave the ownership of the Component in the hands of its parent Entity, which is what I want. First, a small question: do I need to pass the Entity into the Component constructor by reference or pointer? If I understand C++, it would be passed by value, which means it gets copied, and the copied Entity would die at the end of the constructor. The second, and main, question is that code based on this sample will not compile. The complete error is too large to print here, but I think I know what is going on: the compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation:

        inline Component::~Component() { };

    at the end of the header. However, the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference, I am using gcc 4.4.6.
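
    A common way out of the incomplete-type error (a sketch, assuming Entity's header only needs a forward declaration of Component): declare Entity's special members in the header and define them in the .cpp, where Component is complete, so the unique_ptr destructor is instantiated only there.

        // Entity.h
        #include <memory>
        #include <vector>
        class Component;           // forward declaration is enough here

        class Entity {
            std::vector<std::unique_ptr<Component> > components;
        public:
            Entity();
            ~Entity();             // declared only; defined where Component is complete
            void AddComponent(Component* component);
        };

        // Entity.cpp
        #include "Entity.h"
        #include "Component.h"     // full definition of Component

        Entity::Entity() { }
        Entity::~Entity() { }      // unique_ptr<Component> destroyed here, type complete

        void Entity::AddComponent(Component* component) {
            components.push_back(std::unique_ptr<Component>(component));
        }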


  • How to overwrite an array of char pointers with a larger list of char pointers?

    - by Casey
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to the words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, then I want to replace the passed argument array with the new expanded one. The globbing is working fine; that is, g.gl_pathv is populated with the expected list of files. However, I am having trouble copying this array into the struct I was given.

        #include <glob.h>
        #include <stdlib.h>
        #include <string.h>

        struct command {
            char **argv;
            // other fields...
        };

        void myFunction(struct command *cmd) {
            char **p = cmd->argv;
            char *program = *p++; // save the program name (e.g. 'ls') and advance to the first argument

            glob_t g;
            memset(&g, 0, sizeof(g));
            g.gl_offs = 1;
            int res = glob(*p++, GLOB_DOOFFS, NULL, &g);
            glob_handle_res(res);
            while (*p) {
                res = glob(*p++, GLOB_DOOFFS | GLOB_APPEND, NULL, &g);
                glob_handle_res(res);
            }
            if (g.gl_pathc <= 0) {
                globfree(&g);
                return;
            }

            // one extra slot for the terminating NULL
            cmd->argv = malloc((g.gl_pathc + g.gl_offs + 1) * sizeof *cmd->argv);
            if (cmd->argv == NULL) { sys_fatal_error("pattern_expand: malloc failed\n"); }

            // copy over the arguments
            size_t i = g.gl_offs;
            for (; i < g.gl_pathc + g.gl_offs; ++i)
                cmd->argv[i] = strdup(g.gl_pathv[i]);

            // insert the original program name and the terminator
            cmd->argv[0] = strdup(program);
            cmd->argv[g.gl_pathc + g.gl_offs] = 0;
            globfree(&g);
        }

        void command_free(struct command *cmd) {
            char **p = cmd->argv;
            while (*p) {
                free(*p++); // segfaulted here before the terminating NULL was added
            }
            free(cmd->argv);
            free(cmd);
        }

    Edit 1: Also, I realized I need to stick program back in there as cmd->argv[0].
    Edit 2: Added call to calloc.
    Edit 3: Edited memory management with tips from Alok.
    Edit 4: More tips from Alok.
    Edit 5: Almost working... the app segfaults when freeing the command struct.

    Finally: It seems I was missing the terminating NULL, so allocating one extra slot and adding the line cmd->argv[g.gl_pathc + g.gl_offs] = 0; made it work.


  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale), and they use the traversal/visitor pattern to allow you to add new functionality to their classes. I've finally figured out how it works, and I've got a visitor that just returns the properties from one of the objects (since I don't know how they're allocated). So my little code does this:

        VisitorResult AGLContextVisitor::visit(Channel* channel) {
            // Search through Nodes, Pipes until we get to the right window.
            // Add some code to make sure we find the right one?
            // Not executing the following code as C++ in gdb?
            eq::Window* w = channel->getWindow();
            OSWindow* osw = w->getOSWindow();
            AGLWindow* aw = (AGLWindow *)osw;
            AGLContext agl_ctx = aw->getAGLContext();
            this->setContext(agl_ctx);
            return TRAVERSE_PRUNE;
        }

    So here's the problem:

        eq::Window* w = channel->getWindow();
        (gdb) print w
        0x0

    BUT if I do this:

        (gdb) set objc-non-blocking-mode off
        (gdb) print w=channel->getWindow()
        0x300effb9 // an honest memory location

    ...it sets w, as verified in the debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in gdb but not in the code? The file is completely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing some memory-management basic here, either with C++ or Obj-C.

    [edit] channel->getWindow() is supposed to do this:

        /** @return the parent window. @version 1.0 */
        Window* getWindow() { return _window; }

    The code also executes fine if I run it from a C++-only application.

    [edit] No... I tried creating a simple stand-alone program, since I was tired of running it as a plugin (messy to debug). And no, it doesn't run in the C++ program either. So I'm really at a loss as to what I'm doing wrong.

    Thanks,
    -- Stephen Furlani


  • how to refactor user-permission system?

    - by John
    Sorry for the lengthy question. I can't tell if this should be a programming question or a project management question. Any advice will help.

    I inherited a reasonably large web project (1 year old) from a solo freelancer who architected it and then abandoned it. The project was a mess, but I cleaned up what I could, and now the system is more maintainable. I need suggestions on how to extend the user-permission system.

    As it is now, the database has a t_user table with the column t_user.membership_type. Currently, there are 4 membership types with the following properties: 3 of the membership types are almost functionally the same, except for the different monthly fees each must pay; 1 of the membership types is a "fake-user" type which has limited access (different business logic also applies). With regard to the fake-user type, if you look in the system's business logic files, you will see a lot of hard-coded IF statements that do something like:

        if (fake-user) {
            // do something
        } else {
            // a paid member of type 1, 2 or 3
            // proceed normally
        }

    My client asked me to add 3 more membership types to the system, each of them with unique features to be implemented this month, and substantive "to-be-determined" features next month. My first reaction is that I need to refactor the user-permission system. But it concerns me that I don't have enough information on the "to-be-determined" membership-type features for next month. Refactoring the user-permission system will take a substantive amount of time, and I don't want to refactor something and throw it out the following month. I get substantive feature requests on a monthly basis that come out of the blue. There is no project roadmap. I've asked my client to provide me with a roadmap of what they intend to do with the new membership types, but their answer is along the lines of "We just want to do [feature here] this month. We'll think of something new next month."

    So the questions that come to mind are:

    1) Is it dangerous for me to refactor the user-permission system without knowing what membership-type features exist beyond a month from now?
    2) Should I refactor the user-permission system regardless? Or just continue adding IF statements as needed in all my controller files? Or can you recommend a different approach to user-permission systems? Maybe role-based?
    3) Should this project have a roadmap? For a 1-year-old project like mine, how far into the future should the roadmap project?
    4) Any general advice on the best way to add 3 new membership types?
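
    On the role-based idea in question 2, the usual shape is to decouple "what a user is" from "what a user may do", so a new membership type becomes a data change instead of more IF statements. A generic sketch (table and column names are illustrative, not from this project):

        -- roles replace hard-coded membership_type checks
        CREATE TABLE t_role (
            role_id INT PRIMARY KEY,
            name    VARCHAR(50)        -- e.g. 'paid_tier_1', 'fake_user'
        );
        CREATE TABLE t_permission (
            permission_id INT PRIMARY KEY,
            name          VARCHAR(50)  -- e.g. 'can_view_reports'
        );
        CREATE TABLE t_role_permission (
            role_id       INT REFERENCES t_role(role_id),
            permission_id INT REFERENCES t_permission(permission_id),
            PRIMARY KEY (role_id, permission_id)
        );
        -- t_user then gains a role_id, and the business logic asks
        -- "does this user's role have permission X?" rather than
        -- "is this user membership type Y?"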


  • What languages, frameworks, and technologies have you used to implement document searching?

    - by Bill Brasky
    I am at a new company, and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud, there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not as MS Word docs). If you have implemented document searching, what languages, frameworks, and technologies did you use? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.


  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a CSV file. This calls a service that reads the information from the CSV, puts it into DynamicEntity objects, and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity, this kicks off other plugins to apply certain business rules. These rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    -- EDIT: Okay, the handle count goes up when I call the Service.Create(DynamicEntity) method. I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or something CRM takes care of (or doesn't take care of, but I can't do anything about it).

    -- Another EDIT: Right, this is how it works:

    1) We have CRM and its related services.
    2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on CSV info passed into it.
    3) We have a website that allows a user to upload a CSV and calls service 2 above to create/update entities in CRM.
    4) We have plugins fired by CRM which use service 1 above to create/update entities.

    So the user uploads a CSV to the website (3), and this fires service 2. When service 2 creates an entity using service 1, the plugins (4) fire. The plugins also use service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the method/class/service finishes, the handle count remains the same, so when the whole process occurs again, the handle count increases again.
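
    On the Dispose question: the CrmService SDK proxy is a SOAP client (it ultimately derives from a Component, which is IDisposable), so one low-risk experiment is to scope each proxy in a using block instead of holding it open. A sketch, with the URL and entity contents as placeholders and authentication/token setup omitted:

        using (CrmService service = new CrmService())
        {
            service.Url = "http://crmserver/mscrmservices/2007/crmservice.asmx"; // placeholder
            service.Credentials = System.Net.CredentialCache.DefaultCredentials;
            // authentication/token setup omitted

            DynamicEntity contact = new DynamicEntity();
            contact.Name = "contact";
            // ... populate properties here ...
            service.Create(contact);
        } // Dispose() releases the proxy's underlying resources here

    If the handle growth persists with proxies disposed promptly, the leak is more likely inside the CRM process than in the calling code.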


  • Does anyone else think instance variables are problematic in database-backed applications?

    - by Ben Aston
    It occurs to me that state control in languages like C# is not well supported. By this I mean that it is left up to the programmer to manage the state of in-memory objects. A common use case is that instance variables in the domain model are copies of information residing in persistent storage (i.e. the database). Clearly this violates the single-point-of-authority principle, and "synchronisation" has to be managed by the developer. I envisage a system where, instead of instance variables, we have simple public accessor/mutator methods marked with attributes that link them to the database, and where reads and writes are mediated by a framework that decides whether to hit the database. Does such a system exist? Am I completely missing the point, or is there some truth to this idea?
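
    What's being described is close to what attribute-mapped ORMs expose (LINQ to SQL and NHibernate mappings come to mind). A rough sketch of the envisaged shape; the attribute names here are illustrative, not tied to any specific framework:

        [Table("Customer")]
        public class Customer
        {
            // No authoritative instance field: a mediating framework can
            // intercept this virtual property and decide whether a read is
            // served from its identity map or fetched from the database.
            [Column("Name")]
            public virtual string Name { get; set; }
        }

    In practice the frameworks get partway there (identity maps, lazy loading via proxy interception on virtual members), but staleness between the cache and the database is still the developer's problem.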


  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm asking a new question, as I don't think the software I'm using has anything to do with the overall problem, and I'm trying to understand a little more about how to properly manage things. This is ASP.NET.

    I've been trying to understand the need for Dispose/Finalize over the past few days and believe I've got to a stage where I'm pretty happy with when I should and shouldn't implement them. 'If I have members that implement IDisposable, put explicit calls to their Dispose in my Dispose method' seems to be my understanding. So now I'm thinking maybe my understanding of object lifetimes, and of what holds on to what, is just wrong!

    Rather than come up with some sample code that I think will illustrate my point, I'm going to describe actual code as best I can and see if someone can talk me through it. I have a repository class; in it I have a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository, which in turn explicitly calls Dispose on the DataContext. Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end. Front End - Controller - Repository - Controller - Front End. (Using Red Gate Memory Profiler, I take a snapshot of my software when the page is first loaded.)

    My front end creates a controller object on page load and then makes a request to the repository, which sends back a list of items. When the page is finished loading, I call Dispose on the controller, which in turn calls Dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class. If I then refresh the page, it jumps to two 'live' instances of the controller class. If I look at the object retention graph, the objects I created in my call to the list are being held onto, ultimately, by what looks like LINQ.

    The controller/repository aside, if I create a list of objects somewhere, or create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The 'live' instances suggest to me that these are still in-memory, active objects; does the fact that RMP apparently forces GC mean nothing?
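
    For reference, the ownership chain described above usually ends up looking like this (a sketch; the repository and context type names are made up):

        public class ProductRepository : IDisposable
        {
            private readonly MyDataContext context = new MyDataContext();

            public List<Product> GetProducts()
            {
                // ToList() materializes the query, so enumerating the
                // returned list no longer requires a live DataContext.
                return context.Products.ToList();
            }

            public void Dispose()
            {
                context.Dispose(); // closes the underlying connection
            }
        }

    One caveat that matches the "held onto by LINQ" observation: LINQ to SQL entities can keep a back-reference to their DataContext for deferred loading even after materialization, which is one reason a profiler may show them rooted by LINQ internals; disabling deferred loading on the context is one thing to try when chasing that.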


  • memory usage in iOS

    - by varun
    My app has a simple UI: plain buttons, a date picker, a picker view, a table view, an action sheet, a toolbar, alert boxes, etc. No images, no network access; just a plain, simple UI. It accesses an SQLite database a lot. The ARC option is enabled. I have several questions to ask:

    1. In .h files, I am defining IBOutlets like:

        @property (nonatomic, retain) IBOutlet UIButton *bt;

    Where do I need to set bt = nil: in didReceiveMemoryWarning or viewDidLoad?

    2. Live Bytes in the Instruments tool is 4-5 MB. Is that acceptable, or do I need to reduce memory usage? If so, how can I do so? Please mention a few important points.

    3. Also, what needs to be added to the following methods?

        applicationDidReceiveMemoryWarning
        UIApplicationDidReceiveMemoryWarningNotification
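
    At the time this was asked (iOS 5 era), the usual pattern for question 1 was neither of the two methods named: the view controller nils its outlets in viewDidUnload, after a memory warning has caused the view to be released. A sketch:

        - (void)viewDidUnload
        {
            // The view was released due to a memory warning;
            // drop outlet references so the subviews can be freed too.
            self.bt = nil;
            [super viewDidUnload];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning]; // releases the view if it is off screen
            // Also flush caches and other recreatable data here.
        }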


  • using arrays to get best memory alignment and cache use, is it necessary?

    - by Alberto Toglia
    I'm all about performance these days, because I'm developing my first game engine. I'm no C++ expert, but after some research I discovered the importance of the cache and of memory alignment. Basically what I found is that it is recommended to have memory well aligned, especially if you need to access it together, for example in a loop. Now, in my project I'm writing my game object manager, and I was thinking of having an array of GameObjects, meaning I would have the actual memory of my objects one after the other:

        static const size_t MaxNumberGameObjects = 20;
        GameObject mGameObjects[MaxNumberGameObjects];

    But, as I will be having a list of components per object (component-based design: Mesh, RigidBody, Transformation, etc.), will I be gaining anything with the array at all? Anyway, I have seen some people just use a simple std::map for storing game objects. So what do you guys think? Am I better off using a pure component model?
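
    The usual answer in component-based engines (a sketch of the idea, not tied to any particular engine; the component types and the "entry i belongs to entity i" mapping are simplifying assumptions): keep each component type in its own contiguous array and iterate per system, so each hot loop walks one homogeneous, cache-friendly array instead of hopping between heterogeneous objects.

        #include <cstddef>
        #include <vector>

        struct Transform { float x, y, z; };
        struct RigidBody { float vx, vy, vz, mass; };

        // One tightly packed array per component type ("structure of arrays").
        std::vector<Transform> transforms;
        std::vector<RigidBody> bodies;   // entry i belongs to entity i here

        void integrate(float dt) {
            // Sequential walk over contiguous memory: good cache behavior.
            for (std::size_t i = 0; i < bodies.size(); ++i) {
                transforms[i].x += bodies[i].vx * dt;
                transforms[i].y += bodies[i].vy * dt;
                transforms[i].z += bodies[i].vz * dt;
            }
        }

    With this layout, the fixed GameObject array versus std::map question mostly dissolves: the entity becomes little more than an index, and locality comes from the per-component arrays rather than from how the objects themselves are stored.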

