Search Results

Search found 31269 results on 1251 pages for 'process management'.

Page 38/1251

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and to a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome), but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior.

    The scenario for this is a bit bizarre:

    - Running on Windows 8
    - Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    - Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    - A browser other than Internet Explorer is set as your default web browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Instead, running under an Admin account - either elevated with a User Account Control prompt or even as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out

    Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small Console application and call Process.Start() with a URL in it:

    using System;
    using System.Diagnostics;
    using System.IO;

    namespace Win8ShellBugConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Launching Url...");
                Process.Start("http://microsoft.com");

                Console.Write("Press any key to continue...");
                Console.ReadKey();

                Console.WriteLine("\r\n\r\nLaunching image...");
                Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));

                Console.Write("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }

    Compile this code, then execute it from Explorer (not from Visual Studio, because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: this time you get a nice error when Process.Start() is fired. The same happens if you are running with User Account Control off altogether - i.e. you are running as a full admin account.

    Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type or even starting a new EXE locally (i.e. Process.Start("c:\windows\notepad.exe")). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET, but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is "No Shell Association found".

    What's the Deal?

    It turns out the problem has to do with the way browsers are registering themselves on Windows.
    Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign what extensions are mapped to a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

    What Works? Reinstall the Browser

    In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome, in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. And the version of Chrome I was running before uninstalling and reinstalling is the same as I'm running now after the reinstall - yet now it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400

    As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (i.e. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? It also doesn't quite explain why registering the extensions in Chrome using Run As Administrator (via Set as Default Browser) didn't help. But in the end it works…

    Not so fast

    It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls in both regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for each. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way, some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Also Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa.

    Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
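    As a purely defensive measure at the application level, calling code can also catch the failure and try to resolve the user's default HTTP handler directly from the registry, sidestepping the shell association lookup. This is only a sketch of mine, not something from the post: the UserChoice/ProgId keys below are the standard Windows association locations, the command-line parsing is deliberately crude, and whether this rescues the elevated case depends on where the browser wrote its registration.

    using System;
    using System.ComponentModel;
    using System.Diagnostics;
    using Microsoft.Win32;

    static class UrlLauncher
    {
        public static void OpenUrl(string url)
        {
            try
            {
                // Normal path: let the shell resolve the http association.
                Process.Start(url);
            }
            catch (Win32Exception)
            {
                // Shell association lookup failed (the elevated scenario above).
                // Try to resolve the user's chosen browser ourselves.
                string progId = null;
                using (var key = Registry.CurrentUser.OpenSubKey(
                    @"Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"))
                {
                    if (key != null)
                        progId = key.GetValue("ProgId") as string;
                }

                string command = null;
                if (progId != null)
                {
                    using (var key = Registry.ClassesRoot.OpenSubKey(progId + @"\shell\open\command"))
                    {
                        if (key != null)
                            command = key.GetValue(null) as string;  // e.g. "...chrome.exe" -- "%1"
                    }
                }

                if (command == null)
                    throw;  // nothing usable found - surface the original error

                // Deliberately crude split into exe and arguments; real code needs
                // proper quoting and argument handling.
                string exe = command.StartsWith("\"")
                    ? command.Substring(1, command.IndexOf('"', 1) - 1)
                    : command.Split(' ')[0];
                Process.Start(exe, "\"" + url + "\"");
            }
        }
    }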
    Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • Do you really need cable management for a cabinet with just switches and patch panels?

    - by ObligatoryMoniker
    We are about to start wiring a building expansion and our vendor has laid out the racks in the following configuration:

    Option 1
    1U Fiber patch panel
    2U Cable Manager
    2U 48 port Patch Panel
    2U Cable Manager
    2U 48 port Patch Panel
    2U Cable Manager
    1U 48 port Switch
    2U Cable Manager
    1U 48 port Switch
    Total = 15U

    All the patch panels would be connected to the switches with 1 ft+ cables fed through cable management.

    What I am considering instead is:

    Option 2
    1U Fiber patch panel
    1U 24 port Patch Panel
    1U 48 port Switch
    2U 48 port Patch Panel
    1U 48 port Switch
    2U 48 port Patch Panel
    Total = 8U

    All of the patch panels would be connected to the switches with 0.5 ft cables directly on their face, with the top 24 ports of each switch patched to the patch panel above it and the bottom 24 ports of each switch patched to the patch panel beneath it, which would not require any cable management.

    If I go with Option 2 it saves all of the space used by cable management and allows us to keep adding switches and patch panels at the end without having to re-cable all of the patch panels above. Our vendor has indicated that this is not best practice and that 0.5 ft cables will introduce crosstalk. I could understand that being the case if we were connecting the 0.5 ft cable directly into another switch, but we are connecting it to a patch panel that likely has another 150 ft cable run from the back of the patch panel out to the port in the building, in which case the real resulting cable is 150.5 ft at minimum before even connecting it to a PC.

    It seems like it makes much more sense to go with Option 2. It is easier to expand, saves space, and saves money on cabling and cable management. Does this kind of configuration make sense, or is there a legitimate reason to choose Option 1 over Option 2?

    Read the article

  • Spawned Process Terminated in GP Startup Script

    - by Charles Gargent
    I have a Group Policy Startup Script which runs synchronously. I now need this script to run one process asynchronously. So far I have managed to get the spawned process running via the command below; however, once the rest of the script finishes, the GP Startup Script "phase" ends and the logon prompt is shown, my spawned process is terminated. Is there any way to have this process continue beyond the Startup Script phase?

    cmd /c start spawned.bat

    I guess the reason it terminates is that the process was launched by the Startup Script process, and when the parent process terminates, so do its children. PS: I need it to be launched via the existing script.

    Read the article

  • /sbin/getty process causing 100% CPU utilization

    - by scrrr
    I have an instance of Ubuntu 12.04 LTS (GNU/Linux 3.2.0-25-virtual i686) running as a KVM-VM on a host-machine that runs one more VM beside it. I deploy a Ruby on Rails application using the Capistrano deployment-gem. However, if I deploy twice in a row in a short time, the CPU usage jumps to 100% because of the /sbin/getty process. How can this be? I believe getty is a rather simple program that passes a login-name from a terminal to a login-process. Also: In my Capfile (Capistrano configuration file) I am running certain commands after the Rails application is deployed including a call to sudo /sbin/restart <APPNAME> which is an upstart task. Could this be related somehow? I can always kill the getty process and the problem is gone until the next deployment, but I would rather understand and fix the problem. Any help is appreciated. Attached is a screenshot of my problem.

    Read the article

  • Inside the JCP (Java Community Process)

    - by Tori Wieldt
    There has been lots of interest lately in the Java Community Process (JCP) and how it works. Here are two great chances to learn about the JCP; both are interviews with Patrick Curran, Chair of the JCP and director of the JCP's Program Management Office:

    Video Interview: Get an insider view of the Java Community Process (JCP) in this Oracle Technology Network (OTN) TechCast. Justin Kestelyn, Oracle Technical Network Senior Director, sits down to have a beer with Patrick Curran and discuss the JCP. They start with the basics of what the JCP is, then describe how its governance model has evolved, address common misperceptions, and explain how and why developers around the world can get involved.

    Written Interview: Janice J. Heiss interviews Patrick Curran to get his perspective on recent developments at the JCP, ongoing concerns and controversies, its history - and its future - in the article titled "The Latest on the Java Community Process: A Conversation with Patrick Curran."

    The home of the JCP is jcp.org.

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of the different aspects of the framework. Before we get into the details, what is the Case Management Framework? It is essentially a suite of tools that, when used together, solve many common workflow scenarios. The tool has been developed over time by K2 consultants who realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework. Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model. Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time. CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer. It is not a solution in and of itself; development is still required to tie the pieces together.

    CMF is under continuous development, which is both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable. CMF is not an officially supported K2 product, which means you will not get technical support, but you will get access to the source code.

    The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc., and various users can access that case at any time and take one of a set of pre-determined actions on it. When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model - there must be more than just this model to help decide if CMF is the right solution. As the training went on, we learned that one of the key features of CMF is SharePoint integration: each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases. While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers. My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way.

    When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head, as I felt that depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites - so we have yet another deciding factor for CMF: just how many cases will your solution be creating? While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework. Without it, you are losing a big chunk of what CMF has to offer.

    When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
    This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken - definitely one of those common solutions that it is good to see rolled up into a re-usable entity (and it gets a nice configuration UI to boot!). This concept is a little different from traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data. Certainly some workflow threads can get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible with action->status mappings to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case).

    I realize that this description of the Case Management Framework merely scratches the surface of what the product actually can do, and I don't think I've conclusively defined for what sort of business scenario you can make a case for the Case Management Framework. What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give you (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports. The best way to see if it will truly fit your needs is to give it a try - did I mention it is FREE? Er, ok, so it is free, but only obtainable at this time for K2 partners.

    Read the article

  • Real life example of an agile game development process

    - by Ken
    I'm trying to learn about applying agile methodologies to game development, but it seems to be impossible to find real-life examples. What I am looking for are things like:

    Initial user stories
    Final user stories (complete, covering the entire game requirements)
    Acceptance criteria
    Task list
    Sprint backlogs (before and after each sprint)

    The agile books seem to have some limited examples, many of which seem contrived. In this era of open source software, there must be a documented example of the process applied to a game that is publicly available. I am asking specifically about games because they are so different from normal applications. Regular applications are built to allow users to complete specific tasks in order to get stuff done (book a room, print a report, etc.). People play games for much less tangible reasons, so I think the process is significantly different. [It doesn't have to be Scrum; it could be any process, it just needs to be a real-life example game and be reasonably complete.]

    Read the article

  • Process development lifecycle in Oracle BPM 11g

    - by mesriniv
    The Oracle BPM 11g platform provides two modeling tools tailored to different audiences. The BPM Process Composer component is a web-based, role-driven, collaborative platform for discovery, design and documentation of business processes aimed at a business audience. It empowers the business user to participate in the definition, feedback and design of business processes. The other modeling tool is Oracle BPM Studio, which runs in the JDeveloper IDE. Irrespective of the tool used, the same BPMN and related artifacts are authored - that is, this is not import/export but multiple tools working with the same assets. In addition to BPMN 2.0, both tools provide editors for process data, organizational roles, human tasks (including assignment and user interface), and business rules. The Oracle BPM design-time repository (Oracle Metadata Services Repository) is the glue that facilitates a shared work environment across multiple BPM Composer and Studio clients. This document explains how to create snapshots and versions of your BPM projects and captures best practices for a shared process development lifecycle: http://java.net/projects/oraclebpmsuite11g/downloads/directory/Samples/bpm-122-processdevelopment-lifecycle

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!

    What is an "RPC" (for R12.1 users)?

    Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption of these patches, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs) - a collection of recommended patches. They were created by Oracle Development with the following goals in mind:

    Stability: To address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close, etc.).
    Root Cause Fixes (RCAs): To make available root cause fixes for known data integrity issues.
    Compact: To keep the file footprint as small as possible to help facilitate the install process and minimize testing.
    Granular: To compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start

    ALL R12 Cash Management users (R12.0 and R12.1 users) should start with the following Note on My Oracle Support (MOS): Doc ID 1367845.1: R12: Cash Management Recommended Patch Collections. It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users

    R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

    Note 1489997.1 - Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    Note 954704.1 - EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    Note 1316506.1 - R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard Utility

    While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • How to get email notification when process on ubuntu screen stopped?

    - by Manoj2411
    I have an application running on Ubuntu 12.04.2 LTS, and I run an application server such as a mailman server or a faye server inside a screen session. The problem is that at times the process running in screen gets stopped, and my application crashes because of that. I want to be notified whenever that 'faye server' or 'mailman server' running in screen stops. I am using Digital Ocean and I have already set up a Postfix server.

    Read the article

  • How can I tell which "explorer.exe" process is the main one?

    - by HodofHod
    I have a batch file that changes a few registry files, and then restarts explorer.exe so that they take effect. I'm using the command taskkill /f /im explorer.exe and then explorer.exe which of course kills all the explorer.exe processes, including the explorer windows I have open. Obviously, I am using the option to Launch folder windows in a separate process. Is there any way I can determine which instance of explorer.exe is the main one, and just kill that?
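    Not part of the question, but for illustration: one way to tell which explorer.exe is "the main one" is that it owns the desktop shell window. A minimal C# sketch of that idea (my own example; it only reports the PID so that a script could target it instead of killing every instance):

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class ShellExplorerFinder
    {
        // The "main" explorer.exe is the instance that owns the desktop shell window.
        [DllImport("user32.dll")]
        static extern IntPtr GetShellWindow();

        [DllImport("user32.dll")]
        static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

        static void Main()
        {
            IntPtr shellWindow = GetShellWindow();
            if (shellWindow == IntPtr.Zero)
            {
                Console.WriteLine("No shell window found (is the shell running?).");
                return;
            }

            uint shellPid;
            GetWindowThreadProcessId(shellWindow, out shellPid);
            Process shell = Process.GetProcessById((int)shellPid);
            Console.WriteLine("Shell explorer.exe PID: " + shell.Id);
            // A targeted restart could then use: taskkill /f /pid <that PID> followed by start explorer.exe
        }
    }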

    Read the article

  • Why does starting a process in the background prevent me from recalling previous commands in the command prompt?

    - by leeand00
    I'm running asynchronous command prompt commands in cmd on Windows 7 64-bit, and for some reason it affects my ability to recall previous commands just after I run them. For example, I might run something like:

    start /B rmdir /Q /S .\some_massive_directory

    Next I try to press the up arrow to recall the text of the previous command... but nothing happens. Is this because whatever data structure is holding my commands is locked by the process I sent to the background, or is it because I am using a network-mapped drive to run my command on?

    Read the article

  • Is an average RAM usage per Apache process of 43 MB "normal" for a Social Networking site? [closed]

    - by Programmer
    I have a Social Networking site that runs on a single LAMP Server that handles everything. The average RAM usage per Apache process is 43 MB. Is that amount roughly within the expected range for a Social Networking site, or is it too high? If it's too high, where and how can I look to bring that average number down? (If you need more details to determine whether it's within the expected range or not, just let me know and I'll edit my question to provide them as best I can.)

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

    About the Writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as director in a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete Identity and Access Management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, the Oracle offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or prove authentication for password resets, lockouts and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings.

    The solution offers the ability to challenge users with dynamic knowledge based authentication based on the risk of an access request or transaction, thereby offering an additional level on top of other authentication methods such as static challenge questions or one-time passwords when needed. For example, with Oracle Identity Management self-service, the forgotten password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk. The Oracle Identity Management platform dynamically switches to use the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution thus both improves user experience and saves money by avoiding unnecessary help desk calls.

    Oracle Identity and Access Management secures applications, Juniper SSL VPN and other web resources with a thoroughly modern, layered and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access based on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.
    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow their business with confidence.

    Resources:
    Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
    Oracle Access Management (HTML)
    Oracle Adaptive Access Manager (pdf)

    Read the article

  • Javascript new keyword and memory management

    - by Whyamistilltyping
    Coming from C++ it is ingrained into my mind that every time I call new, I call delete. In JavaScript I find myself calling new occasionally in my code, but I'm hoping the garbage collection functionality in the browser will take care of the mess for me. I don't like this - is there a 'delete' method in JavaScript, and is how I use it different from in C++? Thanks.

    Read the article

  • SQL Server Management Studio Express 2005 has no Configuration Manager

    - by brohjoe
    Where is the configuration manager for SQL Express 2005? I need to configure SQL Server for TCP/IP but there is no configuration manager with the package. I see SQL Server Database Publishing Wizard, I see SQL Server Migration Assistant for Access, but no Configuration Manager. According to the MSDN, there should be one. I've even looked online for a download of the Configuration Manager for SQL Server 2005, but could not find one. Did I miss something in the download or should I just scrap SQL Server Express and download the full-blown SQL Server for Developers?

    Read the article

  • Memory management in objective-c

    - by prathumca
    I have this code in one of my classes:

    - (void) processArray {
        NSMutableArray* array = [self getArray];
        ...
        [array release];
        array = nil;
    }

    - (NSMutableArray*) getArray {
        //NO 1: NSMutableArray* array = [[NSMutableArray alloc] init];
        //NO 2: NSMutableArray* array = [NSMutableArray array];
        ...
        return array;
    }

    NO 1: I create an array and return it. In the processArray method I release it.
    NO 2: I get an array by simply calling array. As I'm not the owner of this, I don't need to release it in the processArray method.

    Which is the best alternative, NO 1 or NO 2? Or is there a better solution for this?

    Read the article

  • How to weed out the bad programmers from the competent ones in the interview process

    - by thaBadDawg
    I am getting ready to add another developer to my team and I want to try to fix the mistakes I made in my last hiring cycle. I like to think of myself as a competent programmer (I can be given a project, I can deliver on that project, and the deliverables work with very few if any bugs), and so I ask questions that I would ask myself in an interview. I've come to the conclusion that my interviewing skills are completely lacking, because the last two people I've hired interviewed incredibly well but have been less than ideal at the tasks they've been given. My CTO (who was completely useless in giving any guidance as to how) suggested I improve my interviewing skills. The question is this: how does one programmer interview another programmer and get an understanding of the other programmer's abilities?

    Edit: Though slightly different, the answers provided to this question could be of use to you. That question concerns specific interview questions, while yours seems to be more general about interview approaches and not just about the questions themselves.

    Update: Just for the hell of it I asked two of the guys I worked with if they could do FizzBuzz. It took them 45 minutes and 80 minutes respectively to work it out. And these aren't bottom-level guys either.
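    For anyone who hasn't met it, FizzBuzz is the classic screening exercise referenced above; a minimal C# version looks like this (my illustration, not part of the original question):

    using System;

    class FizzBuzz
    {
        static void Main()
        {
            // Print 1..100, substituting "Fizz" for multiples of 3,
            // "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
            for (int i = 1; i <= 100; i++)
            {
                if (i % 15 == 0) Console.WriteLine("FizzBuzz");
                else if (i % 3 == 0) Console.WriteLine("Fizz");
                else if (i % 5 == 0) Console.WriteLine("Buzz");
                else Console.WriteLine(i);
            }
        }
    }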

    Read the article

  • Objective-C (iPhone) Memory Management

    - by Steven
    I'm sorry to ask such a simple question, but it's a specific question I've not been able to find an answer for. I'm not a native Objective-C programmer, so I apologise if I use any C# terms! If I define an object in test.h:

    @interface test : something {
        NSString *_testString;
    }

    Then initialise it in test.m:

    - (id)init {
        _testString = [[NSString alloc] initWithString:@"hello"];
    }

    Then I understand that I would release it in dealloc, as every init should have a release:

    - (void)dealloc {
        [_testString release];
    }

    However, what I need clarification on is what happens if, in init, I use one of the shortcut methods for object creation. Do I still release it in dealloc? Doesn't this break the "one release for one init" rule? e.g.

    - (id)init {
        _testString = [NSString stringWithString:@"hello"];
    }

    Thanks for your help, and if this has been answered somewhere else, I apologise!! Steven

    Read the article

  • What Project Management Software should I use?

    - by Vecdid
    I am looking for either an MS tool like Project or an open source equivalent. Yes, I could google it, but I am looking for some insight from people who handle that end of the software, as I would as a programmer. The tool has to run using IIS as the webserver. What are some of the best features of your suggestion?

    Read the article

  • Cocoa memory management

    - by silvio
    At various points during my application's workflow, I need to show a view. That view is quite memory intensive, so I want it to be deallocated when it gets discarded by the user. So, I wrote the following code:

    - (MyView *)myView {
        if (myView != nil)
            return myView;
        myView = [[UIView alloc] initWithFrame:CGRectZero]; // allocate memory if necessary.
        // further init here
        return myView;
    }

    - (void)discardView {
        [myView discard]; // the discard method puts the view offscreen.
        [myView release]; // free memory!
    }

    - (void)showView {
        view = [self myView];
        // more code that puts the view onscreen.
    }

    Unfortunately, this method only works the first time. Subsequent requests to put the view onscreen result in "message sent to deallocated instance" errors. Apparently, a deallocated instance isn't the same thing as nil. I thought about putting an additional line after [myView release] that reads myView = nil. However, that could result in errors (any calls to myView after that line would probably yield errors). So, how can I solve this problem?

    Read the article

  • stack and heap issue for iPhone memory management

    - by Forrest
    From this post I got to know that the Objective-C runtime does not allow objects to be instantiated on the stack, but only on the heap; this means that you don't have "automatic objects", nor things like auto_ptr objects to help you manage memory. Someone gave an example in the post "Objective C: Memory Allocation on stack vs. heap":

    NSString* str = @"hello";

    but this NSString is also not allocated on the stack. It feels odd that this str is static. (Who can explain this?) The question here is: why is there no heap? Even when mixing C++ together with Objective-C?

    /////////////////////////////// Clear my question ///////////////////////////////

    I am confused, so my questions were not clear. Let me put it this way:

    1) All Objective-C objects should be allocated on the stack? (I think yes)
    2) In C++ there is a stack for memory, so does an iOS app also have a stack? (I think yes)
    3) For an iOS app, if I only use Objective-C, what is the usage of the stack? What kind of objects should use the stack then?

    Read the article

  • java memory management

    - by pavlos
    I have the following code snippet:

    public void getResults(String expression, String collection) {
        ReferenceList list;
        Vector lists = new Vector();
        list = invertedIndex.get(1); // invertedIndex is a class member
        lists.add(list);
    }

    When the method is finished, I suppose that the local objects (list, lists) are "destroyed". Can you tell if the memory occupied by the list stored in invertedIndex is released as well? Or does Java allocate new memory for list when assigning list = invertedIndex.get(1)?

    Read the article

  • What's the difference between Process and ProcessStartInfo in C#?

    - by JimDel
    What's the difference between Process and ProcessStartInfo? I've used both to launch external programs, but there has to be a reason there are two ways to do it. Here are two examples.

    Process notePad = new Process();
    notePad.StartInfo.FileName = "notepad.exe";
    notePad.StartInfo.Arguments = "ProcessStart.cs";
    notePad.Start();

    and

    ProcessStartInfo startInfo = new ProcessStartInfo();
    startInfo.FileName = "notepad.exe";
    startInfo.Arguments = "ProcessStart.cs";
    Process.Start(startInfo);

    Thanks
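    Not from the original question, but a sketch of the usual reason both shapes exist: ProcessStartInfo is the pre-launch configuration object, while Process represents the running process itself. Options that must be decided before launch, such as output redirection, window style, or working directory, go on ProcessStartInfo. For example:

    using System;
    using System.Diagnostics;

    class StartInfoDemo
    {
        static void Main()
        {
            // Everything that must be decided *before* the process launches lives
            // on ProcessStartInfo; the Process object represents the launched process.
            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName = "cmd.exe";
            startInfo.Arguments = "/c dir";
            startInfo.UseShellExecute = false;        // required for redirection
            startInfo.RedirectStandardOutput = true;
            startInfo.CreateNoWindow = true;

            using (Process proc = Process.Start(startInfo))
            {
                string output = proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();
                Console.WriteLine(output);
            }
        }
    }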

    Read the article
