Search Results

Search found 27654 results on 1107 pages for 'build management'.


  • It's The End of Work as We Know It, But I Feel Fine

    - by Naresh Persaud
    If you are attending OpenWorld this year, don't miss Amit Jasuja's session on trends in Identity Management. This session will take place on Monday, October 1st, in Moscone West at 10:45. You can join the conversation on Twitter as Amit Jasuja discusses the trends that are shaping Identity Management as a market and how Oracle is responding to these secular trends; use the hashtag #OracleIDM. In addition, here's a list of the sessions in the Identity Management track. In his session, Amit will discuss how the workplace is changing. The pace of technology is accelerating, and work is no longer a place but rather an activity. We are behaving socially in our professional lives, and our professional responsibilities are encroaching on our social lives. The net result is that we will need to change the way we work and collaborate. Work is anytime and anywhere. This impacts the dynamics of teams and how they access information and applications. Our teams span multiple organizations, and "the new work order" means enabling the interaction and securing the experience. It is the end of work as we know it, both economically and technologically. Join Amit for this session and you will feel much better about the changing workplace.

    Read the article

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized.

    The Problem at Hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per URL or per "page"), yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue.

    [UPDATED] The Actual Questions: How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of individuals and rather focus on the team? Some other questions: Is this too granular a form of reporting? Is this considered 'blame culture'? If you were the developer working in this environment, what would you define as a measurable goal for this year to track your progress, with the reward for achieving the goal a bonus?
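    For what it's worth, the "score bugs based on severity" suggestion can be made concrete in a few lines. Below is a minimal sketch in Python; the severity weights, the field names, and the idea of collapsing issues that share a root cause are all illustrative assumptions, not an established metric:

        # Severity-weighted per-developer issue score. Issues that share a
        # root cause (e.g. one broken include surfacing on 100+ pages) are
        # counted once, so nobody is penalized per page for a single global
        # bug. Weights and field names are hypothetical.
        from collections import defaultdict

        SEVERITY_WEIGHTS = {"critical": 8, "major": 3, "minor": 1}

        def developer_scores(issues):
            """issues: iterable of dicts with 'developer', 'severity', 'root_cause'."""
            seen = set()                  # (developer, root_cause) pairs already counted
            scores = defaultdict(int)
            for issue in issues:
                key = (issue["developer"], issue["root_cause"])
                if key in seen:           # same underlying defect: skip the duplicate
                    continue
                seen.add(key)
                scores[issue["developer"]] += SEVERITY_WEIGHTS[issue["severity"]]
            return dict(scores)

        # Three reports of one global include bug collapse to a single score.
        issues = [
            {"developer": "alice", "severity": "major", "root_cause": "BUG-17"},
            {"developer": "alice", "severity": "major", "root_cause": "BUG-17"},
            {"developer": "alice", "severity": "major", "root_cause": "BUG-17"},
            {"developer": "bob",   "severity": "minor", "root_cause": "BUG-20"},
        ]
        print(developer_scores(issues))   # {'alice': 3, 'bob': 1}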

    Read the article

  • A Year of Upheaval for Procurement Professionals - New Report & Webinar

    - by DanAshton
    2013 will see significant changes in priorities and initiatives among procurement professionals as they balance the needs of their enterprises with efforts to add capabilities for long-term procurement success. In response, procurement managers will expand their organization's spend influence via supplier relationship management, sourcing, and category management. These findings are part of the new report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," by The Hackett Group. The authors say that compared to similar studies over the last five years, 2013 is registering the greatest year-over-year changes in priorities for both procurement performance and capability issues.

    Three Important Priorities. The survey found that procurement professionals are focusing their attention in three key areas. Cost reduction: controlling expenses is always a high priority, but with 90 percent of the respondents now placing this at the top of their performance concerns, the Hackett analysts say this "clearly shows that, for better or worse, cost reduction is king" in 2013. Technology innovation: innovation has shot up significantly in the priority rankings and is now tied with spend influence for second among procurement professionals. Sixty-five percent of the survey participants said pursuing game-changing innovation and technology is a top procurement initiative. Managing supply risk: this area registered a sharp rise in importance because of its role in protecting profits, Hackett says. Supplier compliance with performance milestones and regulatory requirements is receiving particular attention, with an emphasis on efficient management of cross-functional workflows. "These processes create headaches for suppliers and buyers alike, and can detract from strategic value creation when participants are bogged down in processing paper and spreadsheets," the report explains.

    For more insights into the current state of the procurement industry, download the full report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," and watch a webcast featuring Chris Sawchuk, Global Procurement Advisory Practice Leader for The Hackett Group, and Chris Nelms, Managing Supervisor of Supply Chain Processes and Systems for Ameren.

    Read the article

  • What is the role of traditional issue tracker when Scrum / Kanban board is used?

    - by Borek
    From a very high-level view, it seems to me there are generally two types of project management tools: traditional issue trackers like FogBugz, JIRA, Bugzilla, Trac, Redmine, etc., and virtual card boards / agile project management tools like Pivotal Tracker, GreenHopper, AgileZen, Trello, etc. Sure, they overlap in one way or another - e.g. Pivotal Tracker tasks can be imported to JIRA, and GreenHopper itself is implemented on top of the JIRA issue base - but I think one can still see the difference in orientation between those two types of tools. Traditional issue trackers seem to be used even in companies otherwise doing agile project management. My question is, why do they do that? I also feel that we should use an issue tracker in my company, but when I think about it, I'm not actually sure why we need it. For example, Trello development seems to be managed by using Trello itself (see this virtual wall) even though they have access to FogBugz, one of the best issue trackers around. So maybe we don't need a traditional issue tracker when we'll be doing 100% of our work in an agile manner using one of the agile PM tools?

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples: user accounts, groups, SSL certificates, access rights, databases, software license provisionings, storage, list-serve accounts. These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as:

    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).

    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:

    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object to a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in

    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
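    A minimal sketch, in Python, of the kind of data model the requirements above imply; all class, field, and method names are hypothetical, and the provider plug-in is reduced to a single callable. This illustrates the shape of such an OMS, not any existing tool:

        # Sketch of a central registry for managed objects (accounts, SSL
        # certificates, databases, ...). Each object records its type, owner,
        # expiration date, and a hook into the provider that actually manages
        # it. All names here are hypothetical.
        from dataclasses import dataclass
        from datetime import date
        from typing import Callable, List

        @dataclass
        class ManagedObject:
            name: str
            obj_type: str                  # e.g. "ssl-cert", "db", "list-serve"
            owner: str                     # user id from the identity provider
            expires: date
            disable: Callable[[], None]    # provider plug-in hook

        class ObjectRegistry:
            def __init__(self):
                self.objects: List[ManagedObject] = []

            def owned_by(self, user):
                """What objects does user X currently own/manage?"""
                return [o for o in self.objects if o.owner == user]

            def reassign(self, from_user, to_user):
                """Move all objects owned by user X to user Y."""
                for o in self.owned_by(from_user):
                    o.owner = to_user

            def disable_expired(self, obj_type, today):
                """Have the provider disable expired objects of one type."""
                for o in self.objects:
                    if o.obj_type == obj_type and o.expires < today:
                        o.disable()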

    Read the article

  • Webcast: DB Enterprise User Security Integration with Oracle Directory Services

    - by B Shashikumar
    The typical enterprise has a large number of locally managed DBA (database administrator) accounts, which is often costly, problematic, and error-prone. Databases are a crucial component of your enterprise IT infrastructure, housing sensitive corporate data along with database user accounts and privileges. To ensure the integrity of your enterprise's data, it's imperative to have a well-managed identity management system. This begins with centralized management of user accounts and access rights. Enterprise User Security (EUS), an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, gives you the ability to manage database users and their authorizations in one central place. The cost of user provisioning and password resets is dramatically reduced. This technology is a must for new application development and should be considered for existing applications as well. Join Oracle Advisors for a live webcast on July 11 at 8am Pacific Time, where Oracle experts will briefly introduce EUS, followed by a detailed discussion of the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. To register for this event, click here.

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to your industry with a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft CRM is a world-class solution for organizations of every size, and Oracle's planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization's constituents along three design principles - Simplicity, Productivity, and Lower Total Cost of Ownership - as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft CRM focuses on these themes of Simplicity, Productivity, and Lower Total Cost of Ownership while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides overviews of the new features and enhancements planned for Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release while also helping them plan their IT projects and investments. (The link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance the customer ownership experience and help customers run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by the firm commitment we've made to our customers.

    Read the article

  • Unable to build Python modules in Mandriva 2010

    - by SteveJ
    I am trying to build a Python module (pyfits) but I get the following error:

        # python setup.py install
        /home/steve/src/pyfits-2.2.2/stsci_distutils_hack.py:239: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module.
          (sin, sout, serr) = os.popen3(cmd)
        running install
        error: invalid Python installation: unable to open /usr/lib64/python2.6/config/Makefile (No such file or directory)

    I get the same error when I try and build other modules, so my guess is I am missing a Python development library. I am running Mandriva 2010.0; any suggestions?
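    One quick way to confirm the diagnosis is to ask Python itself where distutils expects that Makefile, since the error above comes from distutils failing to open it. A small diagnostic sketch (the package-name hint in the comment is an assumption):

        # Print the Makefile path this Python's distutils expects, and whether
        # it exists. If it is missing, the interpreter's development package
        # is not installed (on Mandriva, a python-devel style package; the
        # exact package name is an assumption).
        from distutils import sysconfig
        import os

        mk = sysconfig.get_makefile_filename()
        print(mk)
        print(os.path.exists(mk))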

    Read the article

  • Visual Studio Project build Error [closed]

    - by Mina Sobhy
    I installed Visual Studio 2010 on Windows 7 SP1, but a build error occurs in the Debug configuration:

        1>------ Build started: Project: x, Configuration: Debug Win32 ------
        1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
        1>C:\Users\mina\Documents\Visual Studio 2010\Projects\x\Debug\x.exe : fatal error LNK1120: 1 unresolved externals
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Read the article

  • javax.management.ObjectName not found

    - by VANJ
    Oracle 11g on a RHEL Linux (Dell) box:

        Linux *** 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

        /opt/oracle/product/11.1.0/jdk/bin/java -version
        java version "1.5.0_11"
        Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_11-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_11-b03, mixed mode)

    Where can I find the javax.management.ObjectName class? Thanks

    Read the article

  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys a web site to a staging server. Currently, PD does all the same; namely, it deploys the latest version of the web site to a production server. Clearly, this is not very good: I want to be able to deploy the same exact version of the web site that was previously deployed on the staging server, not the latest one. To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say, 2.1.0.101 to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to the production server. What I want it to do, however, is to build and deploy the version which was previously built in the SD plan (that is, 2.1.0.101). Of course I can make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.

    Read the article

  • django custom management command does not show up in production

    - by Tom Tom
    I wrote a custom management command for Django. Locally, with my dev settings, everything works fine. Now I have deployed my project onto the production server, and the management command does not show up - that is, it is not available. But I did not get an error message deploying the project (syncdb). Any ideas where I could begin to search? Is there a special command to make sure that all custom management commands are "autodiscovered"?
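    For reference, here is a minimal sketch of the layout Django expects for a custom command ("myapp" and "mycommand" are hypothetical names). Both management/ and commands/ need an __init__.py, and the app must be listed in INSTALLED_APPS in the production settings - a common reason a command shows up locally but not in production:

        # myapp/management/commands/mycommand.py -- minimal sketch.
        # Required layout (each package directory needs an __init__.py):
        #   myapp/
        #     management/
        #       __init__.py
        #       commands/
        #         __init__.py
        #         mycommand.py
        from django.core.management.base import BaseCommand

        class Command(BaseCommand):
            help = "Minimal custom management command sketch."

            def handle(self, *args, **options):
                self.stdout.write("mycommand was discovered\n")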

    Read the article

  • Which open-source Scrum project management tool do you use?

    - by jumar
    I'm looking for an open-source Scrum project management tool for a small dev team (3 to 6 developers). I've been impressed by Trac, but I don't need its bug tracking feature, as we already use Mantis. I'm having a look at iceScrum, which seems feature-rich and shiny but a bit cluttered. A solution that integrates into Eclipse would be a plus.

    Read the article

  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
    If the title to this question seems a bit vague, I am sorry, but I wasn't sure how to distill what I am attempting to do into a single sentence. A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below.

    1. Install the tools needed to compile, link, and run Octave. For Ubuntu the commands below have worked for me.

        sudo apt-get build-dep octave3.2
        sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf
        sudo apt-get install libfontconfig-dev bison

    2. Download the source code for an Octave release from the GNU Project Archives for Octave and unpack the archive into a folder on your system.

    3. Use the commands below to build, check, and install Octave.

        ./configure
        make
        make check
        sudo make install

    Unfortunately, it turns out that the above builds an Octave that contains all the debugging symbol tables. The object files alone are huge, taking up around 1.7 GB. The current Octave documentation suggests compiling without debugging symbols by using

        make CFLAGS=-O CXXFLAGS=-O LDFLAGS=

    instead of just make. However, when I tried this it did not work: the -g option was still used for the compiles. For the heck of it I instead tried

        ./configure CFLAGS=-O CXXFLAGS=-O

    and this did work. (Instead of ~1.7 GB, the result of the build now takes up around 253 MB.) My questions are:

    1. Is this actually the correct (recommended?) method to compile Octave without debugging symbols (i.e. without -g)?

    2. How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode.

    3. Is a (more) complete list available of the targets used with the Octave Makefile? I have only seen make, make check, and make install documented, but apparently make distclean is also allowed (it removes the compilation results so you can do a complete rebuild of everything). I'm wondering what else might be available.

    FWIW, I have tried using

        ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64"

    and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app?

    I have also tried looking inside the Octave Makefile to see if I could decipher what command-line directives it accepts. I got nowhere; I have not a single clue as to how that Makefile does whatever it is that it does.

    Read the article

  • How can I build the Boost.Python example on Ubuntu 9.10?

    - by Gatlin
    I am using Ubuntu 9.10 beta, whose repositories contain Boost 1.38. I would like to build the hello-world example. I followed the instructions here (http://www.boost.org/doc/libs/1%5F40%5F0/libs/python/doc/tutorial/doc/html/python/hello.html), found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output:

        Jamroot:18: in modules.load
        rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>.
        /usr/share/boost-build/build/project.jam:312: in load-jamfile
        /usr/share/boost-build/build/project.jam:68: in load
        /usr/share/boost-build/build/project.jam:170: in project.find
        /usr/share/boost-build/build-system.jam:248: in load
        /usr/share/boost-build/kernel/modules.jam:261: in import
        /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build
        /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope

    I do not know enough about Boost (this is an exploratory exercise for myself) to understand why the python-extension macro in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question, if I were to just willy-nilly start a project in an arbitrary directory, how would I write my Jamroot?

    Read the article

  • Role provider and Role management

    - by AspOnMyNet
    When the CacheRolesInCookie property is set to true in the Web.config file, role information for each user is stored in a cookie. When role management checks to see whether a user is in a particular role, the roles cookie is checked before the role provider is called to check the list of roles at the data source. The cookie is dynamically updated to cache the most recently validated role names.

    a) As far as I understand the above text, even though role management checks the roles cookie first, the role provider still checks the list of roles at the data source?

    b) The above text talks about role management, which is invoked before the role provider is called. What class acts as the role manager?

    Thanks
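    Regarding (a), the quoted text suggests the provider is consulted only when the roles cookie cannot answer. A rough model of that behavior, sketched in Python rather than ASP.NET (all names illustrative):

        # Model of "check the roles cookie before calling the role provider":
        # the provider (the data source) is hit only on a cache miss, and the
        # cache is refreshed with the most recently validated role names.
        class RoleCache(object):
            def __init__(self, provider):
                self.provider = provider   # exposes get_roles_for_user(user)
                self.cookie = {}           # stands in for the per-user roles cookie

            def is_user_in_role(self, user, role):
                if user not in self.cookie:                        # cookie miss
                    roles = self.provider.get_roles_for_user(user)
                    self.cookie[user] = set(roles)                 # cache validated names
                return role in self.cookie[user]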

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: the solution I am deploying only contains Report Server projects (rds), so no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to replace the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (e.g. AfterCompile)? The core steps that I'd like to achieve are:

    1. Bundle the RDL and data source files
    2. Connect to the host server to register/deploy the reports
    3. Re-apply any subscriptions that previously existed
    4. Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment (http://stackoverflow.com/questions/88710/reporting-services-deployment), but it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article
