Search Results

Search found 13616 results on 545 pages for 'rights management'.


  • What's a good tool for Scrum Project Management in game development? [closed]

    - by BleakCabalist
    I'm looking for an efficient, easy-to-learn tool for Scrum project management, not for professional use but for my thesis on the use of Scrum in game development. Basically I want to visualize the production process of a hypothetical game. Some fragments of the production process should be really detailed to make my point, so user stories, tasks, burndown charts etc. are a must. I'm using Scrum, Kanban and some Lean practices for eliminating waste. I also want to use Extreme Programming practices in this production process, including TDD and Continuous Integration. I have zero experience in professional project management, so I need something that's fairly simple to use for a newb like me. Can anyone recommend a tool like that? For now I'm considering TargetProcess and ScrumWorks. Thanks.

    Read the article

  • Should your client be able to view your project management board?

    - by bizso09
    We're building bespoke software for our client and use Codebase for our project management. Is it a good idea to let our client view our project management board? The advantages we thought of are that this would enhance cooperation between the client and the dev team, following agile practices. He would essentially become part of our team. It would also reduce communication overhead and make sure we're on the same page. The client could track the progress of the system and make suggestions along the way on the user stories. In addition, he could submit bugs or feature requests. The disadvantages we thought of are that some aspects of the board might be too technical for the client. He might suggest changes to the user stories too often, and he might view content that we normally wouldn't want our client to see. For example, when we compromise on technology or functionality, the client might question that and insist on doing things one way or the other.

    Read the article

  • Delegating account unlock rights in AD

    - by ewall
    I'm trying to delegate the rights to unlock user accounts in our Active Directory domain. This should be easy, and I've done it before... but every time the user tries to unlock an account (using the LockoutStatus tool), he gets denied with the error "You do not have the necessary permissions to unlock this account." Here's what I've done: I created a domain local group and added the members who should have the rights. This was created over a week ago, so the users have logged out and in again. In ADUC, I've used the Delegation of Control wizard on the OU which contains our user accounts to grant the group Read lockoutTime and Write lockoutTime permissions, per MSKB 279723. I have double-checked in ADSIEdit that the permissions were applied correctly. I have forced replication between all domain controllers to ensure the permission changes were copied over. The user testing it has logged out and in again to ensure any changes have been applied to his account. ...That covers all the bases I can think of. Anything else I could be missing?
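    If the wizard-applied permissions keep going missing, the same delegation can also be applied (and re-applied) from a script, which makes it easier to verify exactly which ACE ends up on the OU. Below is a minimal sketch that shells out to the built-in dsacls tool; the group, OU and domain names are hypothetical placeholders, not values from this environment.

        # Sketch: grant a helpdesk group Read/Write on the lockoutTime attribute
        # of user objects under an OU -- the permission that account unlocking needs.
        # GROUP and OU below are invented example names; substitute your own.
        import subprocess

        OU = "OU=Staff,DC=corp,DC=example,DC=com"   # OU holding the user accounts
        GROUP = r"CORP\Helpdesk-Unlock"             # domain local group to delegate to

        # RPWP = Read Property + Write Property; ";lockoutTime;user" scopes the ACE
        # to the lockoutTime attribute on user objects, inherited via /I:S.
        subprocess.run(
            ["dsacls", OU, "/I:S", "/G", f"{GROUP}:RPWP;lockoutTime;user"],
            check=True,
        )

    Running dsacls with just the OU as an argument afterwards dumps the ACL, which is a quick way to confirm the entry actually landed.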

    Read the article

  • What's the best way to sell ReSharper to management? [closed]

    - by Jackson Pope
    Possible Duplicate: How do you convince your boss to buy useful tools like ReSharper, LinqPad? I've recently started a new job developing code in C# and ASP.NET. At a previous employer I used ReSharper from JetBrains and I loved it. I've downloaded the free trial in my new job, as have several of my new colleagues on my recommendation. Everyone thinks it's great. But now our trials are coming to an end and it's time to buy or say goodbye. I've been reliably informed that getting money for tools from senior management is like trying to get blood from a stone, so how can I convince them to loosen their grip on the purse strings and buy it for our team (of seven developers)? Does anyone have any experience of convincing management of the benefits of refactoring tools? I feel the benefit every second I use it, but I'm having difficulty explaining the concrete benefits to a manager who only thinks in terms of cost.

    Read the article

  • How to convince management of making our project open source?

    - by MrSoundless
    Xamarin 3 was released last week with a great new addition: Xamarin.Forms. This caught our attention because we've been using such a system for a couple of years now. We developed it ourselves and have used it for a bunch of projects. We've been looking for a way to make this project open source, but we haven't managed to convince management. They believe we should not make it open source because we won't gain anything from it, and all that will happen is that the competition will be able to build apps more quickly with our library. We believe open sourcing our library will make the world a better place and will make our library much more stable and complete. So my question to all you people out there: how can we convince management to open source our library?

    Read the article

  • What is the best agile project management technique for developing innovative software systems?

    - by user654019
    I am involved with the development of innovative software. The development is innovative in the sense that we don't know how to build it, we don't know which algorithms to use, and nobody has done it before. The process consists of several stages of studying books and papers, suggesting algorithms, writing prototypes and comparing the results with actual data. We hope that after several iterations we will converge on a valid software system. What is the best project management approach that we can use? Is there any project management software for these types of projects?

    Read the article

  • Announcing General Availability of the E-Business Suite Plug-in

    - by Kenneth E.
    Oracle E-Business Suite Application Technology Group (ATG) is pleased to announce the General Availability of Oracle E-Business Suite Plug-in 12.1.0.1.0, an integral part of Application Management Suite for Oracle E-Business Suite. The combination of Enterprise Manager 12c Cloud Control and the Application Management Suite brings together functionality that was available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle’s Real User Experience Insight product and the Configuration & Compliance capabilities to provide the most complete solution for managing Oracle E-Business Suite applications. The features that were available in the standalone management packs are now packaged into the Oracle E-Business Suite Plug-in, which is now fully certified with Oracle Enterprise Manager 12c Cloud Control. This latest plug-in extends Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support. Here is all the information you need to get started.
    EBS Plug-in 12.1.0.1.0 info
    Full Announcement:
    • E-Business Suite Plug-in 12.1.0.1 for Enterprise Manager 12c Now Available
    MOS:
    • Getting Started with Oracle E-Business Suite Plug-in, Release 12.1.0.1.0 (Doc ID 1434392.1)
    Documentation:
    • Oracle Application Management Pack for Oracle E-Business Suite Guide, Release 12.1.0.1.0
    Certification:
    • Platforms and OS release certification information is available from My Oracle Support via the Certification page.
    • Search using the official trademark name, Oracle Application Management Pack for Oracle E-Business Suite, and Release 12.1.0.1.0.

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grain control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? How does it relate to master data?
    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes. [1]
    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
    • Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    • Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    • Relationships / Mappings: Product / Segment; Product / Geo; City > State > Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 > ICD-10
    • Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates
    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point.
    Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
    Spend Management: Product, Service & Supplier Codes
    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
    Change Management: Point-of-Sale Transaction Codes and Product Codes
    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.
    Multi-National Companies: Industry Classification Schemes
    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
    References
    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.

    Read the article

  • How to decide on going into management?

    - by Rob Wells
    I read the transcript of a speech by Richard Hamming, included as part of this SO question, and the speech had a quote that got me thinking about when someone should move into management. "When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go." Any other suggestions as to how you can decide if you want to move away from the coal face and into management?

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index
    This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.
    Contents
    • Introduction
    • Downloading the software
    • Preparing a database
    • Creating the schema
    • WebLogic Server installation
    • Installing Oracle IRM
    Introduction
    Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier.
    • Use a 64-bit platform. IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.
    • Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.
    • Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum server.
    • Have at least 20GB of free disk space on the partition you intend to install the IRM server into. The downloads are big, then you extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process again.
    Downloading the software
    OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following:
    • Oracle WebLogic Server
    • Oracle Database
    • Oracle Repository Creation Utility (RCU)
    • Oracle IRM server
    You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.
    Preparing the database
    I'm not going to go through the finer points of installing the database. There are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. No point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies. With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights, etc.
    Creating the IRM database schema
    Oracle uses the Repository Creation Utility, which builds your schema. Extract the files from the RCU zip, then in a terminal window:
        cd /oracle/install/rcu/bin
        ./rcu
    This will launch the Repository Creation Utility. Hit next and continue onto the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you obviously just need to click next at this point to create a new schema.
    The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation.
    • Hostname: irm.oracle.demo
    • Port: 1521 (this is the default TCP port for the Oracle Database)
    • Service Name: irm.oracle.demo (note this is not the SID, but the service name)
    • Username: sys
    • Password: ********
    • Role: SYSDBA
    Then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select to deploy just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc. The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured with the IRM server you're about to install in this guide. Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had set up a tablespace already that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking on next causes it to create the schema and default data. After this you are presented with the completion summary.
    WebLogic Server installation
    The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application and is currently only supported on Oracle WebLogic Server. So the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:
        java -d64 -jar wls1033_generic.jar
    In the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/; then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but the installer does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken, no changes here, just click next. And finally we are ready to install: hit next, sit back and relax. Typically this takes about 10 minutes. After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process.
    Installing Oracle IRM
    The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves.
    Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run:
        ./runInstaller
    This will launch the installer, which will also ask for the location of the JDK. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go. The installer now checks that you have all the required libraries installed and other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next... Now choose where to install the IRM files; you must install into the same Middleware Home as the WebLogic Server installation you just performed. Usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install. The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage. Again, not fun. Hit Install, time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes. Finally the installation of your IRM server is complete. Click on finish; the next phase is to create the WebLogic domain and start configuring your server. Now move onto the next article in this guide... configuring your IRM server ready to seal your first document.

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
    Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.
    4. Approximately 50 percent of respondents expend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches or entered using the inch mark ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.
    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
    Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.
    Characteristics of Stellar MDM
    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:
    • enterprise-grade MDM performance
    • complete technology that can be rapidly deployed and addresses multiple business issues
    • end-to-end MDM process management with data quality monitoring and assurance
    • pre-built, business-relevant MDM applications with data stores and workflows
    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.
    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

  • Join Us for the Next Quarterly Customer Update Webcast

    - by michelle.huff
    Join us for the next Oracle Content Management Quarterly Customer Update Webcast, scheduled for this coming January 19 & 20, 2010. In this webcast we'll bring you up to speed on the latest updates and changes made available these past few months. Additionally, we'll cover the new features and certifications in the latest ODC & ODDC 10.1.3.5.1 release, as well as the upcoming Enterprise Content Management Suite 11gR1 PS3 (patch set 3) release. Register Today!
    Americas / EMEA time zones: Customer Update, January 19, 2010, 9:00am US PT / 12:00pm US ET / 17:00 London. Length: 1 hour. *Please use your corporate email address to register.
    Asia-Pacific time zones: Customer Update (Repeat Webcast), January 20, 2010, 1:00pm Sydney AET, 10:00am Singapore (Jan 19, 2010 @ 6:00pm US PT). Length: 1 hour. *Please use your corporate email address to register.
    Missed Previous Customer Quarterly Updates? Get caught up on Oracle & ECM news. View a recording or the presentation from previous Webcasts held since June 2008 (available from My Oracle Support).

    Read the article

  • The Programmer's Bill of Rights

    - by Martin
    I know Jeff has written about this subject on his coding horror blog in the past but I am interested in learning the opinions of a broad set of developers. I agree wholeheartedly with his statement: I propose we adopt a Programmer's Bill of Rights, protecting the rights of programmers by preventing companies from denying them the fundamentals they need to be successful. So, if you could propose one item to the bill of rights, what would it be?

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.
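    A minimal sketch of the five steps above, expressed as a single Fabric (1.x) task pushed over SSH; the host name, package list, repository URLs, service user and paths are all placeholders rather than the project's real values.

        # Sketch: provision the forecast server with Fabric 1.x (run as "fab provision").
        # Every concrete name below (host, packages, URLs, user, paths) is an assumed
        # placeholder, not the real project's configuration.
        from fabric.api import cd, env, sudo

        env.hosts = ["forecast.example.org"]

        def provision():
            # 1. Standard packages and libraries: compilers and a database server.
            sudo("yum install -y gcc gcc-gfortran postgresql-server git")

            # 2. Custom scientific model, built from source since no package exists.
            sudo("git clone https://example.org/repos/wave-model.git /opt/src/wave-model")
            with cd("/opt/src/wave-model"):
                sudo("./configure && make && make install")   # hypothetical build steps

            # 3. Service account that owns the database and runs the model.
            sudo("useradd --system forecast")

            # 4. Model-database glue scripts checked out from source control.
            sudo("git clone https://example.org/repos/forecast-scripts.git /opt/forecast")

            # 5. Cron entry that generates a forecast every six hours.
            sudo("echo '0 */6 * * * /opt/forecast/run_forecast.sh' | crontab -u forecast -")

    For the isolated, USB-key scenario the same steps can live in a plain local script instead, since Fabric assumes an SSH connection to the target; Puppet's masterless "puppet apply" mode is another common answer to that constraint.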

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful to limit the log file size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that deletes ulogs older than X, once I find out how far back in the update log Tokyo Tyrant needs to look in order to fail over. Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size versus how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
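    Until that replication window is known, the cleanup can be sketched as a small script run from cron; the directory, the retention period and the .ulog naming below are assumptions to adjust for your install.

        # Sketch: delete Tokyo Tyrant update-log files older than MAX_AGE_DAYS,
        # on the assumption that every peer has already replayed past that point.
        # ULOG_DIR and the retention value are placeholders.
        import os
        import time

        ULOG_DIR = "/var/ttserver/ulog"   # wherever ttserver's -ulog option points
        MAX_AGE_DAYS = 7

        cutoff = time.time() - MAX_AGE_DAYS * 86400
        for name in sorted(os.listdir(ULOG_DIR)):
            path = os.path.join(ULOG_DIR, name)
            if name.endswith(".ulog") and os.path.getmtime(path) < cutoff:
                os.remove(path)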

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of switches (SNMP managed)? Is it reasonable to ask the organization to supply a list such as this:
    • Switch name
    • IP address of the switch
    • Location
    • SNMP community strings
    Or are there standard ways to run discovery scans, such as UDP broadcasts? After having generated a repository such as the above: given a single switch, how do I query it for the list of all devices attached to it? Finally, how do I selectively power ports down and up remotely using SNMP? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.
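    Whichever library does the work, the SNMP operations themselves are standard: walk the BRIDGE-MIB forwarding table to see which MAC addresses sit behind which port, and write IF-MIB ifAdminStatus to shut a port down or bring it back up. Below is a rough pysnmp sketch of those two operations; the switch address, community strings and ifIndex are invented placeholders, and the same OIDs apply from SharpSNMP in C#.

        # Sketch only: walk dot1dTpFdbPort (MAC -> bridge port) and set
        # ifAdminStatus to 2 (down) for one interface. The address, community
        # strings and the ifIndex (12) are example values, not real ones.
        from pysnmp.hlapi import (
            CommunityData, ContextData, Integer, ObjectIdentity, ObjectType,
            SnmpEngine, UdpTransportTarget, nextCmd, setCmd)

        SWITCH = "192.0.2.10"       # placeholder switch address
        READ_COMMUNITY = "public"
        WRITE_COMMUNITY = "private"

        # Which MAC addresses were learned on which bridge port (BRIDGE-MIB).
        for errInd, errStat, errIdx, varBinds in nextCmd(
                SnmpEngine(), CommunityData(READ_COMMUNITY),
                UdpTransportTarget((SWITCH, 161)), ContextData(),
                ObjectType(ObjectIdentity("1.3.6.1.2.1.17.4.3.1.2")),  # dot1dTpFdbPort
                lexicographicMode=False):
            if errInd or errStat:
                break
            for oid, port in varBinds:
                print(oid, "-> bridge port", port)   # the MAC is encoded in the OID suffix

        # Administratively shut the interface with ifIndex 12 (2 = down, 1 = up).
        next(setCmd(
            SnmpEngine(), CommunityData(WRITE_COMMUNITY),
            UdpTransportTarget((SWITCH, 161)), ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.7.12"), Integer(2))))

    Mapping a bridge port back to an ifIndex goes through dot1dBasePortIfIndex (1.3.6.1.2.1.17.1.4.1.2) before the ifAdminStatus write.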

    Read the article

  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables? Yes, I know there are a lot of howtos and already answered questions about this topic, like this one: How do you organise the cables in your racks? Currently I am searching the web for different techniques (alternatives) for labeling the cables at server racks and/or data centers. Unfortunately I do not have any experience with labeling/documentation of network cables on a large scale. As far as I have been able to find out, the current labeling techniques are coloring and self-defined printed labels (numbering, text), possibly following a standard, which is what is usually used. I want to know if QR codes, RFID (OK, RFID in a data center would be a bad idea because of the radio frequency, wouldn't it?), barcodes or something similar have already been used by some administrators, or why they did not consider such techniques at all? Too complicated (with a QR scanner etc.) if you are in front of the cables and want quick feedback on what the cable is? What alternatives are out there? Advantages/disadvantages? Best practice? I would appreciate any help on this topic, thank you! Regards, Alex
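    On the QR-code option specifically, generating the label payloads is the easy part; as a rough illustration, the third-party Python qrcode library (installed with Pillow support) turns a cable identifier into a printable image in two lines. The naming scheme shown is invented.

        # Sketch: render a cable/port identifier as a QR image for a label.
        # Requires the third-party package: pip install "qrcode[pil]"
        # The identifier format (rack / switch / port) is an invented example.
        import qrcode

        cable_id = "RACK03-SW01-GI0/24"
        qrcode.make(cable_id).save("RACK03-SW01-GI0_24.png")

    The payload could just as well be a URL into the patching database, so a phone scan lands straight on that cable's record.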

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon,
    Clients (10): all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution).
    The situation: I work in a small design company with 8 people. At present we are looking to consolidate all our image files into one location; right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files.
    Ideally what is needed:
    • Centralised on one server, but allows us to search via Spotlight (not essential, but would be nice)
    • Includes searchable metadata information such as date, location, title
    • Open source or as low cost as possible
    • Allows simultaneous users to import files
    So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user writing to the library at a time... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • SQL Server Rights to backup drive

    - by Sam
    I'm trying to copy a backup I've made from one server to another using either an SSIS or Powershell step in a job. I've run into the same error on both systems when running the step under the sql agent. I receive errors that the path does not exist. I've tried granting the agent rights to e:\backups, where the file is located, but it still doesn't work. When I use a proxy for the step, it works fine. Can anyone help me with what permissions to grant to sqlagent? Rights look to have been granted to MSSQL$Instance1 on the backup drive.

    Read the article

  • Change Management Software

    - by Andrew
    I manage an 80,000 user CIS application written in Uniface. Every form in the application, and many of its processes, are represented by .frm files. We have hundreds of these files and 5 instances of the application. Instances include multiple production installations which must be kept sync'd. We do not get MD5 from our vendor for files that are released to us as patches. We have been using a spreadsheet to track changes, but this is far from ideal. Is there a commercial application that can be purchased that will allow us to track changes to the instances? Thank you all! EDIT: Patches are released as zip files with either FRM files in them or SQL files or a mix of both. SQL files will contain statements that need to be run in Oracle. Patches are also assigned unique patch numbers.

    Read the article

  • Determining who is running with administrator rights?

    - by Alex C.
    I work at a small non-profit organization with about 55 desktop PCs running Windows XP Pro. The domain controller is running Windows Server 2003. I have a two-part question (note that I'm a bit of a newb when it comes to network administration). Part 1: Is there some simple way that I can determine which accounts are logged in with administrator rights? Part 2: Is there a way that I can remove administrator rights from users without sitting down at each individual machine? Thanks for considering my questions.

    Read the article

  • Unix Password Management Keyring

    - by Phil
    I am looking for a password manager for a command-line Unix environment. So far all I can find are keyring applications for Windows, Linux, and Mac. But no command-line Unix interfaces. My main goal is to be able to access a password keyring through an SSH connection to a machine that has no graphical user interface. If there are no good unix password keyrings out there, what would be a better way to store personal passwords in a central location?

    Read the article

  • Server Room Protocols/Server Room Management

    - by Matthew E
    Hi, I'm new to this site but have found the articles and feedback very useful. We have a server room which our organisation owns and controls, yet there are several third-party companies that have open access to this room. As such, we have been asked to put together a protocol paper that stipulates the standards we expect to be adhered to when working in this room. Other than the monitoring of UPS loads, air cooling functionality, alarm systems etc., does anyone have any guidance on the kind of issues that need to be documented to make this protocol all-encompassing? I'm thinking along the lines of not leaving cardboard or other combustibles in the room, not having food and drink in the room, not altering the fabric of the building by drilling through walls, etc. Many thanks in advance for any guidance provided.

    Read the article

  • IT Asset Management

    - by CogitoErgoSum
    Our company has grown quite quickly and I am facing new tasks which I did not think I'd need to deal with. Recently we've come to a point where we have 100+ devices (routers, bridges, computers, laptops, VOIP phones etc). The other day I was quite frightened when I asked for an inventory and no one had one. I want to start tagging all equipment and recording serials to begin tracking our inventory and ensuring we have a proper record of what equipment we have. Does anyone have advice as to how to go about 1. convincing the higher-ups why we need to do this and 2. what software or strategies might work? Keep in mind this is not for furniture, office equipment etc. but IT-specific equipment. I'm concerned about people 1. stealing the physical devices and 2. losing track of configuration data etc. in case we'd need to do a wipe and restore.

    Read the article
