Search Results

Search found 37348 results on 1494 pages for 'agile project management'.


  • Querying Visual Studio project files using T-SQL and Powershell

    - by jamiet
    Earlier today I had a need to get some information out of a Visual Studio project file, and in this blog post I'm going to share a couple of ways of going about that, because I'm pretty sure I won't be the only person that ever wants to do this. The specific problem I was trying to solve was finding out how many objects in my database project (i.e. in my .dbproj file) had any warnings suppressed, but the techniques discussed below will work pretty well for any Visual Studio project file, because every such file is simply an XML document and hence can be queried by anything that can query XML documents. Ever heard the phrase "when all you've got is a hammer, everything looks like a nail"? Well that's me with querying stuff – if I can write SQL then I'm writing SQL. Here's a little noddy database project I put together for demo purposes: two views and a stored procedure, nothing fancy. I suppressed warnings for [View1] & [Procedure1] and hence the pertinent part of my project file looks like this:
      <ItemGroup>
        <Build Include="Schema Objects\Schemas\dbo\Views\View1.view.sql">
          <SubType>Code</SubType>
          <SuppressWarnings>4151,3276</SuppressWarnings>
        </Build>
        <Build Include="Schema Objects\Schemas\dbo\Views\View2.view.sql">
          <SubType>Code</SubType>
        </Build>
        <Build Include="Schema Objects\Schemas\dbo\Programmability\Stored Procedures\Procedure1.proc.sql">
          <SubType>Code</SubType>
          <SuppressWarnings>4151</SuppressWarnings>
        </Build>
      </ItemGroup>
    Note the <SuppressWarnings> elements – those are the bits of information that I am after. With a lot of help from folks on the SQL Server XML forum I came up with the following query that nailed what I was after. It reads the contents of the .dbproj file into a variable of type XML and then shreds it using T-SQL's XML data type methods:
      DECLARE @xml XML;
      SELECT @xml = CAST(pkgblob.BulkColumn AS XML)
      FROM   OPENROWSET(BULK 'C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj' -- <-Change this path!
                       ,single_blob) AS pkgblob
      ;
      WITH XMLNAMESPACES( 'http://schemas.microsoft.com/developer/msbuild/2003' AS ns)
      SELECT  REVERSE(SUBSTRING(REVERSE(ObjectPath),0,CHARINDEX('\',REVERSE(ObjectPath)))) AS [ObjectName]
             ,[SuppressedWarnings]
      FROM   (
             SELECT  build.query('.') AS [_node]
             ,       build.value('ns:SuppressWarnings[1]','nvarchar(100)') AS [SuppressedWarnings]
             ,       build.value('@Include','nvarchar(1000)') AS [ObjectPath]
             FROM    @xml.nodes('//ns:Build[ns:SuppressWarnings]') AS R(build)
             ) q
    And that's it – an easy way of discovering which warnings have been suppressed, and for which objects, in your database projects. I won't bother going over the code as it is fairly self-explanatory – peruse it at your leisure. Once I had the SQL above I figured I'd share it around a little in case it was ever useful to anyone else; hence I'm writing this blog post, and I also posted it on the Visual Studio Database Development Tools forum at FYI: Discover which objects have had warnings suppressed. Luckily Kevin Goode saw the thread and posted a different solution to the same problem, one that uses PowerShell. The advantage of Kevin's PowerShell approach is that it is easy to analyse many .dbproj files at the same time.
    Below is Kevin's code, which I have tweaked ever so slightly so that it produces the same results as my SQL script (I just want any object that has had a warning suppressed, whereas Kevin was querying specifically for warning 4151):
      cd 'C:\Temp\QueryingProjectFileDemo\'
      cls
      $projects = ls -r -i *.dbproj
      Foreach($project in $projects)
      {
      $xml = new-object System.Xml.XmlDocument
      $xml.set_PreserveWhiteSpace( $true )
      $xml.Load($project)
      #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings=4151]/@Include"}
      #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[contains(e:SuppressWarnings,'4151')]/@Include"}
      $xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings]/@Include"}
      $ns = @{ e = "http://schemas.microsoft.com/developer/msbuild/2003" }
      $xml | Select-Xml -XPath $xpath.Start -Namespace $ns | Select -Expand Node | Select -Expand Value
      }
    Nice reusable PowerShell and SQL scripts – not bad for an evening's work. Thank you to Kevin for allowing me to share his code. Don't forget that these techniques can easily be adapted to query any Visual Studio project file; they're only XML documents after all! Doubtless many people out there already have code for doing this, but nonetheless here is another offering to the great script library in the sky. Have fun! @Jamiet
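    Since the point above is that any Visual Studio project file is just XML, the same report can also be produced without SQL Server or PowerShell at all. The following is a minimal sketch (not part of Jamie's or Kevin's code) of the same idea in plain Python; the file path is a placeholder and the element names are the MSBuild ones shown in the snippet above.
      # Sketch only: list objects with suppressed warnings from an MSBuild-style
      # project file (.dbproj, .csproj, ...). The path below is hypothetical.
      import xml.etree.ElementTree as ET

      NS = {"ms": "http://schemas.microsoft.com/developer/msbuild/2003"}

      def suppressed_warnings(project_path):
          """Yield (object name, list of suppressed warnings) for each Build item
          that carries a SuppressWarnings element."""
          tree = ET.parse(project_path)
          for build in tree.getroot().findall(".//ms:Build", NS):
              suppress = build.find("ms:SuppressWarnings", NS)
              if suppress is not None and suppress.text:
                  include = build.get("Include", "")
                  yield include.split("\\")[-1], suppress.text.split(",")

      if __name__ == "__main__":
          # Point this at your own project file.
          path = r"C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj"
          for name, warnings in suppressed_warnings(path):
              print(name, warnings)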

    Read the article

  • Project of Projects with Team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it. There are five main places where you need to have the Product or Project name in prominence over any other value: Area, Iteration, Source Code, Work Item Queries and Build. Once you decide how you are doing this in each of these places you need to keep to it religiously. Even if you have only one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system degenerating into a nasty mess. Areas: Areas are traditionally used to separate out parts of your product / project so that you can see how much effort has gone into each. Figure: The top level areas are for reporting and work item separation. There are massive advantages to using this method. You can: move work from one project to another; rename a project / product. It is far more likely that a project or product gets renamed than a department. Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which. Figure: Always keep things that are the same at the same level. Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop-down lists. You may not want to use them everywhere; on the other hand, for consistency it would be better to. Iterations: Iterations are usually used for some sort of time-based consideration. Here I am splitting into Iterations with periodic releases. Figure: Each product needs to be able to have its own cadence. The ability to have each project run at its own pace and to enable each to have its own release schedule is often of paramount importance, and you don't want to fix your 100+ projects to all be released on the same date. Source Code: Having a good structure for your source, even if you are not branching or having multiple products under the same structure, is always a good idea. Figure: Separate out your products' source. You need to think about both your branches as well as the structure of your source. All your code should be under "Source", and everything you need to build your solution, including Build Scripts and 3rd party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need. Note: You usually will not be able to install custom software on your build server. Store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder.
    Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching. Figure: Adding a category may be a necessary evil. Even if you have to have a couple of categories called "Default", it is better than not knowing the difference between a folder, Product and Branch. Work Item Queries: Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you want to also separate queries out by Product / Project to make them easier to find. Figure: Again you have the same first-level structure. Since Work Item Tracking also has folders, we do the same thing. We put all the queries under a folder named for the Product / Project and change each query to have "AreaPath=[TeamProject]\[ProductX]" in the query instead of the standard "Project=@Project". Tip: Don't have a folder with new queries for each iteration. Instead have a single "Current" folder that has queries that point to the current iteration; just change the queries as you move from one iteration to another. Tip: You can ctrl+drag the "Product1" folder to create your "Product2" folder. Builds: You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose. Figure: There are no folders, yet, for the builds so you need a good naming convention. It's a pity that there are no folders under builds; some way to categorise would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want. Conclusion: It is really easy to both achieve and stick to this format if you take the time to do it. Unless you have 1000+ builds or 100+ Products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues, and I have described some of them above. Let me know if you can think of any other things to make this easier.

    Read the article

  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is, our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process, via ERWin. What this means is that they maintain the DBs via ERWin models, and generate alters from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to: Perform changes in the ERWin model. Take the alter script generated from ERWin, and import the script into the appropriate Visual Studio DB project, updating the objects in the DB project. Check the changed objects in the DB project into TFS. TFS Build executes to generate the alter scripts that will be used to push the changes through our release process. My question is, is this solution viable, or are there any other options?

    Read the article

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself and don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project? Some ideas that I have to make it successful (beyond marketing it): Good documentation and tutorials. Automated unit tests and builds to push updates to the website. A clear roadmap. Bug tracking integrated with the source control. A style guide to keep the code consistent along with clear. A forum for the community to get support, share ideas, etc. A good example application built with the framework. A blog to keep the community informed. Maintaining backwards compatibility wherever possible. Some of my questions: How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process? How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated? What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer to have it be as friendly and helpful as possible without a lot of drama. I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.

    Read the article

  • Announcing General Availability of the E-Business Suite Plug-in

    - by Kenneth E.
    Oracle E-Business Suite Application Technology Group (ATG) is pleased to announce the General Availability of Oracle E-Business Suite Plug-in 12.1.0.1.0, an integral part of Application Management Suite for Oracle E-Business Suite. The combination of Enterprise Manager 12c Cloud Control and the Application Management Suite combines functionality that was available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's Real User Experience Insight product and the Configuration & Compliance capabilities to provide the most complete solution for managing Oracle E-Business Suite applications. The features that were available in the standalone management packs are now packaged into the Oracle E-Business Suite Plug-in, which is now fully certified with Oracle Enterprise Manager 12c Cloud Control. This latest plug-in extends Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support. Here is all the information you need to get started. Full Announcement: E-Business Suite Plug-in 12.1.0.1 for Enterprise Manager 12c Now Available. My Oracle Support: Getting Started with Oracle E-Business Suite Plug-in, Release 12.1.0.1.0 (Doc ID 1434392.1). Documentation: Oracle Application Management Pack for Oracle E-Business Suite Guide, Release 12.1.0.1.0. Certification: Platforms and OS release certification information is available from My Oracle Support via the Certification page; search using the official trademark name Oracle Application Management Pack for Oracle E-Business Suite and Release 12.1.0.1.0.

    Read the article

  • Why do business analysts and project managers get higher salaries than programmers? [closed]

    - by jpartogi
    We have to admit that programming is much more difficult than creating documentation or even creating Gantt charts and asking programmers for progress updates. So, for those of us who are naive, knowing that programming is generally more difficult, why do business analysts and project managers get higher salaries than programmers? What is it that makes their job a high-paying job when, most of the time, programmers are the ones that go home late? UPDATE: Excuse my ignorance; from some of the responses it seems that the reason BAs and PMs get higher salaries is that they are the ones usually held responsible for the mess programmers make. But at the end of the day, it is the programmers that get their hands dirty fixing the mess and work harder. So it still does not make sense.

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath. Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? How does it relate to master data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes. [1] The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
      Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values.
      Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss.
      Relationships / Mappings: Product / Segment and Product / Geo; City → State → Postal Codes; Customer / Market Segment and Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10.
      Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates.
    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Sample use cases of reference data management follow. Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point.
    Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9 – depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks. Change Management: Point of Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change. References: [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
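    The "cross-walk" idea in the healthcare example is easy to picture in code. The sketch below is purely illustrative and not from the article: the mappings shown are placeholders rather than authoritative ICD entries, and the point is simply that one forward map (ICD-9 to one or more ICD-10 codes) yields a derived backward map for backwards-looking reports.
      # Illustrative only: a tiny forward/backward crosswalk between two code sets.
      # The specific codes below are placeholders, not a real ICD mapping table.
      from collections import defaultdict

      # One ICD-9 code may map to several, more granular ICD-10 codes.
      ICD9_TO_ICD10 = {
          "250.00": ["E11.9"],               # hypothetical one-to-one mapping
          "493.90": ["J45.901", "J45.909"],  # hypothetical one-to-many mapping
      }

      # Derive the backward map so reports can be cross-walked to ICD-9 as well.
      ICD10_TO_ICD9 = defaultdict(list)
      for icd9, icd10_codes in ICD9_TO_ICD10.items():
          for icd10 in icd10_codes:
              ICD10_TO_ICD9[icd10].append(icd9)

      def forward(code):
          """ICD-9 -> ICD-10 candidates (picking one may need human judgment)."""
          return ICD9_TO_ICD10.get(code, [])

      def backward(code):
          """ICD-10 -> originating ICD-9 code(s)."""
          return ICD10_TO_ICD9.get(code, [])

      print(forward("493.90"))   # ['J45.901', 'J45.909']
      print(backward("E11.9"))   # ['250.00']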

    Read the article

  • Alternative to MS Project 2007 for production scheduling?

    - by john c
    OK... I'm coming to grips with the fact that MS Project 2007 may not be the correct tool for my production scheduling. We serve 120 to 150 projects a year with durations from 6 weeks to 12 months. The tasks are simple (6 to 8 per project) and the resource pool is stable (15 to 20 people). It's really an assembly-line product, but with extremely varied durations. I need to be able to prioritize the projects for production and run projects concurrently to fully utilize my resources. What are the feelings of the Stack Overflow community – am I using the wrong program? I was really hoping to make this simple enough for non-programmer types to input project data into a form, with the scheduling software automated enough to make most of the decisions. Is there a better solution available commercially? I'd like to hold off on writing a custom spreadsheet as a last resort, but if that's the best route then so be it. Thank you so much for your input.

    Read the article

  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I am saying "organize" because I'm not sure if the standard project management software solutions are exactly what I am looking for, and especially not sure what I, as a single developer, need. In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split those into multiple smaller tasks, so I can keep track of my progress. So some task/issue-based system would probably be good, especially as I also want to keep track of issues/bugs with specific versions (although I alone will create those issues). I am and will be the only developer on those projects, so it doesn't matter if the software is offline or online, and I also don't need any collaboration features (like commenting on things, or assigning tasks to other developers etc). But if there is a good piece of software that fits my needs and in addition has those things, I don't really care; after all it's easy enough to not use available features. Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open source projects for free and require payment for private projects). I have looked at some project management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is probably a bit different from when others ask for a huge distributed solution that supports many developers and different collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBugz look interesting, but are a bit off my interest (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?

    Read the article

  • How to decide on going into management?

    - by Rob Wells
    I read the transcript of a speech by Richard Hamming, included as part of this SO question, and the speech had a quote that got me thinking about when someone should move into management: "When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go." Any other suggestions as to how you can decide if you want to move away from the coal face and into management?

    Read the article

  • PHP Commercial Project Function define

    - by Shiro
    Currently I am working on a commercial project in PHP. I think this question doesn't really apply only to PHP but to all programming languages; I just want to discuss how you guys solve it. I work in an MVC framework (CodeIgniter); all the database transaction code lives in model classes. Previously, I separated different search criteria into differently named functions. Just an example: function get_student_detail_by_ID($id){} function get_student_detail_by_name($name){} As you can see, the two functions can actually be merged into one by just adding a parameter. But when you are rushing a project, you don't look back at what similar functions already exist that could meet the goal with a few changes. In this case, we found that there were a lot of functions there and they were hard to maintain. Recently, we tried to group each entity into one ultimate search function, something like this: function get_ResList($is_row_count=FALSE, $record_start=0, $arr_search_criteria='', $paging_limit=20, $orderby='name', $sortdir='ASC') We try to make this function fit all the search criteria. However, as our system gets bigger and bigger, the search criteria no longer involve just 1-2 tables; they require joins with other tables for different purposes. What we have done is use IF ELSE: if(bla bla bla) { $sql_join = JOIN_SOME_TABLE; $sql_where = CONDITION; } In the end, we found it very hard to maintain the function; it is very hard to debug as well. I would like to ask your opinion on how this kind of issue is solved in commercial projects – how to define such a function and how to revise it. I think this is linked to project management skill. Hope you are willing to share with us. Thanks.
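    One way to picture the design discussion above – sketched here in Python rather than PHP for brevity, with invented table and column names, and offered only as a hedged illustration rather than an answer from the thread – is to replace the growing IF/ELSE chain with a declarative map from each search criterion to the join and where-clause it needs, and assemble the query from that table:
      # Hedged sketch (names invented): each supported criterion declares the join
      # and where-clause it requires; the query is assembled from this table
      # instead of an ever-growing if/else chain.
      CRITERIA = {
          "student_id":   {"join": "",
                           "where": "s.id = %s"},
          "student_name": {"join": "",
                           "where": "s.name LIKE %s"},
          "course_code":  {"join": "JOIN enrolment e ON e.student_id = s.id "
                                   "JOIN course c ON c.id = e.course_id",
                           "where": "c.code = %s"},
      }

      def build_query(search):
          """search: dict of criterion name -> value, e.g. {'student_name': 'Tan%'}"""
          joins, wheres, params = [], [], []
          for name, value in search.items():
              rule = CRITERIA[name]          # unknown criteria fail loudly here
              if rule["join"]:
                  joins.append(rule["join"])
              wheres.append(rule["where"])
              params.append(value)
          sql = "SELECT s.* FROM student s " + " ".join(joins)
          if wheres:
              sql += " WHERE " + " AND ".join(wheres)
          return sql, params

      print(build_query({"student_name": "Tan%", "course_code": "CS101"}))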

    Read the article

  • When is it time to start afresh instead of trying to rewrite the existing project?

    - by gablin
    As projects grow and age and new features are introduced, there may come a time when it's probably better to start anew. Some features may require so much rewriting that it would be easier (and maybe faster) to just rewrite the whole thing from scratch. Also, then you have the opportunity to apply what you've learned from the first project and thus improve the code and design of the successor. Is there a way of recognizing when this time is? Is it just a gut feeling, or could you say that "after x years, you probably should start over", or some other rule of thumb?

    Read the article

  • Have You Heard About Project Lucy?

    - by KKline
    Lucy, You Got Some 'Splainin' to Do! Quest Software's latest community initiative, the Windows Azure-based Project Lucy, has debuted! Project Lucy is part infrastructure analytics, part social media experiment, and part performance data warehouse. The best things about Project Lucy include: It's free - just like our SQLServerPedia website, Project Lucy is free to anyone who wants to upload a trace file. It's 100% web-based - you don't have to download or maintain anything and updates roll out seamlessly,...(read more)

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
    Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches or entered using the inch symbol ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
    Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting. Characteristics of Stellar MDM: When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: (1) enterprise-grade MDM performance; (2) complete technology that can be rapidly deployed and addresses multiple business issues; (3) end-to-end MDM process management with data quality monitoring and assurance; and (4) pre-built, business-relevant MDM applications with data stores and workflows. These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers. Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
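    Point 4's address example ("av" vs. "ave" for avenue) is the kind of normalization that is easy to sketch in a few lines, although real MDM products go far beyond it. The following is a minimal illustration only; the abbreviation map and the match-key recipe are assumptions of this sketch, not how any particular product implements standardization or duplicate resolution.
      # Minimal illustration of address standardization and duplicate detection.
      # The abbreviation map and match-key idea are illustrative assumptions.
      import re

      ABBREVIATIONS = {"av": "avenue", "ave": "avenue", "st": "street", "rd": "road"}

      def standardize(address):
          """Lowercase, strip punctuation, and expand common abbreviations."""
          words = re.sub(r"[^\w\s]", " ", address.lower()).split()
          return " ".join(ABBREVIATIONS.get(w, w) for w in words)

      def match_key(name, address):
          """A crude key for spotting likely duplicate customer records."""
          return (name.strip().lower(), standardize(address))

      records = [
          ("Acme Corp", "12 Main Av."),
          ("ACME Corp", "12 Main Avenue"),
      ]
      keys = {match_key(n, a) for n, a in records}
      print(len(records) - len(keys), "likely duplicate(s)")  # -> 1 likely duplicate(s)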

    Read the article

  • How do you assign resources and keep begin, end and duration of a task intact?

    - by Random
    I have problems with assigning more than one resource to a group of tasks. The idea is simple: my tasks are in one group and are manually scheduled with particular begin and end dates. I want to assign more than one resource, keep the task durations and dates (fixed duration), and increase work. For top-level tasks it works fine, but as soon as tasks are grouped, the duration of each is extended to reach the group end date and the work remains. For the problematic tasks, the Gantt chart looks like this: One resource attached (good) ( Task 1.1 ) ( Task 1.2 ) (Task 1.3) More than one resource attached (wrong) ( Task 1.1 )....................... ( Task 1.2 ).......... (Task 1.3) So for tasks like that, I want to have a fixed schedule and just increase work by adding resources that work at the same time, but sometimes MS Project does leveling to make resources work sequentially.

    Read the article

  • How can architects work with self-organizing Scrum teams?

    - by Martin Wickman
    An organization with a number of agile Scrum teams also has a small group of people appointed as "enterprise architects". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X, or want to use REST instead of SOAP, but the EA does not approve of that. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people "grab" all power and the team ends up feeling demotivated and not very agile at all. The Scrum Guide has this to say about it: "Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality." Is that reasonable? Should the EA team be disbanded? Should the teams refuse, or simply comply?

    Read the article

  • Support / Maintenance documentation for development team

    - by benwebdev
    Hi, I'm working in the development department (around 40 developers) of a large e-commerce company. We've grown quickly but have not evolved very well in the field of documenting our work. We work with an Agile / Scrum-like methodology in our development and testing, but documentation seems to be neglected. We need to be able to produce documentation that would aid a developer who hasn't worked on our project before or who is new to the company. We also have to create more high-level information for our support department to explain any extra config settings and fixes for known issues that may arise, if any. Currently we put this in a badly put-together wiki, based on an old SharePoint / TFS site. Can anyone suggest some good links or advice on improving the documentation standard? What works in other companies? Has anyone got advice on developing documentation as part of an agile process? Many thanks, ben

    Read the article

  • How do you get past the Analysis to Paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite. I haven't compiled the source code since 2004. It still sells and it's stable, but it does require the "Run this program in compatibility mode for:" setting on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen resolution programs. Yuck! I can't seem to get started with this rewrite. I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while. Then I start looking around at how the main menu should work/look. I quickly find out that there's this thing called "Cool Bars" and I'll spend hours playing with that. Then I start thinking about stuff like "Oh I need to make sure that the screen sizes are preserved so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop. I have always been a slow starter. I think about stuff long and hard up front. This has always plagued me. Once I get my mind made up then bam... I'm off and running. I'm looking for advice from some other one-person software companies that can help someone like me get off to a quicker start.

    Read the article

  • need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    Little heads up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems: I know what a functional spec is (sample). Do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are. Are these kinds of user stories? I don't see any business value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a financial plan into the system because, well, that's the point of it? What business value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature? How do I go into the finer details and system definitions: Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create backlog lists/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? Finally, I'm a little confused about the relation between functional specs and user stories. Will my spec contain user stories with UI mockups? Will these user stories then branch out into tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!!

    Read the article

  • Changing MS Project to 20-hour or 30-hour week.

    - by Eric
    I'm working on a project in MS Project and the default is a 40-hour week. I'm putting each individual task in based on a number of hours, not days. I'd like to have the whole thing set up and computing at 40-hour weeks, and then change it to 20 hours and have the project recompute. How do I do this? I think it has something to do with changing the "project calendar" but I can't quite figure it out.

    Read the article

  • TFS Build Automation - Web Deployment Project error

    - by gracejz
    I'm trying to build a web deployment project using the TFS automated build process. When I build the project directly in Visual Studio 2008, it works fine. But from TFS, I get the following error:
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (EndToEndIteration target) (1) -
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CoreCompile target) (1:2) -
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CompileConfiguration target) (1:3) -
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CompileSolution target) (1:4) -
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\Sources\TestSolution.sln" (default target) (6) -
      "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\Sources\WebDeployment\WebDeployment.wdproj" (default target) (48) -
      (CreateVirtualDirectory target) -
      C:\Program Files\MSBuild\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets(676,5): error : Some or all identity references could not be translated.
    I made sure that the NETWORK SERVICE account has permission to access all the web folders. Any ideas?

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet - i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to set up test installations outside of VirtualBox.
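    To make the five steps above concrete, here is a minimal sketch of how they might look as a Fabric (1.x) fabfile, since Fabric is one of the tools mentioned. Everything specific in it is an assumption, not part of the question: the host name, package list, paths, repository URL and cron file are placeholders, and the files pushed with put() could just as easily come from a USB-key checkout on the machine running Fabric.
      # fabfile.py - illustrative sketch of the provisioning steps described above,
      # using Fabric 1.x over SSH. Hosts, packages, paths and URLs are placeholders.
      from fabric.api import env, run, sudo, put, cd

      env.hosts = ["forecast-server.example.org"]   # hypothetical host

      def install_packages():
          """Step 1: standard packages and libraries (compilers, database)."""
          sudo("apt-get update && apt-get install -y build-essential gfortran postgresql")

      def build_wave_model():
          """Step 2: ship and compile a custom scientific model from source."""
          put("vendor/wave-model-src.tar.gz", "/tmp/wave-model-src.tar.gz")
          run("tar -xzf /tmp/wave-model-src.tar.gz -C /tmp")
          with cd("/tmp/wave-model-src"):
              run("./configure --prefix=/opt/wave-model && make")
              sudo("make install")

      def create_users():
          """Step 3: a user that owns the databases and runs the models."""
          sudo("useradd --system --create-home modelrunner")

      def deploy_scripts():
          """Step 4: check out the model-database glue scripts."""
          sudo("git clone https://example.org/forecast-scripts.git /opt/forecast-scripts")

      def install_crontab():
          """Step 5: schedule the forecast runs."""
          put("config/forecast.cron", "/tmp/forecast.cron")
          sudo("crontab -u modelrunner /tmp/forecast.cron")

      def provision():
          """Run the whole sequence on every host in env.hosts."""
          install_packages()
          build_wave_model()
          create_users()
          deploy_scripts()
          install_crontab()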

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found that the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that will delete ulogs older than X, once I find out how far back Tokyo Tyrant needs to look in the update log in order to fail over. Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size vs. how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
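    The cleanup job the poster has in mind is straightforward to sketch. The version below is an illustrative take in Python rather than shell; the ulog directory, the .ulog extension check and the retention window are assumptions to adjust, and it should only be run once you have confirmed how far back a replica actually needs the update log in order to fail over.
      #!/usr/bin/env python
      # Sketch of the cleanup described above: remove Tokyo Tyrant ulog files
      # older than a retention window. Directory and retention are assumptions.
      import os
      import time

      ULOG_DIR = "/var/ttserver/ulog"      # hypothetical ulog directory
      RETAIN_DAYS = 7                      # hypothetical retention window

      cutoff = time.time() - RETAIN_DAYS * 86400
      for name in sorted(os.listdir(ULOG_DIR)):
          if not name.endswith(".ulog"):
              continue
          path = os.path.join(ULOG_DIR, name)
          if os.path.getmtime(path) < cutoff:
              os.remove(path)
              print("removed", path)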

    Read the article
