Search Results

Search found 24347 results on 974 pages for 'cross process'.


  • Simplest way to respawn a configured number of instances of a specific process

    - by Zwei Steinen
    So we have an app which we want to run multiple instances of in Linux. The number should be configurable. We also want a new instance to be booted up whenever one of the running instances dies. I was looking into C-based programs, shell scripts, Python scripts, etc., but I was wondering what would be the simplest, easiest way to do it. Are there any tools out there? Can one simply use some built-in Linux functionality? The Linux distribution is Red Hat.
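    One low-dependency approach (a sketch of mine, not from the thread; the app path and default instance count are placeholders): run one small supervisor loop per instance from a shell script, so any instance that dies is restarted:

        #!/bin/sh
        # respawn.sh -- start N instances of an app and restart any that die.
        N=${1:-4}                     # configurable instance count
        CMD="/opt/myapp/myapp"        # hypothetical path to the app

        respawn() {
            while true; do
                "$CMD"                # runs in the foreground of this loop
                sleep 1               # avoid a tight respawn loop on crash
            done
        }

        i=0
        while [ "$i" -lt "$N" ]; do
            respawn &                 # one supervisor loop per instance
            i=$((i + 1))
        done
        wait

    On Red Hat specifically, /etc/inittab "respawn" entries or a supervision tool like daemontools can do the same job without custom scripting.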

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspx

    SQL offers many different methods to produce the same results. There is a never-ending debate between SQL developers as to the "best way" or the "most efficient way" to render a result set. Sometimes these disputes even come to blows... well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient. For the queries below, I downloaded the test database from SQLSkills: http://www.sqlskills.com/sql-server-resources/sql-server-demos/. There isn't a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554. Our result set contains 6,706 records.

    The following queries produce an identical result set: aggregate payment information from the dbo.payment table for each member who has made more than 1 payment, plus the first and last name of the member from the dbo.member table.

        /**************/
        /* Sub Query  */
        /**************/
        SELECT  a.[Member Number] ,
                m.lastname ,
                m.firstname ,
                a.[Number Of Payments] ,
                a.[Average Payment] ,
                a.[Total Paid]
        FROM    ( SELECT    member_no 'Member Number' ,
                            AVG(payment_amt) 'Average Payment' ,
                            SUM(payment_amt) 'Total Paid' ,
                            COUNT(Payment_No) 'Number Of Payments'
                  FROM      dbo.payment
                  GROUP BY  member_no
                  HAVING    COUNT(Payment_No) > 1
                ) a
                JOIN dbo.member m ON a.[Member Number] = m.member_no

        /****************/
        /* Cross Apply  */
        /****************/
        SELECT  ca.[Member Number] ,
                m.lastname ,
                m.firstname ,
                ca.[Number Of Payments] ,
                ca.[Average Payment] ,
                ca.[Total Paid]
        FROM    dbo.member m
                CROSS APPLY ( SELECT    member_no 'Member Number' ,
                                        AVG(payment_amt) 'Average Payment' ,
                                        SUM(payment_amt) 'Total Paid' ,
                                        COUNT(Payment_No) 'Number Of Payments'
                              FROM      dbo.payment
                              WHERE     member_no = m.member_no
                              GROUP BY  member_no
                              HAVING    COUNT(Payment_No) > 1
                            ) ca

        /*********/
        /* CTEs  */
        /*********/
        ;
        WITH    Payments
                  AS ( SELECT   member_no 'Member Number' ,
                                AVG(payment_amt) 'Average Payment' ,
                                SUM(payment_amt) 'Total Paid' ,
                                COUNT(Payment_No) 'Number Of Payments'
                       FROM     dbo.payment
                       GROUP BY member_no
                       HAVING   COUNT(Payment_No) > 1
                     ),
                MemberInfo
                  AS ( SELECT   p.[Member Number] ,
                                m.lastname ,
                                m.firstname ,
                                p.[Number Of Payments] ,
                                p.[Average Payment] ,
                                p.[Total Paid]
                       FROM     dbo.member m
                                JOIN Payments p ON m.member_no = p.[Member Number]
                     )
            SELECT  *
            FROM    MemberInfo

        /*************************/
        /* SELECT with Grouping  */
        /*************************/
        SELECT  p.member_no 'Member Number' ,
                m.lastname ,
                m.firstname ,
                COUNT(Payment_No) 'Number Of Payments' ,
                AVG(payment_amt) 'Average Payment' ,
                SUM(payment_amt) 'Total Paid'
        FROM    dbo.payment p
                JOIN dbo.member m ON m.member_no = p.member_no
        GROUP BY p.member_no ,
                m.lastname ,
                m.firstname
        HAVING  COUNT(Payment_No) > 1

    We can see what is going on in SQL's brain by looking at the execution plan. The Execution Plan will demonstrate which steps SQL executes, in what order, and what percentage of batch time each query takes. So, if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan. We can settle this once and for all. Here is what SQL did with these queries: not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them. Everybody is right... I guess we can all finally go to lunch together!

    But wait a second, I may not be a fighter, but I AM an instigator. Let's see how a table variable stacks up. Here is the code I executed:

        /********************/
        /*  Table Variable  */
        /********************/
        DECLARE @AggregateTable TABLE
            (
              member_no INT ,
              AveragePayment MONEY ,
              TotalPaid MONEY ,
              NumberOfPayments MONEY
            )
        INSERT  @AggregateTable
                SELECT  member_no 'Member Number' ,
                        AVG(payment_amt) 'Average Payment' ,
                        SUM(payment_amt) 'Total Paid' ,
                        COUNT(Payment_No) 'Number Of Payments'
                FROM    dbo.payment
                GROUP BY member_no
                HAVING  COUNT(Payment_No) > 1

        SELECT  at.member_no 'Member Number' ,
                m.lastname ,
                m.firstname ,
                at.NumberOfPayments 'Number Of Payments' ,
                at.AveragePayment 'Average Payment' ,
                at.TotalPaid 'Total Paid'
        FROM    @AggregateTable at
                JOIN dbo.member m ON m.member_no = at.member_no

    In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query. Since we first insert into the table variable and then read from it, the Execution Plan renders 2 steps. BUT the combination of the 2 steps is only 22% of the batch. It is actually faster than the other methods, even though it is treated as 2 separate queries in the Execution Plan. The argument I often hear against table variables is that SQL only estimates 1 row for the table size in the Execution Plan. While this is true, the estimate does not come into play until you read from the table variable. In this case, the table variable had 6,706 rows, but it still outperformed the other queries. People argue that table variables should only be used for hash or lookup tables. The fact is, you have control of what you put INTO the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries.

    If anyone does volume testing on this theory, I would be interested in the results. My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is.

    Coding SQL is a matter of style. If you've been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate. If you have a company standard, I strongly recommend you follow it. If you do not have a standard then, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

    Read the article

  • Making fonts render similarly across browsers

    - by Zach L.
    I am building a website for a client, and we had hoped to use plain text, not images, in the navigation bar. The font we are using is Century Gothic (I believe this font is available on the majority of PCs and Macs). The problem is that on different browsers the font renders significantly differently. In Chrome we got it looking the way we want, but in Firefox the text is smaller and bolder. Aside from writing browser-specific JavaScript to alter the font properties, are there any other options to standardize the way the fonts are rendered cross-browser? Perhaps some library or API? Maybe it's a matter of being more specific in declaring font properties? Honestly, I am stuck and need help.
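    One common approach (a sketch, not from the thread; font file paths and selectors are assumptions): serve the font yourself with @font-face and declare explicit size and weight, so rendering does not depend on each platform's local copy of Century Gothic:

        /* Self-hosted web font plus explicit metrics. */
        @font-face {
          font-family: "NavFont";
          src: url("/fonts/century-gothic.woff") format("woff"); /* embedding requires a license */
          font-weight: normal;
          font-style: normal;
        }

        .nav a {
          font-family: "NavFont", "Century Gothic", sans-serif;
          font-size: 14px;     /* explicit pixel size, not a keyword */
          font-weight: normal; /* keeps Firefox from synthesizing a bolder face */
        }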

    Read the article

  • Paying a developer in stock/fixed rate? [closed]

    - by user51648
    I have an idea for a cross platform application. It will require knowledge of several different languages, web development, and system administration/IT. I don't personally code, but I want to pay professionals to do it. I'm wondering how I should go about paying them. Yes, this will be a large project, but I want it done ASAP. Is it ok if I don't pay them by the hour? I really want it to be a set price. Also, is it reasonable to pay them in stock of the company? Like, 20%? P.S. How do I know how big a project will be in order to give the devs themselves an idea?

    Read the article

  • At what visitor share do you stop supporting a given browser?

    - by adam
    I'm lead dev for a large website which has a higher than average percentage of IE6 users - about 4.4% of our audience. Our new version is going to make use of progressive enhancement - including transitions and effects as well as rounded corners, gradients, web fonts and other CSS techniques. Obviously there are cross-browser ways to achieve most of these things which require various amounts of work to implement. What I'm currently looking into - and what I'd like your experiences of - is how to decide at what point we draw the line between providing an enhanced experience vs just supporting the functionality. FYI, I believe that this question meets the six guidelines for great subjective questions as defined in the FAQ. I'm after answers detailing why and how, not too short, with constructive comments, experiences, facts and references. Thanks! Adam

    Read the article

  • All-around programming language for use on desktop and mobile devices

    - by mdm414 ZX
    Given that I am a PHP programmer and open source is a must, what would be the best and most practical programming language to use for all of the following: a desktop/cross-platform application (I've read that with HTML5, creating offline apps is possible?), a web application, and Android and iPhone/iPad apps. I am leaning towards using Python, but I am not sure if it is possible to use it alone for all of them. There are other languages that I am also looking at, like Ruby, Scala and Java. Kindly share your thoughts and experiences on this one. Thanks :-)

    Read the article

  • Best language for a cross-platform app with a GUI [on hold]

    - by Jeremy Dicaire
    I've decided to finally get rid of all the Microsoft crap and switched to Linux yesterday (it feels so good!). I'm looking for a way to create a cross-platform app with a GUI using an open-source language. I came across Python with Qt4 (or Qt5). I gave Java some thought, but it's a memory eater... I'm wondering which other good options are available before starting my journey with those two, and which tools are good to help me code. I'm currently using Eclipse for all my programming needs. Your help is appreciated! Have a nice day.

    Read the article

  • How can a website look different in Safari on Windows and Safari on Mac?

    - by Jakob
    I have the website http://storkbox.magentodemo.dk. I've been testing cross-browser on my Windows PC, and it looks good in all browsers, but in Safari on a Mac it looks like the CSS is not being interpreted correctly, or there is a critical JavaScript error. When I look in the console cross-browser, the error log shows exactly the same entries. Chrome on the Mac renders the site as intended, so why do I have a problem with Safari? It is the same across different computers, and iPhone Safari also shows the site wrong. How is this possible, and how do I debug it?

    Read the article

  • SVN post-commit hook gets stuck while starting a process

    - by Oded
    Hi, I've built a program in VS that receives the 2 arguments sent by the post-commit hook. The program runs SVN LOG to retrieve data about the revision (author, date, files). When I run the solution from VS with constant values for the arguments, it runs perfectly. When I execute the exe file directly, it also runs perfectly. But when I wire it up as the hook script, it fails where it should read from the process:

        process.Start();
        process.WaitForExit();
        str = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        if (!process.HasExited)
        {
            try
            {
                process.Kill();
            }
            catch (Exception e3)
            {
                // process is terminated
            }
            // Write Errors
        }

    Thanks. EDIT: The commit window gets stuck and never completes the commit. The code is in C#, and no errors are shown.
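    A likely culprit (my reading, not confirmed in the thread): with RedirectStandardOutput = true, calling WaitForExit() before draining StandardOutput can deadlock once the pipe buffer fills; the child blocks writing output while the parent blocks waiting for exit, which would hang the commit exactly like this. A sketch of the usual pattern (the svn arguments are illustrative):

        // args[0]/args[1] are the REPOS-PATH and REV that post-commit passes.
        string repoPath = args[0];
        string revision = args[1];

        var psi = new System.Diagnostics.ProcessStartInfo(
            @"C:\Program Files\Subversion\bin\svn.exe",   // hooks run with an empty environment; use absolute paths
            "log -v -r " + revision + " \"" + repoPath + "\"")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var process = System.Diagnostics.Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd(); // drain output first...
            process.WaitForExit();                              // ...then block on exit
        }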

    Read the article

  • Execute multiple command lines with the same process using C#

    - by rima
    Hi, following up on my last question here, I'm trying to write a SQL editor or something like it. To do that, I launch a command-line process from C# and execute my commands in it. My problem is that once I connect to SQLPLUS, I can't send it any further commands, and the other resources I've reviewed don't cover this. How can I keep my process alive after connecting to SQLPLUS, so that I can run my SQL commands in it? Right now I use this code:

        // Create process
        System.Diagnostics.Process pProcess = new System.Diagnostics.Process();
        // strCommand is path and file name of command to run
        pProcess.StartInfo.FileName = strCommand;
        // strCommandParameters are parameters to pass to program
        pProcess.StartInfo.Arguments = strCommandParameters;
        pProcess.StartInfo.UseShellExecute = false;
        // Set output of program to be written to process output stream
        pProcess.StartInfo.RedirectStandardOutput = true;
        // Optional
        pProcess.StartInfo.WorkingDirectory = strWorkingDirectory;
        // Start the process
        pProcess.Start();
        // Get program output
        string strOutput = pProcess.StandardOutput.ReadToEnd();
        // Wait for process to finish
        pProcess.WaitForExit();

    I have customized it slightly: I separated the initialization, meaning I create the process object only once, but I still have the problem. To run the second command, I call this a second time:

        pProcess.StartInfo.FileName = strCommand;
        // strCommandParameters are parameters to pass to program
        pProcess.StartInfo.Arguments = strCommandParameters;
        // Start the process
        pProcess.Start();
        // Get program output
        string strOutput = pProcess.StandardOutput.ReadToEnd();
        // Wait for process to finish
        pProcess.WaitForExit();

    Thanks in advance
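    For what it's worth, a sketch of one way to keep a single sqlplus process alive and feed it successive commands (my example, with a hypothetical connect string and queries): redirect standard input and write each command to it, rather than calling Start() again, since Start() launches a new process rather than resuming the old one:

        var psi = new System.Diagnostics.ProcessStartInfo("sqlplus", "scott/tiger@orcl") // hypothetical credentials
        {
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true
        };
        var pProcess = System.Diagnostics.Process.Start(psi);

        // Read output asynchronously so writes to stdin never deadlock
        // against a full output pipe.
        pProcess.OutputDataReceived += (s, e) => { if (e.Data != null) System.Console.WriteLine(e.Data); };
        pProcess.BeginOutputReadLine();

        pProcess.StandardInput.WriteLine("SELECT COUNT(*) FROM emp;");  // first command
        pProcess.StandardInput.WriteLine("SELECT SYSDATE FROM dual;");  // second command, same process
        pProcess.StandardInput.WriteLine("EXIT;");                      // let sqlplus shut down cleanly
        pProcess.WaitForExit();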

    Read the article

  • [Word 2007] How to show "only number" in a picture cross-reference

    - by kornelijepetak
    I have many pictures in a document and I reference them very often in the text. I don't want to lose the order, so I am using Insert - Cross-reference. This opens the cross-reference dialog, where I can set Reference type to Picture. For "Insert reference to", there are 5 choices:
    - Entire caption
    - List item
    - Only label and number
    - Only caption text
    - Page number, Above/below
    What I need is a reference inserted like this: [4], and not like this: [Picture 4]; none of these options enables me to do that. Is there any way to make Word 2007 insert a reference to only the caption number? Note: The document is written in Croatian, which has 7 declension cases, so using "Picture 4" would not be valid in all cases. Actually, the caption label Picture is set to the Croatian word "Slika", and when I need to say "in the picture" I can't, because it would have to be "na Slici 5." and not "na Slika 5." (as Word would make me write it). That's why I need to reference only the caption number. Is that possible in Word 2007?

    Read the article

  • Python Process won't call atexit

    - by Brian M. Hunt
    I'm trying to use atexit in a Process, but unfortunately it doesn't seem to work. Here's some example code:

        import time
        import atexit
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)

                @atexit.register
                def log_terminate():  # ever called?
                    logging.debug("%s Terminated!" % self.name)

                while True:
                    time.sleep(10)

        @atexit.register
        def log_exit():
            logging.debug("Main process terminated")

        logging.debug("Main process started")
        a = W()
        b = W()
        a.start()
        b.start()
        time.sleep(1)
        a.terminate()
        b.terminate()

    The output of this code is:

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:Main process terminated

    I would expect that W.run.log_terminate() would be called when a.terminate() and b.terminate() are called, and the output to be something like so (emphasis added):

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:W-1 Terminated!
        DEBUG:root:W-2 Terminated!
        DEBUG:root:Main process terminated

    Why isn't this working, and is there a better way to log a message (from the Process context) when a Process is terminated? Thank you for your input - it's much appreciated.
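    A probable explanation (mine, not from the thread): on Unix, Process.terminate() sends SIGTERM, and the default SIGTERM action kills the interpreter without running atexit hooks, which only fire on a normal interpreter shutdown. One workaround is to catch SIGTERM in the child and exit cleanly; a sketch:

        import logging
        import multiprocessing
        import signal
        import sys
        import time

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)

                def log_terminate(signum, frame):
                    logging.debug("%s Terminated!" % self.name)
                    sys.exit(0)  # raises SystemExit -> normal shutdown path

                signal.signal(signal.SIGTERM, log_terminate)
                while True:
                    time.sleep(10)

        if __name__ == '__main__':
            a = W()
            a.start()
            time.sleep(1)
            a.terminate()  # delivers SIGTERM; the handler logs before exiting
            a.join()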

    Read the article

  • Process tree

    - by Robert
    I'm looking for an easy way to find the process tree (as shown by tools like Process Explorer) in C# or another .NET language. It would also be useful to find the command-line arguments of another process (the StartInfo of System.Diagnostics.Process seems invalid for processes other than the current process). I think these things can only be done by invoking the Win32 API, but I'd be happy to be proved wrong. Thanks! Robert
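    One way to get both without raw Win32 calls (a sketch of mine, assuming a reference to System.Management.dll): WMI's Win32_Process class exposes each process's parent PID and command line, which is enough to rebuild the tree:

        using System;
        using System.Management;

        class ProcessTreeDump
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT ProcessId, ParentProcessId, Name, CommandLine FROM Win32_Process");
                foreach (ManagementObject proc in searcher.Get())
                {
                    // Group rows by ParentProcessId to turn this flat list into a tree.
                    Console.WriteLine("{0} (pid {1}, parent {2}): {3}",
                        proc["Name"], proc["ProcessId"],
                        proc["ParentProcessId"], proc["CommandLine"]);
                }
            }
        }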

    Read the article

  • Cross-forest universal groups on Windows Server?

    - by DotGeorge
    I would like to create a Universal Group whose members are a mix of cross-forest users and groups. In the following example, two forests are mentioned (US and UK) and two domains in each forest (GeneralStaff and Java). For example, the universalDevelopers group may comprise members from UK.Java.Developers and US.Java.Developers. Then, for example, there may be a group universalSales which contains the users UK.GeneralStaff.John and US.GeneralStaff.Dave. In the UK forest at the minute, I can freely add members and groups from the UK, but there is no way to add members from the US forest, despite having a two-way trust in place - e.g. I can log in with US members into the UK and vice-versa. A further complication is that, with a Universal group in the UK (which contains three domains), I can only add members from two of the three; it can't see the third. Could people please provide some thoughts on why cross-forest groups can't be created, and ways of 'seeing' all domains within a forest. EDIT: This is on a combination of Windows Server 2003 and 2008. Answers can be regarding either. Thanks!

    Read the article

  • Starting a separate process

    - by jacquesb
    I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running. But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
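    This is documented behavior rather than a bug (my note, not from the thread): a daemonic multiprocessing child is terminated when the parent exits, while a non-daemonic one keeps the parent alive until it finishes - neither gives a detached survivor. A sketch of one alternative, assuming the work can live in its own script (worker.py is a hypothetical name):

        import subprocess
        import sys

        # Launch the worker in its own session so it survives this script's
        # exit (POSIX; start_new_session needs Python 3.2+, DEVNULL 3.3+).
        proc = subprocess.Popen(
            [sys.executable, "worker.py"],
            start_new_session=True,
            stdout=subprocess.DEVNULL,  # don't tie the child to our pipes
            stderr=subprocess.DEVNULL,
        )
        print("detached worker pid:", proc.pid)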

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes.

    Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal
    Keste, Oracle Platinum Partner

    Customer Overview
    Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs' mobility services include basic communications such as text, as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others.

    Customer Business Needs
    Already the leading provider of mobility solutions for large trucking fleets, Omnitracs chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support method would not be a cost-effective or scalable way to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation. Information captured in Oracle E-Business Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high-impact user interface to significantly improve the customer experience, with the ability to integrate with EBS, provisioning systems, as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs' stated goal was to deliver an "eBay-like" or "Amazon-like" experience for all of their customers so that they could reach a much broader market beyond their large-company customer base.

    Solution Overview
    In order to manage the increased complexity and the growing support needs of global customers, and to improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of internal and external customers. The solution was a customer service portal that provided self-service capabilities for large and small customers alike: activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation - all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
    This is a conceptual view of the Customer Portal showing the details of the components that make up the solution.

    The portal application for transactions was built entirely using ADF 11g R2. Omnitracs' business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, downtime on the EBS side would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any E-Business Suite downtime. The customer has no knowledge of whether EBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF.

    Customer Benefits
    The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are:

    - Improved customer experience: With a modern look and feel and a portal that has aspects of an app store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages.
    - Enabled new product launches: After successfully dominating the large-fleet market, Omnitracs now has a scalable solution to sell to and manage smaller fleet customers, giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes.
    - Seamless integrations improve customer support: ADF 11g R2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also gave sales representatives an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core; Salesforce.com; Merchant e-Solution for credit card processing; custom Omnitracs applications like CUPS and AUTO; security utilizing OID and OVD; and back-end integration with EBS (Data Guard) and the iQ database.

    Business Impact
    Significant business impact was realized through the launch of the customer portal. It not only allows the business to push into underserved segments, but also reduces the time it needs to spend on customer support - allowing the business to focus more on sales and on identifying the market for new products. Some of the immediate benefits are:

    - The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster!
    - With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services.
    - 90%+ improvement of the customer onboarding and management process by utilizing single sign-on integration with an OID/OAM solution, performance improvements, and new self-service functionality.
    - Unified login for all customers, partners and internal users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience.
    - Significantly improved customer experience with a better look and feel and more user-experience-focused portal screens.
    - Helped sales of the new product by providing an easy way of ordering and activating it.
    - Data Guard helped increase availability of the portal to 99%+ and made it independent of EBS downtime, giving customers the feel of a highly available portal application.

    Some of the anticipated longer-term benefits are:

    - A platform that can be leveraged for any new product introduction, enabling all product teams to reach new customers and new markets
    - Easy integration with content management to allow business owners more control of the product catalog
    - Overall reduced TCO with standardization on the Oracle platform
    - Managed IT support cost savings through optimization of the technology skills needed to support and modify this solution

    Read the article

  • Launching a process on startup in Ubuntu fails to start other processes

    - by whatWhat
    On Ubuntu Server I've written a C++ program which launches another process, written in Python. The C++ process runs fine on startup, but the Python process never launches. It gets created, and when I run "top" I can see that both processes are running, but next to the one that says python it reads "defunct". I've created the startup script in /etc/init.d and updated rc.d. Is there something else I have to do in order for it to see the Python application?
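    For what it's worth (my notes, not from the thread): a "defunct" entry means the Python child has already exited but its exit status was never reaped, so the interesting question is why it exits. In an init.d context that is often the stripped-down environment (PATH, HOME). A minimal C++ sketch that launches with absolute paths and reaps the child so its status can be logged (the script path is a placeholder):

        #include <cstdio>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            pid_t pid = fork();
            if (pid == 0) {
                // Child: init scripts get a minimal environment, so use
                // absolute paths instead of relying on PATH.
                execl("/usr/bin/python", "python", "/opt/app/worker.py",
                      (char *)NULL);
                std::perror("execl failed");  // reached only if exec itself fails
                _exit(127);
            }
            int status = 0;
            waitpid(pid, &status, 0);         // reap: no <defunct> entry remains
            if (WIFEXITED(status))
                std::printf("python exited with status %d\n", WEXITSTATUS(status));
            return 0;
        }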

    Read the article

  • How does killing a process work in Windows?

    - by IttayD
    I'm unfamiliar with how processes are killed in Windows. In Linux, a "warm" kill sends a signal (15) which the process can handle by installing a signal handler, and a cold kill sends signal (9), which the OS handles by killing the process by force. What is the procedure in Windows? How can I send a "kill" to a process? How does the process handle it? Is there a cross-platform way of responding to a kill/close request?
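    Roughly (my summary, not from the thread): the forceful path on Windows is TerminateProcess, which, like kill -9, cannot be handled; "warm" requests arrive differently per program type - GUI apps get a WM_CLOSE message, console apps get console control events. A sketch of handling the console case:

        /* Closest console analogue to a SIGTERM handler on Windows. */
        #include <windows.h>
        #include <stdio.h>

        static volatile LONG g_stop = 0;

        static BOOL WINAPI on_ctrl(DWORD type)
        {
            if (type == CTRL_C_EVENT || type == CTRL_CLOSE_EVENT) {
                InterlockedExchange(&g_stop, 1);  /* request graceful stop */
                return TRUE;                      /* event handled */
            }
            return FALSE;                         /* fall back to default handling */
        }

        int main(void)
        {
            SetConsoleCtrlHandler(on_ctrl, TRUE);
            while (!g_stop)
                Sleep(100);                       /* stand-in for real work */
            puts("shutting down cleanly");
            return 0;
        }

    For the cross-platform part, portable runtimes typically abstract this themselves (handling SIGTERM on POSIX and console/window events on Windows); there is no single OS-level mechanism shared by both.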

    Read the article

  • Zipping a file used by a process

    - by jaganath
    I accidentally zipped the log file of a process (the process wasn't writing to it at the time; it only writes to it on weekends, when the process gets killed). I immediately unzipped the file back. Will this affect the process when it tries to write to the log file?

    Read the article

  • Post-build events using ROBOCOPY instead of XCOPY

    - by Vizioz Limited
    I don't know about you, but for a long time I have used XCOPY statements in my Visual Studio post-build events to copy my Umbraco files from the project folders to the local version of the website associated with the project.

    For the last few months we have been building a website framework for a client, who has subsequently sold the site to 5 clients, each with a different skin and some variations in their functional requirements. So, we now have a single-source solution that builds and copies the site files into 5 separate local websites, which enables us to easily test them all. What we found was that this process was starting to slow down our build, reaching 30-45 seconds on a high-spec quad-core machine (and slower on others).

    Today I asked Colin to create separate Solution Configurations within Visual Studio so that while we were developing we could target a single site, and when we wanted to test all sites, we could target "ALL" and the post-build script would then copy the files to all sites. This worked well, and with a couple of other optimisations, our build was now taking about 10 seconds for a single site.

    Then Colin came across ROBOCOPY and suggested that maybe this would be a suitable alternative to XCOPY. Well, I had not heard of it (shock horror some of you shout; some, I am sure, are like me and also wondering what it is!). ROBOCOPY is new in Windows Vista & Windows 7 (you can also download it for XP & Windows 2003) and it has a lot of additional features. The switches that were most interesting to us were:

    /MIR = Mirror a folder tree
    /XD = Exclude Directories
    /NP = No Progress (i.e. it does not give you a chart of its results, which just fills up your Output window!)

    So, we set about implementing ROBOCOPY. We decided to use the /MIR switch on all folders that we knew were always stored in our project folders:

    - images
    - css
    - masterpages
    - xslt

    For other files we just used straight ROBOCOPY functionality. We also decided to exclude all the .SVN directories using the /XD switch, and finally we added the /NP switch as mentioned above.

    The beauty of all of this is the /MIR functionality, as this means that only files that have changed will be copied across, which greatly speeds up the process, especially on the images folders, which previously copied across on every build. Now, if we add a new image to the project it will be copied across automatically and then never again, unless we change it of course!

    The build time now for all sites is approximately 4 seconds, and for a single site, 2 seconds. I would highly recommend taking the time to make the same optimisations to your build processes if you have not done so already.
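    For illustration, a post-build event along these lines (paths and site names are hypothetical, not from the post) might look like:

        rem /MIR mirrors the tree, /XD .svn skips Subversion folders,
        rem /NP suppresses the per-file progress output.
        robocopy "$(ProjectDir)images" "C:\sites\clientsite\images" /MIR /XD .svn /NP
        robocopy "$(ProjectDir)css"    "C:\sites\clientsite\css"    /MIR /XD .svn /NP

        rem Robocopy exits with 1 when files were copied; only codes 8 and
        rem above are errors, so map the success codes to 0 for Visual Studio.
        if %errorlevel% lss 8 exit 0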

    Read the article

  • TFS 2010 RC Agile process template update: new task progress report

    Maybe my next post will just be about why I am so excited and impressed with the out-of-the-box templates. But for this first blog with my new focus, I thought I would just walk through the process I went through to create a task progress report (to enhance the out-of-the-box Agile template).

    So, I started with the MSF for Agile Development 5.0 RC template. After reviewing the template, I came away pretty excited about many of the new reports. I am especially excited about the Reporting Services reports. The big advantage I see here is that these query the warehouse directly instead of the Analysis Services cube, which means they are much closer to real-time - something I find very important for reports like burndown and task status. One report that I focused on right away was the User Story Progress Report.

    This report is very useful, but a lot of our internal managers really prefer to manage at the task level and either don't have stories in TFS or would like to view this type of report for tasks in addition to the user stories. So, what did I do?

    Step 1: Download the Agile template
    In VS 2010 RC, open the Process Template Manager from Team -> Team Project Collection Settings. Download the MSF for Agile Development template to your local file system. A process template is a folder of XML files. There is a ProcessTemplate.xml in the root and then a bunch of directories for things like work item definitions and queries, reports, shared documents and source control settings.

    Step 2: Copy the folder
    My plan here is to make a new template with all of my modifications. You can also just update the MSF template; however, I think it is cleaner, when you start making modifications, to make your own template. So, copy the folder and name it with your new template name.

    Step 3: Change the template name
    Open ProcessTemplate.xml and change the <name> of the template.

    Step 4: Copy the rdl of the report you want to use as a starting point
    In my case, I copied Stories Progress.rdl and named the file Task Progress Breakdown.rdl. I reviewed the requirements for the new report with some of the users here and came up with this plan:
    - Should show tasks and be expandable to show subtasks.
    - Should add Assigned To and Estimated Finish Date as 2 extra columns.

    Step 5: Walk through the existing report to understand how it works
    The main thing that I do here is try to get the SQL to run in SQL Management Studio, so I can walk through the process of building up the data for the report. After analyzing this particular report, I found a couple of very useful things. One: this report is already built to display subtasks if I just flip the IncludeTasks flag to 1. So, if you are using stories and have tasks assigned to each story, this might give you everything you want. For my purposes, I did make that change to the Stories Progress report, as I find it a more useful report when you can see the tasks that comprise each story. But I still wanted a task-only version with the additional fields.

    Step 6: Update the report definition
    I tend to work on rdl directly as XML in Visual Studio. Especially when I am just altering an existing report, I find it easier than trying to deal with the BI Studio designer. For my report I made the following changes:

    - Updated fields: removed Stack Rank and replaced it with Priority (since we don't use Stack Rank), and added FinishDate and AssignedTo.
    - Changed the root deliverable SQL to pull @Task instead of @DeliverableCategory, and added a join to CurrentWorkItemView for FinishDate and AssignedTo:

        SELECT cwi.[System_Id] AS ID FROM [CurrentWorkItemView] cwi
            WHERE cwi.[System_WorkItemType] IN (@Task)
            AND cwi.[ProjectNodeGUID] = @ProjectGuid

        SELECT lh.SourceWorkItemID AS ID FROM FactWorkItemLinkHistory lh
            INNER JOIN [CurrentWorkItemView] cwi ON lh.TargetWorkItemID = cwi.[System_Id]
            WHERE lh.WorkItemLinkTypeSK = @ParentWorkItemLinkTypeSK
                AND lh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND lh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                AND cwi.[System_WorkItemType] NOT IN (@DeliverableCategory)

    - Added AssignedTo and FinishDate columns to the @Rollups table.
    - Added two columns to the table used for column headers:

        <Tablix Name="ProgressTable">
          <TablixBody>
            <TablixColumns>
              <TablixColumn>
                <Width>2.7625in</Width>
              </TablixColumn>
              <TablixColumn>
                <Width>0.5125in</Width>
              </TablixColumn>
              <TablixColumn>
                <Width>3.4625in</Width>
              </TablixColumn>
              <TablixColumn>
                <Width>0.7625in</Width>
              </TablixColumn>
              <TablixColumn>
                <Width>1.25in</Width>
              </TablixColumn>
              <TablixColumn>
                <Width>1.25in</Width>
              </TablixColumn>
            </TablixColumns>

    - Added cells for the two new headers.
    - Added cells to the data table to include the two new values (AssignedTo & FinishDate).
    - Changed a bunch of widths so that the report displays landscape and has room for the two additional columns.
    - Set the value of the IncludeTasks parameter to 1:

        <ReportParameter Name="IncludeTasks">
          <DataType>Integer</DataType>
          <DefaultValue>
            <Values>
              <Value>=1</Value>
            </Values>
          </DefaultValue>
          <Prompt>IncludeTasks</Prompt>
          <Hidden>true</Hidden>
        </ReportParameter>

    - Changed a few descriptions of how the report should be used.

    This is the resulting report. I have attached the final rdl.

    Step 7: Update ReportTasks.xml
    The last step before the template is ready for use is to update the reportTasks.xml file in the Reports folder. This file defines the reports that are available in the template:

        <report name="Task Progress Breakdown" filename="Reports\Task Progress Breakdown.rdl" folder="Project Management" cacheExpiration="30">
          <parameters>
            <parameter name="ExplicitProject" value="" />
          </parameters>
          <datasources>
            <reference name="/Tfs2010ReportDS" dsname="TfsReportDS" />
          </datasources>
        </report>

    Step 8: Upload the template
    Open the Process Template Manager just like in Step 1, and upload the new template.

    That's it. One other note: if you want to add this report to an existing team project, you will have to go into Report Manager (the Reporting Services portal) and upload the rdl to that project's directory.

    Read the article

  • Can defect containment metrics be readily applied at an organizational level when there is only a consistent organizational process framework?

    - by Thomas Owens
    Defect containment metrics, such as total defect containment effectiveness (TDCE) and phase containment effectiveness (PCE), can be used to give a good indicator of the quality of the process. TDCE captures the defects that are captured at some point between requirements and the release of a product into the field, indicating the overall effectiveness of the entire process to find and remove defects. PCE provides more detail at each phase of the software development life cycle and how the defect detection and removal techniques are working. Applying these metrics makes sense at a level where you have a well-defined process and methodology for product development, often a project. However, some organizations provide a process framework that is tailored at the project level. This process framework would include the necessary guidance for meeting certifications (ISO9001, CMMI), practices for incorporating known good techniques (agile methods, Lean, Six Sigma), and requirements for legal or regulatory reasons. However, the specific details of how to gather requirements, design the system, produce the software, conduct test, and release are left to the product development teams. Is there any effective way to apply defect containment metrics at an organizational level when only a process framework exists at the organizational level? If not, what might be some ideas for metrics that can be distilled from each project (each using a tailored process that fits into the organizational process framework) that captures defect containment metrics to discuss the ability of the process to find and remove defects? The end goal of such a metric would be to consolidate the defect containment practices of a large number of ongoing projects and report to management. The target audience would be people in roles such as the chief software engineer and the chief engineer (of all engineering disciplines) for the organization. Although project specific data would be available, the idea is to produce something that quantifies the general effectiveness of all tailored processes across all ongoing projects. I would suspect that this data would also be presented as part of CMMI, ISO, or similar audits to demonstrate process quality.
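    For reference, the usual formulations of these metrics (standard definitions as I know them, not quoted from the post) are:

        % TDCE: fraction of all defects removed before release
        \mathrm{TDCE} = \frac{D_{\text{pre-release}}}{D_{\text{pre-release}} + D_{\text{post-release}}}

        % PCE for phase i: defects found in phase i relative to all
        % defects present in that phase (found plus escaped)
        \mathrm{PCE}_i = \frac{D_{\text{found in } i}}{D_{\text{found in } i} + D_{\text{escaped from } i}}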

    Read the article

  • Ubuntu 12.04 LTS: Error message "Failed to execute child process"

    - by Ron
    I am an Ubuntu newbie and just started working with Ubuntu (version 12.04 LTS) a couple of days ago. I wanted to add a launcher icon to the desktop for an application I previously installed. Up to now I can only launch it by typing setsid matlab -desktop into my terminal. Now there is the following problem with execution via the desktop icon: whenever I click the desktop icon, I get the error message "Failed to execute child process". I would like to add a screenshot, but unfortunately, as a new user, I am not allowed to. The .desktop file, which I added to the desktop via drag'n'drop from the main menu, does have permission to execute. I also tried to look for advice on the error message "Failed to execute child process..." but could not find anything useful. Does anybody have an idea what I am missing? Sorry if this is a stupid question ;) ...but as I just said, I just started with Ubuntu... Thanks to everybody in advance for their help! :) And let me know if you should need any more information... Regards, Ron
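    A minimal .desktop file for comparison (a sketch; the MATLAB install path is an assumption). "Failed to execute child process" usually means the launcher cannot resolve the Exec= command the way your shell can; using an absolute path and wrapping shell constructs like setsid in sh -c typically fixes it:

        [Desktop Entry]
        Type=Application
        Name=MATLAB
        Exec=sh -c "setsid /usr/local/MATLAB/R2012a/bin/matlab -desktop"
        Terminal=false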

    Read the article

  • OpenWorld Session: BPM Analytics - Process Dashboards, BAM and Intelligent Optimization

    - by Ajay Khanna
    Blog contributed by Payal Srivastava, Oracle Product Management. The margin of error for the business is shrinking dramatically. Business needs to accomplish more with less, i.e. minimal investment with quick ROI. Learn how you can leverage Oracle BPM Suite and complementary technologies to create a robust analytics capability that provides visibility into operations for C-level executives and operational managers. We will talk about the BPM analytics options available today that will not only enhance visibility but allow you to intelligently optimize the business process at design time as well as run time. The session will also share some exciting things on our roadmap. Come meet the Oracle team members from Product Development (Avinash Dabholkar, Eric Hsiao) and Product Management (Payal Srivastava) at the session. We would like to hear your questions and comments about our offering and roadmap. BPM Analytics: Process Dashboards, BAM and Intelligent Optimization, Moscone South 308, 10/3/12 @11:45am - CON 8598

    Read the article
