Search Results

Search found 21539 results on 862 pages for 'hanging process'.


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one has started to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs which have been found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them: Anu’s very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP). Grant Holliday’s blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them. The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process: find out if you have a database space issue; set up your TFS server to minimize potential database issues; and if you have the “problem”, clean up the database and otherwise keep it clean. Analyze the data Are your database(s) growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don’t have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
    use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results. It is pretty easy to see on which collection to start the work. Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the only really big issue we have here is the tbl_AttachmentContent. Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case: use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here, it is pretty obvious that “Byggtjeneste – Projects” is the main team project to take care of, with the ones on lines 2-4 as the next ones. Check which attachment types take up the most space It can be nice to know which attachment types take up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, take up the most space Run the following query: use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this: Now you should have collected enough information to tell you what to do, whether you need to do anything at all, and some of the information you need in order to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later. Get your TFS server and environment properly set up Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
    Binaries will still be uploaded if code coverage is enabled in the test settings, or if you change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn’t installed this QFE. The hotfix should be installed on the build servers (the build agents), the machine hosting the Test Controller, local development computers (Visual Studio), and local test computers (MTM). It is not required to install it on the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging “ghost” files. This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: If you suspect hanging ghost files, they can be deduced, with some mental effort, from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process. The workaround is to stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server – then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The “hanging ghost file” issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the CU) in certain cases because of this. And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed.” This rule minimizes the risk of the ghosted-record hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project; a sketch of such a wrapper script is shown below.
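    A minimal sketch of such a wrapper, written here in Python purely for illustration: the tcmpt command line and its switches are the ones shown later in this post, while the collection URL, team project names and file paths are made-up placeholders you would replace with your own. Scheduled nightly (for example with the Windows Task Scheduler), it gives the small, regular deletions recommended above.

        import subprocess

        # Hypothetical values - replace with your own collection, projects and paths.
        COLLECTION = "http://your-tfs-server:8080/tfs/DefaultCollection"
        TEAM_PROJECTS = ["TeamProjectA", "TeamProjectB", "TeamProjectC"]
        SETTINGS_FILE = r"C:\TAC\RemoveOldItems.xml"
        TCMPT = r"C:\Path\To\tcmpt.exe"  # assumed install location of the TAC power tool

        for project in TEAM_PROJECTS:
            log_file = r"C:\TAC\logs\%s.tcmpt.log" % project.replace(" ", "_")
            # One TAC run per team project, since /teamproject is a required parameter.
            subprocess.run([
                TCMPT, "attachmentcleanup",
                "/collection:" + COLLECTION,
                "/teamproject:" + project,
                "/settingsfile:" + SETTINGS_FILE,
                "/outputfile:" + log_file,
                "/mode:delete",
            ], check=False)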
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in a general scheduled cleanup. There are two scenarios we need to consider: cleaning up an existing overgrown database, and maintaining a server to avoid an overgrown database using a scheduled TAC. 1. Cleaning up a database which has grown too big due to these attachments. This job is a “once” job. We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size, and not to bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant’s settings file is an example of the latter. A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: if you want to remove only dll's and pdb's above that size, add an Extensions section. Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC Run a schedule every night that removes old items, and removes them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about, and then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
    This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but in practice it recognizes any extension. To run the tool, use the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly. So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to.

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss Handle with Care Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. 
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well-known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post on building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then, when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily control the Major.Minor, and then I want the CI process to add the third number as a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
    The new features in TFS 2010 make these types of customizations to your build process quite easy once you get over the initial learning curve.

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes its own decision about a database platform when it starts out, but as it matures it revisits that decision based on its experience and the best interests of the organization. Similarly, quite a few organizations start out on a database like Sybase, MySQL, Oracle or Access and, as time passes, learn that they now want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server. Microsoft SQL Server Migration Assistant v6.0 for Sybase SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for MySQL SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Oracle SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Access SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration

    Read the article

  • unable to start wamp server

    - by jayesh
    Problem starting the WAMP server. The error log is: [Sat Nov 12 22:01:00 2011] [notice] Apache/2.2.17 (Win32) PHP/5.3.5 configured -- resuming normal operations [Sat Nov 12 22:01:00 2011] [notice] Server built: Oct 18 2010 01:58:12 [Sat Nov 12 22:01:00 2011] [notice] Parent: Created child process 3228 [Sat Nov 12 22:01:01 2011] [notice] Child 3228: Child process is running [Sat Nov 12 22:01:01 2011] [crit] (OS 10022)An invalid argument was supplied.  : Child 3228: setup_inherited_listeners(), WSASocket failed to open the inherited socket. [Sat Nov 12 22:01:01 2011] [crit] Parent: child process exited with status 3 -- Aborting.

    Read the article

  • How can I make check_nrpe wait for my remote script to finish executing?

    - by Rauffle
    I have a Python script that's being used as a plugin for NRPE. This script checks to see if a process is running on a virtual machine by doing an SSH one-liner with a "ps ax | grep process" attached. When executing the script manually, it works as expected and returns a single line of output for NRPE as well as a status based on whether or not the process is running. When I attempt to run the command set up to execute this script (from my Nagios server), I instantly get the output "NRPE: Unable to read output", whereas when I run the script manually it takes about a second before it returns output. Other commands run just fine, so it would seem like NRPE needs to wait a second or two for output rather than instantly failing, but I've been unable to find any way of accomplishing this; any tips? Thanks. PS: The virtual machines are not accessible from anywhere other than the host machine, hence the need for the NRPE plugin to SSH from the host into the VM to check the process.
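    For reference, a minimal sketch of the kind of plugin described above (the host and process names are made up). The one detail worth highlighting is that it prints exactly one line on every code path before exiting, since NRPE typically reports "NRPE: Unable to read output" when a plugin exits without writing anything to stdout; running the SSH step non-interactively (BatchMode) and catching failures keeps the plugin from dying silently when it is invoked by the nrpe user rather than from a login shell.

        #!/usr/bin/env python
        # Sketch of an NRPE plugin: SSH into a VM and grep the process list.
        import subprocess
        import sys

        VM_HOST = "vm01"        # hypothetical VM, reachable only from this host
        PROCESS = "myservice"   # hypothetical process name to look for
        OK, CRITICAL, UNKNOWN = 0, 2, 3

        try:
            result = subprocess.run(
                ["ssh", "-o", "BatchMode=yes", VM_HOST,
                 "ps ax | grep -v grep | grep " + PROCESS],
                capture_output=True, text=True, timeout=10)
        except Exception as exc:
            # Always print a line before exiting, so NRPE has output to read.
            print("UNKNOWN - ssh check failed: %s" % exc)
            sys.exit(UNKNOWN)

        if result.returncode == 0 and result.stdout.strip():
            print("OK - %s is running on %s" % (PROCESS, VM_HOST))
            sys.exit(OK)

        print("CRITICAL - %s not found on %s" % (PROCESS, VM_HOST))
        sys.exit(CRITICAL)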

    Read the article

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me Low: 464 386 77 with the free value (77MB) slowly decreasing. Why am I running out of lowmem and how do I increase it? Some details: $ cat /proc/sys/vm/lowmem_reserve_ratio 256 256 32 $ free -lm total used free shared buffers cached Mem: 32086 24611 7475 0 0 24012 Low: 464 407 57 High: 31621 24204 7417 -/+ buffers/cache: 598 31487 Swap: 2047 0 2047
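    As a side note, the low-memory figures that free -lm reports can also be read straight from /proc/meminfo, which makes them easy to log while the import runs. A minimal sketch, assuming a 32-bit kernel built with highmem support (which is what exposes the LowTotal/LowFree fields):

        import time

        def lowmem_kb():
            # Parse /proc/meminfo lines of the form "LowFree:  77000 kB".
            values = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    values[key] = int(rest.strip().split()[0])
            return values.get("LowTotal"), values.get("LowFree")

        while True:
            total, free = lowmem_kb()
            if total is None:
                print("No LowTotal/LowFree fields - not a highmem kernel")
                break
            print("LowMem: %d MB free of %d MB" % (free // 1024, total // 1024))
            time.sleep(60)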

    Read the article

  • How could two processes bind onto the same port?

    - by Matt Ball
    I just ran into an issue where a request made to localhost:8080 from curl was hitting a different server than the same request made from inside Node. lsof -i :8080 revealed that two processes were both binding onto the same port: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME node 51961 mball 14u IPv4 0xd980e0df7f175e13 0t0 TCP *:http-alt (LISTEN) java 62704 mball 320u IPv6 0xd980e0df7fe08643 0t0 TCP *:http-alt (LISTEN) How is this possible? Were they binding onto different interfaces, or was it IPv4 vs. IPv6? If you're curious: node was hitting the other node process, and curl was hitting the java process. The java process was started after the node process.
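    The lsof output itself points at the answer: the node process is listening on the IPv4 wildcard and the java process on the IPv6 wildcard, so both can hold port 8080 at the same time, and which one a client reaches depends on whether it resolves localhost to 127.0.0.1 or to ::1. Below is a small sketch that reproduces the situation (the port number is just the one from the question); the IPV6_V6ONLY option keeps the IPv6 listener from also claiming the IPv4 side.

        import socket

        PORT = 8080

        # IPv4 wildcard listener, like the node process in the lsof output.
        v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        v4.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        v4.bind(("0.0.0.0", PORT))
        v4.listen(5)

        # IPv6-only wildcard listener, like the java process.
        v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        v6.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        v6.bind(("::", PORT))
        v6.listen(5)

        print("Two listeners on port %d, one per address family; compare with lsof -i :%d" % (PORT, PORT))
        input("Press Enter to close both listeners...")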

    Read the article

  • unable to install anything that depends upon spamassassin. Can't even install spamassassin

    - by Harbhag
    I am trying to install mailscanner using apt-get install mailscanner and I got the following error Setting up spamassassin (3.3.1-1) ... Starting SpamAssassin Mail Filter Daemon: child process [21344] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588. invoke-rc.d: initscript spamassassin, action "start" failed. dpkg: error processing spamassassin (--configure): subprocess installed post-installation script returned error exit status 255 dpkg: dependency problems prevent configuration of mailscanner: mailscanner depends on spamassassin (>= 3.1); however: Package spamassassin is not configured yet. dpkg: error processing mailscanner (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: spamassassin mailscanner E: Sub-process /usr/bin/dpkg returned an error code (1) and when I tried to install spamassassin I got the following error : Setting up spamassassin (3.3.1-1) ... Starting SpamAssassin Mail Filter Daemon: child process [21389] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588. invoke-rc.d: initscript spamassassin, action "start" failed. dpkg: error processing spamassassin (--configure): subprocess installed post-installation script returned error exit status 255 dpkg: dependency problems prevent configuration of mailscanner: mailscanner depends on spamassassin (>= 3.1); however: Package spamassassin is not configured yet. dpkg: error processing mailscanner (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: spamassassin mailscanner E: Sub-process /usr/bin/dpkg returned an error code (1) I am using Ubuntu Server 10.04

    Read the article

  • How do I set a postgresql password in pgpass.conf for the Administrator account on Windows Server 2008?

    - by brad
    I have a pgpass.conf file that works well for my default user. It is in C:/Users/myuser/AppData/Roaming/postgresql/pgpass.conf. It reads like so: localhost:5432:*:postgres:password1 I have a process that runs under the Administrator account. When I run whoami under this process I get nt authority/system. I want to be able to access the database from this process, but it gets stuck because it needs a password. I have tried putting the above pgpass.conf into C:/Users/Administrator/AppData/postgresql/pgpass.conf and C:/Users/Administrator/AppData/Roaming/postgresql/pgpass.conf but it does not work. Is this the correct place for this file? Am I even able to do this as the Administrator? Unfortunately I cannot change the user that this process runs under.
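    One way to check the location: libpq on Windows looks for the password file at %APPDATA%\postgresql\pgpass.conf for whichever account the process runs as, so running a tiny script like the sketch below under that same account shows where APPDATA resolves. For the LocalSystem account (whoami reporting nt authority\system) it typically resolves somewhere under C:\Windows\System32\config\systemprofile\AppData\Roaming rather than under C:\Users\Administrator, but treat that path as an assumption to verify on your own box.

        import os

        # Print where libpq should look for pgpass.conf for the current account,
        # following the %APPDATA%\postgresql\pgpass.conf rule.
        appdata = os.environ.get("APPDATA")
        if appdata:
            print(os.path.join(appdata, "postgresql", "pgpass.conf"))
        else:
            print("APPDATA is not set for this account")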

    Read the article

  • Mapping between 4+1 architectural view model & UML

    - by Sadeq Dousti
    I'm a bit confused about how the 4+1 architectural view model maps to UML. Wikipedia gives the following mapping: Logical view: Class diagram, Communication diagram, Sequence diagram. Development view: Component diagram, Package diagram Process view: Activity diagram Physical view: Deployment diagram Scenarios: Use-case diagram The paper Role of UML Sequence Diagram Constructs in Object Lifecycle Concept gives the following mapping: Logical view (class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD)) Development view (package diagram, component diagram), Process view (use case diagram, CD, OD, SD, COD, SCD, AD), Physical view (deployment diagram), and Use case view (use case diagram, OD, SD, COD, SCD, AD) which combines the four mentioned above. The web page UML 4+1 View Materials presents yet another mapping (not reproduced here). Finally, the white paper Applying 4+1 View Architecture with UML 2 gives yet another mapping: Logical view class diagrams, object diagrams, state charts, and composite structures Process view sequence diagrams, communication diagrams, activity diagrams, timing diagrams, interaction overview diagrams Development view component diagrams Physical view deployment diagram Use case view use case diagram, activity diagrams I'm sure further search will reveal other mappings as well. While various people usually have different perspectives, I don't see why this is the case here. Specifically, each UML diagram describes the system from a particular aspect. So, for instance, why is the "sequence diagram" considered as describing the "logical view" of the system by one author, while another author considers it as describing the "process view"? Could you please help me clarify the confusion?

    Read the article

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach which works better than the industry standard, should I just stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard way of enforcing referential integrity is to CASCADE delete. In practice, this is just not going to work all the time. In my current case, it does not. The alternatives suggested are to either change the reference to NULL or DEFAULT, or just to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it just is not practical to apply all of the standards. My approach has been to abandon a small part of one of those practices - the part about not leaving "hanging references" around. Oops. The trade-off is well worth it in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and views sometimes, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only tradeoff is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
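    For what it's worth, the two approaches being weighed here can be put side by side in a few lines; the schema below is a made-up example using SQLite, just to contrast an ON DELETE CASCADE reference with a soft-deleted row whose reference is left hanging in a "recycle bin".

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

        conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
        conn.execute("""CREATE TABLE child_cascade (
            id INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
        conn.execute("""CREATE TABLE child_soft (
            id INTEGER PRIMARY KEY,
            parent_id INTEGER,            -- no FK: reference may be left hanging
            deleted INTEGER DEFAULT 0)    -- recycle-bin flag""")

        conn.execute("INSERT INTO parent VALUES (1)")
        conn.execute("INSERT INTO child_cascade VALUES (1, 1)")
        conn.execute("INSERT INTO child_soft VALUES (1, 1, 0)")

        # Standard approach: deleting the parent cascades to the child row.
        conn.execute("DELETE FROM parent WHERE id = 1")
        print(conn.execute("SELECT COUNT(*) FROM child_cascade").fetchone()[0])  # 0 rows left

        # Soft-delete approach: flag the row instead; it stays in the recycle bin.
        conn.execute("UPDATE child_soft SET deleted = 1 WHERE parent_id = 1")
        print(conn.execute("SELECT COUNT(*) FROM child_soft WHERE deleted = 0").fetchone()[0])  # 0 live rows
        print(conn.execute("SELECT COUNT(*) FROM child_soft").fetchone()[0])  # 1 row kept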

    Read the article

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up redmine on an Ubuntu (12.04) box, and somewhere along the line NginX got set up; now apache no longer loads because nginx has already grabbed the port. I tried removing NginX with the below command, but that didn't seem to make any difference. When I restarted the server and pointed my web browser at it, I still got the "Welcome to NginX" message. sudo apt-get purge nginx I have confirmed that NginX is gone, because when I run the above now I get the output "Package nginx is not installed, so not removed". Yet every time I start the machine it is running again. I noticed the following for the running processes (if that is helpful): root 923 0.0 0.0 76784 1280 ? Ss 03:00 0:00 nginx: master process /usr/sbin/nginx www-data 925 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 926 0.0 0.1 77092 2204 ? S 03:00 0:00 nginx: worker process www-data 927 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 928 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?

    Read the article

  • /lib/udev/net.agent causing high CPU usage

    - by Antoine Benkemoun
    We have a number of Soekris boxes running Debian Squeeze. They were installed through an automated process consisting of using debootstrap and copying the result onto a Compact Flash card. We use puppet to manage the configuration of all these boxes. Before Debian Squeeze, they were running Voyage Linux, which is just a "lighter" version of Debian. Since we switched, we're seeing the /lib/udev/net.agent process take up an awful lot of CPU. We have so far been unable to find any clue as to what this really does and why it's taking up so much CPU time. In htop we see these net.agent processes near the top of the CPU usage list. We are seeing absolutely no syslog messages related to this process, so we're a bit lost... So, I am looking for pointers as to what this process does in general and what could be the potential cause of such CPU usage.

    Read the article

  • How to run Windows 7 Explorer shell with Administrator Privileges by default

    - by Barry Kelly
    The Windows 7 shell (Explorer) can be made to run with Administrator privileges by this manual process: kill the Explorer shell by holding down Shift+Ctrl, right-clicking the Shut down button in the Start Menu, and selecting Exit Explorer; start Task Manager with Ctrl+Shift+Esc; elevate Task Manager privileges by going to the Processes tab and selecting Show processes from all users; then start up a new instance of the shell via File | Run in Task Manager, typing in explorer and selecting Create this task with administrative privileges. After following the above process, the Windows shell will be running with administrative privileges, and any programs it launches will also have administrative privileges. This makes performing tasks that require the privilege far easier, particularly for command-line applications, which usually fail silently or with an "Access denied" message rather than giving an opportunity to use UAC to elevate the process's privileges. What I'm interested in, though, is creating an account which uses a privileged shell by default, rather than having to follow this laborious process every time. How can it be done?

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With the BPM Suite 11g, Oracle introduced a major change in architecture with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include: managing in-process instance migration and running multiple process engines; migrating user interfaces and other code within the environment that may not be automated; growing or finding technical staff with both 10g and 12c experience; and managing migration projects while continuing to move the business forward and meet day-to-day responsibilities. As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and there are several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate. A Migration Model: This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Kris Nelson,ACM,Adaptive Case Management,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

    Read the article

  • xampp apache on windows 7 returns http header only

    - by bumperbox
    I am having issues with XAMPP running on Windows 7 RC32. I type in localhost and only get an HTTP header back, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboot or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas? [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ... [Wed Sep 09 12:27:08 2009] [notice] Digest: done [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06 [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500 [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ... [Wed Sep 09 12:27:10 2009] [notice] Digest: done [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex. [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads. [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443. [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80. [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting. [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ... [Wed Sep 09 12:27:15 2009] [notice] Digest: done [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06 [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252 [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ... [Wed Sep 09 12:27:17 2009] [notice] Digest: done [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex. [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads. [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443. [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.

    Read the article

  • What Apache/PHP configurations do you know and how good are they?

    - by FractalizeR
    Hello. I wanted to ask you about the PHP/Apache configuration methods you know, with their pros and cons. I will start myself:

    ---------------- PHP as Apache module ----------------
    Pros: good speed, since you don't need to start an executable every time, especially in mpm-worker mode. You can also use various PHP accelerators in this mode, like APC or eAccelerator. Cons: if you are running Apache in mpm-worker mode, you may face stability issues, because every glitch in any PHP script can destabilize the whole thread pool of that Apache process. Also, in this mode all scripts are executed on behalf of the Apache user, which is bad for security. An mpm-worker configuration requires PHP compiled in thread-safe mode. At least the CentOS and RedHat default repositories don't have a thread-safe PHP version, so on these OSes you need to compile PHP yourself (there is a way to activate the worker MPM on Apache). The use of thread-safe PHP binaries is considered experimental and unstable, and many PHP extensions do not support thread-safe mode or were not well tested in it.

    ---------------- PHP as CGI ----------------
    This seems to be the slowest default configuration, which seems to be a "con" in itself ;)

    ---------------- PHP as CGI via mod_suphp ----------------
    Pros: suPHP allows you to execute PHP scripts on behalf of the script file owner. This way you can securely separate different sites on the same machine. suPHP also allows a different php.ini file per virtual host. Cons: PHP in CGI mode means less performance. In this mode you can't use PHP accelerators like APC, because each time a new process is spawned to handle a script, the cache of the previous process becomes useless. BTW, do you know a way to apply some accelerator in this configuration? I heard something about using shm for a PHP bytecode cache. Also, you cannot configure PHP via .htaccess files in this mode; you will need to install PECL htscanner if you need to set per-script options via .htaccess (php_value / php_flag directives).

    ---------------- PHP as CGI via suexec ----------------
    This configuration looks the same as with suPHP, but I heard that it's slower and less safe. Almost the same pros and cons apply.

    ---------------- PHP as FastCGI ----------------
    Pros: the FastCGI standard allows a single PHP process to handle several scripts before the process is killed, so you gain performance since there is no need to spin up a new PHP process for each script. You can also use PHP accelerators in this configuration (see the cons for a caveat). FastCGI, much like suPHP, also allows PHP processes to be executed on behalf of a specific user. mod_fcgid seems to have the most complete FastCGI support and flexibility for Apache. Cons: using a PHP accelerator in FastCGI mode leads to high memory consumption, because each PHP process has its own bytecode cache (unless there is some accelerator that can use shared memory for the bytecode cache - is there one?). FastCGI is also a little bit complex to configure: you need to create various configuration files and make some configuration changes.

    It seems that FastCGI is the most stable, secure, fast and flexible PHP configuration; however, it is a bit difficult to configure. But maybe I missed something? Comments are welcome!
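    For reference, a quick way to check which of these setups a given box is actually running; this is only a sketch (apache2ctl may be called apachectl on your distro, and /var/www is just an example document root):

      apache2ctl -V | grep -i mpm                          # prefork, worker, ...
      apache2ctl -M | grep -Ei 'php|fcgid|fastcgi|suphp'   # which PHP-related module is loaded?
      echo '<?php echo php_sapi_name();' | sudo tee /var/www/sapi.php
      curl http://localhost/sapi.php                       # apache2handler, cgi-fcgi, ...
      sudo rm /var/www/sapi.php                            # clean up the probe file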

    Read the article

  • Cannot connect to postgresql on port 5432

    - by Assaf Lavie
    I installed the Bitnami Django stack which included PostgreSQL 8.4. When I run psql -U postgres I get the following error: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? PG is definitely running and the pg_hba.conf file looks like this: # TYPE DATABASE USER CIDR-ADDRESS METHOD # "local" is for Unix domain socket connections only local all all md5 # IPv4 local connections: host all all 127.0.0.1/32 md5 # IPv6 local connections: host all all ::1/128 md5 What gives? "Proof" that pg is running: root@assaf-desktop:/home/assaf# ps axf | grep postgres 14338 ? S 0:00 /opt/djangostack-1.3-0/postgresql/bin/postgres -D /opt/djangostack-1.3-0/postgresql/data -p 5432 14347 ? Ss 0:00 \_ postgres: writer process 14348 ? Ss 0:00 \_ postgres: wal writer process 14349 ? Ss 0:00 \_ postgres: autovacuum launcher process 14350 ? Ss 0:00 \_ postgres: stats collector process 15139 pts/1 S+ 0:00 \_ grep --color=auto postgres root@assaf-desktop:/home/assaf# netstat -nltp | grep 5432 tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 14338/postgres tcp6 0 0 ::1:5432 :::* LISTEN 14338/postgres root@assaf-desktop:/home/assaf#
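    One hedged place to start, assuming the Bitnami build puts its Unix socket somewhere other than /var/run/postgresql; the data directory path comes from the ps output above, and /tmp is only a guess at the alternative socket location:

      psql -h 127.0.0.1 -p 5432 -U postgres          # bypass the Unix socket entirely (server listens on 127.0.0.1:5432)
      sudo grep unix_socket /opt/djangostack-1.3-0/postgresql/data/postgresql.conf
      ls /tmp/.s.PGSQL.5432                          # a common alternative socket location
      psql -h /tmp -p 5432 -U postgres               # point psql at that directory explicitly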

    Read the article

  • Fatal Execution Engine Error on the Windows2008 r2, IIS7.5

    - by user66524
    Hi guys, we are running some ASP.NET (3.5) applications on Windows 2008 R2 with IIS 7.5. Recently we got some event log entries that are difficult to interpret; we have no idea what is going on and hope someone can help. 1. EventID: 1334 (9-1-2011 8:41:57) Error message: An error occurred during a process host idle check. Exception: System.AccessViolationException Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. StackTrace: at System.Collections.Hashtable.GetEnumerator() at System.Web.Hosting.ApplicationManager.IsIdle() at System.Web.Hosting.ProcessHost.IsIdle() 2. EventID: 1023 (9-1-2011 19:44:02) Error message: .NET Runtime version 2.0.50727.4952 - Fatal Execution Engine Error (742B851A) (80131506) 3. EventID: 1000 (9-1-2011 19:44:03) Error message: Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b Faulting module name: mscorwks.dll, version: 2.0.50727.4952, time stamp: 0x4bebd49a Exception code: 0xc0000005 Fault offset: 0x0000c262 Faulting process id: 0x%9 Faulting application start time: 0x%10 Faulting application path: %11 Faulting module path: %12 Report Id: %13 4. EventID: 5011 (9-1-2011 19:44:03) Error message: A process serving application pool 'AppPoolName' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2552'. The data field contains the error number. 5. Some additional info: we got memory.hdmp (234MB) and minidump.mdmp (19.2) from Control Panel / Action Center, but I do not know how to use them :(

    Read the article

  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup because instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root because I do my development on Windows. I had everything working perfectly up until this morning, now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why all of a sudden (after a reboot of the server), no PHP files will be served but instead just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data: root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm' root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf) www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line: //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0 Where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites you can in fact see that they belong to the user www-data: root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0 total 8 drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets -rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico -rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors If I switch to the www-data user, I have no problem creating a new file on the share: root@ubuntu-server:~# su www-data $ > /mnt/webfiles/test.txt $ ls -l /mnt/webfiles | grep test\.txt -rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following: 2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" 2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" It's bizarre that this was working previously and now all of sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening? 
EDIT I tried editing php-fpm.conf and changing chdir to the following: chdir = /mnt/webfiles When I try and restart the php-fpm service, I get the error: Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference: nginx.conf user www-data; worker_processes 2; error_log /var/log/nginx/nginx.log info; pid /var/run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { include fastcgi.conf; include mime.types; default_type application/octet-stream; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; ## Proxy proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ## Compression gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ### TCP options tcp_nodelay on; tcp_nopush on; keepalive_timeout 65; sendfile on; include /etc/nginx/sites-enabled/*; } my site config server { listen 80; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /mnt/webfiles/nTv5-2.0/app/webroot; index index.php; ## Block bad bots if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) { return 444; } ## Block certain Referers (case insensitive) if ($http_referer ~* (sex|vigra|viagra) ) { return 444; } ## Deny dot files: location ~ /\. { deny all; } ## Favicon Not Found location = /favicon.ico { access_log off; log_not_found off; } ## Robots.txt Not Found location = /robots.txt { access_log off; log_not_found off; } if (-f $document_root/maintenance.html) { rewrite ^(.*)$ /maintenance.html last; } location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { # Some basic cache-control for static files to be sent to the browser expires max; add_header Pragma public; add_header Cache-Control "max-age=2678400, public, must-revalidate"; } location / { try_files $uri $uri/ index.php; if (-f $request_filename) { break; } rewrite ^(.+)$ /index.php?url=$1 last; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives: ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes to be created when pm is set to 'dynamic'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. ; Note: Used when pm is set to either 'static' or 'dynamic' ; Note: This value is mandatory. pm.max_children = 50 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 20 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 5 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 35 ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. By default, the status page shows the following ; information: ; accepted conn - the number of request accepted by the pool; ; pool - the name of the pool; ; process manager - static or dynamic; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes. ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic') ; The values of 'idle processes', 'active processes' and 'total processes' are ; updated each second. The value of 'accepted conn' is updated in real time. ; Example output: ; accepted conn: 12073 ; pool: www ; process manager: static ; idle processes: 35 ; active processes: 65 ; total processes: 100 ; max children reached: 1 ; By default the status page output is formatted as text/plain. 
Passing either ; 'html' or 'json' as a query string will return the corresponding output ; syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ping.response = pong ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. 
These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /opt/php5) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
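    One angle worth checking, on the assumption that the CIFS share simply is not mounted yet by the time php-fpm starts at boot; the paths come from the question, while the service name php5-fpm and the _netdev option are assumptions that may not match this particular build:

      mount | grep /mnt/webfiles            # is the share actually mounted right now?
      sudo mount -a                         # re-read /etc/fstab and mount anything missing
      sudo service php5-fpm restart         # php-fpm re-checks its chdir/chroot paths on start
      # Marking the share as a network filesystem can help with boot ordering:
      # //192.168.0.199/c$/Websites/ /mnt/webfiles cifs _netdev,username=Jordan,password=...,gid=33,uid=33 0 0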

    Read the article

  • Oracle SOA Suite customer panel: Successful Application Integration & SOA Projects

    - by Simone Geib
    At the recent SOA Suite customer panel, Roger Brown from UNS Energy, Fabio Ravagni from Cencosud and Paras Jain from Cisco discussed their recent SOA Suite implementations, business drivers and challenges, architecture, and lessons learned. Roger started by describing how UNS redesigned their internet portal to improve their customer experience and reduce manual steps in their business processes. Through the use of Oracle Service Bus, Oracle BPEL Process Manager and Oracle Business Activity Monitoring, they provided more self-service functionality, automated their business processes and increased the use of their web site by 12.98% in number of visits and 33.58% in average visit duration. The screenshot below (not reproduced here) shows the UNS architecture. Next, Fabio described the challenges Cencosud faced through continuous expansion of their business, differing standards and levels of expertise, and large volumes of information. By introducing Oracle SOA Suite, Oracle Data Integrator and Oracle Enterprise Repository, and with the help of Oracle Consulting, they significantly simplified their integration model, reduced their maintenance effort and increased their integration governance. The picture below (not reproduced here) shows the implemented solution, with so far more than 400 services in production and more than 20 ongoing projects that will make use of the new integration platform. Last, but not least, Paras discussed the challenges the WebEx division of Cisco faced with a highly manual service fulfillment process, multiple data sources and the resulting large room for error and delay in customer time-to-service. Through a redesign of their order fulfillment process and the introduction of Oracle SOA Suite (diagram not reproduced here), they significantly improved their SLAs, eliminated duplicate orders, provided higher visibility into the order process and aligned business and IT. For more information about Oracle OpenWorld SOA & BPM sessions, please see the Focus on SOA and BPM document

    Read the article

  • What is the crappiest network build you've ever seen?

    - by Ivan Petrushev
    This is only about networks you have seen personally, not heard about from others or seen in pictures on the web. Cables hanging from the ceiling lamps? Cables going through culverts and other tubes? Switches and other equipment in the cleaner's closet? The cleaning lady's rags hung to dry on the cables? The key to the main node door possessed only by the janitor (or some other non-technical, completely non-network-related guy)? Switches powered by foreign power adapters (cheaper, and providing unspecified voltage or amperage)? All of this was in my old dormitory. Tell us about your bad experiences.

    Read the article

  • How to debug/solve a 500 Internal Error on an AWS micro EC2 with suexec, Apache and PHP CGI

    - by Oudin
    I'm running WordPress multi-site on an amazon micro ec2 with suexec, Apache and php CGi On Ubuntu 12.04 However I've been experiencing a lot of Internal server 500 errors and I'm in the process of debugging it to find a solution. I've posted my error logs below example.com error.log: [Fri Oct 26 10:10:08 2012] [warn] [client 23.23.xxx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Fri Oct 26 10:10:08 2012] [error] [client 23.23.xxx.xx] Premature end of script headers: wp-cron.php [Fri Oct 26 10:50:04 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:50:04 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin.php, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:58:14 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:15 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:56 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:57 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: plugins.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: ap_pass_brigade failed in handle_request_ipc function, referer: https://www.example.com/wp-admin/ Apache Log: php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory Recipient names must be specified Recipient names must be specified php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2852 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2851 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2853 [Fri Oct 26 10:58:22 2012] [warn] mod_fcgid: process 2892 graceful kill fail, sending SIGKILL php (pre-forking): Cannot allocate memory [Fri Oct 26 10:59:21 2012] [warn] mod_fcgid: process 2894 graceful kill fail, sending SIGKILL [Fri Oct 26 10:59:25 2012] [warn] mod_fcgid: process 2866 graceful kill fail, sending SIGKILL suexec.log: [2012-10-25 16:05:36]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:38]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:51]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:03]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:06]: uid: (1002/username) 
gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:35]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:27]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:29]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:31]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 21:42:12]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 22:56:50]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 02:34:43]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 04:25:07]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:35:19]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:40:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 07:22:45]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:10:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi based on the logs can any determine what might be the cause of this? Thinking that it might be the micro instance I'm thinking of upgrading to a small. Any help would be greatly appreciated.
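    The repeated "php (pre-forking): Cannot allocate memory" lines in the Apache log suggest the t1.micro is simply running out of RAM. A rough sketch to confirm and work around that (the 1 GB swap file size is an arbitrary choice):

      free -m                               # how much memory and swap is available?
      dmesg | grep -i oom                   # has the kernel OOM killer been firing?
      sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
      sudo chmod 600 /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile
      # Lowering mod_fcgid's process limits (e.g. FcgidMaxProcesses) also keeps memory use down.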

    Read the article

  • Xubuntu vsftpd couldn't restart

    - by Fara
    # sudo /etc/init.d/vsftpd restart Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service vsftpd restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop vsftpd ; start vsftpd. The restart(8) utility is also available. vsftpd start/running, process 3237 Then I tried this: # service vsftpd start vsftpd start/running, process 3275 # service vsftpd stop stop: Unknown instance: # service vsftpd restart stop: Unknown instance: vsftpd start/running, process 3315 # sudo service vsftpd restart stop: Unknown instance: vsftpd start/running, process 3358 I couldn't get vsftpd restarted; whenever I try to restart it, the above happens! How do I restart it? Please advise.
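    A hedged sketch of how to get Upstart and the running daemon back in sync; "stop: Unknown instance" usually means Upstart does not believe any instance is running, even though a vsftpd process was spawned:

      sudo initctl list | grep vsftpd       # what does Upstart think the state is?
      sudo pkill vsftpd                     # kill any stray vsftpd processes it lost track of
      sudo start vsftpd                     # let Upstart start (and track) a fresh instance
      status vsftpd                         # should now report start/running with a PID
      grep vsftpd /var/log/syslog           # if it dies immediately, look here for config errors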

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >