Search Results

Search found 11280 results on 452 pages for 'zend newbie dev'.


  • E: Sub-process /usr/bin/dpkg returned an error code (100)

    - by user67011
    Hello, I am running on Xen, Debian 5.0-i386-default. I hadn't touched my VPS in two months, then last night I ran the following command:

        myserver:/usr/bin# apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          makepasswd
        The following packages will be upgraded:
          libc6 libc6-dev libc6-xen libmysqlclient15off locales mysql-client
          mysql-client-5.0 mysql-common mysql-server mysql-server-5.0
        10 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        Need to get 0B/50.1MB of archives.
        After this operation, 483kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Preconfiguring packages ...
        E: Sub-process /usr/bin/dpkg returned an error code (100)

    I googled and it seems to be a permissions issue with dpkg. However, when I cd into /usr/bin there is no dpkg binary at all! Please help, thanks.
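
    A minimal, read-only diagnostic sketch for a situation like this (it assumes a Debian lenny layout and changes nothing):

        command -v dpkg                            # is dpkg anywhere on $PATH at all?
        ls -l /usr/bin/dpkg /bin/dpkg 2>/dev/null  # does the binary exist, and with what permissions?
        dpkg --version                             # can it execute?
        stat -c '%a %U:%G' /usr/bin/dpkg           # roughly 755 root:root is expected

    If the binary really is gone, it can be restored by downloading the dpkg .deb for the release from a Debian mirror and unpacking it by hand (ar x package.deb, then tar -xzf data.tar.gz -C /), since apt-get itself cannot run without dpkg.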


  • Adding expire headers to content served from CDN?

    - by mdolon
    I'm using MaxCDN to serve content for my blog via W3 Total Cache. The problem I'm running into when evaluating my site with Google Page Speed and YSlow is that expire headers are not being sent on content delivered from the CDN, nor is that content coming from a cookieless domain. Is this something that is completely in the hands of my CDN, or is it something I can fix in my server configuration? Some info about my setup:
    - nginx with php-fastcgi
    - WordPress 3.0
    - W3 Total Cache 0.9a (dev release)
    - MaxCDN
    - the site: http://devgrow.com/
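
    One way to narrow this down is to compare the headers returned by the origin and by the CDN edge. A quick sketch (the CDN hostname and asset path are illustrative, not taken from the question):

        # headers from the origin
        curl -sI http://devgrow.com/wp-content/themes/example/style.css | grep -iE 'cache-control|expires|set-cookie'
        # headers from the CDN pull zone (hostname is hypothetical)
        curl -sI http://cdn.devgrow.com/wp-content/themes/example/style.css | grep -iE 'cache-control|expires|set-cookie'

    Pull-zone CDNs such as MaxCDN generally mirror whatever the origin sends, so if nginx on the origin sets an expires header for static files, the CDN copies should pick up the same header; the cookie warning likewise goes away once the CDN hostname is a separate, cookie-free domain.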


  • Missing Driver - Video Controller (VGA Compatible)

    - by arahant
    I have an HP 2000-2106TU notebook PC running Windows XP. I want to get the brightness keys to work. They are placed on the F2 and F3 keys and are meant to be used in conjunction with the Fn key, but these combinations do not work, although other Fn keys such as the volume controls do. I see a missing driver for a device called Video Controller (VGA Compatible) in the Windows Device Manager. The hardware ID is PCI\VEN_8086&DEV_0106&SUBSYS_1858103C, which a Google search suggests is in the Intel HD Graphics family, but I don't know where to locate the driver. HP's driver scan does not help, as it does not show any missing driver related to video/graphics. What can I do next?


  • Best Practice for upgrading PHP On Production Systems

    - by Demic
    We have two load-balanced web servers running PHP 5.3. I've been asked by our dev team to upgrade PHP to 5.4 because they need certain functionality it will bring. The main issue is that 5.3 is the latest version built into the distro's repository, so to upgrade using the package manager I'll need to add a third-party repo. I don't have a problem with this per se, but I'm concerned about using a package from a "non-official" source. The other option is to compile PHP from source, but I guess this will prevent me from using the package manager to upgrade at any stage in the future? So I'm just looking for some guidance on which way to go: compile from source, or install from any old repo that purports to supply PHP 5.4? Or perhaps there's a third option I haven't considered? Thanks in advance, Demic
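
    If the third-party repository route is taken, a cautious sequence might look like the sketch below (the PPA name is an assumption commonly used for newer PHP builds, not a recommendation; verify the publisher first, and test on one load-balanced node while it is out of rotation):

        sudo add-apt-repository ppa:ondrej/php5    # Ubuntu; on Debian the dotdeb repo plays the same role
        sudo apt-get update
        apt-cache policy php5                      # confirm which version and origin apt would now install
        sudo apt-get install php5 php5-cli libapache2-mod-php5   # adjust to the SAPI actually in use
        php -v                                     # verify 5.4.x before putting the node back in rotation

    Compiling from source also works, but as the question notes it takes the package manager out of the loop; a middle ground is to pin the third-party origin in /etc/apt/preferences so only the PHP packages can come from it.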


  • Downgrading the PHP version

    - by aadiahg
    I upgraded my PHP from 5.3.5-1ubuntu7.11 to 5.3.18-1~dotdeb.0 and ran into a lot of problems afterwards:
    - localhost/phpmyadmin displays a blank screen.
    - apache2 shows a warning: [Sun Nov 04 12:11:21 2012] [warn] The Alias directive in /etc/apache2/conf.d/phpmyadmin.conf at line 3 will probably never match because it overlaps an earlier Alias.
    - Most CMSes can't be installed and show this error: Required MySQL version for CMS is 5.x but this server has: mysqlnd 5.0.8-dev
    - In the scripts that are already installed, some functions no longer work.
    I've googled a lot to fix these problems, and also looked into downgrading PHP from 5.3.18 to 5.3.x, but it doesn't work for me. Can you help please? Many thanks
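
    A hedged sketch of one way to roll back to the distribution build on Ubuntu (the version string is the one quoted in the question; package names depend on the SAPI in use, and backing up /etc/php5 first is prudent):

        # stop pulling packages from dotdeb
        sudo sed -i '/dotdeb/d' /etc/apt/sources.list
        # also check /etc/apt/sources.list.d/ for any dotdeb entries
        sudo apt-get update
        apt-cache policy php5                      # check that 5.3.5-1ubuntu7.11 is still offered
        sudo apt-get install php5=5.3.5-1ubuntu7.11 php5-cli=5.3.5-1ubuntu7.11 \
            php5-mysql=5.3.5-1ubuntu7.11 libapache2-mod-php5=5.3.5-1ubuntu7.11
        sudo service apache2 restart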


  • Run a specific command from a directory

    - by Cameron Kilgore
    I have a bash script where I need to run an init utility from within the directory that contains its configuration file. I don't think it's possible to explicitly pass the config file to the utility as an argument, so what I need to do is change to the directory with the config file and then run the command. I have some logic in place, but it's not working; the utility never runs. Is there any way I can tell the script to go to this directory and then run the command?
        cd /var/www/testing-dev.example.co
        eval "standardprofile"
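
    A minimal sketch of the same idea: running the step in a subshell keeps the rest of the script in its original directory, and dropping the eval avoids the quoting problem (it assumes standardprofile is on $PATH or sits in that directory):

        #!/bin/bash
        # run the utility from the directory that holds its configuration file
        (
            cd /var/www/testing-dev.example.co || exit 1   # abort this step if the directory is missing
            standardprofile                                # or ./standardprofile if it is not on $PATH
        )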


  • Services of virtual machines

    - by RredCat
    I am looking for a way to set up a dev machine in the cloud, so I can work on development from different places without having to mess around with syncing branches and so on. It would be Lubuntu with 1 GB or 512 MB of memory, and I want a way to set up my own image. What I have found so far: Amazon EC2 and Azure Virtual Machines. I am pretty sure there are more services like this, and I hope to find one suited to this purpose. I have experience working this way and I like it; unfortunately, that was a server belonging to a certain company, for that company's projects, so I can't use it for my private aims. Could anybody suggest anything?


  • How Do I Migrate 100 DBs From One MS-SQL 2008 Server To Another? (looking for automation)

    - by jc4rp3nt3r
    Let me start by saying that I am not a DBA, but I am in a position where I am responsible for moving just under 100 MS-SQL 2008 databases from our current development server to a new/better/faster development server. As this is just a local dev server, temporary downtime is acceptable, but I am looking for a way to move all of the databases, preferably in bulk. I know that I could take a .bak of each and restore it on the new server, but given the volume of databases I am looking for a more efficient way. I am not opposed to learning a new piece of software, writing code, or any other requirement, so long as it speeds up the process.


  • LVM incorrectly reported missing after power failure

    - by mensi
    We have had a major power failure in the data center. We use a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm; the resulting /dev/mdX devices are LVM physical volumes and belong to one big volume group holding all our data. After the power loss, one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes due to the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs, but one LV still does not come up. vgdisplay shows:
        [...]
        Cur PV: 3
        Act PV: 2
        [...]
    Neither vgscan nor pvscan show any missing devices. What went wrong? How can we force LVM to activate all PVs?
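
    A cautious sketch of the usual recovery steps (the volume group name is a placeholder; everything before vgchange is read-only):

        pvscan                        # every expected PV should be listed with the right UUID
        vgs -o vg_name,pv_count,lv_count,vg_attr
        vgck yourvg                   # sanity-check the VG metadata (name is a placeholder)
        vgchange -ay yourvg           # try to (re)activate every LV in the group
        lvscan                        # all LVs should now report ACTIVE

    If LVM still believes a PV is missing even though pvscan sees it, vgextend --restoremissing yourvg /dev/mdX (LVM2) can reattach it without a rebuild; as a last resort, vgcfgrestore replays the last known-good metadata from /etc/lvm/archive.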


  • Effective system to backup this setup?

    - by user71785
    I currently have my development environment on a USB hard drive. It has things like portable XAMPP, VirtualBox with an Ubuntu guest, portable Firefox and other dev tools. It works fantastically: I can attach it to almost any computer and everything works fine. However, if this drive decides to die on me, I will be close behind it. The problem is that I use this portable HD almost all the time, so I need a fast way to back up the entire drive. It is around 400 GB. Any advice?
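
    Two common approaches, sketched with illustrative destination paths: an rsync mirror, which is fast after the first run because only changed files move, or a raw image, which is exact but slow for 400 GB.

        # incremental mirror of the mounted drive
        rsync -aHAX --delete /media/devdrive/ /backups/devdrive-mirror/

        # or a compressed full image of the raw device (unmount it first; /dev/sdX is a placeholder)
        sudo dd if=/dev/sdX bs=4M conv=noerror,sync | gzip -c > /backups/devdrive.img.gz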


  • Western Digital Caviar SE16 not recognized

    - by NStorm
    Before I start: I have been looking at quite a few websites and I still have not found an answer to my problem. I have been building my own computer recently and have just received the hard drive (WD Caviar SE16 WD5000AAKS) I was planning to put in it. After connecting the SATA power cable (99.99999% sure it is connected correctly) and the SATA cable to my motherboard (ASUS M5A78L-M/USB3), I booted the computer into a Linux Mint 13 XFCE 64-bit live USB, expecting to see the hard drive when I came to install. Sadly, when I checked, the only drive showing was /dev/sda, which was the USB stick with the Linux files on it. I also checked GParted, and no hard drive other than my USB was showing up there either. Lastly, I checked my BIOS, and no matter which SATA port I connected the HDD to, it wouldn't show up there either. Does anyone have any advice? Some images of my set-up which could help are below. Thanks in advance, Nick


  • Permissions destroyed

    - by n00b32
    Yes, yes, I know I'm stupid, but while working very late I tried to fix one thing ASAP and, because of a spelling mistake, ended up running chmod 777 -R /*. It left permissions like this:
        dr-xr-xr-x   2 root root  4096 2011-02-15 13:12 bin
        drwx------   3 root root  4096 2010-09-07 15:57 boot
        d-wx-wx-wx  15 root root 13680 2010-12-11 05:48 dev
        drwx------   3 root root  4096 2010-09-09 05:24 emul
        d-wx-wx-wx 110 root root  4096 2011-03-07 07:12 etc
        drwx------   2 root root  4096 2010-09-10 04:35 firewall
    Can someone send me a tree of the correct permissions for these directories on Debian, so I'll have a lot less work? Is there another way I can fix them?
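
    There is no complete undo for a recursive chmod, but two things help: dpkg lays down package-owned files with their correct permissions when a package is reinstalled, and a healthy machine with the same package set can be used to generate a fix-up script. A rough sketch (treat it as illustrative; review the generated script before running it, and expect to fix /home, /var data and the like by hand):

        # on a healthy Debian box with a similar package set:
        find / -xdev \( -type d -o -type f \) -printf 'chmod %m "%p"\n' > fix-perms.sh
        # copy fix-perms.sh to the damaged machine, review it, then run it there:
        sh fix-perms.sh
        # package-owned files can also be re-laid-down by reinstalling:
        apt-get install --reinstall coreutils dpkg apt    # extend to other critical packages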


  • Change Google Chrome's Process model?

    - by mobius42
    See here: http://imgur.com/lKffI.png Does anyone here know how to stop Chrome doing this? Chrome seems to group all tabs I open from the same page into one process. If I copy and paste the links individually into separate tabs, it creates new processes, but when I just middle-click links, it groups them into one. I want to force Chrome to create a new process for every tab, because when one page locks up it freezes pretty much all the tabs I have open, and if one of the tabs crashes it takes the rest with it. You can apparently alter Chrome's process model to one called "--process-per-tab", which seems to be what I'm looking for, but when I try to open Chrome with this argument via the terminal, it doesn't work. It's likely I'm not using the correct command; what I tried was:
        /Applications/"Google Chrome.app"/Contents/MacOS/"Google Chrome" --process-per-tab
    I'm on OS X and using the latest dev build, 5.0.396.0.
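
    One frequent catch on OS X is that if Chrome is already running, a new invocation just opens a window in the existing instance and the flag never takes effect. A sketch that forces a fresh instance (the flag itself is the one from the question):

        # quit any running instance first, then start a new one with the flag
        osascript -e 'quit app "Google Chrome"'
        open -na "Google Chrome" --args --process-per-tab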


  • Can a power supply affect I/O?

    - by user101289
    I have a dev server machine running Ubuntu 12.04. For a long while it's been throwing intermittent errors where it would suddenly tell me "File system is read only" or drop into a GRUB error console on boot. I've done disk checks, bad blocks, etc. and no real problems with the main SATA drive were detected. Finally the drive would not be detected at all-- but neither would other drives I plugged in (via SATA). I plugged the supposedly "bad" drive into another server and it worked fine, no issues, for days-- so I assumed the motherboard had a bad SATA controller, and replaced the motherboard with an identical model. I replaced the drive into the original machine with the new motherboard, rebooted-- and the same issues-- I/O errors, failure to read the drive at all, dropping into GRUB, etc. I'm wondering if there could be some other issue with this machine, that's not related to the drive-- possibly power supply? Thanks for ideas


  • Apache 406 error with JPEGCam

    - by BenM
    We have recently migrated our website to a new server, and JPEGCam is now reporting a 406 (Not Acceptable) error when trying to upload the image from the SWF to the server. I know that it isn't supported any more by the developer, but I wondered if anyone has encountered this problem before, or knows of a 'simple' fix. I suspected the issue was with the mod_security module in Apache (i.e. not whitelisting Shockwave Flash), but the server admins have also drawn a blank. The request is sent as a POST and returns a 406 error, yet according to the Apache logs it is returning a 404. Everything was working on our XAMPP-flavoured dev server, so I am 100% certain this is an Apache issue. Also, when trying to access the requested page directly in the web browser, everything gets served up without a problem.
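
    To confirm whether mod_security (or content negotiation) is rejecting the request based on the headers Flash sends, the upload can be replayed outside the browser. A sketch with curl (the endpoint and test file are illustrative, not from the question):

        # mimic the SWF: Flash user agent, raw JPEG body
        curl -v -A 'Shockwave Flash' -H 'Content-Type: image/jpeg' \
             --data-binary @test.jpg http://www.example.com/upload.php

    If the curl request succeeds where the SWF fails, the 406 most likely comes from a mod_security rule matching on the Flash user agent or a missing Accept header; the rule ID recorded in modsec_audit.log can then be whitelisted for that URL.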


  • What is a good way to get back to the command prompt discarding STDOUT and STDERR

    - by elementz
    I often launch applications from the CLI via e.g. command & to get back to the prompt immediately. The downside of this is that I still get STDOUT and STDERR, so I use command &> /dev/null to discard those outputs. This can become quite a chore when you have to type it often during a day. So my question is: is there a better (read: shorter) way to discard STDOUT and STDERR when they are not needed? What could be done? Write a wrapper script to launch applications? What would be an elegant way to do this?
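
    A small shell function in ~/.bashrc keeps the typing down; the name is arbitrary:

        # launch a command in the background with both output streams discarded
        quiet() {
            "$@" > /dev/null 2>&1 &
        }
        # usage:
        quiet firefox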


  • Rebuild mdadm RAID5 array with fewer disks

    - by drjeep
    I have a 4 disk RAID5 array, one of which is starting to fail according to smartd. However, since I'm using less than half the space on /dev/md0, I'd like to rebuild the array without the failing disk. The closest scenario I've been able to find online has been this post, however it contains bits that don't apply to me (LVM volumes) and also doesn't explain how I go about resizing the partition after I'm done. Please note I have backups of important data, but I'd like to avoid rebuilding the array from scratch if possible.
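
    For reference, the usual sequence for going from four devices to three, sketched with placeholder sizes and an assumed ext3/ext4 filesystem directly on /dev/md0. Every number must be checked against the real array, and the order matters: filesystem first, then array size, then device count.

        umount /dev/md0                                     # or work from rescue media
        e2fsck -f /dev/md0
        resize2fs /dev/md0 <new-fs-size>                    # shrink the FS below the 3-disk capacity
        mdadm --grow /dev/md0 --array-size=<new-size-KiB>   # shrink the usable size of the md device
        mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.bak
        # when the reshape finishes, the surplus disk becomes a spare and can be removed:
        mdadm /dev/md0 --remove /dev/sdd1                   # /dev/sdd1 is a placeholder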


  • What are the common Linux (RHEL) commands for SAN-related activities? How do I check if a LUN is attached to the computer?

    - by Nishant
    How do I check whether a LUN has been presented to my server? What are the Linux commands for that? Do the LUNs show up in fdisk -l the way a normal /dev/sda gets listed? What other commands are associated with general SAN-related checks in Linux? What is a WWN, and what relevance does it have? Also, please explain multipathing: if we have LUNs, what is the use of multipathing? A bit lengthy, but I am not able to get a grasp on the topic. Any help would be appreciated.
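
    A few of the usual read-only checks on a RHEL-style box, sketched (host numbers and package availability vary):

        cat /proc/scsi/scsi                          # classic view of every attached SCSI/FC device
        lsscsi                                       # nicer listing, if the lsscsi package is installed
        fdisk -l                                     # presented LUNs do show up here as /dev/sdX devices
        multipath -ll                                # paths and LUN WWIDs, if device-mapper-multipath is set up
        systool -c fc_host -v | grep -i port_name    # WWPN/WWNN of the local HBAs (sysfsutils package)
        echo "- - -" > /sys/class/scsi_host/host0/scan   # as root: rescan host0 after a new LUN is presented

    As for multipathing: a LUN is normally reachable over more than one HBA/fabric path, so the same disk appears several times as /dev/sd*. multipathd collapses those duplicates into a single /dev/mapper device and keeps I/O flowing if one path fails.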


  • MySQL max_user_connections

    - by Sheriffen
    We're releasing a site in a couple of weeks that was developed on a local machine, but now that we're testing on the dev server we get the MySQL error 'max_user_connections'. We have talked to the hosting company (the biggest in Sweden) and they say that we don't close our connections properly. The thing is, we use the EXACT same code on another host where it works. I also added echo "closed"; to the database_close function, so at the very bottom of every page there is now "closed". To me this means that we do close the connection. Does anyone have any idea what could be wrong? We connect through PHP PDO and close the connection by setting it to null, all according to the manual.
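
    Two things worth checking, sketched below: whether the connections are persistent (a PDO handle opened with ATTR_PERSISTENT is pooled and is not torn down by setting it to null), and how many connections the host actually allows versus how many are currently open. Credentials are placeholders:

        mysql -u youruser -p -e "SHOW VARIABLES LIKE 'max_user_connections';
                                 SHOW STATUS LIKE 'Threads_connected';
                                 SHOW FULL PROCESSLIST;"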


  • GitHub updating repository?

    - by user1804933
    I am trying to set up GitHub on my server and have gotten to the point where I am running the command "git push -u origin master". However, a large file was detected and the following error was returned:
        remote: error: GH001: Large files detected.
        remote: error: Trace: 5520a70fd2eeaa2eafd7de049a590fb5
        remote: error: See http://git.io/iEPt8g for more information.
        remote: error: File app/logs/dev.log is 2041.59 MB; this exceeds GitHub's file size limit of 100 MB
    I ended up deleting that file and tried pushing again, but I keep running into the same error. Any ideas on how to work around this?
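
    Deleting the file in a new commit is not enough, because the 2 GB blob is still in the history being pushed. A sketch of removing it from every commit (history rewriting is destructive, so work on a fresh backup clone first):

        # stop tracking the log and ignore it going forward
        git rm --cached app/logs/dev.log
        echo 'app/logs/' >> .gitignore
        git commit -m 'Stop tracking dev.log'
        # strip the blob from all earlier commits as well
        git filter-branch --force --index-filter \
            'git rm --cached --ignore-unmatch app/logs/dev.log' \
            --prune-empty --tag-name-filter cat -- --all
        git push -u origin master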


  • How would you rewrite/refactor this?

    - by frostings
    An old application that is used by 50,000-60,000 paying customers. The company is several hundred people strong. The application has a lot of business-critical code (30% of all code) written in classic ASP, a lot more .NET code, and a COM+ bridge for enabling the ASP to "talk" to .NET. The organization lacks some (or a lot of) knowledge about what is causing the 10-20% server-reset per day (it might be due to COM+?). There is no red line through the application: no architecture, no real patterns, etc. The application has been like this for at least 5 years, and the ASP code base is increasing, slowly but certainly. I have read refactoring stories and I know why you sometimes should not rewrite a system. I would love for the old ASP code to vanish, along with the COM+ component. But the pain is that no one really knows what is going on inside the classic ASP code, and the attitude inside all the teams is "this is just how it is". Down the line, this causes a lot of other issues with recruiting, dev efficiency, business needs that cannot be met, scale, etc. With these few facts, does that justify a rewrite of the ASP code and the removal of the COM+ component? How would you go about it?


  • SVN: Working with branches using the same working copy

    - by uXuf
    We've just moved to SVN from CVS. We have a small team; everyone checks in code on the trunk, and we have never used branches for development. We each have directories on a remote dev server with the codebase checked out. Each developer works in their own sandbox with an associated URL to pull up the app in a browser (something like the setup here: Trade-offs of local vs remote development workflows for a web development team). I've decided that for my current project I'll use a branch, because it will span multiple releases. I've already cut a branch, but I am using the same directory as the one originally checked out (i.e. for the trunk). Since it's the same directory (or working copy) for both the branch and the trunk, if, for example, a bug pops up in the app, I switch to the trunk and commit the change there, then switch back to my branch for my project development. My questions are: Is this a sane way to work with branches? Are there any pitfalls I need to be aware of? What would be the optimal way to work with branches if separate working copies are out of the question? I haven't had issues yet, as I have just started working this way, but all the tutorials/books/blog posts I have seen about branching with SVN imply working with different working copies (or perhaps I haven't come across an explanation of mixed working copies in plain English). I just don't want to be sorry three months down the road when it's time to integrate the branch back into the trunk.
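
    For reference, the single-working-copy workflow hinges on svn switch; a sketch with illustrative branch names (the caret syntax needs Subversion 1.6+):

        svn switch ^/branches/my-feature    # repoint the working copy at the branch
        svn info | grep ^URL                # always confirm what a commit would go against
        # a trunk bug interrupts:
        svn switch ^/trunk
        # fix, commit, then return:
        svn switch ^/branches/my-feature
        svn status                          # watch for "S"-flagged (switched) or mixed-revision items

    The main pitfalls are uncommitted local changes following you across the switch and accidentally committing to the wrong line of development, which is why the svn info check is worth making a habit.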


  • PPTP server stuck at "GRE: Bad checksum from pppd"

    - by user92516
    I am a network engineer with quite limited experience with Ubuntu. I have been following these online instructions to set up a PPTP server, but without much luck getting it to work. My server is a VM running on an Apple Xserve behind a Cisco firewall, and I made sure TCP 1723 and GRE are open for the box. Below is the syslog output; it looks like I always get stuck at "GRE: Bad checksum from pppd". I'm running Ubuntu 10.04.
        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: Reaping child PPP[1232]
        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: Client 166.137.85.165 control connection finished
        Sep 24 13:22:41 ubuntu pptpd[1276]: MGR: connections limit (100) reached, extra IP addresses ignored
        Sep 24 13:22:41 ubuntu pptpd[1277]: MGR: Manager process started
        Sep 24 13:22:41 ubuntu pptpd[1277]: MGR: Maximum of 100 connections available
        Sep 24 13:22:50 ubuntu pptpd[1278]: CTRL: Client 166.137.85.165 control connection started
        Sep 24 13:22:51 ubuntu pptpd[1278]: CTRL: Starting call (launching pppd, opening GRE)
        Sep 24 13:22:51 ubuntu pppd[1279]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Sep 24 13:22:51 ubuntu pppd[1279]: pppd 2.4.5 started by root, uid 0
        Sep 24 13:22:51 ubuntu pppd[1279]: Using interface ppp0
        Sep 24 13:22:51 ubuntu pppd[1279]: Connect: ppp0 <--> /dev/pts/1
        Sep 24 13:22:51 ubuntu pptpd[1278]: GRE: Bad checksum from pppd.
        Sep 24 13:23:21 ubuntu pppd[1279]: LCP: timeout sending Config-Requests
        Sep 24 13:23:21 ubuntu pppd[1279]: Connection terminated.
        Sep 24 13:23:21 ubuntu pppd[1279]: Modem hangup
        Sep 24 13:23:21 ubuntu pppd[1279]: Exit.
        Sep 24 13:23:21 ubuntu pptpd[1278]: GRE: read(fd=6,buffer=805a540,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: Reaping child PPP[1279]
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: Client 166.137.85.165 control connection finished
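
    The "GRE: Bad checksum from pppd" line is usually harmless noise; the line that actually kills the session is "LCP: timeout sending Config-Requests", which most often means GRE (IP protocol 47) is not making the round trip through the firewall or a NAT in between. A few checks, sketched (the interface name is a placeholder):

        # is GRE arriving from the client AND leaving back towards it?
        sudo tcpdump -ni eth0 proto gre
        # is protocol 47 permitted by the local firewall in both directions?
        sudo iptables -L -n -v | grep -i gre
        # if the server or client sits behind NAT, load the PPTP conntrack helpers
        sudo modprobe nf_conntrack_pptp nf_nat_pptp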


  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald's company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer, or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 "Team Project" per customer, and use areas to create "sub projects" within that single "Team Project".

        "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

        "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP

        "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP

        "@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations." - Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called "SSW" that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had "\SSW\Sprint 1" we now have "\SSW\SqlDeploy\Sprint 1", with "SqlDeploy" being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model "out-of-the-box". These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.

    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it'll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas

    Areas should be used for topological classification/isolation of work items. You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project/Product that we want to chop our Team Project into.

    Figure: Creating a sub area to represent a product/project is easy.

        <teamproject>
        <teamproject>\<Functional Area/module whatever>
    Becomes:
        <teamproject>
        <teamproject>\<ProjectName>\
        <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations

    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.

    Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration.

        <teamproject>\Sprint 1
        Or
        <teamproject>\Release 1\Sprint 1
    Becomes:
        <teamproject>\<ProjectName>\Sprint 1
        Or
        <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries

    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the work items every time we changed project; that would also only allow one team to work on one project at a time.

    Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an "_Area Template" folder that has a copy of the project layout, with all the queries set up to filter based on the "_Area Template" Area and the "_Sprint template" you can see in the Area and Iteration views.

    Figure: The template is currently easily drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.

    I then created an "Areas" folder to hold all of the area-specific queries. So, when you go to create a new TFS Sub-Project, you just drag "_Area Template" while holding "Ctrl" and drop it onto "Areas". There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.

    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.

    Version Control

    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.

    Figure: Creating sub folders in source control is as easy as "Right click | Create new folder".

        <teamproject>\DEV\Main\
    Becomes:
        <teamproject>\<ProjectName>\DEV\Main\

    Conclusion

    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.

    Pros

    - You only have one project to upgrade when a process template changes – after going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – you will always have the same process implementation across projects/products, without exception.
    - You get tighter control over the permissions – yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can "move" work items from one "product" to another – have we not always wanted to do that?
    - You can rename your projects – wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage – you set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies – there is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified alerts – as alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons

    All of these cons could be mitigated by a custom tool that helps automate creation of "Sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint and queries. It just does not exist yet :)

    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirement) – if you have two projects/products in the same Team Project, then you will not see the burndown for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site to a TFS-linked portal, it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
    - Every team wants their own customization (via Ewald Hofman) – small teams of 2 people and teams of 30 – or even outsourcing – need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – as the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.

    Feedback

    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

