Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:
    - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
    - in iCal I write down events for each day, or for a concrete time (13 to 14 hours); each day I set up the tasks I want to accomplish and allocate hours to them
    - I use Things (by Cultured Code) to keep info about tasks and projects that are not very important and not yet time-allocated (GTD name = someday)
    - I use Mail on Mac and create folders, named after the different projects, for the mails I want to process
    - I save the main info for each project in FreeMind maps
    My system works well at the moment but it is pretty complicated to use. I want to make it better and I am looking for something with these requirements:
    - must be 100% offline accessible
    - should use as few programs/resources as possible, ideally just one program able to manage all my info
    - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar
    - I can have different daily/weekly, etc. views on a calendar to see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter, I will pay for this
    So, according to your experience, can you recommend me something like this? Thanks

    Read the article

  • error 503: Can't deploy rails 3 app with apache + thin (bitnami ruby stack)

    - by Pacu
    As you'll notice, I'm a bit of a noob on Rails. Here's the thing: I have an EC2 Bitnami RubyStack AMI running. I'm trying to deploy the sample project to be sure I'm doing the right thing, but I'm not getting anywhere at all. I just get a 503 error. I'm following Bitnami's docs on thin + apache. Here are my files:

    The httpd.conf I include in the main httpd.conf:

        Alias /sample "/home/bitnami/stack/projects/sample/public"
        <Directory "/home/bitnami/stack/projects/sample/public">
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        ProxyPass /sample balancer://appcluster
        ProxyPassReverse /sample balancer://appcluster
        <Proxy balancer://appcluster>
            BalancerMember http://127.0.0.1:3001/sample
            BalancerMember http://127.0.0.1:3002/sample
            BalancerMember http://127.0.0.1:3003/sample
            BalancerMember http://127.0.0.1:3004/sample
        </Proxy>

    The thin.yml file:

        chdir: /opt/bitnami/projects/sample
        environment: production
        address: 127.0.0.1
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 512
        require: []
        wait: 30
        servers: 5
        prefix: /sample
        daemonize: true

    I'm able to start and stop apache, but thin does not stop correctly. When I try to stop thin, I get this output:

        /opt/bitnami/projects/sample$ sudo thin -C config/thin.yml stop
        Stopping server on 127.0.0.1:3000 ...
        Can't stop process, no PID found in tmp/pids/thin.3000.pid
        Stopping server on 127.0.0.1:3001 ...
        Can't stop process, no PID found in tmp/pids/thin.3001.pid
        Stopping server on 127.0.0.1:3002 ...
        Can't stop process, no PID found in tmp/pids/thin.3002.pid
        Stopping server on 127.0.0.1:3003 ...
        Can't stop process, no PID found in tmp/pids/thin.3003.pid
        Stopping server on 127.0.0.1:3004 ...
        Can't stop process, no PID found in tmp/pids/thin.3004.pid

    I've tried to use nginx as well, without any luck unfortunately. Thank you for your time and help!

    Read the article

  • How to get nested chain of objects in Linq and MVC2 application?

    - by Anders Svensson
    I am getting all confused about how to solve this problem in Linq. I have a working solution, but the code to do it is way too complicated and circular, I think. I have a timesheet application in MVC 2. I want to query the database, which has the following tables (simplified): Project, Task, TimeSegment. The relationships are as follows: a project can have many tasks, and a task can have many timesegments. I need to be able to query this in different ways. An example is this: a View is a report that will show a list of projects in a table. Each project's tasks will be listed, followed by a sum of the number of hours worked on that task. The TimeSegment object is what holds the hours.

    Here's the View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Report.Master" Inherits="System.Web.Mvc.ViewPage<Tidrapportering.ViewModels.MonthlyReportViewModel>" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Månadsrapport
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h1>Månadsrapport</h1>
            <div style="margin-top: 20px;">
                <span style="font-weight: bold">Kund: </span>
                <%: Model.Customer.CustomerName %>
            </div>
            <div style="margin-bottom: 20px">
                <span style="font-weight: bold">Period: </span>
                <%: Model.StartDate %> - <%: Model.EndDate %>
            </div>
            <div style="margin-bottom: 20px">
                <span style="font-weight: bold">Underlag för: </span>
                <%: Model.Employee %>
            </div>
            <table class="mainTable">
                <tr>
                    <th style="width: 25%">Projekt</th>
                    <th>Specifikation</th>
                </tr>
                <% foreach (var project in Model.Projects) { %>
                <tr>
                    <td style="vertical-align: top; padding-top: 10pt; width: 25%">
                        <%: project.ProjectName %>
                    </td>
                    <td>
                        <table class="detailsTable">
                            <tr>
                                <th>Aktivitet</th>
                                <th>Timmar</th>
                                <th>Ex moms</th>
                            </tr>
                            <% foreach (var task in project.CurrentTasks) { %>
                            <tr class="taskrow">
                                <td class="task" style="width: 40%"><%: task.TaskName %></td>
                                <td style="width: 30%"><%: task.TaskHours.ToString() %></td>
                                <td style="width: 30%"><%: String.Format("{0:C}", task.Cost) %></td>
                            </tr>
                            <% } %>
                        </table>
                    </td>
                </tr>
                <% } %>
            </table>
            <table class="summaryTable">
                <tr>
                    <td style="width: 25%"></td>
                    <td>
                        <table style="width: 100%">
                            <tr>
                                <td style="width: 40%">Totalt:</td>
                                <td style="width: 30%"><%: Model.TotalHours.ToString() %></td>
                                <td style="width: 30%"><%: String.Format("{0:C}", Model.TotalCost) %></td>
                            </tr>
                        </table>
                    </td>
                </tr>
            </table>
            <div class="price">
                <table>
                    <tr>
                        <td>Moms: </td>
                        <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.VAT) %></td>
                    </tr>
                    <tr>
                        <td>Att betala: </td>
                        <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.TotalCostAndVAT) %></td>
                    </tr>
                </table>
            </div>
        </asp:Content>

    Here's the action method:

        [HttpPost]
        public ActionResult MonthlyReports(FormCollection collection)
        {
            MonthlyReportViewModel vm = new MonthlyReportViewModel();
            vm.StartDate = collection["StartDate"];
            vm.EndDate = collection["EndDate"];
            int customerId = Int32.Parse(collection["Customers"]);
            List<TimeSegment> allTimeSegments = GetTimeSegments(customerId, vm.StartDate, vm.EndDate);
            vm.Projects = GetProjects(allTimeSegments);
            vm.Employee = "Alla";
            vm.Customer = _repository.GetCustomer(customerId);
            vm.TotalCost = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.Cost); // Corresponds to the foreach above
            vm.TotalHours = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.TaskHours);
            vm.TotalCostAndVAT = vm.TotalCost * 1.25;
            vm.VAT = vm.TotalCost * 0.25;
            return View("MonthlyReport", vm);
        }

    And the "helper" methods:

        public List<TimeSegment> GetTimeSegments(int customerId, string startdate, string enddate)
        {
            var timeSegments = _repository.TimeSegments
                .Where(timeSegment => timeSegment.Customer.CustomerId == customerId)
                .Where(timeSegment => timeSegment.DateObject.Date >= DateTime.Parse(startdate)
                    && timeSegment.DateObject.Date <= DateTime.Parse(enddate));
            return timeSegments.ToList();
        }

        public List<Project> GetProjects(List<TimeSegment> timeSegments)
        {
            var projectGroups = from timeSegment in timeSegments
                                group timeSegment by timeSegment.Task into g
                                group g by g.Key.Project into pg
                                select new { Project = pg.Key, Tasks = pg.Key.Tasks };

            List<Project> projectList = new List<Project>();
            foreach (var group in projectGroups)
            {
                Project p = group.Project;
                foreach (var task in p.Tasks)
                {
                    task.CurrentTimeSegments = timeSegments.Where(ts => ts.TaskId == task.TaskId).ToList();
                    p.CurrentTasks.Add(task);
                }
                projectList.Add(p);
            }
            return projectList;
        }

    Again, as I mentioned, this works, but of course it is really complex and I get confused myself just looking at it, even now that I'm coding it. I sense there must be a much easier way to achieve what I want. Basically you can tell from the View what I want to achieve: I want to get a collection of projects. Each project should have its associated collection of tasks. And each task should have its associated collection of timesegments for the specified date period. Note that the projects and tasks selected must also only be the projects and tasks that have timesegments for this period; I don't want all the projects and tasks that have no timesegments within this period. It seems the group-by Linq query at the beginning of the GetProjects() method sort of achieves this (if extended to have the conditions for date and so on), but I can't return this and pass it to the view, because it is an anonymous object. I also tried creating a specific type in such a query, but couldn't wrap my head around that either... I hope there is something I'm missing and there is some easier way to achieve this, because I need to be able to do several other different queries as well eventually. I also don't really like the way I solved it with the "CurrentTimeSegments" properties and so on. These properties don't really exist on the model objects in the first place; I added them in partial classes to have somewhere to put the filtered results for each part of the nested object chain... Any ideas?

    Read the article

  • How to create a new Team Project Collection in TFS2010:

    - by jehan
    TFS 2010 has introduced the notion of the Team Project Collection (TPC). I have already discussed TPC in an earlier post; you can check it out here. In this post, I will demonstrate how to create a new Team Project Collection in TFS 2010.

    First, you have to open the TFS Administration Console (Start -> All Programs -> Microsoft Team Foundation Server 2010 -> Team Foundation Server Administration Console), expand the Application Tier node in the TFS Administration Console and click on Team Project Collections. Here you will see the TPCs which already exist; I have only one TPC, named New Collection, and I'm going to create a new TPC called Demo Collection. To create a new Team Project Collection, you need to click on Create Collection; it will open the Create New Team Project Collection window.

    Under the Name tab, you have to enter the name you want to give your new TPC (I am naming it Demo Collection). You can also provide a description of your TPC in the Description tab, which is optional, and click Next. Next, you need to enter the name of the SQL Server instance where you want your new TPC data to reside. You have the option either to create a database for this TPC or to use an already existing empty database, and then click Next.

    On the next screen, you have to choose the SharePoint configuration. Here you have the option to configure the SharePoint site for the TPC at the default location, to specify your existing SharePoint site, or to not configure SharePoint for this collection at all; if you choose the last option then you cannot configure SharePoint sites for the Team Projects under this Project Collection. You also have the flexibility to create a SharePoint site for this TPC later on, but then, if you need it, you will have to configure the SharePoint site for the existing team projects manually.

    The next screen is the Reports configuration. Here you have the option to configure the Reports for the TPC at the default path, to specify the path of an existing Reports folder, or to not configure Reports for this collection; if you choose the last option then you cannot create Reports for the Team Projects under this Project Collection. Here also you can enable reporting for this TPC later on.

    The next screen is related to the Lab Management configuration. Lab Management is a new feature in TFS 2010 which enables users to create and manage virtual test environments where you can deploy and test your application. There are no options available here, as I don't have Lab Management configured for my Team Foundation Server.

    The next screen is the Review Configuration window, which shows all the configuration settings you have specified, so that you can review the configurations before creating the Team Project Collection. If you want to make any changes to the configurations, you can go back to the previous windows and make the changes. After reviewing the configuration settings, you can click on the Verify button, which will verify whether your Team Project Collection is ready to be created or not; it will show the errors and warnings (if any) which could make your Team Project Collection creation fail. You can then choose to create the Team Project Collection if the verify step doesn't throw any warnings or errors.

    If the verify step throws any errors, it is strongly suggested that you first rectify the issues and only then go for TPC creation; this applies especially to warnings, as it is a common practice to overlook them.

    If you choose the create option, it will start the process of creating the Team Project Collection, and once it has completed you can check the configuration status of the different components. You can see in the screen below that all the components were configured successfully.

    On the next screen, you can find the location of the log file created for this Team Project Collection creation. This log file is really important in case of a creation failure, because it will help you to find the root cause of the failure. Now you can see that the new Team Project Collection (Demo Collection) which was created is available in the Team Project Collections tab and its status is Online.

    You can now try to connect to this Team Project Collection from Team Explorer. Choose the newly created Team Project Collection and click on Connect. This Team Project Collection is empty because no Team Projects have been created yet. Now you can create new Team Projects and start working.

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and more efficient hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article what is different about the columnstore type of index. Column store indexes are run by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches: in the case of row store indexes, multiple pages will contain multiple rows, with the columns spanning across multiple pages; in the case of columnstore indexes, multiple pages will contain multiple single columns. This means only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will be in memory and will not need to be retrieved. Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a dataset which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.
        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ([SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I will run the query which uses the regular index and note its IO usage. After that we will create the columnstore index and measure the IO of the same query.

        -- Performance Test
        -- Comparing Regular Index with ColumnStore Index
        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        GO
        -- Select Table with regular Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO
        -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.

        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Select Table with Columnstore Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed in the query are stored in the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the columnstore index performs way better than the regular index in this case. Let us clean up the database.

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In future posts we will see cases where the columnstore index is not an appropriate solution, as well as a few other tricks and tips for the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011)

    Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability. Download here

    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011)

    This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases on to a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management, and provide a detailed case study that illustrates the best practices. Download here

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most processors today provide multiple cores and/or multiple threads. With multiple cores and/or threads we need to change how we tackle problems in code. Yes, we can still continue to write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing threads or the ThreadPool can be a little daunting. The .NET 4.0 Framework drastically simplifies the process of utilizing the extra processing power through the Task Parallel Library (TPL). This article covers the topics "Data Parallelism", "Parallel LINQ (PLINQ)" and "Task Parallelism".
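
    As a rough illustration of the three topics the article mentions (sketch code, not taken from the CodeGuru article), the TPL and PLINQ let the same top-down loop style spread its work across the available cores:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class TplSketch
        {
            static void Main()
            {
                int[] numbers = Enumerable.Range(1, 1000000).ToArray();

                // Data parallelism: Parallel.For splits the iterations across the available cores.
                long[] squares = new long[numbers.Length];
                Parallel.For(0, numbers.Length, i =>
                {
                    squares[i] = (long)numbers[i] * numbers[i];
                });

                // Parallel LINQ (PLINQ): AsParallel() runs the query on multiple cores.
                long evenSum = numbers.AsParallel().Where(n => n % 2 == 0).Sum(n => (long)n);

                // Task parallelism: run two independent pieces of work concurrently.
                Task<long> maxTask = Task.Factory.StartNew(() => squares.Max());
                Task<long> minTask = Task.Factory.StartNew(() => squares.Min());
                Task.WaitAll(maxTask, minTask);

                Console.WriteLine("Even sum: {0}, max square: {1}, min square: {2}",
                    evenSum, maxTask.Result, minTask.Result);
            }
        }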

    Read the article

  • Creating Hosting Accounts in WHM on a Single IP

    - by Daniel Hanly
    I've just purchased a VPS with the hope of transferring multiple shared hosting accounts onto it. The problem is that I've only got 2 IP addresses with my VPS. I can create an account and assign it an IP address, but once I've done this once, I can't do it again. (1 IP address is my main root WHM IP, the other is my new hosting account IP). Can I create multiple hosting accounts and use the same IP? How would I manage multiple hosting accounts in this way? The domain for this hosting account has been purchased by the client, and they hold it (can't transfer for 60 days), so I need to adjust the DNS settings to redirect to my newly created hosting area - how can I do this without a dedicated IP address?

    Read the article

  • Inside Red Gate - Project teams

    - by Simon Cooper
    Within each division in Red Gate, development effort is structured around one or more project teams; currently, each division contains 2-3 separate teams. These are self-contained units responsible for a particular development project.

    Project team structure

    The typical size of a development team varies, but is normally around 4-7 people - one project manager, two developers, one or two testers, a technical author (who is responsible for the text within the application, website content, and help documentation) and a user experience designer (who designs and prototypes the UIs). However, team sizes can vary from 3 up to 12, depending on the division and project. As a rule, all the team sits together in the same area of the office. (Again, this is my experience of what happens. I haven't worked in the DBA division, and SQL Tools might have changed completely since I moved to .NET. As I mentioned in my previous post, each division is free to structure itself as it sees fit.) Depending on the project, and the other needs in the division, the tech author and UX designer may be shared between several projects. Generally, developers and testers work on one project at a time. If the project is a simple point release, then it might not need a UX designer at all. However, if it's a brand new product, then a UX designer and tech author will be involved right from the start. Developers, testers, and the project manager will normally stay together in the same team as they work on different projects, unless there's a good reason to split or merge teams for a particular project. Technical authors and UX designers will normally go wherever they are needed in the division, depending on what each project needs at the time. In my case, I was working with more or less the same people for over 2 years, all the way through SQL Compare 7, 8, and Schema Compare for Oracle. This helped to build a great sense of camaraderie within the team, and helped to form and maintain a team identity. This, in turn, meant we worked very well together, and so the final result was that much better (as well as making the work more fun).

    How is a project started and run?

    The product manager within each division collates user feedback and ideas, does lots of research, throws in a few ideas from people within the company, and then comes up with a list of what the division should work on in the next few years. This is split up into projects, and after each project is greenlit (I'll be discussing this later on) it is then assigned to a project team, as and when they become available (I'm sure there's lots of discussions and meetings at this point that I'm not aware of!). From that point, it's entirely up to the project team. Just as divisions are autonomous, project teams are also given a high degree of autonomy. All the teams in Red Gate use some sort of vaguely agile methodology; most use some variations on SCRUM, some have experimented with Kanban. Some store the project progress on a whiteboard, some use our bug tracker, others use different methods. It all depends on what the team members think will work best for them to get the best result at the end. From that point, the project proceeds as you would expect; code gets written, tests pass and fail, discussions about how to resolve various problems are had and decided upon, and out pops a new product, new point release, new internal tool, or whatever the project's goal was.
The project manager ensures that everyone works together without too much bloodshed and that thrown missiles are constrained to Nerf bullets, the developers write the code, the testers ensure it actually works, and the tech author and UX designer ensure that people will be able to use the final product to solve their problem (after all, developers make lousy UI designers and technical authors). Projects in Red Gate last a relatively short amount of time; most projects are less than 6 months. The longest was 18 months. This has evolved as the company has grown, and I suspect is a side effect of the type of software Red Gate produces. As an ISV, we sell packaged software; we only get revenue when customers purchase the ready-made tools. As a result, we only get a sellable piece of software right at the end of a project. Therefore, the longer the project lasts, the more time and money has to be invested by the company before we get any revenue from it, and the riskier the project becomes. This drives the average project time down. Small project teams are the core of how Red Gate produces software, and are what the whole development effort of the company is built around. In my next post, I'll be looking at the office itself, and how all 200 of us manage to fit on two floors of a small office building.

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail, regardless of the reason, is almost like sitting in a room with a "Dementor" from Harry Potter. It will literally suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political challenges or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here are a few observations about projects failing for political reasons.

    Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stake holders do not truly want it solved. This could come from a consultant being more interested in extending the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stake holders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project.

    Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. These choices may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours than they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the brake and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car and possibly the passengers as well.

    It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice but useless if the software doesn't work. You can never get so distracted by the next goal that you fail to meet the current goal. Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team, from the most senior analyst to the most junior recruit, trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" and "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all cost.

    Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the Happiness of the Team and the Chances of Success. Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future. To a certain extent, they seem to be inevitable, but they should never be suicide or ugly. Ideally, they can all be "Mission Impossible", where everyone works hard, has fun, and knows that there is a good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know that sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life, or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • How to Use Windows 8's Storage Spaces to Mirror & Combine Drives

    - by Chris Hoffman
    “Storage Spaces” is a new feature in Windows 8 that can combine multiple hard drives into a single virtual drive. It can mirror data across multiple drives for redundancy or combine multiple physical drives into a single pool of storage. You can even create pools of storage larger than the amount of physical storage space you have available. When the physical storage fills up, you can plug in another drive and take advantage of it with no additional configuration required. Storage Spaces is similar to RAID or LVM on Linux.

    Read the article

  • How can I tweak this split-tar-gz-archive script?

    - by Sai
    I came across this shell script that can be used to split an entire directory across multiple compressed files. I am interested in making a minor tweak to it. Let's say I have n GB of files in a directory. I do not want the contents of a particular folder to be split across multiple tar files; they should stay inside the same file. I want that n MB folder to fit within a single compressed file and the remaining (n GB - n MB) to be split across multiple files. I am not sure whether this is possible with this script, and I am looking for suggestions. Though the script is well documented, it is quite complex for me to understand.

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, then they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so.

    Figure: For Project you can choose to keep the old stuff

    That being the case, I would have preferred that they put a "(NOT RECOMMENDED)" after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course this does not apply in the reverse; there are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration:

    There is only one MS Project

    In Windows, a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation and in some cases workarounds. The user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email they will need to first drag the file to the desktop.

    All your linked MS Project files need to be of the same version

    There are a number of problems that occur when people rely on Microsoft's Object Linking and Embedding (OLE) technology. The three common uses of OLE are:

    - inserted projects, where a master project contains sub-projects and each sub-project resides in its own MPP file
    - shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file
    - cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file

    What I've seen happen before is that if you are running in a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused, since different MPP files are being controlled by different versions of Project running at the same time. I haven't tried this in a while so I can't give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I've noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous

    The recommendation would be either not to use this feature if you have to have multiple versions of Project installed, or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools even when you are not running multiple versions, as I have found that they can get broken very easily. If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues.

    Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode. This can cause write-locks that get so bad a server restart is required; I've seen users' files get write-locked to the point where the only resolution is to reboot the server.

    Changing the default version to run for an extension

    So what if you want to change the default association from Project 2007 to Project 2010?

    Figure: "Control Panel | Folder Options | Change the file associated with a file extension"

    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking "Change program… | Browse…" and then selecting the .exe you want to use on the file system.

    Figure: You will need to select the exact version of "winproj.exe" that you want to run

    Conclusion

    Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • Is true multithreading really necessary?

    - by Jonathan Graef
    So yeah, I'm creating a programming language. And the language allows multiple threads. But, all threads are synchronized with a global interpreter lock, which means only one thread is allowed to execute at a time. The only way to get the threads to switch off is to explicitly tell the current thread to wait, which allows another thread to execute. Parallel processing is of course possible by spawning multiple processes, but the variables and objects in one process cannot be accessed from another. However the language does have a fairly efficient IPC interface for communicating between processes. My question is: Would there ever be a reason to have multiple, unsynchronized threads within a single process (thus circumventing the GIL)? Why not just put thread.wait() statements in key positions in the program logic (presuming thread.wait() isn't a CPU hog, of course)? I understand that certain other languages that use a GIL have processor scheduling issues (cough Python), but they have all been resolved.

    Read the article

  • Mark Hurd on the Customer Revolution: Oracle's Top 10 Insights

    - by Richard Lefebvre
    Reprint of an article from Forbes. Businesses that fail to focus on customer experience will hear a giant sucking sound from their vanishing profitability. Because in today’s dynamic global marketplace, consumers now hold the power in the buyer-seller equation, and sellers need to revamp their strategy for this new world order. The ability to relentlessly deliver connected, personalized and rewarding customer experiences is rapidly becoming one of the primary sources of competitive advantage in today’s dynamic global marketplace. And the inability or unwillingness to realize that the customer is a company’s most important asset will lead, inevitably, to decline and failure. Welcome to the lifecycle of customer experience, in which consumers explore, engage, shop, buy, ask, compare, complain, socialize, exchange, and more across multiple channels with the unconditional expectation that each of those interactions will be completed in an efficient and personalized manner however, wherever, and whenever the customer wants. While many niche companies are offering point solutions within that sprawling and complex spectrum of needs and requirements, businesses looking to deliver superb customer experiences are still left having to do multiple product evaluations, multiple contract negotiations, multiple test projects, multiple deployments, and–perhaps most annoying of all–multiple and never-ending integration projects to string together all those niche products from all those niche vendors. With its new suite of customer-experience solutions, Oracle believes it can help companies unravel these challenges and move at the speed of their customers, anticipating their needs and desires and creating enduring and profitable relationships. Those solutions span the full range of marketing, selling, commerce, service, listening/insights, and social and collaboration tools for employees. When Oracle launched its suite of Customer Experience solutions at a recent event in New York City, president Mark Hurd analyzed the customer experience revolution taking place and presented Oracle’s strategy for empowering companies to capitalize on this important market shift. From Hurd’s presentation and related materials, I’ve extracted a list of Hurd’s Top 10 Insights into the Customer Revolution. 1. Please Don’t Feed the Competitor’s Pipeline! After enduring a poor experience, 89% of consumers say they would immediately take their business to your competitor. (Except where noted, the source for these findings is the 2011 Customer Experience Impact (CEI) Report including a survey commissioned by RightNow (acquired by Oracle in March 2012) and conducted by Harris Interactive.) 2. The Addressable Market Is Massive. Only 1% of consumers say their expectations were always met by their actual experiences. 3. They’re Willing to Pay More! In return for a great experience, 86% of consumers say they’ll pay up to 25% more. 4. The Social Media Microphone Is Always Live. After suffering through a poor experience, more than 25% of consumers said they posted a negative comment on Twitter or Facebook or other social media sites. Conversely, of those consumers who got a response after complaining, 22% posted positive comments about the company. 5. The New Deal Is Never Done: Embrace the Entire Customer Lifecycle. An appropriately active and engaged relationship, says Hurd, extends across every step of the entire process: need, research, select, purchase, receive, use, maintain, and recommend. 6. The 360-Degree Commitment.
Customers want to do business with companies that actively and openly demonstrate the desire to establish strong and seamless connections across employees, the company, and the customer, says research firm Temkin Group in its report called “The CX Competencies.” 7. Understand the Emotional Drivers Behind Brand Love. What makes consumers fall in love with a brand? Among the top factors are friendly employees and customer reps (73%), easy access to information and support (55%), and personalized experiences, such as when companies know precisely what products or services customers have purchased in the past and what issues those customers have raised (36%). 8.  The Importance of Immediate Action. You’ve got one week to respond–and then the opportunity’s lost. If your company needs more than a week to answer a prospect’s question or request, most of those prospects will terminate the relationship. 9.  Want More Revenue, Less Churn, and More Referrals? Then improve the overall customer experience: Forrester’s research says that approach put an extra $900 million in the pockets of wireless service providers, $800 million for hotels, and $400 million for airlines. 10. The Formula for CX Success.  Hurd says it includes three elegantly interlaced factors: Connected Engagement, to personalize the experience; Actionable Insight, to maximize the engagement; and Optimized Execution, to deliver on the promise of value. RECOMMENDED READING: The Top 10 Strategic CIO Issues For 2013 Wal-Mart, Amazon, eBay: Who’s the Speed King of Retail? Career Suicide and the CIO: 4 Deadly New Threats Memo to Marc Benioff: Social Is a Tool, Not an App

    Read the article

  • Good architecture for user information on separate databases?

    - by James P. Wright
    I need to write an API to connect to an existing SQL database. The API will be written in ASP.NET MVC3. The slight problem is that existing users of the system may have a username on multiple databases. Each company using the product gets a brand new instance of the database, but over the years (the system has been running for 10 years) there are quite a few users (hundreds) who have multiple usernames across multiple "companies" (things got fragmented, obviously, and sometimes a single company has 5 "projects" that each have their own database). Long story short, I need to be able to have a single unified user login that will allow existing users to access their information across all their projects. The only thing I can think of is storing a bunch of connection strings, but that feels like a really bad idea. I'll have a new database that will hold the "unified user" information... can anyone suggest a solid system architecture that can handle a setup like this?
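
    To make the "bunch of connection strings" idea a little more concrete (purely an illustrative sketch; the class, table, and column names are invented), one possible shape is a central "unified user" database that maps each login to the project databases it may reach, so the per-project connection details live in one catalog rather than in code:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        class ProjectDirectory
        {
            private readonly string _catalogConnectionString;

            public ProjectDirectory(string catalogConnectionString)
            {
                _catalogConnectionString = catalogConnectionString;
            }

            // Returns the connection strings of every company/project database
            // linked to a unified user (table and column names are hypothetical).
            public IList<string> GetProjectConnectionStrings(int unifiedUserId)
            {
                var results = new List<string>();
                using (var conn = new SqlConnection(_catalogConnectionString))
                using (var cmd = new SqlCommand(
                    @"SELECT p.ConnectionString
                      FROM UserProjects up
                      JOIN Projects p ON p.ProjectId = up.ProjectId
                      WHERE up.UnifiedUserId = @userId", conn))
                {
                    cmd.Parameters.AddWithValue("@userId", unifiedUserId);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            results.Add(reader.GetString(0));
                        }
                    }
                }
                return results;
            }
        }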

    Read the article

  • SQL Server 2008: Table Valued Parameters

    In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When developers needed to send multiple rows of data to SQL Server, they either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.NET developer recently informed me that there is a SqlBulkCopy object available in .NET to send multiple rows of data to SQL Server at once, the data still cannot be passed to a stored proc. Possibly the most anticipated T-SQL feature of SQL Server 2008 is the new table-valued parameter: the ability to easily pass a table to a stored procedure, from T-SQL code or from an application, as a parameter.
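
    As an illustration of what this looks like from application code (a sketch only: the table type, procedure, and column names below are invented, and the corresponding CREATE TYPE and CREATE PROCEDURE must already exist on the server), ADO.NET can send a whole DataTable to a stored procedure in one round trip as a structured parameter:

        using System.Data;
        using System.Data.SqlClient;

        class TvpSketch
        {
            // Assumes something like this already exists on the server (names invented):
            //   CREATE TYPE dbo.OrderLineType AS TABLE (ProductID int, Qty int);
            //   CREATE PROCEDURE dbo.InsertOrderLines @Lines dbo.OrderLineType READONLY AS ...
            static void SendRows(string connectionString)
            {
                var lines = new DataTable();
                lines.Columns.Add("ProductID", typeof(int));
                lines.Columns.Add("Qty", typeof(int));
                lines.Rows.Add(1, 10);
                lines.Rows.Add(2, 5);

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.InsertOrderLines", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;

                    // The whole DataTable is passed as a single table-valued parameter.
                    SqlParameter p = cmd.Parameters.AddWithValue("@Lines", lines);
                    p.SqlDbType = SqlDbType.Structured;
                    p.TypeName = "dbo.OrderLineType";

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }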

    Read the article

  • Is it possible to export Windows event logs from multiple servers to a non-windows host, without running event manager on each of the Windows servers?

    - by Taylor Matyasz
    I want to export event logs from Windows to a non-Windows host. I was considering using Logstash, but that would seem to require that I install and run Logstash on each server. Is it possible to do this without having to run it on all of the servers? I am hoping to be able to consolidate all of the information from different servers to make searching and reporting much easier. If not, what would you recommend is the best way to export to a non-Windows host in real time? Thank you.

    Read the article

  • multiple vlans routed on one nic? trunk?General? or Access?

    - by Aceth
    OK, for the last week I've been racking my brain over this... I have an SRW208P with 802.1Q support, and a virtual Endian appliance. I would like to have 3 VLANs with everything routed through the Endian appliance, i.e. the virtual server has 2 NICs bridged to the switch. This is where I'm getting confused: on the 8-port switch I've got the 3 VLANs set up OK (all untagged, as the connected devices are not going to be VLAN aware); it's the port connecting the Endian firewall to the switch that I'm having trouble with (the second NIC goes to the ADSL modem and is NAT'd). Is it meant to be a trunk, "General" or "Access" port, and then untagged or tagged? The end goal is to have VLAN traffic routed through the single NIC and have Endian route VLAN traffic according to the rules. Anyone have any ideas on the Cisco small business stuff? Thanks

    Read the article

  • Is there a RAR extractor (for multiple rar files like .r00 etc.) that will use all of my quad cores?

    - by Christopher Done
    I've got a quad core Intel processor. I've got a big file split into little ones as RAR files, foo.r00, foo.r01, etc. which the RAR program extracts into one file/directory. Is there a RAR program that I can specify like "use four cores" in the extract process? At the moment it sits there using 100% of one core. I recognise the bottleneck might be my hard drive anyway, but I don't see a lot of HD usage and suspect the decompression process is more intensive than waiting on I/O. For example, GNU Make accepts a (-j, I think) argument to tell it how many cores to use, which I used to compile PHP 6 really quickly.

    Read the article

  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
    Hi there, I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects. This is language-agnostic pseudocode from Wikipedia:

        s ← s0; e ← E(s)                              // Initial state, energy.
        sbest ← s; ebest ← e                          // Initial "best" solution
        k ← 0                                         // Energy evaluation count.
        while k < kmax and e > emax                   // While time left & not good enough:
          snew ← neighbour(s)                         // Pick some neighbour.
          enew ← E(snew)                              // Compute its energy.
          if enew < ebest then                        // Is this a new best?
            sbest ← snew; ebest ← enew                // Save 'new neighbour' to 'best found'.
          if P(e, enew, temp(k/kmax)) > random() then // Should we move to it?
            s ← snew; e ← enew                        // Yes, change state.
          k ← k + 1                                   // One more evaluation done
        return sbest                                  // Return the best solution found.

    The following is an adaptation of the technique. My supervisor said the idea is fine in theory. First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below. The function does the following:

    - Let this allocation, aOld, be the best_node.
    - From all the students, pick a random number of students and stick them in a list.
    - Strip (DEALLOCATE) them of their projects, plus reflect the changes for projects (the allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated).
    - Randomise that list.
    - Try assigning (REALLOCATE) everyone in that list projects again.
    - Calculate the weight (add up the ranks: rank 1 = 1, rank 2 = 2... and no project rank = 101).
    - For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
    - If wOld < wNew, then apply the algorithm to aOld again and continue.

    The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict). Right now, I can only perform this algorithm once, but I'll need to try it for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLA.

    Questions: I've been given some solutions to the problems faced, but due to my über green-ness with using Python, they all sound rather cryptic to me.

    Deepcopy isn't playing nicely with SQLA. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)." Can someone please explain what that means? I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have.

    I've also been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one? An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me, so could someone kindly explain what it means?

    A more general question: given the above information, are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, and therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?

    Read the article

  • how to extract all permissions that a domain user have on the network

    - by Alexandre Jobin
    I would like to know all the permissions a Windows domain user has in my network. Is there a way, with a script or a tool, to extract this kind of information by checking all the servers and computers in my network? I'm on a Microsoft network with Windows Server 2008 R2, Windows XP and Windows 7. The report should include this kind of information:
    - all permissions that the domain user has (read, write, etc...)
    - if the domain user is in a domain group, the permissions that this group has in my network
    So the report could be something like this:

        Permissions for USER_A in DOMAIN.COM
        the user is part of these domain groups: GROUP_A, GROUP_B
        SERVER_A
            W:\wwwRoot (R/W inherited from GROUP_A)
            W:\sharedFolder (R)
        SERVER_B
            c:\projects (R/W)
            c:\projects\project_a (R/W)
            c:\projects\project_b (R/W)
            c:\dumpfolder (R/W inherited from GROUP_B)
        COMPUTER_A
            LOCAL\Administrator c:\ (R/W)

    Read the article

  • Setting alias in Windows PowerShell

    - by westsider
    In PowerShell, I type:

        PS C:\> sal cdp "cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects'"

    I get no error from this, and

        PS C:\> gal cdp

    shows the definition as:

        cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects'

    But when I try to use cdp, I get this:

        Cannot resolve alias 'cdp' because it refers to term 'cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects'',
        which is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
        At line:1 char:4
        + cdp <<<<
            + CategoryInfo          : ObjectNotFound (dsp:String) [], CommandNotFoundException
            + FullyQualifiedErrorId : AliasNotResolvedException

    I am guessing that this is trivially easy, so I apologize in advance if that is the case. I have googled and googled and have also read through Windows PowerShell Cookbook.

    Read the article

  • OSX pdf-kit vs Linux poppler or pdf/x

    - by Tahnoon Pasha
    I keep reading and hearing that the reason there is no good PDF editing software for Linux is that the libraries are not as well developed. That is why there is no equivalent of Skim or Preview on Linux. I had a look at the pdf-kit documentation and the poppler documentation, and they looked very similar to my admittedly non-technical eye. Could someone explain to me why the OS X libraries (pdf-kit, for example) are so much easier to write projects like Skim in than the Linux ones? I'm not sure if the same applies to OS X projects like NVAlt, but it seems to be a common theme - I'd just like to understand what is behind the thesis that OS X is easier to code these projects in, and what would be involved in changing it. (I'm not disputing the value of Okular or Evince and the like, just noting that they don't have the richness of functionality of Skim, Preview or even things like GoodReader on the iPad.)

    Read the article
