Search Results

Search found 38088 results on 1524 pages for 'large scale project'.


  • Place Eclipse Project File in Separate Directory

    - by Eddie Ringle
    I'm using the default Eclipse project file generated for my Android application, and I want to keep it in a targets/ directory (along with other Eclipse-specific files) in order to better organize my project structure (I also plan on adding a target for NetBeans). Simple question, I suppose: is this possible?


  • Alternatives to Microsoft Project with an API

    - by adnan
    I am looking for well-known Microsoft Project-like applications, some of which I got to know through http://stackoverflow.com/questions/729926/alternatives-to-microsoft-project, but I also need to know their API/library so that I can programmatically read their files using .NET. Hope I'm not asking for much :)


  • Why does Eclipse break when the .project file is hidden?

    - by Tommy
    Why does Eclipse break with the error "Could not write file: M:\workspaces\eclipse\project.project. M:\workspaces\eclipse\project.project (Access is denied)" when the .project file is hidden (on the Windows file system)? Note: this happens with other files as well. Steps to reproduce:

    1. Install the latest Eclipse; I am using eclipse-jee-galileo-SR2-win32.zip. (Not sure if it happens in other versions.)
    2. Create a project.
    3. Browse to the project in Windows Explorer and find the .project file.
    4. Right-click - Properties.
    5. Under Attributes, check Hidden.
    6. In Eclipse, open the .project file, make a change and try to save.
    7. After you get the error, uncheck the Hidden box and save again.


  • Querying Visual Studio project files using T-SQL and Powershell

    - by jamiet
    Earlier today I had a need to get some information out of a Visual Studio project file, and in this blog post I'm going to share a couple of ways of going about that, because I'm pretty sure I won't be the only person who ever wants to do this. The specific problem I was trying to solve was finding out how many objects in my database project (i.e. in my .dbproj file) had any warnings suppressed, but the techniques discussed below will work pretty well for any Visual Studio project file, because every such file is simply an XML document and hence can be queried by anything that can query XML documents. Ever heard the phrase "when all you've got is a hammer, everything looks like a nail"? Well, that's me with querying stuff – if I can write SQL then I'm writing SQL. Here's a little noddy database project I put together for demo purposes: two views and a stored procedure, nothing fancy. I suppressed warnings for [View1] & [Procedure1], and hence the pertinent part of my project file looks like this:

        <ItemGroup>
          <Build Include="Schema Objects\Schemas\dbo\Views\View1.view.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151,3276</SuppressWarnings>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Views\View2.view.sql">
            <SubType>Code</SubType>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Programmability\Stored Procedures\Procedure1.proc.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151</SuppressWarnings>
          </Build>
        </ItemGroup>

    Note the <SuppressWarnings> elements – those are the bits of information that I am after. With a lot of help from folks on the SQL Server XML forum I came up with the following query that nailed what I was after. It reads the contents of the .dbproj file into a variable of type XML and then shreds it using T-SQL's XML data type methods:

        DECLARE @xml XML;
        SELECT @xml = CAST(pkgblob.BulkColumn AS XML)
        FROM   OPENROWSET(BULK 'C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj' -- <-Change this path!
                         ,single_blob) AS pkgblob;

        WITH XMLNAMESPACES('http://schemas.microsoft.com/developer/msbuild/2003' AS ns)
        SELECT REVERSE(SUBSTRING(REVERSE(ObjectPath),0,CHARINDEX('\',REVERSE(ObjectPath)))) AS [ObjectName]
              ,[SuppressedWarnings]
        FROM   (
               SELECT build.query('.') AS [_node]
               ,      build.value('ns:SuppressWarnings[1]','nvarchar(100)') AS [SuppressedWarnings]
               ,      build.value('@Include','nvarchar(1000)') AS [ObjectPath]
               FROM   @xml.nodes('//ns:Build[ns:SuppressWarnings]') AS R(build)
               ) q

    The output lists each object name alongside its suppressed warnings. And that's it – an easy way of discovering which warnings have been suppressed, and for which objects, in your database projects. I won't bother going over the code as it is fairly self-explanatory – peruse it at your leisure.

    Once I had the SQL above I figured I'd share it around a little in case it was ever useful to anyone else; hence I'm writing this blog post, and I also posted it on the Visual Studio Database Development Tools forum as "FYI: Discover which objects have had warnings suppressed". Luckily Kevin Goode saw the thread and posted a different solution to the same problem, one that uses PowerShell. The advantage of Kevin's PowerShell approach is that it is easy to analyse many .dbproj files at the same time.
    Below is Kevin's code, which I have tweaked ever so slightly so that it produces the same results as my SQL script (I just want any object that has had a warning suppressed, whereas Kevin was querying specifically for warning 4151):

        cd 'C:\Temp\QueryingProjectFileDemo\'
        cls
        $projects = ls -r -i *.dbproj
        Foreach($project in $projects)
        {
            $xml = new-object System.Xml.XmlDocument
            $xml.set_PreserveWhiteSpace( $true )
            $xml.Load($project)
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings=4151]/@Include"}
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[contains(e:SuppressWarnings,'4151')]/@Include"}
            $xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings]/@Include"}
            $ns = @{ e = "http://schemas.microsoft.com/developer/msbuild/2003" }
            $xml | Select-Xml -XPath $xpath.Start -Namespace $ns | Select -Expand Node | Select -Expand Value
        }

    Nice reusable PowerShell and SQL scripts – not bad for an evening's work. Thank you to Kevin for allowing me to share his code. Don't forget that these techniques can easily be adapted to query any Visual Studio project file; they're only XML documents after all! Doubtless many people out there already have code for doing this, but nonetheless here is another offering to the great script library in the sky. Have fun! @Jamiet
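    The same shred is straightforward from any language with an XPath library, since these files are plain XML. Here's a minimal sketch in Java using the standard javax.xml.xpath API, assuming the same demo project path as above (the class name is mine, and the namespace-wildcard XPath is used to avoid wiring up a NamespaceContext for the MSBuild namespace):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class SuppressedWarnings {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true); // .dbproj files live in the MSBuild namespace
                Document doc = dbf.newDocumentBuilder()
                        .parse(new File("C:/temp/QueryingProjectFileDemo/QueryingProjectFileDemo.dbproj"));

                // Same filter as Kevin's script: every Build element with a SuppressWarnings child.
                String xpath = "//*[local-name()='Build'][*[local-name()='SuppressWarnings']]";
                NodeList builds = (NodeList) XPathFactory.newInstance().newXPath()
                        .evaluate(xpath, doc, XPathConstants.NODESET);

                for (int i = 0; i < builds.getLength(); i++) {
                    Element build = (Element) builds.item(i);
                    String warnings = build.getElementsByTagNameNS(
                            "http://schemas.microsoft.com/developer/msbuild/2003",
                            "SuppressWarnings").item(0).getTextContent();
                    System.out.println(build.getAttribute("Include") + " -> " + warnings);
                }
            }
        }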


  • Project of Projects with Team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times, and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it. There are five main places where you need to give the Product or Project name prominence over any other value:

    • Area
    • Iteration
    • Source Code
    • Work Item Queries
    • Build

    Once you decide how you are doing this in each of these places, you need to keep to it religiously. Even if you have only one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory, and it prevents the neat system from degenerating into a nasty mess.

    Areas

    Areas are traditionally used to separate out parts of your product/project so that you can see how much effort has gone into each.

    Figure: The top level areas are for reporting and work item separation

    There are massive advantages to using this method. You can:

    • move work from one project to another
    • rename a project/product

    It is far more likely that a project or product gets renamed than a department. Tip: if you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which.

    Figure: Always keep things that are the same at the same level

    Note: you may use these categories only at the Area/Iteration level to make it easier to select on drop-down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

    Iterations

    Iterations are usually tied to some sort of time-based cadence. Here I am splitting into iterations with periodic releases.

    Figure: Each product needs to be able to have its own cadence

    The ability to have each project run at its own pace and have its own release schedule is often of paramount importance, and you don't want to fix your 100+ projects to all be released on the same date.

    Source Code

    Having a good structure for your source, even if you are not branching or keeping multiple products under the same structure, is always a good idea.

    Figure: Separate out your products' source

    You need to think about both your branches and the structure of your source. All your code should be under "Source", and everything you need to build your solution, including build scripts and 3rd-party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need. Note: you usually will not be able to install custom software on your build server, so store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder.
    Note: consult the Branching Guidance for Team Foundation Server 2010 for more on branching.

    Figure: Adding a category may be a necessary evil

    Even if you have to have a couple of categories called "Default", that is better than not knowing the difference between a folder, a Product and a Branch.

    Work Item Queries

    Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you also want to separate queries out by Product/Project, to make them easier to find.

    Figure: Again you have the same first level structure

    Since Work Item Tracking has folders too, we do the same thing there: we put all the queries under a folder named for the Product/Project and change each query to have "AreaPath=[TeamProject]\[ProductX]" in the query instead of the standard "Project=@Project". Tip: don't have a folder of new queries for each iteration. Instead have a single "Current" folder that holds queries pointing to the current iteration, and just change the queries as you move from one iteration to another. Tip: you can ctrl+drag the "Product1" folder to create your "Product2" folder.

    Builds

    You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose.

    Figure: There are no folders, yet, for the builds, so you need a good naming convention

    It's a pity that there are no folders under builds; some way to categorise would be nice. In lieu of that, for the moment you can use a functional naming convention that at least allows you to find what you want.

    Conclusion

    It is really easy both to achieve this layout and to stick to it if you take the time to do it. Unless you have 1000+ builds or 100+ Products you are unlikely to run into any issues, and even then there are things you can do to mitigate the problems; I have described some of them above. Let me know if you can think of any other things to make this easier.


  • What editor/viewer to use to inspect large text based files?

    - by Turismo
    Are there any text editors/viewers (preferably on Windows, but other platforms are also OK) that can handle files of 500 MB or more? The editors I checked so far (Notepad++, Notepad, Eclipse) all choked on files of that size. Edit: many thanks for the great suggestions. I tried gvim, as it was the top-voted answer and was available on Windows. I opened the file in a reasonable time, and after that scrolling and searching were very smooth as long as syntax highlighting was turned off. Of the other editors mentioned, TextPad and EmEditor both claim to handle large files very well; EmEditor seems to be built exactly for editing large files. I'll probably try both and report back.


  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is, our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process via ERWin. What this means is that they maintain the DBs via ERWin models and generate alters from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to:

    1. Perform changes in the ERWin model.
    2. Take the alter script generated from ERWin and import it into the appropriate Visual Studio DB project, updating the objects in the DB project.
    3. Check the changed objects in the DB project into TFS.
    4. Let TFS Build generate the alter scripts that will be used to push the changes through our release process.

    My question is: is this solution viable, or are there any other options?


  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project?

    Some ideas that I have to make it successful (beyond marketing it):

    • Good documentation and tutorials
    • Automated unit tests and builds to push updates to the website
    • A clear roadmap
    • Bug tracking integrated with the source control
    • A style guide to keep the code consistent
    • A forum for the community to get support, share ideas, etc.
    • A good example application built with the framework
    • A blog to keep the community informed
    • Maintaining backwards compatibility wherever possible

    Some of my questions:

    • How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
    • How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
    • What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible, without a lot of drama.

    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.


  • Why do business analysts and project managers get higher salaries than programmers? [closed]

    - by jpartogi
    We have to admit that programming is much more difficult than creating documentation, or even than creating a Gantt chart and asking programmers about their progress. So, naive as the question may be, knowing that programming is generally more difficult: why do business analysts and project managers get higher salaries than programmers? What is it that makes their jobs high-paying when it is usually the programmers who go home late? UPDATE: Excuse my ignorance; from some of the responses, it seems the reason BAs and PMs get higher salaries is that they are the ones usually held responsible for the mess programmers make. But at the end of the day, it is the programmers who get their hands dirty to fix the mess and work harder. So it still does not make sense.


  • Alternative to MS Project 2007 for production scheduling?

    - by john c
    OK... I'm coming to grips with the fact that MS Project 2007 may not be the correct tool for my production scheduling. We serve 120 to 150 projects a year with durations from 6 weeks to 12 months. The tasks are simple (6 to 8 per project) and the resource pool is stable (15 to 20 people). It's really an assembly-line product, but with extremely varied durations. I need to be able to prioritize the projects for production and run projects concurrently to fully utilize my resources. What are the feelings of the Stack Overflow community? Am I using the wrong program? I was really hoping to make this simple enough for non-programmer types to input project data into a form, with the scheduling software automated enough to make most of the decisions. Is there a better solution available commercially? I'd like to hold off on writing a custom spreadsheet, keeping that as a last resort, but if it's the best route then so be it. Thank you so much for your input.


  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I say "organize" because I'm not sure the standard project management software solutions are exactly what I am looking for, and especially not sure what I, as a single developer, need. In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split them into multiple smaller tasks, so I can keep track of my progress. So some task/issue-based system would probably be good, especially as I also want to keep track of issues/bugs with specific versions (although I alone will create those issues). I am and will be the only developer on these projects, so it doesn't matter whether the software is offline or online, and I don't need any collaboration features (like commenting on things, or assigning tasks to other developers, etc.). But if there is a good piece of software that fits my needs and also has those things, I don't really care; after all, it's easy enough not to use available features. Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open source projects for free and require payment for private projects). I have looked at some project management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is probably a bit different from when others ask for a huge distributed package that supports many developers and different collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBUGZ look interesting, but are a bit outside what I'm after (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?


  • Defining functions in a commercial PHP project

    - by Shiro
    Currently I am working on a commercial project in PHP. I think this question doesn't only apply to PHP but to all programming languages; I just want to discuss how you solve it. I work in an MVC framework (CodeIgniter); all the database transaction code lives in model classes. Previously, we separated different search criteria into differently named functions. Just an example:

        function get_student_detail_by_ID($id) {}
        function get_student_detail_by_name($name) {}

    As you can see, the two functions could actually be merged into one by adding a parameter. But when you are rushing on a project, you don't look back to see which similar function already exists and could meet the goal with a small change. In this case, we found that there were a lot of functions there, and they were hard to maintain. Recently, we tried to group each entity's searches into one ultimate search, something like this:

        function get_ResList($is_row_count=FALSE, $record_start=0, $arr_search_criteria='', $paging_limit=20, $orderby='name', $sortdir='ASC')

    We tried to make this one function fit all the search criteria. However, our system is getting bigger and bigger, and the search criteria no longer touch just 1-2 tables; they require joins with other tables for different purposes. What we have done is use IF/ELSE:

        if (bla bla bla) {
            $sql_join = JOIN_SOME_TABLE;
            $sql_where = CONDITION;
        }

    In the end, we found the function very hard to maintain, and very hard to debug as well. I would like to ask your opinion: how do commercial projects solve this kind of issue, and how should such a function be defined and revised? I think this is linked to project management skill. I hope you are willing to share with us. Thanks.
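    One pattern that tends to scale better than a single if/else-laden mega-function is a criteria object: each criterion carries its own join and where fragments, and the query is assembled from whatever criteria the caller passes. Here's a minimal sketch of the idea – shown in Java for illustration rather than the question's PHP/CodeIgniter, with all class, table and column names invented:

        import java.util.ArrayList;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Set;

        // Each criterion contributes its own JOIN and WHERE fragments, so the
        // query builder never needs a global if/else over every possible filter.
        interface Criterion {
            Set<String> joins();   // JOIN clauses this criterion depends on
            String where();        // WHERE fragment with a ? placeholder
            Object value();        // value bound to the placeholder
        }

        class NameLike implements Criterion {
            private final String name;
            NameLike(String name) { this.name = name; }
            public Set<String> joins() { return Set.of(); }
            public String where() { return "s.name LIKE ?"; }
            public Object value() { return "%" + name + "%"; }
        }

        class EnrolledInCourse implements Criterion {
            private final int courseId;
            EnrolledInCourse(int courseId) { this.courseId = courseId; }
            public Set<String> joins() {
                return Set.of("JOIN enrolment e ON e.student_id = s.id");
            }
            public String where() { return "e.course_id = ?"; }
            public Object value() { return courseId; }
        }

        public class StudentSearch {
            // Assembles one SQL statement from whatever criteria the caller passes.
            static String build(List<Criterion> criteria, List<Object> paramsOut) {
                Set<String> joins = new LinkedHashSet<>();   // de-duplicates shared joins
                List<String> wheres = new ArrayList<>();
                for (Criterion c : criteria) {
                    joins.addAll(c.joins());
                    wheres.add(c.where());
                    paramsOut.add(c.value());
                }
                StringBuilder sql = new StringBuilder("SELECT s.* FROM student s");
                joins.forEach(j -> sql.append(' ').append(j));
                if (!wheres.isEmpty()) sql.append(" WHERE ").append(String.join(" AND ", wheres));
                return sql.toString();
            }

            public static void main(String[] args) {
                List<Object> params = new ArrayList<>();
                String sql = build(List.of(new NameLike("Tan"), new EnrolledInCourse(42)), params);
                System.out.println(sql);    // SELECT s.* FROM student s JOIN ... WHERE ... AND ...
                System.out.println(params); // [%Tan%, 42]
            }
        }

    Adding a new search criterion then means adding one small class, not editing (and re-testing) the one function every search in the system flows through.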


  • When is it time to start afresh instead of trying to rewrite the existing project?

    - by gablin
    As projects grow and age and new features are introduced, there may come a time when it's probably better to start anew. Some features may require so much rewriting that it would be easier (and maybe faster) to just rewrite the whole thing from scratch. You then also have the opportunity to apply what you've learned from the first project, and thus improve the code and design of the successor. Is there a way of recognizing when that time has come? Is it just a gut feeling, or could you say "after x years, you probably should start over", or give some other rule of thumb?


  • Fixed Resource Monitor Graph Scale in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of having them vary based on the data? It seems to me that the utility of a graph is to give a quick overview of the values it is showing. If I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what; I don't really care if the CPU is jumping between 0.01% and 2%. Likewise, if the network usage monitor is up near the top, I know that all my bandwidth is being used up and can go figure out why. But the way things are now, the graphs are meaningless because the scales constantly shift: in one second the network usage graph might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?


  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include:

    • Domain object caching
    • Service response caching
    • Session state caching
    • JSR-107
    • HotCache, and more!

    Further, our audience asked several interesting questions, and we thought we'd share a sampling of those here – just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a primarily key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior – and there are use cases for that.

    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA/XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA/XA distributed transaction semantics.
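    Since JSR-107 is one of the webcast topics, here is a minimal sketch of service-response caching through the standard javax.cache API. The cache name, key format, and backend call are all invented for illustration; any JSR-107 provider on the classpath (Coherence ships one) should accept it:

        import javax.cache.Cache;
        import javax.cache.CacheManager;
        import javax.cache.Caching;
        import javax.cache.configuration.MutableConfiguration;
        import javax.cache.expiry.CreatedExpiryPolicy;
        import javax.cache.expiry.Duration;

        public class ResponseCacheDemo {
            public static void main(String[] args) {
                // Obtain whatever JSR-107 provider is on the classpath.
                CacheManager manager = Caching.getCachingProvider().getCacheManager();

                // Expiry is part of the standard API; size limits are provider features.
                MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class)
                        .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));

                Cache<String, String> responses = manager.createCache("service-responses", config);

                // Service-response caching: key by the request, store the rendered response.
                String key = "GET /quotes?symbol=ORCL";
                String cached = responses.get(key);
                if (cached == null) {
                    cached = callBackendService(key); // hypothetical expensive call
                    responses.put(key, cached);
                }
                System.out.println(cached);
            }

            private static String callBackendService(String request) {
                return "{\"symbol\":\"ORCL\",\"price\":42.0}"; // stand-in for the real service
            }
        }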


  • Have You Heard About Project Lucy?

    - by KKline
    Lucy, You Got Some 'Splainin to Do! Quest Software's latest community initiative, the Windows Azure-based Project Lucy, has debuted! Project Lucy is part infrastructure analytics, part social media experiment, and part performance data warehouse. The best things about Project Lucy include:

    • It's free – just like our SQLServerPedia website, Project Lucy is free to anyone who wants to upload a trace file
    • It's 100% web-based – you don't have to download or maintain anything, and updates roll out seamlessly ...(read more)


  • Web Based Project Management System

    We are all well aware of the software development life cycle and project management systems, but many developers assume that such tools are available only as desktop applications. Nowadays, however, there are many web-based project management tools available on the market. In this article I provide information about what a web-based project management system is, the available open source web-based PMSs, and the benefits of using a web-based project management system for business owners.


  • How do you assign resources and keep begin, end and duration of a task intact?

    - by Random
    I have problems with assigning more than one resource to a group of tasks. The idea is simple: my tasks are in one group and are manually scheduled to particular begin and end dates. I want to assign more than one resource, keep the tasks' durations and dates (fixed duration), and increase work. For top-level tasks this works fine, but once tasks are grouped, the duration of each is extended to reach the group's end date while the work remains the same. For the problematic tasks, the Gantt chart looks like this:

    One resource attached (good):

        ( Task 1.1 ) ( Task 1.2 ) (Task 1.3)

    More than one resource attached (wrong):

        ( Task 1.1 )....................... ( Task 1.2 ).......... (Task 1.3)

    So for tasks like these, I want to keep the schedule fixed and just increase work by adding resources that work at the same time, but sometimes MS Project levels the tasks so that the resources work sequentially.


  • UIView scaled to thumbprint, what about subviews?

    - by Parad0x13
    I have a UIImageView that I create programmatically, right? It works like a charm when I initWithImage and set the scaling parameter to UIViewContentModeScaleToFill: I can scale all day long and create many different smaller 'thumbprint'-sized versions of the UIImageView at whim. HOWEVER, when I add subviews to said UIImageView and then try to scale the happy new parent, the child subview does not scale! I looked around and found that I should set the parent view's autoresizesSubviews boolean to true (which I did) and then call:

        ChildView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;

    Which I also did... Now when I scale the parent view, the child also scales, but it goes all over the place (not a very good way of putting it). My point is that I assumed the child would scale proportionally to the parent and stay in the same place, which it doesn't. My question is: how do I set up my code so that when I scale my parent view by 1/4 or whatever to make a thumbprint, the child subviews that the parent view owns scale accordingly, so that the 1/4-sized thumbprint looks just like the full-sized view, only smaller?


  • How do you get past the Analysis to Paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite; I haven't compiled the source code since 2004. It still sells, and it's stable, but it does require the "Run this program in compatibility mode for:" setting on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen resolution programs. Yuck! I can't seem to get started with this rewrite; I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while, then start looking at how the main menu should work and look; I quickly find out that there's this thing called "cool bars" and I'll spend hours playing with that. Then I start thinking about things like "Oh, I need to make sure that the screen sizes are preserved, so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop. I have always been a slow starter; I think about stuff long and hard up front, and this has always plagued me. Once I get my mind made up, then bam... I'm off and running. I'm looking for advice from other one-person software companies: what can help someone like me get off to a quicker start?


  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate).

    So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation – for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated, as certain cluster management operations – while very infrequent – become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
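    The "measure, then extrapolate linearly" guidance above is easy to mechanize. Below is a toy Java sketch of that linear model; every number in it is invented, and it deliberately ignores the cluster-management overheads that the rest of this post warns become non-linear at extreme scale:

        public class CoherenceSizingSketch {
            // Measured at a known load (all values hypothetical).
            static final double MEASURED_REQ_PER_SEC = 10_000;
            static final double MEASURED_CPU_FRACTION = 0.25; // 25% CPU used, cluster-wide
            static final int MEASURED_NODES = 20;

            // Target load, plus the headroom kept free for recovery,
            // rebalancing and GC spikes.
            static final double TARGET_REQ_PER_SEC = 40_000;
            static final double MAX_CPU_FRACTION = 0.60;

            public static void main(String[] args) {
                // Node-equivalents of CPU consumed per unit of request rate:
                double cpuPerReq = MEASURED_CPU_FRACTION * MEASURED_NODES / MEASURED_REQ_PER_SEC;
                // Linear model: CPU consumed scales with request rate.
                double nodesNeeded = TARGET_REQ_PER_SEC * cpuPerReq / MAX_CPU_FRACTION;
                System.out.printf("Estimated nodes for %.0f req/s at <= %.0f%% CPU: %.1f (round up)%n",
                        TARGET_REQ_PER_SEC, MAX_CPU_FRACTION * 100, nodesNeeded);
            }
        }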

