Search Results

Search found 33736 results on 1350 pages for 'project structure'.

Page 15/1350 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • ActionScript 3.0 cross-project folder/package structure best practice

    - by dr_tchock
    I'm currently looking at structuring my team's projects in a consistent manner that properly utilises packages and is easily version-controlled (via SVN). I'm interested in any 'best practice' with regard to project structuring and how to use consistent packaging without lumping everything into a gigantic com.domainname.projects folder structure whilst maintaining that package structure. I'm also keen to use the src/bin/lib folder structure within each project. I guess I'm asking 'how do you do it?' and 'why?'. Sorry if this is a bit abstract for Stack Overflow but you guys give the best answers I've found.
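
    For concreteness, here is one possible layout (purely illustrative; the project and package names below are invented), keeping a src/bin/lib split per project while letting each project own only its own slice of the shared package root:

        trunk/
          player-core/
            src/com/example/player/core/    <- classes in package com.example.player.core
            lib/                            <- third-party .swc files
            bin/                            <- compiled output (svn:ignore'd)
          player-ui/
            src/com/example/player/ui/
            lib/
            bin/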

    Read the article

  • Editing Project files, Resource Editors in VS 2010

    - by rajbk
    Editing Project Files Visual Studio 2010 gives you the ability to easily edit the project file associated with your project (.csproj or .vbproj). You might do this to change settings related to how the project is compiled, since proj files are MSBuild files. One would normally close Visual Studio and edit the proj file using a text editor. The better way is to first unload the project in Visual Studio by right-clicking on the project in the Solution Explorer and selecting “Unload Project”. The project gets unloaded and is marked “unavailable”. The project file can now be edited by right-clicking on the unloaded project. After editing the file, the project can be reloaded. Resource editors in VS 2010 Visual Studio also comes with a number of resource editors (see list here). For example, you could open a file using the Binary editor like so: go to File > Open > File..., select a file, and choose the “Open With...” option in the bottom right. We are given the option to choose an editor. Note that clicking on “Add...” in the dialog above allows you to include your favorite editor. Choosing the “Binary editor” above allows us to edit the file in hex format. In addition, we can also search for hex bytes or ASCII strings using the Find command. The “Open With...” option is also available from within the Solution Explorer as shown below: Enjoy!   Mr. Incredible: No matter how many times you save the world, it always manages to get back in jeopardy again. Sometimes I just want it to stay saved! You know, for a little bit? I feel like the maid; I just cleaned up this mess! Can we keep it clean for... for ten minutes!
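
    As a hedged illustration of the kind of edit you might make once the project file is open (the property group and target below are examples of my own, not from the original post):

        <!-- Example only: tighten warnings for Release builds -->
        <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
          <WarningLevel>4</WarningLevel>
          <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
        </PropertyGroup>

        <!-- Example only: run a step before every build -->
        <Target Name="BeforeBuild">
          <Message Text="Building $(AssemblyName) ($(Configuration))" Importance="high" />
        </Target>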

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from Profit (Oracle), by Alison Weiss. Handle with Care: sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. 
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Company wants to write a custom project management tool rather than use a third-party product

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For how much development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, whilst we are writing code for a client, and using our custom Intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about. EDIT: Thanks for the great responses, much appreciated. To summarize some of them: Money Naturally my boss does want to save money by not forking out a few hundred pounds for licences. However, for us to write a custom tool, it will take some number of days multiplied by approximately £500 a day, which is what it costs us. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic) that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse. Features Having our own custom module means no feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why? What's the point? Although management are asking for this tool, that alone does not mean I'm going to do it just to please them. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, thus I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • Roll up project-level tasks to the project collection portal in TFS2010

    - by adam.mokan
    I have a Project Collection set up in my TFS 2010 RC deployment. I have two Projects set up under this collection with their own task lists, which are populated with data. I fully expected the tasks from these individual projects to "roll up" and appear in the task list at the Project Collection level, but they do not. The Project Collection task list is empty. Basically, I'm looking to provide a view so a supervisor could see all tasks across projects quickly and easily. I'm sure I could write a Reporting Services report, but it seems like this is something so basic that it would have been included and just needs to be turned on or something. I'm sure I'm probably missing something really simple here. Thanks.
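
    If no built-in roll-up turns up, one lightweight alternative to a full Reporting Services report is a flat work item query scoped to the whole collection rather than a single team project. A hedged WIQL sketch (these are standard TFS system fields; adjust the states to your process template):

        SELECT [System.Id], [System.TeamProject], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Task'
          AND [System.State] <> 'Closed'
        ORDER BY [System.TeamProject], [System.Id]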

    Read the article

  • Connect MS Project 2010 to MS Project Server 2010.

    - by Nelson R
    Hello everyone, I am trying to find a tutorial or step-by-step instructions on how to connect MS Project 2010 to MS Project Server 2010. I have installed Server 2008 R2 (64-bit), SharePoint 2010, and Project 2010 on my server, and created a new site using the Project 2010 template. I am now trying to connect my stand-alone Project 2010 to that site for updates and such. I tried the File > Info > Manage Accounts option and it comes up with "Could not retrieve server initialization data." I cannot find a step-by-step tutorial to set it up or to troubleshoot the error message. Any hints or good resources would be much appreciated.

    Read the article

  • How to assemble a multi-project Ant build system

    - by Alex Worden
    At my new gig, they use Ant and cannot be persuaded to move to Maven. I've looked everywhere for a decent example of how a multi-project Ant build system should be assembled. The Apache site falls short. I'm looking specifically for best practices to: automatically build local projects that are dependencies of a project; share artifacts from projects to their dependents; export a project's dependencies and generated artifacts (jars) to be inherited by dependent projects; and share third-party dependencies between projects. I'm sure I can do all this without using Ivy - what did people do before Ivy? I really don't want to have to set up a corporate repository or rely on external repositories - the engineers here are really against that and have all their third-party jars checked into source control. Can anyone point me at a good open source example of a multi-project Ant build?
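
    A minimal sketch of one way to wire this up without Ivy (project names and paths below are invented for illustration): each project's build.xml builds its dependency projects first via <subant>, then picks their jars up from a well-known dist/ folder.

        <!-- app/build.xml; core-lib/build.xml follows the same pattern -->
        <project name="app" default="jar" basedir=".">

          <!-- Build dependency projects first -->
          <target name="deps">
            <subant target="jar">
              <filelist dir=".." files="core-lib/build.xml"/>
            </subant>
          </target>

          <target name="compile" depends="deps">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includeantruntime="false">
              <classpath>
                <fileset dir="../core-lib/dist" includes="*.jar"/>
                <fileset dir="../third-party" includes="*.jar"/>
              </classpath>
            </javac>
          </target>

          <target name="jar" depends="compile">
            <mkdir dir="dist"/>
            <jar destfile="dist/app.jar" basedir="build/classes"/>
          </target>
        </project>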

    Read the article

  • Problems with WiX and Visual Studio web deployment project

    - by Valeriu
    Hello, I want to create an .MSI package from a web deployment project in Visual Studio 2008. We now want to use continuous integration, and we would need the .MSI package built in the nightly builds. Until now we used the standard Visual Studio Web Setup project, but this is not compatible with MSBuild. So we decided to use WiX. The problem is that I have not found any good tutorial/documentation about this. Is there a way to build a WiX installer package from a web deployment project? If yes, how? Also, I tried to use heat.exe to create the XML for the WiX project .wxs file, but it seems that heat.exe doesn't recognize the web deployment project format. Thank you for your responses. Regards, V.
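
    One common workaround (sketched here on the assumption that the web deployment project publishes its output to a folder such as Release\): point heat.exe at the built output directory instead of the project file, then reference the generated component group from the hand-written .wxs.

        rem Harvest the deployment project's output folder (paths are illustrative)
        heat.exe dir ..\MyWebApp.Deploy\Release -gg -srd -sfrag ^
            -dr INSTALLFOLDER -cg WebAppComponents ^
            -var var.PublishDir -out WebAppFiles.wxs

        rem Compile and link alongside the hand-written Product.wxs
        candle.exe -dPublishDir=..\MyWebApp.Deploy\Release Product.wxs WebAppFiles.wxs
        light.exe -out MyWebApp.msi Product.wixobj WebAppFiles.wixobj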

    Read the article

  • Managing a static library project as a module (like a framework) in an iOS project in Xcode 4

    - by Eonil
    (Solution note: I'll answer immediately.) Many people, including me, are trying to make a kind of static-library framework for iOS to achieve some kind of modularity. A framework is the best way to do this, but it isn't provided by Apple, and the workarounds don't work well. https://github.com/kstenerud/iOS-Universal-Framework/tree/master/Fake%20Framework/Templates A fake framework cannot be referenced from the linking tab in Build Phases. A real framework needs modification of system settings, and still doesn't work smoothly in every part. The problem is that a static library needs header files, and it's impossible to reference header files located in another project without some script. And a script breaks the IDE's file-management abstraction. How can I use a static library project in a convenient, module-like manner (just dragging the project into another project to complete the embedding)?
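
    For reference, a minimal sketch of the manual setup the question is trying to streamline (the paths are hypothetical): after dragging the library's .xcodeproj into the app project and adding it under Target Dependencies and Link Binary With Libraries, the app target still has to be pointed at the library's headers, e.g. via an .xcconfig:

        // Illustrative paths; adjust to wherever the library project keeps its public headers
        HEADER_SEARCH_PATHS = $(inherited) "$(SRCROOT)/../MyStaticLib/include"
        OTHER_LDFLAGS = $(inherited) -ObjC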

    Read the article

  • Creating a new project from a project skeleton using git

    - by asciitaxi
    In order to get a new Django project up and running faster, I'd like to maintain a separate "project skeleton" on which I base all my new projects. It would be great if, as I improved the skeleton, I could bring those improvements into my active projects. How can I accomplish this with git? So, maybe on my remote git server I would have one repo for each project and one for the skeleton: proj-A-repo proj-B-repo skeleton-repo If I want to create a new proj-C locally based on the skeleton, then push my local changes up to the remote server in a new repo called proj-C-repo, how might I do this? I've read through quite a bit of git documentation, but I'm confused about how to go about this. Do I need to clone the skeleton, or create an empty repo and then track a remote branch, or something else?
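
    A hedged sketch of one way to do this (the repository URLs are placeholders): clone the skeleton, rename its remote, and add the new project's own remote; later, skeleton improvements can be merged in from the renamed remote.

        # create proj-C from the skeleton
        git clone git@server:skeleton-repo.git proj-C
        cd proj-C
        git remote rename origin skeleton
        git remote add origin git@server:proj-C-repo.git   # empty repo created beforehand
        git push -u origin master

        # later: pull skeleton improvements into proj-C
        git fetch skeleton
        git merge skeleton/master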

    Read the article

  • Eclipse, Android ndk, source files, and library project dependencies

    - by Android Noob
    In Microsoft Visual Studio 2010, it is possible to create a solution with multiple projects and set dependencies between projects. I'm trying to figure out if the same thing can be done using Eclipse via the NDK. More specifically, I want to know if it is possible to create C source files in an ordinary Android project that can reference C header files in an Android library project. For example: Android library project: Sockets Ordinary Android project: Socket_Server Sockets contains all the C header/source files that are needed to do socket I/O. Socket_Server contains test code that makes calls to the functions that are defined in the Sockets library project. This test code requires a header file that contains the function declarations for all API calls. I have already set the library dependencies between the projects via: Properties > Android > Library > Add
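
    One thing to keep in mind is that the Android library project mechanism only covers Java sources and resources, so the native side has to be wired up separately. A minimal sketch of how Socket_Server's jni/Android.mk might reach over to the library project's headers (the relative paths are assumptions about the workspace layout):

        LOCAL_PATH := $(call my-dir)

        include $(CLEAR_VARS)
        LOCAL_MODULE    := socket_server
        LOCAL_SRC_FILES := socket_server_test.c
        # Pull header files in from the Sockets library project
        LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Sockets/jni/include
        include $(BUILD_SHARED_LIBRARY)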

    Read the article

  • MVVM Project and Item Templates

    - by Timmy Kokke
    Intro This is the first in a series of small articles about what is new in Silverlight 4 and Expression Blend 4. The series is built around an open source demo application, SilverAmp, which is available on http://SilverAmp.CodePlex.com.   MVVM Project and Item Templates Expression Blend has a new project template for getting started with a Model-View-ViewModel project easily. The template provides you with a View and a ViewModel bound together. It also adds the ViewModel to the SampleData of your project. It is available for both Silverlight and WPF. To get going, start a new project in Expression Blend and select Silverlight DataBound Application from the Silverlight project type. In this case I named the project DemoTest1. The solution now contains several folders: SampleData, which contains sample data to show in Blend; ViewModels, which starts with one file, MainViewModel.cs; Views, containing MainView.xaml with code-behind used for binding to the MainViewModel class; and your regular App.xaml and MainPage.xaml. The MainViewModel class contains a sample property and a sample method. Both the property and the method are used in the MainView control. The MainView control is a regular UserControl and is placed in the MainPage. You can continue building your application by adding your own properties and methods to the ViewModel and adding controls to the View. Adding Views with ViewModels is very easy too. The guys at Microsoft were nice enough to add a new item template too: a UserControl with ViewModel. If you add this new item to the root of your solution it will add the .xaml file to the Views folder and a .cs file to the ViewModels folder. Conclusion The Silverlight DataBound Application project type is a great way to get your MVVM-based project started. It also functions as a great source of information about how to connect it all together.   Technorati Tags: Silverlight,Wpf,Expression Blend,MVVM
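
    As a rough illustration of the kind of class such a ViewModel template is built around (a minimal sketch of my own, not the template's actual generated code):

        using System.ComponentModel;

        // Minimal ViewModel sketch: one bindable property plus change notification
        public class MainViewModel : INotifyPropertyChanged
        {
            private string _title = "SilverAmp";

            public string Title
            {
                get { return _title; }
                set
                {
                    if (_title == value) return;
                    _title = value;
                    OnPropertyChanged("Title");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }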

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project. This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared. Reverse engineering has never been a feature of SSIS until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog. I will do a video on this soon, but in case you can’t wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project called, imaginatively, “Denali1” with one package, “Package.dtsx”. The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard. We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article

  • Project Jigsaw: Late for the train: The Q&A

    - by Mark Reinhold
    I recently proposed, to the Java community in general and to the SE 8 (JSR 337) Expert Group in particular, to defer Project Jigsaw from Java 8 to Java 9. I also proposed to aim explicitly for a regular two-year release cycle going forward. Herewith a summary of the key questions I’ve seen in reaction to these proposals, along with answers. Making the decision Q Has the Java SE 8 Expert Group decided whether to defer the addition of a module system and the modularization of the Platform to Java SE 9? A No, it has not yet decided. Q By when do you expect the EG to make this decision? A In the next month or so. Q How can I make sure my voice is heard? A The EG will consider all relevant input from the wider community. If you have a prominent blog, column, or other communication channel then there’s a good chance that we’ve already seen your opinion. If not, you’re welcome to send it to the Java SE 8 Comments List, which is the EG’s official feedback channel. Q What’s the overall tone of the feedback you’ve received? A The feedback has been about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. Project Jigsaw Q Why is Project Jigsaw taking so long? A Project Jigsaw started at Sun, way back in August 2008. Like many efforts during the final years of Sun, it was not well staffed. Jigsaw initially ran on a shoestring, with just a handful of mostly part-time engineers, so progress was slow. During the integration of Sun into Oracle all work on Jigsaw was halted for a time, but it was eventually resumed after a thorough consideration of the alternatives. Project Jigsaw was really only fully staffed about a year ago, around the time that Java 7 shipped. We’ve added a few more engineers to the team since then, but that can’t make up for the inadequate initial staffing and the time lost during the transition. Q So it’s really just a matter of staffing limitations and corporate-integration distractions? A Aside from these difficulties, the other main factor in the duration of the project is the sheer technical difficulty of modularizing the JDK. Q Why is modularizing the JDK so hard? A There are two main reasons. The first is that the JDK code base is deeply interconnected at both the API and the implementation levels, having been built over many years primarily in the style of a monolithic software system. We’ve spent considerable effort eliminating or at least simplifying as many API and implementation dependences as possible, so that both the Platform and its implementations can be presented as a coherent set of interdependent modules, but some particularly thorny cases remain. Q What’s the second reason? A We want to maintain as much compatibility with prior releases as possible, most especially for existing classpath-based applications but also, to the extent feasible, for applications composed of modules. Q Is modularizing the JDK even necessary? Can’t you just put it in one big module? A Modularizing the JDK, and more specifically modularizing the Java SE Platform, will enable standard yet flexible Java runtime configurations scaling from large servers down to small embedded devices. In the long term it will enable the convergence of Java SE with the higher-end Java ME Platforms. Q Is Project Jigsaw just about modularizing the JDK? A As originally conceived, Project Jigsaw was indeed focused primarily upon modularizing the JDK. 
The growing demand for a truly standard module system for the Java Platform, which could be used not just for the Platform itself but also for libraries and applications built on top of it, later motivated expanding the scope of the effort. Q As a developer, why should I care about Project Jigsaw? A The introduction of a modular Java Platform will, in the long term, fundamentally change the way that Java implementations, libraries, frameworks, tools, and applications are designed, built, and deployed. Q How much progress has Project Jigsaw made? A We’ve actually made a lot of progress. Much of the core functionality of the module system has been prototyped and works at both compile time and run time. We’ve extended the Java programming language with module declarations, worked out a structure for modular source trees and corresponding compiled-class trees, and implemented these features in javac. We’ve defined an efficient module-file format, extended the JVM to bootstrap a modular JRE, and designed and implemented a preliminary API. We’ve used the module system to make a good first cut at dividing the JDK and the Java SE API into a coherent set of modules. Among other things, we’re currently working to retrofit the java.util.ServiceLoader API to support modular services. Q I want to help! How can I get involved? A Check out the project page, read the draft requirements and design overview documents, download the latest prototype build, and play with it. You can tell us what you think, and follow the rest of our work in real time, on the jigsaw-dev list. The Java Platform Module System JSR Q What’s the relationship between Project Jigsaw and the eventual Java Platform Module System JSR? A At a high level, Project Jigsaw has two phases. In the first phase we’re exploring an approach to modularity that’s markedly different from that of existing Java modularity solutions. We’ve assumed that we can change the Java programming language, the virtual machine, and the APIs. Doing so enables a design which can strongly enforce module boundaries in all program phases, from compilation to deployment to execution. That, in turn, leads to better usability, diagnosability, security, and performance. The ultimate goal of the first phase is produce a working prototype which can inform the work of the Module-System JSR EG. Q What will happen in the second phase of Project Jigsaw? A The second phase will produce the reference implementation of the specification created by the Module-System JSR EG. The EG might ultimately choose an entirely different approach than the one we’re exploring now. If and when that happens then Project Jigsaw will change course as necessary, but either way I think that the end result will be better for having been informed by our current work. Maven & OSGi Q Why not just use Maven? A Maven is a software project management and comprehension tool. As such it can be seen as a kind of build-time module system but, by its nature, it does nothing to support modularity at run time. Q Why not just adopt OSGi? A OSGi is a rich dynamic component system which includes not just a module system but also a life-cycle model and a dynamic service registry. The latter two facilities are useful to some kinds of sophisticated applications, but I don’t think they’re of wide enough interest to be standardized as part of the Java SE Platform. Q Okay, then why not just adopt the module layer of OSGi? 
A The OSGi module layer is not operative at compile time; it only addresses modularity during packaging, deployment, and execution. As it stands, moreover, it’s useful for library and application modules but, since it’s built strictly on top of the Java SE Platform, it can’t be used to modularize the Platform itself. Q If Maven addresses modularity at build time, and the OSGi module layer addresses modularity during deployment and at run time, then why not just use the two together, as many developers already do? A The combination of Maven and OSGi is certainly very useful in practice today. These systems have, however, been built on top of the existing Java platform; they have not been able to change the platform itself. This means, among other things, that module boundaries are weakly enforced, if at all, which makes it difficult to diagnose configuration errors and impossible to run untrusted code securely. The prototype Jigsaw module system, by contrast, aims to define a platform-level solution which extends both the language and the JVM in order to enforce module boundaries strongly and uniformly in all program phases. Q If the EG chooses an approach like the one currently being taken in the Jigsaw prototype, will Maven and OSGi be made obsolete? A No, not at all! No matter what approach is taken, to ensure wide adoption it’s essential that the standard Java Platform Module System interact well with Maven. Applications that depend upon the sophisticated features of OSGi will no doubt continue to use OSGi, so it’s critical that implementations of OSGi be able to run on top of the Java module system and, if suitably modified, support OSGi bundles that depend upon Java modules. Ideas for how to do that are currently being explored in Project Penrose. Java 8 & Java 9 Q Without Jigsaw, won’t Java 8 be a pretty boring release? A No, far from it! It’s still slated to include the widely-anticipated Project Lambda (JSR 335), work on which has been going very well, along with the new Date/Time API (JSR 310), Type Annotations (JSR 308), and a set of smaller features already in progress. Q Won’t deferring Jigsaw to Java 9 delay the eventual convergence of the higher-end Java ME Platforms with Java SE? A It will slow that transition, but it will not stop it. To allow progress toward that convergence to be made with Java 8 I’ve suggested to the Java SE 8 EG that we consider specifying a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. Q If Jigsaw is deferred to Java 9, would the Oracle engineers currently working on it be reassigned to other Java 8 features and then return to working on Jigsaw again after Java 8 ships? A No, these engineers would continue to work primarily on Jigsaw from now until Java 9 ships. Q Why not drop Lambda and finish Jigsaw instead? A Even if the engineers currently working on Lambda could instantly switch over to Jigsaw and immediately become productive—which of course they can’t—there are less than nine months remaining in the Java 8 schedule for work on major features. That’s just not enough time for the broad review, testing, and feedback which such a fundamental change to the Java Platform requires. Q Why not ship the module system in Java 8, and then modularize the platform in Java 9? A If we deliver a module system in one release but don’t use it to modularize the JDK until some later release then we run a big risk of getting something fundamentally wrong. 
If that happens then we’d have to fix it in the later release, and fixing fundamental design flaws after the fact almost always leads to a poor end result. Q Why not ship Jigsaw in an 8.5 release, less than two years after 8? Or why not just ship a new release every year, rather than every other year? A Many more developers work on the JDK today than a couple of years ago, both because Oracle has dramatically increased its own investment and because other organizations and individuals have joined the OpenJDK Community. Collectively we don’t, however, have the bandwidth required to ship and then provide long-term support for a big JDK release more frequently than about every other year. Q What’s the feedback been on the two-year release-cycle proposal? A For just about every comment that we should release more frequently, so that new features are available sooner, there’s been another asking for an even slower release cycle so that large teams of enterprise developers who ship mission-critical applications have a chance to migrate at a comfortable pace.
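
    For readers who haven't seen one, here is a sketch of what a module declaration looks like (the syntax shown is the one the module system eventually standardized; the Jigsaw prototype's syntax at the time differed in detail, and the module and package names below are invented):

        // module-info.java: the module's name, dependencies, and exported API
        module com.example.app {
            requires java.sql;               // depend on another module
            exports com.example.app.api;     // only this package is visible to consumers
        }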

    Read the article

  • Tasks carried out for a Software Project

    - by Sara
    Hi, we were asked to propose a web-based system for a shop as an assignment. As I'm a newbie at project management, I find it quite difficult to come up with tasks for our Gantt chart. Can someone please suggest a sample Gantt chart, or the main tasks followed in developing such a system? Thanks

    Read the article

  • Cannot download Microsoft Project 2010 demo videos

    - by Nam Gi VU
    Hi everyone, I need to download the demo videos at my office so that I can view them later at home, since I have a slow internet connection there, but there seems to be no way to download the videos at [the site][1], or the other resources on that site. Using some of the Firefox add-ons to download them is not possible. Do you have any tips for me? Please share. [1]: http://www.microsoft.com/showcase/en/US/search?phrase="project conference 2009"

    Read the article

  • Unregistering COM dll with a C# Setup Project

    - by lb
    Hi all. I've been stuck on this one for a while. I'll try to explain it in the simplest terms and to the best of my knowledge. I will honour any help. I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client, and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't complain with 'Another version is already installed'. After a few updates I began to notice that the DLL would not be updated to the new version by the installer. The client computer had the original DLL both installed and registered. First symptom: method-not-found exceptions from the client C# code. This is not a shared DLL and only this application needs it. I've noticed that when uninstalling the application (through the usual procedure) the DLL is also not removed from the application folder, although I set this file's 'Permanent' property to false. The registration entries in the registry are also maintained. I do update the version of the DLL in VS 6.0 (usually increasing the build number) before building it. Then in VS 2008, I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab. I've thought of these options: (1) a custom step in the setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly); (2) somehow find out how the 'Isolate' property can work for me without registering; (3) find out how to use setup project 'Conditions' to actually check the version of the library and update the file accordingly at every install. Any help would be incredibly welcome.

    Read the article

  • How to determine the size of a project (lines of code, function points, other)

    - by sixtyfootersdude
    How would you evaluate project size? Part A: before you start a project. Part B: for a complete project. I am interested in comparing unrelated projects. Here are some options: 1) Lines of code. I know that this is not a good metric of productivity, but is it a reasonable measure of project size? If I wanted to estimate how long it would take to recreate a project, would this be a reasonable way to do it? How many lines of code should I estimate per day? 2) Function points. Function points are defined in terms of the number of: inputs, outputs, inquiries, internal files, and external interfaces. Does anyone have a viewpoint on whether this is a good measure? Is there a way to actually do this? Does anyone have another solution? Hours taken seems like it could be a useful metric, but not on its own. If I ask you what is a "bigger program" and give you two programs, how would you approach the question? I have seen several discussions of this on Stack Overflow, but most discuss how to measure programmer productivity. I am more interested in project size.
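
    For what it's worth, a sketch of how unadjusted function points are usually counted, using the standard IFPUG "average" complexity weights (the counts in the example are made up):

        UFP = 4·EI + 5·EO + 4·EQ + 10·ILF + 7·EIF
              (EI = external inputs, EO = external outputs, EQ = external inquiries,
               ILF = internal logical files, EIF = external interface files)

        Example: 10 inputs, 6 outputs, 4 inquiries, 5 internal files, 2 external interfaces
        UFP = 4·10 + 5·6 + 4·4 + 10·5 + 7·2 = 40 + 30 + 16 + 50 + 14 = 150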

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html In particular, I don't understand the author's stance on bundling dependencies with your project. These are: == Bundling == Your source only comes with other code projects that it depends on [ +20 points of FAIL ] Why is this a problem (especially given the last point)? If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ] Doesn't this necessarily have to be the case for software built against third-party libs? Your code needs that other code to be compiled into its library before the linker can work? If you have modified those other bundled code bits [ +40 points of FAIL ] If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows environment), if your code base is dependent on a particular version of a lib (that you also need to custom compile for your project), wouldn't it be easier to give the user the code yourself (because in this case, it is unlikely that the user will already have the correct version installed)? So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. Ideally, I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update: `(<csite> <csite> <csite>) where <csite> would be (1362501715 . <site>) At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list of questions: `("codereview" <cquestion> <cquestion> <cquestion>) Each <cquestion> is, you guessed it, a cons of a question with its last update time: `(1362501715 . <question>) (1362501720 . <question>) <question> is a cons of a question structure and a list of answers (again, consed with their last update time): `(<question-structure> <canswer> <canswer> <canswer>) where each <canswer> is `(1362501715 . <answer-structure>) This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love). The explicit conses are likely unnecessary, but it helps my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into (<epoch-time> <api-param> <cquestion> <cquestion> ...) Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each inherits its last-update time from its children and so on down the tree. To what extent this cull should take place: I'm not sure.) Does storing data like this have any performance trade-offs for the code that must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?
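
    To make the shape concrete, here is a minimal Emacs Lisp sketch of one way such a cache could be stored and queried (my own illustration, keyed by the site's API parameter; the function names and exact cons layout are assumptions, not part of the question):

        (defvar stack-api/cache nil
          "Alist of (API-PARAM LAST-UPDATE . QUESTIONS), one entry per site.")

        (defun stack-api/cache-put (site-param questions)
          "Cache QUESTIONS for SITE-PARAM, stamping the current time."
          (let ((entry (assoc site-param stack-api/cache)))
            (if entry
                (setcdr entry (cons (float-time) questions))
              (push (cons site-param (cons (float-time) questions))
                    stack-api/cache))))

        (defun stack-api/cache-get (site-param &optional max-age)
          "Return cached questions for SITE-PARAM if younger than MAX-AGE seconds."
          (let ((entry (assoc site-param stack-api/cache)))
            (when (and entry
                       (or (null max-age)
                           (< (- (float-time) (cadr entry)) max-age)))
              (cddr entry))))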

    Read the article

  • How to model a system to help my team grasp the project's bigger picture? [closed]

    - by user1796528
    From a software engineering point of view, I should model the system to make it easier for other people to understand what they are working on. To do so, I have used the Dia drawing program. But after having used Dia for some time, I find that it falls short in helping me to correctly and efficiently model my project. How do you usually tackle this problem (modelling a project in the large), and what tools would you recommend for the job, and why?

    Read the article
