Search Results

Search found 421 results on 17 pages for 'artifacts'.

Page 5/17

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done, and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected, which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below.
    Anyone want to help get this plugin further? What's needed:
    · Much deeper Dart editing support; right now only very basic syntax coloring is provided (an ANTLR lexer is integrated into the NetBeans syntax coloring infrastructure). Parsing, error checking, code completion, and some small code templates are needed.
    · A new panel in the Project Properties dialog of NetBeans HTML5 projects for enabling Dart (similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is generated immediately and the application is immediately deployable.
    · Whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case.
    · Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from them, i.e., from Dart projects created outside the IDE.
    I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open sourced the BizTalk Data Services. In the meantime we were working on other artifacts that come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services. It follows the same concept we covered with BTS Data Services, providing a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.
    In this first release we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.
    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples illustrate some of the features available in the current version of the API.
    Basic Querying:
    · Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
    · Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
    · Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies
    Navigation:
    The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.
    · Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
    · Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
    · Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
    · Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
    · Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
    · Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
    · Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')
    Implementation:
    With BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.
    The first is through the Service Operations feature of WCF Data Services, in which you execute a policy by passing the Facts in the URL itself. This is a very simple implementation of policy execution, due to the current limitations and restrictions of WCF Data Services Service Operations (only primitive types can be passed as input parameters). Below is a code sample, followed by a traced Request/Response message.
    The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert Facts for that particular Rule, and finally, when you perform the SaveChanges() call on the IUpdatable interface API, it executes the policy with the facts you inserted at runtime. Below is a sample of client-side code.
    A limitation of the current version of WCF Data Services is that there is no way to return the updates that happen on the service side back to the client via the SaveChanges() method. Here we are executing the rule by passing serialized XML as Facts, and no changes are made to any data that we could query back to fetch the results. This is overcome by the first way of executing policies, i.e. executing the policy as a Service Operation call.
    This actually generates an AtomPub message, shown below:
    POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
    User-Agent: Microsoft ADO.NET Data Services
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 2.0;NetFx
    Accept: application/atom+xml,application/xml
    Accept-Charset: UTF-8
    Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Host: localhost:8080
    Content-Length: 1481
    Expect: 100-continue

    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
    Content-Type: application/http
    Content-Transfer-Encoding: binary

    MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
    Content-ID: 4
    Content-Type: application/atom+xml;type=entry
    Content-Length: 927

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
      <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
      <title />
      <author>
        <name />
      </author>
      <updated>2011-01-31T20:09:15.0023982Z</updated>
      <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
      <content type="application/xml">
        <m:properties>
          <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
          <d:FactType>TestSchema</d:FactType>
          <d:ID>TestPolicy</d:ID>
        </m:properties>
      </content>
    </entry>
    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation:
    The installation of BRE Data Services is pretty straightforward.
    · Create a new IIS website, say BREDataServices.
    · Download the source code from Tellago Codeplex and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the website created above.
    · The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    · Right-click the BREManagementService.svc in the IIS Content View for the website, browse to it, and voila.
    Conclusion:
    The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to Tellago's DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases @ Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit. Till then, happy BizzRuling…!!!
    Thanks,
    Vishal Mody

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes. Sometimes it shows a black screen (not very common anymore in my latest installs), other times the mouse and keyboard just fail to move or respond (not even Ctrl + Alt + F1 works), and other times I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphics card drivers because:
    · after a hard reset, I usually get error reports about X.org / jockey
    · it's common for artifacts to appear during loading / shutdown / whenever, for instance: a pattern filled with £ during log off, an ugly-colored squared pattern during boot, windows that are only partially moved (i.e.: only the top half), and Firefox renderings that leave the bottom ~30% of the page black
    These artifacts appear right before the system freezes. I've installed Ubuntu 12.04 LTS and, after several failed attempts to get my dual monitor setup to work properly, I tried installing the new 12.10 version, hoping that this new version would have this problem solved... Unfortunately, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones), and I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates as well as the nouveau OSS driver. Nothing (except no driver at all, with a 640*480 resolution) seems stable at all. Here is the info of my graphics card:
    alix@alix-E500:~$ lspci | grep VGA
    01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1)
    alix@alix-E500:~$ sudo lshw -C video
    [sudo] password for alix:
      *-display
           description: VGA compatible controller
           product: G86 [GeForce 8400M G]
           vendor: NVIDIA Corporation
           physical id: 0
           bus info: pci@0000:01:00.0
           version: a1
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
           configuration: driver=nouveau latency=0
           resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff
    Right now, I don't even have my 22" monitor connected, as I can't even get my laptop display to work properly and without freezes. I've searched, read and tried all that I could (over several fresh reinstalls) to fix the problem, but so far no solution has proven definitive. I'm sorry I can't specify which symptom maps to which driver, as I've been trying to solve this one on my own without logging what I'm doing; perhaps someone here will be able to point me to a certain-fix solution, and if not I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem.
    Trying out NVIDIA accelerated graphics driver (version 173): Scrolling and minimizing/maximizing windows take between 2 and 5 seconds to finish. Context menus also pop up very slowly and typing seems delayed by ~1 second. No critical issues so far. The Firefox rendering of the Save Edits button is consistently messed up (random black lines at the top).
    Trying out NVIDIA accelerated graphics driver (version current) [Recommended]: All the delays mentioned above and the buggy rendering of the Save Edits button are gone, but I'm noticing that the whole screen flashes black for a couple of microseconds, and while I was writing this test for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). Had to force a hard reset. Also, the system hung for a couple of seconds with the fade-out of the "Restart" menu.
    Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304): Same symptoms as before; it crashed once while I was trying to install Chromium and again after a hard reset when I was trying to remove the driver. The bottom of the screen did not go black and I could move my mouse both times. Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot.
    Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307): The system crashed as soon as I clicked something. Had to do a fresh re-install.
    Trying out Nouveau: Accelerated Open Source driver for nVidia cards: Artifacts still show up during boot, but other than that this one seems stable. As soon as I connected my second monitor, the responsiveness dropped a lot; animations and video are somewhat slow. I'm going to try this solution http://askubuntu.com/a/98871/9018 later on.

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6:
    · Greatly enhanced user interface
    · Parallel processing
    · Bulk artifact upgrading
    The User Interface
    First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows.
    · Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function by the icon.
    · Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions.
    · Floating or docking property panels provide descriptive tool tips for each property, and auto-resize when adjusting the canvas, without having to search Help or scroll around to get access to the property.
    · Progress and status indicators let you know when an operation is working.
    · A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow.
    · The inline tool bar offers quick access to pan, zoom, fit and overview functions.
    · Selecting multiple artifacts with a right-click context menu allows you to manage your workspace more efficiently.
    Partitioning and Parallel Processing
    Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure.
    Bulk Artifact Upgrading
    Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts, and only when they were opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select which artifact or group of artifacts to upgrade, all at once. As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope that you try expressor and share your experience with me.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Managing a file-based public maven repository

    - by Roland Ewald
    I am looking for an easy way to manage a public file-based Maven repository. While we are using the open-source version of Artifactory internally, we now want to put a file-based repository of our published artifacts (and their dependencies) on a separate machine that is publicly available. There are several ways to do this, but none of them seems ideal:
    · Use the Maven Dependency plugin: if it is configured correctly and executed with the goal dependency:copy-dependencies for the release module of our project, it creates a local repository structure that is fine, but this structure contains neither the meta-data.xml files nor the hash sums.
    · Use Artifactory to export the repo: AFAIK Artifactory only allows exporting a repository as a whole. This would include the non-published modules from our project (which would then need to be deleted manually). Also, all dependencies are sitting in another repository, so this needs to be done twice, and many dependencies are not even required by a published artifact (only by artifacts that are still for internal use). Nevertheless, this method would also include the meta-data.xml files and the hash sums for all files.
    To set up an initial version of the repository, I used a mixture of both methods: I first created the Maven repository for all required dependencies via dependency:copy-dependencies and then wrote a script to cherry-pick the meta-data.xml files (etc.) from Artifactory. This is terribly cumbersome; isn't there a better way to solve it? Maybe there is another Maven 3 plugin that I am unaware of, or some other command-line tool that does the job? I basically just need a simple way to create a Maven repository that contains all artifacts a given artifact depends on (and no more), and also contains all metadata expected in a remote repository. Any ideas?
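    One possible building block, sketched here as an assumption rather than a full answer: Maven's standard deploy mechanism generates the metadata and checksum files when the target repository is just a file:// URL, so the release module can publish its own artifacts into the public directory. The repository id and path below are placeholders, and this only covers the published artifacts themselves; the dependencies would still have to be mirrored separately.
      <!-- In the release module's pom.xml; "public-file-repo" and the path are hypothetical. -->
      <distributionManagement>
        <repository>
          <id>public-file-repo</id>
          <url>file:///var/www/maven-repo</url>
        </repository>
        <snapshotRepository>
          <id>public-file-repo</id>
          <url>file:///var/www/maven-repo</url>
        </snapshotRepository>
      </distributionManagement>
    Running mvn deploy against this configuration writes each artifact together with its POM, the maven-metadata.xml files and the .md5/.sha1 checksums into the target tree, which can then be served or rsynced as-is.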

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat, in order to suit Eclipse better):
    -product
     |
     |-parent
     |-core
     |-opt
     |-all
    Parent is the aggregator project with core, opt and all as modules. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts:
    · product-core.jar
    · product-core-src.jar
    · product-core-with-dependencies.jar
    · product-opt.jar
    · product-opt-src.jar
    · product-opt-with-dependencies.jar
    · product-all.jar
    · product-all-src.jar
    · product-all-with-dependencies.jar
    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal of the source plugin to aggregate the sources of the entire aggregator project. This is also true for the javadoc plugin, which aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module and just using the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks
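    For reference, a minimal sketch of the parent-based aggregation mentioned above (the execution id is made up and the plugin version is omitted); the source plugin's aggregate goal, run from or bound in the parent, collects the sources of all modules into a single jar attached to the parent build, and the javadoc plugin's aggregate goal works the same way:
      <!-- In parent/pom.xml; alternatively run mvn source:aggregate from the parent directory. -->
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-source-plugin</artifactId>
            <executions>
              <execution>
                <id>aggregate-sources</id>
                <goals>
                  <goal>aggregate</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    Whether the with-dependencies assembly can move to the parent as well is less clear-cut, which is exactly the tension described in the question.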

    Read the article

  • Documentation in RETL, RIB, and RSL Release 13.2.4

    - by Oracle Retail Documentation Team
    The Patch Release 13.2.4 of the integration-related products, Oracle Retail Extract, Transform and Load (RETL), Oracle Retail Integration Bus (RIB), and Oracle Retail Service Layer (RSL), is now available from My Oracle Support.
    End User Documentation Enhancements
    The following enhancements have been made to the documentation:
    · New RETL Installation Guide: New in Release 13.2.4, the RETL Installation Guide includes complete instructions to install and configure RETL 13.2.4. Installation instructions were previously in the Programmer's Guide. As part of this enhancement, content was added to and tested in the RETL Installation Guide to ensure that it contains chapters and sections similar to those included in other Oracle Retail Installation Guides.
    · Template Creator documentation, under the RIB product umbrella: The Oracle Retail Functional Artifact Guide and the Oracle Retail Functional Artifact Generator Guide contain new information about a new tool called the Template Creator. The Functional Artifacts Generator tool has been enhanced to generate custom and localized payload business objects on demand, based on Oracle Retail Functional Artifact rules. A new tool called the Template Creator has been provided to create the placeholder XSDs and the import hooks in the base objects on an as-needed basis. In other words, this tool constructs the appropriate placeholders in the packaging structure in the correct locations. The Artifact Generator tools, including the Template Creator, can be used as either a command-line or GUI tool set.
    List of Documents in RETL, RIB, and the Oracle Retail Service Layer (RSL) 13.2.4
    The following documents are included in release 13.2.4 of the applications noted above:
    RIB
    · Oracle Retail Integration Bus Release Notes
    · Oracle Retail Integration Bus Implementation Guide
    · Oracle Retail Integration Bus Installation Guide
    · Oracle Retail Integration Bus Operations Guide
    · Oracle Retail Functional Artifact Generator Guide
    · Oracle Retail Functional Artifacts Guide
    · Oracle Retail Service Layer Installation Guide
    · Oracle Retail SOA Enabler Tool Guide
    · RIB Integration Guide (ID 1277421.1)
    RETL
    · Oracle Retail Extract, Transform, and Load Release Notes
    · Oracle Retail Extract, Transform, and Load Installation Guide
    · Oracle Retail Extract, Transform, and Load Programmer's Guide
    RSL
    · Oracle Retail Service Layer Release Notes
    · Oracle Retail Service Layer Installation Guide
    · Oracle Retail Service Layer Programmer's Guide

    Read the article

  • CruiseControl.Net - not able to see Project Statistics

    - by Anders Juul
    Hi all, I've made a reinstallation of the buildserver and can no longer see the standard graphs of project statistics. The error message shown is "Missing/Invalid statistics reports. Please check if you have enabled the Statistics Publisher, and statistics have been collected atleast once after that." To the best of my knowledge, the ccnet.config file has not been changed in this respect and by inspection it is verified that I have a Statistics / statisticsList-section for the project. Furthermore, the values appear in the Artifacts\statistics.csv and Artifacts\report.xml files. My guess would then be StatisticsGraph.xslt, which I have copied fresh from distribution to both Server\xlst and WebDashboard\xslt (why are they located in both places, by the way!?). Rebuild and check - still same error message. Any hints to how to debug this would be appreciated!

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the many compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.
    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results.
    Figure: The only thing missing is Build
    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to the Business, has a role to play in knitting this together.
    Figure: The relationships required to make this work can get a little confusing
    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.
    Figure: How are things progressing
    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
    Requirements – The root of all knowledge
    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:
    Figure: If the centre of the world was a requirement
    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.
    Figure: Who edited the Requirement and when
    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release; you may also need to know which Requirements were ever assigned to a particular release.
    Figure: Some magic required, but result still achieved
    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "Asof" clause at the end to query historical data in the operational store for TFS.
    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format
    In order to achieve this you do need to save the query as a *.wiql file on your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful
    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.
    Tasks – Where the real work gets done
    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.
    Figure: The Task Work Item Type has its own relationships
    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.
    Figure: Tasks are related to the Requirement
    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.
    Figure: Task Work Item Type has a narrower purpose
    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.
    Figure: Tasks are associated with Changesets
    Changesets – Who wrote this crap
    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.
    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control
    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.
    Figure: On Check-in you can resolve a Task which automatically associates it
    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.
    Figure: Changes are tracked at the File level
    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.
    Figure: Annotate shows the lines
    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your code in version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
    Branches – And why we need them
    Branches are a really powerful tool for development and release management, but they are most important for audits.
    Figure: One way to Audit releases
    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between its creation and this point. Another, better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together
    Builds are the glue that helps us enable the next level of traceability by tying everything together.
    Figure: The dashed pieces are not out of the box but can be enabled
    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.
    Figure: The folder identifies what changes are included in the build
    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.
    Figure: What changes have been made since the last successful Build
    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.
    Figure: Find all of the Work Items to associate with
    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.
    Figure: Now we know which Work Items were completed in a build
    Now we can link a single line of code changed all the way back, through the Task that initiated the action, to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?
    Test Cases – How we know we are done
    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.
    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it both makes it difficult for them to reject a Requirement when it passes all of the tests and provides a level of traceability and validation, for audit, that a feature has been built and tested to order.
    Figure: You can have many Acceptance Criteria for a single Requirement
    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" state.
    The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its results, for a Unit Test designed to prove the existence of a Bug and then guard against its reoccurrence.
    Figure: Associating a Test Case with an automated Test
    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many automated Integration and Regression tests that it would make sense to track in this way.
    Bug – Let's not have regressions
    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.
    Figure: Bugs are the centre of their own world
    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.
    Figure: Bugs have many associations
    This allows you to track Bugs / Defects in your system effectively and report on them.
    Change Request – I am not a feature
    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, as with Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.
    Figure: Make sure your Change Requests only Affect Requirements and not rewrite them
    Conclusion
    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.
    · Business Analysts manage Requirements and Change Requests
    · Programmers manage Tasks and check in against Change Requests and Bugs
    · Testers manage Bugs and Test Cases
    · Build Masters manage Builds
    Although there is some crossover, there are still roles, or "hats", that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • wine 1.4 regedit makes screen flicker on 12.04 with dual monitor setup

    - by s1lv3r
    I have a dual-monitor setup running two 23" displays at 1920x1080 which has the following problem: when running any Wine application (for example "wine regedit" from the console) the screen flickers and the windows have artifacts like this:
    Also, sometimes taking a screenshot using the Print key will make Compiz crash (the launcher and all window bars/menus are gone) when a Wine application is started. I don't have the same problems on my notebook, which has the same setup. The only difference is that the notebook has ATI graphics and this PC has NVIDIA. This is the output of lshw -c video:
    *-display
         description: VGA compatible controller
         product: G72 [GeForce 7300 LE]
         vendor: NVIDIA Corporation
         physical id: 0
         bus info: pci@0000:07:00.0
         version: a1
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
         configuration: driver=nvidia latency=0
         resources: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:fb000000-fbffffff memory:fce00000-fce1ffff
    I also noticed that running xrandr from the console makes the screen flicker for a few seconds on this PC, which also doesn't happen on my notebook. Removing one screen from the setup stops the flickering and the artifacts inside the Wine applications from appearing. Does anybody have any advice on what I could try to change to make this work?

    Read the article

  • How to assemble a multi-project ant build system

    - by Alex Worden
    At my new gig, they use Ant and cannot be persuaded to move to Maven. I've looked everywhere for a decent example of how a multi-project Ant build system should be assembled; the Apache site falls short. I'm looking specifically for best practices to:
    · automatically build local projects that are dependencies of a project
    · share artifacts from projects with their dependents
    · export a project's dependencies and generated artifacts (jars) to be inherited by dependent projects
    · share third-party dependencies between projects
    I'm sure I can do all this without using Ivy - what did people do before Ivy? I really don't want to have to set up a corporate repository or rely on external repositories - the engineers here are really against that and have all their third-party jars checked into source control. Can anyone point me at a good open-source example of a multi-project Ant build?
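    As a rough illustration of the pre-Ivy pattern (project names, paths and target names below are made up), each project can delegate to its local dependencies' build files with the <ant> task and then pick the produced jars up from an agreed output folder:
      <project name="webapp" default="dist" basedir=".">
        <!-- Build the local dependency first, without leaking this project's properties into it. -->
        <target name="build-deps">
          <ant dir="../core" target="dist" inheritAll="false"/>
        </target>
        <!-- Compile against the jar the dependency just published, plus checked-in third-party jars. -->
        <target name="dist" depends="build-deps">
          <mkdir dir="build/classes"/>
          <javac srcdir="src" destdir="build/classes" includeantruntime="false">
            <classpath>
              <fileset dir="../core/dist" includes="*.jar"/>
              <fileset dir="lib" includes="*.jar"/>
            </classpath>
          </javac>
          <jar destfile="dist/webapp.jar" basedir="build/classes"/>
        </target>
      </project>
    A shared build file pulled in with <import> is the usual place to keep the output-folder conventions and third-party classpath definitions so they are not repeated in every project.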

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes ?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDA with J9 Foundation profile). Over time the library that started as a single project spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars. Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the jar task's include/exclude features. This is mostly needed due to the jar size constraints for the mobile projects. Migrating now to Maven, I just wonder if there are any best practices to arrive at something similar. I am considering the following scenarios:
    [1] - In addition to the main jar artifact (my-lib-1.0.jar), also generate, inside the my-lib project, the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) via Maven Jar Plugin or Maven Assembly Plugin filtering/includes. I'm not very comfortable with this approach since it pollutes the library with the library consumers' demands (filters).
    [2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" my-lib and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0, etc.). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the my-lib project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it leaves the my-lib project intact (with no additional classifiers and artifacts), it basically doubles the number of projects, since for every client project I'll have one just to collect the needed classes from the my-util library (a my-pda-app project will need a my-lib-wrapper-for-my-pda-app project/dependency).
    [3] - In every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).
    What is the best practice for solving this kind of problem in the "Maven way"?
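    As a sketch of what scenario [1] could look like (the classifier and package names are made up), an extra execution of the jar plugin in my-lib's POM attaches a filtered, classified artifact next to the main jar:
      <!-- In my-lib/pom.xml: attaches my-lib-1.0-pda.jar alongside the main artifact. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <executions>
          <execution>
            <id>pda-jar</id>
            <phase>package</phase>
            <goals>
              <goal>jar</goal>
            </goals>
            <configuration>
              <classifier>pda</classifier>
              <includes>
                <include>com/example/util/core/**</include>
                <include>com/example/util/pda/**</include>
              </includes>
            </configuration>
          </execution>
        </executions>
      </plugin>
    Clients would then reference the artifact with <classifier>pda</classifier>; whether pulling consumer-specific filters into the library is acceptable is exactly the trade-off raised in [1].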

    Read the article

  • Caching the repository index in m2eclipse

    - by Titi Wangsa bin Damhore
    Every time I start with a fresh new workspace, m2eclipse downloads nexus-maven-repository-index.gz from the Maven central repository. This is good, but sometimes I just want to start a new workspace and not wait for the download. I tried copying the whole .metadata directory from an old workspace to the new one, but the list of Maven artifacts is still empty. Is there a way I can cache it? Or at least download the file once, and then copy/extract/repackage it so that m2eclipse thinks it has already downloaded it and allows me to search for Maven artifacts? Or, the short version of the question: where, and in what format, is the "nexus-maven-repository-index.gz" file stored in the workspace?

    Read the article

  • TeamCity Mercurial, How to handle tags and releases

    - by Garrett
    Hi. First off, I'm new to TeamCity, coming from CC. I have a server up and running and building my projects, so far so good :). My problem is this: whenever I make a Mercurial Tag, I would like TeamCity to react in a special way, so that the resulting binaries/artifacts from the Tag build are made available for download (automatic release when tagging). In short: can I do an automated release so that when I create a Tag AND it builds successfully, the result is copied to some location X, but ONLY when I Tag? I'm pretty sure that this can be done, but when I read the posts around the net and read the documentation, I tend to get a little confused about artifacts/build scripts/publishing and how to do it. Can anyone give me some hints about where/how to start, or guide me to a tutorial/example of how to do this? I want it soooooo bad :-D. Kind regards

    Read the article

  • Deploy ASP.NET Web Applications with Web Deployment Projects

    - by Ben Griswold
    One may quickly build and deploy an ASP.NET web application via the Publish option in Visual Studio.  This option works great for most simple deployment scenarios but it won’t always cut it.  Let’s say you need to automate your deployments. Or you have environment-specific configuration settings. Or you need to execute pre/post build operations when you do your builds.  If so, you should consider using Web Deployment Projects. The Web Deployment Project type doesn’t come out-of-the-box with Visual Studio 2008.  You’ll need to Download Visual Studio® 2008 Web Deployment Projects – RTW and install if you want to follow along with this tutorial. I’ve created a shiny new ASP.NET MVC project.  Web Deployment Projects work with websites, web applications and MVC projects so feel free to go with any web project type you’d like.  Once your web application is in place, it’s time to add the Web Deployment project.  You can hunt and peck around the File > New > New Project… dialogue as long as you’d like, but you aren’t going to find what you need.  Instead, select the web project and then choose the “Add Web Deployment Project…” hiding behind the Build menu option. I prefer to name my projects based on the environment in which I plan to deploy.  In this case, I’ll be rolling to the QA machine. Don’t expect too much to happen at this point.  A seemingly empty project with a funny icon will be added to your solution.  That’s it. I want to take a minute and talk about configuration settings before we continue.  Some of the common settings which might change from environment to environment are appSettings, connectionStrings and mailSettings.  Here’s a look at my updated web.config: <appSettings>   <add key="MvcApplication293.Url" value="http://localhost:50596/" />     </appSettings> <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   <system.net>   <mailSettings>     <smtp from="[email protected]">         <network host="server.com" userName="username" password="password" port="587" defaultCredentials="false"/>     </smtp>   </mailSettings> </system.net> I want to update these values prior to deploying to the QA environment.  There are variations to this approach, but I like to maintain environment-specific settings for each of the web.config sections in the Config/[Environment] project folders.  I’ve provided a screenshot of the QA environment settings below. It may be obvious what one should include in each of the three files.  Basically, it is a copy of the associated web.config section with updated setting values.  For example, the AppSettings.config file may include a reference to the QA web url, the DB.config would include the QA database server and login information and the StmpSettings.config would include a QA Stmp server and user information. 
<?xml version="1.0" encoding="utf-8" ?> <appSettings>   <add key="MvcApplication293.Url" value="http://qa.MvcApplicatinon293.com/" /> </appSettings> AppSettings.config  <?xml version="1.0" encoding="utf-8" ?> <connectionStrings>   <add name="ApplicationServices"        connectionString="server=QAServer;integrated security=SSPI;database=MvcApplication293"        providerName="System.Data.SqlClient"/>   </connectionStrings> Db.config  <?xml version="1.0" encoding="utf-8" ?> <smtp from="[email protected]">     <network host="qaserver.com" userName="qausername" password="qapassword" port="587" defaultCredentials="false"/> </smtp> SmtpSettings.config  I think our web project is ready to deploy.  Now, it’s time to concentrate on the Web Deployment Project itself.  Right-click on the project file and open the Property Pages. The first thing to call out is the Configuration dropdown.  I only deploy a project which is built in Release Mode so I only setup the Web Deployment Project for this mode.  (This is when you change the Configuration selection to “Release.”)  I typically keep the Output Folder default value – .\Release\.  When the application is built, all artifacts will be dropped in the .\Release\ folder relative to the Web Deployment Project root.  The final option may be up for some debate.  I like to roll out updatable websites so I select the “Allow this precompiled site to be updatable” option.  I really do like to follow standard SDLC processes when I release my software but there are those times when you just have to make a hotfix to production and I like to keep this option open if need be.  If you are strongly opposed to this idea, please, by all means, don’t check the box. The next tab is boring.  I don’t like to deploy a crazy number of DLLs so I merge all outputs to a single assembly.  Again, you may have another option and feel free to change this selection if you so wish. If you follow my lead, take care when choosing a single assembly name.  The Assembly Name can not be the same as the website or any other project in your solution otherwise you’ll receive a circular reference build error.  In other words, I can’t name the assembly MvcApplication293 or my output window would start yelling at me. Remember when we called out our QA configuration files?  Click on the Deployment tab and you’ll see how where going to use them.  Notice the Web.config file section replacements value.  All this does is swap called out web.config sections with the content of the Config\QA\* files.  You can reduce or extend this list as you deem fit.  Did you see the “Use external configuration source file” option?  You know how you can point any of your web.config sections to an external file via the configSource attribute?  This option allows you to leverage that technique and instead of replacing the content of the sections, you will replace the configSource attribute value instead. <appSettings configSource="Config\QA\AppSettings.config" /> Go ahead and Apply your changes.  I’d like to take a look at the project file we just updated.  Right-click on the Web Deployment Project and select “Open Project File.” One of the first configuration blocks reflects core Release build settings.  There are a couple of points I’d like to call out here: DebugSymbols=false ensures the compilation debug attribute in your web.config is flipped to false as part of build process.  There’s some crumby (more likely old) documentation which implies you need a ToggleDebugCompilation task to make this happen.  Nope. 
Just make sure the DebugSymbols is set to false.  EnableUpdateable implies a single dll for the web application rather than a dll for each object and and empty view file. I think updatable applications are cleaner and include the benefit (or risk based on your perspective) that portions of the application can be updated directly on the server.  I called this out earlier but I wanted to reiterate. <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">     <DebugSymbols>false</DebugSymbols>     <OutputPath>.\Release</OutputPath>     <EnableUpdateable>true</EnableUpdateable>     <UseMerge>true</UseMerge>     <SingleAssemblyName>MvcApplication293</SingleAssemblyName>     <DeleteAppCodeCompiledFiles>true</DeleteAppCodeCompiledFiles>     <UseWebConfigReplacement>true</UseWebConfigReplacement>     <ValidateWebConfigReplacement>true</ValidateWebConfigReplacement>     <DeleteAppDataFolder>true</DeleteAppDataFolder>   </PropertyGroup> The next section is self-explanatory.  The content merely reflects the replacement value you provided via the Property Pages. <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">     <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">       <Section>appSettings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\Db.config">       <Section>connectionStrings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">       <Section>system.net/mailSettings/smtp</Section>     </WebConfigReplacementFiles>   </ItemGroup> You’ll want to extend the ItemGroup section to include the files you wish to exclude from the build.  The sample ExcludeFromBuild nodes exclude all obj, svn, csproj, user, pdb artifacts from the build. Enough though they files aren’t included in your web project, you’ll need to exclude them or they’ll show up along with required deployment artifacts.  <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">     <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">       <Section>appSettings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\Db.config">       <Section>connectionStrings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">       <Section>system.net/mailSettings/smtp</Section>     </WebConfigReplacementFiles>     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\obj\**\*.*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*.*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.csproj" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.user" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\bin\*.pdb" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\Notes.txt" />   </ItemGroup> Pre/post build and Pre/post merge tasks are added to the final code block.  By default, your project file should look like the following – a completely commented out section. <!– To modify your build process, add your task inside one of        the targets below and uncomment it. Other similar extension        points exist, see Microsoft.WebDeployment.targets.   
<Target Name="BeforeBuild">   </Target>   <Target Name="BeforeMerge">   </Target>   <Target Name="AfterMerge">   </Target>   <Target Name="AfterBuild">   </Target>   –> Update the section to remove all temporary Config folders and files after the build.  <!– To modify your build process, add your task inside one of        the targets below and uncomment it. Other similar extension        points exist, see Microsoft.WebDeployment.targets.     <Target Name="BeforeMerge">   </Target>   <Target Name="AfterMerge">   </Target>     <Target Name="BeforeBuild">      </Target>       –>   <Target Name="AfterBuild">     <!– WebConfigReplacement requires the Config files. Remove after build. –>     <RemoveDir Directories="$(OutputPath)\Config" />   </Target> That’s it for setup.  Save the project file, flip the solution to Release Mode and build.  If there’s an issue, consult the Output window for details.  If all went well, you will find your deployment artifacts in your Web Deployment Project folder like so. Both the code source and published application will be there. Inside the Release folder you will find your “published files” and you’ll notice the Config folder is no where to be found.  In the Source folder, all project files are found with the exception of the items which were excluded from the build. I’ll wrap up this tutorial by calling out a little Web Deployment pet peeve of mine: there doesn’t appear to be a way to add an existing web deployment project to a solution.  The best I can come up with is create a new web deployment project and then copy and paste the contents of the existing project file into the new project file.  It’s not a big deal but it bugs me. Download the Solution

    Read the article

  • Oracle collaborates with leading IT vendors on Cloud Management Standards

    - by Anand Akela
    During the last couple of days, two key specifications for cloud management standards have been announced. Oracle collaborated with leading technology vendors from the IT industry on both of these cloud management specifications. One of the specifications focuses "Infrastructure as a Service" ( IaaS )  cloud service model , while the other specification announced today focuses on "Platform as a Service" ( PaaS ) cloud service model. Please see The NIST Definition of Cloud Computing to learn more about IaaS and PaaS . Earlier today Oracle , CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG   announced the Cloud Application Management for Platforms (CAMP) specification that will be submitted to Organization for the Advancement of Structured Information Standards (OASIS) for development of an industry standard, in an effort to help ensure interoperability for deploying and managing applications across cloud environments.  Typical PaaS architecture - Source : CAMP specification The CAMP specification defines the artifacts and APIs that need to be offered by a PaaS cloud to manage the building, running, administration, monitoring and patching of applications in the cloud. Its purpose is to enable interoperability among self-service interfaces to PaaS clouds by defining artifacts and formats that can be used with any conforming cloud and enable independent vendors to create tools and services that interact with any conforming cloud using the defined interfaces. Cloud vendors can use these interfaces to develop new PaaS offerings that will interact with independently developed tools and components. In a separate cloud standards announcement yesterday, the Distributed Management Task Force ( DMTF ), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, released the new Cloud Infrastructure Management Interface (CIMI) specification. Oracle collaborated with various technology vendors and industry organizations on this specification. CIMI standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers, enabling users to manage their cloud infrastructure use easily and without complexity. DMTF developed CIMI as a self-service interface for infrastructure clouds ( IaaS focus ) , allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management. Mark Carlson, Principal Cloud Strategist at Oracle provides more details about CAMP  and CIMI his blog . Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • How do I fix the problems I'm having with Xfce and Xubuntu?

    - by Noel Quiles
I booted into a Xubuntu session and, when the splash screen disappeared, it left behind artifacts on the desktop. I went into the Desktop settings to change the background, but the desktop stayed completely gray. Not only that, GTK isn't theming any of my windows, nor can I change GTK themes in the settings manager. I recently installed GTK+ 2.x alongside GTK+ 3 to work around a compatibility issue with xfce-battery-plugin, which also refuses to show anything but 0%. What is going on?

    Read the article

  • git, maven and jenkins - versioning, dev and release builds workflow

    - by varesa
What is the preferred way to do the following with git, Maven and Jenkins? I am developing an application for which I would like to maintain "dev" and "release" branches, and I would like Jenkins to build both. The release artifacts would have versions like 1.5.2, while the dev builds would just be 0.0.1-SNAPSHOTs. I would like to avoid having two different pom.xml files. I looked into profiles, but they don't seem to be able to change artifact versions. One option I considered is adding a 'qualifier' to the test builds. Of course I could also just rename the output file, because the actual artifact information is not important here, since the app is a standalone one. What would be the preferred way of doing this? Or how would you do this?

    Read the article

  • ArchBeat Link-o-Rama for 2012-07-11

    - by Bob Rhubart
Is the future of retail showrooming? | GigaOm "The digital shopper isn't just digital and she expects to be served seamlessly across all channels, physical and digital," reports GigaOm. Twenty years into the Internet era and the changes just keep coming. Solution architects take note... Agile Bureaucracy: When Practices become Principles | Jim Highsmith.com "Principles and values are a critical part of keeping individuals in organizations aligned and engaged," says Agile guru Jim Highsmith, "but the more pseudo-principles are piled on top of principles, the less and less organizations are able to adapt." Oracle Fusion Applications 11g Basics | Michel Schildmeijer "We are trying to build up an Oracle Fusion Apps environment on an Exalogic system, though still on bare metal, because officially there still is no Oracle VM available yet on Exalogic," says Michel Schildmeijer, an Oracle Fusion Middleware Architect at Qualogy. "It is a bit of a challenge, but getting to know the basics and which components the install, build and configure phase use, might bring you a step further on the way." Process Centric Banking: Loan Origination Solution | Manish Palaparthy This interesting, detailed post by Manish Palaparthy explains the process behind the execution of a proof-of-concept for a Fusion Middleware-based loan-origination solution for a bank. The solution incorporates Oracle BPM Suite, WebCenter, and ADF technologies in a SOA infrastructure. How eBay and Facebook are Cleaning Up Data Centers | Amy Gallo - HBR The Cloud has needs! As reported by Amy Gallo in an article in the Harvard Business Review, "The electricity demand of data centers and the telecommunications network is rivaling that of most nations. If the cloud were itself a country, it would rank fifth in the world on energy demand behind the U.S., China, Russia, and Japan." Do WebLogic configuration from ANT | Edwin Biemond "With WebLogic WLST you can script the creation of all your Application DataSources or SOA Integration artifacts (like JMS, etc.)," says Oracle ACE Edwin Biemond. "This is necessary if your domain contains many WebLogic artifacts or you have more than one WebLogic environment. If so, you want to script this so you can configure a new WebLogic domain in minutes and repeat this task with the same result every time." Oracle Special-Edition E-Book: Cloud Architecture for Dummies Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale with Cloud Architecture for Dummies, a free Oracle e-book. (Registration required.) Thought for the Day "One of the best things to come out of the home computer revolution could be the general and widespread understanding of how severely limited logic really is." — Frank Herbert Source: SoftwareQuotes.com

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product in Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts like configuration, tests or monitoring data in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design time artifacts generated during the development like configuration, application description or topology (a high level view of the components that made up a system), logging information or binaries. It took us several months to give a form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of “TeleSharp”. Telesharp provides in a nutshell the following features, 1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact each other. For example, you can tell that the CRM system is made up of a couple of WCF services and a ASP.NET MVC front end. 2. Centralize configuration for your applications and components.  You can import existing .NET configuration sections into the repository and associate them to the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of powershell commands for automating the configuration deployment. 3. Browse all the assemblies and types remotely in your application servers in a web browser using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications. 4. Centralize logging and exception management into the repository. You get different reports and a pivot viewer experience for browsing all the different logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.  The central repository itself is implemented as a set of OData services that any application can easily consume using regular Http. You can read more details in this introductory post If you think this product can be a good fit in your organization, you can request a trial version in our Tellago Studios website.
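Because the repository is exposed as plain OData over HTTP, any .NET application can read it with nothing more than an HTTP request. Here is a minimal sketch, assuming a hypothetical service address and a hypothetical Components feed; the actual URL and entity set names come from your own TeleSharp installation.

```csharp
using System;
using System.Net;

// Minimal sketch: read one feed from the TeleSharp OData repository over plain HTTP.
// The service URL and the "Components" entity set are assumptions for illustration;
// use the address and feeds exposed by your own installation.
class TeleSharpRepositoryClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Ask for JSON; omitting this header typically returns the ATOM representation.
            client.Headers[HttpRequestHeader.Accept] = "application/json";

            string url = "http://localhost/TeleSharp/Repository.svc/Components";
            string payload = client.DownloadString(url);

            Console.WriteLine(payload); // raw OData response describing the application components
        }
    }
}
```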

    Read the article

  • Best way to convert episodic DVDs for Windows Media Center?

    - by Roger Lipscombe
I'm archiving my DVD collection. My goal is to be able to play them back in Windows Media Center. For feature-length DVDs, I'm using AnyDVD and CloneDVD, which is working well. For playing back TV shows (and other episodic content), I'm using Media Browser, which doesn't support a VIDEO_TS folder per episode; it expects the shows to be broken up into one file per episode (e.g. "Willo the Wisp - S01E12.avi"). For this, I'm attempting to use Handbrake, which is working pretty well for extracting the episodes from the DVD (or an already-ripped VIDEO_TS folder). The problem I have is that the default x264 encoder over-compresses the resulting video stream, which results in hideous artifacts in animated shows. The aforementioned Willo the Wisp is an especially bad example, because the original DVD is particularly "noisy". If I switch to the ffmpeg encoder, the artifacts are gone in Windows Media Player, but I can't get the resulting files to play back in Windows Media Center: I see the first frame, and then there's an error message. I've installed the CCCP codec collection, but it doesn't seem to have made any difference. So: what's the best way to convert VIDEO_TS to individual episode files for playback in Windows Media Center?

    Read the article

  • DXVA and ATI.... what am I missing?

    - by Shiki
ATI Radeon HD3650; ffdshow (the one that comes with CCCP 2010-10 and the latest from the Free-codecs site); Catalyst 10.12; latest DirectX; Windows 7 HP x64. If I just try to play the file, the player (mpc-hc) won't use the acceleration. If I disable the checks, I get artifacts. It used to work perfectly with both my Intel and ATI graphics (Lenovo ThinkPad T500, switchable graphics); this only appeared recently. What should I do? What am I missing? If possible, I don't really want to downgrade the drivers (waaa ATI drivers s.ck ... sorry but.. honestly), because the older one has a buggy DisplayPort driver, so my external display gets a totally wrong resolution. Info about the media file: Video: Format : AVC Format/Info : Advanced Video Codec Format profile : [email protected] Codec ID : V_MPEG4/ISO/AVC Writing library : x264 core 88 r1462 7fcffde Encoding settings : cabac=1 / ref=8 / deblock=1:0:0 / analyse=0x3:0x133 / me=umh / subme=8 / psy=1 / psy_rd=0.80:0.20 / mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-3 / threads=3 / sliced_threads=0 / nr=0 / decimate=1 / mbaff=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / wpredb=1 / wpredp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=16.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00 The sample file I'm testing at the moment: [Coalgirls]_Kiddy_Girl-and_01_(1920x1080_Blu-Ray_FLAC)_[91428679].mkv (I tried many x264 files, no use.) Artifacts

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
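For readers who want to see what those schema directives amount to, here is a rough C# equivalent of the transformations described above: two-digit years pivoted so that years after 01 fall in the 20th century, a termination timestamp that carries hours and minutes, and a salary string stripped of its currency symbol and grouping commas. This is only an illustration of the parsing rules; it is not part of expressor Studio or CSVexpress, which apply these rules declaratively without code.

```csharp
using System;
using System.Globalization;

// Rough C# equivalent of the schema directives described above, for one sample record.
class CsvFieldParsingSketch
{
    static void Main()
    {
        // Two-digit years: 02-99 map to 19xx, 00-01 map to 20xx (pivot year 2001).
        var culture = (CultureInfo)CultureInfo.InvariantCulture.Clone();
        var calendar = new GregorianCalendar { TwoDigitYearMax = 2001 };
        culture.DateTimeFormat.Calendar = calendar;

        DateTime start = DateTime.ParseExact("13-JAN-93", "dd-MMM-yy", culture);
        DateTime end   = DateTime.ParseExact("24-JUL-98 17:00", "dd-MMM-yy HH:mm", culture);

        // Currency: drop the dollar sign and the digit-grouping commas.
        decimal salary = decimal.Parse("$85,000", NumberStyles.Currency,
                                       CultureInfo.GetCultureInfo("en-US"));

        Console.WriteLine("{0:yyyy-MM-dd} {1:yyyy-MM-dd HH:mm} {2}", start, end, salary);
        // Prints: 1993-01-13 1998-07-24 17:00 85000
    }
}
```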

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build Web Parts, Master Pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages and adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has: 1. a Master Page, 2. a Page Layout, 3. Page Web Parts, 4. Site Pages. Almost all of it is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we do this during development; in fact, we build these pages incrementally, testing our web parts as we go. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and, voila, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it looks like this: It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages: What you are seeing is a complete solution to add a Master Page to a site collection, which contains: 1. a Master Page module which contains the Master Page and Page Layout; 2. the Footer module to add the Footer Web Part; 3. miscellaneous modules to add images, jQuery, CSS and a subsite page; 4. three features and two feature event receivers: a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in; b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in; c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection. I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion. I have the first two requirements completed. I now need to add the page web parts and then build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which for simplicity does not use a SharePoint custom list: the Weather Web Part; and another downloaded from MSDN, which is a SharePoint Custom Calendar Web Part, to which I had to add some functionality, using jQuery, to make the events color-coded beyond the built-in 10 overlays!
Here is the solution with the added projects:     Here is a screen shot of the Weather Web Part Deployed:   Here is a screen shot of the Site Calendar with JQuery:     Okay, Now we get to the final item:  To create Publishing pages.   We need to add a feature receiver to the DJGreenMaster project I will name it DJSitePages and also add a Event Receiver:       We will build the page at the site collection level and all of the code necessary will be contained in the event receiver.   Added a reference to the Microsoft.SharePoint.Publishing.dll contained in the ISAPI folder of the 14 Hive.   First we will add some static methods from which we will call  in our Event Receiver:   1: private static void checkOut(string pagename, PublishingPage p) 2: { 3: if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase)) 4: { 5: 6: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None) 7: { 8: p.CheckOut(); 9: } 10:   11: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online) 12: { 13: p.CheckIn("initial"); 14: p.CheckOut(); 15: } 16: } 17: } 18: private static void checkin(PublishingPage p,PublishingWeb pw) 19: { 20: SPFile publishFile = p.ListItem.File; 21:   22: if (publishFile.CheckOutType != SPFile.SPCheckOutType.None) 23: { 24:   25: publishFile.CheckIn( 26:   27: "CheckedIn"); 28:   29: publishFile.Publish( 30:   31: "published"); 32: } 33: // In case of content approval, approve the file need to add 34: //pulishing site 35: if (pw.PagesList.EnableModeration) 36: { 37: publishFile.Approve("Initial"); 38: } 39: publishFile.Update(); 40: }   In a Publishing Site, CheckIn and CheckOut  are required when dealing with pages in a publishing site.  Okay lets look at the Feature Activated Event Receiver: 1: public override void FeatureActivated(SPFeatureReceiverProperties properties) 2: { 3:   4:   5:   6: object oParent = properties.Feature.Parent; 7:   8:   9:   10: if (properties.Feature.Parent is SPWeb) 11: { 12:   13: currentWeb = (SPWeb)oParent; 14:   15: currentSite = currentWeb.Site; 16:   17: } 18:   19: else 20: { 21:   22: currentSite = (SPSite)oParent; 23:   24: currentWeb = currentSite.RootWeb; 25:   26: } 27: 28:   29: //create the publishing pages 30: CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx","Home"); 31: //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx","Dummy"); 32: }     Basically we are calling the method Create Publishing Page with parameters:  Current Web, Name of the Page, The Page Layout, Title of the page.  
Let’s look at the Create Publishing Page method:   1:   2: private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title) 3: { 4: PublishingSite pubSiteCollection = new PublishingSite(site.Site); 5: PublishingWeb pubSite = null; 6: if (pubSiteCollection != null) 7: { 8: // Assign an object to the pubSite variable 9: if (PublishingWeb.IsPublishingWeb(site)) 10: { 11: pubSite = PublishingWeb.GetPublishingWeb(site); 12: } 13: } 14: // Search for the page layout for creating the new page 15: PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName); 16: // Check or the Page Layout could be found in the collection 17: // if not (== null, return because the page has to be based on 18: // an excisting Page Layout 19: if (currentPageLayout == null) 20: { 21: return; 22: } 23:   24: 25: PublishingPageCollection pages = pubSite.GetPublishingPages(); 26: foreach (PublishingPage p in pages) 27: { 28: //The page allready exists 29: if ((p.Name == pageName)) return; 30:   31: } 32: 33:   34:   35: PublishingPage newPage = pages.Add(pageName, currentPageLayout); 36: newPage.Description = pageName.Replace(".aspx", ""); 37: // Here you can set some properties like: 38: newPage.IncludeInCurrentNavigation = true; 39: newPage.IncludeInGlobalNavigation = true; 40: newPage.Title = title; 41: 42: 43:   44:   45: 46:   47: //build the page 48:   49: 50: switch (pageName) 51: { 52: case "Homer.aspx": 53: checkOut("Courier.aspx", newPage); 54: BuildHomePage(site, newPage); 55: break; 56:   57:   58: default: 59: break; 60: } 61: // newPage.Update(); 62: //Now we can checkin the newly created page to the “pages” library 63: checkin(newPage, pubSite); 64: 65: 66: }     The narrative in what is going on here is: 1.  We need to find out if we are dealing with a Publishing Web.  2.  Get the Page Layout 3.  Create the Page in the pages list. 4.  Based on the page name we build that page.  (Here is where we can add all the methods to build multiple pages.) In the switch we call Build Home Page where all the work is done to add the web parts.  Prior to adding the web parts we need to add references to the two web part projects in the solution. using WeatherWebPart.WeatherWebPart; using CSSharePointCustomCalendar.CustomCalendarWebPart;   We can then reference them in the Build Home Page method.   
Let’s look at Build Home Page: 1:   2: private static void BuildHomePage(SPWeb web, PublishingPage pubPage) 3: { 4: // build the pages 5: // Get the web part manager for each page and do the same code as below (copy and paste, change to the web parts for the page) 6: // Part Description 7: SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx", System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared); 8: WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart() { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" }; 9: //Dictionary<string, string> wwpDic= new Dictionary<string, string>(); 10: //wwpDic.Add("AreaCode", "2504627"); 11: //setWebPartProperties(wwp, "WeatherWebPart", wwpDic); 12:   13: // Add the web part to a pagelayout Web Part Zone 14: mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1); 15:   16: CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart() { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName="CorporateCalendar" }; 17:   18: mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1); 19:   20:   21: pubPage.Update(); 22:   23: } Here is what we are doing: 1.  We got  a reference to the SharePoint Limited Web Part Manager and linked/referenced Home.aspx  2.  Instantiated the a new Weather Web Part and used the Manager to add it to the page in a web part zone identified by ID,  thus the need for a Page Layout where the developer knows the ID’s. 3.  Instantiated the Calendar Web Part and used the Manager to add it to the page. 4. We the called the Publishing Page update method. 5.  Lastly, the Create Publishing Page method checks in the page just created.   Here is a screen shot of the page right after a deploy!       Okay!  I know we could make a home page look much better!  However, I built this whole Integrated solution in less than a day with the caveat that the Green Master was already built!  So what am I saying?  Build you web parts, master pages, etc.  At the very end of the engagement build the pages.  The client will be very happy!  Here is the code for this solution Code

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >