Search Results

Search found 9349 results on 374 pages for 'turing complete'.


  • Oracle OpenWorld Highlights

    - by Doug Reid
    We are in the final days of Oracle OpenWorld 2012, and the data integration team has been hard at work giving sessions, meeting customers, demonstrating products, and conducting hands-on labs. It has been a great conference, but the best part is meeting our customers and learning about all the great implementations of our products. Wednesday was the last day that the exhibition hall was open, and attendees were getting in their final opportunities to see our products and meet with the product management team. Two hours before the close of the hall, people lined up to learn about GoldenGate 11gR2, Monitor, Adapters, Veridata, and all the different use cases. Here's a picture of Sjaak Vossepoel, our DIS Sales Consulting Manager for EMEA, speaking to a potential customer about the options for using Oracle GoldenGate for heterogeneous data replication. Over the last two days, the GoldenGate team ran two labs: Introduction to Oracle GoldenGate Veridata and Deep Dive into Oracle GoldenGate. Both labs were completely booked out, and unfortunately we had to turn people away. But all of our labs were recently recorded, so if you were not able to get into a lab, or did not have enough time to complete yours, visit youtube.com/oraclegoldengate to see complete recordings of the labs we used at OpenWorld, plus more. Here are a couple of pictures from the Deep Dive into Oracle GoldenGate, led by Chris Lawless from the Product Management team. Thanks to the GoldenGate Hands-on Lab team for putting on a great session! We will post more information about where you can find additional OpenWorld details as they become available.


  • Sand, Sun and Partner Fun

    - by Kristin Rose
    Last Memorial Day weekend marked the unofficial start of summer and a time to remember those who fight for our freedom each day. It was a weekend complete with BBQs, beach time, and of course plenty of sunshine. Here at OPN, the start of summer marks an exciting time, no doubt; a time when we finalize and fine-tune some Oracle OpenWorld partner events, like this year's Oracle PartnerNetwork Exchange @ OpenWorld program! Oracle is launching the new Oracle PartnerNetwork Exchange program to offer a new look and feel for partners attending OpenWorld. Get your bathing suits ready, because this week-long event will feature new deep-dive content that will have you swimming in networking opportunities, including an Oracle Partner keynote with Oracle executives like Judson Althoff, SVP of WWA&C. For those who have already registered, be sure to bring your sunscreen as you take part in some exclusive, first-degree exposure to Oracle's top experts, providing a unique and unified partner experience. For those partners wanting to make a real splash, don't forget that you will be able to complete OPN Certification testing onsite at Oracle OpenWorld. To learn more about the many opportunities and ways to engage with Oracle and other partners, watch the video below, hosted by Lydia Smyers, GVP of WWA&C.

    Wishing you sun and fun,
    The OPN Communications Team


  • On Screen Coin Animation

    - by Siddharth
    I am working on a side-scrolling skater game. I want to perform a coin animation such that, as the player collects a coin, it moves up the screen and attaches to the currency sprite. My main character and the coins live in the game scene, and the currency sprite lives in the HUD layer. This situation creates a problem for me. I cannot apply a modifier to the coin directly, because it is a side-scrolling game, so depending on the main character's speed the coin is collected at a different on-screen position. I have verified that. So I have to spawn another coin in the HUD layer, at the same on-screen position the game-layer coin has, and move it upward from there. But I am not able to get its y position correct, though I can get the x position correctly. The main character often goes below the origin, so I get negative values much of the time. I also tried the following code:

    ```java
    float[] position = GameHUD.this
            .convertSceneCoordinatesToLocalCoordinates(
                    GameManager.getInstance().getCoinX(),
                    GameManager.getInstance().getCoinY());
    ```

    But I am getting the same coordinates that I pass in; no difference at all. So please, can someone provide me guidance on this? I am close to completing my game. EDIT: The game layer and the HUD layer are totally separate. The actual coin that the player collects is in the game layer, and at the same position I want to spawn another coin in the HUD layer to perform the animation. Generating the coin in the HUD layer is the recommended approach, because only through that can I achieve my goal.


  • b2b SOA Suite partner training November 13th & 14th Bucharest

    - by JuergenKress
    Description: Oracle SOA Suite 11g is a complete infrastructure for building, deploying, and managing composite applications and business processes. For an enterprise to extend business processes to its trading partners, it requires a platform that addresses compliance, security, visibility, scalability, and standards. The Oracle SOA Suite (Oracle B2B) is this platform. Oracle B2B, the "Edge Component", enables an enterprise to define, configure, manage, and monitor the exchange of information with its trading partners. Oracle SOA Suite, the "B2B Infrastructure", enables business process orchestration, administration, monitoring, auditing, inter-enterprise connectivity, governance and security. Together they provide a complete end-to-end business process integration platform.

    Date: 13-14 November
    Location: Oracle Room MtgRm15_6, Bucharest - Nusco Tower, Romania
    Time: Day 1: 08:30 - 17:30; Day 2: 09:00 - 17:30
    Facilitator: Krishnaprem Bhatia
    Registration: Please contact us directly - please note that there are limited seats and confirmation will be on a first come, first served basis
    Travel: Each delegate is responsible for his/her own travel arrangements. Please obtain approval from your manager first.
    Contact: For logistic questions, please contact Nadja Vogl

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: b2b, SOA Suite, training, education, b2b training, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress


  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training: Trusted Data for Your Enterprise Applications. Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training. After completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June - Fundamentals for Demoing (Profile, Audit, Transform and More):
    · Profiling
    · Auditing
    · Transforming
    · Writing and exporting data
    · Jobs and scheduling
    · Publishing, packaging and copying EDQ processes
    · Introduction to the Customer Data Extension Pack
    · Realtime processing via Web Services
    · The Server Console
    · Run Profiles
    · Data Interfaces
    · Sampling
    · Publishing metrics to the Dashboard
    · Users and security

    18th June - Matching:
    · Matching overview
    · Basic matching configuration
    · Matching rule hierarchies
    · Clustering
    · Merging
    · Reviewing possible matches
    · Outputting match data
    · Case study

    19th June - Address Verification and Parsing:
    · Address Verification overview
    · Configuration
    · Accuracy flags
    · Parsing overview
    · Phrase profiling
    · Tailoring a CDEP parser
    · Base tokenization
    · Classification
    · Reclassification
    · Selection
    · Resolution

    Register Here. Don't miss this FREE event. Space is limited.
    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6. – 19.6. 2014, 09:00 – 17:30


  • How can I be certain that my code is flawless? [duplicate]

    - by David
    This question already has an answer here: Theoretically bug-free programs (5 answers). I have just completed an exercise from my textbook which asked me to write a program to check whether a number is prime. I have tested it and it seems to work fine, but how can I be certain that it will work for every prime number?

    ```java
    public boolean isPrime(int n) {
        int divisor = 2;
        int limit = n - 1;
        if (n == 2) {
            return true;
        } else {
            int mod = 0;
            while (divisor <= limit) {
                mod = n % divisor;
                if (mod == 0) {
                    return false;
                }
                divisor++;
            }
            if (mod > 0) {
                return true;
            }
        }
        return false;
    }
    ```

    Note that this question is not a duplicate of Theoretically Bug Free Programs, because that question asks whether one can write bug-free programs in the face of limitative results such as Turing's proof of the incomputability of halting, Rice's theorem, and Gödel's incompleteness theorems. This question asks how a program can be shown to be bug free.
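    One practical way to build confidence, short of a formal proof, is to test the method exhaustively against a trusted reference over a large range. Below is a minimal, hypothetical test harness; it embeds the isPrime method from the question unchanged and uses java.math.BigInteger.isProbablePrime as the reference (probabilistic, but with error probability at most 2^-50 at this certainty level). Agreement over a finite range is evidence, not proof, but it catches most boundary and off-by-one bugs.

    ```java
    import java.math.BigInteger;

    public class IsPrimeTest {

        // The method from the question, unchanged.
        static boolean isPrime(int n) {
            int divisor = 2;
            int limit = n - 1;
            if (n == 2) {
                return true;
            } else {
                int mod = 0;
                while (divisor <= limit) {
                    mod = n % divisor;
                    if (mod == 0) {
                        return false;
                    }
                    divisor++;
                }
                if (mod > 0) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            // Compare against BigInteger.isProbablePrime over 2..100000.
            for (int n = 2; n <= 100000; n++) {
                boolean expected = BigInteger.valueOf(n).isProbablePrime(50);
                if (isPrime(n) != expected) {
                    System.out.println("Mismatch at n = " + n);
                    return;
                }
            }
            System.out.println("isPrime agrees with the reference for 2..100000");
        }
    }
    ```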


  • From the Coalface - 3 - Work as hard as you can to be as lazy as you can!

    - by TATWORTH
    The saga of the Change Log

    A recent conversation reminded me of the need for change logs within a database, to record when various change scripts were run. Creating the required table is simple. A typical table consists of:

    Id - identity Integer primary key
    ChangeFileName - NVARCHAR(128) to hold the name of the file run
    DateAdded - DateTime, non-null, with a default value of getutcdate()
    Purpose - NVARCHAR(128)
    Rerunnable - Bit, non-null, default 0

    By good design of the table, only two data values normally need to be supplied. Two stored procedures, one to insert data and one to list the log in reverse sequence, complete the database essentials. The complete implementation can be found in the CommonData solution at http://CommonData.CodePlex.Com (a sketch of such a table and its procedures appears at the end of this post). By including a call to the add Change Log stored procedure, each script can log its name and purpose for posterity. The scripts that were applied to, say, the UAT system, and their sequence of application, can then be readily identified for running on the Live system.

    Formatting XML

    XML is often produced as one continuous string with no embedded CR/LF. To get it into human-readable form, open it in Visual Studio, swap to another tab and back, and click the Format Document button. The XML will then be nicely formatted!
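    As an illustration only, here is a minimal T-SQL sketch of the table and procedures described above. The object names are hypothetical; the definitive implementation lives in the CommonData solution linked above.

    ```sql
    -- Hypothetical sketch of the change log table described above.
    CREATE TABLE dbo.ChangeLog
    (
        Id             INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        ChangeFileName NVARCHAR(128)     NOT NULL,
        DateAdded      DATETIME          NOT NULL DEFAULT (GETUTCDATE()),
        Purpose        NVARCHAR(128)     NULL,
        Rerunnable     BIT               NOT NULL DEFAULT (0)
    );
    GO

    -- Only the two interesting values need to be supplied; the rest default.
    CREATE PROCEDURE dbo.AddChangeLog
        @ChangeFileName NVARCHAR(128),
        @Purpose        NVARCHAR(128)
    AS
    BEGIN
        INSERT INTO dbo.ChangeLog (ChangeFileName, Purpose)
        VALUES (@ChangeFileName, @Purpose);
    END
    GO

    -- List the log in reverse sequence, most recent change first.
    CREATE PROCEDURE dbo.ListChangeLog
    AS
    BEGIN
        SELECT Id, ChangeFileName, DateAdded, Purpose, Rerunnable
        FROM dbo.ChangeLog
        ORDER BY Id DESC;
    END
    GO
    ```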


  • Garage Sale Code – Everything must go!

    - by mbcrump
    Garage Sale Code

    The term "Garage Sale Code" came from a post by Scott Hanselman. He defines Garage Sale Code as:

    Complete – It's a whole library or application.
    Concise – It does one discrete thing.
    Clear – It'll work when you get it.
    Cheap – It's free or < 25 cents.
    (Quite Possibly) Crap – As with a Garage Sale, you'll never know until you get it home if it's useless.

    With the code I've posted here, you'll get all 5 of those things (with an emphasis on crap). All of the projects listed below are available on CodePlex with full source code and executables (for those that just want to run them). I plan on keeping this page updated when I complete projects that benefit the community. You can always find this page again by swinging by http://garagesale.michaelcrump.net or you can keep on driving and find another sale.

    WPF Alphabet (C#, WPF) - WPF Alphabet is an application that I created to help my child learn the alphabet. It displays each letter and pronounces it using speech synthesis. It was developed using WPF and C# in about 3 hours (so it's kinda rough).

    Windows 7 Playlist Generator (C#, WinForms) - This program allows you to quickly create wvx video playlists for Windows Media Center. This functionality is not included in WMC and is useful if you want to play video files back to back without selecting the next file. It is also useful to queue up video files to keep children occupied!

    Windows 7 Automatic Playlist Creator (C#, WinForms, Console) - This application is designed to create W7MC playlists automatically whenever you want. You can select if you want the playlist sorted Alphabetical, by Creation Date, or Random.

    Generator Twitter Message for Live Writer (C#, LiveWriter API) - This is a plug-in for Windows Live Writer that generates a twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post.


  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle, it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it:

    Best Practices for Building a Virtualized SPARC Computing Environment

    Inside you'll find recommendations for how and when to leverage technologies like:

    - SPARC T4
    - OVM for SPARC hypervisor (version 2.2 and newer)
    - Solaris 11
    - Ops Center 12c
    - ZFS Storage Appliance
    - Oracle network switches

    Based on following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:

    - Easy, GUI-based provisioning of new VMs
    - Automated HA failover in the event of physical server failures
    - Automatic load balancing across a cluster of VM hosts
    - Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!


  • Partner Spotlight: Compasso

    - by Joe Diemer
    EVERY MONDAY at 2 PM Eastern Time (USA)

    Join Compasso, one of Oracle's Enterprise Manager partners that is Specialized in Application Quality Management, for a series of presentations focused on Oracle end-to-end performance management. The objective is to provide a comprehensive overview of Oracle's Enterprise Manager 12c software. Attendees will learn how Oracle can provide a complete end-to-end management solution, not only for Oracle Databases, but also for other aspects of Oracle technology, such as Middleware and Oracle applications. Key areas include:

    1. Next-Generation Management Framework, which covers:
    - Better performance and scalability
    - Modular, extensible architecture
    - Easier to manage and diagnose
    - Enhanced security
    - Web 2.0 UI

    2. Enhanced Application-to-Disk Management:
    - End-to-end application performance management
    - Fusion Applications management
    - Exadata and Exalogic management

    3. Complete Lifecycle Management for Enterprise Private Cloud:
    - Self-service provisioning
    - Policy-based resource and workload management
    - Chargeback

    There are two ways you can sign up:
    1. Call Nicole Wakefield of Compasso at +1 (971) 245-5042
    2. Email [email protected] with a Subject Line of "Monday Enterprise Manager Presentations"

    WebEx conference details will be provided with your confirmation. For more information about partner opportunities with Enterprise Manager, including becoming Specialized, click here. For more information about Compasso, click here.


  • Iterative and Incremental Principle Series 5: Conclusion

    - by llowitz
    Thank you for joining me for the final segment in the Iterative and Incremental series. During yesterday's segment, I discussed Iteration Planning, and specifically how I planned my daily exercise (iteration) each morning by assessing multiple factors, while following my overall implementation plan. As I mentioned in yesterday's blog, regardless of the type of exercise or how many increment sets I decide to complete each day, I apply the 6-minute interval sets and a timebox approach. When the 6 minutes are up, I stop the interval, even if I have more to give, saving the extra energy to apply to my next interval set. Timeboxes are used to manage iterations. Once the pre-determined iteration duration is reached (whether it is 2 weeks, 6 weeks, or somewhere in between), the iteration is complete. Iteration group items (requirements) not fully addressed, in relation to the iteration goal, are addressed in the next iteration. This approach helps eliminate the "rolling deadline" and better allows the project manager to assess the project progress earlier and more frequently than in traditional approaches. Not only do smaller, more frequent milestones allow project managers to better assess potential schedule risks and slips, but process improvement is encouraged. Even in my simple example, I learned, after a few interval sets, not to sprint uphill! Now I plan my route more efficiently to ensure that I sprint on a level surface, to reduce the risk of not completing my increment. Project managers have often told me that they used an iterative and incremental approach long before OUM. An effective project manager naturally organizes project work consistent with this principle, but a key benefit of OUM is that it formalizes this approach, so it happens by design rather than by chance. I hope this series has encouraged you to think about additional ways you can incorporate the iterative and incremental principle into your daily and project life. I further hope that you will share your thoughts and experiences with the rest of us.


  • How to handle notifications to several partial views of the same model?

    - by Seki
    I am working on refactoring an old simulation of a Turing machine. The application uses a class that contains the state and the logic of program execution, and several panels to display the tape representation and show the state, messages, and the GUI controls (start, stop, program listing, ...). I would like to refactor it using the MVC architecture, which was not used originally: the Frame is the only way to get access to the different panels, and there is also strong coupling between the "engine" class and the GUI, with updates in the style of frame.displayPanel.state.setText("halted"); or frame.outputPanel.messages.append("some thing"); It looks to me like I should put the state-related code into an observable model class and make the different panels observers. My problem is that the Java Observable class only provides a global notification to the Observers, while I would prefer not to refresh every Observer every time, but only when the part it specifically observes has changed. I am thinking of implementing several vectors of listeners myself (one for the state / position, one for the output messages, ...), but I feel like I am reinventing the wheel. I also thought about adding some flags that the observers could check, like isNewMessageAvailable(), hasTapeMoved(), etc., but that also sounds like approximative design. BTW, is it OK to keep the fetch / execute loop in the model, or should I move it somewhere else? We can think in a theoretical, ideal way, as I am completely revamping this small application.
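    For what it's worth, one common alternative to java.util.Observable is to define one small listener interface per aspect and let the model keep a separate listener list per aspect, so each panel subscribes only to what it displays. A minimal, hypothetical sketch (the names StateListener, MessageListener, and MachineModel are invented for illustration):

    ```java
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // One listener interface per observable aspect of the machine.
    interface StateListener {
        void stateChanged(String newState);
    }

    interface MessageListener {
        void messageAppended(String message);
    }

    // The model keeps a separate listener list per aspect, so a notification
    // only reaches the panels that actually care about that aspect.
    class MachineModel {
        private final List<StateListener> stateListeners = new CopyOnWriteArrayList<>();
        private final List<MessageListener> messageListeners = new CopyOnWriteArrayList<>();

        public void addStateListener(StateListener l) { stateListeners.add(l); }
        public void addMessageListener(MessageListener l) { messageListeners.add(l); }

        // Called from the fetch / execute loop, e.g. when the machine halts.
        void setState(String state) {
            for (StateListener l : stateListeners) {
                l.stateChanged(state);
            }
        }

        void appendMessage(String message) {
            for (MessageListener l : messageListeners) {
                l.messageAppended(message);
            }
        }
    }
    ```

    The display panel would then register only as a StateListener and the output panel only as a MessageListener, which removes both the global refresh and the frame.displayPanel... style of coupling.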


  • For what purposes can I use C++ to increase my skills?

    - by user824981
    I want to learn new things. Initially I was a PHP programmer. Then I thought that was not enough, so I started learning the Java stack. It took me 3 months to learn Java, J2EE, Spring, Hibernate, Spring Security, Spring Roo, design patterns like MVC, and things like AOP and DI. I never knew any of that before, but I got the idea of what J2EE is. After 3 months, I had just made a simple page with a registration form integrated with Spring Security. I wanted to make one complete project in it, but that was too much for me, and I didn't want to spend more time on it, since then I would need to host it as well, so I left that. Then I started learning Python, made a few sysadmin scripts, then Django, and now I am finishing a complete web app in Python. Now I want to learn C++, but before that I need to find out what I can do with it. I know Python is very useful for me because I have my own servers, so I can write scripts and websites; Python is good for me. But I am confused about which areas C++ can help me in. I don't want to end up as I have with Java, where I have either big projects or nothing for day-to-day use.


  • Flash AS3 Loader Problem

    - by Hadar
    Hi, I am trying to load a few external JPEGs in AS3. It works fine locally in Flash, but it doesn't work at all on my server. My app also loads a YouTube video simultaneously.

    ```actionscript
    function drawResult(index, videoID, song_title, thumbnail:String = null) {
        var theClip:resultRowClip = new resultRowClip();
        _clip.addChild(theClip);
        myArray[index] = new Array(videoID, theClip);
        theClip.y = 0 + (43 * (index - 1));
        theClip.rowText.text = song_title;
        theClip.rowBack.visible = false;
        if (thumbnail != "") {
            theClip.tHolder.visible = true;
            loadImage(thumbnail, index);
        }
    }

    function loadImage(url:String, index):void {
        //this.myClip.myText.text += "load image";
        // Set properties on my Loader object
        var imageLoader:Loader = new Loader();
        imageLoader.load(new URLRequest(url));
        imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(evt:Event) { imageLoaded(evt, index); });
    }

    function imageLoaded(evt, id):void {
        //this.myClip.myText.text += "id : evt : " + evt.status;
        // Load Image
        var image:Bitmap = new Bitmap(evt.target.content.bitmapData);
        myArray[id][1].tHolder.addChild(image);
        myArray[id][1].tHolder.width = myArray[id][1].tHolder.width * 0.35;
        myArray[id][1].tHolder.height = myArray[id][1].tHolder.height * 0.35;
    }
    ```

    Does anyone know what the problem is? ** I added two event listeners for IO errors:

    ```actionscript
    imageLoader.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
    imageLoader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
    ```

    This is the function for handling the errors:

    ```actionscript
    private function ioErrorHandler(event:IOErrorEvent):void {
        this.myClip.myText.text += ("ioErrorHandler: " + event);
    }
    ```

    Anyway, I got no errors. I also tried to move the listeners before imageLoader.load, but it's still the same: no errors and no data loaded. I changed my code to patrikS's suggestion:

    ```actionscript
    function loadImage(url:String, index):void {
        //this.myClip.myText.text += "load image";
        // Set properties on my Loader object
        //if (index != 1) return;
        var imageLoader:Loader = new Loader();
        imageLoader.name = index.toString();
        //myArray[index][1].addChild(imageLoader);
        //imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, function (evt:Event){imageLoaded(evt,index)});
        imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, completeHandler);
        imageLoader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
        imageLoader.load(new URLRequest(url));
    }
    ```

    My current completeHandler function (thanks patrikS):

    ```actionscript
    private function completeHandler(evt:Event):void {
        //this.myClip.myText.text += "id : evt : " + evt.status;
        // Load Image
        trace("evt target loader name : " + evt.target.loader.name);
        evt.target.removeEventListener(Event.COMPLETE, completeHandler);
        var image:Bitmap = new Bitmap(evt.target.content.bitmapData);
        myArray[evt.target.loader.name][1].tHolder.addChild(image);
        myArray[evt.target.loader.name][1].tHolder.width = myArray[evt.target.loader.name][1].tHolder.width * 0.35;
        myArray[evt.target.loader.name][1].tHolder.height = myArray[evt.target.loader.name][1].tHolder.height * 0.35;
        //trace (hadar.y + "Y / X" + hadar.x);
    }
    ```

    It still works only in the Flash IDE and doesn't work in any browser...
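    A hedged note on one commonly cited cause of exactly this works-in-IDE, fails-in-browser symptom, assuming the thumbnails come from another domain: accessing the bitmapData of a cross-domain image is a Flash security sandbox violation unless the remote server grants permission via a crossdomain.xml policy file, and the check only bites in the browser, not in the local Flash IDE sandbox. A minimal sketch of requesting the policy-file check (LoaderContext's first constructor parameter is checkPolicyFile):

    ```actionscript
    import flash.display.Loader;
    import flash.net.URLRequest;
    import flash.system.LoaderContext;

    // Ask the player to fetch the remote server's crossdomain.xml before
    // granting pixel-level access to the loaded image. Without a permissive
    // policy file, new Bitmap(evt.target.content.bitmapData) throws a
    // SecurityError in the browser even though the load itself succeeds
    // (which would also explain seeing no IOErrorEvent).
    var context:LoaderContext = new LoaderContext(true); // checkPolicyFile = true
    var imageLoader:Loader = new Loader();
    imageLoader.load(new URLRequest(url), context); // url as in loadImage() above
    ```

    If the remote server does not serve a permissive crossdomain.xml, an alternative is to proxy the images through your own server so they load from the same domain as the SWF.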


  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please the short version i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours) over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work-out what to do (this has pissed the client off even more) to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he than wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations) i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), he doesnt seem to care to much for the protocols i use for dealing with scope creep and change control the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got finally, he began to bully me - basically insisting i shut-up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. than i deleted that email and wrote a new one in the new email, i suggested as a solution i write up a list of grievances we both had. we than review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less bias as it has the email communiques between me and my client the long version (with background) the long version works by me showing you the email communiques which lead to the situation coming to a head. so here it is, judge for yourself where the trouble started... 1. client asked me why something was missing from a feature i just uploaded, my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included 2. [clients response...] Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields. 3. [my response...] hi [client], the 'part' field appeared to me as a redundancy / mistake. 
    i requested clarification but never received any in a timely manner (about 2 weeks ago) the specification for this feature also indicated it wasnt going to be included: RE: "Why the big fuss about not adding "Part" " it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple, you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recalled, i said the best course of action is to just ask, and i would let you know on a case-by-case basis 4. [email from me to client...] hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works) note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now) RE: the UI/layout of the online catalogue page you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time) 5. [email from client to me...] *(note: i had put up the feature & asked the client to review it, never heard back from them for a few days)* Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature. 6. [from me to client...] hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation 7. [email from client to me...] Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense". 8. [email from me to client...] hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider: 1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.") 2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it).
    ive spent two days coding that feature as a gesture of good will i would request that you please consider these two facts carefully and sincerely 9. [email from client to me...] Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an addition cost, some of those "shelved features" features. See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function. a few notes about the above email... i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all) nothing has been charged for twice, i didnt charge the client for culled features. im charging him to now do those culled features the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this to be fair to them) what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control 10. [email from me to client...] hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more) so, i decided to deleted it start over, for two reasons: 1) a long email is being disrespectful of your time (youre a busy businessman with things to do) 2) whos wrong or right gets us no closer to fixing the problems we are experiencing what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money) i submit this list to you for you to add to as necessary we then both take a good hard look at this list, and we decide which areas we are willing to give ground on as an example, the list may look something like this: "louis, you keep taking away features you said you would do" [your grievance 2] [your grievance 3] [your grievance ...] "[client], i feel you dont properly read the specs i prepare for you..." [my grievance 2] [my grievance 3] [my grievance ...] if you are willing to give this a try, let me know will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground' what do you think? 11. [email from client to me ...]
    Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more profession presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties-in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site and we will process and complete each v1.1 mini-spec, item by item, until the job is complete. 12. [last email from me to client...] hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on issues at hand, i have decided to place your project on-hold for the moment i will be considering further options on how to over-come our challenges over the next few days i will contact you by monday 17/may to discuss any new options i have come up with, and if i believe it is appropriate to restart work on your project at that point or not told you it was long... what do you think?


  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard

    As a SharePoint developer (or IT professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination: removing individual web parts from the page, hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slowly. The Developer Dashboard is turned off by default, and I'll go over 3 different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:

    You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). On the right side you will see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries is actually a hyperlink, and clicking on it displays a pop-up window that shows the actual SQL query text, the call stack that triggered the database call, and the IO statistics of that query.

    Enabling the Developer Dashboard

    Option 1: Managed Code (a sketch of this call appears after this section). The Developer Dashboard is a farm-wide setting, and the code won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off, and for a particular site only. The first cool thing about this is that the Site Collection Admin that turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set the SPDeveloperDashboardSettings.RequiredPermissions property to control which group of users will have the permission to see the output.

    Option 2: Using stsadm. You can run the following command to configure the Developer Dashboard:

    STSADM –o setproperty –pn developer-dashboard –pv OnDemand

    To successfully execute this command, be sure that you are running as a Farm Admin.

    Option 3: Using PowerShell. For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model (see the sketch after this section). You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter.
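    For reference, here are minimal sketches of Option 1 (managed code) and Option 3 (PowerShell), assuming the standard SharePoint 2010 administration object model, where SPWebService.ContentService exposes the farm-wide DeveloperDashboardSettings:

    ```csharp
    using Microsoft.SharePoint.Administration;

    class EnableDeveloperDashboard
    {
        static void Main()
        {
            // Minimal sketch: set the farm-wide Developer Dashboard level.
            // Requires farm-admin rights and must run on a farm server;
            // it will not work from a web part on a non-Central Admin site.
            SPDeveloperDashboardSettings settings =
                SPWebService.ContentService.DeveloperDashboardSettings;
            settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;
            settings.Update();
        }
    }
    ```

    And a PowerShell equivalent:

    ```powershell
    # Minimal sketch of the PowerShell approach (Option 3).
    $settings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
    $settings.DisplayLevel = "OnDemand"   # On | Off | OnDemand
    $settings.Update()
    ```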
    Events and the Developer Dashboard

    Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part it will be events in the base web part class). Let's say you have a click event that could take some time, for example a web service call, and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "monitored scope", and each scope has a set of "monitors" that measure and count calls and timings, which appear in the Developer Dashboard. You can get your custom code included in the Developer Dashboard by wrapping it inside a new monitored scope (see the sketch below); the new scope, "My long web service call", will then show up in the Developer Dashboard along with the time it took to complete processing. In my opinion, wrapping your custom code in an SPMonitoredScope is a SharePoint development best practice, since it provides you visibility and a better understanding of the performance of your components.
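    A minimal sketch of the SPMonitoredScope usage described above; the scope name comes from the post, while the body is a hypothetical placeholder for your slow call:

    ```csharp
    using System.Threading;
    using Microsoft.SharePoint.Utilities;

    class MonitoredScopeExample
    {
        static void DoWork()
        {
            // Everything inside this using block is measured and reported in
            // the Developer Dashboard under the scope name given here.
            using (new SPMonitoredScope("My long web service call"))
            {
                // Hypothetical placeholder for the slow operation being measured.
                Thread.Sleep(2000);
            }
        }
    }
    ```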


  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
    MvcContrib contains a feature called Portable Areas, which I've recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature, but once you start building robust portable areas, you'll also want to be able to access other external files like css and javascript. After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I'd like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies, and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we'd want the same case sensitivity handling to be taken into account for any embedded resource, so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios:

    - External image files that are referenced in an <img> tag
    - External files referenced like css or JavaScript files
    - Image files referenced inside css files

    Embedded Resources Walkthrough

    This post will describe a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial "Quick Links" widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality; you must opt in if you want to use it. To do this, you simply "register" it by providing a route in your area registration that uses it, like this:

    ```csharp
    context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
        new { controller = "EmbeddedResource", action = "Index" },
        new string[] { "MvcContrib.PortableAreas" });
    ```

    First, notice that I can specify any route I want (e.g., "quicklinks/resources/…"). Second, notice that I need to include the "MvcContrib.PortableAreas" namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources has now been merged. Therefore, the call to:

    ```csharp
    RegisterTheViewsInTheEmmeddedViewEngine(GetType());
    ```

    has now been removed (breaking change). It has been replaced with:

    ```csharp
    RegisterAreaEmbeddedResources();
    ```

    Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this:

    I've got a css file in a folder called "Content" as well as a couple of image files in a folder called "images". To reference these in my aspx/ascx code, all I have to do is this:

    ```aspx
    <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" />
    <img src="<%= Url.Resource("images.globe.png") %>" />
    ```

    This results in the following HTML markup:

    ```html
    <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" />
    <img src="/quicklinks/resource/images.globe.png" />
    ```

    The Url.Resource() method is now included in MvcContrib as well. Make sure you import the "MvcContrib" namespace in your views.
    Next, I have the following html to render the quick links:

    ```html
    <ul class="links">
        <li><a href="http://www.google.com">Google</a></li>
        <li><a href="http://www.bing.com">Bing</a></li>
        <li><a href="http://www.yahoo.com">Yahoo</a></li>
    </ul>
    ```

    Notice the <ul> tag has a class called "links". This is defined inside my QuickLinks.css file and looks like this:

    ```css
    1: ul.links li
    2: {
    3:     background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat;
    4:     padding-left: 20px;
    5:     margin-bottom: 4px;
    6: }
    ```

    On line 3 we're able to refer to the url for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes as well? This point was raised by John Nelson in this post. This has been taken into account as well. For example, suppose you want your physical url to look like this:

    ```html
    <img src="/quicklinks/images/globe.png" />
    ```

    instead of the corresponding URL shown above (i.e., "/quicklinks/resource/images.globe.png").


  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
    The Internet of Things (IoT) is coming. And, with today's launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it.

    Java in the Internet of Things

    By all accounts, intelligent embedded devices are penetrating the world around us: driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things. As I blogged yesterday, this is a huge opportunity for the Java technology and ecosystem. To enable and utilize these billions of devices effectively, you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform. Java technology is ideally suited to address these technical and business problems, enabling you to eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology is already powering around 10 billion devices worldwide. Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product: Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update release of the Java ME Software Development Kit (SDK) to version 3.2.

    What is Oracle Java ME Embedded 3.2?

    Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted for low-power, limited-memory devices requiring support for a range of network services and I/O interfaces.

    What features and APIs are provided by Oracle Java ME Embedded 3.2?

    Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs:

    - File I/O APIs (JSR-75)
    - Wireless Messaging APIs (JSR-120)
    - Web Services (JSR-172)
    - Security and Trust Services subset (JSR-177)
    - Location APIs (JSR-179)
    - XML APIs (JSR-280)
    - Device Access API
    - Application Management System (AMS) API
    - AccessPoint API
    - Logging API

    Additional embedded features are:

    - Remote application management system
    - Support for continuous 24×7 operation
    - Application monitoring, auto-start, and system recovery
    - Application access to peripheral interfaces such as GPIO, I2C, SPIO, memory-mapped I/O
    - Application-level logging framework, including an option for remote logging
    - Headless on-device debugging: source-level Java application debugging over IP connection
    - Remote configuration of the Java VM

    What type of platforms are targeted by Oracle Java ME 3.2 Embedded?

    The product is designed for embedded, always-on, resource-constrained, headless (no graphics/no UI), connected (wired or wireless) devices with a variety of peripheral I/O.
    The high-level system requirements are as follows:

    - System based on ARM architecture SOCs
    - Memory footprint (approximate) from 130 KB RAM/350 KB ROM (for a minimal, customized configuration) to 700 KB RAM/1500 KB ROM (for the full, standard configuration)
    - Very simple embedded kernel, or a more capable embedded OS/RTOS
    - At least one type of network connection (wired or wireless)

    The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    What types of applications can I develop with Oracle Java ME Embedded 3.2?

    The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant for embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar and best-in-class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking.

    What tools are available for embedded application development for Oracle Java ME Embedded 3.2?

    Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.

    OK, sounds great! Where can I find out more? And how do I get started?

    There is a complete set of information, data sheet, API documentation, "Getting Started Guide", FAQ, and download links available:

    - For an overview of Oracle Embeddable Java, see here.
    - For the Oracle Java ME Embedded 3.2 press release, see here.
    - For the Oracle Java ME Embedded 3.2 data sheet, see here.
    - For the Oracle Java ME Embedded 3.2 landing page, see here.
    - For the Oracle Java ME Embedded 3.2 documentation page, including a "Getting Started Guide" and FAQ, see here.
    - For the Oracle Java ME SDK 3.2 landing and download page, see here.
    - Finally, to ask more questions, please see the OTN "Java ME Embedded" forum.

    To get started, grab the "Getting Started Guide" and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.

    Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne?

    Glad you asked! Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions, to hands-on tutorials, and demos. Stay tuned, I will post details shortly.

    Cheers,
    – Terrence

    Filed under: Mobile & Embedded Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart


  • Installing Visual Studio Team Foundation Server Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version I like to document the install. Although I had no errors on my main computer, my netbook did have problems. Although I am not ready to call it a Service Pack problem just yet! Update 2011-03-10 – Running the Team Foundation Server 2010 Service Pack 1 install a second time worked As per Brian's post I am installing the Team Foundation Server Service Pack first and indeed as this is a single server local deployment I need to install both. If I only install one it will leave the other product broken. This however does not affect you if you are running Visual Studio and Team Foundation Server on separate computers as is normal in a production deployment. Main workhorse I will be installing the service pack first on my main computer as I want to actually use it here. Figure: My main workhorse I will also be installing this on my netbook which is obviously of significantly lower spec, but I will do that one after. Although, as always I had my fingers crossed, I was not really worried. Figure: KB2182621 Compared to Visual Studio there are not really a lot of components to update. Figure: TFS 2010 and SQL 2008 are the main things to update There is no “web” installer for the Team Foundation Server 2010 Service Pack, but that is ok as most people will be installing it on a production server and will want to have everything local. I would have liked a Web installer, but the added complexity for the product team is not work the capability for a 500mb patch. Figure: There is currently no way to roll SP1 and RTM together Figure: No problems with the file verification, phew Figure: Although the install took a while, it progressed smoothly   Figure: I always like a success screen Well, as far as the install is concerned everything is OK, but what about TFS? Can I still connect and can I still administer it. Figure: Service Pack 1 is reflected correctly in the Administration Console I am confident that there are no major problems with TFS on my system and that it has been updated to SP1. I can do all of the things that I used before with ease, and with the new features detailed by Brian I think I will be happy. Netbook The great god Murphy has stuck, and my poor wee laptop spat the Team Foundation Server 2010 Service Pack 1 out so fast it hit me on the back of the head. That will teach me for not looking… Figure: “Installation did not succeed” I am pretty sure should not be all caps! On examining the file I found that everything worked, except the actual Team Foundation Server 2010 serving step. Action: System Requirement Checks... 
    Action complete
    Action: Downloading and/or Verifying Items
    c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp: Verifying signature for VS10-KB2182621.msp
    c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp Signature verified successfully for VS10-KB2182621.msp
    c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi: Verifying signature for DACFramework_enu.msi
    c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi Signature verified successfully for DACFramework_enu.msi
    c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi: Verifying signature for DACProjectSystemSetup_enu.msi
    Exists: evaluating
    Exists evaluated to false
    c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi Signature verified successfully for DACProjectSystemSetup_enu.msi
    c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi: Verifying signature for TSqlLanguageService_enu.msi
    c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi Signature verified successfully for TSqlLanguageService_enu.msi
    c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi: Verifying signature for SharedManagementObjects_x86_enu.msi
    c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi Signature verified successfully for SharedManagementObjects_x86_enu.msi
    c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi: Verifying signature for SharedManagementObjects_amd64_enu.msi
    c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi Signature verified successfully for SharedManagementObjects_amd64_enu.msi
    c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi: Verifying signature for SQLSysClrTypes_x86_enu.msi
    c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi Signature verified successfully for SQLSysClrTypes_x86_enu.msi
    c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi: Verifying signature for SQLSysClrTypes_amd64_enu.msi
    c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi Signature verified successfully for SQLSysClrTypes_amd64_enu.msi
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab: Verifying signature for vcruntime\Vc_runtime_x86.cab
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab Signature verified successfully for vcruntime\Vc_runtime_x86.cab
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi: Verifying signature for vcruntime\Vc_runtime_x86.msi
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi Signature verified successfully for vcruntime\Vc_runtime_x86.msi
    c:\757fe6efe9f065130d4838081911\SetupUtility.exe: Verifying signature for SetupUtility.exe
    c:\757fe6efe9f065130d4838081911\SetupUtility.exe Signature verified successfully for SetupUtility.exe
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab: Verifying signature for vcruntime\Vc_runtime_x64.cab
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab Signature verified successfully for vcruntime\Vc_runtime_x64.cab
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi: Verifying signature for vcruntime\Vc_runtime_x64.msi
    c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi Signature verified successfully for vcruntime\Vc_runtime_x64.msi
    c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe: Verifying signature for NDP40-KB2468871.exe
    c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe Signature verified successfully for NDP40-KB2468871.exe
    Action complete
    Action: Performing actions on all Items
    Entering Function: BaseMspInstallerT >::PerformAction
    Action: Performing Install on MSP:
    c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp targetting Product: Microsoft Team Foundation Server 2010 - ENU
    Returning IDOK. INSTALLMESSAGE_ERROR [Error 1935.An error occurred during the installation of assembly 'Microsoft.TeamFoundation.WebAccess.WorkItemTracking,version="10.0.0.0",publicKeyToken="b03f5f7f11d50a3a",processorArchitecture="MSIL",fileVersion="10.0.40219.1",culture="neutral"'. Please refer to Help and Support for more information. HRESULT: 0x80070005. ]
    Returning IDOK. INSTALLMESSAGE_ERROR [Error 1712.One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.]
    Patch (c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp) Install failed on product (Microsoft Team Foundation Server 2010 - ENU). Msi Log: MSI returned 0x643
    Entering Function: MspInstallerT >::Rollback
    Action Rollback changes
    PerformMsiOperation returned 0x643
    PerformMsiOperation returned 0x643
    OnFailureBehavior for this item is to Rollback.
    Action complete
    Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:14:09).
    Figure: Error log for the Team Foundation Server 2010 install shows a failure
    As there is really no information in this log as to why the installation failed, I checked the event log on that box.
    Figure: There are hundreds of errors, and it actually looks like there are more problems than a failed Service Pack
    I am going to just run it again and see if it was because the netbook was slow to catch on to the update. Here's hoping; but even if it fails, I would question the installation of Windows (PDC laptop original install) before I question the Service Pack.
    Figure: Second run through was successful
    I don't know if the laptop was just slow, or what… Did you get this error? If you did, I will push this to the product team as a problem, but unless more people have this sort of error, I will just look to write this off as a corrupted install of Windows and reinstall.

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspx
    Since my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I've been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step by step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily roll back to previous settings. It's like Azure Web and Worker Roles, where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts:
    1. Deploying Umbraco – the CMS that will store your configurable settings and the current values;
    2. Publishing your config – create a document type that encapsulates your settings and a template to expose them as JSON;
    3. Consuming your config – in .NET, a simple client that uses dynamic objects to access settings;
    4. Config lifecycle management – how to publish, audit, and roll back settings.
    Let's get started.
    Deploying Umbraco
    There's an Umbraco package on Azure Websites, so deploying your own instance is easy – but there are a couple of things to watch out for, so this step-by-step will put you in a good place.
    Create From Gallery
    The easiest way to get started is with an Azure subscription: navigate to add a new Website and then Create From Gallery. Under CMS, you'll see an Umbraco package (currently at version 7.1.3):
    Configure Your App
    For high availability and scale, you'll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I'd create a new SQL Azure database – which Umbraco will use to store all its content:
    You can use the free 20 MB database option if you don't have demanding NFRs, or if you're just experimenting. You'll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise.
    Specify Database Settings
    You can create a new database on an existing server if you have one, or create new. If you create a new server *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL admin account that you can use for managing the db; the previous account was the service account Umbraco uses to connect.
    Make Tea
    If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database.
    Install Umbraco
    So far we've deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net:
    Enter the credentials you want to use to log in – this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone who knows the URL can browse to your website and complete the installation themselves with their own credentials. A remote possibility, but it's there. From this page *do not* click the big green Install button.
    If you do, Umbraco will configure itself with a local SQL Server CE database (an .sdf file on the web server), and ignore the SQL Azure database you've carefully provisioned and may be paying for. Instead, click on the Customize link and:
    Configure Your Database
    You need to enter your SQL Azure database details here, so you'll have to get the server name from the Azure Management Console. You don't need to explicitly grant your Umbraco website access to the database, though. Click Continue and you'll be offered a "starter" website to install:
    If you don't know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you'll have a bunch of artifacts in your CMS that you don't want, and you'll have to work out which you can safely delete. So I'd click "No thanks, I do not want to install a starter website" and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco – which you can access from http://my-app-config.azurewebsites.net/umbraco:
    That's It
    Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and back up, and it is ready for your content. In the next post, we'll define what our app config looks like, and publish some settings for the dev environment.
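    To give a flavour of where the series is heading, here is a minimal sketch of the kind of .NET client described in part 3. It assumes the Umbraco template will expose the settings as JSON at a known URL and that Json.NET is referenced; the URL and the property name below are purely illustrative, not the real ones from the later posts.

    using System;
    using System.Net;
    using Newtonsoft.Json;

    class ConfigClient
    {
        static void Main()
        {
            using (var web = new WebClient())
            {
                // Hypothetical endpoint where the Umbraco template publishes the dev settings as JSON.
                string json = web.DownloadString("http://my-app-config.azurewebsites.net/config/dev");

                // A dynamic object lets the client read settings without a strongly-typed contract.
                dynamic settings = JsonConvert.DeserializeObject(json);
                Console.WriteLine((string)settings.connectionString); // "connectionString" is an invented key
            }
        }
    }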

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover… and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed. Every day, there are many steps to perform to maintain the database infrastructure, including checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports. And yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day.
    Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during "production testing" that certain processes need to be changed because the written requirements barely resembled the actual customer requirements. Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code. Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill.
    So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code into something simple, or by taking all of the processes and breaking them down into small, independent, easily tested steps. The former approach usually requires agreement on changing underlying structures, which in turn requires countless mind-numbing meetings; the latter can generally be done to any complex process without the same frustration or anger. Though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will definitely be less unpleasant).
    We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of school. In my 4 years (OK, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and occasional indentured DBA: "show your work". This process of showing your work was my first lesson in simplification. Each step in the process was, in fact, far simpler than the entire process.
    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process.
    A computer process is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (provided you did your job of testing the other steps better than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one; it would usually be easier to find a specific needle in a stack of similarly shaped needles.
    The goal is to strive for simplicity in the solution, or at least to simplify every process down to as many independent, testable, simple tasks as possible. For the tasks that really can't be done completely independently, at least break them down into simpler steps that can be tested independently. Like working out division problems longhand, have each step of the larger problem verified and tested.
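    To make the "show your work" idea concrete in code, here is a minimal sketch of a nightly process decomposed into small steps that each report status back, so a failure points at exactly one step. This is not from the article; the step names and trivial bodies are invented for illustration, and in real life each delegate would do actual work and record its intermediate results.

    using System;
    using System.Collections.Generic;

    class NightlyMaintenance
    {
        static void Main()
        {
            // Each step is small, independent, and testable on its own.
            var steps = new List<(string Name, Func<bool> Run)>
            {
                ("CheckPhysicalStructures", () => true),
                ("ReindexTables",           () => true),
                ("BackupDatabases",         () => true),
                ("VerifyBackups",           () => true),
                ("RunEtl",                  () => true),
                ("PrepareDailyReports",     () => true),
            };

            foreach (var step in steps)
            {
                bool ok = step.Run();
                Console.WriteLine($"{step.Name}: {(ok ? "OK" : "FAILED")}"); // each step shows its work
                if (!ok) return; // stop at the first broken step instead of compounding the error
            }
        }
    }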

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Clusters / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4-node RAC cluster, completely configured in just over 40 minutes - starting from importing the template into Oracle VM and creating the VMs, to a fully up-and-running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, and deploycluster.py.
    The setup is a 4-node cluster where each VM has 8 GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100 GB: 5 x 20 GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration, and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all.
    # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
    Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
    (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64)
    Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
    Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
    INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
    Password:
    INFO: Attempting to connect to Oracle VM Manager...
    INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
    INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
    INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
    INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
    INFO: Verifying all (4) VMs are in Running state
    INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
    INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
    INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
    netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
    buildcluster: yes
    INFO: Starting to send cluster details to all (4) VM(s).......
    INFO: Sending to VM with a simple name of "racovm.0"....
    INFO: Sending to VM with a simple name of "racovm.1".....
    INFO: Sending to VM with a simple name of "racovm.2".....
    INFO: Sending to VM with a simple name of "racovm.3"......
    INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
    INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
    Logfile at: /root/rac/deploycluster/deploycluster2.log
    My netconfig.ini:
    # Node specific information
    NODE1=db11rac1
    NODE1VIP=db11rac1-vip
    NODE1PRIV=db11rac1-priv
    NODE1IP=192.168.1.56
    NODE1VIPIP=192.168.1.65
    NODE1PRIVIP=192.168.2.2
    NODE2=db11rac2
    NODE2VIP=db11rac2-vip
    NODE2PRIV=db11rac2-priv
    NODE2IP=192.168.1.58
    NODE2VIPIP=192.168.1.66
    NODE2PRIVIP=192.168.2.3
    NODE3=db11rac3
    NODE3VIP=db11rac3-vip
    NODE3PRIV=db11rac3-priv
    NODE3IP=192.168.1.173
    NODE3VIPIP=192.168.1.174
    NODE3PRIVIP=192.168.2.4
    NODE4=db11rac4
    NODE4VIP=db11rac4-vip
    NODE4PRIV=db11rac4-priv
    NODE4IP=192.168.1.175
    NODE4VIPIP=192.168.1.176
    NODE4PRIVIP=192.168.2.5
    # Common data
    PUBADAP=eth0
    PUBMASK=255.255.255.0
    PUBGW=192.168.1.1
    PRIVADAP=eth1
    PRIVMASK=255.255.255.0
    RACCLUSTERNAME=raccluster
    DOMAINNAME=wimmekes.net
    DNSIP=
    # Device used to transfer network information to second node
    # in interview mode
    NETCONFIG_DEV=/dev/xvdc
    # 11gR2 specific data
    SCANNAME=db11vip
    SCANIP=192.168.1.57
    Last few lines of the in-VM log file:
    2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
    2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
    2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)
    From start_vm to completely configured: 29m:39s. The other 10 minutes were the template import and the creation of the 4 VMs from the template, along with the shared storage configuration. This consists of a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact, they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.
    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical devices software background, I find this unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones.
    I have been asked to look at source control options that would eliminate the deployment of buggy or unfinished code without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev, and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev).
    This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
    Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, small late changes that never get pulled across to other branches, and developers complaining about merge disasters...
    What are people's thoughts on this?
    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is about just not having a central repository like TFS or SVN; some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode); and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this should be a separate question!

    Read the article

  • How to draw textures on a model

    - by marc wellman
    The following code is a complete XNA 3.1 program, almost unaltered from the code skeleton Visual Studio creates for a new project. The only things I have changed are: I imported a .x model into the content folder of the VS solution (the model is a simple square with a texture spanning over it - made in Google Sketchup and exported with several .x exporters), and in the Load() method I am loading the .x model into the game. The Draw() method uses a BasicEffect to render the model. Except for these three things I haven't added any code.
    Why does the model not show the texture? What can I do to make the texture visible?
    This is the texture file (a standard SketchUp texture from the palette):
    And this is what my program looks like - as you can see: No texture!
    Find below the complete source code of the program AND the complete .x file:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;

    namespace WindowsGame1
    {
        /// <summary>
        /// This is the main type for your game
        /// </summary>
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            /// <summary>
            /// Allows the game to perform any initialization it needs to before starting to run.
            /// This is where it can query for any required services and load any non-graphic
            /// related content. Calling base.Initialize will enumerate through any components
            /// and initialize them as well.
            /// </summary>
            protected override void Initialize()
            {
                // TODO: Add your initialization logic here
                base.Initialize();
            }

            Model newModel;

            /// <summary>
            /// LoadContent will be called once per game and is the place to load
            /// all of your content.
            /// </summary>
            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
                newModel = Content.Load<Model>(@"aau3d");
                foreach (ModelMesh mesh in newModel.Meshes)
                {
                    foreach (ModelMeshPart meshPart in mesh.MeshParts)
                    {
                        meshPart.Effect = new BasicEffect(this.GraphicsDevice, null);
                    }
                }
            }

            /// <summary>
            /// UnloadContent will be called once per game and is the place to unload
            /// all content.
            /// </summary>
            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            /// <summary>
            /// Allows the game to run logic such as updating the world,
            /// checking for collisions, gathering input, and playing audio.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();

                // TODO: Add your update logic here
                base.Update(gameTime);
            }

            /// <summary>
            /// This is called when the game should draw itself.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Draw(GameTime gameTime)
            {
                if (newModel != null)
                {
                    GraphicsDevice.Clear(Color.CornflowerBlue);
                    Matrix[] transforms = new Matrix[newModel.Bones.Count];
                    newModel.CopyAbsoluteBoneTransformsTo(transforms);
                    foreach (ModelMesh mesh in newModel.Meshes)
                    {
                        foreach (BasicEffect effect in mesh.Effects)
                        {
                            effect.EnableDefaultLighting();
                            effect.TextureEnabled = true;
                            effect.World = transforms[mesh.ParentBone.Index]
                                * Matrix.CreateRotationY(0)
                                * Matrix.CreateTranslation(new Vector3(0, 0, 0));
                            effect.View = Matrix.CreateLookAt(new Vector3(200, 1000, 200), Vector3.Zero, Vector3.Up);
                            effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), 0.75f, 1.0f, 10000.0f);
                        }
                        mesh.Draw();
                    }
                }
                base.Draw(gameTime);
            }
        }
    }

    This is the model I am using (.x):

    xof 0303txt 0032
    // SketchUp 6 -> DirectX (c)2008 edecadoudal, supports: faces, normals and textures
    Material Default_Material{
        1.0;1.0;1.0;1.0;;
        3.2;
        0.000000;0.000000;0.000000;;
        0.000000;0.000000;0.000000;;
    }
    Material _Groundcover_RiverRock_4inch_{
        0.568627450980392;0.494117647058824;0.427450980392157;1.0;;
        3.2;
        0.000000;0.000000;0.000000;;
        0.000000;0.000000;0.000000;;
        TextureFilename { "aau3d.xGroundcover_RiverRock_4inch.jpg"; }
    }
    Mesh mesh_0{
        4;
        -81.6535;0.0000;74.8031;,
        -0.0000;0.0000;0.0000;,
        -81.6535;0.0000;0.0000;,
        -0.0000;0.0000;74.8031;;
        2;
        3;0,1,2,
        3;1,0,3;;
        MeshMaterialList {
            2;
            2;
            1,
            1;
            { Default_Material }
            { _Groundcover_RiverRock_4inch_ }
        }
        MeshTextureCoords {
            4;
            -2.1168,-3.4022;
            1.0000,-0.0000;
            1.0000,-3.4022;
            -2.1168,-0.0000;;
        }
        MeshNormals {
            4;
            0.0000;1.0000;-0.0000;
            0.0000;1.0000;-0.0000;
            0.0000;1.0000;-0.0000;
            0.0000;1.0000;-0.0000;;
            2;
            3;0,1,2;
            3;1,0,3;;
        }
    }
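    For anyone reading along with the same symptom, one hedged guess at the cause (not confirmed by the poster): the LoadContent loop above replaces every MeshPart.Effect with a brand-new BasicEffect that has no Texture assigned, so the texture the content pipeline attached to the model is thrown away. A sketch of carrying the texture across the swap, assuming the original effects are BasicEffect instances:

    // Inside LoadContent, instead of assigning a bare BasicEffect:
    foreach (ModelMesh mesh in newModel.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            // Keep the texture that the content pipeline attached to the original effect.
            BasicEffect oldEffect = meshPart.Effect as BasicEffect;
            BasicEffect newEffect = new BasicEffect(this.GraphicsDevice, null);
            if (oldEffect != null)
            {
                newEffect.Texture = oldEffect.Texture;
                newEffect.TextureEnabled = true;
            }
            meshPart.Effect = newEffect;
        }
    }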

    Read the article

  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates about whether to use portlets or task flows in a WebCenter custom portal application. Usually the main battle in these debates is centered around which technology enables better performance. The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further the discussion. However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue.
    An example of the presence of the mismatch can be viewed in the application's log:
    The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.
    Notice that due to the mismatch the portlet's CSS cannot be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of the blog will define the portlet mismatch and cover some debugging tips that can help you solve the portlet mismatch issue. Following that, I will give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application.
    Portlet Mismatch Defined
    In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) tries to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet generates its own stylesheet and references it from its markup - this is called mismatched-skin. This can happen because:
    1. The consumer and producer use different versions of ADF Faces, or
    2. The consumer has additional skin-additions that the producer doesn't have, or vice versa, or
    3. The producer does not have the consumer skin.
    For cases (1) and (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer defaults to using the portlet skin. If there is a skin mismatch then there may be a performance hit because:
    - The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off)
    - The generated portlet markup uses uncompressed styles, resulting in a larger markup
    It is often not obvious when a skin mismatch occurs, unless you look for either of these indicators:
    The log messages in the producer log, for example:
    The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.
    View the portlet markup inside the iframe; there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy): <link rel=\"stylesheet\" charset=\"UTF-8\" type=\"text/css\" href=\"http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css...
    Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can see a request is made to the portlet stylesheet resource (see URL above).
    There are a number of reasons for a mismatched skin. For skins to match, the producer and consumer must match on the following configurations:
    1. The ADF Faces version (different versions may have different style selectors)
    2. Style compression, which is defined in the web.xml (the default value is false, i.e. compression is ON)
    3. Tonal styles or themes, also defined in the web.xml via context-params
    4. The same skin additions (jars with skins) are available to both the producer and the consumer. Skin additions are defined in the trinidad-skins.xml, using the <skin-addition> tags. These are then aggregated from all the jar files in the classpath. If there's any jar that exists on the producer but not the consumer, or vice versa, you get a mismatch.
    Debugging Tips
    Ensure the style compression and tonal styles/themes match on the consumer and producer by looking at the web.xml documents for the consumer and producer applications.
    It is a bit more involved to determine whether the jars match. However, you can enable Trinidad logging to show which skin-addition it is processing. To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST. For example, in the case of the WebLogic server used by JDeveloper: $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml
    Add a new entry: <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/>
    Restart WebLogic. Run the consumer page; you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch.
    The following is an example of what the log will produce with this setting:
    [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList]
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml
    The Complete Example
    The first step is to create the shared library. The WebCenter documentation covering this is located here in section 15.7. In addition, our ADF guru Frank Nimphius also covers this in his blog. Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and consumer.
    Create a new Generic Application
    - Give the application a name (i.e. MySharedSkin)
    - Give a project name (i.e. MySkinProject)
    - Leave Project Technologies blank (none selected), and click Finish
    Create the trinidad-skins.xml
    - Right-click on the MySkinProject node in the Application Navigator and select "New"
    - In the New Gallery, click on "General", select "File" from the Items, and click OK
    - In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF
    - In the trinidad-skins.xml, complete the skin entry,
    for example:
    <?xml version="1.0" encoding="windows-1252" ?>
    <skins xmlns="http://myfaces.apache.org/trinidad/skin">
      <skin>
        <id>mysharedskin.desktop</id>
        <family>mysharedskin</family>
        <extends>fusionFx-v1.desktop</extends>
        <style-sheet-name>css/mysharedskin.css</style-sheet-name>
      </skin>
    </skins>
    Create the CSS file
    - In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
    - In the New Gallery, select Web-Tier->HTML, CSS File from the Items and click OK
    - In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css)
    - Ensure that the directory path is under the META-INF (i.e. MySkinProject\src\META-INF\css)
    - Once the new CSS opens in the editor, add in a style selector. For example, this selector will style the background of a particular panelGroupLayout:
    af|panelGroupLayout.customPGL{
        background-color:Fuchsia;
    }
    Create the MANIFEST.MF (used for the deployment JAR)
    - In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
    - In the New Gallery, click on "General", select "File" from the Items, and click OK
    - In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF
    - Complete the MANIFEST.MF, where the extension name is the shared library name:
    Manifest-Version: 1.1
    Created-By: Martin Deh
    Implementation-Title: mysharedskin
    Extension-Name: mysharedskin.lib.def
    Specification-Version: 1.0.1
    Implementation-Version: 1.0.1
    Implementation-Vendor: MartinDeh
    Create a new Deployment Profile
    - Right-click on the MySkinProject node, and select New
    - From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK
    - In the Create Deployment Profile dialog, give a name (i.e. mysharedskinlib) and click OK
    - In the Edit JAR Deployment dialog, un-check the Include Manifest File option
    - Select Project Output->Contributors, and check Project Source Path
    - Select Project Output->Filters, and ensure that all items under the META-INF folder are selected
    - Click OK to exit the Project Properties dialog
    Deploy the shared lib to WebLogic (start the server before these steps)
    - Right-click on MySkinProject and select Deploy. For this example, I will deploy to the JDeveloper WLS
    - In the Deploy dialog, select Deploy to WebLogic Application Server and click Next
    - Choose IntegratedWebLogicServer and click Next
    - Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a shared Library radio is selected
    - Click Finish
    - Open the WebLogic console to see the deployed shared library
    The following are the steps to create a simple test portlet:
    Create a new WebCenter Portal - Portlet Producer Application
    - In the Create Portlet Producer dialog, select default settings and click Finish
    - Right-click on the Portlets node and select New
    - In the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK
    - In the General Portlet information dialog, give the portlet a name (i.e. MyPortlet) and click Next 2 times, stopping at Step 3
    - In the Content Types, select the "view" node; in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish
    - Once the portlet code is generated, open the view.jspx in the source editor
    - Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code:
    <af:form>
      <af:panelGroupLayout id="pgl1" styleClass="customPGL">
        <af:outputText value="background from shared lib skin" id="ot1"/>
      </af:panelGroupLayout>
    </af:form>
    - Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag
    - In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml
    - Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):
    <library-ref>
      <library-name>mysharedskin.lib.def</library-name>
    </library-ref>
    Notice that a reference to oracle.webcenter.skin exists. This is important if this portlet is going to be consumed by a WebCenter Portal application. If this tag is not present, the portlet skin mismatch will happen.
    Configure the portlet for deployment
    Create the portlet deployment WAR:
    - Right-click on the Portlets node and select New
    - In the New Gallery, select Deployment Profiles, WAR File from Items and click OK
    - In the Create Deployment Profile dialog, give a name (i.e. myportletwar), click OK
    - Keep all of the defaults; however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root, this will be needed to obtain the producer WSDL URL)
    - Click OK, then OK again to exit from the Properties dialog
    Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR within an EAR:
    - In the Application dropdown, select Deploy->New Deployment Profile... By default EAR File has been selected; click OK
    - Give the Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK
    - In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked
    - Keep all of the other defaults and click OK
    - For this demo, un-check the Auto Generate ..., and all of the Security Deployment Options, click OK
    - Save All
    - In the Application dropdown, select Deploy->MyPortletProducer
    - In the Deployment Action, select Deploy to Application Server, click Next
    - Choose IntegratedWebLogicServer and click Next
    - Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a standalone Application radio is selected
    - The select deployment type (identifying the deployment as a JSR 286 portlet) dialog appears. Keep the default "Yes" selection and click OK
    - Open the WebLogic console to see the deployed portlet
    The last step is to create the test portlet consuming application. This will be done using the OOTB WebCenter Portal - Framework Application.
    Create the Portlet Producer Connection
    - In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root
    - Open a browser and paste in the URL. The Portlet information page should appear. Click on the WSRP v2 WSDL link
    - Copy the URL from the browser (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL)
    - In the Application Resources view, right-click on the Connections folder and select New Connection->WSRP Connection
    - Give the producer a name or accept the default, click Next
    - Enter (paste in) the WSDL URL, click Next
    - If the connection to the portlet is successful, Step 3 (Specify Additional ...) should appear. Accept the defaults and click Finish
    Add the portlet to a test page
    - Open the home.jspx. Note in the visual editor the orange dashed border, which identifies the panelCustomizable tag.
    - From the Application Resources, select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section. A Confirm Portlet Type dialog appears; keep the default ADF Rich Portlet and click OK
    Configure the portlet to use the shared skin library
    - Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library. See the create portlet example above for the steps
    - Since, by default, the custom portal uses a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered
    - Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag with mysharedskin (this is the skin-family name defined in the trinidad-skins.xml)
    - Remove the skin-version tag
    - Right-click on the index.html to test the application
    Notice that the JDeveloper log view does not have any reporting of a skin mismatch. In addition, since I have configured the extra logging outlined in the debugging section above, I can see the processed skin jar in both the producer and consumer logs:
    <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml
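    One closing, hedged pointer on the compression check in the debugging tips: in Trinidad, style compression is typically controlled by a context parameter in web.xml, so verifying that a fragment like the one below is identical in the producer's and consumer's web.xml is a quick way to rule out configuration drift. The parameter name is the one I recall from the Trinidad documentation; verify it against your version before relying on it.
    <!-- Must be set to the same value (or be absent) in BOTH web.xml files -->
    <context-param>
      <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
      <param-value>false</param-value>
    </context-param>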

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >