Search Results

Search found 20187 results on 808 pages for 'team oracle'.


  • Debugging the NetBeans Platform

    - by Geertjan
    Once you've set up the NetBeans Platform sources as your NetBeans Platform, you're able to debug the NetBeans Platform itself. That's an occasional question (certainly not a frequent one) on the mailing list and in NetBeans Platform courses: "Is it possible to debug the NetBeans Platform?" Well, here's how.

    Firstly, set up the NetBeans Platform sources as your NetBeans Platform. Now, open in NetBeans IDE the NetBeans module where you'd like to place a breakpoint. That in itself is the hardest part of this task: you know you want to debug the NetBeans Platform, but you have no idea where to place your breakpoint.

    One way to figure that out, from 7.1 onwards, is to take a visual snapshot of the NetBeans Platform and then analyze that snapshot in NetBeans IDE. To do this, right-click a module that you've set as using the NetBeans Platform sources as its NetBeans Platform and then choose Debug. The application, i.e., the NetBeans Platform including your custom module, starts up, and you'll see NetBeans IDE in debug mode together with your NetBeans Platform application. Notice there's a new toolbar button (new in NetBeans IDE 7.1) that resembles an orange camera. Click that button and the IDE creates a visual snapshot of the running application, which in this case is the NetBeans Platform. When you click components in the visual snapshot, the Navigator and Properties windows display information about the related GUI component.

    By clicking components in this way, you can identify the component you'd like to debug and even the module where it is found. Open that module, set a breakpoint on the line of interest, right-click the module again, and choose Debug. A debug session starts and, when the breakpoint is hit, the Debugger in the IDE opens, and there you can step through the NetBeans Platform sources.

    Read the article

  • Wildcards!

    - by Tim Dexter
    Yes, it's been a while, I'm sorry, mumble, mumble ... no excuses. Well, other than it's been, as my son would say, 'hecka busy.' On a brighter note, I see Kan has been posting some cool stuff in my absence; long may he continue! I received a question today asking about using a wildcard in a template, something like:

        <?if:INVOICE = 'MLP*'?>   (where * is the wildcard)

    Well, that particular try does not work, but you can do it without building your own wildcard function. XSL, the underpinning language of the RTF templates, has some useful string functions - you can find them listed here. I used the starts-with function to achieve a simple wildcard scenario, but contains can be used in conjunction with some of the others to build something more sophisticated. Assume I have a list of friends and the amounts of money they owe me ... I'm very generous and my interest rates are pretty competitive :0)

        <ROWSET>
          <ROW><NAME>Andy</NAME><AMT>100</AMT></ROW>
          <ROW><NAME>Andrew</NAME><AMT>60</AMT></ROW>
          <ROW><NAME>Aaron</NAME><AMT>50</AMT></ROW>
          <ROW><NAME>Alice</NAME><AMT>40</AMT></ROW>
          <ROW><NAME>Bob</NAME><AMT>10</AMT></ROW>
          <ROW><NAME>Bill</NAME><AMT>100</AMT></ROW>
        </ROWSET>

    Now, listing my friends is easy enough:

        <for-each:ROW> <NAME> <AMT> <end for-each>

    but let's say I just want to see all my friends beginning with 'A'. To do that I can use an XPath expression to filter the data and tack it on to the for-each expression. This is more efficient than using an 'if' statement just inside the for-each.

        <?for-each:ROW[starts-with(NAME,'A')]?>

    will find me all the A's. The square braces denote the start of the XPath expression. starts-with is the function I'm calling, and I'm passing the value I want to check, i.e., NAME, and the string I'm looking for. Just substitute in the characters you are looking for. You can of course use the function in an if statement too:

        <?if:starts-with(NAME,'A')?><?attribute@incontext:color;'red'?><?end if?>

    Notice I removed the square braces; this will highlight text red if the name begins with an 'A'. You can even use the function to do conditional calculations:

        <?sum (AMT[starts-with(../NAME,'A')])?>

    This sums only the amounts where the name begins with an 'A'. Notice the square braces are back: it's a function we want to apply to the AMT field. Also notice that we need to use ../NAME. The AMT and NAME elements are at the same level in the tree, so when we are at the AMT level we need the ../ to go up a level and then come back down to test the NAME value. I have built out the above functions in a sample template here. Huge prizes for the first person to come up with a 'true' wildcard solution, i.e., if NAME like '*im*exter* demand cash now!
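    If you want to play with these expressions outside of a template, below is a minimal standalone Java sketch (plain JAXP XPath; the class name and the inlined sample data are just illustrative, and nothing here is BI Publisher specific) that runs the same starts-with filter and conditional sum against the friends data above:

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;

        public class WildcardDemo {
            public static void main(String[] args) throws Exception {
                String xml = "<ROWSET>"
                        + "<ROW><NAME>Andy</NAME><AMT>100</AMT></ROW>"
                        + "<ROW><NAME>Andrew</NAME><AMT>60</AMT></ROW>"
                        + "<ROW><NAME>Aaron</NAME><AMT>50</AMT></ROW>"
                        + "<ROW><NAME>Alice</NAME><AMT>40</AMT></ROW>"
                        + "<ROW><NAME>Bob</NAME><AMT>10</AMT></ROW>"
                        + "<ROW><NAME>Bill</NAME><AMT>100</AMT></ROW>"
                        + "</ROWSET>";
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new InputSource(new StringReader(xml)));
                XPath xp = XPathFactory.newInstance().newXPath();

                // Same filter as the for-each expression in the template
                NodeList rows = (NodeList) xp.evaluate(
                        "/ROWSET/ROW[starts-with(NAME,'A')]", doc, XPathConstants.NODESET);
                for (int i = 0; i < rows.getLength(); i++) {
                    System.out.println(xp.evaluate("NAME", rows.item(i)));
                }

                // Same conditional sum; note ../NAME stepping up to the sibling element
                Double total = (Double) xp.evaluate(
                        "sum(/ROWSET/ROW/AMT[starts-with(../NAME,'A')])",
                        doc, XPathConstants.NUMBER);
                System.out.println("A-names owe: " + total); // 250.0
            }
        }

    A contains() test dropped into the same predicate gets you most of the way toward that '*im*exter*' challenge.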

    Read the article

  • It's raining development VirtualBox images again!

    - by pieter.humphrey
    The cloud has burst ... the forecast calls for large amounts of VirtualBox images coming down from OTN. Are you finding the install for Database, WebLogic, SOA or WebCenter to be complicated when your goal is simply to set up a development sandbox? Sick of giving your credit card info to cloud vendors, only to be stuck in a walled garden where you can't connect to your own internal systems? Are you new to Java and just wanted something technical to sink your teeth into? Or maybe you just want to put some stuff on that new terabyte drive you got? ;) Have no fear. VirtualBox 4.0 is here. We have several development (read: don't use in production) images that were designed for use at in-person events, but we're posting them for your enjoyment. Some of the images have step-by-step hands-on labs baked into them too! So get a freeware download manager like BitComet, install VirtualBox and an MD5 checksum utility (if you are on Windows), and get wet!

    Read the article

  • Commandline Purge in AS11

    - by Dheeraj Kumar
    The AS11 B2B offering includes numerous features that are available from the command line, most of them supplements to the existing user-interface-based approach. One such feature is purging of runtime data. The command-line purge option enables users to purge runtime data based on various criteria. It is an ANT-based command that provides the flexibility to selectively set the purge criteria. The command-line option also enables an administrator to purge in bulk, without visiting the B2B UI, and can be used for automation purposes.

    By default, archival is turned on for the purge activity. As a prerequisite, the respective folder needs to be configured in the database with the proper permissions. When no filename is provided for the archived data, the sysdate is used for the filename.

    Below are the various options to purge the runtime data:

        Message state:      -Dmsgstate
        Date range:         -Dfromdate, -Dtodate (format: dd/mm/yyyy hh:mm AM/PM)
        Trading partner:    -Dtp
        Direction:          -Ddirection
        Message type:       -Dmsgtype
        Agreement name:     -Dagreement
        Id type/value:      -Didtype, -Didvalue
        Archive:            -Darchive (true/false; true by default)
        Archive file name:  -Darchivename (optional; used when archive is set to true)

    Note: when using -Darchivename, the value must be a unique file name. An existing file name used with -Darchivename throws an exception.

    Below are a few of the ant commands and their options.

    Purge based on date range and message state:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dfromdate="19/12/2009 1:04 AM" -Dtodate="19/12/2009 1:05 AM" -Dmsgstate=MSG_COMPLETE -Darchivename="filename.dmp"

    Purge based on direction:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND"

    Purge based on agreement name:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dagreement="agreement_name"

    Purge based on trading partner name:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dtp=GlobalChips

    Purge based on message state:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dmsgstate="MSG_COMPLETE"
        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND" -Dmsgstate="MSG_COMPLETE"

    Read the article

  • Submit Nominations for Duke's Choice Awards Latin America

    - by Tori Wieldt
    The Duke's Choice Awards recognize compelling uses of Java technology and community involvement, and nominations are accepted from anyone in the Java community. The first of the regional Duke's Choice Awards will be held in December in Latin America. Three winners will be announced on stage during JavaOne Latin America, December 4th to 6th, and in the Jan/Feb issue of Java Magazine. Duke's Choice Awards LAD judges include community members Yara Senger (Brazil) and Alexis Lopez (Colombia). In keeping with the 10-year tradition of the Duke's Choice Award program, the most important ingredient is innovation. Let's recognize and celebrate the innovation that Java delivers within Latin America! Submit your nominations now! Nominations close 7 November: www.java.net/dukeschoiceLAD

    As announced at JavaOne San Francisco, the Duke's Choice Award program has been expanded to include regional awards in conjunction with each international JavaOne conference. The expanded program celebrates Java innovation happening within specific regions and provides an opportunity to recognize winners locally. Regions include Latin America (LAD), Europe/Middle East/Africa (EMEA), and Asia. The global program will continue in association with the flagship JavaOne conference.

    Read the article

  • Java Spotlight Episode 112: Joonas Lehtinen on @Vaadin

    - by Roger Brinkley
    Interview with Joonas Lehtinen on Vaadin. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: Java Smart Metering video; JavaFX for Tablets and Mobile survey on FXExperience; Multiple JSRs migrating to the latest JCP version; a number of JEPs added to JDK 8 features and JDK 8 Milestones; Adopt-a-JSR for Java EE 7.

    Events: Dec 14-15, IndicThreads, Pune, India; Dec 20, 9:30am, JCP Spec Lead Call December on Developing a TCK; Jan 15-16, JCP EC Face to Face Meeting, West Coast USA.

    Feature Interview: Joonas Lehtinen started the development of Vaadin, a Java-based open source framework for building business-oriented Rich Internet Applications. He has been developing applications for the web since 1995, with a strong focus on Ajax and Java. He is also the founder and CEO of the company behind the Vaadin framework.

    What's Cool: Hinkmond Wong's work with Raspberry Pi and Java Embedded GPIO; Collaborative Whiteboard using WebSocket.

    Read the article

  • Company Administrators: Stay Alert!

    - by Pete
    Some of our customers choose to use the Themes feature to rebrand their Training and Support Center link and redirect it to an internal support site. If your company does this, we strongly advise that, for employees who have the Administrator role, you maintain a separate theme that keeps the Administrator's Training and Support link pointed to the CRM On Demand Training and Support Center rather than redirecting it to an internal support site. Why? The company administrator needs access to the Training and Support Center because it gives them pod-specific application alerts on the Support tab and pod-specific release information on the Release Info tab. If a customer no longer has access to the Training and Support Center URL because they have already rebranded that link, they can contact Customer Care to request it again.

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario…

    In order to improve the user interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ postbacks.)  It’s a very simple trick, really.  All I want to do is, when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.”

    The first places I hooked this up were easy.  Most common cause of a postback:  buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property, which is designed for just this purpose: running client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property, which tells the control what server-side code to run.  Great!  Done!  Easy!

    But then there are other controls, like DropDownLists and CheckBoxes, that we use on our pages with the AutoPostback=True setting, which causes postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged events.  So I started getting creative, using an ASP:CustomValidator control in conjunction with the CausesValidation and ValidationGroup settings on these controls.  The action on the control fired the custom validator, which was defined with a client-side validation function that did the hide-content/show-message work (and returned a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data-entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just for my show/hide trick.

    For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight in the DropDownList and CheckBox declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All I really needed to do was put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”.  This is exactly the type of thing the Attributes collection is there for: adding attributes to be rendered with the control that you would otherwise have stuck right into the HTML markup if you had been writing it by hand in the first place.  Ugh!

    A few hours wasted on clever tricks that I ended up completely removing.  But I did learn a lot more about custom validators and validation groups in the process, and got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing: don’t get lulled into too much reliance on the whiz-bang tools to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.

    Read the article

  • On my way home ...

    - by Mike Dietrich
    Modern technology is nice - sitting in the speed train from Holyhead to London Euston, working a bit. This means: I'm heading home. Still 16 hours to go, but up to now everything seems to work fine. Irish Ferries did a great job. Even though they might never have seen so many passengers entering the Ulysses (what a good name for a ship to start the journey with), everybody was so friendly and helpful. The night at Holyhead station ... ahm ... But the train left right on time. German airspace is still closed until at least 8pm tonight, and Irish airspace seems to be closed as well today. So it might be the best decision to take the longer journey. At least now I have the chance to see some countryside (a bit flat out there - but very green) ;-)

    Read the article

  • Remote synchronization

    - by Tomas Mysik
    Hi all, today we would like to show you another improvement we have prepared for NetBeans 7.2: remote synchronization. If you already use our simple (S)FTP client, this enhancement could be useful for you. Simply right-click Source Files and select Synchronize. Please note that remote synchronization works properly only on the whole project (meaning that Source Files must be selected). The Synchronize action is also available on individual files (more files can be selected at once), but the suggested operation (download, upload, etc.) is not as precise there. Also note that the suggested operations are not 100% reliable, since the timestamps provided by FTP servers are not exact.

    Once the remote files (their names and paths only, of course) are fetched, the main dialog appears. As you can see, NetBeans tries to suggest the operation (upload, download, etc.) that should be done for each individual file of your project. If you are interested only in some particular changes, you can simply filter the list.

    Since we have a file conflict, we need to resolve it first. Fortunately this is very easy: we just select the desired file and click the Diff button. The remote version of the file is downloaded and compared with the local version. The result is displayed in a dialog where you can easily apply or refuse the remote changes, or even simply type manually into the local version of the selected file. Once we are done with our changes, the operation for the selected file changes to Upload and the file is marked with * (since we made some changes). Note that if you now click the Cancel button, no changes are in fact made to the local file.

    As you can see, if we have one or more files selected, we can change their operation to:
    - no operation (the file won't be synchronized)
    - download
    - upload
    - delete (both the local and the remote file)
    - reset (the operation is reset to the one originally suggested by NetBeans, and all changes done via the Diff action are discarded)

    Now we are ready to synchronize our project. NetBeans will show us the synchronization summary (this dialog can be omitted; see the Show Summary checkbox on the previous image). The synchronization itself starts and we can see its progress and, of course, its result. As always, all the operations can be reviewed in the Output window. That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent FTP support).

    Read the article

  • surviveFocusChange=true

    - by Geertjan
    Here's a very cool thing that I keep forgetting about but that Jesse reminded me of in the recent blog entries on Undo/Redo: "surviveFocusChange=true". Look at the screenshot below. You see two windows with a toolbar button. The toolbar button is enabled whenever an object named "Bla" is in the Lookup. The "Demo" window has a "Bla" object in its Lookup, and hence the toolbar button is enabled when the focus is in the "Demo" window. Move the focus to the "Output" window, which does not have a "Bla" object in its Lookup, and the button is disabled.

    However, there are scenarios where you might like the button to remain enabled even when the focus changes. (One such scenario is the Undo/Redo scenario in this blog a few days ago, i.e., even when the Properties window has the focus, the Undo/Redo buttons should be enabled.) In the third screenshot you can see that the button is enabled even though the focus has switched to the "Output" window. How to achieve this? You need to register your Action with "surviveFocusChange" set to "true"; it is set to "false" by default:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionRegistration;
        import org.openide.util.NbBundle.Messages;

        @ActionID(category = "File", id = "org.mymodule.BlaAction")
        @ActionRegistration(surviveFocusChange = true,
                iconBase = "org/mymodule/Datasource.gif",
                displayName = "#CTL_BlaAction")
        @ActionReferences({
            @ActionReference(path = "Toolbars/Bla", position = 0)
        })
        @Messages("CTL_BlaAction=Bla")
        public final class BlaAction implements ActionListener {

            private final Bla context;

            public BlaAction(Bla context) {
                this.context = context;
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                // TODO use context
            }
        }

    That's all. When the module is compiled, folders and files are created in the NetBeans Platform filesystem from the annotations above, and the NetBeans Platform will automatically keep the button enabled even when the user switches focus to a window that does not contain a "Bla" object in its Lookup. Hence, the same "Bla" object will remain available when switching from one window to another, until a new "Bla" object is made available in the Lookup.
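    For context, here is a minimal sketch of the other half of the picture: how a "Bla" object typically lands in a window's Lookup in the first place. The DemoTopComponent class name is hypothetical, Bla is the class from the example above, and this assumes the usual InstanceContent/AbstractLookup pattern inside a NetBeans Platform module (with a dependency on the Window System and Lookup APIs):

        import org.openide.util.lookup.AbstractLookup;
        import org.openide.util.lookup.InstanceContent;
        import org.openide.windows.TopComponent;

        public final class DemoTopComponent extends TopComponent {

            private final InstanceContent content = new InstanceContent();

            public DemoTopComponent() {
                setName("Demo");
                // Anything added to "content" becomes visible to
                // context-sensitive actions, such as BlaAction above,
                // whenever this window has the focus.
                associateLookup(new AbstractLookup(content));
                // Publish a Bla instance; with surviveFocusChange=true,
                // BlaAction stays enabled even after focus moves away.
                content.add(new Bla());
            }
        }

    With that in place, BlaAction is constructed with whatever Bla instance the focused window, or, thanks to surviveFocusChange=true, the last window that provided one, publishes in its Lookup.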

    Read the article


  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced.  It is meant to replace the 'Folders_g' architecture.  While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten.  One of the main goals of the new folders is to scale better at large volumes and to remove the limit of 1000 content items or sub-folders within a folder.  Along with the new architecture, it has a new look, and a few additional features have been added.  One of those features is Query Folders.  These are folders that are populated simply by a query rather than by literally putting items within them.  This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor.  Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Also within Framework Folders is the very handy ability to do metadata updates.  It's similar to the Propagate feature in Folders_g, but there are some key differences that make it much more flexible and powerful:
    - It's used within regular folders and Query Folders, so the content you're updating doesn't all have to be in the same folder ... or in a folder at all.
    - The user decides what metadata to propagate.  In Folders_g, the system administrator controls which fields will be propagated using a single administration page.  In Framework Folders, the user decides at propagation time which fields they want to update.
    - You set the value you want on the propagation screen.  In Folders_g, the metadata defined on the parent folder is what gets propagated.  With Framework Folders, you supply the new metadata value when you select the fields you want to update; it does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful.  Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders.  Here are the basic steps to perform propagation:
    1. Create a folder for the propagation.  You can use a regular folder, but a Query Folder will work as well.
    2. Go into the folder to get the results.
    3. In the Edit menu, select 'Propagate'.
    4. Select the check-box next to each field to update and enter the new value.
    5. Click the Propagate button.  Once complete, a dialog will appear showing it is done.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working.  You aren't stuck on the page waiting for it to complete.  In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and a working icon is shown while the propagation runs.

    There is a caveat when using propagation on a Query Folder.  While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items.  So you may need to run it multiple times depending on the size, and have the query exclude the items as they get updated.

    One extra note: Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content.  But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until those products are updated to use the new folders.

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I’m starting my sixth year on the PASS Board.  I served two years as the Program Director, two years as the Vice-President of Marketing, and I’m starting my second year as the Executive Vice-President of Finance.  There’s a pretty good chance that if PASS has done something you don’t like or is doing something you don’t like, I’m involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input.  He asserted that it hadn’t.  I disagree.  I’m not going to try and list all the changes we make inside portfolios based on feedback from, and meetings with, the community.  I’m going to focus on major governance issues since I was elected to the Board.

    Management Company

    The first big change was our management company.  Our old management company had a standard approach to running a non-profit.  It worked well when PASS was launched.  Having a ready-made structure and process to run the organization enabled the organization to grow quickly.  As time went on we were limited in some of the things we wanted to do.  The more involved you were with PASS, the more you saw these limitations.  Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish.  The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change.  Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday.  We also had a very narrow window to accomplish that so that we wouldn’t affect the Summit – our only source of revenue.  We spent the year after the change rebuilding processes and putting on the Summit in Denver.  That’s a concrete example of a huge change that PASS made to better serve its members.  And it was a change that many in the community were telling us we needed to make.

    Financials

    We heard regularly from our members that they wanted our financials posted.  Today on our web site you can find audited financials going back to 2004.  We publish our budget at the start of each year.  If you ask a question about the financials on the PASS site I do my best to answer it.  I’m also trying to do a better job answering financial questions posted in other locations.  (And yes, I know I owe a few of you some blog posts.) That’s another concrete example of a change that our members asked for and that the Board agreed was a good decision.

    Minutes

    When I started on the Board the meeting minutes were very limited.  The minutes from a two-day Board meeting might fit on one page.  I think we did the bare minimum we were legally required to do.  Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about.  Certain topics are NDA, but where possible we try to list the topic we discussed and note that the actual discussion was under NDA.  We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision.  It was certainly easier to have limited minutes, but I think the extra effort helps our members understand what’s going on.

    Board Q&A

    At the 2009 Summit the Board held its first public Q&A with our members.  We’d always been available individually to answer questions.  There’s a benefit to getting us all in one room and asking the really hard questions to watch us squirm.  We learn what questions we don’t have good answers for.  We get to see how many people in the crowd look interested in the various questions and answers. I don’t recall the genesis of how this came about.  I’m fairly certain there was some community pressure though.

    Board Votes

    Until last November, the Board only reported the vote totals and not how individual Board members voted.  That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit.  It was also the topic of the first question asked at the Board Q&A, by Kendal.  Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner.  Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA.  That’s another area where the Board decided to change based on feedback from our members.

    Summit Location

    While this isn’t actually a governance issue, it is one of the more public decisions we make, and it has taken some public criticism.  There is a significant portion of our members that want the Summit near them.  There is a significant portion of our members that like the Summit in Seattle.  There is a significant portion of our members that think it should move around the country.  I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year.  I’m also one that has been swayed by some very compelling arguments that we need to have at least one outside Seattle and then revisit the decision.  I can’t tell you how the Board will vote, but I know the opinion of our members weighs heavily on the decision.

    Elections

    And that brings us to the grand-daddy of all governance issues.  My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback.  It isn’t to defend or criticize our election process.  It’s just to say that it has been undergoing continuous change since I’ve been on the Board.  I ran for the Board in the fall of 2005.  I don’t know much about what happened before then.  I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee.  I don’t recall any complaints about elections, but that doesn’t mean they didn’t occur.  The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, “Tell us about your accomplishments”).  I don’t even remember who I ran against or how many other people ran.

    I ran for the VP of Marketing in the fall of 2007.  I don’t recall any significant changes the Board made in the election process for that election.  I think a lot of the changes in 2007 came from us asking the management company to work on the election process.  I was expecting a similar set of puffball questions from my previous election.  Boy, was I in for a shock.  The NomCom had found a much better set of questions and really made the interview portion difficult.  The questions were much more behavioral in nature.  I’d already written about my vision for PASS and my goals.  They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour.  I’ve done a fair bit of technical sales in my time.  I feel I speak well under pressure addressing pointed questions.  This interview intentionally put me under pressure.  In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I’d do under pressure.  They wanted to see who would respond under pressure and who wouldn’t.  It was a bit of a shock. That was the first big change I remember in the election process.  I know there were other improvements around the process, but none of them stick in my mind quite like the unexpected hour-long grilling.

    The next big change I remember was after the 2009 elections.  Andy Warren was unhappy with the election process and wanted to make some changes.  He worked with Hannes at HQ and they came up with a better set of processes.  I think Andy moved PASS in the right direction.  Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process.  In August of 2010 we had a choice to make.  There were numerous bloggers criticizing the Board and our upcoming election.  The easy change would have been to announce that we were changing the process in a way that would satisfy our critics.  I believe that a knee-jerk response to criticism is seldom correct. Instead the Board spent August and September and October and November listening to the community.  I visited two SQLSaturdays and asked questions of everyone I could.  I attended chapter meetings and asked questions of as many people as they’d let me.  At Summit I made it a point to introduce myself to strangers and ask them about the election.  At every breakfast I’d sit down at a table full of strangers and ask about the election.  I’m happy to say that I left most tables arguing about the election.  Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election.  They were already expressing their opinion.  I wanted to talk to people that hadn’t spoken up.  I wanted to know what the silent majority thought.  The Board all attended the Q&A session where our members expressed their concerns about a variety of issues, including the election.

    The PASS Board also chose to create the Election Review Committee (ERC).  We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say, and to give us advice on how we could improve the process.  I’m a part of this, as is Andy Warren.  None of the other members are on the Board.  I’ve sat in numerous calls and interviews with this group and attended an open meeting at the Summit.  We asked anyone that wanted to discuss the election to come speak with us.  The ERC held an open meeting at the Summit and invited anyone to attend.  There are forums on the ERC web site where we’ve invited people to participate.  The ERC has reached out to key people involved in recent elections.  The years that I haven’t mentioned also saw minor improvements in the election process.  Off the top of my head I don’t recall what exact changes were made each year.  Since the 2010 election specifically, we’ve gone out of our way to seek input from the community about the process.  I’m not sure what more we could have done to invite feedback from the community. I think to say that we haven’t “fixed” the election process isn’t a fair criticism at this time.  We haven’t rushed any changes through the process.  If you don’t see any changes in our election process in July or August, then I think it’s fair to criticize us for ignoring the community or to ask for an explanation of what we’ve done.

    In Summary

    Andy’s main point was that the PASS Board hasn’t changed in response to our members’ wishes.  I think I’ve shown that time and time again the PASS Board has changed in response to what our members want.  There are only two outstanding issues: Summit location and elections.  The 2013 Summit location hasn’t been decided yet.  Our work on the elections is also in progress.  And at every step in the election review we’ve gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I’m not encouraging everyone that wants some change in the organization to organize a “blog rush” against the Board.  We take public suggestions very seriously, but we also take the time to evaluate those suggestions, learn what the rest of our members think, and make a measured decision.

    Read the article

  • How-to logout from ADF Security

    - by frank.nimphius
    ADF Security configures an authentication servlet, AuthenticationServlet, in the web.xml file that also provides logout functionality. Developers can invoke the logout with a redirect performed from an action method in a managed bean, as shown next:

        public String onLogout() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ExternalContext ectx = fctx.getExternalContext();
          String url = ectx.getRequestContextPath() +
              "/adfAuthentication?logout=true&end_url=/faces/Home.jspx";
          try {
            ectx.redirect(url);
          } catch (IOException e) {
            e.printStackTrace();
          }
          fctx.responseComplete();
          return null;
        }

    To use this functionality in your application, change the Home.jspx reference to a public page of yours that the user is redirected to after a successful logout. Note that for a successful logout, authentication should be through form-based authentication. Basic authentication is browser sign-on and re-authenticates users after the logout redirect; basic authentication is confusing to many developers for this reason.

    Read the article

  • JSF 2.2 Update from Ed Burns

    - by arungupta
    In a recent interview, the JavaServer Faces specification lead, Ed Burns, gave an update on JSF 2.2. This is a required component of the Java EE 7 platform. The work is expected to wrap up by the end of CY 2012, and the schedule is publicly available. The interview provides an update on how Tenant Scope from CDI and multi-templating will be included. It also provides details on which HTML5 content categories will be addressed. The EG discussions are mirrored at jsr344-experts@javaserverfaces-spec-public. You can also participate in the discussion by posting a message to users@javaserverfaces-spec-public. All the mailing lists are open for subscription, and the spec's JIRA provides more details about features targeted for the upcoming release. A blog at J-Development provides complete details about the new features coming in this version, and an Early Draft of the specification has been available for some time now.

    Read the article

  • Benchmark Against 160 Identity and Access Programs Worldwide

    - by Naresh Persaud
    Aberdeen documented the results of taking a "platform approach" to Identity and Access Management in a recent study - you can read the complete report here. Aberdeen has created an assessment tool that allows organizations to take a similar survey and compare their performance to the companies surveyed in the original report. The assessment takes 5 minutes to complete and provides a complete printable report with a statistical comparison for each performance indicator. In addition, the assessment report provides guidance on improvements that organizations can make to achieve better results based on the benchmark. Take the assessment by clicking here. You can also attend one of the physical events and discuss the results of the survey with the report's author, Derek Brink; at these events Derek discusses how organizations can take advantage of the report. Register here.

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site.  Last Saturday night was one of those days.  There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring.  It takes only a few seconds to fail over, but some databases involve a bit more work, such as setting up replication.  Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.  After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases.  We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix.  Microsoft did fix it in both SQL Server 2005 service pack 3 cumulative update package 13 and service pack 4 cumulative update package 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix.  As the suspended state was causing us issues with replication, we dropped mirroring.  We then noticed we had 10MB of free disk space on the mount point where the principal’s data files are stored.  I knew something had gone amiss, as this system should have at least 150GB free on that mount point.  I immediately checked the main database’s data file and was shocked to see an autogrowth size of 65536%.  The data file autogrew right before mirroring went into the suspended state.  65536%!

    I didn’t have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today.  A quick Google search yielded no results, but emphasis on “quick”.  I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB.  So this autogrowth bug was encountered sometime in the last two weeks.  On February 26th, we had attempted to install SQL 2005 SP4 on production; however, it had failed (PSS case open with Microsoft).  I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights.  It seems several people have either heard of this bug or encountered it.  Aaron Bertrand (blog|twitter) referred me to this Connect item.  Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007.  Back on SQL Server 2000, we were using the default file growth setting, which was a percentage.  Sometime after the 2005 upgrade is when we changed it to 512MB.  Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004).  My affected system is SP3 CU8, so I was initially confused about why we had encountered the bug.  Because I don’t read things fully, I had missed that there are additional steps you have to follow after applying the bug fix.  Amit set me straight.  Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won’t have read this far down my blog post):

        This hotfix will prevent only future occurrences of this problem.  For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur.  However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:
        1. Apply this hotfix.
        2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
        3. Take the database offline, and then bring it back online.
        4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
    In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone:

        bleonard@solaris:~$ ppriv -l | wc -l
        82

    You can view the descriptions for each of those privileges as follows:

        bleonard@solaris:~$ ppriv -lv
        contract_event
                Allows a process to request critical events without limitation.
                Allows a process to request reliable delivery of all events on
                any event queue.
        contract_identity
                Allows a process to set the service FMRI value of a process
                contract template.
        ...

    Or for just one or more privileges:

        bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write
        file_dac_read
                Allows a process to read a file or directory whose permission
                bits or ACL do not allow the process read permission.
        file_dac_write
                Allows a process to write a file or directory whose permission
                bits or ACL do not allow the process write permission.
                In order to write files owned by uid 0 in the absence of an
                effective uid of 0 ALL privileges are required.

    However, in a non-global zone, only 43 of the 82 privileges are available by default:

        root@myzone:~# ppriv -l zone | wc -l
        43

    The missing privileges are: cpc_cpu, dtrace_kernel, dtrace_proc, dtrace_user, file_downgrade_sl, file_flag_set, file_upgrade_sl, graphics_access, graphics_map, net_mac_implicit, proc_clock_highres, proc_priocntl, proc_zone, sys_config, sys_devices, sys_ipc_config, sys_linkdir, sys_dl_config, sys_net_config, sys_res_bind, sys_res_config, sys_smb, sys_suser_compat, sys_time, sys_trans_label, virt_manage, win_colormap, win_config, win_dac_read, win_dac_write, win_devices, win_dga, win_downgrade_sl, win_fontpath, win_mac_read, win_mac_write, win_selection, win_upgrade_sl and xvm_control.

    However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace:

        root@myzone:~# dtrace -l
        ID   PROVIDER            MODULE                          FUNCTION NAME

    The DTrace privileges can be added, however, as follows:

        bleonard@solaris:~$ sudo zonecfg -z myzone
        Password:
        zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
        zonecfg:myzone> verify
        zonecfg:myzone> exit
        bleonard@solaris:~$ sudo zoneadm -z myzone reboot

    Now I can run DTrace from within the zone:

        root@myzone:~# dtrace -l | more
        ID   PROVIDER            MODULE                          FUNCTION NAME
         1     dtrace                                                     BEGIN
         2     dtrace                                                     END
         3     dtrace                                                     ERROR
        7115  syscall                                              nosys entry
        7116  syscall                                              nosys return
        ...

    Note, certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone:

        bleonard@solaris:~$ sudo zoneadm -z myzone reboot
        privilege "dtrace_kernel" is not permitted within the zone's privilege set
        zoneadm: zone myzone failed to verify

    Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.

    Read the article

  • OmniGraffle for iPad Now Supports VGA Output

    - by pat.shepherd
    I have (surprisingly) gotten a lot of comments on the last post about using OmniGraffle as an interactive EA tool.  The news flash/update is that it now supports VGA output.  I had sent a note to the developers and they responded that this was a highly sought-after feature ... well, they delivered. I have tried it informally and it works, though there is a little lag between the drawing on the screen and the output; it is not terrible. So buy yourself a VGA adapter and start trying it out in JAD (Joint Architecture Design) sessions. Here is a link to a couple of little OG tutorials: "What's OmniGraffle for iPad", you say? Let us show you! Use the link below to watch a guided tour of the powerful diagramming tool for the iPad. Videos - OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • SQL MDS - Updating the Name attribute of member using Staging Table

    - by Randy Aldrich Paulo
    Creating a member is usually done by populating the member staging table (tblStgMember); during this process you assign a value for the member code and the member name. If you later want to update the member's Name attribute, you can do so by adding a record to the attribute staging table (tblStgMemberAttribute) with AttributeName = 'Name'. If you instead try to populate the tblStgMember table again, it will report that the member code already exists.

        INSERT INTO mdm.tblStgMemberAttribute
            (ModelName, EntityName, MemberType_ID, MemberCode, AttributeName, AttributeValue)
        VALUES
            (N'Product', N'Product', 1, N'BK-M101', N'Name', N'Updated Member Name Description')

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments; that is, you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze code fragments from one statement to another, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html

    However, right-clicking a Java file and then going all the way down a longish list of menu items to find "Profiling", and then "Insert Profiling Point", is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and setting a profiling point in exactly the same way as a breakpoint. That's much easier and more intuitive, and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point.

    To achieve this, I added the following to the "layer.xml" file:

        <folder name="Editors">
            <folder name="AnnotationTypes">
                <file name="profiler.xml" url="profiler.xml"/>
                <folder name="ProfilerActions">
                    <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/>
                        <attr name="position" intvalue="300"/>
                    </file>
                </folder>
            </folder>
        </folder>

    Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is the content:

        <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'>
        <type name='editor-profiler'
              description_key='HINT_PROFILER'
              localizing_bundle='org.netbeans.eppi.Bundle'
              visible='true'
              type='line'
              actions='ProfilerActions'
              severity='ok'
              browseable='false'/>

    The only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too: there's no MIME-type-specific glyphgutter; instead, it is shared by all MIME types. It's a little confusing that the profiling point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files: the developers wanted to avoid confusion and make it available to Java files only. However, I think that, since I'm already aware that I can't set the Java debugger in an HTML file, I'm also aware that the Java profiler can't be set that way either. If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • New release of "OLAP PivotTable Extensions"

    - by Luca Zavarella
    For those who are not familiar with this add-in, OLAP PivotTable Extensions adds features to Excel 2007 or 2010 PivotTables pointing to an OLAP cube in Analysis Services. One of the features I like very much is seeing the MDX query associated with the pivot currently used in Excel. You can find all the details here: http://olappivottableextend.codeplex.com/

    A new version of the add-in (0.7.4) was recently released. It does not introduce any new features, but fixes a significant bug:

        Release 0.7.4 now properly handles languages but introduces no new features. International users who run a different Windows language than their Excel UI language may be receiving an error message when they double click a cell and perform drillthrough which reads: "XML for Analysis parser: The LocaleIdentifier property is not overwritable and cannot be assigned a new value". This error was caused by OLAP PivotTable Extensions in some situations, but release 0.7.4 fixes this problem.

    Enjoy!

    Read the article

  • WebLogic not reading boot.properties 11.1.1.x

    - by James Taylor
    In WebLogic 11.1.1.1, the boot.properties file was stored in the $MW_HOME/user_projects/domains/[domain] directory. It would be read at startup, so there was no requirement to enter the username and password. In later releases the location has changed to:

        $MW_HOME/user_projects/domains/[domain]/servers/[managed_server]/security

    In most instances you will need to create the security directory yourself. If you want to specify a custom directory, add the following to the startup scripts for the server:

        -Dweblogic.system.BootIdentityFile=[loc]/boot.properties

    Then create a boot.properties file using the following entries:

        username=<adminuser>
        password=<password>

    Read the article

  • Doing a P2V in OVM 3.0.3

    - by Steen Schmidt
    The other day I was talking to a customer about how you can do a P2V in OVM. I had already written about this topic earlier in my blog, and there is also some good documentation on how to do it. But what about seeing the whole process from start to end? So I have included a link to a demo on the topic. The demo is divided into three steps: Step 1. Target System, Step 2. Import into OVM, and Step 3. Use the new Template.

    Read the article
