Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • JCP EC Nomination Materials for 2012

    - by heathervc
    The nomination period for the 2012 Annual JCP EC Elections will begin at the end of September 2012. The JCP will accept self-nominations for 2 seats on what will become the merged JCP EC, starting 28 September, with the nomination period ending on Thursday, 11 October. JCP Members (JSPA 2 primary contacts) will receive messages with instructions for nominating and their login credentials via email. You will need this credential information to log in and complete the nomination. The JCP EC Special Election schedule is posted online in the JCP calendar; highlights are below:
    - Nominations for elected seats: 28 September - 11 October
    - Ballot (ratified and elected): 16-29 October
    - New members take office: 13 November
    The ballot with nominees for ratified and nominated seats begins on 16 October. The results will also be available on jcp.org on 30 October. If you are attending JavaOne 2012 in San Francisco, there are several events happening that you may be interested in attending, in particular the following BOF session:
    Meet the JCP Executive Committee Candidates
    Session ID: BOF6307
    Location: Hilton San Francisco - Golden Gate 3/4/5
    Date and Time: 10/2/12, 4:30 PM - 5:15 PM
    We will also be hosting a call for all of the candidates following the nomination period. The following information is required for self-nomination.
    1) Contact information/Biography
    Each EC seat is represented by two people - a primary and an alternate representative. Provide the following information for each representative:
    - Name
    - Title
    - Email Address
    - Mailing Address
    - Phone Number
    - Fax
    - A brief biography (3-5 sentences/~100-200 words) for the primary contact
    - Photograph (jpg format preferred, head shot only) for the primary contact
    Bios and photos for the current EC members are posted here:
    http://jcp.org/en/press/news/ec-feature_ME
    http://jcp.org/en/press/news/ec-feature_SE
    2) Qualification Statement
    A brief (2-3 paragraph) description of your qualifications for an EC seat; this is a Qualification Statement for the organization you represent. It should include the value and perspective you bring to the EC, your interests in the JCP program, and a summary of your current or planned participation in the JCP program (for your entire organization) - JSRs, participation on Expert Groups, meetings/events attended, etc. This statement will appear on the ballot and should convince community members to vote for you, so please include relevant information about your experience within the JCP program and your investments in Java technology. A few sample qualification statements are available here.
    3) Position Paper
    One of the pieces of information we make available to the JCP membership for voting purposes is a position paper. If you would like to provide this type of information for the ballot, please prepare it in PDF format for posting. This would be more detail on the areas you would focus on during your tenure on the JCP EC. You can read more about some of the topics under discussion in the EC here, including links to JCP.Next materials.
    If you have an interest in participating in the JCP EC, please start preparing these materials now. We look forward to a successful election process.

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube's data source view with the data types/sizes of the corresponding dimensional attributes. Why is this important? Well, when creating named queries in a cube's data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS. This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database. The problem with BIDS is that if you change the underlying SQL query, the size of the data type in the dimension does not update automatically. This then causes problems during deployment whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.
    In particular, if you use some of the string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10-character string you expect suddenly turns into an 8,000-character monster. For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters. So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters. Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field. Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing, as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.
    Introducing the CheckCubeDataTypes Utility
    The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube's data source view. It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute. Note that the app not only checks all the attribute keys but also the name and value columns for each attribute. Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control. This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development. You can download the C# source code from here: CheckCubeDataTypes.zip
    A typical example of the output Excel CSV file is shown below - note that the last column shows a data size mismatch by TRUE appearing in the column.
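    As a minimal illustration of the explicit sizing discussed above (a hypothetical named query, not taken from the post - the table and column names are made up), wrapping REPLACE in a CAST keeps the DSV column at a sensible width:

        -- Without the CAST, SQL Server's REPLACE would give ProductName
        -- a width of 8,000 characters in the data source view.
        SELECT ProductKey,
               CAST(REPLACE(ProductName, '&', 'and') AS VARCHAR(50)) AS ProductName
        FROM   dbo.DimProduct;

    The CheckCubeDataTypes utility would then flag any attribute whose dimensional size no longer matches the width produced by such a query.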

    Read the article

  • What's upcoming in the GlassFish Webinar Series

    - by pieter.humphrey
    2011 is kicking off with the return of the GF Webinar series as you've never seen it before. It's going to be packed with information about Java EE 6 and how simplicity, testability and convention-over-configuration are winning the hearts and minds of enterprise Java developers. Don't miss these industry-leading speakers and topics reviewing the cutting edge of Java EE 6 implementations, tools, and much more. Note: future dates are subject to change.
    - Jan 20th: GlassFish & NetBeans
    - Jan 27th: Building a Simple Web Application with Java EE
    - Feb 15th: Java EE Developer Tools 'shootout' with GlassFish
    - Feb 24th: What's New in GlassFish 3.1 (Clustering & HA, Admin Console, Coherence Web Integration, Security, Microkernel Architecture)
    - March 15th: GlassFish 3.1 - clustering deep dive
    - March 29th: GlassFish 3.1 - Admin Console & Productivity Features
    - April 5th: GlassFish 3.1 - Coherence Web Integration deep dive
    - Possible "Tech cast live" event in April (date TBC): Special Guest Adam Bien
    - April 19th: GlassFish 3.1 - Security deep dive with Byron Nevins & TBD
    - May 3rd: GlassFish 3.1 - Microkernel Architecture deep dive
    - Possible "Tech cast live" event, May 17th: "Upgrading to 3.1 from existing GlassFish installations"
    - May 31st: Embedded GlassFish
    Tags: glassfish, development, java, java ee, java ee6, OTN, NetBeans, JDeveloper, Enterprise Pack for Eclipse

    Read the article

  • Tab Sweep: Email, AntClassLoader, CouchBase Manager, Memory Usage, ...

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more:
    - Java, GlassFish v3, High CPU and Memory Usage, Locked Threads, Death (Gregor Bowie)
    - Why I will continue to use Spring *and* Java EE in new Enterprise Java Projects in 2012/2013 (Nikos Maravitsas)
    - The Most Frequently Asked Question About Java EE 6 & NetBeans (Geertjan)
    - AntClassLoader bug exposed by forgetful NetBeans (Vince)
    - Quick Fix for GlassFish/MySQL NoPasswordCredential Found (Mark Heckler)
    - Sending email via Glassfish v3 (Zbynek Šlajchrt)
    - COUCHBASE MANAGER FOR GLASSFISH: MORE TESTS (Ricky Poderi)

    Read the article

  • It is not quiet....

    - by Anthony Shorten
    You may have noticed that the blog entries have not been so frequent lately. I have been extremely busy putting the final touches on the release of the next generation of our products. It is an exciting time for me to see the release of new functionality through from start to finish for the first time since becoming a Product Manager (good to see one's designs actually implemented). Once the next generation of the products has been released there will be a flood of entries outlining all the new and exciting features. Watch the skies....

    Read the article

  • The Diabolical Developer: What You Need to Do to Become Awesome

    - by Tori Wieldt
    Wearing sunglasses and quite possibly hungover, Martijn Verburg's evil persona provided key tips on how to be a Diabolical Developer. His presentation at TheServerSide Java Symposium was heavy on the sarcasm and provided lots of laughter. Martijn insisted that developers take their power back and get rid of all the "modern fluff" that distracts developers. He provided several key tips to become a Diabolical Developer:
    - Learn only from yourself. Don't read blogs or books, and don't attend conferences. If you must go on forums, only do it to display your superiority, and answer as obscurely as possible.
    - Work alone. The best coding happens when you are alone in your room; lock yourself in for days. Make sure you have a gaming machine in with you.
    - Keep information to yourself. Knowledge is power. Think job security. Never provide documentation.
    - Make sure only you can read your code. Don't put comments in your code. Name your variables A, B, C... A1, B1, etc. If someone insists you format your code in a standard way, change a small section and revert it back as soon as they walk away from your screen.
    - Stick to what you know. Stay on Java 1.3. Don't bother learning abstractions. Write your application in a single file. Stuff as much code into one class as possible; a 30,000-line class is fine. It makes it easier for you to read and maintain.
    - Use Real Tools. No "fancy-pancy" IDEs. Real developers only use vi.
    - Ignore Fads. The cloud is massively overhyped. Mobile is a big fad for young kids. The big, clunky desktop computer (with a real keyboard) will return. Learn new stuff only to pad your resume. Ajax is great for that.
    - Skip Testing. Test-driven development is a complete waste of time. They sent men to the moon without unit tests. Just write your code properly in the first place and you don't need tests.
    - Compiled = Ship It. User acceptance testing is an absolute waste of time.
    - Use a Single Thread. Don't use multithreading. All you need to do is throw more hardware at the problem.
    - Don't waste time on SEO. If you've written the contract correctly, you are paid for writing code, not attracting users. You don't want a lot of users; they only report problems.
    - Avoid meetings. Fake being sick to avoid meetings. If you are forced into a meeting, play corporate bingo. Once you stand up and shout "bingo" you will be kicked out of the meeting. Job done.
    Follow these tips and you'll be well on your way to being a Diabolical Developer!

    Read the article

  • Chem eStandards 5.1 in Public Review

    - by michael.rowell
    The Open Applications Group has announced the opening of the 45-day public review period for Chem eStandards version 5.1. Interested parties have until 13 July to submit comments. There will be two webinar review sessions on 23 June and 24 June. The details of the webinars will be available soon. You can download the Chem eStandards review package. If you have any questions, contact Jim Wilson, the OAGi Chemical Council Architect.

    Read the article

  • SPARC Power Management Article at OTN

    - by nospam(at)example.com (Joerg Moellenkamp)
    My colleague Karoly Vegh pointed in a tweet to a really interesting article about the use of power management on SPARC T-series systems. The article explains how to use power management, how it works, what it is able to do, and how to use it in a dynamic fashion according to anticipated load patterns. You can find the article "How to Use the Power Management Controls on SPARC Servers", written by Bruce Evans, Julia Harper, and Terry Whatley, on OTN.

    Read the article

  • Free SANS Mobility Policy Survey Webcast - October 23rd @10:00 am PST

    - by Darin Pendergraft
    Join us for a free webcast tomorrow, October 23 @ 10:00 am PST, as SANS presents the findings from their mobility policy survey. This is a great opportunity to see where companies are with respect to mobile access policies and overall mobile application management.
    Part 1 is entitled "BYOD Wish Lists and Policies" - register here: https://www.sans.org/webcasts/byod-security-lists-policies-mobility-policy-management-survey-95429
    Part 2, "BYOD Security Practices", will run on October 25th - register here: https://www.sans.org/webcasts/byod-security-practices-2-mobility-policy-management-survey-95434

    Read the article

  • Facebook Stories for Retailers

    - by David Dorf
    Getting people to "like" a brand is important because it opens the door to a possible B2C relationship. Once a person likes a brand, the brand can post to their newsfeed with promotions, announcements, and surveys. At least for me, I "hide" the noisy brands and just monitor the ones that keep posts under 4 times a week. I see lots of people, especially with fashion brands, comment on postings, at which point the posting is seen by their network. A metric I've heard (but not verified) is that for every person that comments, ten of their friends see the original posting. That's a pretty cheap way to communicate to potential customers in a viral way.
    Over at mainstreet.com they compiled a list of the top liked retailers on Facebook as of Feb 1, 2011. They are listed below:
    - 19,414,892 Starbucks
    - 11,302,939 Victoria's Secret
    - 7,925,184 Zara
    - 7,032,398 McDonald's
    - 6,117,222 H&M
    - 5,400,586 Taco Bell
    - 4,665,760 Subway
    - 4,494,849 Lacoste
    - 4,185,570 Hollister
    - 3,973,181 Forever 21
    So I guess the public likes their fast food and fashion. To take this to the next level, Facebook is now displaying Sponsored Stories, which I saw for the first time on my page this weekend. I found a picture at the Wall Blog that depicts Sponsored Stories very well. Over on the right-hand column of a person's page, where they see advertisements and such, Facebook will post stories involving their network of friends and their interaction with sponsored brands. Now their "likes" can suddenly become your ads. "Jessica and Philip like Starbucks. What are you waiting for?" This is another great way to take messages viral by accessing social graphs. As usual there will be a certain level of outcry from privacy advocates, but given other, more iniquitous issues, I believe this will fall by the wayside. Retailers should consider using Sponsored Stories to increase their Likes, and thus increase their voice in the social world.

    Read the article

  • Java Spotlight Episode 111: Bruno Souza @brjavaman and Fabiane Nardon @fabianenardon on StoryTroop @storytroop

    - by Roger Brinkley
    Interview with Bruno Souza and Fabiane Nardon on StoryTroop. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.
    News
    - End of Public Updates for JDK 6
    - Bean Validation 1.1 public review approved
    - Two key JSRs accepted in time for Java EE 7
    - Public JCP EC meeting audio and materials posted
    - Devoxx UK and Devoxx France CFP open
    - JPA 2.1 Schema Generation
    - WebSocket, Java EE 7, and GlassFish
    Events
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, Sao Paulo, Brazil
    - Dec 14-15, IndicThreads, Pune, India
    - JCP Spec Lead Call in December on Developing a TCK
    - JCP EC Face to Face Meeting, January 15-16, West Coast USA
    Feature Interview
    Bruno Souza is a Java Developer and Open Source Evangelist at Summa Technologies, and a Cloud Expert at ToolsCloud. Nurturing developer communities is a personal passion, and Bruno has worked actively with the Java, NetBeans, OpenSolaris, OFBiz, and many other open source communities. As founder and coordinator of SouJava (The Java Users Society), one of the world's largest Java User Groups, Bruno led the expansion of the Java movement in Brazil. As founder of the Worldwide Java User Groups Community, Bruno helped the creation and organization of hundreds of JUGs worldwide. A Java developer since the early days, Bruno has participated in some of the largest Java projects in Brazil.
    Fabiane Nardon is a computer scientist who is passionate about creating software that will positively change the world we live in. She was the architect of the Brazilian Healthcare Information System, considered the largest Java EE application in the world and winner of the 2005 Duke's Choice Award. She has led several communities, including the JavaTools Community at java.net, where 800+ open source projects were born. She is a frequent speaker at conferences in Brazil and abroad, including JavaOne, OSCON, Jfokus, JustJava and more. She is also the author of several technical articles and a member of the program committee of several conferences such as JavaOne, OSCON, and TDC. She was chosen a Java Champion by Sun Microsystems in recognition of her contribution to the Java ecosystem. Currently, she works as a tools expert at ToolsCloud and in companies she co-founded, where she is helping to shape new disruptive Internet-based services.
    StoryTroop is a space where we combine multiple perspectives about a story. This creates an understanding of that story like never seen before. Pieces of a story are organized in time and space and anyone can add a different perspective.
    What's Cool
    - Geek Bike Ride at JavaOne LAD
    - Devoxx UK (Mar 26, 27) and FR (Mar 27 - 29) CFP
    - jFokus schedule is firming up
    - Nashorn Blog
    - 1,500 @JavaSpotlight Twitter followers

    Read the article

  • JDK 7u25: Solutions to Issues caused by changes to Runtime.exec

    - by Devika Gollapudi
    The following examples were prepared by Java engineering for the benefit of Java developers who may have faced issues with Runtime.exec on the Windows platform.
    Background
    In JDK 7u21, the decoding of command strings specified to the Runtime.exec(String), Runtime.exec(String,String[]) and Runtime.exec(String,String[],File) methods was made more strict. See the JDK 7u21 Release Notes for more information. This caused several issues for applications. The following section describes some of the problems faced by developers and their solutions.
    Note: In JDK 7u25, the system property jdk.lang.Process.allowAmbigousCommands can be used to relax the checking process and helps as a workaround for some applications that cannot be changed. The workaround is only effective for applications that are run without a SecurityManager. See the JDK 7u25 Release Notes for more information.
    Note: To understand the details of the Windows API CreateProcess call, see: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682425%28v=vs.85%29.aspx
    There are two forms of Runtime.exec calls:
    - with the command as a string: "Runtime.exec(String command[, ...])"
    - with the command as a string array: "Runtime.exec(String[] cmdarray [, ...] )"
    The issues described in this section relate to the first form of call. With the first call form, developers expect the command to be passed "as is" to Windows, where the command needs to be split into its executable name and arguments parts first. But, in accordance with the Java API, the command argument is split into executable name and arguments by spaces.
    Problem 1: "The file path for the command includes spaces"
    In the call:
        Runtime.getRuntime().exec("c:\\Program Files\\do.exe")
    the argument is split by spaces into an array of strings as:
        c:\\Program, Files\\do.exe
    The first element of the parsed array is interpreted as the executable name, verified by the SecurityManager (if present) and surrounded by quotations to avoid ambiguity in the executable path. This results in the wrong command:
        "c:\\Program" "Files\\do.exe"
    which will fail.
    Solution: Use the ProcessBuilder class, or the Runtime.exec(String[] cmdarray [, ...] ) call, or quote the executable path. Where it is not possible to change the application code and where a SecurityManager is not used, the Java property jdk.lang.Process.allowAmbigousCommands could be used by setting its value to "true" from the command line:
        -Djdk.lang.Process.allowAmbigousCommands=true
    This will relax the checking process to allow ambiguous input.
    Examples:
        new ProcessBuilder("c:\\Program Files\\do.exe").start()
        Runtime.getRuntime().exec(new String[]{"c:\\Program Files\\do.exe"})
        Runtime.getRuntime().exec("\"c:\\Program Files\\do.exe\"")
    Problem 2: "Shell command/.bat/.cmd IO redirection"
    The following implicit cmd.exe calls:
        Runtime.getRuntime().exec("dir > temp.txt")
        new ProcessBuilder("foo.bat", ">", "temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"foo.cmd", ">", "temp.txt"})
    lead to the wrong command:
        "XXXX" ">" temp.txt
    Solution: To specify the command correctly, use the following options:
        Runtime.getRuntime().exec("cmd /C \"dir > temp.txt\"")
        new ProcessBuilder("cmd", "/C", "foo.bat > temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "foo.cmd > temp.txt"})
    or
        Process p = new ProcessBuilder("cmd", "/C", "XXX").redirectOutput(new File("temp.txt")).start();
    Problem 3: "Group execution of shell command and/or .bat/.cmd files"
    Due to the enforced verification procedure, arguments in the following calls create the wrong commands:
        Runtime.getRuntime().exec("first.bat && second.bat")
        new ProcessBuilder("dir", "&&", "second.bat").start()
        Runtime.getRuntime().exec(new String[]{"dir", "|", "more"})
    Solution: To specify the command correctly, use the following options:
        Runtime.exec("cmd /C \"first.bat && second.bat\"")
        new ProcessBuilder("cmd", "/C", "dir && second.bat").start()
        Runtime.exec(new String[]{"cmd", "/C", "dir | more"})
    The same scenario also works for the "&", "||", "^" operators of the cmd.exe shell.
    Problem 4: ".bat/.cmd with special DOS chars in quoted params"
    Due to the enforced verification, arguments in the following calls will cause exceptions to be thrown:
        Runtime.getRuntime().exec("log.bat \"error
        new ProcessBuilder("log.bat", "error
        Runtime.getRuntime().exec(new String[]{"log.bat", "error
    Solution: To specify the command correctly, use the following options:
        Runtime.getRuntime().exec("cmd /C log.bat \"error
        new ProcessBuilder("cmd", "/C", "log.bat", "error
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "log.bat", "error
    Examples: Complicated redirection for shell construction:
        cmd /c dir /b C:\ > "my lovely spaces.txt"
    becomes
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir /b C:\\ > \"my lovely spaces.txt\"" });
    The Golden Rule: In most cases, cmd.exe has two arguments: "/C" and the command for interpretation.
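    For a quick, self-contained illustration of the recommended pattern (the paths, file names, and the "-verbose" argument here are hypothetical, not taken from the release notes), pass the executable and each argument as separate ProcessBuilder elements and let the API handle output redirection instead of relying on the shell:

        import java.io.File;
        import java.io.IOException;

        public class ExecDemo {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Executable path containing spaces: passed as its own element,
                // so no manual quoting is required (path is hypothetical).
                Process p1 = new ProcessBuilder("c:\\Program Files\\do.exe", "-verbose").start();
                p1.waitFor();

                // Shell built-in with output redirection: run via cmd.exe /C and
                // let ProcessBuilder redirect stdout to a file instead of using ">".
                Process p2 = new ProcessBuilder("cmd", "/C", "dir")
                        .redirectOutput(new File("temp.txt"))
                        .start();
                p2.waitFor();
            }
        }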

    Read the article

  • Upgrade Day Presentation

    - by Lajos Sárecz
    The Upgrade 11g seminar is still in progress, but the presentation is already available and can be downloaded from here after entering the password upgrade112. The presentation is an extended version, since it was designed for a two-day workshop. There is a possibility of organizing a 2-day workshop in the future, with not only slides but hands-on exercises as well. If you would like to take part in one in Budapest, please let us know. I think those who attended could see for themselves that Mike has serious upgrade experience and gave the participants plenty of good ideas. If you feel encouraged, go ahead with the upgrade, and if you are then willing to report on your experience, we will be glad to see your session proposal at next year's HOUG conference!

    Read the article

  • Filtering option list values based on security in UCM

    - by kyle.hatlestad
    Fellow UCM blog writer John Sim recently posted a comment asking about filtering values based on the user's security. I had never dug into that detail before, but thought I would take a look. It ended up being trickier than I originally thought and required a bit of insider knowledge, so I thought I would share.
    The first step is to create the option list table in Configuration Manager. You want to define the column for the option list value and any other columns desired. You then want to have a column which will store the security attribute to apply to the option list value. In this example, we'll name the column 'dGroupName'.
    The next step is to create a View based on the new table. For the Internal and Visible column, you can select the option list column name. Then click on the Security tab, uncheck the 'Publish view data' checkbox and select the 'Use standard document security' radio button. Click on the 'Edit Values...' button and add the values for the option list. In the dGroupName field, enter the Security Group (or Account if you use Accounts for security) to apply to that value. Create the custom metadata field and apply the View just created.
    The next step requires file system access to the server. Open the file [ucm directory]\data\schema\views\[view name].hda in a text editor. Below the line '@Properties LocalData', add the line:
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup
    The 'dGroupName' value designates the column in the table which stores the security value. 'dSecurityGroup' indicates the type of security to check against. It would be 'dDocAccount' if using Accounts. Save the file and restart UCM.
    Now when a user goes to the check-in page, they will only see the options for which they have read and write privileges to the associated Security Group. And on the Search page, they will see the options for which they have just read access. One thing to note: if a value that a user normally can't view on Check-in or Search is applied to a document, but the document is viewable by the user, the user will be able to see the value on the Content Information screen.
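    For illustration, the top of the edited view file would then look roughly like this (a sketch only - the placeholder entries and the section terminator follow the usual HDA layout and are assumed, not quoted from the post):

        @Properties LocalData
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup
        ...existing LocalData properties...
        @end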

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:
        root@sun:~# ldm ls-io
        NAME                         TYPE   BUS    DOMAIN   STATUS
        ----                         ----   ---    ------   ------
        pci_0                        BUS    pci_0  primary
        pci_1                        BUS    pci_1  primary
        pci_2                        BUS    pci_2  primary
        pci_3                        BUS    pci_3  primary
        /SYS/MB/PCIE1                PCIE   pci_0  primary  EMP
        /SYS/MB/SASHBA0              PCIE   pci_0  primary  OCC
        /SYS/MB/NET0                 PCIE   pci_0  primary  OCC
        /SYS/MB/PCIE5                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE6                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE7                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE2                PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE3                PCIE   pci_2  primary  OCC
        /SYS/MB/PCIE4                PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE8                PCIE   pci_3  primary  EMP
        /SYS/MB/SASHBA1              PCIE   pci_3  primary  OCC
        /SYS/MB/NET2                 PCIE   pci_3  primary  OCC
        /SYS/MB/NET0/IOVNET.PF0      PF     pci_0  primary
        /SYS/MB/NET0/IOVNET.PF1      PF     pci_0  primary
        /SYS/MB/NET2/IOVNET.PF0      PF     pci_3  primary
        /SYS/MB/NET2/IOVNET.PF1      PF     pci_3  primary
    All of the "PCIE" type devices are available for SDIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature; the documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide.
    There are several important things to note and consider here:
    - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    - Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
    - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in the chapter "Configuring Domain Dependencies".
    Please consult the Admin Guide in the section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!
    As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices. Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.
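    As a rough sketch of the reassignment itself (the command names are assumed from the Oracle VM Server for SPARC documentation rather than quoted from this post, and the slot and domain names simply reuse the example above - verify the exact procedure and any required reboots in the Admin Guide):

        # stop and unbind the guest that will receive the slot (names are from the example above)
        ldm stop-domain mars
        ldm unbind-domain mars
        # remove the PCIe slot from the root domain; as noted above this is not
        # dynamic and involves rebooting the affected domains
        ldm remove-io /SYS/MB/PCIE2 primary
        # after the root domain is back up, hand the slot to the guest
        ldm add-io /SYS/MB/PCIE2 mars
        ldm bind-domain mars
        ldm start-domain mars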

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview
    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model. The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications. Why? It's all about time and money. If you are unable to manage your complex applications in an efficient manner, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are.
    This article is intended to discuss a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.
    Application Design
    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries. To get started, let's assume that we are talking about an application like that. One that has these properties:
    - Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely-coupled - single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality - relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)
    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work...
    Code / configuration - assemble into modules
    When it comes to defining your modules for a complex application, there are a number of goals:
    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality 'further down in the stack of modules'
    - Provide a good balance between shared resources and autonomy for countries / sites
    Now I'll describe a high level approach to how you could accomplish those goals... Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense in order to get as much value from ATG as possible - and less stuff you'll have to write yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:
    - Sits directly on top of the ATG modules
    - Used by all applications across all countries and sites - this is the foundation for everyone
    - Contains everything that is common across all countries / all sites
    - Once established and settled, will change less frequently than other 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Will provide a means of code sharing, therefore less development / testing - faster time to market
    - Contains a 'reference' web application (described below)
    The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense). We'll define those modules as follows:
    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones
    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site as follows:
    - Sits on top of the module for the country it resides in
    - Contains what is unique for a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications
    A graphic in the original article shows how these modules fit together; a manifest sketch illustrating the layering appears after this article.
    Web applications
    As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules; however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:
    - Contains minimal / sample reference styling, as this will mostly be addressed at the site-level web app
    - Focuses on functionality - ensures that core functionality is revealed via this web application
    - Each individual site can use this as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc.)
    There are some techniques to share web application assets - i.e. multiple web applications defined in the web.xml - and it's worth investigating, but that is out of scope here.
    Reference infrastructure
    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) could have their own solution for infrastructure. In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in terms of infrastructure will save time and money as new countries / sites come online. Here are some properties of the reference infrastructure:
    - Standardized approach to setup of hardware: type and number of servers
    - Defines application server, operating system, database, etc. - including vendor and specific versions
    - Consistent naming conventions: provides a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers: Production, Staging, BCC / Preview
    - Each site can change as required to meet scale requirements
    Governance / organization
    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization. Here are some ideas and goals to work towards:
    - Establish a committee to make enterprise-wide decisions that affect all sites. Representation should be evenly distributed, there should be a clear communication procedure, and the focus should be on high-level business goals.
    - Evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap. Determine when to upgrade and ensure value will be realized.
    - Determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and what procedure controls what goes in the corporate foundation module.
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc. Only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions.
    - Create an innovation team to quickly develop new features and perform proofs of concept. All teams can benefit from their findings.
    Summary
    At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage... You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time, reducing time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and optimizing operations. With an efficient design, you can improve your time to market and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity) and this will be rewarded - you're now a leader.
    In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation as this will benefit everyone. This culture will inspire innovation, which can only lend itself to your competitive advantage.
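    To make the module layering concrete, here is a rough sketch of what the manifest of a site-level module might declare. The module names are hypothetical and the attributes follow the standard ATG module manifest convention; this is an illustration, not something taken from the article:

        Manifest-Version: 1.0
        ATG-Required: MyCompany.Germany
        ATG-Config-Path: config/
        ATG-Class-Path: lib/classes.jar

    In this sketch, the hypothetical site module depends on the country module MyCompany.Germany, whose own manifest would in turn declare ATG-Required: MyCompany.Foundation - which is how the "site on top of country on top of corporate foundation" stacking described above is expressed at build and startup time.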

    Read the article

  • Adopt-a-JSR for Java EE 7 - Getting Started

    - by arungupta
    Adopt-a-JSR is an initiative started by JUG leaders to encourage JUG members to get involved in a JSR, in order to increase grassroots participation. This allows JUG members to provide early feedback to specifications before they are finalized in the JCP. The standards in turn become more complete and developer-friendly after getting feedback from a wide variety of audiences. adoptajsr.org provides more details about the logistics and the benefits for you and your JUG. A similar activity was conducted for OpenJDK as well. Markus Eisele also provides a great introduction to the program (in German).
    Java EE 7 (JSR 342) is scheduled to go final in Q2 2013. There are several new JSRs that are getting included in the platform (e.g. WebSocket, JSON, and Batch), a few existing ones are getting an overhaul (e.g. JAX-RS 2 and JMS 2), and several others are getting minor updates (e.g. JPA 2.1 and Servlets 3.1). Each Java EE 7 JSR can leverage your expertise and would love your JUG to adopt a JSR.
    What does it mean to adopt a JSR?
    - Your JUG is going to identify a particular JSR, or multiple JSRs, that is of interest to the JUG members. This is mostly done by polling/discussing on your local JUG members list.
    - Your JUG will download and review the specification(s) and javadocs for clarity and completeness. The complete set of Java EE 7 specifications, their download links, and EG archives are listed here. glassfish.org/adoptajsr provides specific areas where different specification leads are looking for feedback.
    - Your JUG can then think of a sample application that can be built using the chosen specification(s). An existing use case (from work or a personal hobby project) may be chosen to be implemented instead. This is where your creativity and uniqueness come into play. Most of the implementations are already integrated in GlassFish 4 and others will be integrated soon.
    - You can also explore integration of multiple technologies and provide feedback on the simplicity and ease-of-use of the programming model. Especially look for integration with existing Java EE technologies and see if you find any discrepancies. Report any missing features that may be included in a future release of the specification.
    - The most important part is to provide feedback by filing bugs on the corresponding spec or RI project. Anything that is not clear, either in the spec or the implementation, should be filed as a bug. This is what will ensure that specification and implementation leads are getting the required feedback and improving the quality of the final deliverable of the JSR.
    How do I get started? A simple way to get started is to follow S.M.A.R.T., as explained below.
    Specific
    - Identify who will be involved and what you would like to accomplish. For example, even though building a sample app will provide real-world validation of the API, because of time constraints you may identify that only reviewing the specification and javadocs can be accomplished.
    - Establish a time frame by which the activities need to be complete.
    Measurable
    - Define metrics for success. For example, this could be the number of bugs filed. Remember, quality of bugs is more important than quantity of bugs.
    - Define your end goal, for example, reviewing 4 chapters of the specification or completing the sample application.
    - Create a dashboard that will highlight your JUG's contribution to this effort.
    Attainable
    - Make sure JUG members understand the time commitment required for providing feedback. This can vary based upon the level of involvement (any is good!) and the number of specifications picked.
    - adoptajsr.org defines different categories of involvement. Once again, any level of involvement is good. Just reviewing a chapter, a section, or the javadocs for your use case is helpful.
    Relevant
    - Pick JSRs that JUG members are willing and able to work on. If the JUG members are not interested then they might lose motivation half-way through. The "able" part is tricky as you can always stretch yourself and learn a new skill ;-)
    Time-bound
    - Define a time table of activities with clearly defined tasks. A tentative time table may look like:
        Dec 25: Discuss and agree upon the specifications with the JUG
        Jan 1: Start Adopt-a-JSR for Java EE 7
        Jan 15: Initial spec reading complete. Keep thinking through the application that will be implemented.
        Jan 22: Early design of the sample application is ready
        Jan 29: JUG members agree upon the application
        Next 4 weeks: Implement the application
    Of course, you'll need to alter this based upon your commitment. Maintaining an activity dashboard will help you monitor and track the progress. Make sure to keep filing bugs throughout the process!
    12 JUGs from around the world (SouJava, Campinas JUG, Chennai JUG, London Java Community, BeJUG, Morocco JUG, Peru JUG, Indonesia JUG, Congo JUG, Silicon Valley JUG, Madrid JUG, and Houston JUG) have already adopted one of the Java EE 7 JSRs. I'm already helping some JUGs bootstrap and would love to help your JUG too. What are you waiting for?

    Read the article

  • Upcoming Upgrade Workshops in the US

    - by Mike Dietrich
    As Roy is really busy traveling the whole North American continent, I would like to highlight a few of Roy's upcoming workshops with registration links - so simply "click" and register :-)
    - March 23, 2011: Philadelphia, PA
    - March 24, 2011: Reston, VA
    - April 07, 2011: Dallas, TX
    - April 13, 2011: Birmingham, AL
    - April 14, 2011: Minneapolis, MN
    Roy is looking forward to meeting you at one of the above or at the upcoming events in California and Oregon.
    Mike

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0, which is a different version to what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases.
    Solaris 10 was the first release where we included the OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10 the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10.
    Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in the older taxonomy):
        Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer.
        The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers.
        It is also common to apply the Volatile classification level to interfaces in the process of being defined by a trusted or widely accepted organization. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development.
        Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release.
    Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans).
    Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10, there are allowed exceptions; again quoting from attributes(5):
        4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface.
        5. Not making the incompatible change would be incomprehensible to our customers.
    In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply.
    Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007.
    Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, which means OpenSSL is now in /usr/lib; second, we upgraded it to the then current version stream of OpenSSL (0.9.8), as was expected by our customers.
    In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011, the OpenSSL community released version 1.0.0. This was a huge milestone for a long-standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release; it is the latest, best supported and best performing version. In fact Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11.
    When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade, or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that at some point in a future release we will be able to give it a higher interface stability level.
    Happy crypting! And thank you, OpenSSL community, for all the work you have done that helps Solaris.

    Read the article
