Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.

  • iiR Hospital Digital 2011: Tras la historia digital ¿qué?

    - by Eloy M. Rodríguez
    Since access to the documentation is restricted, I will only comment briefly on some of the topics and approaches that caught my attention at the VI Foro Hospital Digital 2011, organized by iiR. I begin by highlighting the good moderation by Maribel Grau of the Hospital Clínic de Barcelona, which was restrained, effective and encouraged the debate. I was impressed by the Hospital Líquido project of the Hospital San Joan de Deu in Barcelona, because of its corporate commitment to collaborative medicine involving patients and professionals, with advanced eHealth and Health 2.0 initiatives backed by solid legal, technological and professional support and by well-defined processes. It is a corporate undertaking and not a pilot, as Jorge Juan Fernández explained and Júlia Cutillas later detailed; her role, by the way, is Community Manager. The question of return on investment came up in the debate, and it remains an immature topic because adequate metrics are hard to find, but they do not doubt the project's continuity, since it is part of a corporate strategy in which there are always elements that belong to general costs and are considered necessary to deliver the desired level of service. Cecilia Pérez, from her position as Head of EHR Implementation at the Hospital de Móstoles, emphasized the importance of effective change management when an electronic health record system is rolled out: users initially deny the change, then resist it, and only later start to explore its possibilities and commit to it. Santiago Borrás, Head of Systems at the Hospital del Henares, started from a digital hospital, but that is only the beginning; after three years, the professionals' frustration is how not to get lost in too much information. The necessary stage after digitization is the generation and sharing of knowledge. Cristina Ibarrola, Director of Primary Care at SNS-O, described the experience with primary-to-specialist e-consultations, which reduce the care load in primary care by increasing its resolution capacity. Specific time slots are reserved in the schedules of professionals on both sides to guarantee a response within 48 hours at most. This has led to more flexible schedules for primary care physicians, who now have 25% more time for face-to-face visits. The option taken here seems to be more time per patient rather than more patients, I suppose partly because, as I understand it, the care pressure in Navarra is not as heavy as in other regions. Alejandra Cubero described the experience with patient identification and interoperability at Hospitales de Madrid. Ana Rosa Pulido presented the achievements of the SES and its current Non-Radiological Medical Imaging project. Richard Bernat explained the EHR experience of Salud de la Mujer Dexeus, noting that although there are no return-on-investment metrics, the different management teams do perceive its value. Arturo Quesada reviewed the Jimena experience at the Hospital de Ávila, Joan Chafer walked through the arduous process of introducing successive digital solutions at the Hospital Clínico San Carlos in Madrid, starting with "Hogar Digital", all of it with external funding or the hospital's own resources, and Pedro A. Bonal closed the round of non-commercial talks by presenting the value of eDocs within the Complejo Hospitalario de Toledo (complejo in both of its senses, a group of buildings and something complicated) as a transition to the fully digital EHR.

    Read the article

  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users in order to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can also be used with a custom authentication provider, and if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you to check whether a user is locked and to find out the number of locked users in the realm.
    #Define constants
    url='t3://localhost:7001'
    username='weblogic'
    password='weblogic'
    checkuser='test-deployer'
    #Connect
    connect(username,password,url)
    #Get Lockout Manager Runtime
    serverRuntime()
    dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
    ulmr = dr.getUserLockoutManagerRuntime()
    print '-------------------------------------------'
    #Check whether a user is locked
    if (ulmr.isLockedOut(checkuser) == 0):
        islocked = 'NOT locked'
    else:
        islocked = 'locked'
    print 'User ' + checkuser + ' is ' + islocked
    #Print number of locked users
    print 'No. of locked user - ', Integer(ulmr.getUserLockoutTotalCount())
    print '-------------------------------------------'
    print ''
    #Disconnect & Exit
    disconnect()
    exit()

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called Can Google break the computer language barrier? In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. He says: "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models." The Translation Guy has a further discussion on this, called Google Translate is Finished. He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks." Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or its detractors either for that matter. Perhaps, it's the season of goodwill? What is clear to me, however, of course is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they have already and that will also scale effectively where others didn't. So too, will they harness human input, which really is the way to go to train MT in the quality direction. But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly translated corpus drove the gains for Google Translate and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely. In the enterprise applications space, where we have some control over content this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API

    - by Roger Brinkley
    Tweet Recorded live at Jfokus 2012, an interview with Greg Luck on JSR 107 Java Temporary Caching API. Joining us this week on the Java All Star Developer Panel is Alexis Moussine-Pouchkine, Java EE Developer Advocate. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News JavaOne 2012 call for papers is open (closes April 9th) LightFish, Adam Bien's lightweight telemetry application Java EE 6 sample code JavaFX 1.2 and 1.3 EOL Repeating Annotations in the Works Events March 26-29, EclipseCon, Reston, USA March 27, Virtual Developer Days - Java (Asia Pacific (English)),9:30 am to 2:00pm IST / 12:00pm to 4.30pm SGT  / 3.00pm - 7.30pm AEDT April 4-5, JavaOne Japan, Tokyo, Japan April 12, GreenJUG, Greenville, SC April 17-18, JavaOne Russia, Moscow Russia April 18–20, Devoxx France, Paris, France April 26, Mix-IT, Lyon, France, May 3-4, JavaOne India, Hyderabad, India Feature Interview Greg Luck founded Ehcache in 2003. He regularly speaks at conferences, writes and codes. He has also founded and maintains the JPam and Spnego open source projects, which are security focused. Prior to joining Terracotta in 2009, Greg was Chief Architect at Wotif.com where he provided technical leadership as the company went from a single product startup to a billion dollar public company with multiple product lines. Before that Greg was a consultant for ThoughtWorks with engagements in the US and Australia in the travel, health care, geospatial, banking and insurance industries. Before doing programming, Greg managed IT. He was CIO at Virgin Blue, Tempo Services, Stamford Hotels and Resorts and Australian Resorts. He is a Chartered Accountant, and spent 7 years with KPMG in small business and insolvency. Mail Bag What’s Cool RT @harkje: To update an earlier tweet: #JavaFX feels like Swing with added convenience methods, better looking widgets, nice effects and...

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper you should know these tips:
    1. Firebug is your friend
    2. Uncompress the css style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file
    1. Firebug is your friend
    Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML dom nodes on your page and see the css styles applied to each dom node.
    2. Uncompress the css style classes
    By default the style classes that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin css file you have style classes like af|inputText::content or af|panelBox::header. It is easier to develop and debug a skin with Firebug if you see the uncompressed style classes. To do this:
    a. open web.xml
    b. add
    <context-param>
      <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
      <param-value>true</param-value>
    </context-param>
    c. save
    d. restart the server and re-run your page.
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    For performance's sake the ADF Faces framework does not check whether your skin's .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to be noticed right away, without restarting the server. To do this:
    a. open web.xml
    b. add
    <context-param>
      <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
      <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
      <param-value>true</param-value>
    </context-param>
    c. save
    d. restart the server and re-run your page.
    e. from then on, you can change your skin's .css file, save it, refresh your page and you should see the changes right away.
    4. View the generated CSS file
    There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem and linked to your generated html page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: Right click on your page in the browser

    Read the article

  • Introducción a ENUM (E.164 Number Mapping)

    - by raul.goycoolea
    E.164 Number Mapping (ENUM or Enum) was designed to solve the question of how Internet services can be reached through a telephone number, that is, how telephones, which only have 12 keys, can be used to access Internet services. The most basic aspect of ENUM is therefore the convergence of the public switched telephone network (PSTN) and IP networks: ENUM makes it possible to map a telephone number to an Internet identifier. In short, Enum is a set of protocols for converting E.164 numbers into URIs, and vice versa, so that the E.164 numbering system can be mapped to URI addresses on the Internet. This function is necessary because a telephone number has no meaning in the IP world, just as an IP address has no meaning in telephone networks. Using this technique, communications whose destination is dialed as an E.164 number can terminate at the correct identifier (an E.164 number if they terminate on the PSTN, or a URI if they terminate on IP networks). The technical solution of looking up the destination identifier in a database has very interesting consequences, such as allowing the call to terminate wherever the called subscriber wishes. This is one of the features ENUM offers: the specific destination, the terminating terminal or terminals, is decided not by whoever initiates the call or sends the message but by the person who is called or receives the message, who has recorded his or her preferences in a database. In other words, the recipient of the call decides how he or she wants to be contacted, whether what is being sent is an email, an SMS, a fax or a voice call. When someone wants to call you, all the caller has to do is select your name (the called party's) in the terminal's address book or dial your ENUM number. An application then obtains from a database the contact and availability details you chose, and the message is delivered just as you specified in that database. This is something new: it lets you, as the called party, define your termination preferences for any type of content. For example, you may want all emails to be delivered to you as SMS messages, or voice messages to be forwarded to you as emails; communications no longer depend on where you are or on what kind of terminal you use (telephone, PDA, Internet). In addition, with ENUM you can manage the portability of your fixed and mobile numbers. ENUM uses an indirect lookup in a database holding NAPTR records ("Naming Authority Pointer Resource Records", as defined in RFC 2915), with the Enum telephone number as the lookup key, to obtain the URIs that correspond to each telephone number. The database that stores these records is a DNS database. Although one of its various uses is to facilitate calls between VoIP users across traditional PSTN networks and IP networks, note that ENUM is not a VoIP function but a conversion mechanism between numbers and identifiers, so it should not be confused with the normal routing of VoIP calls via the SIP and H.323 protocols. ENUM can be very useful for organizations that want a standardized way for their applications to access each user's communication data.
    Fundamentals
    For the convergence between the PSTN and Internet telephony or Voice over IP (VoIP), and for the development of new multimedia services, to face fewer obstacles, it is essential that users can place calls just as they are used to, by dialing numbers. That requires a universal system mapping numbers to IP addresses (and vice versa) and the ability to interconnect the different networks. There are several approaches that let one telephone number establish communication with multiple services. One of them is the Electronic Number Mapping System, ENUM, standardized by the Internet Engineering Task Force (IETF) and the subject of this article, which uses E.164 numbering, telephony protocols and telephony infrastructure to access different services indirectly. A service is thus reached through a universal numeric identifier: a traditional telephone number. ENUM connects the addresses of the IP world with those of the telephone world, and vice versa, without friction. Before going into more depth, a brief sketch of how the mapping between numbers and URIs is organized is useful. Imagine a call that originates on the traditional telephone service and is destined for an Enum number. In Public ENUM, the Enum subscriber or user being called will have chosen to include in the Enum database one or more URIs or E.164 numbers, forming a list of his or her preferences for terminating the call. The system, as explained further below, chooses the appropriate number or URI for that termination. As a result of the query to the Enum database there is therefore always a one-to-one relationship between the dialed Enum number and the terminating identifier, in accordance with the wishes of the called party.
    Varieties of ENUM
    A possible source of confusion when discussing ENUM is the variety of solutions or systems that carry this label. A reference to ENUM usually means one of the following cases:
    Public ENUM: The original vision of ENUM as a public, directory-like database in which the subscriber opts in to be included. It is managed under the e164.arpa domain, with the management of the database and the numbering delegated to each country. It is also known as User ENUM.
    Carrier ENUM, also called Infrastructure ENUM or Operator ENUM: Groups of operators providing electronic communications services agree to share subscriber information through ENUM under private agreements. In this case the operators, rather than the subscribers themselves, control the subscriber information. Carrier ENUM is being standardized by the IETF for VoIP interconnection (through peering agreements). As explained in the corresponding section, it can also be used for number portability.
    Private ENUM: A telephony or VoIP operator, an ISP, or a large user can use ENUM techniques inside its own networks and those of its customers without public DNS, using private or internal DNS servers.
    It is easy to imagine how this technique can be used so that multinational companies, banks or travel agencies have very coherent and effective numbering plans.
    How ENUM works
    To learn how Enum works, see the page on Public ENUM, since that variety of Enum is the typical one, the one that gave rise to all the IETF procedures and standards. More details on:
    Public ENUM. This page explains in some detail how Enum works
    Carrier ENUM or ENUM of Operator
    Private ENUM
    Technical standards:
    RFC 2915: NAPTR RR. The Naming Authority Pointer (NAPTR) DNS Resource Record
    RFC 3761: ENUM Protocol. The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM). (obsoletes RFC 2916)
    RFC 3762: Usage of H323 addresses in ENUM Protocol
    RFC 3764: Usage of SIP addresses in ENUM Protocol
    RFC 3824: Using E.164 numbers with SIP
    RFC 4769: IANA Registration for an Enumservice Containing Public Switched Telephone Network (PSTN) Signaling Information
    RFC 3026: Berlin Liaison Statement
    RFC 3953: Telephone Number Mapping (ENUM) Service Registration for Presence Services
    RFC 2870: Root Name Server Operational Requirements
    RFC 3482: Number Portability in the Global Switched Telephone Network (GSTN): An Overview
    RFC 2168: Resolution of Uniform Resource Identifiers using the Domain Name System
    Organizations related to ENUM:
    RIPE - Administrator of ENUM Tier 0, e164.arpa
    ITU-T TSB - International Telecommunication Union
    ETSI - European Telecommunications Standards Institute
    VisionNG - Administrator of the ENUM 878-10 range
    IETF ENUM Chapter
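    Since the article leans on the RFC 3761 lookup rule (keep the digits of the E.164 number, reverse them, separate them with dots and append e164.arpa), here is a minimal standalone C++ sketch of that key derivation. It is an illustration only, not taken from any ENUM library, and the sample number is made up.

    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    // Build the ENUM lookup domain for an E.164 number as described in RFC 3761:
    // keep only the digits, reverse them, separate them with dots and append the
    // Public ENUM apex "e164.arpa". A resolver would then query this name for
    // NAPTR records to find the URIs the called party registered.
    std::string enumDomain(const std::string& e164) {
        std::string digits;
        for (char c : e164) {
            if (std::isdigit(static_cast<unsigned char>(c)))
                digits.push_back(c);
        }
        std::reverse(digits.begin(), digits.end());

        std::string domain;
        for (char d : digits) {
            domain.push_back(d);
            domain.push_back('.');
        }
        domain += "e164.arpa";
        return domain;
    }

    int main() {
        // Example (made-up number): "+34 98 765 4321" becomes
        // "1.2.3.4.5.6.7.8.9.4.3.e164.arpa".
        std::cout << enumDomain("+34 98 765 4321") << std::endl;
        return 0;
    }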

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++ you undoubtedly have used the Standard Template Library (STL). Designed for in-memory management of data and collections of data, it is a core part of most C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-compatible API for Berkeley DB: you can use Berkeley DB through it as if you were using C++ STL classes, and still make full use of Berkeley DB features. Being an STL library backed by a database, dbstl can provide some important and useful features that the plain C++ STL library can't. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage. When data exceeds available physical memory. Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers. When you need concurrent access to C++ STL containers. Few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome. When your objects are your database. You want object persistence in your application, storing objects in a database and using them across different runs of your application without having to translate them to/from SQL. dbstl is capable of storing complicated objects, even those not located in a contiguous chunk of memory, directly to disk without any unnecessary overhead. These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.
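    As a taste of the "write STL, get Berkeley DB underneath" idea described above, here is a minimal sketch modeled on the introductory db_vector example in the dbstl documentation. Treat the template parameters (ElementHolder for primitive element types) and the dbstl_exit() cleanup call as assumptions to verify against your Berkeley DB release; a real application would also open an explicit DbEnv and Db to get named, on-disk, transactional storage.

    // Sketch only: container usage looks like ordinary STL, storage is Berkeley DB.
    #include <iostream>
    #include "dbstl_vector.h"

    int main()
    {
        // With the default setup dbstl backs the container with an internally
        // created Berkeley DB database rather than plain process memory.
        dbstl::db_vector<int, dbstl::ElementHolder<int> > vctr(10);

        for (int i = 0; i < 10; i++)
            vctr[i] = i * i;            // writes go through Berkeley DB

        for (int i = 0; i < 10; i++) {
            int v = vctr[i];            // reads come back through the container proxy
            std::cout << v << " ";
        }
        std::cout << std::endl;

        dbstl::dbstl_exit();            // release dbstl's internal resources
        return 0;
    }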

    Read the article

  • Java EE talks at JAX Conf

    - by arungupta
    JAX Conf is starting in San Jose today and there are several talks on Java EE there.
    Java EE Wednesday and Thursday:
    Java Persistence API 2.0 with Eclipse Link
    RESTful Services with Java EE
    Case Study: Functional programming in Scala with CDI
    GlassFish 3.1: Deploying your Java EE 6 Applications
    The future of Java Enterprise Testing
    Forge new ground in Rapid Enterprise Development
    The Java EE 7 Platform: Developing for the Cloud (Keynote)
    Exploring Java EE 6 for the Enterprise Developer
    JBoss Day
    JSF Summit
    CDI Tutorial
    And many more ... Check out the complete schedule and see ya there!

    Read the article

  • Trust: A New Line of Business

    - by ruth.donohue
    What do you think are the key factors in building and maintaining your company's reputation... Innovation? Price? Surprisingly, according to the recent 2010 Edelman Trust Barometer, survey respondents in the US valued transparency of business practices as well as company trustworthiness as the two most important factors influencing corporate reputation. What is trust? It's the confidence in a company's ability to do what is right for all its stakeholders -- shareholders, customers, employees, and the broader society at large -- and not just shareholders. Trust is an increasingly important component to maintaining your company's reputation and brand, and Western countries have seen an increase in global trust. Global businesses headquartered in the United States in particular have seen an 18 point boost to 54 percent in global trust. Whether this uptick represents the start of a new trend or a mere blip in the barometer remains to be seen. The Edelman report notes that the increase is "tenuous" as people expect companies to return to "business as usual" after the economy rebounds. This warning underscores the need for companies to continue engaging in open and frequent communications and business practices with its stakeholders across multiple channels and view trust as a "new line of business" to cultivate.

    Read the article

  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, the JavaFX API provides many properties that you can set to customize your components or make them behave as you want. For instance, for a Button, you can set its font or its max size. Using Scene Builder, these properties can be explored and modified using the Inspector. However, JavaFX also provides many other properties for fine-grained customization of your components: the CSS properties. These properties are typically set from a CSS stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc. Using Scene Builder, until now, you could set a CSS property using the Inspector Style and Stylesheet editors, but you had to go to the JavaFX CSS documentation to know which CSS properties can be applied to a given component. Happily, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the CSS properties available for a JavaFX component, and helps you build your CSS rules.
    A very simple example: make a Button rounded
    Let's take a very simple example: you would like to customize your Buttons to make them rounded. First, enable the CSS Analyzer, using the 'View->Show CSS Analyzer' menu. Grow the main window, and the CSS Analyzer, to get more room. Then, drop a Button from the Library onto the Content View: the CSS Analyzer now shows the Button CSS properties. As you can see, there is a '-fx-background-radius' CSS property that allows you to define the radius of the background (note that you can get the associated CSS documentation by clicking on the property name). You can then experiment with this by setting the Button Style property from the Inspector. As you can see in the CSS doc, one can set the same radius for the 4 corners with a single number. Once the style value is applied, the Button is rounded, as expected. Look at the CSS Analyzer: the '-fx-background-radius' property now has 2 entries: the default one, and the one we just entered from the Style property. The new value "wins": it overrides the default one and becomes the actual value (to highlight this, the cell background becomes blue). Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a CSS rule for this. To do this, save your document first, and create a CSS file in the same directory as the new document. Create an empty CSS file (e.g. test.css) and attach it to the root AnchorPane, by first selecting the AnchorPane, then using the Stylesheets editor from the Inspector. Add the corresponding CSS rule to your new test.css file, from your preferred editor (Netbeans for me ;-) and save it. .button { -fx-background-radius: 10px;} Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button inherits the CSS rule (since the Button is a child of the AnchorPane) and still has its inline Style. The inline Style "wins", since it takes precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons, which let you jump to the source of the value (either test.css, or the Inspector in this case). Of course, unless you want a specific background radius for this particular Button, you can remove the inline Style from the Inspector.
    Changing the color of a TitledPane arrow
    In some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane onto the Content View.
    Then select the CSS cursor from the CSS Analyzer (the other cursor on the left lets you come back to 'standard' selection), which allows you to select an inner element… and select the TitledPane arrow, which gets a yellow background… and the Styleable Path is updated. To define a new CSS rule, you can first copy the Styleable Path and then paste it in your test.css file. Then, add an entry to set the -fx-background-color to red. You should have something like: .titled-pane:expanded .title .arrow-button .arrow { -fx-background-color : red;} As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon.
    More details
    You can see the CSS Analyzer in action (and many other features) in the JavaOne BOF: BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the video (streaming) of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, its now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release: - Global Transaction IDs + HA utilities for self-healing cluster..(yes both automatic failover and manual switchover available!) - Crash-safe slaves and binlog - Binlog Group Commit and Multi-Threaded Slaves for high performance - Replication Event Checksums and Time-Delayed replication - and many more There are a number of sessions dedicated to learn more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary Saturday 29th, 13.00 Replication Tips and Tricks, Mats Kindahl In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover. Saturday 29th, 17.30 Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann This session showcases the new replication features, including • High performance (group commit, multithreaded slave) • High availability (crash-safe slaves, failover utilities) • Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR]) • Data integrity (event checksums) Saturday 29th, 1900 MySQL Replication Birds of a Feather In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads. Sunday 30th, 10.15 Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases. Sunday 30th, 13.15 Hands-On Lab, MySQL Utilities, Chuck Bell Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers. Sunday 30th, 14.45 Eliminating Downtime with MySQL Replication, Luis Soares The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, results in a self-healing data infrastructure. 
By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business. Sunday 30th, 17.45 Scaling for the Web and the Cloud with MySQL Replication, Luis Soares In a Replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance. So how can you make sure you don't miss out - the good news is that registration is still open ;-) And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.  

    Read the article

  • PonyEdit: It’s really fast

    - by Gary Pendergast
    Over the past few months, a friend and I have been hard at work on a new breed of text editor that we call PonyEdit. If you've ever found yourself cursing over the lag of working on remote cloud servers, this is the editor for you. It's not just another SFTP editor… Reading and writing files over SFTP is nothing new; dozens of text editors can do it. But it's always slow, clunky and feels like the feature was bolted on as an afterthought. You'll find yourself using separate shortcuts to open files locally vs remotely, and dealing with sometimes painful save times with every edit, no matter how minor. PonyEdit gets rid of this terribly slow method of working by connecting over SSH, and using edit streaming to push changes to the server in the background as-you-type. Head on over to PonyEdit.com to download a free trial, and let me know what you think! Oh, and… Stand by to have your mind blown.

    Read the article

  • ADF Faces Layouts Demo - A Hidden Treasure

    - by shay.shmeltzer
    Laying out pages with ADF Faces containers is sometimes not as simple as we would like it to be - especially for people who are just getting started. There are some tricks that can help you achieve the layout you are looking for. One great way to learn some of those tricks is to look at the new "Visual Design" section of the ADF Faces hosted demo. For example, look at the Form Layout part - you'll see nicely aligned forms that contain various UI layout scenarios. Want to learn how this has been achieved? Just click the "page source" link at the top right - and you can see how that layout has been done. Don't forget that you can also download the full demo source here. One other good resource I came across today is the "Designing well known websites with ADF Rich Faces" presentation from Maiko Rocha and George Magessy - it's missing the demo part, but you can still learn a lot from the slides.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and replacing commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law: processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data and millions of requests per second, and to scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins.
    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage - a similar type of storage to Berkeley DB, except that it is based on a hash table which must reside entirely in memory rather than a btree which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, even mimicking them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
    Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
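    To ground the "node-local key/value storage" idea the post keeps returning to, here is a minimal sketch using Berkeley DB's C++ API (the Db/Dbt classes that wrap the C library). The file name, key and value are illustrative only, and error handling via DbException is omitted for brevity.

    #include <db_cxx.h>
    #include <cstring>
    #include <iostream>

    int main()
    {
        Db db(NULL, 0);                          // no shared environment: a single-node, local store
        db.open(NULL,                            // no explicit transaction
                "local_store.db",                // on-disk file (hypothetical name)
                NULL, DB_BTREE, DB_CREATE, 0664);

        const char *k = "user:42";
        const char *v = "{\"name\":\"Ada\"}";
        Dbt key((void *)k, std::strlen(k) + 1);
        Dbt val((void *)v, std::strlen(v) + 1);
        db.put(NULL, &key, &val, 0);             // store the pair on disk

        Dbt out;
        if (db.get(NULL, &key, &out, 0) == 0)    // simple key/value look-up
            std::cout << k << " -> " << (char *)out.get_data() << std::endl;

        db.close(0);
        return 0;
    }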

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different as in Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 is only running on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone cluster in Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global clusterCheck if zone cluster can be created # cluster show-netprops to change number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default, values can be customized! Create config file (zc1config) for zone cluster setup e.g: Configure zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file the configuration can also be done manually # clzc configure zc1 Check zone configuration # clzc export zc1 Verify zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceed! (The output is different on the nodes) It's very important that all global cluster nodes have installed the same set of ha-cluster packages! Boot the zone cluster # clzc boot zc1 Login into non-global-zones of zone cluster zc1 on all nodes and finish Solaris installation. # zlogin -C zc1 Check status of zone cluster # clzc status zc1 Login into non-global-zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using additional name service configure /etc/nsswitch.conf of zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of non-global zones B. Add resource groups and resources to zone cluster Create a resource group in zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for zone cluster resource group overview. Note2: You can also run all commands for zone cluster in global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that logical host is in /etc/hosts file In zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up storage resource for zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure zpool eg: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure SVM diskset and SVM devices. 
and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure global filesystem and it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: (Create scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details of adding storage available in the Installation Guide for zone cluster Switch resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch of the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservice to zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If clzc command is not successful to uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true, copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this, let’s see if we can’t identify all of them. Write your query to only include the data you want Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns Mouse right click on an column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows Mouse select the columns Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There’s probably at least 2 or 3 more ways, but… But, try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the the copy to clipboard, and it’s now on our to-do list

    Read the article

  • Good ol fashioned debugging

    - by Tim Dexter
    I have been helping out one of our new customers over the last day or two, and I have even managed to get to the bottom of their problem, FTW! They use BIEE and BIP and wanted to mount a BIP report in a dashboard page. So far so good – BIP does that; just follow the instructions in the BIEE user guide. The wrinkle is that they want to enter some fixed instruction strings into the dashboard prompts to help the user. These are added to the prompt as fixed default values so they appear first; once the user makes a selection, the default strings disappear. It's a fair requirement, but the BIP report chokes on it.

    Now, the BIP report had been set up with the Autorun checkbox unchecked. I expected the BIP report to wait for the Go button to be hit, but it was trying to run immediately and failing. That was the first issue: you cannot stop the BIP report from trying to run in a dashboard. Even if Autorun is turned off, the dashboard still makes the request to BIP to run the report. Rather than BIP refusing because it is waiting for input, it goes ahead anyway – I guess the mechanism does not check the autorun flag when the request is coming from the dashboard. It appears that between BIEE and BIP, they collectively ignore the autorun flag. A bug? Might be; at least an enhancement request. With that in mind, how could we get BIP to at least not fail?

    This fact had been stumping me on the parameter error: if the autorun flag were being respected, why was BIP complaining about the parameter values? It should not even be doing anything until the Go button is clicked. Now that I knew the autorun flag was being ignored, it was a simple case of putting BIP into debug mode. I use the OC4J server on my laptop, so debug messages are routed through the DOS box used to start the OC4J container. When I changed a value on the dashboard prompt, I spotted some debug text rushing by that subsequently disappeared from the log once the operation was complete. Another bug? I needed to catch that text as it went by, using the print screen function with some software to grab multiple screens as the log appeared and then disappeared.

    The upshot is that when you change the dashboard prompt value, BIP validates the value against its own LOVs; if it's not in the list, it throws the error. Because 'Fill this first' and 'Fill this second' – i.e. the fixed strings from the dashboard prompts – are not in the LOV lists, and because the report auto-runs as soon as the dashboard page is brought up, the report complains about invalid parameters. To get around this, I needed to get the strings into the LOVs. Easily done with a UNION clause:

    select 'Fill this first' from SH.Products Products
    UNION
    select Products."Prod Category" as "Prod Category" from SH.Products Products

    Now when BIP wants to validate the prompt value, the LOV query fires and finds the fixed string -> no error. No data, but definitely no errors :0)

    If users do run with the fixed values, you can capture that in the template. If there is no data in the report, either the fixed values were used or the parameters selected resulted in no rows. You can capture this in the template and display something like: 'Either your parameter values resulted in no data or you have not changed the default values.' That's the upside; the downside is that if your users run the report in the BIP UI, they're going to see the fixed strings. You could alleviate that by having BIP display the fixed strings at the top of its parameter drop-down boxes (just set them as the default value for the parameter), but they will not disappear like they do in the dashboard prompts, see below. If the expected autorun behaviour worked, i.e. waiting for the Go button, then we would not have to work around it, but for now it's a pretty good solution.

    It was an enjoyable hour or so for me; it took me back to my developer daze, when we used to race each other for the most number of bug fixes. I used to run a distant 2nd behind 'Bugmeister Chen Hu' but led the chasing pack by a reasonable distance.
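    As a rough sketch of that template check (my illustration, not the customer's actual template; the group name G_EMP is an assumption – use whatever repeating group your data model returns), a BI Publisher RTF template condition could look something like:

        <?if:count(//G_EMP)=0?>
        Either your parameter values resulted in no data or you have not changed the default prompt values.
        <?end if?>

    The <?if:...?> / <?end if?> construct is standard BIP template syntax; the count() simply tests whether any data rows came back.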

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I start to answer them in this way. While Dan is running on a big system he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember end user are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it can not take effect. As a follow up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number. Parallel_max_processes will prevent this. And since parallel_max_servers should be lower than max processes you can never go over either...

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Instead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files.

    $ ls -l /usr/share/info
    ls: cannot access /usr/share/info: No such file or directory

    Was I missing a package?

    $ pkg list -a | egrep "info|emacs"
    editor/gnu-emacs                       23.1-0.175.0.0.0.2.537    i--
    editor/gnu-emacs/gnu-emacs-gtk         23.1-0.175.0.0.0.2.537    i--
    editor/gnu-emacs/gnu-emacs-lisp        23.1-0.175.0.0.0.2.537    ---
    editor/gnu-emacs/gnu-emacs-no-x11      23.1-0.175.0.0.0.2.537    ---
    editor/gnu-emacs/gnu-emacs-x11         23.1-0.175.0.0.0.2.537    i--
    system/data/terminfo                   0.5.11-0.175.0.0.0.2.1    i--
    system/data/terminfo/terminfo-core     0.5.11-0.175.0.0.0.2.1    i--
    text/texinfo                           4.7-0.175.0.0.0.2.537     i--
    x11/diagnostic/x11-info-clients        7.6-0.175.0.0.0.0.1215    i--
    $

    Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there:

    $ pkg contents -r gnu-emacs-lisp | grep info
    usr/share/emacs/23.1/lisp/info-look.el.gz
    usr/share/emacs/23.1/lisp/info-xref.el.gz
    usr/share/emacs/23.1/lisp/info.el.gz
    usr/share/emacs/23.1/lisp/informat.el.gz
    usr/share/emacs/23.1/lisp/org/org-info.el.gz
    usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz
    usr/share/emacs/23.1/lisp/pcvs-info.el.gz
    usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz
    usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz
    $

    Well, if I have what look like the right packages but don't have the right files, the next thing to check is the facets. The first check is whether there is a facet associated with the Info files:

    $ pkg contents -m gnu-emacs | grep usr/share/info
    dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info
    file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...]
    file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...]
    [...]

    Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop:

    $ pkg facet
    FACETS            VALUE
    facet.locale.en*  True
    facet.locale*     False
    facet.doc.man     True
    facet.doc*        False
    $

    Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that:

    # pkg change-facet facet.doc.info=True
                Packages to update: 970
         Variants/Facets to change:   1
           Create boot environment:  No
    Create backup boot environment: Yes
                 Services to change:  1

    DOWNLOAD       PKGS      FILES   XFER (MB)
    Completed   970/970    181/181     9.2/9.2

    PHASE                        ACTIONS
    Install Phase                226/226

    PHASE                          ITEMS
    Image State Update Phase         2/2

    PHASE                          ITEMS
    Reading Existing Index           8/8
    Indexing Packages            970/970
    #

    Now we have the info files:

    $ ls -F /usr/share/info
    a2ps.info     dir@       flex.info     groff-2    regex.info
    aalib.info    dired-x    flex.info-1   groff-3    remember
    ...
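    If you ever want to flip this back, the same mechanism works in reverse (a quick sketch based on my reading of pkg(1), so treat the exact semantics as an assumption): set the facet to False to drop the Info files again, or to None to clear the local setting and fall back to the image default.

        # pkg change-facet facet.doc.info=False
        # pkg change-facet facet.doc.info=None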

    Read the article

  • Implementation Exam Sessions

    - by swalker
    Colombes. An exceptional offer for your partners: Implementation Exam sitting fees are waived if they attend one of the three Test Fest days – 28 November, 29 November or 9 December (this last session is almost full). Register here: >> 28 November 2011 >> 29 November 2011 >> 9 December 2011

    Read the article

  • Book Review

    - by frank.buytendijk
    ... and in the series of videoblogs, here is number 3: reviewing a few really good books I recently read. Access the videoblog here. More on www.youtube.com/frankbuytendijk. frank

    Read the article

  • Architects overcoming challenges in the cloud

    - by stephen.g.bennett
    Computerworld has released an article based on an excerpt from Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing; the excerpt comes from the roadmap chapter of the book. The book highlights common techniques for building roadmaps – current reality, future vision, gap analysis, and the roadmap itself – but also goes into detail on identifying the type of organization you are and the common challenges you will need to address within your roadmap. In addition, over at ArchBeat they have released a four-part interview discussing the book. Have a happy holiday.

    Read the article

  • Excel Template Teaser

    - by Tim Dexter
    In lieu of some official documentation, I'm in the process of putting together some posts on the new 10.1.3.4.1 Excel templates. No more HTML masquerading as Excel; far more flexibility than Excel Analyzer, and no need to write complex XSL templates to create the same output. Multi-sheet outputs with macros and embeddable XSL commands are here. Their capabilities are pretty extensive, and I have not worked on them for a few years since I helped put them together for EBS FSG users, so I'm back on the learning curve. Let me say up front: there is no template builder; it's a completely manual process to build them, but the results can be fantastic and provide yet another 'superstar' opportunity for you.

    The templates can take hierarchical XML data and walk the structure much like an RTF template. They use named cells/ranges and a hidden sheet to provide the rendering engine the hooks to drop the data in. As a taster, here's the data and output I worked with on my first effort:

    <EMPLOYEES>
      <LIST_G_DEPT>
        <G_DEPT>
          <DEPARTMENT_ID>10</DEPARTMENT_ID>
          <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
          <LIST_G_EMP>
            <G_EMP>
              <EMPLOYEE_ID>200</EMPLOYEE_ID>
              <EMP_NAME>Jennifer Whalen</EMP_NAME>
              <EMAIL>JWHALEN</EMAIL>
              <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
              <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
              <SALARY>4400</SALARY>
            </G_EMP>
          </LIST_G_EMP>
          <TOTAL_EMPS>1</TOTAL_EMPS>
          <TOTAL_SALARY>4400</TOTAL_SALARY>
          <AVG_SALARY>4400</AVG_SALARY>
          <MAX_SALARY>4400</MAX_SALARY>
          <MIN_SALARY>4400</MIN_SALARY>
        </G_DEPT>
        ...
      </LIST_G_DEPT>
    </EMPLOYEES>

    Structured XML coming from a data template – check out the data template progression post. I can then generate the following binary XLS file. There are a few cool things to notice in this output:

    DEPARTMENT-EMPLOYEE master-detail output. Not easy to do in Excel Analyzer.
    Date formatting – this uses an Excel function. Remember, BIP generates XML dates in the canonical format. I have formatted the other data in the template using native Excel functionality.
    Salary total – although it is available in the data, I have calculated this in the template.
    Conditional formatting – this is handled by Excel based on the incoming data.
    Bursting department data across sheets and using the department name for the sheet name. This alone is worth the wait!

    There's more, but this is surely enough to whet your appetite. These new templates are already tucked away in EBS R12 under controlled release by the GL team and have now come to the BIEE and standalone releases in the 10.1.3.4.1+ rollup patch. For the rest of you, it's going to be a bit of a waiting game for the relevant teams to uptake the latest BIP release. Look out for more soon with some explanation of how they work and how to put them together!
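    Until those posts arrive, here is a very rough sketch of the kind of hooks involved. This is my own illustration of the named-cell convention, not official documentation; the field and group names are borrowed from the sample XML above, so treat the details as assumptions:

        Defined name          Refers to                           Effect (assumed)
        XDO_?EMP_NAME?        the cell that displays the name     the engine drops <EMP_NAME> into that cell
        XDO_GROUP_?G_EMP?     the row range for one employee      the engine repeats those rows for each <G_EMP>

    A hidden metadata sheet then carries the XPath/XSL expressions behind each defined name, which is where the embeddable XSL commands mentioned above come into play.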

    Read the article
