Search Results

Search found 1424 results on 57 pages for 'roles'.


  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization's Enterprise Architecture. It was spawned by an article titled "Busting CIO Myths" in CIO magazine [1], in which the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance, and IT value.

    In the article Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture."

    The first question that came up when talking about ownership was whether we were talking about a person, a role, or an organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization's Enterprise Architecture and the activities of the enterprise architect(s).

    Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington):
    - The CEO should own the Enterprise Strategy, which guides the business architecture.
    - The Business units should own the business processes and information, which guide the business, application and information architectures.
    - The CIO should own the technology, IT Governance and the management of the application and information architectures/implementations.
    - The EA Governance Team owns the EA process. If EA is done well, the governance team consists of both IT and the business.

    While there are many more roles and responsibilities than listed here, this starts to provide a clearer understanding of 'ownership'. Now back to Jeanne's statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but statements about roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding, one that is easily communicated (and hopefully measured), around the question "Who owns the Enterprise Architecture?"

    The next logical step is to create a RACI matrix that details the findings, but that is a step each organization needs to take on its own, as it will vary based on current EA maturity, company culture, and a variety of other factors. Who 'owns' the Enterprise Architecture in your organization?
    [1] CIO Magazine Article (Busting CIO Myths): http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

  • Calling XAI Inbound Services from Oracle BI Publisher

    - by ACShorten
    Note: This technique requires Oracle BI Publisher 10.1.3.4.1, which supports Service Complex Types. Web Services require credentials for authentication.
    Note: The defaults for the product installation are used in this article. If your site uses alternative values, substitute those alternatives where applicable.
    Note: Examples shown in this article are for illustrative purposes only.
    When building a report in Oracle BI Publisher it may be necessary to call an XAI Inbound Service to get information via the object rather than querying the database tables directly, for various reasons:
    - The CLOB fields used in the object are accessible for a report. Note: CLOB fields cannot be used as criteria in the current release.
    - Objects can take advantage of algorithms to format or calculate additional data that is not stored in the database directly. For example, information format strings can be automatically generated by the object, which gives consistent information between a report and the online screens.
    To use this facility the following process must be performed:
    Ensure that the product group, cisusers by default, is enabled for the SPLServiceBean in the console. This allows BI Publisher access to call Web Services directly. To do this, follow the instructions below:
    - Log on to the Oracle WebLogic server console using an appropriate administrator account. By default the user system or weblogic is provided for this purpose.
    - Navigate to the Security Realms section and select your configured realm. This is set to myrealm by default.
    - In the Roles and Policies section, expand the SPLService section of the Deployments option to reveal the SPLServiceBean roles.
    - If there is no role associated with the SPLServiceBean, create a new EJB role and specify the cisusers role, by default.
    - Add a Role Condition to the role just created, with a Predicate List of Group, and specify cisusers as the Group Argument Name.
    - Save all your changes.
    The XAI Inbound Services to be used by BI Publisher must be defined prior to using the interface. Refer to the XAI Best Practices (Doc Id: 942074.1) from My Oracle Support, or the online help, for more information about this process.
    Inside BI Publisher create your report according to the BI Publisher documentation. When specifying the dataset, under the Data Model Report option, specify the following to use an XAI Inbound Service as a data source:
    - Type: Web Service
    - Complex Type: true
    - Username: any valid user name within the product. This user MUST have security access to the objects referenced in the XAI Inbound Service.
    - Password: authentication password for Username.
    - Timeout: timeout, in seconds, set for the Web Service call. For example, 60 seconds.
    - WSDL URL: use the WSDL URL on the XAI Inbound Service definition as your WSDL URL. By default it is in the following format: http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL, where <host> is the host name of the Web Application Server, <port> is the port allocated to the Web Application Server for product access, <server> is the server context for the server, and <service> is the XAI Inbound Service name. Note: Customers using secure transmission should substitute https for http and use the HTTPS port allocated to the product at installation time.
    - Web Service: select the name of the service that shows in the drop-down menu. If no service name shows up, it means that Publisher could not establish a connection with the server, or with the WSDL name provided in the above URL, in order to get the service name. See the BI Publisher server log for more information.
    - Method: select the name of the Method that shows in the drop-down menu. A method name should show in the Method drop-down menu once the Web Service name is selected.
    Additionally, filters can be used from the Web Service. These can be generated, required or optional, from the WSDL in the Parameter List.
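    As an illustration, with purely hypothetical values (host ouafdemo, port 6500, server context spl, and an XAI Inbound Service named CM-CustomerInfo), the default WSDL URL would look like this:

        http://ouafdemo:6500/spl/XAIApp/xaiserver/CM-CustomerInfo?WSDL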

    Read the article

  • La búsqueda de la eficiencia como Santo Grial de las TIC sanitarias

    - by Eloy M. Rodríguez
    The XVIII Jornadas de Informática Sanitaria en Andalucía closed last Friday with 11,500 hours of collective intelligence. Although I assume that figure comes from multiplying the hours of sessions and workshops by the number of registrants, which would not be entirely accurate since average attendance was, by my estimate, around ninety people, I suppose it reflects the total if we include all the informal interactions that the format and venue encourage.

    My subjective summary is that we are all aware that we must achieve more efficiency in, and thanks to, healthcare IT, and that we identified some guidelines toward that goal which the attendees, in their different roles, ought to apply and help spread. Along those lines, I think the standout point is the need to be very clear about where you are starting from and what you want to achieve, for which it is essential to measure, and for those measurements to feed back into the system so it can meet its objectives. On that note, as an anecdote, I would like to share a paradox about efficiency that came up: given that the cost per day of a hospital stay is higher at the beginning than in the final days, if you become more efficient and reduce the average stay, you free up those cheaper final days for new admissions, and the resulting increase in first days of stay raises the total cost. In this case we would improve the service to citizens but increase the cost, unless action were taken to resize hospital capacity, lowering the cost without compromising quality.

    Another highlighted topic was the possibility, and the need, to take advantage of IT capabilities to make structural changes and move medicine from reactive to proactive, using alerts that make it possible to act before a serious problem occurs. Another topic discussed was the real need to make citizens genuinely co-responsible, thanks to the enormous low-cost possibilities that IT offers, embracing a path toward collaborative health that faces many challenges but offers even more opportunities. And the citizen's health folder, emerging in several projects and ideas, is a step in that direction.

    One topic that stirred passions came when the Managing Director of Sergas complained that IT projects were painfully slow. Unfortunately her schedule did not allow her to stay for the debate, which was quite intense and which raised factors such as the very long administrative process, changing specifications, custom designs, and so on, beyond the specific efficiency of the IT professionals involved in the projects.

    Finally, I want to mention a very interesting topic in line with what was said at the conference about the need to measure: the SEIS Index. The idea is to define a series of criteria, grouped into broad themes with a fine-grained breakdown, to monitor the contribution of IT to improving health and healthcare. We were shown early versions, with the debate still open between two broad approaches: working from the major objectives down to the processes, or from the processes up to the objectives. The discussion is not merely academic, since it influences the parameters to be established. The good news is that the work is quite far along and that health services will soon have a comparison tool based on national reality.
    For those interested, several attendees tweeted throughout the conference, so anyone who wants a few more details can go to Twitter, search for the hashtag #jisa18 and, reading from oldest to newest, follow subjective points of view on what happened there. I cannot close without a couple of self-criticisms, since I am a member of the SEIS. The first concerns the SEIS portal, which did not have the interactivity that a conference like this needed. It will soon start to carry documents and analysis of the event, and later the more polished write-ups and analysis will appear in the I+S journal. But in the second decade of the 21st century quite a bit more is needed. The other is the unwanted low turnout of healthcare IT users in the roles of healthcare professionals and citizen users of healthcare information systems. We have to be proactive so that they attend in significant numbers, because otherwise we risk becoming healthcare-IT absolutists: everything for the users, but without the users.

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO.
    To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications.
    Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy.
    Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use case patterns:
    - Standalone: transferring files on its own, using embedded FTP and sFTP servers and the file systems to which it has access.
    - SOA integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.
    - B2B integration: a B2B application can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
    - Healthcare integration: a Healthcare application can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
    - Oracle Service Bus (OSB) integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
    - Hybrid integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.
    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.
    File handlers:
    - Copy files to file transfer staging areas, which are called sources.
    - Retrieve files from file transfer destinations, which are called targets.
    Designers:
    - Create, read, update and delete file transfer sources.
    - Create, read, update and delete file transfer targets.
    - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
    - Deploy and test transfers.
    Monitors:
    - Use the Dashboard and reports to ensure that transfer instances are successful.
    - Pause and resume lengthy transfers.
    - Troubleshoot errors and resubmit transfers.
    - View artifact deployment details and history.
    - View artifact dependence relationships.
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.
    Administrators:
    - All file handler tasks.
    - All designer tasks.
    - All monitor tasks.
    - Add other users and determine their roles.
    - Configure user directory permissions.
    - Configure the Oracle Managed File Transfer server.
    - Configure embedded FTP and sFTP servers, including security.
    - Configure B2B and Healthcare domains.
    - Back up and restore the Oracle Managed File Transfer configuration.
    - Purge transferred files and instance data.
    - Archive and restore instance data and payloads.
    - Import and export metadata.
    You will find all the related information about Oracle Managed File Transfer (MFT) in the SOA 12.1.3 documentation: Using Oracle Managed File Transfer. Resources and links:
    - Oracle Unveils Oracle SOA Suite 12c
    - Oracle Managed File Transfer
    - Oracle Managed File Transfer SOA 12c White Paper
    For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • Silverlight Cream for March 31, 2010 -- #826

    - by Dave Campbell
    In this Issue: Andrea Boschin, Radenko Zec, Andrej Tozon, Bobby Diaz, Brad Abrams, Wolf Schmidt, Colin Eberhardt, Anand Iyer, Matthias Shapiro, Jaime Rodriguez, Bill Reiss, and Lee.
    Shoutouts:
    - Cigdem has a post up about her MIX10 interviewing experiences: MIX10 SilverlightShow Interviews
    - Ian T. Lackey has his material up from his talk Silverlight SEO at the St. Louis .Net Users Group
    - Not Silverlight but definitely WP7 cool, Michael Klucher reports that there are New Windows Phone Samples on Creators Club Online
    - Tim Heuer posted a survey: What tools are the minimum to get started in Silverlight?
    From SilverlightCream.com:
    - A RoleManager to apply roles declaratively to user interface: Andrea Boschin has a new post at SilverlightShow discussing the use of a RoleManager in WCF RIA Services to apply user roles to elements of the UI... good stuff, Andrea.
    - Virtualization in Silverlight 4 RC: Radenko Zec has a post out at SilverlightShow where he explains UI and Data Virtualization, then gives some examples of their use in Silverlight 4 RC, and some issues as well.
    - MS Word Mail Merge with Silverlight 4 COM Automation: Andrej Tozon has a post up at SilverlightShow that I missed in the rush of MIX10. He's doing MailMerge with COM automation and Silverlight 4... actually pretty cool stuff, and all the source!
    - KISS and Tell - MVVM and the ViewModelLocator: Bobby Diaz is blogging about a very popular subject right now: ViewModelLocator. He's not showing production code, but it's a thought... check it out.
    - Silverlight 4 + RIA Services - Ready for Business: Validating Data: I'm running behind, but Brad Abrams' next post in his series is about validating data in the business application. He also discusses setting up shared code validation.
    - A One-stop Shopping XAML Namespace for Silverlight Client SDK Controls: Wolf Schmidt at the Silverlight SDK has a post up highlighting the SL4 XAML namespace prefix. He starts with SL3, then demonstrates the feature's use in SL4.
    - Binding a Silverlight 3 DataGrid to dynamic data via IDictionary (Updated): Colin Eberhardt has an update to his previous article of the same title. This one is a bug fix on an upgrade to SL3 and also an expansion of the previous post.
    - Demo Apps from MIX10 on Windows Phone 7: Anand Iyer posted links to all the WP7 demos used at MIX10, and at least in the case of FourSquare, the source is on CodePlex.
    - XAML Files for Location Visualizations in Silverlight and WPF: Matthias Shapiro has graciously provided XAML for Silverlight and WPF for a bunch of different US maps... too cool, now we don't have to be asking 'where did you get that map?'... thanks, Matthias!
    - Theming in Windows Phone: Jaime Rodriguez has a post up that deep-dives theming in general and demonstrates using it on WP7... end-user configurations and developer stuff.
    - Space Rocks game step 7: Moving the ship: It appears that in the heat of battle (blogging) I said Bill Reiss' Space Rocks game he's building is for WP7... obviously it's not, but it's a game, folks... :) This is Episode 7 and he's moving the ship now.
    - SL4(RC) RichTextBox and Access Violation: Lee has some code that looks like it should work for a RichTextBox in SL4 RC, and it's throwing an error... see if you have a solution for him... or is it a bug?
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager
    At OpenWorld 2012, I attended one of our team's very exciting sessions: "Extend Your Applications, Your Way". It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond and Greg Nerpouni afterwards, but we can prove it... with data!
    Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session.
    At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant: he wanted "real-time" feedback. So, after a little tinkering, we came up with a way to do just that, using an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience's mobile devices.
    Figure 2. Actual QR code for the survey.
    Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici's feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning the QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they had just presented.
    Figure 3. Session survey deployed on a mobile phone.
    The nice thing about Vovici's survey tool is that you can see the data in real time as participants respond to questions, so we knew during the session that our direction was not only on track but hitting the mark, fulfilling Greg's request. We planned on showing the live polling results to the audience at the end of the presentation, but it ran just a little over time and we were gently nudged out of the room by the session attendants. We've included a quick summary below, and this link to the full results for your enjoyment.
    Figure 4. Most important extensions to Fusion Applications.
    So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement upon their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst and programmer. Attendees suggested a few supporting roles such as systems administrator, data architect and integrator. Customers and partners in the audience verified that Oracle's Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5-point scale. The kinds of layout changes our composers allow customers to make align well with customers' needs. The most common were adding columns to a table (94%) and resizing regions and drag-and-drop content (both selected by 88% of participants).
    We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn't have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.

    Read the article

  • Q&A: Oracle's Paul Needham on How to Defend Against Insider Attacks

    - by Troy Kitch
    Source: Database Insider Newsletter
    The threat from insider attacks continues to grow. In fact, just since January 1, 2014, insider breaches have been reported by a major consumer bank, a major healthcare organization, and a range of state and local agencies, according to the Privacy Rights Clearinghouse. We asked Paul Needham, Oracle senior director, product management, to shed light on the nature of these pernicious risks, and on how organizations can best defend themselves against insider threats.
    Q. First, can you please define the term "insider" in this context?
    A. According to the CERT Insider Threat Center, a malicious insider is a current or former employee, contractor, or business partner who "has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."
    Q. What has changed with regard to insider risks?
    A. We are actually seeing the risk from privileged insiders growing. In the latest Independent Oracle Users Group Data Security Survey, the number of organizations that had not taken steps to prevent privileged user access to sensitive information had grown from 37 percent to 42 percent. Additionally, 63 percent of respondents say that insider attacks represent a medium-to-high risk, higher than any other category except human error (by an insider, I might add).
    Q. What are the dangers of this type of risk?
    A. Insiders tend to have special insight into, and access to, the kinds of data that are especially sensitive. Breaches can result in long-term legal issues and financial penalties. They can also damage an organization's brand in a way that directly impacts its bottom line. Finally, there is the potential loss of intellectual property, which can have serious long-term consequences because of the loss of market advantage.
    Q. How can organizations protect themselves against abuse of privileged access?
    A. Every organization has privileged users, and that will always be the case. The questions are how much access those users should have to application data stored in the database, and how that default access can be controlled. Oracle Database Vault was designed specifically for this purpose and helps protect application data against unauthorized access.
    Oracle Database Vault can be used to block default privileged user access from inside the database, as well as to increase security controls on the application itself. Attacks can and do come from inside the organization, and attempts to exploit a privileged account are just as likely to come from outside. Using Oracle Database Vault protection, boundaries can be placed around database schemas, objects, and roles, preventing privileged account access from being exploited by hackers and insiders.
    A new Oracle Database Vault capability called privilege analysis identifies privileges and roles used at runtime, which can then be audited or revoked by the security administrators to reduce the attack surface and increase the security of applications overall.
    For a more comprehensive look at controlling data access and restricting privileged data in Oracle Database, download Needham's new e-book, Securing Oracle Database 12c: A Technical Primer.

    Read the article

  • Windows Azure Recipe: Mobile Computing

    - by Clint Edmonson
    A while back, mashups were all the rage. The idea was to compose solutions that provided aggregation and integration across applications and services to make information more available, useful, and personal. Mashups ushered in the era of Web 2.0 in all its socially connected goodness. They taught us that to be successful, we needed to add web service APIs to our web applications. Web and client based mashups met with great success and have evolved even further with the introduction of the internet connected smartphone. Nothing is more available, useful, or personal than our smartphones. The current generation of cloud connected mobile computing mashups allows our mobilized workforces to receive, process, and react to information from disparate sources faster than ever before.
    Drivers
    - Integration
    - Reach
    - Time to market
    Solution
    Here's a sketch of a prototypical mobile computing solution using Windows Azure:
    Ingredients
    - Web Role: with the phone running a dedicated client application, the web role is responsible for serving up backend web services that implement the solution's core connected functionality.
    - Database: used to store core operational and workflow data for the solution's web services.
    - Access Control: this service is used to authenticate users and manage their identity, roles, and groups, possibly in conjunction with 3rd party identity providers such as Windows LiveID, Google, Yahoo!, and Facebook.
    - Worker Role: this role is used to handle the orchestration of long-running, complex, asynchronous operations. While much of the integration and interaction with other services can be handled directly by the mobile client application, it's possible that the backend may need to integrate with 3rd party services as well. Offloading this work to a worker role better distributes computing resources and keeps the web roles focused on direct client interaction.
    - Queues: these provide reliable, persistent messaging between applications and processes. They are an absolute necessity once asynchronous processing is involved. Queues facilitate the flow of distributed events and allow a solution to send push notifications back to mobile devices at appropriate times.
    Training & Resources
    These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    - Windows Azure Toolkit for Windows Phone: designed to make it easier for you to build mobile applications that leverage cloud services running in Windows Azure. The toolkit includes Visual Studio project templates for Windows Phone and Windows Azure, class libraries optimized for use on the phone, sample applications, and documentation.
    - Windows Azure Toolkit for iOS: a toolkit for developers to make it easy to access Windows Azure storage services from native iOS applications. The toolkit can be used for both iPhone and iPad applications, developed using Objective-C and XCode.
    - Windows Azure Toolkit for Android: a toolkit for developers to make it easy to work with Windows Azure from native Android applications. The toolkit can be used for native Android applications developed using Eclipse and the Android SDK.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities. Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations, including:
    - Sarbanes-Oxley (SOX) Act
    - Health Insurance Portability and Accountability Act (HIPAA)
    - European Union Data Protection Directive
    Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, job grades used at a company, and so on. Controlling access to the creation or modification of value set values can be an important piece of implementing Separation of Duties in an organization.
    New Flexfield Value Set Security feature
    Flexfield value set security allows system administrators to restrict users from viewing, adding or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility. Value set security uses a combination of data security and role-based access control in Oracle User Management.
    Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite:
    - Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen.
    - Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite.
    - Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV).
    The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set if the user has been granted privileges to do so. Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters.
    Initial State of the Feature upon Upgrade
    Because this is a new security feature, it is turned on by default. When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert or update any value set values (users may even think that their values are missing or invalid because they cannot see the values). You must explicitly set up access for specific users by enabling appropriate grants and roles for those users. We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility (users can access any value sets if they have access to the Values form) after you upgrade. The feature does not affect day-to-day transactions that use flexfields. However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values. For more information, see:
    - Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1)
    - R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there's been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there's a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.
    Drivers
    - Reach
    - Scalability
    - Media hosting
    - Global distribution
    Solution
    Here's a sketch of how a social media application might be built out on Windows Azure:
    Ingredients
    - Traffic Manager (optional): can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
    - Access Control: this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
    - Web Role: hosts the core of the web application and presents a central social hub to users.
    - Database: used to store core operational, functional, and workflow data for the solution's web services.
    - Caching (optional): as web site traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache, for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Tables (optional): for semi-structured data streams that don't need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
    - Blobs (optional): users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    - Content Delivery Network (CDN) (optional): for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.
    Training
    These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • How to prepare for a programming competition? Graphs, Stacks, Trees, oh my! [closed]

    - by Simucal
    Last semester I attended ACM's (Association for Computing Machinery) bi-annual programming competition at a local university. My university sent 2 teams of 3 people and we competed amongst other schools in the mid-west. We got our butts kicked.
    You are given a packet with about 11 problems (1 problem per page) and you have 4 hours to solve as many as you can. They'll run the program you submit against a set of data and your output must match theirs exactly. In fact, the judging is automated for the most part.
    In any case, I went there fairly confident in my programming skills and I left there feeling drained and weak. It was a terribly humbling experience. In 4 hours my team of 3 people completed only one of the problems. The top team completed 4 of them and took 1st place. The problems they asked were like no problems I have ever had to answer before. I later learned that in order to solve some of them effectively you have to use graphs/graph algorithms, trees, and stacks. Some of them were simply "greedy" algorithms.
    My question is, how can I better prepare for this semester's programming competition so I don't leave there feeling like a complete moron? What tips do you have for me to be able to answer these problems that involve graphs, trees, and various "well known" algorithms? How can I easily identify the algorithm we should implement for a given problem? I have yet to take Algorithm Design in school, so I just feel a little out of my element. Here are some examples of the questions asked at the competitions: ACM Problem Sets
    Update: Just wanted to update this since the latest competition is over. My team placed 1st for our small region (about 6-7 universities with between 1-5 teams per school) and ~15th for the midwest! So, it is a marked improvement over last year's performance for sure. We also had no graduate students on our team, and after reviewing the rules we found out that many teams had several! So, that would be a pretty big advantage in my own opinion. Problems this semester ranged from about 1-2 "easy" problems (i.e. bit manipulation, string manipulation) to hard (graph problems involving fairly complex math and network flow problems). We were able to solve 4 problems in our 5 hours. Just wanted to thank everyone for the resources they provided here; we used them for our weekly team practices and it definitely helped!
    Some quick tips that I have that aren't suggested below:
    - When you are seated at your computer before the competition starts, quickly type out various data structures that you might need that you won't have access to in your language's libraries. I typed out a Graph data structure complete with Floyd-Warshall and Dijkstra's algorithms before the competition began. We ended up using it in the 2nd problem we solved, and this is the main reason why we solved this problem before anyone else in the midwest. We had it ready to go from the beginning.
    - Similarly, type out the code to read in a file, since this will be required for every problem. Save this answer "template" someplace so you can quickly copy/paste it into your IDE at the beginning of each problem. There are no rules against programming anything before the competition starts, so get any boilerplate code out of the way.
    - We found it useful to have one person who is on permanent whiteboard duty. This is usually the person who is best at math and at working out solutions, to get a head start on future problems. One person is on permanent programming duty: your fastest/most skilled "programmer" (most familiar with the language). This will save debugging time also. The last person has several roles: assessing the packet of problems for the next "easiest" problem, helping the person on the whiteboard work out solutions, and helping the person programming work out bugs/issues. This person needs to be flexible and able to switch between roles easily.

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
    Oracle Open World has just concluded. Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal). Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.
    Let's Start with Licensing and Installation
    As you may know, we've extended the restricted use license (RUL) for the Interaction Hub. This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free, for use as specified in the Tools license notes. Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation. This means customers must provide the infrastructure to install and run the Hub, just like any other application. The benefits of using the Hub to unify your PeopleSoft user experience can be great. PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future.
    Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools. No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications). PeopleSoft applications are delivering instances of the workcenters and dashboards with their products. Some are available now, and more are coming in future releases. These delivered workcenter and dashboard instances require no additional licenses and no additional installations beyond Tools and the applications that provide them. In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.
    What are Their Differences? What Purposes do they Serve?
    Workcenters, Dashboards and the Interaction Hub appear somewhat similar. They all contain pagelets, and have some visual characteristics in common. However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem.
    Workcenters and Dashboards have the following characteristics:
    - Designed for specific roles
    - Focus on the daily tasks of those roles
    - Help to streamline the work performed most often
    - Personal view of my work world
    - Make navigation and search easier and quicker, particularly for transactions and decision support
    - Reports and data needed for day-to-day work
    - Personalizable, but minimal
    - Delivered by PeopleSoft applications, but can be altered by customers for their requirements
    - Customers can create their own
    - Workcenters can be used for guided processes
    The Interaction Hub is designed to aggregate content from multiple applications and is used to unify the user experience of those applications. It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on. Its characteristics:
    - Full-featured and robust
    - Centrally administered
    - Pushed to a large audience
    - Broad info like company news
    - Infrequent activities like benefits, not day-to-day tasks
    - Self-service, access to employer info
    - Central launch point for other activities; can navigate to workcenters and dashboards
    - Deployed by customers or consultants; instances not delivered by PeopleSoft (at this time)
    - Content management
    - Unified PeopleSoft application navigation
    Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and add related posts to it. Since there are a lot of these out there, I link to others that have done much the same before me, a kind of blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stacktrace, I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people who find this blog can see multiple solutions for indexed stacktraces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.
    The steps:
    1. Set up the project in VS2012 (msdn blog).
    2. Set up the Azure services (half of the mpspartners.com blog).
    3. Set up connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.
    Step 1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx
    Step 2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/
    Step 3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
    Trying to package our application at this step will generate the following warning:
    MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.
    To fix it, right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "Specify storage account credentials" we copy/paste our account name and key from the Azure management portal. (Figure 3.1)
    Step 4. Right-click Remote Desktop Configuration, select the certificate, and export it to a file. We need to allow it in the management portal. (Figure 4.1)
    Step 5. Now right-click the cloud project and select Package (Figure 5.1: dialogue box; Figure 5.2: package success). Copy the path to the packaged file and go to the management portal again. Click your web role, choose staging (or production), and upload (Figure 5.3). Tick the box about the single instance if that's what you want, or if you don't know what it means; otherwise the deployment will fail as shown in Figure 5.6 (Figure 5.4: dialogue box). When you have clicked the accept button, you will see a screen with some green indicators down in the right corner; click them if you want to see status (Figure 5.5: information screen).
    Figure 5.6 shows the failure: "Failed to deploy application. The uploaded application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix it, go back to the step at Figure 5.4.
    If you forgot to (or just didn't know you were supposed to) export your certificates, a certificate error will occur. As a side note, the following thread suggests, to prevent it, choosing "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the missing certificates, and I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (Figures 5.7 and 5.8: success! Figure 5.9: nice URL and all; more on that in another blog post.)
    Step 6. If you try to log in and get an error, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the ordinary project in your solution, then go to the management portal and click update.
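    For reference, here is a sketch of what the corrected diagnostics setting in ServiceConfiguration.Cloud.cscfg can look like once real storage credentials are supplied; the account name and key below are placeholders, not working values:

        <ConfigurationSettings>
          <!-- Replace the emulator value (UseDevelopmentStorage=true) with real credentials -->
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
                   value="DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=YOUR-KEY-HERE" />
        </ConfigurationSettings>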

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I've yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven't investigated whether this is a generally well understood circumstance, and if it is then I apologise for repeating everything.
    Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShellISE and started typing. The script doesn't come easily to me, but I follow the logic of SMO and the properties and methods available in the language, so it seemed something I should be able to master.
    Version #1 of my script, and the results it returns when executed against my home laptop server. These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let's just review my logic for each step of the code at the top:
    - Lines 1 to 7 just set up our variables and write out the header message.
    - Line 8: our first loop, to go through each login on the server.
    - Line 10: an inner loop that will assess each role name that each login has been assigned.
    - Line 11: a test to see if each role has the name 'sysadmin'.
    - Line 13: write out the login name with a bright format, as it is a sysadmin login.
    - Line 17: write out the login name with no formatting.
    It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen pointing at the error I made, but to me this made total sense. Until I altered the code. I altered lines 6 and 7 of the code above to be:
        $c = $Svr.Logins.Count
        write-host "There are $c Logins on the server"
    This changed my output, and started alarm bells ringing: there are clearly not 13 logins listed. So, let's see where things are going wrong; edit the script (I've highlighted the changes to make). Running this code shows me that our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server, where there are clearly 13 logins.
    We see a login called Annette in SSMS but not in the script results, so I opened it up and looked at its properties, and its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses out shows they too are only members of the public role. Right now I can't work out whether there is a good reason for this and if it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory for this.
    How to get the full list of logins. Clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with, and I think it is doing everything that I need it to. It gives me the expected results.
    So it seems that the ListMembers() method is the troublemaker in my first versions of the code. I would have expected ListMembers to return logins that are only members of the public role; certainly Technet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it's a lesson learned and I will approach using it with caution in future circumstances.
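    Since the post's code appears only as screenshots, here is a minimal sketch of the kind of approach that avoids ListMembers(), assuming SMO's Login.IsMember() method as the membership test; the server name and colour highlighting are illustrative only:

        # Enumerate every login and test sysadmin membership directly, so logins
        # that belong only to the public role are still listed.
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")
        $Svr = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"  # illustrative server name
        $c = $Svr.Logins.Count
        Write-Host "There are $c Logins on the server"
        foreach ($Login in $Svr.Logins) {
            if ($Login.IsMember("sysadmin")) {
                Write-Host $Login.Name -ForegroundColor Red   # highlight sysadmin logins
            }
            else {
                Write-Host $Login.Name
            }
        }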

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

Provider
Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions will be just a matter of using different providers.

URL Handling
Another thing that's substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

New Features
I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.
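
To illustrate the Host-header technique, here is a minimal sketch; it is not the actual StarterSTS code, and the helper name is invented for the example.

// Sketch only: build an externally visible URL from the HTTP Host header
// instead of Request.Url, which reports the internal machine name in a farm.
using System;
using System.Web;

public static class ExternalUrl
{
    public static Uri From(HttpRequest request)
    {
        // The Host header carries the name (and optional port) the client
        // actually used, i.e. the load balancer's address.
        string host = request.Headers["Host"];
        return new Uri(request.Url.Scheme + "://" + host + request.Url.PathAndQuery);
    }
}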

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success
By Mitchell Palski, Oracle WebCenter Sales Consultant

Many organizations use Oracle WebCenter Portal to develop these basic types of portals:
- Intranet portals used for collaboration, employee self-service, and company communication
- Extranet portals used by customers and partners for self-service and support
- Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that would vary based on a user's identity include:
- Web content such as images, news articles, and on-screen instruction
- Social tools such as threaded discussions, polls/surveys, and blogs
- Document management tools to upload, download, and edit files
- Web applications that present data visualizations and data entry modules

These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes un-used or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be re-used throughout an enterprise.

Capturing Portal Analytics
Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of:
- Usage tracking metrics
- Behavior tracking
- User profile correlation

The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify which specific pages and/or components are used the most frequently. Next, determine the user(s) or user-group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those that are following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.

Re-using Successful Portals
When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:
- Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
- Specific sub-sets of page layouts, tools, and other resources to standardize what is added to a portal's pages
- Any custom components that your team creates during development cycles

Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences instead of starting from scratch.

***For a complete explanation of how to work with Portal Templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing, I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles. Depending on their role, the content of the page and what pages they are allowed to see varies. We're using Git and GitHub. I'm trying to write CSS that works as components, so when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this is working well. Sadly the format and class names in the HTML are at times messy and unpredictable. When I fix something on one page it can break another. The job is also harder as no one knows exactly all the variations that are possible due to the user roles. As such I'm continuously finding new variations as I go along. I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done. This means that if on another page I'm about to add the rule back to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that will work for both pages, or I can make the override page-specific. This has been working quite well for me. If I had complete free rein and the only deadline was to finish the project by the end, then this method would be fine. However my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along. However, once I've finished a section it may change if I change CSS that affects it and also affects a new section I'm working on. I've asked that the stakeholders have a quick unofficial sign-off in stages (e.g. per sprint), and have the final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific. In addition to this, I'm being told that all work that I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me as I won't know until I've finished the site whether I will ever benefit from these comments or not. Has anyone else been in a similar situation and managed to find a compromise that worked for both their development approach and the desire of management and stakeholders to have a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However, the nature of this project makes this hard to achieve.
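
As a small illustration of the commenting convention described above (the class names here are hypothetical, not from the actual site):

/* Shared component rule used across pages. */
.form-heading {
    color: #333;
    /* Removed: pushed the submit button off-screen on the login page.
    margin-top: 20px;
    */
}

/* Page-specific override, so the login page stays intact. */
.reports-page .form-heading {
    margin-top: 20px;
}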

    Read the article

  • How to override ATTR_DEFAULT_IDENTIFIER_OPTIONS in Models in Doctrine?

    - by user309083
    Here someone explained that setting a 'primary' attribute for any column in your model will override Doctrine_Manager's ATTR_DEFAULT_IDENTIFIER_OPTIONS attribute: http://stackoverflow.com/questions/2040675/how-do-you-override-a-constant-in-doctrines-models This works; however, if you have a many-to-many relation whereby an intermediate table is created, then even if you have set both columns in the intermediate table to primary, an error still results when Doctrine tries to place an index on the nonexistent 'id' column upon table creation. Here's my code:

// Bootstrap
// set the default primary key to be named 'id', integer, 4 bytes
Doctrine_Manager::getInstance()->setAttribute(
    Doctrine_Core::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
    array('name' => 'id', 'type' => 'integer', 'length' => 4));

// User Model
class User extends Doctrine_Record
{
    public function setTableDefinition()
    {
        $this->setTableName('users');
    }

    public function setUp()
    {
        $this->hasMany('Role as roles', array(
            'local'    => 'id',
            'foreign'  => 'user_id',
            'refClass' => 'UserRole',
            'onDelete' => 'CASCADE'
        ));
    }
}

// Role Model
class Role extends Doctrine_Record
{
    public function setTableDefinition()
    {
        $this->setTableName('roles');
    }

    public function setUp()
    {
        $this->hasMany('User as users', array(
            'local'    => 'id',
            'foreign'  => 'role_id',
            'refClass' => 'UserRole'
        ));
    }
}

// UserRole Model
class UserRole extends Doctrine_Record
{
    public function setTableDefinition()
    {
        $this->setTableName('roles_users');
        $this->hasColumn('user_id', 'integer', 4, array('primary' => true));
        $this->hasColumn('role_id', 'integer', 4, array('primary' => true));
    }
}

Resulting error:

SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'id' doesn't exist in table.
Failing Query: "CREATE TABLE roles_users (user_id INT UNSIGNED NOT NULL, role_id INT UNSIGNED NOT NULL, INDEX id_idx (id), PRIMARY KEY(user_id, role_id)) ENGINE = INNODB"

I'm creating my tables using Doctrine::createTablesFromModels();

    Read the article

  • HttpServletRequest#login() not working in Java.

    - by Nitesh Panchal
    Hello, j_security_check just doesn't seem enough for me to perform the login process. So, instead of submitting the form to j_security_check I created my own servlet, and in it I am programmatically trying to do the login. This works, but I am not able to redirect to my restricted resource. Can anybody tell me what the problem might be? This is the processRequest method of my servlet:

protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    PrintWriter out = response.getWriter();
    try {
        String strUsername = request.getParameter("txtusername");
        String strPassword = request.getParameter("txtpassword");
        if (strUsername == null || strPassword == null
                || strUsername.equals("") || strPassword.equals(""))
            throw new Exception("Username and/or password missing.");
        request.login(strUsername, strPassword);
        System.out.println("Login succeeded!!");
        if (request.isUserInRole(ROLES.ADMIN.getValue())) { // enum
            System.out.println("Found in Admin Role");
            response.sendRedirect("/Admin/home.jsf");
        } else if (request.isUserInRole(ROLES.GENERAL.getValue())) {
            response.sendRedirect("/Common/index.jsf");
        } else { // guard
            throw new Exception("No role for user " + request.getRemoteUser());
        }
    } catch (Exception ex) { // patch work
        System.out.println("Invalid username and/or password!!");
        response.sendRedirect("/Common/index.jsf");
    } finally {
        out.close();
    }
}

Everything works fine and I can even see the message "Found in Admin Role", but the problem is that even after authenticating I am not able to redirect my request to some other page. Please help.
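
One detail worth checking, offered as a guess rather than a confirmed diagnosis: a path passed to sendRedirect() with a leading slash is resolved against the server root, not the web application's context path, so if the app is not deployed as the root application the redirect will miss it even though login succeeded. Prefixing the context path avoids that:

// Guess, not a confirmed fix: keep the redirect inside this web application
// by resolving it against the context path.
response.sendRedirect(request.getContextPath() + "/Admin/home.jsf");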

    Read the article

  • Applying business logic to form elements in ASP.NET MVC

    - by Brettski
    I am looking for best practices in applying business logic to form elements in an ASP.NET MVC application. I assume the concepts would apply to most MVC patterns. The goal is to have all the business logic stem from the same place. I have a basic form with four elements:

- Textbox: for entering data
- Checkbox: for staff approval
- Checkbox: for client approval
- Button: for submitting the form

The textbox and two checkboxes are fields in a database accessed using LINQ to SQL. What I want to do is put logic around the checkboxes governing who can check them and when. Truth table (a little silly, but it's an example):

when checked     may check Staff    may check Client
Staff | Client   Staff | Client     Staff | Client
  0   |   0        1   |   0          0   |   1
  0   |   1        0   |   0          0   |   1
  1   |   0        1   |   0          0   |   1
  1   |   1        0   |   0          0   |   1

There are two security roles, staff and client; a person's role determines who they are. The roles are maintained in the database along with the current state of the checkboxes. So I can simply store the user's role in the view class and enable and disable checkboxes based on their role, but this doesn't seem proper. That is putting logic in the UI to control which actions can be taken. How do I get most of this control down into the model? I mean, I need to control which checkboxes are enabled and then check the results in the model when the form is posted, so it seems the best place for it to originate. I am looking for a good approach to constructing this, something to follow as I build the application. If you know of some great references which explain these best practices, that is really appreciated too.
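
One way to push that truth table into the model, as a minimal sketch with hypothetical names rather than anything from the original application:

// Encodes the truth table in the model; the view and the POST handler both
// ask the model instead of duplicating the rules.
public enum UserRole { Staff, Client }

public class ApprovalState
{
    public bool StaffApproved { get; set; }
    public bool ClientApproved { get; set; }

    // Per the table: staff may toggle the staff box only while the client
    // has not yet approved.
    public bool MayEditStaffBox(UserRole role)
    {
        return role == UserRole.Staff && !ClientApproved;
    }

    // Per the table: the client may always toggle the client box.
    public bool MayEditClientBox(UserRole role)
    {
        return role == UserRole.Client;
    }
}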

    Read the article

  • Non-deprecated code for detecting whether a SharePoint User has a specific Permission Level

    - by ccomet
    In our application, we have some forms which need to show some data specifically if the current user has a specific permission level. These users belong to an SPGroup which includes users who should not see this data, so in this particular case I cannot filter based off of group membership. My current solution has been to use web.CurrentUser.Roles and use a simple check on whether it contains a permission level of the correct name. Roles is of the deprecated SPRole class, so I am bombarded with warning messages despite the fact it technically works. It suggests that I use SPRoleAssignment or SPRoleDefinition (the recommendation seems arbitrary since some lines recommend one while others recommend the other even though it is being used for the same thing). However, I cannot seem to find any method to directly retrieve an SPRoleAssignment or SPRoleDefinition object from an SPUser or SPPrincipal object, nor can I retrieve either object corresponding specifically to the current user of the SPWeb object. How can I update these methods to use non-deprecated code? I've found other cases of determining user permissions, but I haven't found one that will work from a starting point of the current web or the current user. It's not urgent, but it certainly is helpful to avoid having to sift through all of those warnings just to reach the more important warnings.
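
One possible non-deprecated route, offered as a suggestion rather than a verified answer: SPWeb.AllRolesForCurrentUser returns the role definitions (permission levels) that apply to the current user of the web, which would keep the name-based check while avoiding SPRole.

// Suggestion only: check the current user's permission levels by name.
// "Contribute" is a placeholder for the permission level in question.
bool hasLevel = false;
foreach (SPRoleDefinition definition in web.AllRolesForCurrentUser)
{
    if (definition.Name == "Contribute")
    {
        hasLevel = true;
        break;
    }
}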

    Read the article

  • XSL apply templates not working...could be XPath error

    - by AdRock
    I have converted my stylesheet to use apply-templates instead of call-templates, and it worked fine for my other stylesheet, which was more complicated, but this one doesn't seem to work even though it is a much simpler template. All that it outputs is the sex node and the userlevel node. I think it has to do with my XPath. All I want is to output the user information, nothing else.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template name="hoo" match="/">
    <html>
      <head>
        <title>Registered Members</title>
        <link rel="stylesheet" type="text/css" href="user.css" />
      </head>
      <body>
        <h1>Registered Members</h1>
        <xsl:for-each select="folktask/member/user">
          <div class="userdiv">
            <xsl:apply-templates/>
          </div>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>

  <xsl:template match="folktask/member/user">
    <xsl:apply-templates select="@id"/>
    <xsl:apply-templates select="personal/name"/>
    <xsl:apply-templates select="personal/address1"/>
    <xsl:apply-templates select="personal/city"/>
    <xsl:apply-templates select="personal/county"/>
    <xsl:apply-templates select="personal/postcode"/>
    <xsl:apply-templates select="personal/telephone"/>
    <xsl:apply-templates select="personal/mobile"/>
    <xsl:apply-templates select="personal/email"/>
    <xsl:apply-templates select="personal"/>
    <xsl:apply-templates select="account/username"/>
    <xsl:apply-templates select="account"/>
  </xsl:template>

  <xsl:template match="@id">
    <div class="heading bold"><h2>USER ID: <xsl:value-of select="." /></h2></div>
  </xsl:template>

  <xsl:template match="personal/name">
    <div class="small bold">NAME:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/address1">
    <div class="small bold">ADDRESS:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/city">
    <div class="small bold">CITY:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/county">
    <div class="small bold">COUNTY:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/postcode">
    <div class="small bold">POSTCODE:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/telephone">
    <div class="small bold">TELEPHONE:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/mobile">
    <div class="small bold">MOBILE:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="personal/email">
    <div class="small bold">EMAIL:</div>
    <div class="large">
      <xsl:element name="a">
        <xsl:attribute name="href">
          <xsl:text>mailto:</xsl:text>
          <xsl:value-of select="." />
        </xsl:attribute>
        <xsl:value-of select="." />
      </xsl:element>
    </div>
  </xsl:template>

  <xsl:template match="personal">
    <div class="small bold">SEX:</div>
    <div class="colored bold">
      <xsl:choose>
        <xsl:when test="sex='Male'">
          <div class="sex male"><xsl:value-of select="sex/."/></div>
        </xsl:when>
        <xsl:otherwise>
          <div class="sex female"><xsl:value-of select="sex/."/></div>
        </xsl:otherwise>
      </xsl:choose>
    </div>
  </xsl:template>

  <xsl:template match="account/username">
    <div class="small bold">USERNAME:</div>
    <div class="large"><xsl:value-of select="." /></div>
  </xsl:template>

  <xsl:template match="account">
    <div class="small bold">ACCOUNT TYPE:</div>
    <div class="colored">
      <xsl:choose>
        <xsl:when test="userlevel='1'">
          <div class="nml bold">Normal User</div>
        </xsl:when>
        <xsl:when test="userlevel='2'">
          <div class="vol bold">Volunteer</div>
        </xsl:when>
        <xsl:when test="userlevel='3'">
          <div class="org bold">Organiser</div>
        </xsl:when>
        <xsl:otherwise>
          <div class="name adm bold">Administrator</div>
        </xsl:otherwise>
      </xsl:choose>
    </div>
  </xsl:template>
</xsl:stylesheet>

and some of my XML:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<?xml-stylesheet type="text/xsl" href="users.xsl"?>
<folktask xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
          xs:noNamespaceSchemaLocation="folktask.xsd">
  <member>
    <user id="1">
      <personal>
        <name>Abbie Hunt</name>
        <sex>Female</sex>
        <address1>108 Access Road</address1>
        <address2></address2>
        <city>Wells</city>
        <county>Somerset</county>
        <postcode>BA5 8GH</postcode>
        <telephone>01528927616</telephone>
        <mobile>07085252492</mobile>
        <email>[email protected]</email>
      </personal>
      <account>
        <username>AdRock</username>
        <password>269eb625e2f0cf6fae9a29434c12a89f</password>
        <userlevel>4</userlevel>
        <signupdate>2010-03-26T09:23:50</signupdate>
      </account>
    </user>
    <volunteer id="1">
      <roles></roles>
      <region>South West</region>
    </volunteer>
  </member>
  <member>
    <user id="2">
      <personal>
        <name>Aidan Harris</name>
        <sex>Male</sex>
        <address1>103 Aiken Street</address1>
        <address2></address2>
        <city>Chichester</city>
        <county>Sussex</county>
        <postcode>PO19 4DS</postcode>
        <telephone>01905149894</telephone>
        <mobile>07784467941</mobile>
        <email>[email protected]</email>
      </personal>
      <account>
        <username>AmbientExpert</username>
        <password>8e64214160e9dd14ae2a6d9f700004a6</password>
        <userlevel>2</userlevel>
        <signupdate>2010-03-26T09:23:50</signupdate>
      </account>
    </user>
    <volunteer id="2">
      <roles>Van Driver,gas Fitter</roles>
      <region>South Central</region>
    </volunteer>
  </member>
</folktask>
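
A likely explanation, offered as a suggestion rather than a confirmed diagnosis: inside the for-each the context node is already a user element, so <xsl:apply-templates/> with no select attribute applies templates to user's children (personal and account). Only the match="personal" and match="account" templates then fire, which is exactly why just the SEX and ACCOUNT TYPE markup appears; the match="folktask/member/user" template is never invoked. Selecting the context node explicitly makes it run:

<xsl:for-each select="folktask/member/user">
  <div class="userdiv">
    <xsl:apply-templates select="."/>
  </div>
</xsl:for-each>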

    Read the article

  • Posting XML form data to a RESTful Server with Javascript or PHP

    - by pjs-worker
    Hi folks, I've been given the task of posting to a RESTful server. I'm new to official "REST" but I've played with the concept before. However, this time I have an XML payload example file that I am supposed to post, and I'm struggling to figure out how the two relate. Can you help? Right now I can post to a specific site, say www.pcpost.com/schema/Application, and I can generate the URL for the initial request, i.e.: postApplication?userid=4&... Being relatively new to web programming, I find that I don't know how to take the following and interface it with the server. I'm at least familiar with JavaScript and PHP. If this is impossible with those two languages, I can learn whatever would be best. Thanks for your help on this. C

<?xml version="1.0" ?>
<Application xmlns="http://www.pcpost.com/schema/Application"
             SchemaVersion="1.0" ProgramId="8" ApplicationDate="2009-08-29">
  <Vendors>
    <Vendor Role="Applicant" Company="Test Company" Contact="Smith, John"/>
    <Vendor Role="Seller" Company="Test Company" Contact="Doe, Jane"/>
    <Vendor Role="Installer" Company="Test Company" Contact="Funk, Carl"/>
  </Vendors>
  <Participants>
    <Participant TaxStatus="Individual" Sector="Commercial">
      <Roles>
        <Role>Host Customer</Role>
      </Roles>
    </Participant>
  </Participants>
</Application>
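
As a sketch of one common approach in PHP, using the cURL extension to POST the payload as a raw XML body (the endpoint URL and content type here are assumptions, since the real service's requirements aren't shown):

<?php
// Sketch: POST an XML payload to a REST endpoint with PHP's cURL extension.
// The URL and Content-Type are assumptions for illustration.
$xml = file_get_contents('payload.xml'); // the example payload above

$ch = curl_init('http://www.pcpost.com/schema/Application');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/xml'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    echo $response; // the server's reply
}
curl_close($ch);
?>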

    Read the article

  • XSL check param length

    - by AdRock
    I need to check whether a param has a value in it: if it has, then do this line; otherwise do that line. I've got it working to the point where I don't get errors, but it's not taking the right branch. Here is the call in the XSL stylesheet that passes the param to the template holding the HTML:

<xsl:call-template name="volunteer_role">
  <xsl:with-param name="volrole" select="volunteer/roles" />
</xsl:call-template>

and here is the template where there is a choice of which branch to take depending on whether the param is empty:

<xsl:template name="volunteer_role">
  <xsl:param name="volrole" select="'Not Available'" />
  <div class="small bold">ROLES:</div>
  <div class="large">
    <xsl:choose>
      <xsl:when test="string-length($volrole)!=0">
        <xsl:value-of select="$volrole" />
      </xsl:when>
      <xsl:otherwise>
        <xsl:text> </xsl:text>
      </xsl:otherwise>
    </xsl:choose>
  </div>
</xsl:template>
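
One thing worth checking, as a possibility rather than a confirmed diagnosis: if volunteer/roles exists but contains only whitespace, string-length($volrole) is non-zero and the first branch fires with blank output. Testing against normalize-space() treats whitespace-only content as empty:

<xsl:when test="normalize-space($volrole) != ''">
  <xsl:value-of select="$volrole" />
</xsl:when>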

    Read the article

  • Kerberos and/or other authentication systems - One time logon for all PHP scripts

    - by devviedev
    I'm managing a set of web apps, almost exclusively written in PHP, and would like to find an authentication platform to build a role-based authorization system on top of. I'd also like the authentication system to be extensible for use with, for example, system services (SSH, etc.). Here are some of the main characteristics I'm looking for, in order of importance:

- Easy PHP implementation (easily storing/reading roles, etc.)
- Redundant, if possible: if one auth server goes down, users are not locked out
- Has clients for Windows and Mac
- Easy web-based administration (adding/removing users/roles, changing passwords); if not, I can build an administration system without too much effort
- One-time logon

I'd also like, when an auth token is issued, to store the user's IP address and use that to authorize the user for some non-web-based applications. For that reason, I'd like a desktop client to issue the token and revoke tokens when, for example, the user becomes idle at their workstation. I'm thinking Kerberos might be a solution, but what are other options?

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >