Search Results

Search found 2624 results on 105 pages for 'andrew grant'.


  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle
    Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures.
    It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
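    For readers who want to try the online DDL behavior described above ahead of the sessions, here is a minimal sketch - assuming a MySQL 5.6 server, MySQL Connector/J on the classpath, and a hypothetical orders table and credentials - of an index build that leaves the table fully readable and writable while it runs:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class OnlineDdlSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical host, schema, and credentials; adjust to your environment.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "app_user", "app_password");
                     Statement stmt = conn.createStatement()) {
                    // MySQL 5.6 online DDL: build the index in place and keep
                    // concurrent DML flowing against the table while it runs.
                    stmt.execute("ALTER TABLE orders ADD INDEX idx_customer (customer_id), "
                            + "ALGORITHM=INPLACE, LOCK=NONE");
                }
            }
        }

    A useful property of the LOCK=NONE clause is that if the server cannot perform a given operation without locking, the statement fails immediately rather than silently taking a lock - a handy guard in the busy production environments the session describes.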

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management since there’s a lot going on. Here’s the line-up for today:

    Wednesday, October 3, 2012

    CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus
    10:15 a.m. – 11:15 a.m., Moscone West 3008
    Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how customers are using enterprise single sign-on to extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.

    CON9494: Sun2Oracle: Identity Management Platform Transformation
    11:45 a.m. – 12:45 p.m., Moscone West 3008
    Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

    CON9631: Entitlement-centric Access to SOA and Cloud Services
    11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
    How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter.

    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
    11:45 a.m. – 12:45 p.m., Moscone West 3003
    In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    CON9493: Identity Management and the Cloud
    1:15 p.m. – 2:15 p.m., Moscone West 3008
    Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.

    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
    3:30 p.m. – 4:30 p.m., Moscone West 3008
    As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

    CON9625: Taking Control of WebCenter Security
    5:00 p.m. – 6:00 p.m., Moscone West 3008
    Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users.
    Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.

    EVENTS:

    Identity Management Customer Advisory Board
    2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room
    This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates.

    Identity Management Meet & Greet Networking Event
    3:30 p.m. – 4:30 p.m., Meeting Session
    4:30 p.m. – 5:30 p.m., Cocktail Reception
    Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco
    The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with other Identity Management customers.

    For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Given his team’s role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, Stuart’s insights were a true value-add at yesterday’s webcast.

    And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the focus of the discussion will be the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details:

    Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics
    Tuesday, November 29, 2011, 10:00 a.m. PT / 1:00 p.m. ET
    Register Today

    The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn’t have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions.

    Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files?
    A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics’ import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager.

    Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics?
    A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvement in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allows for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity.

    Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today?
    A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program!
    Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future.

    Q. Would you be able to use this to help manage licenses for software? That is to say - could it track software that is not used by a user, thus eliminating the software license?
    A. OIA’s integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on the company’s policies, this could trigger an automated workflow for account deletion or a request for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when and the current status.

    Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification?
    A. OIA’s identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user’s risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts.

    Q. How does Oracle Identity Analytics work with Cloud Security?
    A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements without hosting those in-house, and instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as you would in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways.

    Q. Would you recommend this as a cost-effective solution for a smaller organization with about 2,500 users?
    A. The key return-on-investment (ROI) on Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes and minimizing audit exposures and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and potential ROI calculation, we recommend you refer to the Forrester Study on Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • Wildcards!

    - by Tim Dexter
    Yes, it's been a while, I'm sorry, mumble, mumble ... no excuses. Well, other than it's been, as my son would say, 'hecka busy.' On a brighter note I see Kan has been posting some cool stuff in my absence, long may he continue! I received a question today asking about using a wildcard in a template, something like:

    <?if:INVOICE = 'MLP*'?>

    where * is the wildcard. Well, that particular try does not work, but you can do it without building your own wildcard function. XSL, the underpinning language of the RTF templates, has some useful string functions - you can find them listed here. I used the starts-with function to achieve a simple wildcard scenario, but contains can be used in conjunction with some of the others to build something more sophisticated. Assume I have a list of friends and the amounts of money they owe me ... I'm very generous and my interest rates are pretty competitive :0)

    <ROWSET>
      <ROW> <NAME>Andy</NAME> <AMT>100</AMT> </ROW>
      <ROW> <NAME>Andrew</NAME> <AMT>60</AMT> </ROW>
      <ROW> <NAME>Aaron</NAME> <AMT>50</AMT> </ROW>
      <ROW> <NAME>Alice</NAME> <AMT>40</AMT> </ROW>
      <ROW> <NAME>Bob</NAME> <AMT>10</AMT> </ROW>
      <ROW> <NAME>Bill</NAME> <AMT>100</AMT> </ROW>
    </ROWSET>

    Now, listing my friends is easy enough: <for-each:ROW> <NAME> <AMT> <end for-each>. But let's say I just want to see all my friends beginning with 'A'. To do that I can use an XPATH expression to filter the data and tack it on to the for-each expression. This is more efficient than using an 'if' statement just inside the for-each.

    <?for-each:ROW[starts-with(NAME,'A')]?>

    will find me all the A's. The square braces denote the start of the XPATH expression. starts-with is the function I'm calling, and I'm passing the value I want to check, i.e. NAME, and the string I'm looking for. Just substitute in the characters you are looking for. You can of course use the function in an if statement too.

    <?if:starts-with(NAME,'A')?><?attribute@incontext:color;'red'?><?end if?>

    Notice I removed the square braces; this will highlight the text red if the name begins with an 'A'. You can even use the function to do conditional calculations:

    <?sum (AMT[starts-with(../NAME,'A')])?>

    Sum only the amounts where the name begins with an 'A'. Notice the square braces are back; it's a function we want to apply to the AMT field. Also notice that we need to use ../NAME. The AMT and NAME elements are at the same level in the tree, so when we are at the AMT level we need the ../ to go up a level and then come back down to test the NAME value. I have built out the above functions in a sample template here. Huge prizes for the first person to come up with a 'true' wildcard solution, i.e. if NAME like '*im*exter*' - demand cash now!
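    If you want to play with these XPath functions outside of an RTF template, a quick standalone sketch using the JDK's built-in XPath 1.0 engine behaves the same way against a trimmed version of the data above (class and variable names here are just for illustration):

        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class WildcardSketch {
            public static void main(String[] args) throws Exception {
                String xml = "<ROWSET>"
                        + "<ROW><NAME>Andy</NAME><AMT>100</AMT></ROW>"
                        + "<ROW><NAME>Andrew</NAME><AMT>60</AMT></ROW>"
                        + "<ROW><NAME>Bob</NAME><AMT>10</AMT></ROW>"
                        + "</ROWSET>";
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                XPath xpath = XPathFactory.newInstance().newXPath();

                // Same filter as the for-each above: rows whose NAME starts with 'A'.
                NodeList rows = (NodeList) xpath.evaluate(
                        "/ROWSET/ROW[starts-with(NAME,'A')]", doc, XPathConstants.NODESET);
                System.out.println(rows.getLength() + " rows start with A"); // 2

                // Same conditional sum: only the amounts owed by the A-names.
                double total = (Double) xpath.evaluate(
                        "sum(/ROWSET/ROW[starts-with(NAME,'A')]/AMT)", doc, XPathConstants.NUMBER);
                System.out.println("A-names owe " + total); // 160.0

                // contains() is the building block for a poor man's '*im*exter*'.
                NodeList like = (NodeList) xpath.evaluate(
                        "/ROWSET/ROW[contains(NAME,'nd')]", doc, XPathConstants.NODESET);
                System.out.println(like.getLength() + " rows contain 'nd'"); // Andy, Andrew
            }
        }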

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar where the first of its kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. The group will be known as the Arab HEUG going forward, and I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as one from Qatar University on their use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university’s requirements.

    In a few weeks time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future, including cloud approaches to ERP (HCM, Finance, and Student Information Systems), along with some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research and also budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion.

    I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15 entitled, “As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors.” It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let’s not kid ourselves – the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
    In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it’s bordering on irresponsible to continue to pursue open-source ERP. Many of the adopters’ total costs are staggering, with little to show for the effort and expended resources.

    The second article was recently in the Chronicle of Higher Education and was entitled “’Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders.” This one was so outrageous I almost don’t want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama’s former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he’s a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah… right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I’ll grant Mr. Reed that “Big Data” is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to be producing far more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

    Read the article

  • Walmart and Fusion Apps

    - by ultan o'broin
    Photograph: Misha Vaughan

    I attended Fusion Apps (yes, I know I am supposed to say "Oracle Fusion Applications", but stuffy old style guides are a turn-off in interwebs conversations) User Experience Advocate (FXA) training in Long Beach, California last week; a suitable location as ODTUG KSCOPE 11 was kicking off and key players were in the area. As a member of Oracle's Apps-UX team I know the Fusion Apps messaging, natch, and have done some other Fusion Apps go-to-market content work too. For the messaging details themselves, see Lonneke Dikmans’ (@lonnekedikmans) great blog, by the way. However, I wanted some 'formal' training combined with the opportunity to meet and learn from people already out there delivering those messages. The idea in reaching out to Misha Vaughan, Apps-UX FXA maven, to get me onto this training was that in addition to my UX knowledge, I could leverage my location in EMEA and hit up customer events more quickly and easily. Those local user groups do like to hear the voice of locals too, you know (so I need to work on that mid-Atlantic accent). I'm looking forward to such opportunities.

    The training was all smashing stuff, just the right level of detail, delivered professionally and with great style and humor. I was especially honored to be paired off for my, er, coaching with Debra Lilley (@debralilley), who shared with everyone all kinds of tips and insights from her experiences of delivering the message and demo. For me, that was the real power of the FXA event--the communal, conversational aspect--the meeting up with people who had done all this for real, the sharing in their experiences, while learning along with other newbies. Sorry, but that all-important social aspect doesn't work so well with remote meetings. Katie Candland (Apps-UX) gave us a great tour of the Fusion Apps demo and included some useful presentational tips too (any excuse to buy that iPad). It's clear to me that the Fusion Apps messaging and demos really come alive with real-world examples that local application users will recognize, and I picked up some "yes, that's my job made easier" scene-stealers from Debra and Karen Brownfield too, to add to the great ones already provided. This power of examples shouldn't surprise anyone; they've long been a mainstay of applications user assistance, popular with users. We'll offer customers different types of example topics in the Fusion Apps online help too (stay tuned), and we know from research how important those 3S's (stories, scenarios, and simulations) are to users when they consume and apply information. Well, we've got the simulation, now it's time for more stories and scenarios. If you get a chance to participate in an FXA event (whether you are an Oracle employee or otherwise), I'd encourage it. It's committing your time and energy for sure, but I got real bang for the buck from it for my everyday job too. Listening to the room's feedback on the application demo really brought our internal design work to life, and I picked up on some things that I need to follow up on (like how you alphabetically sort stuff in other languages). User experience is, after all, about users. What will I be doing next, and what would I like to see happen? Obviously, I need to develop my story-telling links with the people I met in Long Beach and do some practicing with the materials, and then get out there and deliver them at a suitable location.
    The demo is what it is right now, and that's a super-rich demo that I know everyone will want to see and ask questions about. Then, as mentioned by attendees at the FXA event, follow up on those translated and localized messages for EMEA (and APAC) that deal with different statutory or reporting requirements of the target markets. Given my background I would say that, wouldn't I? However, language is part of the UX, and international revenue is greater than US-only revenue for Oracle, so yes dear, we all need to get over the fact that enterprise apps users don't all speak, or want to speak, American English. Most importantly perhaps, the continued development of a strong messaging community between Oracle and partners and customers where we can swap and share those FXA messaging stories and scenarios about Fusion Apps in a conversational way. The more the better: a combination of online and face-to-face meetings. I must also mention the great dinner after the event at Parker's Lighthouse, and the fun myself and Andrew Gilmour (Apps-UX) had at our end of the table talking about just about everything except Fusion Apps with Ronald Van Luttikhuizen and Ben Prusinski (who now understands the difference between Cork and Dublin people. I hope). Thanks to all the Apps-UXers who helped bring the FXA training to town, and to Debra and all the others that I am too jetlagged to mention right now who were instrumental in making it happen for me. Here's to the next one. And the Walmart angle? That was me doing my Robert Scoble (ScO'bilizer?)-style guerrilla smartphone research in Walmart in Long Beach, before the FXA event. It's all about stories for me. You can read more about it on the appslab blog (see the comments).

    Read the article

  • Announcing Solaris Technical Track at NLUUG Spring Conference on Operating Systems

    - by user9135656
    The Netherlands Unix Users Group (NLUUG) is hosting a full-day technical Solaris track during its spring 2012 conference. The official announcement page, including registration information, can be found at the conference page.

    This year, the NLUUG spring conference focuses on the base of every computing platform: the Operating System. Hot topics like Cloud Computing and Virtualization; the massive adoption of mobile devices, which have their own special needs in the OS they run but at the same time put the challenge of massive scalability onto the internet; the rise of multi-core and multi-threaded chips... all these developments make the Operating System still a very interesting area where all kinds of innovations have taken and are taking place.

    The conference will focus specifically on: Linux, BSD Unix, AIX, Windows and Solaris. The keynote speech will be delivered by Jon 'maddog' Hall, famous promoter and supporter of UNIX-based Operating Systems. He will talk the audience through several decades of Operating Systems developments, and share many stories untold so far. To make the conference even more interesting, a variety of talks is offered in 5 parallel tracks, covering new developments in, and collaboration between, Linux, the BSDs, AIX, Solaris and Windows.

    The full-day Solaris technical track covers all innovations that have been delivered in Oracle Solaris 11. Deeply technically skilled presenters will talk on a variety of topics. Each topic will first be introduced at a basic level, enabling visitors to attend the presentations individually. Attending the full day will give the audience a comprehensive overview as well as a more in-depth understanding of the most important new features in Solaris 11.

    NLUUG Spring Conference details:

    * Date and time:
      When: April 11, 2012
      Start: 09:15 (doors open: 8:30)
      End: 17:00 (drinks and snacks served afterwards)
    * Venue:
      Nieuwegein Business Center
      Blokhoeve 1
      3438 LC Nieuwegein
      The Netherlands
      Tel: +31 (0)30 - 602 69 00
      Fax: +31 (0)30 - 602 69 01
      Email: [email protected]
      Route: description - (PDF, Dutch only)
    * Conference abstracts and speaker info can be found here.
    * Agenda for the Solaris track (talks are 45 minutes each and will be in English unless marked with 'NL'):
      1. Insights to Solaris 11 - Joerg Moellenkamp, Solaris Technical Specialist, Oracle Germany
      2. Lifecycle management with Oracle Solaris 11 - Detlef Drewanz, Solaris Technical Specialist, Oracle Germany
      3. Solaris 11 Networking - Crossbow Project - Andrew Gabriel, Solaris Technical Specialist, Oracle UK
      4. ZFS: Data Integrity and Security - Darren Moffat, Senior Principal Engineer, Solaris Engineering, Oracle UK
      5. Solaris 11 Zones and Immutable Zones (NL) - Casper Dik, Senior Staff Engineer, Software Platforms, Oracle NL
      6. Experiencing Solaris 11 (NL) - Patrick Ale, UNIX Technical Specialist, UPC Broadband, NL

    There will be a "Solaris Meeting point" during the conference where people can meet up, chat with the speakers and with fellow Solaris enthusiasts, and where live demos or other hands-on experiences can be shared. The official announcement page, including registration information, can be found at the conference page on the NLUUG website.
    This site also has a complete list of all abstracts for all talks. Please register on the NLUUG website.

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase

    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same, and an exception is thrown.

    In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

    public class UntrustedPlugin : MarshalByRefObject, IPlugin
    {
        // implements IPlugin.MethodToDoThings()
        public void MethodToDoThings()
        {
            throw new EvilException();
        }
    }

    [Serializable]
    internal class EvilException : Exception
    {
        public override string ToString()
        {
            // show we have read access to C:\Windows
            // by printing the first 5 directory names
            Console.WriteLine("Pwned! Mwuahahah!");
            foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
            {
                Console.WriteLine(d);
            }
            return base.ToString();
        }
    }

    And in the controlling assembly:

    // what can possibly go wrong?
    AppDomainSetup appDomainSetup = new AppDomainSetup
    {
        ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
    };

    // only grant permissions to execute
    // and to read the application base, nothing else
    PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
    restrictedPerms.AddPermission(
        new SecurityPermission(SecurityPermissionFlag.Execution));
    restrictedPerms.AddPermission(
        new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
    restrictedPerms.AddPermission(
        new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

    // create the sandbox
    AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

    // execute UntrustedPlugin in the sandbox
    // don't crash the application if the sandbox throws an exception
    IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
    try
    {
        o.MethodToDoThings();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }

    And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly’s appdomain (as it’s marked as Serializable). When the exception is deserialized, the CLR needs to load Sandboxed.dll to find the EvilException type. Since the controlling appdomain’s ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into the full-trust appdomain, and the evil code is executed. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s; it’s that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism.
    The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don’t allow the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain

    As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application’s appdomain. The code in assemblies in the reflection-only context can’t be executed, it can only be reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions

    You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

    [SecuritySafeCritical]
    public static string GetFileText(string filePath)
    {
        new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
        return File.ReadAllText(filePath);
    }

    Be careful when asserting permissions, and ensure you’re not providing a loophole sandboxed dlls can use to gain access to things they shouldn’t be able to.

    Conclusion

    Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop is to allow the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process and allows the SQL Job to be linked back into BizTalk processing.

    The Process Trigger XML looks like the following. It is basically the configuration hub of the business process:

    <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
      <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
      <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
      <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
      <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
      <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
      <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
      <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
      <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
      <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
      <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
      <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
      <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
      <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
    </ns0:MsgSchedulerTriggerSQLJobReceive>

    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision made is on the "IsPermissionRequired" field. If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:

    SQLJobEmailMessage.EmailBody
        = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
            MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
            "<br><br>" +
            "By moving the file, you are either giving permission to the process, or disapprove of the process." +
            "<br>" +
            "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +
            "\"<br>" +
            "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
            "<br><br>" +
            "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
            "<br><br>" +
            "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
            "<br><br>" +
            "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +
            "<br>" +
            "Thank you!"
        );
    SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
    SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
    SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
    SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
    SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
    SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
    SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
    SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;

    After the Permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field.

    <?xml version="1.0" encoding="utf-8"?><ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething"><SQLJobName>Data Push</SQLJobName><CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid></ns0:SQLJobAuthorizationTrigger>

    The end user (the human intervention piece) will either grant permission for this process or deny it by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape is waiting for either response.

    The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, it simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer to check the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the sp starts the job and BizTalk sends out an email. And of course, some error notification.

    Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be re-tried. There are many possibilities here.

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private...

    Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our Servers, Storage, OS, and Virtualization Technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose that they operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database

    This is also a major announcement made today, on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE consolidation limiting factor. The principle can be compared to Solaris Zones: you will have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases in this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as this is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database

    Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as this is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing that we did with X3; at the same time we have upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, with 512GB per node; and the new Flash Fire card, bringing now up to 22 TB of Flash cache. All of this - 4TB of RAM plus 22TB of Flash - is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. But what does this extra performance bring you on an already very efficient system? Double your performance compared to the fastest storage array on the market today (including flash), and divide your storage price by 10 at the same time... Something to consider closely these days...
    Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing that we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And as such, Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the Database with Exadata was to accelerate the Database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Group Conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for Data Analytics. In conclusion, one picture to try to size Oracle Open World... and you can understand why, with such rich content... and this is only the first day!
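    For readers curious what the multitenant announcement above will look like from client code, here is a rough sketch using the syntax that the eventually released Oracle Database 12c exposes - the host, service, and credentials are hypothetical, and it assumes a privileged user on the container database with Oracle Managed Files configured so no file-name mapping clause is needed:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class PdbSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection to the root of a container database (CDB).
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//cdbhost:1521/CDB1", "c##admin", "secret");
                     Statement stmt = conn.createStatement()) {
                    // The pluggable database shares the container's memory and
                    // background processes - that sharing is the consolidation
                    // win described in the announcement.
                    stmt.execute("CREATE PLUGGABLE DATABASE pdb1 "
                            + "ADMIN USER pdbadmin IDENTIFIED BY pdbsecret");
                    stmt.execute("ALTER PLUGGABLE DATABASE pdb1 OPEN");
                }
            }
        }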

    Read the article

  • Using EUSM to manage EUS mappings in OUD

    - by Sylvain Duloutre
    EUSM is a command-line tool that can be used to manage the EUS settings starting with the 11.1 release of Oracle. In the 11.1 release the tool is not yet documented in the Oracle EUS documentation, but this is planned for a coming release. The same commands used by EUSM can be performed from the Database Console GUI or from Grid Control. For more details, search for document ID 1085065.1 on OTN. The examples below don't include all the EUSM options, only the options used with EUS. EUSM is user friendly and intuitive: typing eusm help <option> lists the parameters to be used for any of the available options.

    Here are the options related to connectivity with OUD:
    ldap_host="gnb.fr.oracle.com" - name of the OUD server
    ldap_port=1389 - non-SSL (SASL) port used for OUD connections
    ldap_user_dn="cn=directory manager" - OUD administrator name
    ldap_user_password="welcome1" - OUD administrator password

    Find below the common commands.

    To list enterprise roles in OUD:
    eusm listEnterpriseRoles domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn=<oud administrator> ldap_user_password=<oud admin password>

    To list mappings:
    eusm listMappings domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn=<oud admin> ldap_user_password=<oud admin password>

    To list enterprise role info:
    eusm listEnterpriseRoleInfo enterprise_role=<rdn of enterprise role> domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password>

    To create an enterprise role:
    eusm createRole enterprise_role=<rdn of the enterprise role> domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password>

    To create a user-schema mapping:
    eusm createMapping database_name=<SID of target database> realm_dn="<realm>" map_type=<ENTRY/SUBTREE> map_dn="<dn of enterprise user>" schema="<name of the shared schema>" ldap_host=<oud hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password="<oud admin password>"

    To create a proxy permission:
    eusm createProxyPerm proxy_permission=<Name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password>

    To grant a proxy permission to a proxy group:
    eusm grantProxyPerm proxy_permission=<Name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<password> group_dn="<dn of the enterprise group>"

    To map a proxy permission to a proxy user in the DB:
    eusm addTargetUser proxy_permission=<Name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password> database_name=<SID of the target database> target_user=<target database user> dbuser=<Database user with DBA privileges> dbuser_password=<database user password> dbconnect_string=<database_host>:<port>:<DBSID>

    To map an enterprise role to a global role:
    eusm addGlobalRole enterprise_role=<rdn of the enterprise role> domain_name=<Domain> realm_dn="<realm>" database_name=<SID of the target database> global_role=<name of the global role defined in the target database> dbuser=<database user> dbuser_password=<database user password> dbconnect_string=<database_host>:<port>:<DBSID> ldap_host=<oud hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password>
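
    For instance, with the sample connectivity values above, listing the enterprise roles might look like the following (the domain and realm names here are only illustrative, they are not from the original post):

    eusm listEnterpriseRoles domain_name=OracleDefaultDomain realm_dn="dc=fr,dc=oracle,dc=com" ldap_host=gnb.fr.oracle.com ldap_port=1389 ldap_user_dn="cn=directory manager" ldap_user_password=welcome1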

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, depending on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically, a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following steps describe how heterogeneous connections are created:
    1. At connection pool initialization, the physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and creating a new one is very expensive. If you can use a normal homogeneous pool or one of the lightweight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
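
    For illustration, here is a minimal sketch (not from the original article; the JNDI name and credentials are made up) of an application requesting a connection with an explicit DBMS credential from such a data source:

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import java.sql.Connection;

    public class IdentityPoolExample {
        public static void main(String[] args) throws Exception {
            // Look up a data source configured with
            // "Enable Identity Based Connection Pooling".
            Context ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource"); // illustrative JNDI name

            // With "use database credentials" enabled, these are used directly
            // as the DBMS credential; otherwise "scott" is first mapped through
            // the WebLogic credential map.
            try (Connection conn = ds.getConnection("scott", "tiger")) {
                // The pool returns an existing physical connection with a matching
                // DBMS credential, or creates/destroys one per the LRU rules above.
                System.out.println(conn.getMetaData().getUserName());
            }
        }
    }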

    Read the article

  • Accessing an Oracle DB via JDBC from Nashorn

    - by Homma
    This post shows how to use the JDBC API from JavaScript run on Nashorn to access an Oracle database. The original post in this series is at https://blogs.oracle.com/nashorn_ja/entry/nashorn_jdbc_1

    Environment: the database is Oracle 11.2.0.3.0 running on Oracle Linux 6.5, and the JDBC connection is made to a database on the same host.

    First, create a test database user:

    SQL> create user test identified by "test";
    SQL> grant connect, resource to test;

    Install Java 8. At the time of writing, the latest JDK was 8u5; install the RPM with yum:

    # yum install ./jdk-8u5-linux-x64.rpm

    Check that the JDK is installed:

    # java -version
    java version "1.8.0_05"
    Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    Add the JDK to the oracle user's PATH:

    $ vi ~/.bash_profile
    PATH=${PATH}:/usr/java/latest/bin
    export PATH
    $ . ~/.bash_profile

    Check that the jjs command (the Nashorn shell) works:

    $ jjs -fv
    nashorn full version 1.8.0_05-b13

    Next, use the JDBC driver from JavaScript. Write a small script that connects through oracle.jdbc.pool.OracleDataSource and prints the JDBC driver version, then run it with jjs, passing the JDBC driver JAR on the class path with -cp:

    $ vi version.js
    var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource");
    var ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl");
    var conn = ods.getConnection();
    var meta = conn.getMetaData();
    print("JDBC driver version is " + meta.getDriverVersion());

    $ jjs -cp ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar version.js
    JDBC driver version is 11.2.0.3.0

    The JavaScript code connects over JDBC and prints the driver version (11.2.0.3.0). Java.type() returns a JavaClass object that can be instantiated with new, so Java classes can be used from JavaScript in much the same way as they are used from Java.

    The same steps work interactively: start jjs as a shell and type the statements one at a time.

    $ jjs -cp ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar
    jjs> var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource");
    jjs> var ods = new OracleDataSource();
    jjs> ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl");
    null
    jjs> var conn = ods.getConnection();
    jjs> var meta = conn.getMetaData();
    jjs> print("JDBC driver version is " + meta.getDriverVersion());
    JDBC driver version is 11.2.0.3.0

    Again the JDBC driver version (11.2.0.3.0) is printed. Being able to explore the JDBC API interactively like this makes it easy to experiment against the database. References:

    Oracle Database JDBC Developer's Guide 11g Release 2 (11.2)
    The jjs command
    Nashorn User's Guide
    Java Scripting Programmer's Guide
    Oracle Nashorn: A Next-Generation JavaScript Engine for the JVM
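
    As a follow-on sketch (not in the original post), the same data source can be used to run a simple query from Nashorn; DUAL is Oracle's standard dummy table, so this should work against any Oracle instance reachable with the test user created above:

    var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource");
    var ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:test/test@localhost:1521:orcl");
    var conn = ods.getConnection();
    var stmt = conn.createStatement();
    // Query the standard dummy table; DUAL always exists in Oracle.
    var rs = stmt.executeQuery("select sysdate from dual");
    while (rs.next()) {
        print("Current date is " + rs.getString(1));
    }
    rs.close();
    stmt.close();
    conn.close();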

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM Server that I am rewriting to use C# .Net 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .Net objects. This has been done, and is working successfully regarding the implementation of the interfaces, and all of the calls are correctly being made from the old clients to the new .Net objects. However, I'm having problems obtaining the identity of the calling user from the DCOM Client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server...

    [DllImport("ole32.dll")]
    static extern int CoImpersonateClient();

    [DllImport("ole32.dll")]
    static extern int CoRevertToSelf();

    private string CallingUser
    {
        get
        {
            string sCallingUser = null;
            if (CoImpersonateClient() == 0)
            {
                WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
                if (wp != null)
                {
                    WindowsIdentity wi = wp.Identity as WindowsIdentity;
                    if (wi != null && !string.IsNullOrEmpty(wi.Name))
                        sCallingUser = wi.Name;
                }
                if (CoRevertToSelf() != 0)
                    ReportWin32Error("CoRevertToSelf");
            }
            else
                ReportWin32Error("CoImpersonateClient");
            return sCallingUser;
        }
    }

    private static void ReportWin32Error(string sFailingCall)
    {
        Win32Exception ex = new Win32Exception();
        Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
    }

    When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after 3 or 4 different users have successfully made calls (and it varies, so I can't be more specific), further users seem to be identified as users who had made earlier calls. What I have noticed is that the first few users have their DCOM calls handled on their own thread (i.e. all calls from a particular client are handled by a single unique thread), and then subsequent users are being handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread. To illustrate:

    User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
    User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
    User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
    User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)

    As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry. Is there something that I am doing wrong? Are there any caveats or restrictions on using impersonations in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do? All help would be greatly appreciated. Regards, Andrew
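
    One avenue worth checking (an assumption on my part, not something stated in the question): Thread.CurrentPrincipal is a per-thread .NET property that is not updated by CoImpersonateClient, so on a reused RPC worker thread it can retain a stale identity, whereas WindowsIdentity.GetCurrent() reads the thread's current Windows access token, which does reflect COM impersonation. A minimal sketch of that variant, reusing the CoImpersonateClient/CoRevertToSelf declarations from the question:

    using System.Security.Principal;

    private string CallingUserViaToken
    {
        get
        {
            string callingUser = null;
            if (CoImpersonateClient() == 0)
            {
                // Read the impersonation token directly instead of relying on
                // Thread.CurrentPrincipal, which COM impersonation does not update.
                WindowsIdentity wi = WindowsIdentity.GetCurrent();
                if (wi != null && !string.IsNullOrEmpty(wi.Name))
                    callingUser = wi.Name;
                if (CoRevertToSelf() != 0)
                    ReportWin32Error("CoRevertToSelf");
            }
            else
            {
                ReportWin32Error("CoImpersonateClient");
            }
            return callingUser;
        }
    }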

    Read the article

  • Facebook Developer ToolKit: How should I construct this app?

    - by j0nscalet
    I have created a simple desktop application that I want to use to post status updates for the users of my app. Here's the kicker, though, that I am having trouble figuring out: the desktop application runs as part of a batch process every night, in which I update the status of certain users. I use the following code to accomplish this (it comes directly from the FDK samples):

    public FriendViewer()
    {
        InitializeComponent();
        facebookService1.ApplicationKey = "Key";
        facebookService1.Secret = "Secret";
        facebookService1.SessionKey = "Session key";
        facebookService1.IsDesktopApplication = true;
    }

    private void TestService_Load(object sender, EventArgs e)
    {
        try
        {
            if (!facebookService1.API.users.hasAppPermission(facebook.Types.Enums.Extended_Permissions.status_update))
                facebookService1.GetExtendedPermission(facebook.Types.Enums.Extended_Permissions.status_update);
            if (!facebookService1.API.users.hasAppPermission(facebook.Types.Enums.Extended_Permissions.offline_access))
                facebookService1.GetExtendedPermission(facebook.Types.Enums.Extended_Permissions.offline_access);

            long uid = facebookService1.users.getLoggedInUser();
            facebook.Schema.user user = facebookService1.users.getInfo(uid);
            facebookService1.users.setStatus("Facebook Syndicator rules!");
            MessageBox.Show(String.Format("Status set for {0} {1}", user.first_name, user.last_name));
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
            Close();
        }
    }

    My users' day-to-day activity is done via a website front end. Since I don't have any user interaction in a nightly batch process, I cannot use the ConnectToFaceBook method on the FaceBookService to obtain a session key for the user. Ideally I would like to prompt for authorization and extended permissions for my desktop app when a user logs into the web front end, then save the session key and uid in the database. At night, when my process runs, I would reference the session key and uid and update the user's status. I am finding myself fumbling between whether my app should be a web or desktop app. Having both a web and a desktop app would be confusing to my users, because they would have to grant/manage permissions for both apps. Am I looking at this the wrong way? Any help would be greatly appreciated! Thanks.
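
    For what it's worth, a minimal sketch of the flow described in the question - capture the session key once at web login, then reuse it in the nightly batch. SaveSessionKey and LoadSessionKey are hypothetical database helpers, not FDK APIs; only the facebookService1 members shown in the question are used:

    // At web login, after the user has granted status_update and offline_access:
    long uid = facebookService1.users.getLoggedInUser();
    SaveSessionKey(uid, facebookService1.SessionKey); // hypothetical DB helper

    // In the nightly batch, with no user interaction:
    facebookService1.ApplicationKey = "Key";
    facebookService1.Secret = "Secret";
    facebookService1.IsDesktopApplication = true;
    facebookService1.SessionKey = LoadSessionKey(uid); // hypothetical DB helper
    facebookService1.users.setStatus("Facebook Syndicator rules!");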

    Read the article

  • Mysql password hashing method old vs new

    - by The Disintegrator
    I'm trying to connect to a MySQL server at Dreamhost from a PHP script located on a server at Slicehost (two different hosting companies). I need to do this so I can transfer new data from Slicehost to Dreamhost. Using a dump is not an option because the table structures are different and I only need to transfer a small subset of data (100-200 daily records). The problem is that I'm using the new MySQL password hashing method at Slicehost, and Dreamhost uses the old one, so I get:

    $link = mysql_connect($mysqlHost, $mysqlUser, $mysqlPass, FALSE);
    Warning: mysql_connect() [function.mysql-connect]: OK packet 6 bytes shorter than expected
    Warning: mysql_connect() [function.mysql-connect]: mysqlnd cannot connect to MySQL 4.1+ using old authentication
    Warning: mysql_query() [function.mysql-query]: Access denied for user 'nodari'@'localhost' (using password: NO)

    Facts:
    - I need to continue using the new method at Slicehost and I can't use an older PHP version/library.
    - The database is too big to transfer it every day with a dump.
    - Even if I did this, the tables have different structures.
    - I need to copy only a small subset of it, on a daily basis (only the changes of the day, 100-200 records).
    - Since the tables are so different, I need to use PHP as a bridge to normalize the data.
    - Already googled it.
    - Already talked to both support staffs.

    The more obvious option to me would be to start using the new MySQL password hashing method at Dreamhost, but they will not change it, and I'm not root, so I can't do this myself. Any wild idea? By VolkerK's suggestion:

    mysql> SET SESSION old_passwords=0;
    Query OK, 0 rows affected (0.01 sec)

    mysql> SELECT @@global.old_passwords,@@session.old_passwords, Length(PASSWORD('abc'));
    +------------------------+-------------------------+-------------------------+
    | @@global.old_passwords | @@session.old_passwords | Length(PASSWORD('abc')) |
    +------------------------+-------------------------+-------------------------+
    |                      1 |                       0 |                      41 |
    +------------------------+-------------------------+-------------------------+
    1 row in set (0.00 sec)

    The obvious thing now would be to run SET GLOBAL old_passwords=0; but I need the SUPER privilege to do that and they won't give it to me. If I run the query

    SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');

    I get the error

    ERROR 1044 (42000): Access denied for user 'nodari'@'67.205.0.0/255.255.192.0' to database 'mysql'

    I'm not root... The guy at Dreamhost support insists that the problem is at my end. But he said he will run any query I tell him, since it's a private server. So I need to tell this guy EXACTLY what to run. Would telling him to run

    SET SESSION old_passwords=0;
    SET GLOBAL old_passwords=0;
    SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');
    GRANT ALL PRIVILEGES ON *.* TO nodari@HOSTNAME IDENTIFIED BY 'new password';

    be a good start?
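
    As a sanity check before asking support to run anything, it may help to see which hash style the account currently carries; a minimal sketch, assuming a pre-5.7 server where mysql.user still has a Password column (old-style hashes are 16 characters long, new-style ones are 41):

    SELECT User, Host, LENGTH(Password) AS hash_len
    FROM mysql.user
    WHERE User = 'nodari';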

    Read the article

  • Suggest the best options to me to design the dynamic web interface using PHP MYSQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: the company currently has 5 branches and is planning to extend all over the country. It is an insurance surveying company dealing with 6 categories in the insurance domain:

    Engineering
    Fire
    Marine
    Motor
    Miscellaneous
    Risk Inspection

    The branches are named b1, b2, b3, b4, b5, and more as they extend, and the company has contracts with 22 companies. For each claim they assign a unique ID of the form contractcompany/category/serialno (a generation sketch follows at the end of this post). For example, taking contracted company names xxx, sss, zzz:

    xxx/Engineering/001
    sss/Engineering/001
    xxx/Engineering/002
    sss/Engineering/002
    xxx/Fire/001
    sss/Fire/001
    xxx/Fire/002
    and so on...

    This is the way they issue the unique ID for each claim. What I want is to develop the interface with PHP, MySQL and AJAX so that it can:

    - Auto-generate the unique ID for each claim.
    - Store full details of the claims with reference to the unique ID.
    - Show all claims on one page, viewable by branch and by category.
    - Send a monthly report (all claims and their status) to the contracted companies.
    - Give access to contracted companies, but each can view only its own claims.
    - Attach documents to each claim, uploaded by company users or the administrator and associated with the unique ID; contracted companies can view the files.
    - Give access to branches to enter new claims and update old claims.
    - Let the administrator create, update and delete all claims and their details.
    - Only the administrator can grant new users (own company branches / contracted companies).

    Finally, the panel is completely database-driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and Regards, Krishna. P [email protected]
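
    As referenced above, here is a minimal sketch of how the next unique claim ID might be generated in PHP (the table and column names are assumptions, not from the original post):

    <?php
    // Minimal sketch: compute the next claim ID for a company/category pair.
    // Assumes a claims table with company, category and serial_no columns.
    function next_claim_id(mysqli $db, $company, $category) {
        $stmt = $db->prepare(
            "SELECT COALESCE(MAX(serial_no), 0) + 1 FROM claims
             WHERE company = ? AND category = ?");
        $stmt->bind_param('ss', $company, $category);
        $stmt->execute();
        $stmt->bind_result($next);
        $stmt->fetch();
        $stmt->close();
        // Zero-pad the serial to three digits, e.g. xxx/Engineering/001
        return sprintf('%s/%s/%03d', $company, $category, $next);
    }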

    Read the article

  • PHP fopen returning null on files that work fine with include and get_file_contents

    - by brad allred
    Hi, I have XAMPP installed on a Windows 2000 server. Everything is working great except the PHP fopen function: I can neither create nor open files with it. The strange thing is that include/require/file_get_contents and other file-related functions work; also, fopen does not generate any errors or notices, it just returns NULL. I have gone as far as to grant full control of the file and all enclosing folders to Everybody, but I still get NULL instead of a file pointer. I have tried this on PHP 5.2.9, 5.2.13, and 5.3.1 with the same effect. I have gone through the php.ini file looking for something that is breaking it; I have even tried deleting it and using the basic ini file from a Linux box where fopen is working, and still nothing. I know I have to restart Apache after changing my ini, and I have been doing so (I have even restarted the server), so that's not it. At this point I am assuming it is an Apache configuration issue somehow; tomorrow I'm going to run a test through php-cli to make sure. I really don't want to bruise my head anymore over this - can some Apache/PHP wizard come to my aid?

    Hi guys, thanks for the responses. You are right, it is not a config problem. The problem has to be with one of my DLLs or one of my included files. I just tried the same code that isn't working in a new file without any includes, and with my custom libraries disabled, and it worked. For the record, here is what I was doing that wasn't working:

    $test_file = 'c:\\test.csv'; // Everybody has full control; file is very large.
    if (file_exists($test_file) && is_readable($test_file)) {
        $fp = fopen($test_file, 'r');
        echo var_export($fp, true); // outputs NULL; on my Linux box this is a resource number
        if ($fp !== false) {
            // do the work
            fread($fp, 10); // throws the error that $fp is not a valid file handle
        }
    }

    Something that I am including must be breaking fopen somehow. It works as expected in a new file with no includes.
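
    Since the code works with the custom libraries disabled, one way to pin down the culprit (a debugging sketch; the include names are placeholders for the actual libraries) is to probe fopen after each suspect include:

    <?php
    // Probe fopen() after each suspect include to find which one breaks it.
    function probe_fopen() {
        $fp = fopen('c:\\test.csv', 'r'); // known-readable test file
        $ok = is_resource($fp);
        if ($ok) {
            fclose($fp);
        }
        return $ok;
    }

    var_dump(probe_fopen());       // expect bool(true) before any includes
    require 'custom_lib_one.php';  // placeholder include name
    var_dump(probe_fopen());
    require 'custom_lib_two.php';  // placeholder include name
    var_dump(probe_fopen());       // the first bool(false) points at the offender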

    Read the article

  • How to make Flash 'play well with others'?

    - by Sensei James
    What up fam. So this isn't a question asking about memory management schemes; for those of you who may not know, the Flash Virtual Machine relies on garbage collection by using reference counting and mark and sweep (for good coverage of these topics, check out Grant Skinner's article and presentation). And yes, Flash also provides the "delete" operator, which can (unfortunately only) be used to remove the properties of dynamic objects. What I want to know is how to make it so that Flash programs don't continue to consume CPU and memory while running in the background (save loading content or communicating remotely, for example). The motivation for this question comes in part from Apple's ban on cross-compiled applications (in its SDK 4) on the grounds that they do not behave as predicted with the multitasking feature central to iPhone OS 4. My intention is not only to make Flash programs that will 'pass muster' as far as multitasking in iPhone OS 4, but also to simply make better (behaving) Flash programs. Put another way, how might a Flash application mimic the multitasking feature of iPhone OS 4? Does the Flash API provide the means for a developer to put their applications to 'sleep' while other programs run, and then to 'awaken' them just as quickly? In our own program, we might do something as crude as detecting when the user has been idle (no mouse motion or key press) for (say) four seconds:

    var idle_id:uint = setInterval(pause_program, 4000);
    var current_movie_clip:MovieClip; // tracked by our navigation code
    var current_frame:uint;
    ...
    // on Mouse move or key press...
    clearInterval(idle_id);
    idle_id = setInterval(pause_program, 4000);
    ...
    function pause_program():void
    {
        // called by the timer, so no event object is available here
        current_frame = current_movie_clip.currentFrame;
        current_movie_clip.stop();
        MovieClip(root).gotoAndStop("program_pause_screen");
    }

    (on the program pause screen)

    resume_button.addEventListener(MouseEvent.CLICK, resume_program);

    function resume_program(event:MouseEvent):void
    {
        current_movie_clip.gotoAndPlay(current_frame);
    }

    If that's the right idea, what's the best way to detect that an application should be shelved? And, more importantly, is it possible for Flash Player to detect that some of its running programs are idle, and to similarly shelve them until the user performs an action to resume them? (Please feel free to answer as much or as little of the many questions I've posed.)

    Read the article

  • Can I retain a Google apps session token permanently for a specific user who logs into my google app

    - by Ali
    Hi guys, is it possible to retain, upon authorization, a single session token for a user who signs into my Google application? Currently my application seems to every now and then require the user to re-authenticate into Google Apps. I think it has to do with the session dying out or so. I have the following code:

    function getCurrentUrl()
    {
        global $_SERVER;
        $php_request_uri = htmlentities(substr($_SERVER['REQUEST_URI'], 0, strcspn($_SERVER['REQUEST_URI'], "\n\r")), ENT_QUOTES);
        if (isset($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) == 'on') {
            $protocol = 'https://';
        } else {
            $protocol = 'http://';
        }
        $host = $_SERVER['HTTP_HOST'];
        if ($_SERVER['SERVER_PORT'] != '' &&
            (($protocol == 'http://' && $_SERVER['SERVER_PORT'] != '80') ||
             ($protocol == 'https://' && $_SERVER['SERVER_PORT'] != '443'))) {
            $port = ':' . $_SERVER['SERVER_PORT'];
        } else {
            $port = '';
        }
        return $protocol . $host . $port . $php_request_uri;
    }

    function getAuthSubUrl($n = false)
    {
        $next = $n ? $n : getCurrentUrl();
        $scope = 'http://docs.google.com/feeds/documents https://www.google.com/calendar/feeds/ https://spreadsheets.google.com/feeds/ https://www.google.com/m8/feeds/ https://mail.google.com/mail/feed/atom/';
        $secure = false;
        $session = true;
        //echo Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session);
        return Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session)
            . (isset($_SESSION['domain']) ? '&hd=' . $_SESSION['domain'] : '');
    }

    function _regenerate_token()
    {
        global $BASE_URL;
        if (empty($_SESSION['token'])) {
            if (isset($_GET['token'])) {
                $_SESSION['token'] = Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
                return;
            } else {
                _regenerate_sessions();
                _redirect(getAuthSubUrl($BASE_URL . '/index.php?' . $_SERVER['QUERY_STRING']));
            }
        }
    }

    _regenerate_token();

    I know I'm doing it all wrong here and I don't know why :( I have a CONSUMER SECRET code but only use it wherever I need to access a Google service. However, something is wrong with my authentication, as the user has to periodically 'grant access to my application' and re-authorize himself... help please.
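
    Since AuthSub session tokens obtained via getAuthSubSessionToken are long-lived, one likely fix (a sketch; save_session_token and load_session_token are hypothetical database helpers, and $current_user_id stands in for however the app identifies its user) is to persist the exchanged token per user instead of keeping it only in $_SESSION, so a lost PHP session doesn't force the user to re-authorize:

    <?php
    // Exchange the one-time token once, then persist the long-lived session
    // token; on later visits, restore it from the database instead of
    // redirecting the user back to Google for authorization.
    if (isset($_GET['token']) && empty($_SESSION['token'])) {
        $sessionToken = Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
        $_SESSION['token'] = $sessionToken;
        save_session_token($current_user_id, $sessionToken); // hypothetical DB helper
    } elseif (empty($_SESSION['token'])) {
        $saved = load_session_token($current_user_id);       // hypothetical DB helper
        if ($saved) {
            $_SESSION['token'] = $saved;
        }
    }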

    Read the article

  • What is a good dumbed-down, safe template system for PHP?

    - by Wilhelm
    (Summary: My users need to be able to edit the structure of their dynamically generated web pages without being able to do any damage.) Greetings, ladies and gentlemen. I am currently working on a service where customers from a specific demographic can create a specific type of web site and fill it with their own content. The system is written in PHP. Many of the users of this system wish to edit how their particular web site looks, or, more commonly, have a designer do it for them. Editing the CSS is fine and dandy, but sometimes that's not enough. Sometimes they want to shuffle the entire page structure around by editing the raw HTML of the dynamically created web pages. The templating system used by WordPress is, as far as I can see, perfect for my use. Except for one thing, which is critically important: in addition to being able to edit how comments are displayed or where the menu goes, someone editing a template can have that template execute arbitrary PHP code. As the same codebase runs all these different sites, with all content in the same database, allowing my users to run arbitrary code is clearly out of the question. So what I need is a dumbed-down, idiot-proof templating system where my users can edit most of the page structure on their own, pulling in the dynamic sections wherever they like, without being able to even echo 1+1;. Observe the following pseudocode:

    <!DOCTYPE html>
    <title><!-- $title --></title>
    <!-- header() -->
    <!-- menu() -->
    <div>Some random custom crap added by the user.</div>
    <!-- page_content() -->

    That's the degree of power I'd like to grant my users. They don't need to do their own loops or calculations or anything. Just include my variables and functions and leave the rest to me. I'm sure I'm not the only person on the planet that needs something like this. Do you know of any ready-made templating systems I could use? Thanks in advance for your reply.
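
    For what it's worth, a minimal sketch of the kind of dumbed-down engine described above: a whitelist of tokens substituted with str_replace, so user templates can reference approved variables and functions but never execute code. The token names follow the pseudocode in the question; render_header, render_menu and the other values are hypothetical placeholders for the real rendering functions:

    <?php
    // Whitelist-only template rendering: anything not in $tokens stays inert text.
    function render_template($template, array $tokens) {
        $search = array();
        $replace = array();
        foreach ($tokens as $name => $value) {
            $search[] = '<!-- ' . $name . ' -->';
            $replace[] = $value;
        }
        return str_replace($search, $replace, $template);
    }

    // Usage: only these names are ever substituted; PHP in templates never runs.
    $html = render_template(file_get_contents('user_template.html'), array(
        '$title'         => htmlspecialchars($page_title),
        'header()'       => render_header(),
        'menu()'         => render_menu(),
        'page_content()' => $page_content,
    ));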

    Read the article

  • xslt test if a variable value is contained in a node set

    - by Aamir
    I have the following two files:

    <?xml version="1.0" encoding="utf-8" ?>
    <!-- D E F A U L T   H O S P I T A L   P O L I C Y -->
    <xas DefaultPolicy="open" DefaultSubjectsFile="subjects.xss">
      <rule id="R1" access="deny" object="record" subject="roles/*[name()!='Staff']"/>
      <rule id="R2" access="deny" object="diagnosis" subject="roles//Nurse"/>
      <rule id="R3" access="grant" object="record[@id=$user]" subject="roles/member[@id=$user]"/>
    </xas>

    and the other XML file, called subjects.xss, is:

    <?xml version="1.0" encoding="utf-8" ?>
    <subjects>
      <users>
        <member id="dupont" password="4A-4E-E9-17-5D-CE-2C-DD-43-43-1D-F1-3F-5D-94-71">
          <name>Pierre Dupont</name>
        </member>
        <member id="durand" password="3A-B6-1B-E8-C0-1F-CD-34-DF-C4-5E-BA-02-3C-04-61">
          <name>Jacqueline Durand</name>
        </member>
      </users>
      <roles>
        <Staff>
          <Doctor>
            <member idref="dupont"/>
          </Doctor>
          <Nurse>
            <member idref="durand"/>
          </Nurse>
        </Staff>
      </roles>
    </subjects>

    I am writing an XSLT stylesheet which will read the subject value for each rule in policy.xas; if the currently logged-in user (accessible as the variable "user" in the stylesheet) is contained in that subject value (say roles//Nurse), it should do something. I am unable to test whether the currently logged-in user ($user, equal to say "durand") is contained in roles//Nurse in the subjects file (which is a different XML file). Hope that clarifies my question. Any ideas? Thanks in advance.
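
    A sketch of one way this check could work (an assumption, not from the question): load subjects.xss with document() and rely on XPath's existential '=' over node-sets, comparing the idref attributes of members under the subject path against $user. Note that evaluating each rule's subject attribute as a dynamic path needs an extension such as EXSLT dyn:evaluate in XSLT 1.0; this sketch hardcodes the roles//Nurse case:

    <xsl:variable name="subjects" select="document('subjects.xss')"/>
    <xsl:for-each select="/xas/rule">
      <!-- '=' over a node-set is existential: true if ANY idref equals $user -->
      <xsl:if test="$subjects//roles//Nurse//member/@idref = $user">
        <!-- $user is (for example) 'durand': do something for this rule -->
      </xsl:if>
    </xsl:for-each>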

    Read the article

  • Is there a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the .NET graphics world?

    - by Rao
    For the past 5 months or so, I've spent time learning C# using Andrew Troelsen's book and getting familiar with stuff in the .NET 4 stack... bits of ADO.NET, EF4 and a pinch of WCF to taste. I'm really interested in graphics development (not for games though), which is why I chose to go the .NET route when I decided to choose between Java and .NET to learn... since I heard about WPF and saw some sexy screenshots and all. I'm even almost done with the 4 WPF chapters in Troelsen's book. Now, all of a sudden, I saw some post on a forum about how "WPF was dead" in the face of something called Silverlight. I searched more and saw all the confusion going on at present... even stuff like "Silverlight is dead too!" wrt HTML5. From what I gather, we are in a delicate period of time that will eventually decide which technology will stabilize, right? Even so, as someone new moving into UI and graphics development via .NET, I wish I could get some guidance from more experienced people. Maybe I'm reading too much? Maybe I have missed some pieces of information? Maybe a path exists that minimizes tears of blood? In any case, here is a sample vomiting of my thoughts, on which I'd appreciate some clarification or assurance or spanking:

    - My present interest lies in desktop development. But on graduating from college, I wish to market myself as a .NET developer. The industry seems to be drooling for web stuff. Can Silverlight do both equally well? (I see on searches that SL works "out of browser".)
    - I have two fair-sized hobby projects planned that will have hawt UIs with lots of drag n drop, sliding animations etc. These are intended to be desktop apps that will use reflection, database stuff using EF4, networking over LAN, reading-writing of files... does this affect which graphics technology can be used?
    - At some laaaater point, if I become interested in doing a bit of 3D stuff in .NET, will that affect which technologies can be used?
    - Or what if I look up to the heavens, stick out my middle finger, and do something crazy like go learn HTML5, even though my knowledge of it can be encapsulated in 2 sentences?

    Sorry I seem so confused; I just want to know if there's a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the graphics world.

    Read the article

  • usercontrol hosted in IE renders as a textbox

    - by coxymla
    On my ongoing saga to mirror the hosting of a legacy app on a clean box, I've hit my next snag. One page relies on a big .NET UserControl that on the new machine renders only as a big, greyed-out textarea (greyed-out vertical scrollbar on the right hand edge; inspecting the source shows the expected object tag). This is particularly tricky because nobody seems to know much about hosted UserControls, and all the discussions date back to 2002-2004. The page is quite simple:

    <%@ Page language="c#" Codebehind="DataExport.aspx.cs" AutoEventWireup="false" Inherits="yyyyy.Web.DataExport" %>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
    <html>
    <head>
    <title>DataExport</title>
    <link rel="Configuration" href="/xxxxx/yyyyy/DataExport.config">
    </head>
    <body style="margin:0px;padding:0px;overflow:hidden">
    <OBJECT id="DataExport" style="WIDTH: 100%; HEIGHT: 100%; position:absolute; left: 0px; top:0px"
        classid="yyyyy.Common.dll#yyyyy.Controls.DataExport" VIEWASTEXT>
    </OBJECT>
    </body>
    </html>

    The config file referenced:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <configSections>
        <sectionGroup name="yyyyy">
          <section name="dataExport" type="yyyyy.Controls.DataExportSectionHandler,yyyyy.Common" />
        </sectionGroup>
      </configSections>
      <yyyyy>
        <dataExport>
          <layoutFile>http://vm2/xxxxx/yyyyy/layout.xml</layoutFile>
          <webServiceUrl>http://vm2/xxxxx/yyyyy/services/yyyyy.asmx</webServiceUrl>
        </dataExport>
      </yyyyy>
    </configuration>

    What I've checked:
    - Security permissions should be OK: the site is trusted, and adding a URL exception to grant FullTrust doesn't change anything.
    - The config file is accessible over the web, layout.xml is accessible, and the ASMX shows the expected command list.
    - Machine.config grants GET permission for the usercontrol.config file.

    What perhaps looks fishy to me: the DataExport UserControl references Aspose.Excel to generate the spreadsheets it exports. When I navigate to the page and get a blank textbox, then run gacutil /ldl, nothing is in the local download cache. On the working machine, running the same command after viewing the page shows a laundry list of DLLs, including the control DLL and the Aspose DLL.

    Read the article
