Search Results

Search found 1026 results on 42 pages for 'krishna kumar'.

Page 41/42 | < Previous Page | 37 38 39 40 41 42  | Next Page >

  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high-level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP Product Strategy and Business Development for Oracle’s Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.

    IT Operations transformation is an important subject, because year after year, we hear essentially the same refrain – large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it’s amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.

    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply – what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It’s now not just possible, but happening.

    This seems like a simple idea, but it is a radical change from “business as usual” in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought – we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly-coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.

    Let’s examine the results with just one piece of the Oracle stack – the Oracle Database. Oracle began this journey with Oracle Database 9i many years ago with the introduction of low-impact instrumentation in the database kernel (“tell me what’s wrong”), and through Database 10g, 11g and 11gR2 has successively added integrated advisory (“tell me how to fix what’s wrong”), lifecycle management, and automated self-tuning (“fix it for me, and do it on an ongoing basis for all my assets”). When enterprises take advantage of this tight coupling, the results are game-changing.
    Consider the following (for a full list of public references, visit this link):

    - British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources
    - Cerner Corporation saved $9.5 million in CapEx and OpEx AND launched a brand-new cloud business at the same time
    - Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers

    Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2 and IBM DB2 9.7, as conducted by independent analyst firm ORC.

    In later entries, we’ll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn’t a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with an ever-increasing demand to use a diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with the release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct their business in a secure, efficient and effective manner.

    As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM’s basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint

    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as “disconnected applications”. It allows for existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests

    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So, how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user’s job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. The ability to customize the user interface to suit your needs helps in exercising the business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates

    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration

    A key feature of any provisioning solution should be to empower each business unit to manage their own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate the administration overhead. To do so requires a federated services model, so that the business units capable of shouldering the onus of user lifecycle management of their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation

    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. To prevent unauthorized access and, where it already exists, to detect and remediate it are key requirements of an enterprise-grade, proven solution. But the challenge with most solutions today is that some of this information still exists in silos. And when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive Identity Governance solution that our customer organizations are leveraging for closed-loop remediation, which gives administrators an automated way to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion

    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be a part of a larger security and identity management program, with the ability to seamlessly integrate not only with the company’s infrastructure but also to leverage the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so at multiple clients’ sites, this is the approach we recommend.
    In our next post, we will take a journey through our experiences advising clients looking to upgrade to R2 from a previous version or migrate from a different solution.

    Meet the Writers:

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first of our partner blog series. November Mondays are all about PricewaterhouseCoopers’ perspective on Identity and R2. In this series, we have identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real-world identity use cases that helped shape the innovations in the recent R2 release, and the implementation strategies that customers are employing today, with expertise from PwC.

    Part 1: Looking at R2 for Customer Organizations

    In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that some of our customer organizations are implementing today, and the business rationale for those. Oracle’s R2 Security portfolio represents a solid step forward for a platform that is already market-leading. Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base. Oracle has taken their identity platform to the next level in their latest version, R2. The new features include a customizable UI, a request catalog, flexible security, enhanced connectors, and more.

    Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI. Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users. The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company’s preference makes Oracle’s Identity Manager stand out among its peers. Customers can also expect to carry UI configurations forward with minimal migration effort to future versions of OIM. Oracle’s flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations.

    Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM’s ‘Catalog’. Customers familiar with Oracle Identity Analytics’ ‘Glossary’ feature will be able to relate to the concept. It enables Roles, Entitlements, Accounts, and Resources to be requested through the out-of-the-box UI. This is an industry-changing feature, as customers can make the process to request access easier than ever. For additional ease of use, Oracle has introduced a shopping cart style request interface that further simplifies the experience for end users. Common requests can be set up as profiles to save time. All of this is combined with the approval workflow engine introduced in R1, which provides the flexibility customers need to meet their compliance requirements.

    Enhanced security was also on the list of features Oracle wanted to deliver to its customers. The new end-user UI provides additional granular access controls. Common Help Desk use cases can be implemented with ease by updating the application profiles. Access can be rolled out so that administrators can only manage a certain department or organization. Further, OIM can be more easily configured to select which fields are read-only vs. updatable. Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department. Every customer has a different need for access, and OIM now matches this need with a flexible security model.

    One of the important considerations when selecting an Identity Management platform is compatibility. The number of supported platform connectors, and how well the suite can integrate with non-supported platforms, is a key consideration when selecting an identity suite. Oracle has a long list of supported connectors, and when a customer has a requirement for a platform not on that list, Oracle has a solution too. Oracle is introducing a simplified architecture called the Identity Connector Framework (ICF), which holds the potential to simplify custom connectors. Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser. This is a useful feature that enables administrators to profile applications quickly, as well as empowering the application owner to fulfill requests from their web browser. Support will still be available in R2 for connectors based on previous versions.

    Oracle Identity Manager’s new R2 version has delivered many new features customers have been asking for. Oracle has matured the platform with R2, making it a truly distinctive platform among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving those.

    Meet the Writers:

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, out of which she has been implementing Identity Management solutions for the past one and a half years.

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Read the article

  • why jQuery.parseJSON is not a function?

    - by Pandiya Chendur
    I use the following jQuery statements and I am getting the error "jQuery.parseJSON is not a function". My function is:

        function Iteratejsondata() {
            var HfJsonValue = { "Table": [
                { "Emp_Id": "3", "Identity_No": "", "Emp_Name": "Jerome", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Supervisior", "Desig_Description": "Supervisior of the Construction", "SalaryBasis": "Monthly", "FixedSalary": "25000.00" },
                { "Emp_Id": "4", "Identity_No": "", "Emp_Name": "Mohan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Acc ", "Desig_Description": "Accountant", "SalaryBasis": "Monthly", "FixedSalary": "200.00" },
                { "Emp_Id": "5", "Identity_No": "", "Emp_Name": "Murugan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "150.00" },
                { "Emp_Id": "6", "Identity_No": "", "Emp_Name": "Ram", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "120.00" },
                { "Emp_Id": "7", "Identity_No": "", "Emp_Name": "Raja", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "135.00" },
                { "Emp_Id": "8", "Identity_No": "", "Emp_Name": "Raja kumar", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "105.00" },
                { "Emp_Id": "9", "Identity_No": "", "Emp_Name": "Lakshmi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "100.00" },
                { "Emp_Id": "10", "Identity_No": "", "Emp_Name": "Palani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "200.00" },
                { "Emp_Id": "11", "Identity_No": "", "Emp_Name": "Annamalai", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
                { "Emp_Id": "12", "Identity_No": "", "Emp_Name": "David", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
                { "Emp_Id": "13", "Identity_No": "", "Emp_Name": "Chandru", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" },
                { "Emp_Id": "14", "Identity_No": "", "Emp_Name": "Mani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Helper", "Desig_Description": "Steel Helper", "SalaryBasis": "Weekly", "FixedSalary": "175.00" },
                { "Emp_Id": "15", "Identity_No": "", "Emp_Name": "Karthik", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "195.00" },
                { "Emp_Id": "16", "Identity_No": "", "Emp_Name": "Bala", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "185.00" },
                { "Emp_Id": "17", "Identity_No": "", "Emp_Name": "Tamil arasi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Helper", "Desig_Description": "Wood Helper", "SalaryBasis": "Weekly", "FixedSalary": "185.00" },
                { "Emp_Id": "18", "Identity_No": "", "Emp_Name": "Perumal", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Cook", "Desig_Description": "Cook", "SalaryBasis": "Weekly", "FixedSalary": "105.00" },
                { "Emp_Id": "19", "Identity_No": "", "Emp_Name": "Andiappan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Watchman", "Desig_Description": "Watchman", "SalaryBasis": "Weekly", "FixedSalary": "150.00" }
            ] };
            //var jsonObj = eval('(' + HfJsonValue + ')');
            var jsonObj = jQuery.parseJSON(HfJsonValue);

    and my page looks like this:

        <div id="Pagination" class="page-numbers"></div>
        <br style="clear:both;" />
        <div id="Searchresult"></div>
        <div id="hiddenresult" style="display:none;"> </div>
        <script type="text/javascript">
            var pagination_options = {
                num_edge_entries: 2,
                num_display_entries: 8,
                callback: pageselectCallback,
                items_per_page: 3
            }
            function pageselectCallback(page_index, jq) {
                var items_per_page = pagination_options.items_per_page;
                var offset = page_index * items_per_page;
                var new_content = $('#hiddenresult div.resultsdiv').slice(offset, offset + items_per_page).clone();
                $('#Searchresult').empty().append(new_content);
                return false;
            }
            function initPagination() {
                var num_entries = $('#hiddenresult div.resultsdiv').length;
                // Create pagination element
                $("#Pagination").pagination(num_entries, pagination_options);
            }
            $(document).ready(function() {
                Iteratejsondata();
                initPagination();
            });
        </script>

    I've inspected through Firebug and saw that all jQuery files have been downloaded, so why is this happening? Any suggestion....

    Read the article

  • Paging through records (json data) using jQuery...

    - by Pandiya Chendur
    I have a JSON result that contains numerous records. I'd like to show the first five records on one page and create pager links that move to the page with the next five records, and so on. I don't want the page to refresh, which is why I'm hoping for a combination of JavaScript and jQuery. My json data looks like this:

        {"Table" : [
        {"Emp_Id" : "3","Identity_No" : "","Emp_Name" : "Jerome","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Supervisior","Desig_Description" : "Supervisior of the Construction","SalaryBasis" : "Monthly","FixedSalary" : "25000.00"},
        {"Emp_Id" : "4","Identity_No" : "","Emp_Name" : "Mohan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Acc ","Desig_Description" : "Accountant","SalaryBasis" : "Monthly","FixedSalary" : "200.00"},
        {"Emp_Id" : "5","Identity_No" : "","Emp_Name" : "Murugan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "150.00"},
        {"Emp_Id" : "6","Identity_No" : "","Emp_Name" : "Ram","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "120.00"},
        {"Emp_Id" : "7","Identity_No" : "","Emp_Name" : "Raja","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "135.00"},
        {"Emp_Id" : "8","Identity_No" : "","Emp_Name" : "Raja kumar","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "9","Identity_No" : "","Emp_Name" : "Lakshmi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "100.00"},
        {"Emp_Id" : "10","Identity_No" : "","Emp_Name" : "Palani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "200.00"},
        {"Emp_Id" : "11","Identity_No" : "","Emp_Name" : "Annamalai","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "12","Identity_No" : "","Emp_Name" : "David","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "13","Identity_No" : "","Emp_Name" : "Chandru","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "14","Identity_No" : "","Emp_Name" : "Mani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Helper","Desig_Description" : "Steel Helper","SalaryBasis" : "Weekly","FixedSalary" : "175.00"},
        {"Emp_Id" : "15","Identity_No" : "","Emp_Name" : "Karthik","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "195.00"},
        {"Emp_Id" : "16","Identity_No" : "","Emp_Name" : "Bala","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "17","Identity_No" : "","Emp_Name" : "Tamil arasi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Helper","Desig_Description" : "Wood Helper","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "18","Identity_No" : "","Emp_Name" : "Perumal","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Cook","Desig_Description" : "Cook","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "19","Identity_No" : "","Emp_Name" : "Andiappan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Watchman","Desig_Description" : "Watchman","SalaryBasis" : "Weekly","FixedSalary" : "150.00"}
        ]}

    And as of now my result looks like this: http://img401.imageshack.us/img401/2500/yuidtsum.jpg

    I have used jQuery for this:

        var jsonObj = JSON.parse(HfJsonValue);
        for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
            var employee = jsonObj.Table[i];
            $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>')
                .insertAfter('#ResultsDiv');
        }

    My image contains only 6 records as of now. Any suggestions?

    Read the article

  • Good jquery pagination plugin to use with json Data...

    - by bala3569
    I am looking for a good jQuery pagination plugin to use in my aspx page. I have the following parameters: currentpage, pagesize, TotalRecords, NumberofPages. I would like my plugin to work the same as stackoverflow's paging.

    EDIT: It should paginate through json data, similar to this. I use my json data, iterating with jQuery:

        var jsonObj = jQuery.parseJSON(HfJsonValue);
        for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
            var employee = jsonObj.Table[i];
            $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>')
                .insertAfter('#ResultsDiv');
        }

    There are 25 divs in my page as a result; I want to show the first five divs on page 1, and so on... Any suggestion... My HfJsonValue contains the following json data:

        {"Table" : [
        {"Emp_Id" : "3","Identity_No" : "","Emp_Name" : "Jerome","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Supervisior","Desig_Description" : "Supervisior of the Construction","SalaryBasis" : "Monthly","FixedSalary" : "25000.00"},
        {"Emp_Id" : "4","Identity_No" : "","Emp_Name" : "Mohan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Acc ","Desig_Description" : "Accountant","SalaryBasis" : "Monthly","FixedSalary" : "200.00"},
        {"Emp_Id" : "5","Identity_No" : "","Emp_Name" : "Murugan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "150.00"},
        {"Emp_Id" : "6","Identity_No" : "","Emp_Name" : "Ram","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "120.00"},
        {"Emp_Id" : "7","Identity_No" : "","Emp_Name" : "Raja","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "135.00"},
        {"Emp_Id" : "8","Identity_No" : "","Emp_Name" : "Raja kumar","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "9","Identity_No" : "","Emp_Name" : "Lakshmi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "100.00"},
        {"Emp_Id" : "10","Identity_No" : "","Emp_Name" : "Palani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "200.00"},
        {"Emp_Id" : "11","Identity_No" : "","Emp_Name" : "Annamalai","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "12","Identity_No" : "","Emp_Name" : "David","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "13","Identity_No" : "","Emp_Name" : "Chandru","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "14","Identity_No" : "","Emp_Name" : "Mani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Helper","Desig_Description" : "Steel Helper","SalaryBasis" : "Weekly","FixedSalary" : "175.00"},
        {"Emp_Id" : "15","Identity_No" : "","Emp_Name" : "Karthik","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "195.00"},
        {"Emp_Id" : "16","Identity_No" : "","Emp_Name" : "Bala","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "17","Identity_No" : "","Emp_Name" : "Tamil arasi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Helper","Desig_Description" : "Wood Helper","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "18","Identity_No" : "","Emp_Name" : "Perumal","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Cook","Desig_Description" : "Cook","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "19","Identity_No" : "","Emp_Name" : "Andiappan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Watchman","Desig_Description" : "Watchman","SalaryBasis" : "Weekly","FixedSalary" : "150.00"}
        ]}

    Read the article

  • How to use jquery to paginate json data?

    - by Pandiya Chendur
    My json data looks like this:

        {"Table" : [
        {"Emp_Id" : "3","Identity_No" : "","Emp_Name" : "Jerome","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Supervisior","Desig_Description" : "Supervisior of the Construction","SalaryBasis" : "Monthly","FixedSalary" : "25000.00"},
        {"Emp_Id" : "4","Identity_No" : "","Emp_Name" : "Mohan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Acc ","Desig_Description" : "Accountant","SalaryBasis" : "Monthly","FixedSalary" : "200.00"},
        {"Emp_Id" : "5","Identity_No" : "","Emp_Name" : "Murugan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "150.00"},
        {"Emp_Id" : "6","Identity_No" : "","Emp_Name" : "Ram","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "120.00"},
        {"Emp_Id" : "7","Identity_No" : "","Emp_Name" : "Raja","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "135.00"},
        {"Emp_Id" : "8","Identity_No" : "","Emp_Name" : "Raja kumar","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "9","Identity_No" : "","Emp_Name" : "Lakshmi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "100.00"},
        {"Emp_Id" : "10","Identity_No" : "","Emp_Name" : "Palani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "200.00"},
        {"Emp_Id" : "11","Identity_No" : "","Emp_Name" : "Annamalai","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "12","Identity_No" : "","Emp_Name" : "David","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "13","Identity_No" : "","Emp_Name" : "Chandru","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
        {"Emp_Id" : "14","Identity_No" : "","Emp_Name" : "Mani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Helper","Desig_Description" : "Steel Helper","SalaryBasis" : "Weekly","FixedSalary" : "175.00"},
        {"Emp_Id" : "15","Identity_No" : "","Emp_Name" : "Karthik","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "195.00"},
        {"Emp_Id" : "16","Identity_No" : "","Emp_Name" : "Bala","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "17","Identity_No" : "","Emp_Name" : "Tamil arasi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Helper","Desig_Description" : "Wood Helper","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
        {"Emp_Id" : "18","Identity_No" : "","Emp_Name" : "Perumal","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Cook","Desig_Description" : "Cook","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
        {"Emp_Id" : "19","Identity_No" : "","Emp_Name" : "Andiappan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Watchman","Desig_Description" : "Watchman","SalaryBasis" : "Weekly","FixedSalary" : "150.00"}
        ]}

    There are 22 records in this json... How do I paginate this json data 5 per page using jQuery?

    EDIT: The above image is my summary view of the employee list, iterated using jQuery:

        var jsonObj = JSON.parse(HfJsonValue);
        for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
            var employee = jsonObj.Table[i];
            $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>')
                .insertAfter('#ResultsDiv');
        }

    I get 22 records now and it may grow; how do I paginate the json data using jQuery pagination? Any suggestion...

    Read the article

  • jquery xml slideshow using ajax

    - by Codemaster Snake
    Hi all, I am trying to create a jQuery-based slider, using ajax to load image URLs from an XML file and then creating an HTML li list dynamically. So far I am able to append to and create the DOM structure using jQuery, but I am not able to access the dynamically created list. I have also tried custom events using bind, but was not able to implement them successfully. Following is my jQuery plugin code:

        (function($){
            $.fn.genie = function(options) {
                var genie_dom = "<div class='genie_wrapper'><ul class='genie'></ul></div>";
                var o, base;
                var genie_styles = "<style>.genie_wrapper{overflow:hidden}.genie{position: relative;margin:0;padding:0}.genie li {position: absolute;margin:0;padding:0}</style>";
                var defaults = {
                    width : '960px',
                    height : '300px',
                    background_color : '#000000',
                    xml : 'genie.xml',
                    speed : 1000,
                    pause: 1000
                };
                base = $(this);
                o = $.extend(defaults, options);
                return this.each(function() {
                    create_elements();
                });
                function create_elements() {
                    $(base).html(genie_dom);
                    $('head').append(genie_styles);
                    $('.genie_wrapper').css({'background-color' : o.background_color, width : o.width, height: '300px', overflow: ''});
                    $.ajax({
                        type: "GET",
                        url: o.xml,
                        dataType: "xml",
                        success: function(xml) {
                            var slides = $(xml).find('slide');
                            var count = 0;
                            $(slides).each(function(){
                                $('.genie').append('<li class="slide" id="slide'+count+'"><img src="' + $(this).text() + '" /></li>');
                                $('.genie li:last').css({'z-index' : count});
                                count++;
                            });
                        }
                    });
                }
            }
        })(jQuery);

    In my html file there is only one empty div, on which I am calling my plugin like this:

        $(document).ready(function() {
            $('.slideshow').genie();
        });

    I have also tried using the following everywhere in the JS:

        $(".slide").bind("start_animation", function(e){
            $(this).fadeOut(1000);
            alert($(this).html());
        });
        $(".slide").trigger("start_animation");

    I want to animate the li list using the animate function. Can anyone please tell me how to implement it... It would be of great help... Regards, Neeraj Kumar

    EDIT: Can anyone help me out please?
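
    Not part of the question, but one hedged way to make this work: the li elements only exist after the $.ajax success callback fires, so any animation (or custom-event binding) has to start from inside that callback rather than at plugin load time. A minimal sketch, assuming a hypothetical helper like this is defined inside the plugin closure (so the o options object is in scope) and called at the end of the success handler:

        // Hypothetical helper: call at the end of the $.ajax success callback,
        // once the <li class="slide"> elements actually exist in the DOM.
        function start_slideshow() {
            var $slides = $('.genie li');
            var current = 0;
            $slides.hide().eq(0).show();   // show only the first slide

            setInterval(function() {
                var next = (current + 1) % $slides.length;
                $slides.eq(current).fadeOut(o.speed);  // o comes from the closure
                $slides.eq(next).fadeIn(o.speed);
                current = next;
            }, o.pause + o.speed);
        }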

    Read the article

  • Microsoft Channel 9 Interviews Mei Liang to Introduce Sample Browser Extension for Visual Studio 2012 and 2010

    - by Jialiang
    This morning, Microsoft Channel 9 interviewed Mei Liang - Group Manager of Microsoft All-In-One Code Framework - to introduce the newest Sample Browser extension for Visual Studio 2012 & 2010. This extension provides a way for developers to search and download more than 4500 code samples from within Visual Studio, including over 700 Windows 8 samples and more than 1000 All-In-One Code Framework customer-driven code samples. Mei shows us not only the extension, but also the standalone version of the Sample Browser.

    http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Sample-Browser-Visual-Studio-Extension

    Microsoft All-In-One Code Framework, working in close partnership with the Visual Studio product team and MSDN Samples Gallery, developed the Sample Browser extension for both Visual Studio 2012 and Visual Studio 2010. As an effort to evolve the code sample use experience and improve developers’ productivity, the Sample Browser allows programmers to search, download and open over 4500 code samples from within Visual Studio with just a few simple clicks. If no existing code sample can meet the needs, developers can even request a code sample from Microsoft, thanks to the free “Sample Request Service” offered by Microsoft All-In-One Code Framework. Through these innovations, the teams hope to put the power of tens of thousands of code samples at developers’ fingertips.

    In just three months, the Sample Browser Visual Studio Extension has been installed by 100K global users. It has also been selected as one of the six most highly regarded and commonly used tools for Visual Studio.

    “Got to love the All-In-One Code Framework team! You guys know this is THE go to source for code samples. Get this extension and you'll never need to leave VS2012 (well except for bathroom trips, but that's TMI anyway... ;)” - Greg Duncan (Author of CoolThingOfTheDay), 9/6/2011

    “The one software design pattern that I have used in just about every application I’ve written is “cut-and-paste,” so the new “Sample Browser” – read sample as a noun not an adjective – is a great boon to my productivity.” - Jim O'Neil (Microsoft Developer Evangelist), 9/28/2011

    Install: http://aka.ms/samplebrowservsx

    Microsoft All-In-One Code Framework also offers the standalone version of Sample Browser. The standalone version is particularly useful to Visual Studio Express edition or Visual Studio 2008 users, who cannot install the Sample Browser Visual Studio extension.

    From Grassroots’ Passion for Developers to the Innovation of Sample Browser

    This Sample Browser has come a very long way in improving the code sample use experience. Its history can be traced back to a grassroots innovation three years ago. In early 2009, a few MSDN forum support engineers observed that lots of developers were struggling to work in Visual Studio without adequate code samples. Programming tasks seem harder than they should be when you only read through the documentation; just a couple of lines of sample code could answer a lot of questions. They had a brilliant idea: what if we produce code samples based on developers’ frequently asked programming tasks in forums, social networks and support incidents, and then aggregate all our sample code in a one-stop library to benefit developers? And what if developers could request code samples directly from Microsoft, free of charge?

    This small group of grassroots developers at Microsoft devoted their nights and weekends to prototyping such a customer-driven code sample library. This simple idea eventually turned into “Microsoft All-In-One Code Framework”, a.k.a. OneCode. With the support of more and more passionate developers at Microsoft and the leaders in the Community and Online Support team and Microsoft Commercial Technical Services (CTS), the idea has become a continually growing library with over 1000 customer-driven code samples covering almost all Microsoft development technologies. These code samples, originating from developers’ common pains and needs, should be able to help many developers. However, if developers cannot easily discover the code samples, the effort would still be in vain. So in early 2010, the team started the idea of the Sample Browser to ease the discovery of and access to these samples. In just two months, the first version of the Sample Browser was finished and released by a passionate developer. It was a very simple application, supporting only basic offline sample search: users had to download the whole 100MB sample package containing all samples first, then run the Sample Browser to search locally. Though developers could not search and download samples on demand, this simple application laid a solid foundation for the team’s continuous innovations in the sample browsing experience.

    In 2011, MSDN Samples Gallery had a big refresh. The online sample experience was brought to a new level thanks to its PM Steven Wilssens and the gallery team’s effort. The Microsoft All-In-One Code Framework team saw the opportunity to realize “on-demand” sample search and download with the new gallery. The two teams formed a strong partnership to upload all the customer-driven code samples to MSDN Samples Gallery, and released the new version of the Sample Browser to support “on-demand” sample downloading in April 2011. Mei Liang, the Group Manager of Microsoft All-In-One Code Framework, was interviewed by Channel 9 to demo the Sample Browser. Customers love the effort and the innovation! This can be clearly seen from the user comments on the publishing page. It was very encouraging to the team of All-In-One Code Framework.

    The team continues innovating and evolving the Sample Browser. This time, they worked with the Visual Studio product team and integrated the sample browsing experience into the latest Visual Studio 2012. The newly released Sample Browser Visual Studio extension makes good use of the Visual Studio 2012 IDE, such as the new Quick Launch bar, the code editor, the toolbar and menus, to offer easy access to thousands of code samples from within the development environment. The Visual Studio Senior Program Manager Lead - Anthony Cangialosi, the Program Manager - Murali Krishna Hosabettu Kamalesha, the MSDN Samples Gallery PM - Steven Wilssens, and the Visual Studio Senior Escalation Engineer - Ed Dore shared lots of insightful suggestions with the team. Thanks to this brilliant cross-group collaboration inside Microsoft, tens of new features, including “Local Language Support” and “Favorite Samples”, as well as a face-lifted user interface, were added to further enhance the user experience. Since the new Sample Browser Visual Studio extension was released, it has received over 100 thousand downloads and five-star ratings. A customer told the team that he officially falls in LOVE with Microsoft All-In-One Code Framework.
The teams would never stop improving the Sample Browser for developers’ easier lives.   The Microsoft All-In-One Code Framework, Visual Studio and MSDN Samples Gallery teams are working closely to develop the next version of Sample Browser.  Here are the key functions in development or in discussion.  We hope to learn your feedback of the effort.  You can submit your suggestions to the official Visual Studio UserVoice site.  We look forward to hearing from you! 1) Offline Sample Search This is one of the top feature requests that we have received for Sample Browser.   The Sample Browser will support the offline search mode so that developers can search downloaded code samples when they do not have internet access.  This is particularly useful to developers in Enterprises with strict proxy settings. 2) Code Snippet Support and Visual Studio Editor Integration Today, the Sample Browser supports downloading and opening sample project.   However, when developers are searching for code samples, a better user experience would be to see the code snippets in the search result first.  Developers can quickly decide if the code snippet is relevant.   They can also drag and drop the code snippet into the Visual Studio Editor to solve some simple programming tasks.  If developers want to learn more about the sample, they can then choose to download the sample project and open it in Visual Studio. 3) Enterprise Sample Sharing and Searching Large enterprises have many code samples for their own internal tools and APIs that are not appropriate to be shared publicly in MSDN Samples Gallery.   In that case, today’s Sample Browser and MSDN Samples Gallery cannot help these Enterprise developers.  The idea is to create a Code Sample Repository in TFS, and provide an additional Visual Studio extension for Enterprise developers to quickly share code samples to TFS.  The Sample Browser can be configured to connect to the TFS Code Sample Repository to search for and download code samples.  This would potentially enable the Enterprise developers to be more productive. 4) Windows Store Sample Browser With the upcoming release of Windows RT and Microsoft Surface, developers are facing a completely new world of application platform.   Not like laptop, people would often use Microsoft Surface in commute and in travel.  Internet may not be available.  Today’s Visual Studio cannot be installed and run on Windows RT, however, our enthusiastic developers would hope to spend every minute on code.  They love code!   The idea is to create a Windows Store version of Sample Browser. Search and download samples from the online Samples Gallery when the user has internet access. Browse the sample code files and learn the sample documentation of downloaded samples with or without internet access.   In addition to the "browse” function, the Sample Browser could further support “bookmark”, “learning notes”, “code review”, and “quick social sharing". Make full use of the new touch and Windows Store App UI to give developers a new “relaxing” code browsing and learning experience, anytime, anywhere. With Windows Store Sample Browser, developers can enjoy A new relaxing and enjoyable experience for developers to learn code samples You do not have to sit in front of desk and formally open Visual Studio to read code samples.  Many developers get sub-health due to staying in front of desk for a very long time.  
With Windows RT, Microsoft Surface and this Windows Store Sample Browser combined with the online MSDN Samples Gallery, developers can sit on a sofa, relaxingly hold the tablet, and enjoy learning their beloved sample code with detailed documentation.

Anytime, anywhere.  Whether you have internet access or not, and whether you are at home, in the office, or in a commute or on an airplane, you can always easily access and browse the sample code.

Lightweight and fast.  Particularly for learning a small sample project, the Windows Store Sample Browser would be more lightweight and faster at opening and browsing the sample code.

Please submit your feedback and suggestions to Visual Studio UserVoice.  We look forward to hearing from you and delivering a better and better sample experience.  Happy Coding!

Special thanks to the people working behind the latest release of the Sample Browser Visual Studio extension, and to the great partnerships!

    Read the article

  • SQL SERVER – Developer Training Resources and Summary Roundup

    - by pinaldave
It is always a pleasure for any author when other renowned authors in the industry write about you. Earlier I wrote a five-part blog series on Developer Training and I have received a phenomenal response to the series. I have received plenty of comments, questions and feedback. I thought it would be nice to sum up the whole series as well as answer a few of the questions received.

Quick Recap

Developer Training - Importance and Significance - Part 1
In this part we discussed the importance of training in the real world. The most important and valuable resource of any company is its employees. Employees who have been well-trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; it is among the most important tasks.

Developer Training – Employee Morals and Ethics – Part 2
In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer. There are companies that pay 100% of the expenses and there are employees who opt for training at their own expense during their personal time. Training is often looked at as a vacation by employees and employers, and we need to change this mind-set. One of the ways is to report what you learned back to your manager and implement the newly learned knowledge in day-to-day work.

Developer Training – Difficult Questions and Alternative Perspective - Part 3
This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. The manager often feels pressure to accommodate every single employee for training even though his training budget is limited. It is indeed the responsibility of the developer to get the maximum advantage from the training. Training helps the organization immediately, but it stays a part of the employee's knowledge forever.

Developer Training – Various Options for Developer Training – Part 4
In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and that I should have explored each training option in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training. The common element between both methods is the learning material. Learning material can be of any format – videos, books, paper notes or just a plain blackboard. Instructor-led training is a very effective mode but not possible every single time. During the course of a developer's career, one has to learn lots of new technology, and it is almost impossible to have a quality trainer available on that subject at that time. Books are an effective and proven method; however, it always helps if someone explains the concepts of the book with a demonstration. In recent times I have started to believe in online training, which leads to a hybrid experience. Online training takes the best part of books and the best part of instructor-led training and delivers effective training in a matter of hours.

Developer Training – A Conclusive Summary - Part 5
In this part, I shared what I had been continuously thinking about developer training.
There is no better teacher than oneself. There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt and overcome" are essential lessons of life. One cannot stop learning or resist change. In the IT industry, the "ego of knowing it all" and "resistance to change" are the most challenging issues. Once someone overcomes them, life is much easier. I believe that proper, appropriate, high-quality training can help address these burning issues.

Opinion of Friends

I invited a few of my friends to express their opinions about developer training, and here they are, listed in the order of the blog posts' publishing dates.

Nakul Vachhrajani - Developer Trainings-Importance, Benefits, Tips and follow-up
Nakul sums up many concepts which are complementary to my blog posts, and addresses the burning question of developer training from different angles. I am personally very impressed by his following statement - "Being skilled does not mean having just a stack of certifications, but it also means having an understanding about the internals of the products that you are working on – and using that knowledge to improve the efficiency & productivity at the workplace in turn resulting in better products, better consulting abilities and a happier self." Nakul also suggests the online training options of Pluralsight.

Vinod Kumar - Training–a necessity or bonus
Vinod Kumar comes up with an excellent follow-up on developer training. Vinod is known for his inspirational writing about SQL Server. He starts with a story of a student who is extremely eager to learn the wisdom of life from a monk, but the monk does not accept him as a disciple for a long time. The conversation between student and monk is indeed the essence of all learning. We all want to learn quickly and be successful, but the most important thing in life is to have the right attitude towards learning, and more so towards life. The blog post ends with a very important thought about how to avoid the famous excuse – "I don't have enough time."

Ritesh Shah - Training – useful or useless?
Ritesh brings up a very important concept related to training. In his meticulous style, Ritesh explains why training is an important and lifelong process. Training must not stop at any age but should continue forever. The moment training stops, progress stops along with it.

Paras Doshi - Professional Development Resource
Paras is known for his to-the-point writing, and he has summarized the five-part series very precisely. He read the five-part series and created a digest summary of the blog posts. If you are in a rush and have no time to read my five-part series, I suggest you read his blog post.

Training Resources

I am often asked what the best resources for learning new technology are. This is the most difficult question EVER. There are plenty of good training resources available. When it comes to training, our needs are different, our preferences for learning are different, and we all have opinions. Additionally, we are all located in different geographic locations worldwide, and there is no way one solution will fit all. However, let me list a few of the training resources which I have built so far; you can consume them if you find them relevant to your needs.
SQL Server Books
SQL Server Interview Questions and Answers
SQL Wait Stats
SQL Programming Joes 2 Pros
SQL Server Video Tutorials
SQL Server Questions and Answers
SQL Server Performance: Indexing Basics
SQL Server Performance: Introduction to Query Tuning
SQL in Sixty Seconds – Series of Sixty Seconds Learning Videos on YouTube

Trust me, the worldwide web is very big and there is plenty of high-quality learning material available, trainer-led as well as online. I suggest you explore the various options and make the best choice for yourself. Remember, training is your personal journey and it should never stop. Are you ready?

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
Nakul Vachhrajani is a Technical Specialist and systems development professional with iGATE, with more than 7 years of total IT experience. Nakul is an active blogger on BeyondRelational.com (150+ blogs), and can also be found on the forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. Nakul presented a webcast on the "Underappreciated Features of Microsoft SQL Server" at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, published in a nationally distributed CSI journal. In addition to his passion for SQL Server, Nakul also contributes to academia out of personal interest. He visits various colleges and universities as external faculty to judge project activities carried out by the students. Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer's view in any way. Blog | LinkedIn | Twitter | Google+

Let us hear the thoughts of Nakul in first person -

Those who have been following my blogs will be aware that I am currently running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response that I have received, I was quite surprised to learn that most of the audience took these to be breaking changes, when in fact they were not! It was then that I decided to write a little piece on how to plan your database upgrade so that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers and are intended to help you think through the specific steps that you would need to take to upgrade your database.

Refer to the documentation – understand the terms

Change is the only constant in this world. Therefore, whenever customer requirements or newer architectures and designs require software vendors to make a change to keywords, functions, etc., they ensure that they provide their end users sufficient time to migrate over to the new standards before dropping the old ones. Microsoft does that too with its Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with lists of the following kinds of features:

Breaking changes
These are changes that would break your currently running applications, scripts or functionality that are based on an earlier version of Microsoft SQL Server. These are mostly features whose behavior has been changed keeping in mind newer architectures and designs. Lesson: These are the changes that you need to be most worried about!

Discontinued features
These features are no longer available in the associated version of Microsoft SQL Server. These features used to be "deprecated" in the prior release. Lesson: Without addressing these changes, your database may not be compliant with, or may not work on, the version of Microsoft SQL Server under consideration.

Deprecated features
These features are those that are still available in the current version of Microsoft SQL Server, but are scheduled for removal in a future version.
These may be removed in either the next version or any other future version of Microsoft SQL Server. The features listed for deprecation will compose the list of discontinued features in the next version of SQL Server. Lesson: Plan to make the changes required to remove or replace usage of the deprecated features with the latest recommended replacements.

Once a feature appears on the list, it moves from the bottom to the top: it is first marked as "deprecated" and then "discontinued". A "breaking change" comes later on in the product life cycle. What this means is that if you want to know which features will not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer to the list of breaking changes and discontinued features in SQL Server 2012.

Use the tools!

There are a lot of tools and technologies around us, but I rarely find teams using these tools religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade.

The SQL Server Upgrade Advisor
Ever since SQL Server 2005 was announced, Microsoft has provided a small, very lightweight tool called the SQL Server Upgrade Advisor. The Upgrade Advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer to the links towards the end of the post to learn how to get the Upgrade Advisor.

The SQL Server Profiler
Another great tool that you can use is the one most SQL Server developers and administrators use often – the SQL Server Profiler. SQL Server Profiler provides functionality to monitor the "Deprecation" event, which contains: Deprecation announcement – equivalent to features to be deprecated in a future release of SQL Server; and Deprecation final support – equivalent to features to be deprecated in the next release of SQL Server. You can learn more using the links towards the end of the post.
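Incidentally, the same deprecation information that the Profiler event surfaces is also exposed through a dynamic management view, so a quick first check is possible without running a trace. Here is a minimal sketch (a simplified illustration of the idea, not a substitute for the Upgrade Advisor workflow described above; the column aliases are my own):

SELECT  instance_name AS deprecated_feature,
        cntr_value    AS usage_since_restart
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%Deprecated Features%'   -- the deprecation counter object
        AND cntr_value > 0                         -- only features actually in use
ORDER BY cntr_value DESC;

Because these counters accumulate only since the last service restart, run the check after the system has seen a representative workload.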
A basic checklist

There are a lot of finer points that need to be taken care of when upgrading your database. But it is worthwhile to identify a few basic steps to make your database compliant with the next version of SQL Server:

Monitor the current application workload (on a test bed) via the Profiler in order to identify usage of features marked as deprecated. If none appear, you are all set! (This almost never happens.) Note down all the offending queries and feature usages.
Run analysis sessions using the SQL Server Upgrade Advisor on your database.
Based on the inputs from the analysis report and Profiler trace sessions, incorporate solutions for the breaking changes first, then incorporate solutions for the discontinued features.
Revisit and document the upgrade strategy for your deployment scenarios.
Revisit the fall-back, i.e. rollback, strategies in case the upgrade fails. Because some programming changes depend upon the SQL Server version, this may need to be done in consultation with the development teams.
Before any other enhancements are incorporated by the development team, send the database changes to QA. The QA strategy should involve a comparison between an environment running the old version of SQL Server and one running the new version. Because minimal application changes have gone in (essential changes for SQL Server version compliance only), this should be possible.
As an ongoing activity, keep incorporating the changes recommended in the deprecated features list.
As a DBA, update your coding standards to ensure that the developers are using ANSI-compliant code – this code will require a change only if the ANSI standard changes.

Remember this: change management is a continuous process. Keep revisiting the product release notes and incorporate recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you!

Links referenced in this post

Breaking changes in SQL Server 2012: Link
Discontinued features in SQL Server 2012: Link
Get the Upgrade Advisor from the Microsoft Download Center at: Link
Upgrade Advisor page on MSDN: Link
Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link
Upgrading to SQL Server 2012 by Vinod Kumar: Link

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "References", below, for some examples. Nonetheless, here is a summary of my configuration and results.

(1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that, of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made:

(1.1) Which virtualization option? There are several virtualization options and isolation levels available, including:
Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series Servers
Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers
OS virtualization using Oracle Solaris Containers
Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth

Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones."

(1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.

(1.3) Cluster topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution.

(1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day.

(2.0) Configuration tested.

(2.1) I was using a SPARC T4-2 Server, which has 2 CPUs and 128 virtual processors.
To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl:

test# ./psrinfo.pl -pv
The physical processor has 8 cores and 64 virtual processors (0-63)
  The core has 8 virtual processors (0-7)
  The core has 8 virtual processors (8-15)
  The core has 8 virtual processors (16-23)
  The core has 8 virtual processors (24-31)
  The core has 8 virtual processors (32-39)
  The core has 8 virtual processors (40-47)
  The core has 8 virtual processors (48-55)
  The core has 8 virtual processors (56-63)
    SPARC-T4 (chipid 0, clock 2848 MHz)
The physical processor has 8 cores and 64 virtual processors (64-127)
  The core has 8 virtual processors (64-71)
  The core has 8 virtual processors (72-79)
  The core has 8 virtual processors (80-87)
  The core has 8 virtual processors (88-95)
  The core has 8 virtual processors (96-103)
  The core has 8 virtual processors (104-111)
  The core has 8 virtual processors (112-119)
  The core has 8 virtual processors (120-127)
    SPARC-T4 (chipid 1, clock 2848 MHz)

(2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic.

(2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ).

(3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers.

(3.1) Stop the AppServers from the BUI.

(3.2) Stop the NGZ.
test# ssh test-z2 init 5

(3.3) Enable resource pools:
test# svcadm enable pools

(3.4) Create the resource pool:
test# poolcfg -dc 'create pool pool-test-z2'

(3.5) Create the processor set:
test# poolcfg -dc 'create pset pset-test-z2'

(3.6) Specify the maximum number of CPUs that may be added to the processor set:
test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'

(3.7) bash syntax to add virtual CPUs to the processor set:
test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done

(3.8) Associate the resource pool with the processor set:
test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'

(3.9) Tell the zone to use the resource pool that has been created:
test# zonecfg -z test-z2 set pool=pool-test-z2

(3.10) Boot the Oracle Solaris Container:
test# zoneadm -z test-z2 boot

(3.11) Save the configuration to /etc/pooladm.conf:
test# pooladm -s

(4.0) Results. Using the resource pools improves both throughput and response time.

(5.0) References:
System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
Capitalizing on large numbers of processors with WebSphere Portal on Solaris
WebSphere Application Server and T5440 (Dileep Kumar's Weblog)
http://www.brendangregg.com/zones.html
Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
During my ongoing tenure at Oracle, I have met all types of DBAs: happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny that vanilla automation can be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

1. Does it make life easier for your end users?

Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each having their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

2. Does it make life easier for your administrators?

Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and they can result in a management nightmare.
That leads us to our next question.

3. Does it satisfy your Security Officers and Compliance Auditors?

Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the backburner.

4. Does it satisfy your CIO?

Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service-level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize the freedom to choose a platform; hence Enterprise Manager's DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle's hardware as well as third-party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
Originally posted in the Oracle Profit Magazine, November 2010 Edition.

When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual.

Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now.

When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense.

"Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.

Game Changer #1: New Standard for Innovation

Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine.

Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.

Game Changer #2: New Standard for Work

Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand.

"We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote.

The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand.
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before."

For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs.

It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."

For More Information
User Input Key to the Success of Oracle Fusion Applications
Transforming Coexistence into Strategic Value
Under the Hood
Oracle Fusion Applications
Oracle Service-Oriented Architecture

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #049

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Two Connections Related Global Variables Explained – @@CONNECTIONS and @@MAX_CONNECTIONS
@@CONNECTIONS returns the number of attempted connections, either successful or unsuccessful, since SQL Server was last started. @@MAX_CONNECTIONS returns the maximum number of simultaneous user connections allowed on an instance of SQL Server. The number returned is not necessarily the number currently configured.

Query Editor – Microsoft SQL Server Management Studio
This post may be very simple for most users of SQL Server 2005. Earlier this year, I received one question many times – where is Query Analyzer in SQL Server 2005? I wrote a small post about it and pointed many users to it – SQL SERVER – 2005 Query Analyzer – Microsoft SQL SERVER Management Studio. Recently I have been receiving a similar question.

OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE
SQL Server 2005 has a new OUTPUT clause, which is quite useful. The OUTPUT clause has access to the inserted and deleted tables (virtual tables), just like triggers. The OUTPUT clause can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. The OUTPUT clause can write to a table variable, a permanent table, or a temporary table. Even though @@IDENTITY still works with SQL Server 2005, I find the OUTPUT clause very easy and powerful to use. Let us understand the OUTPUT clause using an example, sketched below.
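Here is a minimal sketch of the idea (the table variables and values are hypothetical, not the exact example from the original article):

DECLARE @Orders TABLE (OrderID INT IDENTITY(1,1), Amount MONEY);
DECLARE @Audit  TABLE (OrderID INT, Amount MONEY, ChangedAt DATETIME);

-- OUTPUT reads from the inserted virtual table, just like a trigger would
INSERT INTO @Orders (Amount)
OUTPUT inserted.OrderID, inserted.Amount, GETDATE()
INTO @Audit (OrderID, Amount, ChangedAt)
VALUES (100.00);

-- With UPDATE, both inserted (new) and deleted (old) values are available
UPDATE @Orders
SET Amount = Amount * 2
OUTPUT deleted.OrderID, deleted.Amount, GETDATE()
INTO @Audit (OrderID, Amount, ChangedAt);

-- The audit table now holds the rows actually affected by both statements
SELECT * FROM @Audit;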
Find Name of The SQL Server Instance
Based on the database server, stored procedures have to run different logic. We came up with two different solutions. 1) When the database schema changed substantially, we wrote a completely new stored procedure and deprecated the older version once it was no longer needed. 2) When the logic depended on the server name, we used the global variable @@SERVERNAME. It was very convenient while writing a migration script which depended on the server name for the same database.

Explanation of TRY…CATCH and ERROR Handling With RAISEERROR Function
One of the developers at my company thought that we cannot use the RAISERROR function in the new SQL Server 2005 TRY…CATCH feature. When asked for an explanation, he offered the SQL SERVER – 2005 Explanation of TRY… CATCH and ERROR Handling article as an excuse, suggesting that I did not give an example of RAISERROR with TRY…CATCH. We all thought it was funny. Just to keep the record straight: TRY…CATCH can certainly use the RAISERROR function.

Different Types of Cache Objects
Several kinds of objects can be stored in the procedure cache:
Compiled plans: When the query optimizer finishes compiling a query plan, the principal output is the compiled plan.
Execution contexts: While executing a compiled plan, SQL Server has to keep track of information about the state of execution.
Cursors: Cursors track the execution state of server-side cursors, including the cursor's current location within a resultset.
Algebrizer trees: The Algebrizer's job is to produce an algebrizer tree, which represents the logical structure of a query.

Open SSMS From Command Prompt – sqlwb.exe Example
This article was written at the request and suggestion of a Sr. Web Developer at my organization. Due to the nature of the article, most of the content is referred from Books Online. sqlwb is a command prompt utility which opens SQL Server Management Studio. The sqlwb command does not run queries from the command prompt; the sqlcmd utility runs queries from the command prompt – read the article for more information.

2008

Puzzle – Solution – Computed Columns Datatype Explanation
Just a day earlier I wrote the article SQL SERVER – Puzzle – Computed Columns Datatype Explanation, which was inspired by SQL Server MVP Jacob Sebastian. I suggest that before continuing this article you read the original puzzle question, SQL SERVER – Puzzle – Computed Columns Datatype Explanation. The question was: if the computed column is based on a column of datatype TINYINT, how do you create a computed column of datatype INT?

Find If Index is Being Used in Database
It is very common for me to get the question of how to find out whether an index is being used in the database or not. If a database has many indexes and not all of them are used, performance can suffer. A higher number of indexes slows down INSERT / UPDATE / DELETE operations but speeds up SELECT operations. It is recommended to drop any unused indexes from a table to improve performance; a sketch of such a check follows.
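A minimal sketch of that kind of check, using the sys.dm_db_index_usage_stats DMV (a simplified illustration; the original article carries its own script):

-- Nonclustered indexes in the current database with no seeks, scans or lookups
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        s.user_updates           -- write cost still being paid for the index
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON  s.object_id   = i.object_id
       AND s.index_id    = i.index_id
       AND s.database_id = DB_ID()
WHERE   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
        AND i.index_id > 1   -- skip heaps (0) and clustered indexes (1)
        AND (s.index_id IS NULL
             OR s.user_seeks + s.user_scans + s.user_lookups = 0)
ORDER BY table_name, index_name;

Note that this DMV resets on service restart, so judge "unused" over a full business cycle, not a quiet afternoon.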
2009

Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries
If you want to see what's going on here, I think you need to shift your point of view from an implementation-centric view to an ANSI point of view. ANSI does not guarantee processing order. Figure 2 is interesting, but it will be potentially misleading if you don't understand the ANSI rule-set SQL Server operates under in most cases. Implementation thinking can certainly be useful at times, when you really need that multi-million row query to finish before the backup fires off, but in this case it is counterproductive to understanding what is going on.

SQL Server Management Studio and Client Statistics
Client statistics are very important. Many times, people equate a query's execution plan with query cost. This is not a good comparison; the two parameters are different, and they are not always related. It is possible that the query cost of a statement is low but the amount of data returned is considerably large, which causes the query to run slowly. How do we know if a query is retrieving a large amount of data or very little data?

2010

I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.

SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

SQL SERVER – Get Query Running in Session
I was recently looking for the syntax to get the query running in a particular session. I had always remembered the syntax, and had actually written it down before, but somehow it was not coming to mind quickly this time. I searched online and ended up on my own article written last year, SQL SERVER – Get Last Running Query Based on SPID. I felt that I am getting old because I forgot this really simple syntax.

Find Total Number of Transaction on Interval
In one of my recent performance tuning assignments I was asked how someone can know how many transactions are happening on a server during a certain interval. I had a handy script for the same. The following script displays the transactions that happened on the server at an interval of one minute. You can change the WAITFOR DELAY to any other interval and it should work; a sketch of the idea follows.
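A minimal sketch of the approach (a simplified stand-in for the handy script mentioned above, based on the cumulative 'Transactions/sec' performance counter):

DECLARE @before BIGINT, @after BIGINT;

-- The counter is cumulative, so sample it twice and take the difference
SELECT @before = cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name = 'Transactions/sec'
       AND object_name LIKE '%Databases%'
       AND instance_name = '_Total';

WAITFOR DELAY '00:01:00';   -- change the interval as needed

SELECT @after = cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name = 'Transactions/sec'
       AND object_name LIKE '%Databases%'
       AND instance_name = '_Total';

SELECT @after - @before AS transactions_in_interval;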
2011

Here are two DMVs which were newly introduced in SQL Server 2012 and provide vital information about SQL Server:
DMV – sys.dm_os_volume_stats – Information about the operating system volume
DMV – sys.dm_os_windows_info – Information about the operating system

SQL Backup and FTP – A Quick and Handy Tool
I have used this tool extensively since 2009, on numerous occasions, and found it to be very impressive. What separates it from the crowd the most is its apparent simplicity and speed. When I install SQLBackupAndFTP and configure backups, all in 1 or 2 minutes, my clients are always impressed.

Quick Note about JOIN – Common Questions and Simple Answers
In this blog post we talk about JOIN and lots of things related to JOIN. I recently started office hours to answer questions and issues from the community, and I receive many questions related to JOIN. I will share a few of them here. Most of them are basic, but note that the basics are of great importance.

2012

Importance of User Without Login
Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?" A great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it.

Preserve Leading Zero While Copying to Excel from SSMS
Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts, plenty of interest has been generated in this subject, and there are a few questions I keep on getting about it. One of the questions is how to preserve leading zeros while copying data from SSMS to Excel. Well, it is almost the same as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server.

Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1
Earlier on this blog we asked two puzzles. The response from all of you was nothing but amazing; I received 350+ responses. Many are valid and many were indeed things I had not thought about. I strongly suggest you read all the puzzles and their answers here - trust me, if you start reading the comments you will not stop till you have read every single comment. Seriously, trust me on it. Personally I have learned a lot from them.

Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video
http://www.youtube.com/watch?v=TvlYy-TGaaA

Importance of User Without Login – T-SQL Demo Script
Earlier I wrote a blog post about SQL SERVER – Importance of User Without Login, and my friend and SQL expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases inside SQL Server 2012. Lots of people have asked me if I can also explain the same concept again, so here is a small demonstration of it. Let me show you how a user without a login can help. Before we continue on this subject I strongly recommend that you read my earlier blog post here. In the following demo I am going to demonstrate this situation: log in using the system admin account; create a user without login; check access; impersonate the user without login; check access; revert impersonation; give permission to the user without login; impersonate the user without login; check access; revert impersonation; clean up. (A minimal sketch of these steps appears after the reference note below.)

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
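As promised above, here is a minimal sketch of those demo steps (the table and user names are hypothetical, not the original demo script; run it as a sysadmin in a sandbox database):

USE tempdb;
CREATE TABLE dbo.Demo (i INT);
CREATE USER NoLoginUser WITHOUT LOGIN;

EXECUTE AS USER = 'NoLoginUser';
SELECT USER_NAME() AS current_user_name;  -- we are now NoLoginUser
-- SELECT * FROM dbo.Demo;                -- would fail: no permission yet
REVERT;

GRANT SELECT ON dbo.Demo TO NoLoginUser;

EXECUTE AS USER = 'NoLoginUser';
SELECT * FROM dbo.Demo;                   -- now succeeds
REVERT;

-- Clean up
DROP USER NoLoginUser;
DROP TABLE dbo.Demo;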

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
We need to understand the main difference between portlet workflow and servlet workflow. The main difference is that a request to a portlet can have two distinct phases: 1) the action phase and 2) the render phase. The action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.

The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of an MVC class has one method that deals with the request, the portlet version has two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.

The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.

Developing portlet.xml

Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<portlet-app version="2.0"
    xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <portlet>
    <portlet-name>SpringPortletName</portlet-name>
    <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
    <supports>
      <mime-type>text/html</mime-type>
      <portlet-mode>view</portlet-mode>
    </supports>
    <portlet-info>
      <title>SpringPortlet</title>
    </portlet-info>
  </portlet>
</portlet-app>

DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling the request, and then calls its handleActionRequest() or handleRenderRequest() method based on the request-processing phase. The Controller class executes the business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within the Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet.
The DispatcherPortlet is also responsible for loading the application context (the Spring configuration file) for the portlet. First, it checks the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. Because we did not specify the configLocation initialization parameter in portlet.xml, let's create the SpringPortletName-portlet.xml file in the next section.

Developing SpringPortletName-portlet.xml

Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

  <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
    <property name="prefix" value="/jsp/"/>
    <property name="suffix" value=".jsp"/>
  </bean>

  <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl">
    <property name="users">
      <list>
        <ref bean="point1"/>
        <ref bean="point2"/>
        <ref bean="point3"/>
        <ref bean="point4"/>
      </list>
    </property>
  </bean>

  <bean id="point1" class="com.wlp.spring.bean.User">
    <property name="name" value="Murali"/>
    <property name="points" value="6"/>
  </bean>
  <bean id="point2" class="com.wlp.spring.bean.User">
    <property name="name" value="Sai"/>
    <property name="points" value="13"/>
  </bean>
  <bean id="point3" class="com.wlp.spring.bean.User">
    <property name="name" value="Rama"/>
    <property name="points" value="43"/>
  </bean>
  <bean id="point4" class="com.wlp.spring.bean.User">
    <property name="name" value="Krishna"/>
    <property name="points" value="23"/>
  </bean>

  <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
    <property name="basename" value="messages"/>
  </bean>

  <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController">
    <property name="pointManager" ref="pointManager"/>
  </bean>

  <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController">
    <property name="sessionForm" value="true"/>
    <property name="pointManager" ref="pointManager"/>
    <property name="commandName" value="pointIncrease"/>
    <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/>
    <property name="formView" value="pointincrease"/>
    <property name="successView" value="users"/>
  </bean>

  <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor"/>

  <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping">
    <property name="order" value="1"/>
    <property name="interceptors">
      <list>
        <ref bean="parameterMappingInterceptor"/>
      </list>
    </property>
    <property name="portletModeParameterMap">
      <map>
        <entry key="view">
          <map>
            <entry key="pointincrease">
              <ref bean="pointIncreaseController"/>
            </entry>
            <entry key="users">
              <ref bean="userController"/>
            </entry>
          </map>
        </entry>
      </map>
    </property>
  </bean>

  <bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping">
    <property name="order" value="2"/>
    <property name="portletModeMap">
      <map>
        <entry key="view">
          <ref bean="userController"/>
        </entry>
      </map>
    </property>
  </bean>
</beans>

The SpringPortletName-portlet.xml file is the application context file for the portlet. Besides the view resolver, the message source and the business beans (pointManager and the User beans wired into it), the definitions to focus on are:

userController. This bean points to the com.wlp.spring.controller.UserController class, developed in the next section; it is the controller that renders the user listing.

portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet receives a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture: it is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request, based on the portlet mode of the current request. The portletModeMap property of the portletModeHandlerMapping bean is the place where we map a portlet mode name to a Controller; in the sample configuration, userController is responsible for handling View mode requests. The portletModeParameterHandlerMapping bean, which has the lower order value and is therefore consulted first, additionally dispatches on a request parameter ("users" or "pointincrease"), so the same View mode can switch between the user listing and the point-increase form; the ParameterMappingInterceptor forwards that mapping parameter from the action request to the render request, so the same handler also receives the render phase.

Developing UserController.java

In the preceding section, you learned that the userController bean is responsible for handling all View mode requests. Your next step is to create the UserController.java class as shown below:

public class UserController extends AbstractController {

    private PointManager pointManager;

    // Action phase: this read-only controller has no state-changing work to do.
    public void handleActionRequestInternal(ActionRequest request, ActionResponse response) throws Exception {
    }

    // Render phase: build the model (current time and the user list) and name the view.
    public ModelAndView handleRenderRequestInternal(RenderRequest request, RenderResponse response) throws Exception {
        String now = (new java.util.Date()).toString();
        Map<String, Object> myModel = new HashMap<String, Object>();
        myModel.put("now", now);
        myModel.put("users", this.pointManager.getUsers());
        return new ModelAndView("users", "model", myModel);
    }

    public void setPointManager(PointManager pointManager) {
        this.pointManager = pointManager;
    }
}

Every controller class in the Spring Portlet MVC Framework must implement the org.springframework.web.portlet.mvc.Controller interface, directly or indirectly. To make things easier, the framework provides the AbstractController class, a default implementation of the Controller interface. As a developer, you should always extend your controller from AbstractController or one of its more specific subclasses. Any implementation of a Controller should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, UserController extends AbstractController; because this portlet needs no action processing, handleActionRequestInternal() is left empty, and the real work happens in handleRenderRequestInternal(), which returns a ModelAndView whose view name is "users".
Developing web.xml

According to the Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, so it needs a Web application deployment descriptor (that is, web.xml). Open the existing web.xml file located at /WebContent/WEB-INF/web.xml and replace its contents with the code shown below:

<servlet>
  <servlet-name>ViewRendererServlet</servlet-name>
  <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>ViewRendererServlet</servlet-name>
  <url-pattern>/WEB-INF/servlet/view</url-pattern>
</servlet-mapping>
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

The web.xml file for the sample portlet declares two things:

ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps the PortletRequest into a ServletRequest and forwards control to ViewRendererServlet for the actual rendering. This allows the Spring Portlet MVC Framework to use the same View infrastructure as its servlet counterpart, the Spring Web MVC Framework.

ContextLoaderListener. The ContextLoaderListener class loads the Web application context at Web application startup. The Web application context is shared by all the portlets in the portlet application; in case of a duplicate bean definition, the definition in the portlet application context takes precedence over the one in the Web application context. The ContextLoader reads the value of the contextConfigLocation Web context parameter to find the location of the context file; if that parameter is not set, it falls back to the default value, /WEB-INF/applicationContext.xml.

The portlet Controller interface requires two methods that handle the two phases of a portlet request: one capable of handling an action request, and one capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers many controllers that already contain much of the functionality you might need; most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller: handling an action request, handling a render request, and returning a model and a view.
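For contrast with AbstractController, here is a minimal sketch of a controller implementing the Controller interface directly. It is illustrative only; the class name, view name and parameter are hypothetical and are not part of the sample project.

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;
import org.springframework.web.portlet.ModelAndView;
import org.springframework.web.portlet.mvc.Controller;

public class EchoController implements Controller {

    // Action phase: executed once per user action.
    public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception {
        // Carry a parameter over to the render phase.
        String text = request.getParameter("text");
        response.setRenderParameter("text", text == null ? "" : text);
    }

    // Render phase: may execute many times; returns the model and the view name.
    public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws Exception {
        String text = request.getParameter("text");
        return new ModelAndView("echo", "text", text == null ? "" : text);
    }
}

In practice, extending AbstractController is usually preferable, since it layers conveniences such as session checks and cache settings on top of this bare two-method contract.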
How rendering works

As you know, when the user accesses a page containing the sample portlet, performs some action on any other portlet on that page, or refreshes that page, a render request is sent to the portlet. Because DispatcherPortlet is the main portlet class, WebLogic Portal / WebCenter Portal calls its render() method, and the following sequence of events occurs:

1. The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRenderService() method.

2. When doRenderService() gets control, it first determines the locale of the request by calling PortletRequest.getLocale(). This locale is used for all locale-related decisions, such as which resource bundle to load or which JSP to display to the user.

3. Next, doRenderService() iterates through the HandlerMapping classes configured for this portlet, calling their getHandler() methods to identify the appropriate Controller for this request. The sample configures two handler mappings; for a plain render request in View mode, PortletModeHandlerMapping reads the value of the current portlet mode and, based on that, returns the UserController object configured to handle View mode requests.

4. doRenderService() then calls the controller's handleRenderRequestInternal() method. The implementation of handleRenderRequestInternal() in UserController.java is very simple: it builds the model and returns a ModelAndView with the view name "users" to DispatcherPortlet.

5. After control returns to doRenderService(), the next task is to figure out how to render the view. DispatcherPortlet iterates through all the ViewResolvers configured in the portlet application, calling their resolveViewName() methods. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with the view name "users", it applies the configured prefix and suffix ("/jsp/" and ".jsp") and checks whether /jsp/users.jsp exists. If it does, it returns a JstlView object wrapping users.jsp.

6. doRenderService() then creates a PortletRequestDispatcher that points to /WEB-INF/servlet/view, that is, to ViewRendererServlet. It sets the JstlView object in a request attribute and dispatches the request to ViewRendererServlet.

7. When ViewRendererServlet gets control, it reads the JstlView object from the request attribute, creates another RequestDispatcher pointing to /jsp/users.jsp, and passes control to it for the actual markup generation. The markup generated by users.jsp is returned to the user.

At this point, you may question the need for ViewRendererServlet: why can't DispatcherPortlet forward control directly to the JSP? Adding ViewRendererServlet in between allows the Spring Portlet MVC Framework to reuse the existing View infrastructure of Spring Web MVC. You may appreciate this more when you see how easy it is to integrate the Apache Tiles Framework with Spring Portlet MVC.

The attached project SpringPortlet.zip can be imported into your OEPE workspace, and SpringPortlet_Jars.zip contains the jar files required by the application. The project is written against Spring 2.5. The same JSR 168 portlet should work on WebCenter Portal as well.

Downloads:
Download the WebLogic Portal project containing the Spring portlet.
Download the Spring jars. (In addition to the above, you need to download spring.jar for Spring 2.5.)

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
    It is generally accepted
among business communities that technology by itself is not a silver bullet for all problems; but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post highlights some of the best practices, along with the dos and don'ts, that our practice has accumulated over the years in the identity and access management space in general, and in the context of OIM R2 in particular.

Best Practices

The following sections illustrate the leading practices in how to plan, implement and sustain a successful OIM deployment, based on our collective experience.

Planning is critical, but often overlooked

A common approach to planning an IAM program that we take with our clients is a three-step process involving a current state assessment, a future state roadmap, and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform a gap analysis, document the recommended controls to address the gaps, align the future state roadmap to business initiatives, and get buy-in from all stakeholders involved, to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects also increases the chances of success.

As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM does not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of a clustered environment for development, and have at least three environments in total to facilitate a controlled migration to production.

If your organization is planning to implement role-based access control, we recommend performing a role-mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements; if yours does, we highly recommend limiting the number of approval workflows to a small set. Most organizations manage operations across data centers with backend database synchronization; in that case, ensure that the overall latency between the data centers when replicating the databases is less than ten milliseconds, so that there are no front-office performance impacts.

Ingredients for a successful implementation

During the development phase of your project, a number of guidelines can be followed to help increase the chances of success. Most implementations cannot be completed without customizations; if yours requires them, it is good practice to perform code reviews to help ensure quality and to reduce performance-related code bottlenecks. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan time to correct coding defects, and ensure developers are empowered to report their own bugs, for maximum transparency.
Many organizations struggle to define a consistent approach to managing logs; this is particularly important given the amount of information that can be logged by OIM. We recommend Oracle Diagnostic Logging (ODL) as the logging mechanism: ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when log levels are changed during troubleshooting.

Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors, with configurations matching production as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the certificate migration processes, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. In the lower environments, we recommend at least a weekly backup to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, take frequent snapshots so that rollbacks can occur in the event of improper configuration.

Operate and sustain the solution to derive maximum benefits

When migrating OIM R2 to production, certain activities help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and we sometimes even allocate the UPA tables and indexes into their own tablespace for easier maintenance. Finally, many of our clients schedule the reconciliation tables to be archived at regular intervals in order to keep the size of the database reasonable and database performance optimal.

For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This decouples provisioning for a given target application from that target's availability and helps avoid broken workflows. To account for this and other abnormalities, we also advocate configuring OIM's monitoring controls to alert administrators to any abnormal situations.

Within OIM R2, we have begun advising our clients to utilize the 'profile' feature to encapsulate multiple commonly requested accounts, roles and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users spend less time performing the same steps for common tasks.

We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and proper handling of adapters/plug-ins; all of these are important configurations that allow faster provisioning and faster web page response times.
Many of our clients have begun to recognize the value of data mining and a remediation process, both during the initial phases of an implementation (to help ensure that high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins by identifying the data elements and assigning each a classification level based on criticality, risk and availability, and it should finish by following through with a remediation process.

Dos & Don'ts

Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

Dos:
- Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders.
- Establish an enterprise ID (a universal unique ID across the enterprise) early in the program.
- Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
- Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals, aligned with your business priorities.
- Defer complex integrations to the later phases and take advantage of lessons learned from previous phases.
- Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on.
- Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions. This will help overcome any design hurdles.
- Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution.

Don'ts:
- Avoid "boiling the ocean" and trying to integrate all enterprise applications in the first phase.
- Avoid major UI customizations that require code changes.
- Avoid publishing all the target entitlements if you don't anticipate their usage during access requests.
- Avoid integrating non-production environments with your production target systems.
- Avoid creating multiple accounts for the same user on the same system wherever possible.
- Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
- Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
- Avoid treating IAM as a point solution, and have an appropriate level of communication and a training plan for IT and business users alike.

Conclusion

In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes the PwC blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information, or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you.
Meet the Writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which the past 8 have been spent implementing Identity Management solutions.

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience spans security topics as diverse as network, infrastructure, application and data, and he brings a holistic point of view to problem solving.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and the Business Process Execution Language (BPEL).

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions: "What is the difference between a DELETE and a TRUNCATE?" Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul on this excellent solution, and as an encouragement to User Group members, I am publishing his article here.

Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, and process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#, and about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups and contributes to the community by actively participating in multiple forums and websites such as SQLAuthority.com, BeyondRelational.com and SQLServerCentral.com. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer's views in any way.

All data everywhere goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE, or simply CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity is that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

DELETE:
- DELETE supports a WHERE clause
- DELETE removes rows from a table, row by row
- Because DELETE operates row by row, it acquires row-level locks
- Depending upon the recovery model of the database, DELETE is a fully logged operation
- Because DELETE operates row by row, it can fire triggers

TRUNCATE:
- TRUNCATE does not support a WHERE clause
- TRUNCATE works by directly deallocating the data pages of a table
- TRUNCATE acquires a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
- TRUNCATE is therefore a minimally logged operation; again, this depends upon the recovery model of the database
- Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?"

After returning home and having a nice cup of coffee, my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user (or set of users) allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a "securable", which is any object such as a table, stored procedure or trigger. Rights are assigned to a principal on a securable (refer to the diagram in SQL Server Books Online).

SETTING UP THE ENVIRONMENT (01A_Truncate Table Permissions.sql; the script is provided at the end of the article)

By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:
1. A test database
2. Two database roles, with associated logins and users
3. A test table in the test database, populated with some data (the script uses row constructors, which are new to SQL 2008)

Creating the modules that will be used to enforce permissions:
1. We have already created one of the modules that we will be assigning permissions to: the table TruncatePermissionsTest.
2. We will now create two stored procedures, one for the DELETE operation and the other for the TRUNCATE operation. Note that for all practical purposes the end result is the same: all data is removed from the table TruncatePermissionsTest.

Assigning the permissions

Now comes the most important part of the demonstration: assigning the permissions. A permissions matrix can be worked out as shown in the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

THE TEST (01B_Truncate Table Test Queries.sql; the script is provided at the end of the article)

I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What I will demonstrate here is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than those highlighted here; I will leave it to the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios as a form of self-study. The test covers:
1. Testing SELECT permissions
2. Testing TRUNCATE permissions (remember, "deny by default"?)
3. Trying to circumvent security by attempting to TRUNCATE the table using the stored procedure

Hence, we have now proved that a user can indeed be granted permissions specifically for TRUNCATE. I also hope that the above has sparked some curiosity towards putting security around the potentially destructive operations DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server.

(Please find below the scripts 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql that have been used in this demonstration. Please note that these scripts contain purely test-level code only; they must not, at any cost, be used in production environments.)

01A_Truncate Table Permissions.sql

/* *****************************************************************************
Developed By  : Nakul Vachhrajani
Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use    : 1. Run through, step-by-step, the sequence up to Step 08 to create a test database
                2. Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate' and the other as 'AllowedTruncate'
                3. Come back to "Truncate Table Permissions.sql"
                4. Execute Step 10 to clean up!
Modifications : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security
                December 12, 2010 - NAV - Created
***************************************************************************** */

-- Step 01: Create a new test database
CREATE DATABASE TruncateTestDB
GO
USE TruncateTestDB
GO

-- Step 02: Add roles and users to demonstrate the security of the Truncate operation
-- 2a. Create the new roles
CREATE ROLE AllowedTruncateRole;
GO
CREATE ROLE RestrictedTruncateRole;
GO

-- 2b. Create new logins
CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO
CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO

-- 2c. Create new users using the logins created above
CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
GO
CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
GO
-- 2d. Add the newly created users to the newly created roles
sp_addrolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
GO

-- Step 03: Change over to the test database
USE TruncateTestDB
GO

-- Step 04: Create a test table within the test database
CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
GO

-- Step 05: Populate the required data
INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
GO

-- Step 06: Encapsulate the DELETE within another module
CREATE PROCEDURE proc_DeleteMyTable
WITH EXECUTE AS SELF
AS
DELETE FROM TruncateTestDB..TruncatePermissionsTest
GO

-- Step 07: Encapsulate the TRUNCATE within another module
CREATE PROCEDURE proc_TruncateMyTable
WITH EXECUTE AS SELF
AS
TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
GO

-- Step 08: Apply security
/* *****************************SECURITY MATRIX***************************************
=====================================================================================
Object                  | Permissions |                    Login
                        |             | RestrictedTruncate     | AllowedTruncate
                        |             | (User: NoTruncateUser) | (User: TruncateUser)
=====================================================================================
TruncatePermissionsTest | SELECT,     |        GRANT           |      (Default)
                        | INSERT,     |                        |
                        | UPDATE,     |                        |
                        | DELETE      |                        |
------------------------+-------------+------------------------+---------------------
TruncatePermissionsTest | ALTER       |        DENY            |      (Default)
------------------------+-------------+------------------------+---------------------
proc_DeleteMyTable      | EXECUTE     |        GRANT           |      DENY
------------------------+-------------+------------------------+---------------------
proc_TruncateMyTable    | EXECUTE     |        DENY            |      GRANT
------------------------+-------------+------------------------+---------------------
*****************************SECURITY MATRIX*************************************** */

/* Table: TruncatePermissionsTest */
GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO
DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO

/* Procedure: proc_DeleteMyTable */
GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
GO
DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
GO

/* Procedure: proc_TruncateMyTable */
DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
GO
GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
GO

-- Step 09: Test
-- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
--    1. one where you have logged in as 'RestrictedTruncate', and
--    2. the other as 'AllowedTruncate'

-- Step 10: Cleanup
sp_droprolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
GO
DROP USER TruncateUser
GO
DROP USER NoTruncateUser
GO
DROP LOGIN AllowedTruncate
GO
DROP LOGIN RestrictedTruncate
GO
DROP ROLE AllowedTruncateRole
GO
DROP ROLE RestrictedTruncateRole
GO
USE MASTER
GO
DROP DATABASE TruncateTestDB
GO

01B_Truncate Table Test Queries.sql

/* *****************************************************************************
Developed By  : Nakul Vachhrajani
Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use    : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                2. Execute this step-by-step in two different SSMS windows:
                   a. one where you have logged in as 'RestrictedTruncate', and
                   b. the other as 'AllowedTruncate'
                3. Return to "Truncate Table Permissions.sql"
                4. Execute Step 10 to clean up!
Modifications : December 12, 2010 - NAV - Created
***************************************************************************** */

-- Step 09A: Switch to the test database
USE TruncateTestDB
GO

-- Step 09B: Ensure that we have valid data
SELECT * FROM TruncatePermissionsTest
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09C: Attempt to truncate data from the table without using the stored procedure
TRUNCATE TABLE TruncatePermissionsTest
GO
-- (Expected: the following error will occur)
-- Msg 1088, Level 16, State 7, Line 2
-- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

-- Step 09D: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09E: Attempt to truncate data from the table using the stored procedure
EXEC proc_TruncateMyTable
GO
-- (Expected: executes successfully as the 'AllowedTruncate' user; errors out as under with 'RestrictedTruncate')
-- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09F: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
GO

-- Step 09G: Attempt to delete data from the table without using the stored procedure
DELETE FROM TruncatePermissionsTest
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 2
-- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.
-- Step 09H: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
GO

-- Step 09I: Attempt to delete data from the table using the stored procedure
EXEC proc_DeleteMyTable
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09J: Close this SSMS window and return to "Truncate Table Permissions.sql"

Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Groups have the talent to solve difficult problems.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Wednesday, May 23, 2012

Popular Releases

Christoc's DotNetNuke Module Development Template: 00.00.08 for DNN6: BEFORE USE YOU need to install the MSBuild Community Tasks available from http://msbuildtasks.tigris.org For best results you should configure your development environment as described in this blog post Then read this latest blog post about customizing and using these custom templates. Installation is simple To use this template place the ZIP (not extracted) file in your My Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\Web OR for VB My Documents\Visual Studio 2010\Te...

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.53: Fix issue #18106, where member operators on numeric literals caused the member part to be duplicated when not minifying numeric literals. ADD NEW FEATURE: ability to create source map files! The first map file format to be supported is the Script# format. Use the new -map filename switch to create map files when building your sources.

Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 Analysis Services Project: AdventureWorks 2008 Analysis Services sample database project files. Project files for Analysis Services 2008 sample cubes (standard and enterprise). Lessons 1 to 10 tutorial files.

HigLabo: HigLabo_20120522: Bug fix of the EncodeToMailHeaderLine method when TransferEncoding is QuotedPrintable and Encoding is not ASCII.

BlackJumboDog: Ver5.6.3: 2012.05.22 Ver5.6.3 (1) HTTP????????、ftp://??????????????????????

LogicCircuit: LogicCircuit 2.12.5.22: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the hierarchy of running circuits. This release fixes a start-up issue.

Internals Viewer (updated) for SQL Server 2008 R2: Internals Viewer for SSMS 2008 R2: Updated code to work with SSMS 2008 R2. Changed dependencies, removing old assemblies no longer present and replacing them with updated versions.

Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read our release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes

Virtu: Virtu 0.9.2: Source requirements: .NET Framework 4; Visual Studio 2010 with SP1 or Visual Studio 2010 Express with SP1; Silverlight 5 Tools for Visual Studio 2010 with SP1; Windows Phone 7 Developer Tools (which include XNA Game Studio 4). Binaries requirements: Silverlight 5; .NET Framework 4; XNA Framework 4.

SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.0): New features: view other users' predictions; hide/show background image (web part property). Installing SharePoint Euro 2012 Predictor: SharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365). Download the solution havivi.euro2012.wsp from the download page: Downloads. Upload this solution to your Site Collection via the solutions area.
Click on Activate to make the web parts in the solution available for use in the Site C...

Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.0.0): New Metro-style UI. New features: save and load settings to/from file; export only OptionSet attributes; use of GemBox Spreadsheet to generate Excel (makes the application lighter: 1.5MB instead of 7MB).

Audio Pitch & Shift: Audio Pitch And Shift 4.2.0: Backward/forward buttons; improved features for encoding, streaming and menus; bug fixes.

Silverlight socket component: Smark.NetDisk: Smark.NetDisk?????Silverlight ?.net???????????,???????????????????????。Smark.NetDisk??????????,????.net???????????????????????tcp??;???????Silverlight??????????????????????

callisto: callisto 2.0.28: Update log: extended Scribble protocol; updated HTML5 client code - now supports the latest versions of Google Chrome.

ExtAspNet: ExtAspNet v3.1.6: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????: Apache License 2.0 (Apache) ??: http://bbs.extasp.net/ ??: http://demo.extasp.net/ ??: http://doc.extasp.net/ ??: http://extaspnet.codeplex.com/ ??: http://sanshi.cnblogs.com/ ????: +2012-05-20 v3.1.6 -??RowD...

Dynamics XRM Tools: Dynamics XRM Tools BETA 1.0: The Dynamics XRM Tools 1.0 BETA is now available. Separate downloads are available for On Premise and Online, as certain features are only available On Premise. This is a BETA build and may not resemble the final release. Many enhancements are in development and will be made available soon. Please provide feedback so that we may learn and discover how to make these tools better.

PHPExcel: PHPExcel 1.7.7: See the Change Log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd-party PDF libraries (tcPDF, mPDF and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded separately from the appropriate sites.

GhostBuster: GhostBuster Setup (91520): Added WMI-based RestorePoint support. Removed test code from program.cs. Improved counting. Changed color of ghosted but unfiltered devices. Changed HwEntries into an ObservableCollection. Added Properties form. Added Properties menu item to the context menu. Added Hide Unfiltered Devices to the context menu. If you like this tool, leave me a note, rate this project, write a review, or Donate to Ghostbuster.

EXCEL??、??、????????:DataPie(??MSSQL 2008、ORACLE、ACCESS 2007): DataPie_V3.2: V3.2, 2012?5?19? ????ORACLE??????。

AvalonDock: AvalonDock 2.0.0795: Welcome to the Beta release of AvalonDock 2.0. After 4 months of hard work I'm ready to upload the beta version of AvalonDock 2.0. This new version boasts a lot of new features and is now stable enough to be deployed in production scenarios; for this reason I encourage everyone using AD 1.3 or earlier to upgrade soon to this new version. The final version is scheduled for the end of June. What is included in Beta: 1) Stability!
Thanks to all users' contributions, I've corrected a lot of issues...

New Projects

Aits HRM: Human Resource Management.

BeeWix Toolkit: The toolkit is a set of classes we are using here at Beewix for our Social Network project.

Church Management: Simple church management software by Lalit Kumar & Ivan Lewis.

Client Center for ConfigurationManager: The tool is designed for IT professionals to troubleshoot SCCM/CM12 client-related issues. The Client Center for Configuration Manager provides a quick and easy overview of client settings, including running services and agent settings, in an easy-to-use user interface.

Common Instance Factory: Provides an abstraction over dependency injection and IoC containers using the abstract factory design pattern. It was created as an alternative to the Common Service Locator, but it does not use the service-location anti-pattern and it provides support for releasing instances. Adapters are available for various dependency injection containers, such as Ninject and SimpleInjector, with more to come shortly. There are also WCF extensions available for decoupling services from DI containers.

ConfigMgrRegistrationRequest: ConfigMgrRegistrationRequest allows you to simulate a client using the System Center 2012 Configuration Manager Client SDK. Basically, this project allows you to create fake CM12 clients, ideal when you need test load, reports, etc. When you run the tool (as a local Administrator), it will: open a CSV file and send a request to register a new client to SCCM; send an update client id to SCCM; request policy; send a DDR message; send hinv. More info on my blog at http://wmug.co.uk/w...

COSC 320 TWIN System v2.0: Migration of the previous version due to a one-month server limitation.

Cultiv Photometadata for Umbraco: The PhotoMetaData package will extract metadata from images that you upload in your Umbraco media section.

Dependency Injection Service Provider (DISP): Dependency Injection Service Provider (DISP) is a wrapper, or an interface, that aims to allow .NET developers to use one of the inversion of control (IoC) containers out there, such as StructureMap or Ninject, from a high level of abstraction, using the same interface and classes without having to worry about the concrete implementation of each of the IoC containers. DISP provides an interface that will create an instance of an IoC container of your election, and provide a generic interface to co...

DirectPOS.NET: DirectPOS is a set of classes that bypasses OPOS and other libraries to talk to POS devices directly, currently focusing on support for Bixolon, Samsung and Epson thermal printers and some other devices.

Dodongo's Quest: Roguelike in C# and XNA.

EpicCms: epiccms is only a name. The underlying system consists of two components: an API data service that replicates database tables, views and procedures directly by URL routes, and a web interface that is very lightweight and purposed only to directly edit database table data. It is the intention that users will have full control over their data: how it is distributed, who has access, and when or how long anything (anyone) has access to all single pieces of data. It is also the intention that users wi...

Find Duplicate file: This application is developed in WPF. You can find duplicate files from the file impression, not from the file size or the file name, although this process is very time consuming. You can search for and delete the similar files from the application itself, or open the file location and delete them manually from Windows Explorer.
HoleFilling: We present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which conta...ILCC: A C compiler made in C# that generates .NET CIL code, XML, YAML and .NET PInvokeKendo UI framework for Orchard: This is a common location for kendo UI framework and related script libraries to use with Orchard CMS.MASAS SharePoint: This project is all about creating a SharePoint 2010-based capability to use the Multi-Agency Situational Awareness System (of systems) - MASAS. MASAS is focused on the exchange of structured information. This module is focused on using that information in your SharePoint 2010-based applications. Think of it this way - MASAS is an information bucket. MASAS allows you to share structured information by putting information into the bucket and by pulling information out of the bucket. ...MCP History: History Module for CB websiteMediaWikiSPMigrator: This project is aimed to make it easier to migrate from Mediawiki wiki's into SharePoint. The big differentiator is that it will allow you to map templates into a specific content type in SharePoint. NConf - Advanced Configuration Manager: NConf is an advanced configuration system for .Net projects. It's written out of the need for more advanced configuration than what .Net provides. The key features it supports is multiple configuration sources, simple to use syntax, the ability to reload/update configuration at runtime, and the easy ability to implement custom configuration sources.NMEA Interpreter: NMEA Interpreter is a class library created for one of my projects. It's main function is to parse NMEA sentences into usable easy to read and display data.Occupado: Time management for the School enviroment.OmahaMTG Site: Omaha MTG SiteProject Bloodlust Fury: Group project for CIS 375Radar IoC container: Very simple, fast and easy to use IoC container. Don't have any dependencies on external libraries - just pure .NET 4 Client Profile. Minimal size (~18kB in release build) makes it suitable to use in any project type.Service Request System: Service Request System (SRS) is a lightweight application for submitting and managing Service Requests of any type. The application is written in ASP.net (C#), and can utilize any type of database.Sign In As A Different User: Running your browser (IE) in a corporate environment will give you single sign on to web applications running in your intranet. But in some cases you need to access an URL with different credentials (admin purpose, etc.). Applications like SharePoint will provide you a solution right out of the box, but if this is not available the SignInAsADifferentUser project may help you. We as Glück & Kanja Consulting AG deployed such configurations in relation to Microsoft Lync components. Searching the...SimIn - Simulate Input in Your .NET Applications: SimIn is a light-weight .NET library which allows You to send user input to another applications. You can use SimIn in two modes: simulating user input (SendMessages), or control Your mouse. 
It supports DirectX input simulation, which means you can send mouse or keyboard messages to any DirectX game you want, and the game will register it correctly. SocialVeris: Este projeto consiste em uma rede social da instituição de ensino Veris IBTA. É um projeto de conclusão do curso de Análise e Desenvolvimento de Sistemas.Sprocket: Web application to management tasks in small company. TheList: A web an mobile application oriented to users that need a support in the creation of the supermarket list.Turntable Enhanced: TT Enhanced is a Turntable.fm Chrome extension used for giving the Turntable.fm website a bit of a face-lift. It adds cosmetic changes to the website, room moderation, drop down lists for easier usability, and much more.Web Part Collector: WebPart collector will help you identify any webpart or all webparts that are located inside your site collection


  • Retrieving headers on all pages of a Word export

    - by udaya
    Hi, I am exporting data from a PHP page to Word, and I get an arbitrary number of rows on each page. How do I set the maximum number of rows a Word page can contain? I want only 20 rows per page. Below is the code I use to export the data to Word. I get the data in Word format, but the header row is not repeated on every page. For example, page 1 looks like this:

        slno  name   country  state      town
        1     vivek  india    tamilnadu  trichy
        2     uday   india    kerala     coimbatore

    Like this I get many rows, but on page 2 I don't get the headers (name, country, state, town); I only get rows of details such as "kumar america xxxx yyyy". I want every page to look like:

        slno  name   country     state  town
        n     chris  newzealand  ghgg   jkgj

    Can I get the headers on every page? If that is not possible, is there any way to limit the number of rows displayed on each page?

    <?php
    // EDIT YOUR MySQL Connection Info:
    $DB_Server   = "localhost"; // your MySQL Server
    $DB_Username = "root";      // your MySQL User Name
    $DB_Password = "";          // your MySQL Password
    $DB_DBName   = "cms";       // your MySQL Database Name
    $DB_TBLName  = "";          // your MySQL Table Name

    $sql = "SELECT (SELECT COUNT(*) FROM tblentercountry t2
                    WHERE t2.dbName <= t1.dbName and t1.dbIsDelete='0') AS SLNO,
                   dbName as Namee, t3.dbCountry as Country, t4.dbState as State, t5.dbTown as Town
            FROM tblentercountry t1
            join tablecountry as t3, tablestate as t4, tabletown as t5
            where t1.dbIsDelete='0'
              and t1.dbCountryId=t3.dbCountryId
              and t1.dbStateId=t4.dbStateId
              and t1.dbTownId=t5.dbTownId
            order by dbName limit 0,50";

    // Optional: print out a title at the top of the Excel or Word file with a timestamp
    // for when the file was generated: set $Use_Title = 1 to generate a title, 0 not to.
    $Use_Title = 1;
    // define date for title: EDIT this to create the time format you need
    // $now_date = date('m-d-Y H:i');
    // define title for the .doc or .xls file: EDIT this if you want
    $title = "Country";

    /* Leave the connection info below as it is: just edit the above.
       (Editing of code past this point is recommended only for advanced users.) */

    // $w relied on register_globals in the original; read it from the query string instead
    $w = isset($_GET['w']) ? (int) $_GET['w'] : 0;

    // create MySQL connection
    $Connect = @mysql_connect($DB_Server, $DB_Username, $DB_Password)
        or die("Couldn't connect to MySQL:" . mysql_error() . "" . mysql_errno());
    // select database
    $Db = @mysql_select_db($DB_DBName, $Connect)
        or die("Couldn't select database:" . mysql_error() . "" . mysql_errno());
    // execute query
    $result = @mysql_query($sql, $Connect)
        or die("Couldn't execute query:" . mysql_error() . "" . mysql_errno());

    // if this parameter is included ($w=1), the file returned will be in Word format ('.doc');
    // if the parameter is not included, the file returned will be in Excel format ('.xls')
    if (isset($w) && ($w == 1)) {
        $file_type   = "msword";
        $file_ending = "doc";
    } else {
        $file_type   = "vnd.ms-excel";
        $file_ending = "xls";
    }

    // header info for browser: determines file type ('.doc' or '.xls')
    header("Content-Type: application/$file_type");
    header("Content-Disposition: attachment; filename=database_dump.$file_ending");
    header("Pragma: no-cache");
    header("Expires: 0");

    /* Start of formatting for Word or Excel */
    if (isset($w) && ($w == 1)) { // check for $w again
        /* FORMATTING FOR WORD DOCUMENTS ('.doc') */
        // create title with timestamp:
        if ($Use_Title == 1) {
            echo("$title\n\n");
        }
        // define separator (defines columns in Excel and tabs in Word)
        $sep = "\n"; // newline character
        while ($row = mysql_fetch_row($result)) {
            // set_time_limit(60); // HaRa
            $schema_insert = "";
            for ($j = 0; $j < mysql_num_fields($result); $j++) {
                // define field names
                $field_name = mysql_field_name($result, $j); // will show the name of each field
                $schema_insert .= "$field_name:\t";
                if (!isset($row[$j])) {
                    $schema_insert .= "NULL" . $sep;
                } elseif ($row[$j] != "") {
                    $schema_insert .= "$row[$j]" . $sep;
                } else {
                    $schema_insert .= "" . $sep;
                }
            }
            $schema_insert = str_replace($sep . "$", "", $schema_insert);
            $schema_insert .= "\t";
            print(trim($schema_insert));
            // end of each MySQL row:
            // print a line to separate the data of each MySQL table row
            print "\n----------------------------------------------------\n";
        }
    } else {
        /* FORMATTING FOR EXCEL DOCUMENTS ('.xls') */
        // create title with timestamp:
        if ($Use_Title == 1) {
            echo("$title\n");
        }
        // define separator (defines columns in Excel and tabs in Word)
        $sep = "\t"; // tab character
        // print column names as the names of the MySQL fields
        for ($i = 0; $i < mysql_num_fields($result); $i++) {
            echo mysql_field_name($result, $i) . "\t";
        }
        print("\n");
        // end of printing column names
        // while loop to get the data
        while ($row = mysql_fetch_row($result)) {
            // set_time_limit(60); // HaRa
            $schema_insert = "";
            for ($j = 0; $j < mysql_num_fields($result); $j++) {
                if (!isset($row[$j]))
                    $schema_insert .= "NULL" . $sep;
                elseif ($row[$j] != "")
                    $schema_insert .= "$row[$j]" . $sep;
                else
                    $schema_insert .= "" . $sep;
            }
            $schema_insert = str_replace($sep . "$", "", $schema_insert);
            // following fix suggested by Josue (thanks, Josue!)
            // this corrects output in Excel when table fields contain \n or \r:
            // these two characters are now replaced with a space
            $schema_insert = preg_replace("/\r\n|\n\r|\n|\r/", " ", $schema_insert);
            $schema_insert .= "\t";
            print(trim($schema_insert));
            print "\n";
        }
    }
    ?>
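    One way to look at the header problem is as a chunking pattern: while streaming the export, re-emit the header row after every N data rows. A minimal sketch of that pattern (my own illustration, in C# rather than the question's PHP; the 20-row page size and the sample column names come from the question, everything else is assumed):

    using System;
    using System.Collections.Generic;
    using System.Text;

    class ChunkedExport
    {
        // Emit a tab-separated dump, repeating the header row every `rowsPerPage`
        // rows so that each "page" of output starts with the column names.
        static string Export(string[] headers, IEnumerable<string[]> rows, int rowsPerPage = 20)
        {
            var sb = new StringBuilder();
            int count = 0;
            foreach (var row in rows)
            {
                if (count % rowsPerPage == 0)                 // page boundary: repeat the header
                    sb.AppendLine(string.Join("\t", headers));
                sb.AppendLine(string.Join("\t", row));
                count++;
            }
            return sb.ToString();
        }

        static void Main()
        {
            var headers = new[] { "slno", "name", "country", "state", "town" };
            var rows = new List<string[]>
            {
                new[] { "1", "vivek", "india", "tamilnadu", "trichy" },
                new[] { "2", "uday",  "india", "kerala",    "coimbatore" },
            };
            Console.Write(Export(headers, rows));
        }
    }

    Note that with the plain-text-as-.doc trick used in the script above, a repeated header would presumably only line up with a physical Word page if some form of page break were also emitted at each chunk boundary.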


  • Windows Impersonation failed

    - by skprocks
    I am using the following code to implement impersonation for a particular Windows account, and it is failing. Please help.

    using System;
    using System.IO;
    using System.Web;
    using System.Configuration;
    using System.Security.Principal;
    using System.Runtime.InteropServices;

    public partial class Source_AddNewProduct : System.Web.UI.Page
    {
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(string principal, string authority, string password,
            LogonSessionType logonType, LogonProvider logonProvider, out IntPtr token);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        enum LogonSessionType : uint
        {
            Interactive = 2,
            Network,
            Batch,
            Service,
            NetworkCleartext = 8,
            NewCredentials
        }

        enum LogonProvider : uint
        {
            Default = 0, // default for platform (use this!)
            WinNT35,     // sends smoke signals to authority
            WinNT40,     // uses NTLM
            WinNT50      // negotiates Kerb or NTLM
        }

        // Impersonation is used when the user tries to upload an image to a network drive.
        protected void btnPrimaryPicUpload_Click1(object sender, EventArgs e)
        {
            try
            {
                string mDocumentExt = string.Empty;
                string mDocumentName = string.Empty;
                HttpPostedFile mUserPostedFile = null;
                HttpFileCollection mUploadedFiles = null;
                string xmlPath = string.Empty;
                FileStream fs = null;
                StreamReader file;
                string modify;

                mUploadedFiles = HttpContext.Current.Request.Files;
                mUserPostedFile = mUploadedFiles[0];

                if (mUserPostedFile.ContentLength >= 0 && Path.GetFileName(mUserPostedFile.FileName) != "")
                {
                    mDocumentName = Path.GetFileName(mUserPostedFile.FileName);
                    mDocumentExt = Path.GetExtension(mDocumentName);
                    mDocumentExt = mDocumentExt.ToLower();

                    // extension is already lower-cased above
                    if (mDocumentExt != ".jpg" && mDocumentExt != ".gif" && mDocumentExt != ".jpeg" &&
                        mDocumentExt != ".tiff" && mDocumentExt != ".png" && mDocumentExt != ".raw" &&
                        mDocumentExt != ".bmp" && mDocumentExt != ".tif")
                    {
                        Page.RegisterStartupScript("select",
                            "<script language=" + Convert.ToChar(34) + "VBScript" + Convert.ToChar(34) +
                            "> MsgBox " + Convert.ToChar(34) + "Please upload valid picture file format" + Convert.ToChar(34) +
                            " , " + Convert.ToChar(34) + "64" + Convert.ToChar(34) +
                            " , " + Convert.ToChar(34) + "WFISware" + Convert.ToChar(34) + "</script>");
                    }
                    else
                    {
                        int intDocLen = mUserPostedFile.ContentLength;
                        byte[] imageBytes = new byte[intDocLen];
                        mUserPostedFile.InputStream.Read(imageBytes, 0, mUserPostedFile.ContentLength);

                        // xmlPath = @ConfigurationManager.AppSettings["ImagePath"].ToString();
                        xmlPath = Server.MapPath("./../ProductImages/");
                        mDocumentName = Guid.NewGuid().ToString().Replace("-", "")
                            + System.IO.Path.GetExtension(mUserPostedFile.FileName);
                        mUserPostedFile.SaveAs(xmlPath + mDocumentName);

                        // Remove commenting up to the stmt xmlPath = "./../ProductImages/"; to implement impersonation
                        byte[] bytContent;
                        IntPtr token = IntPtr.Zero;
                        WindowsImpersonationContext impersonatedUser = null;
                        try
                        {
                            // Note: credentials should be encrypted in the configuration file
                            bool result = LogonUser(
                                ConfigurationManager.AppSettings["ServiceAccount"].ToString(),
                                "ad-ent",
                                ConfigurationManager.AppSettings["ServiceAccountPassword"].ToString(),
                                LogonSessionType.Network,
                                LogonProvider.Default,
                                out token);
                            if (result)
                            {
                                WindowsIdentity id = new WindowsIdentity(token);
                                // Begin impersonation
                                impersonatedUser = id.Impersonate();
                                mUserPostedFile.SaveAs(xmlPath + mDocumentName);
                            }
                            else
                            {
                                throw new Exception("Identity impersonation has failed.");
                            }
                        }
                        catch
                        {
                            throw;
                        }
                        finally
                        {
                            // Stop impersonation and revert to the process identity
                            if (impersonatedUser != null)
                                impersonatedUser.Undo();
                            // Free the token
                            if (token != IntPtr.Zero)
                                CloseHandle(token);
                        }

                        xmlPath = "./../ProductImages/";
                        xmlPath = xmlPath + mDocumentName;
                        string o_image = xmlPath; // for impersonation, uncomment this line and comment the next line
                        // string o_image = "../ProductImages/" + mDocumentName;
                        ViewState["masterImage"] = o_image;

                        // fs = new FileStream(xmlPath, FileMode.Open, FileAccess.Read);
                        // file = new StreamReader(fs, Encoding.UTF8);
                        // modify = file.ReadToEnd();
                        // file.Close();
                        // commented by saurabh kumar 28may'09

                        imgImage.Visible = true;
                        imgImage.ImageUrl = ViewState["masterImage"].ToString();
                        img_Label1.Visible = false;
                    }
                    // e.Values["TemplateContent"] = modify;
                    // e.Values["TemplateName"] = mDocumentName.Replace(".xml", "");
                }
            }
            catch (Exception ex)
            {
                ExceptionUtil.UI(ex);
                Response.Redirect("errorpage.aspx");
            }
        }
    }

    On execution the code throws a System.InvalidOperationException. I have given the Windows service account that I am impersonating full control of the destination folder.
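    One thing worth checking here (an assumption on my part, not something confirmed in the question): a token obtained with LogonSessionType.Network is a server-side logon and often cannot be used to write to a remote share from the web server. When the alternate identity is only needed for outbound network access, LOGON32_LOGON_NEW_CREDENTIALS (NewCredentials = 9 in the enum above) together with the WinNT50 provider is the usual combination. A minimal sketch, reusing the P/Invoke declarations and enums from the question's code; the account name, password source, and UNC path are hypothetical:

    // Sketch only: log on with NewCredentials so the current identity is kept for
    // local access while the supplied credentials are used for remote (share) access.
    IntPtr token = IntPtr.Zero;
    WindowsImpersonationContext ctx = null;
    try
    {
        // NewCredentials (LOGON32_LOGON_NEW_CREDENTIALS = 9) requires the WinNT50 provider.
        bool ok = LogonUser("svc-upload",            // hypothetical account name
                            "ad-ent",                // domain, as in the question
                            "password-from-config",  // hypothetical; read from protected config in practice
                            LogonSessionType.NewCredentials,
                            LogonProvider.WinNT50,
                            out token);
        if (!ok)
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

        using (WindowsIdentity id = new WindowsIdentity(token))
        {
            ctx = id.Impersonate();
            // Any file I/O here hits the network with the alternate credentials.
            File.WriteAllBytes(@"\\fileserver\ProductImages\test.jpg", new byte[] { 0xFF });
        }
    }
    finally
    {
        if (ctx != null) ctx.Undo();          // revert to the process identity
        if (token != IntPtr.Zero) CloseHandle(token);
    }

    Checking Marshal.GetLastWin32Error when LogonUser fails also gives a far more specific diagnosis than a generic exception message.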


  • Problem adding an image to a UIButton

    - by monish
    Hi friends, I have another problem in my application and I am wasting a lot of time on it. Can anyone please help me with it? I have an Event, and the user should be able to give a rating for that event. In cellForRowAtIndexPath: I have this code:

    - (UITableViewCell *)tableView:(UITableView *)tv cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"MasterViewIdentifier"];
        if (cell == nil)
        {
            cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                           reuseIdentifier:@"MasterViewIdentifier"] autorelease];
            cell.selectionStyle = UITableViewCellSelectionStyleNone;

            UIView *elementView = [[UIView alloc] initWithFrame:CGRectMake(20, 170, 320, 280)];
            elementView.tag = 0;
            elementView.backgroundColor = [UIColor clearColor];
            [cell.contentView addSubview:elementView];
            [elementView release];
        }

        UIView *elementView = [cell.contentView viewWithTag:0];
        elementView.backgroundColor = [UIColor clearColor];
        for (UIView *subView in elementView.subviews)
            [subView removeFromSuperview];

        if (indexPath.section == 8)
        {
            UIImage *whiteImg  = [UIImage imageNamed:@"white_star.png"];
            UIImage *yellowImg = [UIImage imageNamed:@"yellow_Star.png"];

            UIButton *button1 = [[UIButton alloc] initWithFrame:CGRectMake(159, 15, 25, 20)];
            [button1 addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside];
            button1.tag = 1;

            UIButton *button2 = [[UIButton alloc] initWithFrame:CGRectMake(185, 15, 25, 20)];
            [button2 addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside];
            button2.tag = 2;

            UIButton *button3 = [[UIButton alloc] initWithFrame:CGRectMake(211, 15, 25, 20)];
            [button3 addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside];
            button3.tag = 3;

            UIButton *button4 = [[UIButton alloc] initWithFrame:CGRectMake(237, 15, 25, 20)];
            [button4 addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside];
            button4.tag = 4;

            UIButton *button5 = [[UIButton alloc] initWithFrame:CGRectMake(263, 15, 25, 20)];
            [button5 addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside];
            button5.tag = 5;

            NSArray *buttons = [NSArray arrayWithObjects:button1, button2, button3, button4, button5, nil];

            // Light up the first `eventRatings` stars in yellow; the rest stay white.
            for (NSUInteger i = 0; i < [buttons count]; i++)
            {
                UIImage *img = (i < event.eventRatings) ? yellowImg : whiteImg;
                [[buttons objectAtIndex:i] setBackgroundImage:img forState:UIControlStateNormal];
            }

            // Add the buttons to the cell and enable them only when rating is allowed.
            for (UIButton *b in buttons)
            {
                [elementView addSubview:b];
                b.enabled = (isRightButton == NO);
                [b release];
            }

            [elementView addSubview:ratingsTitleLabel];
            cell.accessoryType = UITableViewCellAccessoryNone;
        }
        return cell;
    }

    And the button's action is written as:

    - (void)buttonAction:(id)sender
    {
        rating = [sender tag];
        printf("\n Rating value inside buttonAction ~~~~~~~~ %d", rating);
        event.eventRatings = rating;
        [tableView reloadData];
    }

    When I build the application in the 3.1.2 simulator it works fine and displays the star images. My problem is that when I build it for a 3.1.2 device, the images are not displayed. I checked the file names for case-sensitivity and they look good, but I am still not getting the images to display. Guys, help me solve this. Thank you, Monish Kumar.


  • Playing a media file in an Android media player application

    - by Mangesh
    Hi. I am new to Android development. I just started creating my own media player application by looking at the code samples in the Android SDK. While trying to play a local media file (m.3gp), I get an IOException: error(1,-4). Can somebody please help me with this? Here is my code.

    package com.mediaPlayer;

    import java.io.IOException;

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.DialogInterface;
    import android.media.MediaPlayer;
    import android.media.MediaPlayer.OnBufferingUpdateListener;
    import android.media.MediaPlayer.OnCompletionListener;
    import android.media.MediaPlayer.OnPreparedListener;
    import android.media.MediaPlayer.OnVideoSizeChangedListener;
    import android.os.Bundle;
    import android.util.Log;
    import android.view.SurfaceHolder;
    import android.view.View;
    import android.view.View.OnClickListener;
    import android.widget.Button;

    public class MediaPlayer1 extends Activity implements OnBufferingUpdateListener,
            OnCompletionListener, OnPreparedListener, OnVideoSizeChangedListener,
            SurfaceHolder.Callback {

        private static final String TAG = "MediaPlayerByMangesh";

        // Widgets in the application
        private Button btnPlay;
        private Button btnPause;
        private Button btnStop;

        private MediaPlayer mMediaPlayer;
        private String path = "m.3gp";
        private SurfaceHolder holder;
        private int mVideoWidth;
        private int mVideoHeight;
        private boolean mIsVideoSizeKnown = false;
        private boolean mIsVideoReadyToBePlayed = false;

        // For the id of the radio button selected
        private int radioCheckedId = -1;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            Log.d(TAG, "Entered OnCreate:");
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            Log.d(TAG, "Creating Buttons:");
            btnPlay = (Button) findViewById(R.id.btnPlay);
            btnPause = (Button) findViewById(R.id.btnPause);
            btnPause.setEnabled(false); // on app load, the Pause button is disabled
            btnStop = (Button) findViewById(R.id.btnStop);
            btnStop.setEnabled(false);

            /* Attach the listener to the Play, Pause and Stop buttons */
            Log.d(TAG, "Watching for Click");
            btnPlay.setOnClickListener(mClickListener);
            btnPause.setOnClickListener(mClickListener);
            btnStop.setOnClickListener(mClickListener);
        }

        /* ClickListener for the buttons. Depending on the button clicked,
           the corresponding method is called. */
        private OnClickListener mClickListener = new OnClickListener() {
            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                case R.id.btnPlay:
                    Log.d(TAG, "Clicked Play Button");
                    Log.d(TAG, "Calling Play Function");
                    Play();
                    break;
                case R.id.btnPause:
                    Pause();
                    break;
                case R.id.btnStop:
                    Stop();
                    break;
                }
            }
        };

        /** Play the video. */
        private void Play() {
            // Create a new media player and set the listeners
            mMediaPlayer = new MediaPlayer();
            Log.d(TAG, "Entered Play function:");
            try {
                mMediaPlayer.setDataSource(path);
            } catch (IOException ie) {
                Log.d(TAG, "IO Exception:" + path);
            }
            mMediaPlayer.setDisplay(holder);
            try {
                mMediaPlayer.prepare();
            } catch (IOException ie) {
                Log.d(TAG, "IO Exception:" + path);
            }
            mMediaPlayer.setOnBufferingUpdateListener(this);
            mMediaPlayer.setOnCompletionListener(this);
            mMediaPlayer.setOnPreparedListener(this);
            // mMediaPlayer.setOnVideoSizeChangedListener(this);
            // mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
        }

        public void onBufferingUpdate(MediaPlayer arg0, int percent) {
            Log.d(TAG, "onBufferingUpdate percent:" + percent);
        }

        public void onCompletion(MediaPlayer arg0) {
            Log.d(TAG, "onCompletion called");
        }

        public void onVideoSizeChanged(MediaPlayer mp, int width, int height) {
            Log.v(TAG, "onVideoSizeChanged called");
            if (width == 0 || height == 0) {
                Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")");
                return;
            }
            mIsVideoSizeKnown = true;
            mVideoWidth = width;
            mVideoHeight = height;
            if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) {
                startVideoPlayback();
            }
        }

        public void onPrepared(MediaPlayer mediaplayer) {
            Log.d(TAG, "onPrepared called");
            mIsVideoReadyToBePlayed = true;
            if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) {
                startVideoPlayback();
            }
        }

        public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) {
            Log.d(TAG, "surfaceChanged called");
        }

        public void surfaceDestroyed(SurfaceHolder surfaceholder) {
            Log.d(TAG, "surfaceDestroyed called");
        }

        public void surfaceCreated(SurfaceHolder holder) {
            Log.d(TAG, "surfaceCreated called");
            Play();
        }

        private void startVideoPlayback() {
            Log.v(TAG, "startVideoPlayback");
            holder.setFixedSize(176, 144);
            mMediaPlayer.start();
        }

        /** Pause the video (not implemented yet). */
        private void Pause() {
        }

        /** Stop the video (not implemented yet). */
        private void Stop() {
        }

        /**
         * Shows the error message in an alert dialog.
         *
         * @param errorMessage the error message to show
         * @param fieldId the id of the field which caused the error; required so
         *        that focus can be set on that field once the dialog is dismissed
         */
        private void showErrorAlert(String errorMessage, final int fieldId) {
            new AlertDialog.Builder(this).setTitle("Error")
                    .setMessage(errorMessage)
                    .setNeutralButton("Close", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            findViewById(fieldId).requestFocus();
                        }
                    }).show();
        }
    }

    Thanks, Mangesh Kumar K.


  • Insert/update/delete with XML in .NET 3.5

    - by Radhi
    Hello guys, i have "UserProfile" in my site which we have stored in xml fromat of xml is as below <UserProfile xmlns=""> <BasicInfo> <Title value="Basic Details" /> <Fields> <UserId title="UserId" display="No" right="Public" value="12" /> <EmailAddress title="Email Address" display="Yes" right="Public" value="[email protected]" /> <FirstName title="First Name" display="Yes" right="Public" value="Radhi" /> <LastName title="Last Name" display="Yes" right="Public" value="Patel" /> <DisplayName title="Display Name" display="Yes" right="Public" value="Radhi Patel" /> <RegistrationStatusId title="RegistrationStatusId" display="No" right="Public" value="10" /> <RegistrationStatus title="Registration Status" display="Yes" right="Private" value="Registered" /> <CountryName title="Country" display="Yes" right="Public" value="India" /> <StateName title="State" display="Yes" right="Public" value="Maharashtra" /> <CityName title="City" display="Yes" right="Public" value="Mumbai" /> <Gender title="Gender" display="Yes" right="Public" value="FeMale" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="0" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Nov 27 2009 3:08PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Feb 18 2010 1:43PM " /> <LogInStatusId title="LogInStatusId" display="No" right="Public" value="1" /> <LogInStatus title="LogIn Status" display="Yes" right="Private" value="Free" /> <ProfileImagePath title="Profile Pic" display="No" right="Public" value="~/Images/96.jpg" /> </Fields> </BasicInfo> <PersonalInfo> <Title value="Personal Details" /> <Fields> <Nickname title="Nick Name" display="Yes" right="Public" value="Rahul" /> <NativeLocation title="Native" display="Yes" right="Public" value="Amreli" /> <DateofAnniversary title="Anniversary Dt." 
display="Yes" right="Public" value="12/29/2008" /> <BloodGroupId title="BloodGroupId" display="No" right="Public" value="25" /> <BloodGroupName title="Blood Group" display="Yes" right="Public" value="25" /> <MaritalStatusId title="MaritalStatusId" display="No" right="Public" value="34" /> <MaritalStatusName title="Marital status" display="Yes" right="Private" value="34" /> <DateofDeath title="Death dt" display="Yes" right="Private" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 6 2010 2:59PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="3/10/2010 5:34:14 PM" /> </Fields> </PersonalInfo> <FamilyInfo> <Title value="Family Details" /> <Fields> <GallantryHistory title="Gallantry History" display="Yes" right="Public" value="Gallantry history rahul" /> <Ethinicity title="Ethinicity" display="Yes" right="Public" value="ethnicity rahul" /> <KulDev title="KulDev" display="Yes" right="Public" value="Krishna" /> <KulDevi title="KulDevi" display="Yes" right="Public" value="Khodiyar" /> <Caste title="Caste" display="Yes" right="Private" value="Brhamin" /> <SunSignId title="SunSignId" display="No" right="Public" value="20" /> <SunSignName title="SunSignName" display="Yes" right="Public" value="Scorpio" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Dec 29 2009 4:59PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="3/10/2010 6:29:56 PM" /> </Fields> </FamilyInfo> <HobbyInfo> <Title value="Hobbies/Interests" /> <Fields> <AbountMe title="Abount Me" display="Yes" right="Public" value="Naughty.... " /> <Hobbies title="Hobbies" display="Yes" right="Public" value="Dance, Music, decoration, Shopping" /> <Food title="Food" display="Yes" right="Public" value="Maxican salsa, Pizza, Khoya kaju " /> <Movies title="Movies" display="Yes" right="Public" value="day after tommorrow. wake up sid. avatar" /> <Music title="Music" display="Yes" right="Public" value="Chu kar mere man ko... wake up sid songs, slow music, apgk songs" /> <TVShows title="TV Shows" display="Yes" right="Public" value="business bazzigar, hanah montana" /> <Books title="Books" display="Yes" right="Public" value="mystry novels" /> <Sports title="Sports" display="Yes" right="Public" value="Badminton" /> <Will title="Will" display="Yes" right="Public" value="do photography, to have my own super home... and i can decorate it like anything..." /> <FavouriteQuotes title="Favourite Quotes" display="Yes" right="Public" value="Live like a king nothing lasts forever. 
not even your troubles smooth sea do not makes skillfull sailors" /> <CremationPrefernces title="Cremation Prefernces" display="Yes" right="Public" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Feb 24 2010 2:13PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Mar 2 2010 4:34PM " /> </Fields> </HobbyInfo> <PermenantAddr> <Title value="Permenant Address" /> <Fields> <Address title="Address" display="Yes" right="Public" value="Test Entry" /> <CityId title="CityId" display="No" right="Public" value="93" /> <CityName title="City" display="Yes" right="Public" value="Chennai" /> <StateId title="StateId" display="No" right="Public" value="89" /> <StateName title="State" display="Yes" right="Public" value="Tamil Nadu" /> <CountryId title="CountryId" display="No" right="Public" value="108" /> <CountryName title="Country" display="Yes" right="Public" value="India" /> <ZipCode title="ZipCode" display="Yes" right="Private" value="360019" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 6 2010 1:29PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="0" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value=" " /> </Fields> </PermenantAddr> <PresentAddr> <Title value="Present Address" /> <Fields> <Address title="Ethinicity" display="Yes" right="Public" value="" /> <CityId title="CityId" display="No" right="Public" value="" /> <CityName title="City" display="Yes" right="Public" value="" /> <StateId title="StateId" display="No" right="Public" value="" /> <StateName title="State" display="Yes" right="Public" value="" /> <CountryId title="CountryId" display="No" right="Public" value="" /> <CountryName title="Country" display="Yes" right="Public" value="" /> <ZipCode title="ZipCode" display="Yes" right="Public" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="" /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="" /> </Fields> </PresentAddr> <ContactInfo> <Title value="Contact Details" /> <Fields> <DayPhoneNo title="Day Phone" display="Yes" right="Public" value="" /> <NightPhoneNo title="Night Phone" display="Yes" right="Public" value="" /> <MobileNo title="Mobile No" display="Yes" right="Private" value="" /> <FaxNo title="Fax No" display="Yes" right="CUG" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 5 2010 12:37PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Feb 17 2010 1:37PM " /> </Fields> </ContactInfo> <EmailInfo> <Title value="Alternate Email Addresses" /> <Fields /> </EmailInfo> <AcademicInfo> <Title value="Education Details" /> <Fields> <Record right="Public"> <Education title="Education" display="Yes" value="Full Attendance" /> <Institute title="Institute" display="Yes" value="Attendance" /> <PassingYear title="Passing Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" 
display="No" value="Dec 31 2009 12:41PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 12:41PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="D.C.E." /> <Institute title="Institute" display="Yes" value="G.P.G" /> <PassingYear title="Passing Year" display="Yes" value="2005" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 12:45PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 12:45PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="MCSE" /> <Institute title="Institute" display="Yes" value="MCSE" /> <PassingYear title="Passing Year" display="Yes" value="2009" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:12PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Mar 2 2010 4:33PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="H.S.C." /> <Institute title="Institute" display="Yes" value="G.H.S.E.B." /> <PassingYear title="Passing Year" display="Yes" value="2002" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:17PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 6:17PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="S.S.C." /> <Institute title="Institute" display="Yes" value="G.S.E.B." 
/> <PassingYear title="Passing Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:17PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 6:17PM " /> </Record> </Fields> </AcademicInfo> <AchievementInfo> <Title value="Achievement Details" /> <Fields> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry" /> <Tournament title="Tournament" display="Yes" value="Test Entry" /> <AwardDescription title="Description" display="Yes" value="Test Entry" /> <AwardYear title="Award Year" display="Yes" value="2002" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 3:51PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 3:51PM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry" /> <Tournament title="Tournament" display="Yes" value="Test Entry" /> <AwardDescription title="Description" display="Yes" value="Test Entry" /> <AwardYear title="Award Year" display="Yes" value="2005" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 8 2010 10:19AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 8 2010 10:19AM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry3" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry3" /> <Tournament title="Tournament" display="Yes" value="Test Entry3" /> <AwardDescription title="Description" display="Yes" value="Test Entry3" /> <AwardYear title="Award Year" display="Yes" value="2007" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 11:47AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 11:47AM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry4" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry4" /> <Tournament title="Tournament" display="Yes" value="Test Entry4" /> <AwardDescription title="Description" display="Yes" value="Test Entry3" /> <AwardYear title="Award Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 11:47AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 11:47AM " /> </Record> </Fields> </AchievementInfo> <ProfessionalInfo> <Title value="Professional Details" /> <Fields> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" 
value="Test Entry" /> <Duration title="Duration" display="Yes" value="26" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" value="Test Entry" /> <Duration title="Duration" display="Yes" value="10" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" value="Test Entry" /> <Duration title="Duration" display="Yes" value="15" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> </Fields> </ProfessionalInfo> </UserProfile> now for tags like PersonalInfo,contactInfo,Address there will be only one record, but for tags like "OtherInfo","academicInfo","ProfessionalInfo" there will be multiple records so in xml there are tags for that. 
    Now, to edit the sections that hold only one record per user, I wrote code like this:

    private void FillFamilyInfoControls(XElement rootElement)
    {
        ddlSunsign.DataBind();
        string xPathQuery = "FamilyInfo/Fields";
        XElement parentElement = rootElement.XPathSelectElement(xPathQuery);

        txtGallantryHistory.Text = parentElement.Element("GallantryHistory").Attribute("value").Value;
        txtEthinicity.Text = parentElement.Element("Ethinicity").Attribute("value").Value;
        txtKulDev.Text = parentElement.Element("KulDev").Attribute("value").Value;
        txtKulDevi.Text = parentElement.Element("KulDevi").Attribute("value").Value;
        txtCaste.Text = parentElement.Element("Caste").Attribute("value").Value;
        ddlSunsign.SelectedValue = parentElement.Element("SunSignId").Attribute("value").Value;
        ddlSunsign.SelectedItem.Text = parentElement.Element("SunSignName").Attribute("value").Value;
    }

    private XElement UpdateFamilyInfoXML(XElement rootElement)
    {
        string xPathQuery = "FamilyInfo/Fields";
        XElement parentElement = rootElement.XPathSelectElement(xPathQuery);

        parentElement.Element("GallantryHistory").Attribute("value").Value = txtGallantryHistory.Text;
        parentElement.Element("Ethinicity").Attribute("value").Value = txtEthinicity.Text;
        parentElement.Element("KulDev").Attribute("value").Value = txtKulDev.Text;
        parentElement.Element("KulDevi").Attribute("value").Value = txtKulDevi.Text;
        parentElement.Element("Caste").Attribute("value").Value = txtCaste.Text;
        parentElement.Element("SunSignId").Attribute("value").Value = ddlSunsign.SelectedItem.Value;
        parentElement.Element("SunSignName").Attribute("value").Value = ddlSunsign.SelectedItem.Text;
        parentElement.Element("ModifiedBy").Attribute("value").Value = UMSession.CurrentLoggedInUser.UserId.ToString();
        parentElement.Element("ModifiedOn").Attribute("value").Value = System.DateTime.Now.ToString();
        return rootElement;
    }

    These two functions update the XML and fill the controls from it. But for the sections with multiple records I cannot find any solution or control that handles them easily, because my field values are stored in an attribute named "value", whereas in the examples I have seen the value sits between the opening and closing tags. I can think of two approaches: 1. Build a page that edits a single record and pass the record's id to it; open that page in an iframe on the same page, and in Page_Load fetch the data for the passed id from the database and fill it into the controls. 2. Store the XML on the server as a physical file and use it in the iframe page to edit the record. So I am confused - can anybody please guide me on what I should do to give the user an interface for editing this XML data / user profile?
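    Since each repeating section is just a list of <Record> elements under a Fields element, LINQ to XML (available in .NET 3.5) can insert, update and delete those records directly, and attribute-based values are no harder to reach than element text. A minimal sketch, assuming the profile is stored in a physical file named UserProfile.xml (a hypothetical path) and using the AcademicInfo section and attribute names from the XML above; the inserted record is abbreviated for illustration:

    using System;
    using System.Linq;
    using System.Xml.Linq;
    using System.Xml.XPath;

    class ProfileRecords
    {
        static void Main()
        {
            // Hypothetical physical file holding the profile XML shown above.
            XElement profile = XElement.Load("UserProfile.xml");
            XElement fields = profile.XPathSelectElement("AcademicInfo/Fields");

            // INSERT: append a new education record (abbreviated; real records carry more fields).
            fields.Add(new XElement("Record",
                new XAttribute("right", "Public"),
                new XElement("Education",
                    new XAttribute("title", "Education"),
                    new XAttribute("display", "Yes"),
                    new XAttribute("value", "B.E.")),
                new XElement("PassingYear",
                    new XAttribute("title", "Passing Year"),
                    new XAttribute("display", "Yes"),
                    new XAttribute("value", "2010"))));

            // UPDATE: find the record whose Education value is "MCSE" and change its year.
            XElement mcse = fields.Elements("Record")
                .FirstOrDefault(r => (string)r.Element("Education").Attribute("value") == "MCSE");
            if (mcse != null)
                mcse.Element("PassingYear").Attribute("value").Value = "2008";

            // DELETE: remove records flagged inactive.
            fields.Elements("Record")
                .Where(r => (string)r.Element("IsActive").Attribute("value") == "false")
                .Remove();

            profile.Save("UserProfile.xml");
        }
    }

    The same pattern works for AchievementInfo and ProfessionalInfo; on the UI side, binding fields.Elements("Record") to a Repeater or GridView with per-row edit buttons could stand in for the iframe approach.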


< Previous Page | 37 38 39 40 41 42  | Next Page >