Search Results

Search found 1314 results on 53 pages for 'vital al kapus'.


  • Error Handling Examples (C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)

    No Error Handling
    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is never handled within the application, the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

    Figure 1: No Error Handling
    public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); dt.ReadXml(BuildRequest(searchTerm, resultCount)); return dt; }

    Generic Error Handling
    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement. “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects execution to the catch section. The catch section receives an object containing information about the error that occurred so that the system can handle the error gracefully. When errors occur, applications commonly log them in some form. This could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, some applications can recover, while other errors force an application to close.

    Figure 2: Generic Error Handling
    public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); try { dt.ReadXml(BuildRequest(searchTerm, resultCount)); } catch (Exception ex) { // Handle all Exceptions } return dt; }

    Error Specific Error Handling
    Like generic error handling, error-specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error that was generated by the SOAP web service. Suppose the system needed to send a message to the web service provider every time a SOAP error occurred, but did not want to notify the provider of any other type of error, such as a network timeout. This would be very tedious to accomplish using the generic error handling methodology, and it is the use case for error-specific error handling. The error-specific error handling methodology allows the try-catch statement to catch various types of errors and handle each one differently depending on the type of error that occurred.
    In Figure 3, the code handles a TimeoutException differently from how it handles all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement.

    Figure 3: Error Specific Error Handling
    public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); try { dt.ReadXml(BuildRequest(searchTerm, resultCount)); } catch (TimeoutException ex) { // Handle TimeoutException only } catch (Exception) { // Handle all other Exceptions } return dt; }
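    The SOAP scenario described above can be sketched along the same lines. The following C# sketch is illustrative only: the ForecastProxy class and NotifyProvider method are hypothetical stand-ins for whatever proxy and notification channel a real application would use, while SoapException comes from the standard System.Web.Services.Protocols namespace used by ASMX clients.

    ```csharp
    using System;
    using System.Web.Services.Protocols; // SoapException (ASMX web service clients)

    // Hypothetical stand-in for a generated web service proxy.
    public class ForecastProxy
    {
        public virtual string GetForecast(string city) { return "sunny"; }
    }

    public class ForecastClient
    {
        private readonly ForecastProxy proxy = new ForecastProxy();

        public string GetForecast(string city)
        {
            try
            {
                return proxy.GetForecast(city);
            }
            catch (SoapException soapEx)
            {
                // Only SOAP faults raised by the service reach this block,
                // so the provider is notified for service-generated errors only.
                NotifyProvider(soapEx);
                throw;
            }
            catch (TimeoutException)
            {
                // A network timeout is handled locally and never reported to the provider.
                return string.Empty;
            }
        }

        // Placeholder for the notification channel (email, log entry, web service call, etc.).
        private void NotifyProvider(SoapException ex) { }
    }
    ```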

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching and promoting your website, especially if ranking matters to you.

    Understanding SEO
    For starters, you begin with keyword research and then optimize your content according to your findings through tagging, meta tags and so on; Drupal modules, once installed, help you manage many of these parameters:
    Identifying the target keywords
    Using the Page Title and Token modules
    Pathauto configuration
    <H1> heading tags
    Optimizing Drupal’s default robots.txt file
    Etc.
    While Drupal gives you a lot of ability to make your website content worthy and search engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.

    Modules Overview
    Drupal's power is at its best when you have these modules and a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically and wisely. Understanding your keyword competition and enhancing your content is the basic key to success, and of course so are the modules:
    Pathauto: Automatically creates search engine friendly, readable URLs from tokens. A token is a piece of data from content, say the author’s username or the content’s title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
    NodeWords: Amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
    Page Title: Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
    Global Redirect: Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
    Taxonomy manager: Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
    robotstxt: A robots.txt file is vital for ensuring that search engine spiders don’t index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
    xmlsitemap: An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want to be included in the index. It even gives you the ability to automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node or at a specific interval.
    Node Import: This module allows you to import a set of nodes from a Comma-Separated Values (CSV) or Tab-Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, lets you tie these rows to CCK fields (or locations), and can file them under the right taxonomy hierarchy. This is a super life-saver module.

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    After finding these steps online at http://dattard.blogspot.com/2008/11/active-directory-forms-based.html in order to set up Active Directory Forms Based Authentication, I was all set to complete this task, except for one problem. These steps are missing one vital step for FBA to work with Active Directory: a supplement to step 3, before granting access in step 5 through the People Picker. You need to specify the Active Directory provider name to the People Picker, otherwise you will not be able to specify users through the Policy for Web Application.
    <PeoplePickerWildcards>
      <clear />
      <add key="ADMembershipProvider" value="%" />
    </PeoplePickerWildcards>
    Recently we needed to use Forms Based Authentication with Active Directory from an extranet. This is how we got it to work.
    1. Extend the Web Application
    Instead of tweaking the internal web app, extend the web application you want to expose to the extranet, giving it the required host headers etc.
    2. Configure SharePoint Central Admin to use FBA for the "new" Web Application
    Log in to SharePoint Central Admin, go to Application Management / Application Security / Authentication Providers, and change the Web Application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to Forms, enter ActiveDirectoryMembershipProvider under membership provider name (for example, "ADMembershipProvider") and save this change.
    3. Update the web.config of the SharePoint Central Admin site
    Under the configuration node:
    <connectionStrings>
      <add name="ADConnectionString" connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
    </connectionStrings>
    Under the system.web node:
    <membership defaultProvider="ADMembershipProvider">
      <providers>
        <add name="ADMembershipProvider" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ADConnectionString" connectionUsername="xxx" connectionPassword="yyy" enableSearchMethods="true" attributeMapUsername="sAMAccountName"/>
      </providers>
    </membership>
    4. Update the web.config of the SharePoint Web Application
    Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication, and change the authentication in web.config to:
    <authentication mode="Forms">
      <forms loginUrl="/_layouts/login.aspx"></forms>
    </authentication>
    5. Grant Access on the Extended Web Application
    Your extranet web application is now configured to use FBA. However, until users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm’s Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application. Do the following steps: Click on Add Users. In the Zones drop down, select the appropriate extranet zone. IMPORTANT: If you select the incorrect zone, you may not be able to resolve user names. Hence, the zone you select must match the zone of the web application that is configured to use FBA. Click the Next button. In the Users edit box, type the name of the FBA user whom you wish to have full control for the site. Click the Resolve link next to the Users edit box.
If the web application's FBA information has been configured correctly, the name will resolve and become underlined. Check the Full Control checkbox. Click the Finish button.

    Read the article

  • Tips to Make Your Website Cell Phone Friendly

    - by Aditi
    Working on a new website design, or redesigning your website? There is a lot more to consider nowadays than just user experience, clean code, CSS, etc. One important attribute you must not miss is making the site mobile friendly! With the growing use of handhelds and unlimited data plans, people browse on their cellphones, and the devices all come in different sizes. It is tough to make a website that looks great not just on a high-resolution widescreen monitor or LCD but is equally impressive on the low resolutions of cellphones. Today we are going to discuss the factors that can help you make a website cellphone friendly.

    Fluid Width Layouts
    Most people speak of fluid width layouts as a vital step in making your website mobile friendly. A fluid width allows the width of your website to stretch or shrink depending on the browser size. However, while having a layout which flows with the width of the screen’s resolution is certainly convenient, more often than not the website was originally laid out with a desktop in mind. Compressing a fluid layout to 320 pixels can do some serious damage to the layout, so some people strongly believe it is far better to have a mobile style sheet, lay out the content specifically for that screen, and have more control over the display. The best thing to do is to detect the type of platform that is connecting to your website and disable or change some tools and effects to make the site look better, if not perfect.

    Keep Your Web Pages Short
    Avoid long pages on your website; a lot of scrolling makes a page very unfriendly for people, and on mobile devices long pages are a huge drawback because of the longer time it takes to download the page. Everyone likes crisp and concise content, and such pages are easier to load and browse. This makes your website accessible across all platforms. Also try to keep URLs short: if visitors have to type them, save them from that much work, especially since typing on a cellphone with no QWERTY keyboard can be tough.

    Usable Navigation & Search
    Unlike on desktops, your website’s navigation won’t work as well on a cellphone. Keep the user experience of cellphone users in mind as you design your navigation, and try to keep your content centered, as mobile users can have difficulty reading the page. I always look up to Google and its mobile pages as a great example. Keeping a functional and very visible search bar helps mobile users navigate by searching.

    Understanding Clean Website Code: Evolved for Mobile
    Clean code is important when you consider the diversity out there in handheld devices. Some cell phones may only understand WAP. More capable phones may understand WAP2, which allows rendering websites with XHTML and CSS. Most mobiles won’t display tables, floats, frames, JavaScript, and dynamic menus, and most cellphones will not support cookies. Devices at the high end of the mobile market such as the BlackBerry, Palm, or the upcoming iPhone are highly capable and support nearly as much as a standard computer, but the masses still do not have such phones. You can use specific emulators to test your website on mobile devices. Make sure your color combinations provide good contrast between foreground and background colors, particularly for devices with fewer color options.
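    As a rough illustration of the "detect the platform and adapt" advice above, here is a minimal ASP.NET sketch in C#. It is an assumption-laden example rather than a recipe from the article: the page class and stylesheet paths are hypothetical, and it relies on ASP.NET's built-in Request.Browser.IsMobileDevice capability flag, which is only as good as the browser definition files it ships with.

    ```csharp
    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    // Hypothetical code-behind for a page whose <head> element has runat="server".
    public partial class HomePage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // ASP.NET flags most handheld devices via its browser capability files.
            bool isMobile = Request.Browser.IsMobileDevice;

            // Serve a leaner, narrower layout to handhelds and the full layout elsewhere.
            string stylesheet = isMobile ? "~/styles/mobile.css" : "~/styles/desktop.css";

            var link = new HtmlLink { Href = stylesheet };
            link.Attributes["rel"] = "stylesheet";
            link.Attributes["type"] = "text/css";
            Header.Controls.Add(link);
        }
    }
    ```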

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Book On-Line: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: Tasks are waiting for I/O operations to finish. If your application that is connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When it is an issue related to IO, one should check the following things associated with the IO subsystem:
    Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). Such code should be rewritten to avoid this wait type.
    Proper placement of the files is very important. We should check the file system for proper placement of the files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    Check the File Statistics and see if there is a higher IO Read and IO Write Stall (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very likely that there are no proper indexes on the system and that there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (assuming the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better; greater than 90% for a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
    Memory: Available MBytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher value than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
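    If you prefer to sample a few of the counters listed above from code rather than from the Performance Monitor UI, a minimal C# sketch might look like the following. The category names assume a default SQL Server instance (named instances expose them under an MSSQL$InstanceName: prefix), and the one-second sampling loop is purely illustrative.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading;

    class MemoryCounterSample
    {
        static void Main()
        {
            // Counter names as listed in the post; a default SQL Server instance is assumed.
            var grantsPending      = new PerformanceCounter("SQLServer:Memory Manager", "Memory Grants Pending");
            var pageLifeExpectancy = new PerformanceCounter("SQLServer:Buffer Manager", "Page life expectancy");
            var availableMBytes    = new PerformanceCounter("Memory", "Available MBytes");

            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("Grants Pending: {0}, Page Life Expectancy: {1} s, Available Memory: {2} MB",
                    grantsPending.NextValue(),
                    pageLifeExpectancy.NextValue(),
                    availableMBytes.NextValue());
                Thread.Sleep(1000); // sample once per second
            }
        }
    }
    ```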
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.

    From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: Tasks are waiting for I/O operations to finish. This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION wait: When it is an issue concerning IO, one should look at the following things related to the IO subsystem:
    Proper placement of the files is very important. We should check the file system for proper placement of files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    Check the File Statistics and see if there is a higher IO Read and IO Write Stall (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very possible that there are no proper indexes in the system and that there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (assuming the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better; greater than 90% for a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
    Memory: Available MBytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher value than the benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
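    For the disk-related counters in the list above, the same kind of programmatic sampling is possible. This C# sketch is illustrative only; it reads the _Total instance of the PhysicalDisk counters, so per-volume figures would require the individual instance names (for example "0 C:").

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading;

    class DiskLatencySample
    {
        static void Main()
        {
            // "_Total" aggregates all physical disks.
            var secPerRead  = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Read", "_Total");
            var secPerWrite = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Write", "_Total");

            for (int i = 0; i < 5; i++)
            {
                // Values are in seconds; anything consistently above 0.004-0.008 (4-8 ms) deserves a closer look.
                Console.WriteLine("Avg. Disk sec/Read: {0:F4}, Avg. Disk sec/Write: {1:F4}",
                    secPerRead.NextValue(), secPerWrite.NextValue());
                Thread.Sleep(1000); // sample once per second
            }
        }
    }
    ```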
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • Vitality of Product Information Management Showcased at OpenWorld 2012

    - by Mala Narasimharajan
    By Sachin Patel
    Can you hear the countdown clock ticking? OpenWorld 2012 is almost here, and as I write this Oracle is buzzing with fresh new ideas and solutions that will be showcased this year. What an exciting time for all of us to be in the midst of a digital revolution. Whether it is Apple fans clamoring to find every new feature that has been added to the iPhone 5 or a startup launching a new digital thermostat (has anyone looked at the new one from Nest?), product information is vital for companies to grow and compete in this cut-throat market. Customers today struggle to aggregate and enrich this product data from the myriad of systems they have in place to run their businesses and operations. Having a product information strategy is paramount to aligning your sales channels and operations with the most accurate and up-to-date product data. We have a number of sessions this year at OpenWorld where you can gain more insight into how Oracle’s next generation of Fusion Applications, in this case Fusion Product Hub, can provide you with a solution to streamline and get control of your Product Master Data.

    Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub
    Tuesday, October 2nd, 11:45 am, Moscone West 2022
    Join me, Sachin Patel, Director of Product Strategy, and Milan Bhatia, VP of Development, as we discuss how you can enable trusted product master data in your enterprise. In this session we plan to cover the challenges companies face today in mastering product data. The discussion will also include how Fusion Product Hub brings new and innovative features to empower your product data owners to create a holistic and rich product definition that can be leveraged across your enterprise. We will also be joined by Pawel Fidelus from Fideltronik, an Early Adopter of Fusion Product Hub, who will showcase their plans to implement Fusion Product Hub and the value it will bring to Fideltronik.

    Multichannel Fulfillment Excellence in Direct-to-Consumer Market
    Thursday, October 4th, 12:45 am, Moscone West 2024
    Do you have multiple order capture systems? Do you have difficulty in fulfilling orders for your customers across various channels and suppliers? Mark Carson, Director, Fusion DOO, and Brad Kerr, Director, AGSS, will be showcasing the Fusion Distributed Order Orchestration solution and how companies can orchestrate orders from multiple order capture systems and route them to the appropriate fulfillment system. Sachin Patel, Director of Product Strategy for Product MDM, will highlight the business pain points in consolidating and commercializing data from a multichannel commerce point of view and how Fusion Product Hub helps by allowing you to provide a single source of truth to drive a singular and rich customer experience.

    Oracle Fusion Supply Chain Management: Customer Adoption and Experiences
    Wednesday, October 3rd, 10:15 am, Moscone West 2003
    This is a great session to attend to learn about how Fusion Supply Chain Management and Fusion Product Hub Early Adopters, including Boeing and Fideltronik, are leveraging Fusion Applications to improve their Supply Chain operations. Have a great OpenWorld and see you soon!!

    Read the article

  • Meet our Interns: Adam and Hanadi

    - by Maria Sandu
    This week, we’d like to introduce you to two of our ECEMEA Interns, Adam and Hanadi. They’re based in different countries and are part of different teams; however, they both have the same enthusiasm about being an Intern at Oracle. “Hi! I’m Adam (Bachelor of Accounting Science & CIMA Diploma in Management Accounting), a member of the Oracle Applications Pre-sales team in Johannesburg, South Africa. Joining Oracle has been a truly inspiring experience thus far. My first week at Oracle has been one of insight and learning. I have had the opportunity to meet and interact with industry-leading software solution professionals. Gaining insight into a mammoth multinational company has changed my perception of how things work and has truly opened my eyes to the world of business. Having the privilege of joining the Oracle Graduate Program has afforded me the chance to take advantage of countless training opportunities as well as the chance to learn about Information Technology in a practical manner, which is vital to most businesses in today’s modern environment.” “Hi! I’m Hanadi, an Oracle 2013 Sales Intern from Saudi Arabia. I received my BSc in Information Technology from King Saud University and immediately after graduating I applied for the internship at Oracle. I thought it was an incredible opportunity and a great way to shift from college life to career life through learning and practicing in an environment with such high standards. At the beginning, I was a bit nervous about joining the serious business world, but once I joined, I found the program very organized and everyone was extremely helpful, which made it easier for us, as interns, to learn faster. If you are a self-motivated, committed person, who has initiative, accepts challenges, has good soft skills and some technical experience, I would definitely advise you to take a chance and apply for the program once you graduate. Best of luck!” Get the latest updates from the ECEMEA Sales and Presales Internship Programme 2013 by following #Oracleinterns on Twitter or visiting the CampusatOracle Facebook Page!

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities: your data. Today’s businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, and real-time analytics? Join us in this three-part blog series where we’ll look at each component in more detail. Meet us online on October 24th, where we’ll take your questions about what issues you are facing in this brave new world of integration. Let’s start first with Cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role in migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step to take to be able to receive the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems to allow you to perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for those companies moving to a private cloud. There is also the need to manage data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations. But what about the public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service, a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Message Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com, now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle’s Data Integration products for the Cloud, join us live: Data Integration Breakthroughs Webcast on October 24th, 10 AM PST.

    Read the article

  • ORA-4031 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error; Note 1087773.1: ORA-4031 Diagnostics Tools [Video]

    Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message will indicate the memory pool getting errors and high-level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and associated trace are for a "victim" of the problem. The failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

    Looking for the best way to diagnose? When an ORA-4031 error occurs, a trace file is raised and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without reports in the alert log or traces generated. The V$SHARED_POOL_RESERVED view will show reports of misses for memory over the life of the database. Diagnostic scripts are available in Note 430473.1 to help in analysis of the problem. There is also a training video on using and interpreting the script data in Note 1087773.1.

    11g Diagnosability
    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. Note 443529.1: 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    Oracle Configuration Manager (OCM)
    Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. Oracle Configuration Manager Quick Start Guide; Note 548815.1: My Oracle Support Configuration Management FAQ; Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

    Common Causes/Solutions
    The ORA-4031 can occur for many different reasons. Some possible causes are:
    SGA components too small for the workload
    Auto-tuning issues
    Fragmentation due to application design
    Bugs/leaks in memory allocations
    For more on the ORA-4031 and how this affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error. Because of the multiple potential causes, it is important to gather enough diagnostics so that an appropriate solution can be identified. However, most commonly the cause is associated with configuration tuning. Ensuring that MEMORY_TARGET or SGA_TARGET is large enough to accommodate the workload can get around many scenarios. The default trace associated with the error provides very high-level information about the memory problem and the "victim" that ran into the issue. The data in the default trace is not going to point to the root cause of the problem. When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area. See Note 270935.1 Shared pool sizing in 10g.
    NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible.
    If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again. Note 801787.1: Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation
    For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 4-10, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the week of November 4-10, 2012.
    OAM/OVD JVM Tuning | @FusionSecExpert - Vinay from the Oracle Fusion Middleware Architecture Group (the very prolific A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.
    Exploring Lambda Expressions for the Java Language and the JVM | Java Magazine - In the latest //Java/Architect column in Java Magazine, Ben Evans, Martijn Verburg, and Trisha Gee explain how, "although Lambda expressions might seem unfamiliar to begin with, they're quite easy to pick up, and mastering them will be vital for writing applications that can take full advantage of modern multicore CPUs."
    SOA Galore: New Books for Technical Eyes Only - Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
    Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio - TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.
    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger - "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger
    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick - The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.
    Architects Matter: Making sense of the people who make sense of enterprise IT - Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise." (Read the article...)
    Converting SSL certificate generated by a 3rd party to an Oracle Wallet | Paulo Albuquerque - Oracle Fusion Middleware A-Team member Paulo Albuquerque shares "a workaround to get your private key, certificate and CA trusted certificates chain into Oracle Wallet."
    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers - "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.
Updated Business Activity Monitoring (BAM) Class | Gary Barg Oracle SOA Team blogger Gary Barg has news for those interested in a skills upgrade. This updated Oracle University course "explains how to use Oracle BAM to monitor enterprise business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time." Thought for the Day "For every complex problem, there is a solution that is simple, neat, and wrong." — H. L. Mencken (September 12, 1880 – January 29, 1956) Source: SoftwareQuotes.com

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer
    There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also, we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams.
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • "Why We Chose Fusion CRM" by Vikas Bhambri, Managing Partner, The Athene Group

    - by Natalia Rachelson
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group
    This year The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, with the current global landscape from an economic and competitive standpoint, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to successfully accomplish this – collaboration and the concept of “insight to action”. With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during the projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements – ensuring mutual success for both Athene and our clients. When we looked at “insight to action” we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • VirtualBox 3.2 is released! A Red Letter Day?

    - by Fat Bloke
    Big news today! A new release of VirtualBox packed full of innovation and improvements. Over the next few weeks we'll take a closer look at some of these new features in a lot more depth, but today we'll whet your appetite with the headline descriptions. To start with, we should point out that this is the first Oracle-branded version, which makes today a real red-letter day ;-)

    Oracle VM VirtualBox 3.2
    Version 3.2 moves VirtualBox forward in 3 main areas (handily, all beginning with "P"): performance, power and supported guest operating system platforms. Let's take a look:

    Performance
    New: Latest Intel hardware support - Harnessing the latest in chip-level support for virtualization, VirtualBox 3.2 supports the new Intel Core i5 and i7 processors and the Intel Xeon processor 5600 Series, with support for Unrestricted Guest Execution bringing faster boot times for everything from Windows to Solaris guests;
    New: Large Page support - Reducing the size and overhead of key system resources, Large Page support delivers increased performance by enabling faster lookups and shorter table creation times.
    New: In-hypervisor Networking - Significant optimization of the networking subsystem has reduced context switching between guests and host, increasing network throughput by up to 25%.
    New: New Storage I/O subsystem - VirtualBox 3.2 offers a completely re-worked virtual disk subsystem which utilizes asynchronous I/O to achieve high performance whilst maintaining high data integrity;
    New: Remote Video Acceleration - The unique built-in VirtualBox Remote Display Protocol (VRDP), which is primarily used in virtual desktop infrastructure deployments, has been enhanced to deliver video acceleration. This delivers a rich user experience coupled with reduced computational expense, which is vital when servers are running hundreds of virtual machines;

    Power
    New: Page Fusion - Traditional Page Sharing techniques have suffered from long and expensive cache construction as pages are scrutinized as candidates for de-duplication. Taking a smarter approach, VirtualBox Page Fusion uses intelligence in the guest virtual machine to determine much more rapidly and accurately those pages which can be eliminated, thereby increasing the capacity or VM density of the system;
    New: Memory Ballooning - Ballooning provides another method to increase VM density by allowing the memory of one guest to be recouped and made available to others;
    New: Multiple Virtual Monitors - VirtualBox 3.2 now supports multi-headed virtual machines with up to 8 virtual monitors attached to a guest. Each virtual monitor can be a host window, or be mapped to the host's physical monitors;
    New: Hot-plug CPUs - Modern operating systems such as Windows Server 2008 x64 Data Center Edition or the latest Linux server platforms allow CPUs to be dynamically inserted into a system to provide incremental computing power while the system is running. Version 3.2 introduces support for hot-plug vCPUs, allowing VirtualBox virtual machines to be given more power, with zero downtime of the guest;
    New: Virtual SAS Controller - VirtualBox 3.2 now offers a virtual SAS controller, enabling it to run the most demanding of high-end guests;
    New: Online Snapshot Merging - Snapshots are powerful but can eat up disk space and need to be pruned from time to time. Historically, machines have needed to be turned off to delete or merge snapshots, but with VirtualBox 3.2 this operation can be done whilst the machines are running.
    This allows sophisticated system management with minimal interruption of operations;
    New: OVF Enhancements - VirtualBox has supported the OVF standard for virtual machine portability for some time. Now with 3.2, VirtualBox-specific configuration data is also stored in the standard, allowing richer virtual machine definitions without compromising portability;
    New: Guest Automation - The Guest Automation APIs allow host-based logic to drive operations in the guest;

    Platforms
    New: USB Keyboard and Mouse - Support for more guests that require USB input devices;
    New: Oracle Enterprise Linux 5.5 - Support for the latest version of Oracle's flagship Linux platform;
    New: Ubuntu 10.04 ("Lucid Lynx") - Support for both the desktop and server versions of the popular Ubuntu Linux distribution;
    And as a man once said, "just one more thing" ...
    New: Mac OS X (experimental) - On Apple hardware only, support for creating virtual machines that run Mac OS X.

    All in all, this is a pretty powerful release packed full of innovation and speedups. So what are you waiting for? -FB

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data, using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well-documented, and it seems likely that many will also give BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, one is left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard FedUtil stuff. My good friend Brock and I put together a two-day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take a standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT and OAuth. The lab also includes the latest version of the thinktecture identityserver and you will learn how to use and customize it. If you are looking for an open enrollment style of training, have a look here. Or contact me directly! The course outline looks as follows:

    Day 1
    Intro to Claims-based Identity & the Windows Identity Foundation
    WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code.
    Externalizing Authentication for Web Applications
    WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called the identity provider, or in more general terms, a security token service. This module looks at this scenario both from an application and an identity provider point of view and walks you through the necessary concepts to centralize application login logic, both using a standard product like Active Directory Federation Services as well as a custom token service using WIF’s API support.
    Externalizing Authentication for SOAP Services
    One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a separate service using the WS-Trust protocol and how WIF can simplify the WCF security model and extensibility API.

    Day 2
    Advanced Topics: Security Token Service Architecture, Delegation and Federation
    The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition you will learn about the principal architecture of an STS, how to customize the one that comes with this training course, as well as how to build your own.
    Outsourcing Authentication: Windows Azure & the Azure AppFabric Access Control Service
    Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows you to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability.
Another advantage of the Access Control Service is, that it allows easy integration of both the “enterprise” protocols like WS-* as well as “web identities” like LiveID, Google or Facebook into your applications. ACS acts as a protocol bridge in this case where the application developer doesn’t need to implement all these protocols, but simply uses a service to make it happen. Claims & Federation for the Web and Mobile World Also the web & mobile world moves to a token and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small footprint devices. Also patterns like how to allow third party applications to work with your data without having to disclose your credentials are important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you can focus on implementing the actual application. HTH
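    To give a flavour of the programming model covered in the first module, here is a minimal sketch of claims transformation using WIF's ClaimsAuthenticationManager. The claim type and value are purely illustrative, and the class still needs to be registered in WIF's configuration (the claimsAuthenticationManager element).

    using Microsoft.IdentityModel.Claims;

    // Minimal sketch: turn raw token claims into application-specific claims.
    // The claim type and value below are illustrative only.
    public class SampleClaimsTransformer : ClaimsAuthenticationManager
    {
        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            var identity = incomingPrincipal.Identity as IClaimsIdentity;

            if (identity != null && identity.IsAuthenticated)
            {
                identity.Claims.Add(
                    new Claim("http://myapp/claims/purchaselimit", "1000"));
            }

            return incomingPrincipal;
        }
    }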

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine on when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession.  The premise of the issue was that the poster didn’t feel an exception should be raised, regardless of authentication status.  At first, this sounded like a valid point.  However, I went back to review my code and decided not to make any changes. Here's my rationale: 1. The exception doesn’t occur if the user is authenticated when EndAccountSession is called. 2. The exception does occur if the user is not authenticated when EndAccountSession is called. 3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session.  If a session never existed, then it would not be possible to perform the requested action.  Therefore, an exception is appropriate. To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project: static void EndSession(ITwitterAuthorizer auth) { using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) { try { //Log twitterCtx.Log = Console.Out; var status = twitterCtx.EndAccountSession(); Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } catch (TwitterQueryException tqe) { var webEx = tqe.InnerException as WebException; if (webEx != null) { var webResp = webEx.Response as HttpWebResponse; if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized) Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n"); } var status = tqe.Response; Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } } } As expected, LINQ to Twitter wraps the original exception in a TwitterQueryException, exposing it as the InnerException.  The TwitterQueryException serves a very useful purpose through its Response property.  Notice in the example above that the response has Request and Error properties.  These properties correspond to the information that Twitter returns as part of its response payload.  This is often useful while debugging to help you understand why Twitter was unable to perform the requested action.  Other times, it's cryptic, but that's another story.  At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with. To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for.  I also felt that it is inappropriate for a general library to do anything with exceptions because that could potentially hide a problem from the caller.  A related point is that it should be the exclusive decision of the application that uses the library on what to do with an exception.  Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw.  This is a tough call because I don’t want to hide any stack trace information.
However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction of designing an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but given that the method was unable to perform its intended function, that this is a library, and that the extra information can be more helpful, this seemed to be the better design. @JoeMayo
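    For readers who want to apply the same wrap-and-re-throw pattern in their own libraries, here is a generic sketch; the exception type and stub methods are hypothetical and are not the actual LINQ to Twitter implementation. The key points are passing the original exception as InnerException so no stack trace is lost, and carrying the provider's response payload so callers can diagnose the failure.

    using System;
    using System.Net;

    // Hypothetical custom exception carrying the provider's response payload.
    public class ServiceQueryException : Exception
    {
        public string ResponseBody { get; private set; }

        public ServiceQueryException(string message, string responseBody, Exception inner)
            : base(message, inner)
        {
            ResponseBody = responseBody;
        }
    }

    public static class ServiceClient
    {
        public static string EndSession()
        {
            try
            {
                return CallService("account/end_session");
            }
            catch (WebException webEx)
            {
                // Wrap and re-throw: the original exception (and its stack trace)
                // is preserved as InnerException, and extra context is added.
                throw new ServiceQueryException(
                    "The service could not end the session.",
                    ReadResponseBody(webEx),
                    webEx);
            }
        }

        // Stubs standing in for a real HTTP call and response reader.
        static string CallService(string operation)
        {
            throw new WebException("The remote server returned an error: (401) Unauthorized.");
        }

        static string ReadResponseBody(WebException ex)
        {
            return ex.Message;
        }
    }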

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun—proving not just their administrative worth but their strategic value as well.  To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change—providing the swift reallocation of employees and the rapid absorption of employee data required for enterprises to shift course on a dime—it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because while much has been written about advances in HRMS technology, until recently, most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were being carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration—and no single system of record—processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped.  Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer—it would seem—is, “It’s complicated.” So complicated, in fact, that 45 percent of the respondents to PwC’s “Annual HR Technology Survey” (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they “did not know” whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR.  Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there’s an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify—and add value to—every aspect of employee management. The opportunity, therefore, is to cut costs, drive innovation, and increase engagement by moving to cloud-based HCM.  You will then benefit from one interface, leverage many access points, and gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems no longer efficient, and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud, per Cedar Crestone’s 2013-2014 HRMS survey. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle’s new customer 2 cloud program.
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • Viewport / Camera Calculation in 2D Game

    - by Dave
    We have a 2D game with some sprites and tiles and some kind of camera/viewport that "moves" around the scene. So far so good, if we didn't have some special behaviour for our camera/viewport translation. Normally you could stick the camera to your player figure and center it, resulting in a very cheap, undergraduate translation equation, like: vec_translation -/+= speed (depending on what keys are pressed. WASD as default.) But we want our player figure to be able to actually reach the bounds when the viewport/camera has reached a maximum translation. We came up with the following solution (only keys A and D are shown here, the rest is just an adaptation of the calculation, or maybe YOUR super-cool and elegant solution :) ): if(keys[A]) { playerX -= speed; if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } else if(playerScreenX <= width / 2 && (tx) >= 0) { playerScreenX -= speed; tx = 0; if(playerScreenX < 0) playerScreenX = 0; } else if(playerScreenX >= width / 2 && (tx) < 0) { playerScreenX -= speed; } } if(keys[D]) { playerX += speed; if(playerScreenX >= width / 2 && (-tx + width) < sceneWidth) { playerScreenX = width / 2; tx -= speed; } if(playerScreenX >= width / 2 && (-tx + width) >= sceneWidth) { playerScreenX += speed; tx = -(sceneWidth - width); if(playerScreenX >= width - player.width) playerScreenX = width - player.width; } if(playerScreenX <= width / 2 && (-tx + width) < sceneWidth) { playerScreenX += speed; } } I think the code is rather self-explanatory: keys is a flag container for currently active keys, playerX/-Y is the position of the player according to the world origin, tx/ty are the translation components vital to background / npc / item offset calculation, playerScreenX/-Y is the actual position of the player figure (sprite) on screen and width/height are the dimensions of the camera/viewport. This all looks quite nice and works well, but there is a very small and nasty calculation error, which in turn sums up to some visible effect. Let's consider the following piece of code: if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } It can be translated into plain English as: if the x position of your player figure on screen is less than or equal to half of your display / camera / viewport size AND there is enough space left LEFT of your viewport/camera, then set the player's x position on screen to half the width and increase the translation (because we subtract the translation from something we want to move). Easy, right?! Doing this will create a small delta between playerX and playerScreenX. After so much talking, my question appears now here at the bottom of this document: how do I stick the calculation of my player-on-screen position to the actual position of the player AND have a viewport that is not always centered around the player's figure? Here is a small test-case in Processing: http://pastebin.com/bFaTauaa Thank you for reading until now, and thank you in advance for probably answering my question.
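    Not a definitive answer, but the usual way to avoid that drift is to stop accumulating playerScreenX and tx separately: move the player in world coordinates only, derive the camera position by centring on the player and clamping it to the scene bounds, and then compute the on-screen position from those two values every frame. A rough C# sketch of the idea, reusing the variable names from the post:

    using System;

    static class CameraMath
    {
        // Derive the camera translation and on-screen position from the player's
        // world position each frame, instead of accumulating them with += speed.
        // Assumes sceneWidth >= width; variable names mirror the post.
        public static float UpdateCamera(float playerX, float width, float sceneWidth,
                                         out float tx)
        {
            // Ideal camera: centre the viewport on the player...
            float cameraX = playerX - width / 2f;

            // ...then clamp it so the viewport never leaves the scene.
            cameraX = Math.Max(0f, Math.Min(cameraX, sceneWidth - width));

            // tx is the world translation (the negative of the camera position).
            tx = -cameraX;

            // The on-screen position is derived, so it can never drift from playerX.
            return playerX - cameraX;
        }
    }

    Because playerScreenX is recomputed from playerX and the clamped camera every frame, the player naturally reaches the screen edges once the camera hits its maximum translation, and there is no accumulated delta to correct.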

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still, and the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • [ScriptMethod(ResponseFormat = ResponseFormat.Json)]

    - by gnomixa
    In an ASP.NET web service, if the above isn't specified, what is the response format by default? Also, my web service below: [WebMethod()] public List<Sample> GenerateSamples(string[][] data) { ResultsFactory f = new ResultsFactory(data); List<Sample> samples = f.GenerateSamples(); return samples; } returns a list of objects. If I change the response format to JSON, do I have to change the return type to string, and then how do I access the objects in my JavaScript? Currently I call this web service in my JS as follows: $.ajax({ type: "POST", url: "http://localhost/TemplateWebService/Service.asmx/GenerateSamples", data: jsonText, contentType: "application/json; charset=utf-8", dataType: "json", success: function(response) { var samples = (typeof response.d) == 'string' ? eval('(' + response.d + ')') : response.d; if (samples.length > 0) { doSomethingHere(samples); } else { alert("No samples have been generated"); } }, error: function(xhr, status, error) { var msg = JSON.parse(xhr.responseText); alert(msg.Message); } }); What I noticed, though, is that even though everything works perfectly fine, the eval statement never gets executed, which means that the web service is already handing my script a JSON object rather than a string! So my question is, is [ScriptMethod(ResponseFormat = ResponseFormat.Json)] necessary on the web service definition side? The way things are now, I can use the samples array and access each object and its properties as I normally would in any OOP code, which is very convenient, and everything works no problem, but I just wanted to make sure that I am not missing anything in my setup. I took the basics of combining jQuery's ajax with ASP.NET from the Encosia site, and the response type wasn't mentioned there - I read about it on another site and I am not sure how vital it is.
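    For comparison, a minimal JSON-returning ASMX setup usually looks something like the sketch below; the important part is the [ScriptService] attribute on the class, which is what lets the ASP.NET AJAX extensions serialize the return value to JSON (wrapped in the "d" property) when the request is sent with an application/json content type. Whether the explicit ResponseFormat is strictly required is exactly the question above, so treat this as a sketch rather than a definitive answer. Sample and ResultsFactory are the poster's own types.

    using System.Collections.Generic;
    using System.Web.Script.Services;
    using System.Web.Services;

    [WebService(Namespace = "http://tempuri.org/")]
    [ScriptService] // enables JSON responses for ASP.NET AJAX / jQuery callers
    public class Service : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public List<Sample> GenerateSamples(string[][] data)
        {
            // No string conversion is needed here: the framework serializes the
            // return value to JSON and wraps it in the "d" property of the response.
            ResultsFactory f = new ResultsFactory(data);
            return f.GenerateSamples();
        }
    }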

    Read the article

  • Implementing a listview inside a sliding drawer with a listview already present

    - by Parker
    I have an app whose main class extends ListActivity: public class GUIPrototype extends ListActivity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); final Cursor c = managedQuery(People.CONTENT_URI, null, null, null, null); String[] from = new String[] {People.NAME}; int[] to = new int[] { R.id.row_entry }; SimpleCursorAdapter adapter = new SimpleCursorAdapter(this,R.layout.drawer,c,from,to); setListAdapter(adapter); getListView().setTextFilterEnabled(true); } } I have a SlidingDrawer included in my XML, and I'm trying to get a separate ListView to appear in the sliding drawer. I'm trying to populate the second ListView using an inflater: View inflatedView = View.inflate(this, R.layout.main, null); ListView namesLV = (ListView) inflatedView.findViewById(R.id.content); String[] names2 = new String[] { "CS 345", "New Tag", "Untagged" }; ArrayAdapter<String> bb = new ArrayAdapter<String>(this, R.layout.main, R.id.row_entry, names2); namesLV.setAdapter(bb); This compiles and runs, but the SlidingDrawer is completely blank. My XML follows: <SlidingDrawer android:id="@+id/drawer" android:handle="@+id/handle" android:content="@+id/content" android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical" android:layout_gravity="bottom"> <ImageView android:id="@id/handle" android:layout_width="48px" android:layout_height="48px" android:background="@drawable/icon"/> <ListView android:layout_width="fill_parent" android:layout_height="wrap_content" android:id="@id/content"/> </SlidingDrawer> I feel like I'm missing a vital step. I haven't found any resources on my problem by Googling, so any help would be greatly appreciated.

    Read the article

  • Wix Burn issue: Uninstall fails saying "Found dependent"

    - by vivek chaurasiya
    I have made a Burn bundle which encapsulates 2 MSIs (msi1, msi2). In the UI I use checkboxes to ask the user to select which MSI to install. Now if the user selects one of the MSIs to install, installation goes fine. But during the Uninstall action, the Burn log file says: [][:15]: Detected package: Netfx4Full, state: Present, cached: None [][:15]: Detected package: DummyInstallationPackageId3, state: **Absent**, cached: None [][:15]: Detected package: msi2.msi, state: **Present**, cached: Complete [][:15]: Detect complete, result: 0x0 [][:16]: Plan 3 packages, action: Uninstall [][:16]: Will not uninstall package: msi2.msi, found dependents: 1 [][:16]: Found dependent: {08e74372-83f2-4594-833b-e924b418b360}, name: My Test Application In the install scenario, I chose to install msi2 and NOT msi1. My bundle code looks like: <Bundle Name="My Test Application" Version="1.0.0.0" Manufacturer="Bryan" UpgradeCode="CC2A383C-751A-43B8-90BF-A250F7BC2863"> <Chain> <PackageGroupRef Id='Netfx4Full' /> <MsiPackage Id="DummyInstallationPackageId3" SourceFile="msi1.msi" ForcePerMachine="yes" InstallCondition="var1 = 1" > </MsiPackage> <MsiPackage SourceFile="msi2.msi" Vital="yes" Cache="yes" Visible="no" ForcePerMachine="yes" InstallCondition="var2 = 2" > </MsiPackage> </Chain> </Bundle> My OnDetectPackageComplete() looks like: private void OnDetectPackageComplete(object sender, DetectPackageCompleteEventArgs e) { if (e.PackageId == "DummyInstallationPackageId3" ) { if (e.State == PackageState.Absent) InstallEnabled = true; else if (e.State == PackageState.Present) UninstallEnabled = true; } } What should I do so that the Burn bundle is able to uninstall the MSI which the user selected at the time of install? Besides, if I select both MSIs to install, then uninstall works fine. IMO, there is some problem with the relationship between the bundle and the 2 MSIs. Please help me as I am stuck with this problem.

    Read the article

  • Using Essential Use Cases to design a UI-centric Application

    - by Bruno Brant
    Hello all, I'm beginning a new project (oh, how I love the fresh taste of a new project!) and we are just starting to design it. In short: the application is a UI that will enable users to model an execution flow (a Visio-like drag & drop interface). So our greatest concern is usability, along with features that will help users model the execution flow quickly and clearly. Our established methodology makes extensive use of Use Cases in order to create a harmonious view of the application between the programmers and users. This is a business concern, really: I'd prefer to use an Agile Method with User Stories rather than Use Cases, but we need to define a clear scope to sell the product to our clients. However, Use Cases have a number of flaws, most of which are related to the fact that they include technical details, like UI, etc., as can be seen here. But, since we can't use User Stories and a fully interactive design, I've decided that we compromise: I will be using Essential Use Cases in order to hide those details. Now I have another problem: it's essential (no pun intended) to have a clear description of UI interaction, so how should I document it? In other words, how do I specify an application through the use of Essential Use Cases where the UI interaction is vital to it? I can see some alternatives: Abandon the use of Use Cases since they don't correctly represent the problem; Do not include interface descriptions in the use cases, but create separate documentation (storyboards) and link them to the Essential Use Cases; Include UI interaction descriptions in the Essential Use Cases, since they are part of the business rules from the perspective of the users and the application itself.

    Read the article

  • How to serve tiff WMS imagery through GeoServer

    - by mikem419
    OK, so I am new to the GeoServer/database world. I am a student intern and I have been given the task of setting up a WMS using GeoServer. I have never done any database work before this, so bear with me if my questions leave out important information. I am using GeoServer 2.0.1 in standalone mode (downloaded using Jetty) with PostgreSQL 8.4 installed. I went through the nyc_roads and nyc_buildings install demo in the GeoServer documentation, but I still do not understand how I should go about serving up some test images. I noticed that the nyc_roads setup included a .sql file that was responsible for setting up the nyc_buildings database. I do not know how/where this file was generated. Our test images are .tiff and .jpeg. I have successfully been able to do a WMS call on the local GeoServer machine, and have opened the included demo imagery. I now wish to add these .tiff and .jpeg images to GeoServer and access them through WMS. I have tried copying the images to the GeoServer data directory, adding a new data store, and adding layers, but I always receive an error regarding the "input stream." Again, I am very sorry if I am leaving out a lot of vital information; this is as much as I know. Thanks!

    Read the article
