Search Results

Search found 11331 results on 454 pages for 'resource monitor'.


  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster.

    Filter first: PowerView is your best friend
    PowerView is an often ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name, and others. Using PowerView allows you to restrict your search results to the filters you have set, as well as the product list when selecting your products in Search & Browse and when creating service requests. The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the "PowerView is Off" button. When PowerView is on and filters are active, clicking the button again toggles PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters.

    You can create a PowerView in 3 simple steps:
    1. Turn PowerView on and select New from the PowerView menu.
    2. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: while there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters; click the minus sign (-) to remove a filter.
    3. Click Create to save and activate the filter(s).

    You'll notice that "PowerView is On" displays along with the active filters. For more information about the PowerView capabilities, click the "Learn more about PowerView…" menu item or view a short video.

    Browse & refine: access the best match fast for your product and task
    In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best match document, called an "information center," is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: not all products have information centers. If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics from a product overview to security information, as shown in the left menu.

    Just want to search? That's easy too! Again, pick your product, pick your task, select a version, if applicable, enter a keyword term, and click Search. Hint: in this example, you'll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you've previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It's that simple!
    Related Information:
    - My Oracle Support - User Resource Center [ID 873313.1]
    - My Oracle Support Community
    - For more tips on using My Oracle Support, check out these short video training modules: My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronics goods, daily consumables, or luxury brand retailer. It is very likely you will be faced with the following questions:
    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities underpinning each of the business capabilities identified in Step 1?
    3. What is the set of steps that you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components, and getting them up and running. In the long term, as you grow in operations organically or through M&A, partnerships, and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:
    - Increased time and cost due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch, and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal process efficiency due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing

    Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways:
    - Improved time-to-market from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower operating costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of taking a narrow, siloed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

    While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'?
    The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:
    - SEPA Credit Transfer (a Payables transaction in Oracle EBS)
    - SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)

    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of the SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6.

    A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6.

    EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments:
    - Euro Member States: February 1, 2014
    - Non-Euro Member States: October 31, 2016
    Oracle and SEPA
    Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:
    - Release 11.5.10.x - AP & AR
    - Release 12.0.x - AP & AR & IBY
    - Release 12.1.x - AP & AR & IBY
    - Release 12.2.x - AP & AR & IBY

    Resources
    To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:
    - R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
    - R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
    - R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
    - R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
    - R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
    - R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
    - R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1
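    The UNIFI (ISO20022) XML messages mentioned above are plain XML and easy to prototype. Below is a minimal, hedged sketch of a pain.001 (customer credit transfer initiation) skeleton in Python; the element names follow the ISO20022 pain.001.001.03 schema commonly used for SCT, but all names, IBANs and amounts are invented placeholders, and a real message requires many more mandatory elements plus bank-side schema validation.

    ```python
    # Minimal, illustrative SEPA Credit Transfer (pain.001.001.03) skeleton.
    # All values below are made-up placeholders, not a valid payment file.
    import xml.etree.ElementTree as ET

    NS = "urn:iso:std:iso:20022:tech:xsd:pain.001.001.03"

    def sub(parent, tag, text=None, **attrs):
        el = ET.SubElement(parent, tag, **attrs)
        if text is not None:
            el.text = text
        return el

    root = ET.Element("Document", xmlns=NS)
    cct = sub(root, "CstmrCdtTrfInitn")

    # Group header: message-level bookkeeping
    hdr = sub(cct, "GrpHdr")
    sub(hdr, "MsgId", "MSG-2014-0001")           # placeholder message id
    sub(hdr, "CreDtTm", "2014-01-15T10:00:00")
    sub(hdr, "NbOfTxs", "1")

    # One payment batch with one credit transfer
    pmt = sub(cct, "PmtInf")
    sub(pmt, "PmtMtd", "TRF")                    # TRF = credit transfer
    sub(sub(pmt, "Dbtr"), "Nm", "Originator GmbH")
    sub(sub(sub(pmt, "DbtrAcct"), "Id"), "IBAN", "DE89370400440532013000")

    tx = sub(pmt, "CdtTrfTxInf")
    sub(sub(tx, "Amt"), "InstdAmt", "100.00", Ccy="EUR")
    sub(sub(tx, "Cdtr"), "Nm", "Beneficiary SARL")
    sub(sub(sub(tx, "CdtrAcct"), "Id"), "IBAN", "FR1420041010050500013M02606")

    print(ET.tostring(root, encoding="unicode"))
    ```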

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering for what 'Web Safe Area' I should optimize the app layout and design. By Web Safe Area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS). I did some investigation and thinking on my own but wanted to share this to see what the general opinion is. Here is what I found:

    Optimal Display Resolution:
    - w3schools web stats seems to be the most referenced source (however they state that these are results from their site and are biased towards tech-savvy users)
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month collected from across the StatCounter network of more than 3 million websites)
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month)

    Display Resolution Summary: There is a bit of variation between the above sources, but in general as of Jan 2011 it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash based, and Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS Constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7):
    - Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px).
    - The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar).
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.

    This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app.

    Any help on this would be greatly appreciated! Thanks.

    EDIT: Another caveat to my line of thinking above is that different browsers actually take up different amounts of pixels based on the OS they're running on. For example, under WinXP IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7) because the file menu shows up by default on XP, while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px vertically (768px - 142 - 30 - 24 = 572).
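    The arithmetic above is easy to capture in a few lines; here is a small sketch (the pixel figures are the question's own estimates for IE8 chrome, not measured constants):

    ```python
    # Subtract browser and OS chrome from a display resolution to get the
    # usable "web safe area". Pixel figures are estimates from the text.
    def web_safe_area(width, height, taskbar, browser_top, status_bar, outline=4):
        return width - outline, height - taskbar - browser_top - status_bar

    # Win7 + IE8 on a 1280x768 display
    print(web_safe_area(1280, 768, taskbar=40, browser_top=120, status_bar=25))
    # -> (1276, 583)

    # WinXP + IE8: visible file menu grows the browser top to 142px
    print(web_safe_area(1280, 768, taskbar=30, browser_top=142, status_bar=24))
    # -> (1276, 572)
    ```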

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous query to filter the Big Data fire hose, chains intelligent events to real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low latency reads. For example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. A NoSQL cluster can be deployed easily to provide essential services in industry-specific Fast Data solutions, and these technologies can be seen working together in demonstrations such as a location-based personalization service.

    The question then becomes: how do these things work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may need the memory latencies of the Coherence cache, but have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
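    The cache-overflow pattern described here can be sketched generically. The toy code below is an invented stand-in for the real Coherence / Oracle NoSQL integration (not their actual APIs): a bounded in-memory tier that evicts its least recently used entries into a key-value backing store, with reads falling through on a miss.

    ```python
    # Hypothetical sketch of cache overflow: a bounded hot tier (standing in
    # for Coherence) spilling LRU entries to a key-value store (standing in
    # for Oracle NoSQL Database). Real deployments use the integrated products.
    from collections import OrderedDict

    class TieredCache:
        def __init__(self, capacity, backing_store):
            self.capacity = capacity
            self.hot = OrderedDict()      # in-memory tier, LRU order
            self.store = backing_store    # persistent key-value tier

        def put(self, key, value):
            self.hot[key] = value
            self.hot.move_to_end(key)
            if len(self.hot) > self.capacity:
                cold_key, cold_val = self.hot.popitem(last=False)
                self.store[cold_key] = cold_val   # overflow to the store

        def get(self, key):
            if key in self.hot:
                self.hot.move_to_end(key)
                return self.hot[key]
            return self.store.get(key)            # miss falls through

    nosql = {}                                    # stand-in for a NoSQL cluster
    cache = TieredCache(capacity=2, backing_store=nosql)
    for k, v in [("a", 1), ("b", 2), ("c", 3)]:
        cache.put(k, v)
    print(cache.get("a"), sorted(nosql))          # 1 ['a']
    ```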

    Read the article

  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar—it’s becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the “things” are only a small part of a much larger, integrated architecture. A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small “wearable” device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother’s blood pressure is dangerously high. It sends a real-time alert to the patient’s wearable device, advising her to contact her doctor. It also pushes an alert with the patient’s historical data to the doctor’s tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient’s data. Then, comparing the new data with the patient’s medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care. The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims. This is an ideal scenario for the Internet of Things—when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7. If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We’re developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making. The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It’s in that convergence that the true value of the Internet of Things emerges. Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from the Internet of Things’ many opportunities. To discover what else Oracle is doing to connect the world, read about Oracle’s Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone. Best regards, David Hicks
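    At its core, the hub's decision step in this story is a threshold rule plus alert routing. A toy sketch follows; the threshold, device names and payloads are invented for illustration only, and real clinical rules are far more involved.

    ```python
    # Toy version of the hub logic: compare a reading against a threshold
    # and fan alerts out to the patient's device and the doctor's tablet.
    def check_blood_pressure(reading, history, systolic_limit=140):
        alerts = []
        if reading["systolic"] > systolic_limit:
            alerts.append(("patient-device", "Contact your doctor"))
            alerts.append(("doctor-tablet", {"reading": reading,
                                             "history": history}))
        return alerts

    history = [{"systolic": 118}, {"systolic": 124}]
    for target, payload in check_blood_pressure({"systolic": 162}, history):
        print(target, "->", payload)
    ```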

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle OpenWorld last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle's commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on-premise and the various on-demand options as business needs arise.

    In today's business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes.

    The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer's overall cost of ownership with its size and anticipated growth and business needs.

    In addition to providing PeopleSoft HCM applications' world-class business functionality and Oracle On Demand's embassy-grade security, Oracle's hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers' business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and the time to production under a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and a secured URL, all within a private cloud.
    Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services to give customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in case of an emergency. Currently, over 50 PeopleSoft customers have entrusted us with the management of their applications through Oracle On Demand. If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers

    ReSharper 8.0 comes with the following new features:
    - Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    - Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    - Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    - Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    - Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    - New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    - Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    - More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    - A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    - CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - there's a rough outline of what's new for CSS in ReSharper 8.
    - A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections and additionally a duplicate code finder that you can integrate with your CI server or version control system.
    - Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Provider
    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation is based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile provider. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions will be just a matter of using different providers.

    URL Handling
    Another thing that's substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.
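    The Host-header approach generalizes beyond WCF. Here is a short sketch of the idea as a hypothetical WSGI-style helper in Python (not the actual StarterSTS code, which is .NET): build externally visible URLs from the HTTP Host header rather than the local machine name, so the generated links point at the load balancer's public name.

    ```python
    # Sketch: generate external-facing URLs behind a load balancer. The Host
    # header carries the public name, not the internal farm-node name.
    def external_url(environ, path):
        scheme = environ.get("wsgi.url_scheme", "https")
        host = environ["HTTP_HOST"]   # public load-balancer address
        return f"{scheme}://{host}{path}"

    environ = {"wsgi.url_scheme": "https", "HTTP_HOST": "sts.example.com"}
    print(external_url(environ,
                       "/FederationMetadata/2007-06/FederationMetadata.xml"))
    # -> https://sts.example.com/FederationMetadata/2007-06/FederationMetadata.xml
    ```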

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't really monitor it closely, but Google Adsense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment.

    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all.

    I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference.

    According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

    WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

    WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
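    For reference, the process-manager knobs discussed above live in the pool configuration. A sketch of an "ondemand" pool for a box this size might look like the following; the directive names are standard php-fpm settings, but the values are illustrative starting points, not a guaranteed fix for the hangups described:

    ```ini
    ; Illustrative "ondemand" pool for a small 1GB VPS (values are
    ; starting points, not a guaranteed fix for the hangups above)
    [www]
    pm = ondemand
    ; hard cap on concurrent workers keeps memory bounded
    pm.max_children = 10
    ; reap workers that sit idle longer than this
    pm.process_idle_timeout = 10s
    ; recycle each worker after N requests to contain slow leaks
    pm.max_requests = 500
    ; kill requests stuck longer than this (this is the limit behind the
    ; "execution timed out ... terminating" warnings in the log excerpt)
    request_terminate_timeout = 60s
    ```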

    Read the article

  • Pantech Link II, Ubuntu and Virtual XP

    - by user85041
    Okay this is my problem. I have a Pantech Link II, dmesg states: [ 896.072037] usb 2-3: new high-speed USB device number 3 using ehci_hcd [ 896.258562] cdc_acm 2-3:1.0: ttyACM0: USB ACM device [ 896.260039] usbcore: registered new interface driver cdc_acm [ 896.260042] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters Have it installed through wine (pc suite and driver) and it doesn't see it. Virtual XP through VMWare Player sees my device, knows it needs a driver. The removable devices says Curitel Pantech USB Device (Maybe Driver). I have PC Suite installed in XP, I install the driver through the executable.. it says problem with installing hardware, and then it disappears. Ubuntu sees it after restart, but if I start XP with that driver installed, it disappears from both and I get these errors in dmesg: [ 1047.760555] /dev/vmmon[2882]: PTSC: initialized at 3093322000 Hz using TSC, TSCs are synchronized. [ 1048.174033] /dev/vmmon[2882]: Monitor IPI vector: 0 [ 1055.293060] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1055.293074] /dev/vmnet: port on hub 8 successfully opened [ 1055.293088] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1055.293094] /dev/vmnet: port on hub 8 successfully opened [ 1072.446305] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1072.446316] /dev/vmnet: port on hub 8 successfully opened [ 1072.446328] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1072.446334] /dev/vmnet: port on hub 8 successfully opened [ 1072.856024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1079.292024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1079.732024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1127.743034] NET: Registered protocol family 39 [ 1127.749320] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_ALLOC (cid=1522210225,result=4). [ 1144.104031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd [ 1144.412031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd [ 1155.889976] ehci_hcd 0000:00:13.2: force halt; handshake ffffc90000642024 00004000 00000000 -> -110 [ 1155.889980] ehci_hcd 0000:00:13.2: HC died; cleaning up [ 1155.890008] usb 2-3: USB disconnect, device number 3 [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110 [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3). [ 1658.392018] NET: Unregistered protocol family 39 [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101) [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13 lessa@X:~$ dmesg|tail [ 1155.890008] usb 2-3: USB disconnect, device number 3 [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110 [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3). 
[ 1658.392018] NET: Unregistered protocol family 39 [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101) [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13 I have tried uninstalling, and installing manually from the device manager update driver while it's still has the warning sign.. it doesn't see the drivers as valid. No idea how to fix this.. would prefer to not have to go to another computer. I'm not trying to do anything but get the pictures off of it. I have to restart ubuntu, plug in device, for ubuntu to see it correctly again. I am like a month and a half old linux newbie so I have no idea the commands I could use for this, and I don't have a memory card in the phone to mount.

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding reasons for bizarre bugs can often be like finding a needle in a haystack. Finding out why an object gets corrupted for no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.

    Scenario
    Take the following scenario as an example. You have defined a class that contains an array of characters that is 256 characters long. You now implement a method for filling this buffer with a string passed as an argument; the method hard-codes the assumption that the buffer is 256 characters long. At some point you notice that you require another character buffer and you add that after the previous one in the class definition. You now figure that you don't need the 256 characters that the first member can hold and you shorten that to 128 to conserve space. At this point you should start thinking that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind. Buffer overflow is one of the most frequent sources of errors in a piece of software and often one of the most difficult ones to detect, especially when data is read from an outside source. Many mass copy functions provided by the C run-time have versions with boundary checking (defined with the _s suffix), but they cannot guard against hard coded buffer lengths that at some point get changed.

    Finding the bug
    Getting back to the scenario, you're now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors. It's called data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, and then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it's often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame. There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint. The application will now break at the address you specified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in your search for the real reason why the memory gets inadvertently modified.

    Conclusions
    Data breakpoints are a great feature, especially when doing a lot of low level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.
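    The corruption in the scenario above is easy to reproduce. Here is a hypothetical reconstruction, written in Python/ctypes rather than C++ and with invented field names: the fill routine still assumes the old 256-byte size after the first buffer was shrunk, so the copy spills into the member declared after it. This is exactly the write that a data breakpoint set on the second buffer's address would catch.

    ```python
    # Reconstruction of the scenario: a copy sized for the old 256-byte
    # buffer overflows into the field declared after it.
    import ctypes

    class Record(ctypes.Structure):
        _fields_ = [
            ("first", ctypes.c_char * 128),   # was 256, later shortened
            ("second", ctypes.c_char * 128),  # added later, sits right after
        ]

    def fill_first(rec, data):
        # BUG: still sized for the original 256-byte buffer
        ctypes.memmove(ctypes.addressof(rec) + Record.first.offset,
                       data, min(len(data), 256))

    rec = Record(b"", b"untouched")
    fill_first(rec, b"A" * 200)
    print(rec.second)  # b'AAAA...' -- corrupted "randomly and out of context"
    ```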

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    (The original Japanese portion of this post was lost to an encoding error; the English summary below preserves its content.)

    It's been a while since the last post... I have been working on machine translation (MT) and post editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I would take this year as an advanced step. I plan to talk on Post Editing 2011 (Advanced Step) at the 21st JTF Translation Festival (5 days before the application is due):

    Date: Nov 29, 2011, Tuesday, 9:30-20:30 (gate opens at 9:00)
    Place: Arcadia Ichigaya, Tokyo
    http://www.jtf.jp/jp/festival/festival_top.html

    In this session, I would like to expand on the thought of "how to best utilize MT and PE", both from the view of the Client and of the Translator. I will show some examples of post editing as a guideline to know what is the best and most effective way to do post-editing for Japanese. Also, I will discuss what is the best practice for MT users (Clients). The session lasts 90 minutes... that sounds a little long to me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts or experiences about MT and PE. What are your concerns or problems in your daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04

    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems about cost, resources, and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are 3 options: 1. Be patient and wait until everything is ready; 2. Run a workaround using anything available now; 3. Find something completely new and spend time and money on it. This time, I will focus on Option 2 - doing something with what we already have. That is, I will discuss how we can best utilize MT in our daily business. My view is two-way: from the Client point of view, and from the Translator point of view. Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.)

    I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information. In particular, the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance. To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic Re-encryption
    As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:
    - There are any NON-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software.
    - The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me other than I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it.

    Question 1: Is the concept of key expiration standard, and, if so, is that simply industry standard or an element of PCI?

    Issue 2: Key Storage
    Here's my real issue...the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here.

    Question 2: Is this method of key storage--namely storing the key in the database using an obfuscation method that exists in client code--normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!
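    For what it's worth, the "key expiration" in Issue 1 is ordinary key rotation: decrypt everything under the old key and re-encrypt under a new one. A minimal sketch using the Python cryptography package's MultiFernet follows; the library choice is an assumption made purely for illustration (the product described is .NET and uses its own scheme).

    ```python
    # Minimal key-rotation sketch with the "cryptography" package. Decryption
    # tries every key in the ring; rotate() re-encrypts a token under the
    # newest key, which is the batch re-encryption described in Issue 1.
    from cryptography.fernet import Fernet, MultiFernet

    old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
    token = Fernet(old_key).encrypt(b"4111111111111111")  # a test card number

    # Newest key first: new tokens use new_key, old tokens still decrypt.
    ring = MultiFernet([Fernet(new_key), Fernet(old_key)])
    rotated = ring.rotate(token)          # re-encrypted under new_key

    assert ring.decrypt(rotated) == b"4111111111111111"
    # Once every stored value has been rotated, the old key can be retired.
    ```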

    Read the article

  • When will EBS 12.2 be released?

    - by Steven Chan (Oracle Development)
    The most frequently asked question at OpenWorld this year was, "When will EBS 12.2 be released?" Sadly, Oracle's communication policies prohibit us from speculating about release dates for unreleased software. We are not permitted to give estimates, rough timelines, guesses, or anything else that remotely resembles specific guidance on release dates. You can monitor My Oracle Support and this blog for updates on EBS 12.2. I'll post them here as soon as they're available. I'm embedding an old favourite from 2007 in its entirety here, since it applies equally to new releases as well as certifications.

    "Loose Lips Sink Ships" (March 20, 2007)
    If I were to sort emails in my inbox into groups, the biggest -- by far -- would be the one for emails that start with, "When will _____ be certified with the E-Business Suite?" I answer these dutifully but know that my replies can sometimes be maddening, for two reasons: technical uncertainty, and Oracle's rules for such communications.

    On the Spiral Model of Certifications
    Technology stack certifications tend to be highly iterative in nature. As a result, statements about certification dates tend to be accurate only when made in hindsight. Laypeople are horrified to hear this, but it's the ugly truth. Uncertainty is simply inherent to the process. I've become inured to it over the years, but it might come as a surprise to you that it can take many cycles to get fully-released software to work together. Take this scenario:
    1. We test a particular combination of Component A and B.
    2. If we encounter a problem, say, with Component A, we log a bug.
    3. We receive a new version of Component A.
    4. The process iterates again.
    The reality is this: until a certification is completed and released, there's no accurate way of telling how many iterations are yet to come. This is true regardless of the number of iterations that have already been completed.

    Our Lips Are Sealed
    Generally, people understand that things are subject to change, so the second reason I can't say anything specific is actually much more important than the first. "Loose lips might sink ships" was coined in World War II in an effort to remind people that careless talk can have serious consequences. Curiously, this applies to Oracle's communications about upcoming features, configurations, and releases, too. As a publicly traded company, we have very strict policies that prohibit us from linking specific releases to specific dates. If you've ever listened to an earnings call with analysts, you'll often hear them asking, "Can you add a little more color to that statement?" For certifications, color is usually the only thing that I have. Sometimes I can provide a bit more information about the technical nature of the certification in question, such as expected footprints or version levels. I can occasionally share technical issues that we've found, too, to convey the degree of risk or complexity involved in the certification. Aside from that, there's little additional information about specific dates, date ranges, or even speculation about dates that I can provide... that is, without having one of those uncomfortable conversations with Oracle Legal.

    So, as much as it pains me to do so, when it comes to dates, I'm always forced to conclude with a generic reply that blandly states one of the following:
    - We're working on that certification right now
    - That certification is in the pipeline but hasn't been started yet
    - We don't have plans for that certification

    Don't Shoot the Messenger
    Thankfully, I've developed a thick skin over the years -- which is a good thing, considering the colorful and energetic responses I've received over the years after answering these questions. However, on behalf of my Oracle colleagues who are faced with these questions every day in the field, I urge you to remember that they're required to follow these same corporate rules about date disclosures. It never hurts to ask, but don't be too disappointed if we can't provide you with a detailed answer. The Go-Go's had it right, after all.

    Related Articles
    Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to Algorithms) [closed]

    - by Mike
    EDITED (I realized that the question needs some context.) Problem 1.1-5 in Introduction to Algorithms by Thomas Cormen et al. reads: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough." I'm interested in its first statement. As I understand it, it asks for a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be fine.

    So what is the difference between an exact and a good-enough solution? Consider a physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation possible, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes intractably complex and unsolvable. Virtually every particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than ones located light years away. When the mathematical model then needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm that constitutes a solution. If the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It's worth noting the difference between an exact solution algorithm and an exact computation result. Considering real-world problems on real-world computing machines, I believe no solution to a physical problem that involves calculation can be exact, because universal physical constants are represented approximately in the computer. Every number is stored with limited precision, at the very least limited by the amount of memory available to the machine.

    I can imagine plenty of problems where a good-enough solution, good to some degree, will work: train scheduling, automated trading, satellite orbit calculation, health-care expert systems. In those cases exact solutions can't be derived, due to constraints on computation time, limitations in computer memory, or the nature of the problems themselves. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that is probably more of theoretical interest.

    So I would like to narrow the question down to: what are the real-world, practical problems where only the best (exact) solution algorithm or program will do, and a good-enough solution will not? There are problems, like breaking cryptographic ciphers, where only an exact solution matters in practice; then again, in practice, deciphering without knowing the secret should also take a reasonable amount of time. Returning to the original question, that makes it a problem where a good-enough (fast-enough) solution will do: there is no practical need for an instant crack, however desirable. The quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, generating the least network traffic, and so on. And still I would like to keep the question theoretical if possible.
    In a sense, there may be an example of a computer X with a limited resource R of amount Y, where the best solution to problem P is one that takes no more than the available Y for inputs of size N*Y. But that is the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical problems are only required to be good enough; in rare cases very, very good, but still not the best. Isn't that so? :) If not, can you provide an example? Or can you name any such unsolved problem of practical interest?
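    To make the limited-precision point concrete, here is a small, self-contained illustration (plain Java; the constant is just an example I chose): a decimal value such as standard gravity, 9.80665, has no exact binary representation, so even the inputs of a simulation are approximate before any algorithm runs.

        // Minimal sketch: decimal constants are stored approximately in binary doubles.
        import java.math.BigDecimal;

        public class PrecisionDemo {
            public static void main(String[] args) {
                // Standard gravity is defined as exactly 9.80665 m/s^2 in decimal,
                // but a double can only hold the nearest representable binary value.
                double g = 9.80665;
                System.out.println(new BigDecimal(g)); // prints the long, rounded binary expansion

                // The same effect shows up in ordinary arithmetic:
                System.out.println(0.1 + 0.2 == 0.3); // false
                System.out.println(0.1 + 0.2);        // 0.30000000000000004
            }
        }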

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill.

    Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility, and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware, and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms, and even complete application stacks on demand.

    Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them, so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free.

    A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent in their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle.

    Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases, and middleware. Enterprise Manager 12c meters the utilization and configuration of the following types of Enterprise Manager target: Oracle VM, Host, Oracle Database, and Oracle WebLogic Server.

    Using Enterprise Manager Chargeback, administrators are able to create a set of charge plans that attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows), and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user who provisions these resources can then view a report that details their usage and helps them understand how that usage translates into cost. Armed with this information, the user is able to determine whether the resources are delivering adequate business value for what is being charged. (Figure 1: Chargeback in Self-Service Portal)

    Enterprise Manager 12c provides a variety of additional interfaces to this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. (Figure 2: Charge Summary Report) Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time. (Figure 3: CPU Trend Report)

    We also provide chargeback reports through BI Publisher. This gives users who do not have an Enterprise Manager login (such as line-of-business managers) a way to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM).

    Further information on Enterprise Manager 12c's integrated metering and chargeback: White Paper, Screenwatch, and Cloud Management on OTN.
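    As a back-of-the-envelope illustration of how a charge plan like the one described above adds up, here is a minimal sketch; the class and variable names are invented for the example (they are not Enterprise Manager APIs), and the rates mirror the examples in the text.

        // Illustrative only: evaluates a charge plan with fixed, configuration-based,
        // and utilization-based components.
        public class ChargePlanSketch {
            public static void main(String[] args) {
                double fixedPerMonth = 10.00;      // $10/month/database
                double windowsSurcharge = 10.00;   // $10/month if the OS is Windows
                double memoryRatePerGbHour = 0.05; // $0.05/GB of memory/hour

                boolean osIsWindows = true;        // hypothetical metered configuration
                double avgMemoryGb = 8.0;          // hypothetical metered utilization
                double hoursInMonth = 720.0;

                double charge = fixedPerMonth
                        + (osIsWindows ? windowsSurcharge : 0.0)
                        + memoryRatePerGbHour * avgMemoryGb * hoursInMonth;

                System.out.printf("Monthly charge: $%.2f%n", charge); // prints $308.00
            }
        }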

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet, and people tweet more than they send email. Because so many people are on social media, companies and brands want to be there too. They want to listen to social chatter, engage with customers on social channels, create great-looking Facebook pages, and roll out social-collaborative work environments within their organizations. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases for Oracle SRM.

    Easy publishing. Companies now have an average of 178 social media accounts, with every product, geography, or employee group creating its own social media channel. For example, if you work at an international hotel chain where every single hotel creates its own Facebook page for its location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from damaging the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows so that each account generates its own content, but that content must be approved before publishing. The benefits are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts.

    Monitoring and listening. People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it is almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about it on social channels. It can drill down into individual posts or get a high-level view of trends and mentions. The benefits are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis.

    Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and a lower cost of ownership versus on-premises deployments. Oracle SRM also fits into Oracle's vision to social-enable your enterprise. With Oracle SRM, social media is not just a marketing channel; it is also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social-enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • ADF page security - the untold password rule

    - by ankuchak
    I'm kinda new to Oracle ADF, so in this blog post I'm going to share something I faced (and recovered from) recently. Initially I wondered whether I should write a post on this at all, because it's totally simple. Still, simplicity is a relative term. So, without wasting further time, let's kick off.

    I was exploring the ADF security aspect to secure a page through HTTP Basic authentication. The idea is very simple, and the credential store etc. come into the picture. But I was not able to run a successful test of this phenomenally simple thing, even after trying for over 30 minutes. This is what I did.

    I created a simple JSF page and put a panel in it, with a simple EL expression to show the current user name. Next I created a user to test with. I set the password to myuser, just to keep it simple. Then I created an enterprise role and mapped the new user to it. Then I created an application role and mapped the enterprise role to it. Finally I mapped the resource (the simple JSF page in this case) to the application role, so that only users with that application role can access the page (as if you didn't know this, duh!). Of course, I had to create the page definition for the page before I could map it to an application role. What else? Done! Then I hit the Run menu item and it all went well...

    Until... I got this message. I entered the correct credentials two or three times. Still I got the same error. Why? I hadn't gotten any error message during the deployment. Nope. Then, as I said before, I spent over 30 minutes trying different things, like mapping only the user (not the role) to the page, changing the context root, etc. Nothing worked! Then, of course, I bothered to look at the logs and found this. See the first red line. It says it all. The problem was with the password: it must contain at least one special character and one digit. I think I was misled by the missing password hint/rule and by the fact that the deployment didn't fail even though the user was not created properly. Well, yes, I agree that I was foolish not to look at the logs. Later I changed the password to something like myuser123# and it worked. I hope this helps.
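    If you want to sanity-check a candidate password against that rule (at least one digit and one special character) before creating the user, a tiny helper like the following works. It is a hypothetical utility of my own, not part of ADF, and your server's actual policy may add further requirements.

        // Hypothetical pre-check for the password rule described above.
        public class PasswordRuleCheck {
            static boolean satisfiesRule(String password) {
                boolean hasDigit = password.chars().anyMatch(Character::isDigit);
                boolean hasSpecial = password.chars().anyMatch(c -> !Character.isLetterOrDigit(c));
                return hasDigit && hasSpecial;
            }

            public static void main(String[] args) {
                System.out.println(satisfiesRule("myuser"));     // false: no digit, no special character
                System.out.println(satisfiesRule("myuser123#")); // true
            }
        }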

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    “The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects,” Iain Graham, director of energy and process manufacturing for Oracle’s Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives. Q: Why is EPPM so important for today’s process manufacturers? A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you’re not managing your own workers and all the interdependencies among the different contractors, then you’ve got problems. Q: What else should process manufacturers look for? A: It’s also important that an EPPM solution has the ability to manage more than just capital projects. For example, it’s best to manage maintenance and capital projects in the same system. Say you’re due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that’s not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects. Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • Oracle Support Training (November 2012)

    - by Steve He(???)
    Oracle Support Training: Oracle offers free, web-based training sessions to help you work with Oracle Support and use its tools and resources effectively. The live sessions scheduled for November, delivered in Mandarin, are:

    - Support Best Practices (formerly WEWS): November 13, 15:00
    - EBS - Support Diagnostics Tools: November 15, 15:00
    - OSWatcher Black Box: How to improve performance and monitor your system automatically: November 15, 15:00
    - MOS - Configuration Manager: November 20, 15:00
    - Get Proactive Resolve - Answers Generic: November 22, 15:00
    - MOS - Communities: November 27, 15:00

    To convert session times to your time zone, use a world clock. For details on joining these sessions, see note 603505.1. Internet Explorer is recommended for accessing My Oracle Support.

    The following recorded training sessions are also available on demand:

    - Creating Customer Value
    - Oracle Support Basics
    - An Introduction to My Oracle Support
    - Service Request Management
    - Customer User Administration
    - Managing Favorite
    - Quick Search
    - Hot Topic Email
    - Patch and Update
    - Site Alert
    - Search and Browse Features in My Oracle Support
    - Why Use Configuration Manager In The My Oracle Support
    - Enterprise Manager 11g and My Oracle Support
    - Oracle Collaborative Support
    - How to Escalate a Service Request within Oracle Support

    For more, visit the Support Training Community.

    Read the article

  • I get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query. The query executes fine at the SQL Server command prompt:

        try {
            session = HibernateUtil.getInstance().getSession();
            transaction = session.beginTransaction();
            SQLQuery query = session.createSQLQuery(
                "SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, "
                + "RS.EMPLOYEE_ID, RS.EMPLOYEE_NAME, RS.REPORTING_PM "
                + "FROM RESOURCE_MASTER RS, SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, "
                + "PROJECT_APPROVED PA, PROJECT_ALLOCATION PL "
                + "WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID "
                + "AND PA.PROJECT_ID = PL.PROJECT_ID "
                + "AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID "
                + "AND S.ACTUAL_PROPOSAL_ID = AP.ACTUAL_PROPOSAL_ID");
            List<Object[]> rows = query.list();
            for (Object[] row : rows) {
                System.out.println(row[0]);
            }
            String name = (String) rows.get(0)[0];
            logger.info("In find All searchDeveloper");
        } catch (Exception exception) {
            throw new PPAMException("Contact admin",
                    "Problem retrieving resource master list", exception);
        }

    When I run it through Hibernate, I get: org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. I mapped seven tables; if I remove the ACTUAL_PROPOSAL AP table from the query, it executes correctly. Please help me.
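    For what it's worth, JDBC type -9 is java.sql.Types.NVARCHAR, and the SQL Server dialects shipped with older Hibernate versions do not map it to a Hibernate type, so a native query returning an NVARCHAR column (here, presumably one of the ACTUAL_PROPOSAL columns) fails in exactly this way. A commonly suggested workaround is a small custom dialect; the sketch below assumes you then point the hibernate.dialect property at this class.

        import java.sql.Types;
        import org.hibernate.dialect.SQLServerDialect;

        // Sketch: map JDBC type -9 (NVARCHAR) to Hibernate's built-in string type,
        // so native queries returning NVARCHAR columns no longer fail.
        public class SQLServerNvarcharDialect extends SQLServerDialect {
            public SQLServerNvarcharDialect() {
                registerHibernateType(Types.NVARCHAR, "string");
            }
        }

    Alternatively, calling query.addScalar(...) for each selected column tells Hibernate the result types explicitly, so it never has to consult the dialect mapping.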

    Read the article

  • Reporting services 2008: ReportExecution2005.asmx does not exist

    - by Shimrod
    Hi everyone, I'm trying to generate a report directly from code (to send it by mail afterwards). I do this in a Windows service. Here is what I'm doing:

        Dim rview As New ReportViewer()
        ' Note: this is a Report Manager style URL; the missing ReportExecution2005.asmx in the
        ' log below usually means the ReportViewer expects the report server web service URL
        ' instead (commonly http://server/ReportServer, though the name depends on the install).
        Dim reportServerAddress As String = "http://server/Reports_client"
        rview.ServerReport.ReportServerUrl = New Uri(reportServerAddress)

        Dim paramList As New List(Of Microsoft.Reporting.WinForms.ReportParameter)
        paramList.Add(New Microsoft.Reporting.WinForms.ReportParameter("param1", t.Value))
        paramList.Add(New Microsoft.Reporting.WinForms.ReportParameter("CurrentDate", Date.Now))

        Dim reportsDirectory As String = "AppName.Reports"
        Dim reportPath As String = String.Format("/{0}/{1}", reportsDirectory, reportName)
        rview.ServerReport.ReportPath = reportPath
        rview.ServerReport.SetParameters(paramList) 'This is where I get the exception

        Dim mimeType, encoding, extension, deviceInfo As String
        Dim streamids As String()
        Dim warnings As Microsoft.Reporting.WinForms.Warning()
        deviceInfo = "<DeviceInfo><SimplePageHeaders>True</SimplePageHeaders></DeviceInfo>"
        Dim format As String = "PDF"
        Dim bytes As Byte() = rview.ServerReport.Render(format, deviceInfo, mimeType, encoding, extension, streamids, warnings)

    When debugging this code, I can see it throws a MissingEndpointException at SetParameters(paramList) with this message: "The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version." Looking in the server's log file, I can see this:

        ui!ReportManager_0-8!878!06/02/2010-11:34:36:: Unhandled exception: System.Web.HttpException: The file '/Reports_client/ReportExecution2005.asmx' does not exist.
        at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath)
        at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
        at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
        at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
        at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath)
        at System.Web.UI.WebServiceParser.GetCompiledType(String inputFile, HttpContext context)
        at System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath)
        at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig)
        at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I haven't found any resource on the web that fits my problem. Does anyone have a clue? I am able to view the reports from a web application, so I'm sure the server is running. Thanks in advance.

    Read the article

  • Using WPF and SlimDX (DirectX 10/11)

    - by slurmomatic
    I have been using SlimDX with WinForms for a while now, but I want to switch to WPF. However, I can't figure out how to get DX10/11 working with WPF. The February release of SlimDX provides a WPF example, which only works with DX9 though. I found the following solution: http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/437/Direct3D-10-11-Direct2D-in-WPF.aspx but I can't get it to work with SlimDX. My main problem is the shared resource handle, as I don't know how to retrieve the shared handle from a SlimDX texture. I can't find any information on this topic. In C++ the code looks like this:

        HRESULT D3DImageEx::GetSharedHandle(IUnknown *pUnknown, HANDLE *pHandle)
        {
            HRESULT hr = S_OK;
            *pHandle = NULL;

            IDXGIResource *pSurface;
            if (FAILED(hr = pUnknown->QueryInterface(__uuidof(IDXGIResource), (void**)&pSurface)))
                return hr;

            hr = pSurface->GetSharedHandle(pHandle);
            pSurface->Release();
            return hr;
        }

    Basically, what I want to do (because I think this is the solution) is to share a texture between a Direct3D9DeviceEx (for rendering the WPF D3DImage) and a Direct3D10 device (a texture render target for my scene). Any pointers in the right direction are greatly appreciated.

    Read the article

  • java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap

    - by kongkea
    I get this error when I click a ListView item to show the image at full size. How can I solve it?

    Error:

        11-20 10:27:47.039: D/AndroidRuntime(5078): Shutting down VM
        11-20 10:27:47.039: W/dalvikvm(5078): threadid=1: thread exiting with uncaught exception (group=0x40c061f8)
        11-20 10:27:47.047: E/AndroidRuntime(5078): FATAL EXCEPTION: main
        11-20 10:27:47.047: E/AndroidRuntime(5078): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.example.mylistview.MainActivity$1.onItemClick(MainActivity.java:103)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AdapterView.performItemClick(AdapterView.java:292)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView.performItemClick(AbsListView.java:1173)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$PerformClick.run(AbsListView.java:2701)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$1.run(AbsListView.java:3453)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.handleCallback(Handler.java:605)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.dispatchMessage(Handler.java:92)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Looper.loop(Looper.java:137)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.app.ActivityThread.main(ActivityThread.java:4514)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invokeNative(Native Method)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invoke(Method.java:511)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at dalvik.system.NativeStart.main(Native Method)

    MainActivity:

        public class MainActivity extends Activity {
            public static final int DIALOG_DOWNLOAD_JSON_PROGRESS = 0;
            private ProgressDialog mProgressDialog;
            ArrayList<HashMap<String, Object>> MyArrList;

            @SuppressLint("NewApi")
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);

                // Permission StrictMode
                if (android.os.Build.VERSION.SDK_INT > 9) {
                    StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
                    StrictMode.setThreadPolicy(policy);
                }

                // Download JSON file
                new DownloadJSONFileAsync().execute();
            }

            @Override
            protected Dialog onCreateDialog(int id) {
                switch (id) {
                case DIALOG_DOWNLOAD_JSON_PROGRESS:
                    mProgressDialog = new ProgressDialog(this);
                    mProgressDialog.setMessage("Downloading.....");
                    mProgressDialog.setProgressStyle(ProgressDialog.STYLE_SPINNER);
                    mProgressDialog.setCancelable(true);
                    mProgressDialog.show();
                    return mProgressDialog;
                default:
                    return null;
                }
            }

            // Show all content
            public void ShowAllContent() {
                final ListView lstView1 = (ListView) findViewById(R.id.listView1);
                lstView1.setAdapter(new ImageAdapter(MainActivity.this, MyArrList));
                lstView1.setOnItemClickListener(new OnItemClickListener() {
                    @Override
                    public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                        HashMap<String, Object> hm = (HashMap<String, Object>) lstView1.getAdapter().getItem(position);
                        String imagePath = (String) hm.get("photo");
                        Intent i = new Intent(MainActivity.this, FullImageActivity.class);
                        i.putExtra("fullImage", imagePath);
                        startActivity(i);
                    }
                });
            }
            public class ImageAdapter extends BaseAdapter {
                private Context context;
                private ArrayList<HashMap<String, Object>> MyArr = new ArrayList<HashMap<String, Object>>();

                public ImageAdapter(Context c, ArrayList<HashMap<String, Object>> myArrList) {
                    context = c;
                    MyArr = myArrList;
                }

                public int getCount() {
                    return MyArr.size();
                }

                public Object getItem(int position) {
                    return position; // returns the position (an Integer), not the row data; see the note below
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                    if (convertView == null) {
                        convertView = inflater.inflate(R.layout.activity_column, null);
                    }

                    // ColImage
                    ImageView imageView = (ImageView) convertView.findViewById(R.id.ColImgPath);
                    imageView.getLayoutParams().height = 80;
                    imageView.getLayoutParams().width = 80;
                    imageView.setPadding(5, 5, 5, 5);
                    imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
                    try {
                        imageView.setImageBitmap((Bitmap) MyArr.get(position).get("ImageThumBitmap"));
                    } catch (Exception e) {
                        // When error
                        imageView.setImageResource(android.R.drawable.ic_menu_report_image);
                    }

                    // ColImgID
                    TextView txtImgID = (TextView) convertView.findViewById(R.id.ColImgID);
                    txtImgID.setPadding(10, 0, 0, 0);
                    txtImgID.setText("ID : " + MyArr.get(position).get("id").toString());

                    // ColImgName
                    TextView txtPicName = (TextView) convertView.findViewById(R.id.ColImgName);
                    txtPicName.setPadding(50, 0, 0, 0);
                    txtPicName.setText("Name : " + MyArr.get(position).get("first_name").toString());

                    return convertView;
                }
            }

            // Download JSON in background
            public class DownloadJSONFileAsync extends AsyncTask<String, Void, Void> {
                protected void onPreExecute() {
                    super.onPreExecute();
                    showDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }

                @Override
                protected Void doInBackground(String... params) {
                    String url = "http://192.168.10.104/adchara1/";
                    JSONArray data;
                    try {
                        data = new JSONArray(getJSONUrl(url));
                        MyArrList = new ArrayList<HashMap<String, Object>>();
                        HashMap<String, Object> map;
                        for (int i = 0; i < data.length(); i++) {
                            JSONObject c = data.getJSONObject(i);
                            map = new HashMap<String, Object>();
                            map.put("id", (String) c.getString("id"));
                            map.put("first_name", (String) c.getString("first_name"));
                            // Thumbnail: get the image bitmap into the map
                            map.put("photo", (String) c.getString("photo"));
                            map.put("ImageThumBitmap", (Bitmap) loadBitmap(c.getString("photo")));
                            // Full size (for the view popup)
                            map.put("frame", (String) c.getString("frame"));
                            MyArrList.add(map);
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                    return null;
                }

                protected void onPostExecute(Void unused) {
                    ShowAllContent(); // when finished, show content
                    dismissDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                    removeDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }
            }

            /*** Get JSON code from URL ***/
            public String getJSONUrl(String url) {
                StringBuilder str = new StringBuilder();
                HttpClient client = new DefaultHttpClient();
                HttpGet httpGet = new HttpGet(url);
                try {
                    HttpResponse response = client.execute(httpGet);
                    StatusLine statusLine = response.getStatusLine();
                    int statusCode = statusLine.getStatusCode();
                    if (statusCode == 200) { // download OK
                        HttpEntity entity = response.getEntity();
                        InputStream content = entity.getContent();
                        BufferedReader reader = new BufferedReader(new InputStreamReader(content));
                        String line;
                        while ((line = reader.readLine()) != null) {
                            str.append(line);
                        }
                    } else {
                        Log.e("Log", "Failed to download file..");
                    }
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return str.toString();
            }

            /***** Get image resource from URL (start) *****/
            private static final String TAG = "Image";
            private static final int IO_BUFFER_SIZE = 4 * 1024;

            public static Bitmap loadBitmap(String url) {
                Bitmap bitmap = null;
                InputStream in = null;
                BufferedOutputStream out = null;
                try {
                    in = new BufferedInputStream(new URL(url).openStream(), IO_BUFFER_SIZE);
                    final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
                    out = new BufferedOutputStream(dataStream, IO_BUFFER_SIZE);
                    copy(in, out);
                    out.flush();
                    final byte[] data = dataStream.toByteArray();
                    BitmapFactory.Options options = new BitmapFactory.Options();
                    // options.inSampleSize = 1;
                    bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options);
                } catch (IOException e) {
                    Log.e(TAG, "Could not load Bitmap from: " + url);
                } finally {
                    closeStream(in);
                    closeStream(out);
                }
                return bitmap;
            }

            private static void closeStream(Closeable stream) {
                if (stream != null) {
                    try {
                        stream.close();
                    } catch (IOException e) {
                        android.util.Log.e(TAG, "Could not close stream", e);
                    }
                }
            }

            private static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] b = new byte[IO_BUFFER_SIZE];
                int read;
                while ((read = in.read(b)) != -1) {
                    out.write(b, 0, read);
                }
            }
            /***** Get image resource from URL (end) *****/

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }
        }

    FullImageActivity:

        String imagePath = getIntent().getStringExtra("fullImage");
        if (imagePath != null && !imagePath.isEmpty()) {
            File imageFile = new File(imagePath);
            if (imageFile.exists()) {
                Bitmap myBitmap = BitmapFactory.decodeFile(imageFile.getAbsolutePath());
                ImageView iv = (ImageView) findViewById(R.id.fullimage);
                iv.setImageBitmap(myBitmap);
            }
        }
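    The stack trace points at the cast in onItemClick: lstView1.getAdapter().getItem(position) is cast to HashMap<String, Object>, but ImageAdapter.getItem returns position, which autoboxes to an Integer, and that is exactly the ClassCastException reported. A minimal fix is to return the backing row:

        @Override
        public Object getItem(int position) {
            // Return the row's data map so callers can cast it to HashMap<String, Object>.
            return MyArr.get(position);
        }

    One further observation, assuming photo holds a URL (it is the value fed to loadBitmap when the list is built): FullImageActivity treats the fullImage extra as a local file path, so new File(imagePath).exists() would be false for a URL and nothing would be displayed. In that case the full-size view needs to download the image, for example with the same loadBitmap helper, rather than calling BitmapFactory.decodeFile.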

    Read the article

< Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >