Search Results

Search found 22961 results on 919 pages for 'memory management'.


  • Oracle Identity Management Connector Overview

    - by Darin Pendergraft
    Oracle Identity Manager (OIM) is a complete Identity Governance system that automates access rights management and provisions IT resources. One important aspect of this system is the Identity Connectors that are used to integrate OIM with external, identity-aware applications. New in OIM 11gR2 PS1 is the Identity Connector Framework (ICF), which is the foundation for both OIM and Oracle Waveset.

    Identity Connectors perform several very important functions:
    - Onboarding accounts from trusted sources such as SAP, Oracle E-Business Suite, and PeopleSoft HCM
    - Managing user lifecycles in various target systems through provisioning and reconciliation operations
    - Synchronizing entitlements from target systems so that they are available in the OIM request catalog
    - Fulfilling access grant and access revoke requests
    - Some connectors may support Role Lifecycle Management
    - Some connectors may support password sync from target to OIM

    The Identity Connectors are broken down into several families:
    - The BMC Remedy Family: BMC Remedy Ticket Management, BMC Remedy User Management
    - The Microsoft Family: Microsoft Active Directory, Microsoft Active Directory Password Sync, Microsoft Exchange
    - The Novell Family: Novell eDirectory, Novell GroupWise
    - The Oracle E-Business Suite Family: Oracle E-Business Employee Reconciliation, Oracle E-Business User Management
    - The PeopleSoft Family: PeopleSoft Employee Reconciliation, PeopleSoft User Management
    - The SAP Family: SAP CUA, SAP Employee Reconciliation, SAP User Management
    - The UNIX Family: UNIX SSH, UNIX Telnet

    As you can see, there are a large number of connectors that support apps from a variety of vendors, enabling OIM to manage your business applications and resources. If you are interested in finding out more, you can get documentation on these connectors on our OTN page at: http://www.oracle.com/technetwork/middleware/id-mgmt/downloads/connectors-101674.html

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today, Wednesday, October 3, 2012:

    CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus, 10:15 a.m. – 11:15 a.m., Moscone West 3008. Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help customers extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.

    CON9494: Sun2Oracle: Identity Management Platform Transformation, 11:45 a.m. – 12:45 p.m., Moscone West 3008. Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

    CON9631: Entitlement-centric Access to SOA and Cloud Services, 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7. How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? In this session, UberEther and Herbalife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter.

    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012, 11:45 a.m. – 12:45 p.m., Moscone West 3003. In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    CON9493: Identity Management and the Cloud, 1:15 p.m. – 2:15 p.m., Moscone West 3008. Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity Management with cloud services and how some are offering identity management as a cloud service.

    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases, 3:30 p.m. – 4:30 p.m., Moscone West 3008. As organizations seek to grant access to broader and more diverse user populations, centrally defined and applied authorization policies become critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

    CON9625: Taking Control of WebCenter Security, 5:00 p.m. – 6:00 p.m., Moscone West 3008. Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. 
Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and product management team. Do take this opportunity to network with your peers and connect with the Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us on @oracleidm on twitter and FaceBook. Use #oow and #idm to join in the conversation.

    Read the article

  • Emgu CV - memory-leaks (memory consumption)

    - by martin pilch
    I am using EmguCV, the OpenCV wrapper for .NET. I am disposing of all created objects, but my app is still using more and more memory (in the release configuration too). I have debugged my app using .NET Memory Profiler and got this result: http://img532.imageshack.us/img532/2503/screenqv.png — the instance count of all other objects oscillates, but the GCHandle instance count keeps increasing until my machine is unusable. The garbage collector does not release the memory (I think). I am using VS 2008 Professional and Win7 Professional 32-bit, both up to date, and the latest stable version of EmguCV. I can post some app code if it will help. Thanks, and sorry for my English. Martin

    Read the article

  • SOA Governance Starts with People and Processes

    - by Jyothi Swaroop
    While we all agree that SOA Governance is about People, Processes and Technology, some experts are of the opinion that SOA Governance begins with People and Processes but needs to be empowered with technology to achieve the best results. Here's an interesting piece from David Linthicum on ebizQ: In the world of SOA, the concept of SOA governance is getting a lot of attention. However, how SOA governance is defined and implemented really depends on the SOA governance vendor who just left the building within most enterprises. Indeed, confusion is a huge issue when considering SOA governance, and the core issues are more about the fundamentals of people and processes, and not about the technology. SOA governance is, simply put, a concept used for activities related to exercising control over services in an SOA, including tracking the services, monitoring the services, and controlling changes made to the services. The trouble comes in when SOA governance vendors attempt to define SOA governance around their technology, all with different approaches to SOA governance. Thus, it's important that those building SOAs within the enterprise take a step back and understand what is really needed to support the concept of SOA governance. The value of SOA governance is pretty simple. Since services make up the foundation of an SOA, and are at their essence the behavior and information from existing systems externalized, it's critical to make sure that those accessing, creating, and changing services do so using a well-controlled and orderly mechanism. Those of you who already have governance in place, typically around enterprise architecture efforts, will be happy to know that SOA governance does not replace those processes, but becomes a mechanism within the larger enterprise governance concept. People and processes are the first things on the list to get under control before you begin to toss technology at this problem. This means establishing an understanding of SOA governance among the team members, including why it's important, who's involved, and the core processes that are to be followed to make SOA governance work. Indeed, the core SOA governance strategy should really be created independent of the technology. The technology will change over the years, but the core processes and discipline should be relatively durable over time.

    Read the article

  • When to unload graphics object from main memory?

    - by piotrek
    I am writing my resource manager, and I am thinking about how it should work for graphics objects (like textures and meshes). The idea is this: I want to load a texture (in pseudocode): Texture t = resMgr.GetTex("image.png"); and GetTex does something like this: 1) load the texture from disk into main memory, 2) create the texture object (load it into GPU memory), 3) unload the texture from main memory. I am unsure about step 3 — do the game engines that you know unload meshes/textures from main memory after loading them into GPU memory?
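    For illustration, here is a minimal C++ sketch of the three steps described above, assuming OpenGL for the GPU upload and the stb_image library for decoding (both libraries are assumptions, not part of the original question); error handling and texture parameters are omitted.

        // Minimal sketch of the load -> upload -> free flow using OpenGL and stb_image.
        #include <string>
        #include <unordered_map>
        #include <GL/gl.h>
        #include "stb_image.h"

        struct Texture { GLuint id = 0; int width = 0; int height = 0; };

        class ResourceManager {
        public:
            Texture GetTex(const std::string& path) {
                auto it = cache_.find(path);
                if (it != cache_.end()) return it->second;   // already resident on the GPU

                // 1. Load the pixels from disk into main memory.
                int w = 0, h = 0, channels = 0;
                unsigned char* pixels = stbi_load(path.c_str(), &w, &h, &channels, 4);

                // 2. Create the GPU texture object and copy the pixels into GPU memory.
                Texture tex;
                glGenTextures(1, &tex.id);
                glBindTexture(GL_TEXTURE_2D, tex.id);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
                tex.width = w;
                tex.height = h;

                // 3. The driver now owns its own copy, so the CPU-side buffer can be freed.
                stbi_image_free(pixels);

                cache_[path] = tex;
                return tex;
            }

        private:
            std::unordered_map<std::string, Texture> cache_;
        };

    In this sketch the CPU copy is dropped immediately after the upload, which is the pattern step 3 describes; an engine that needs the pixels again later (for example after a device loss) would have to reload them from disk.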

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are: We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems) There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow? My questions are: Is it normal for a group of this size not to have source control? I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above? Are there any more reasons for source control that I could add to my arsenal? I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built into it. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program. It has no memory leaks at all. Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use grows after every load/release cycle, e.g.:
    1,456 KB used after starting my program with no dataset.
    218,455 KB used after loading a large dataset.
    71,994 KB after clearing the dataset completely. If I exit at this point (or at any point in this example), no memory leaks are reported.
    271,905 KB used after loading the same dataset again.
    125,443 KB after clearing the dataset completely.
    325,519 KB used after loading the same dataset again.
    179,059 KB after clearing the dataset completely.
    378,752 KB used after loading the same dataset again.
    It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening. I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it has some memory on hand when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?

    Read the article

  • Windows Physical Direct Memory Mapping

    - by chrisjleaf
    I'm a bit disappointed that there is almost no discussion of this no matter where I look, so I guess I'll have to ask. I'm writing a cross-platform memory benchmarking application which requires direct physical address mapping rather than virtual addressing. EDIT: The solution would look something like the Linux/Unix system calls: int fd = open("/dev/mem", O_RDONLY); mmap(NULL, len, PROT_READ, MAP_SHARED, fd, PHYSICAL_ADDRESS_OFFSET); which requires the kernel to either give you a virtual page mapping to the desired physical address or report that it failed. This does require supervisor privileges, but that is OK. I have seen a lot of information about shared memory and memory-mapped files, but all of these reside on disk and are thus not really useful when I'm trying to make a system-dependent read. It is very similar to writing an I/O driver, although I do not need write permission to the physical address. This site gives an example of how to do it at the driver level using the Windows Driver Kit: NT Insider: Sharing Memory between drivers and applications. That solution would probably require Visual Studio, which I currently do not have access to. (I have downloaded the WDK API, but it complained about my use of GCC for Windows.) I'm traditionally a Linux programmer, so I'm hoping there might be something really simple I'm missing. Thanks in advance if you know something I don't!
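    For reference, a minimal, self-contained sketch of the Linux/Unix call sequence quoted above (the physical address and length below are placeholders, and the call only succeeds with sufficient privileges and a kernel that permits /dev/mem access):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            const off_t  PHYS_ADDR = 0x100000;   // hypothetical, page-aligned physical address
            const size_t LEN       = 4096;

            int fd = open("/dev/mem", O_RDONLY);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            void* p = mmap(nullptr, LEN, PROT_READ, MAP_SHARED, fd, PHYS_ADDR);
            if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            // ... read the physical range through p ...

            munmap(p, LEN);
            close(fd);
            return 0;
        }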

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today’s market has moved from network-centric to consumer-centric and focuses primarily on the customer experience. This has resulted in several product management challenges for service providers, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation, and increasing audit and security concerns. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps launch next-generation products rapidly in the context of the Telecommunications industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe and the ability to effect changes in price plans faster, without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has grown significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever-increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. This is a roadblock on the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., moving from siloed networks to a converged network. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus on their Product Management Transformation Journey in the areas of:
    ·       Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    ·       Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    ·       Management of effective operational separation to comply with regulatory bodies
    ·       Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    ·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    ·       They maintain relationships/links between the different elements of the entire product definition
    ·       Telco PLM packages specialize in next-generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    ·       They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order-oriented and transactional
    ·       The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Mastering) Approach

    This approach is implemented across verticals such as aerospace and automotive. 
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution. Architectural changes must be made in the existing business landscape of applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published. Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product ID, description, product hierarchy, basic price plans and simple product design rules) is centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems. For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the Enterprise/Central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad-play and triple-play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play. 
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data; the manual effort caused errors, cost and time over-runs
    ·       There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    ·       Product data had re-usability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    ·       The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    ·       Designers were constrained by the existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad-play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchies for the new offerings were created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through SOA-based web services. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled using the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next-generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero-touch automation and rapid product launch.

    Read the article

  • How important is patch management?

    - by James Hill
    Problem: I'm trying to sell the idea of organizational patch/update management and antivirus management to my superiors. Thus far, my proposition has been met with two responses: (1) we haven't had any issues yet (I would add: that we know of), and (2) we just don't think it's that big of a risk. Question: Are there any resources available that can help me sell this idea? I've been told that 55-85% of all security-related issues can be resolved by proper antivirus and patch/update management, but the individual who told me couldn't substantiate the claim. Can it be substantiated? Additional information: 1/5 of our computers (the ones in the building) have Windows Update turned on by default and antivirus installed. 4/5 of our computers are outside the corporate network, and their users currently have full control over antivirus and Windows updates (I know this is an issue; one step at a time).

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ that can be run either from the command line or as an Eclipse plug-in, on Windows and Linux? I would like it to be easy to use, and preferably one that doesn't override new, delete, malloc() or free(). Something like GDB if it's going to be command-line, but I don't remember that being used for detecting memory leaks. If there is a unit testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all the requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up, by "override" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code; if it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use such a tool there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and found it satisfactory. EDIT 2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the standard C++ libraries.

    Read the article

  • How is an array stored in memory?

    - by George
    In an effort to delve deeper into how memory is allocated and stored, I have written an application that can scan the memory address space, find a value, and write out a new value. I developed a sample application with the end goal of being able to programmatically locate my array and overwrite it with a new sequence of numbers. In this situation, I created a single-dimensional array with 5 elements, e.g. int[] array = new int[] {8,7,6,5,4}; I ran my application and searched for the sequence of the five numbers above, looking for any value that fell between 4 and 8, for a total of 5 numbers in a row. Unfortunately, the sequential numbers in my array matched hundreds of results, as the numbers 4 through 8, in no particular sequence, happened to be next to each other in memory in many situations. Is there any way to tell that a set of numbers within memory represents an array, and not simply integers that happen to be next to each other? Is there any way of knowing that, if I find a certain value, the matching values following it belong to an array? I would assume that when I declare int[] array, it points at the first address of my array, which would provide some kind of metadata about what exists in the array, e.g.:
    0x123456789         metadata, 5 32-bit integers
    0x123456789 + 32    "8"
    0x123456789 + 64    "7"
    0x123456789 + 96    "6"
    0x123456789 + 128   "5"
    0x123456789 + 160   "4"
    Am I way off base?
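    The exact layout in a .NET process is runtime-specific (a managed int[] does carry an object header and a length field ahead of the elements), but a quick C++ analogue shows why the raw element values alone are indistinguishable from loose integers — any metadata lives outside the run of elements, not between them. This is only an illustrative sketch, not the questioner's C# scenario:

        #include <cstdio>

        int main() {
            int array[] = {8, 7, 6, 5, 4};

            // Elements sit exactly sizeof(int) bytes apart, with no per-element metadata.
            for (int i = 0; i < 5; ++i)
                std::printf("&array[%d] = %p  value = %d\n",
                            i, static_cast<void*>(&array[i]), array[i]);

            // For a raw C++ array the "length" exists only in the type system at compile time.
            std::printf("sizeof(array) = %zu bytes (%zu elements)\n",
                        sizeof(array), sizeof(array) / sizeof(array[0]));
            return 0;
        }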

    Read the article

  • Firefox consumes too much memory

    - by Vivek
    I have Firefox version 11.0 and am running Ubuntu 11.10. Firefox takes up to 850 MB of RAM with only six or seven tabs open, and all the tabs loaded with lightweight websites only. I wonder why a browser would consume so much memory. It keeps increasing its memory consumption over time. I have 3 GB of RAM, and most of the time Firefox consumes up to 30% of my memory. How do I fix this? EDIT: The output of the command sudo iotop -oPa, as asked for by @Jippie

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little more memory-hungry than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of Memory exception while working with large volumes of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL queries.

    Check the current OPP JVM heap size using the following SQL query:
    SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES
         where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                             where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    DEVELOPER_PARAMETERS
    -----------------------------------------------------
    J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

    Set the JVM heap size using the following SQL query:
    SQL> update FND_CP_SERVICES
         set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m'
         where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                             where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    SQL> commit;

    You need to restart the Concurrent Manager to make the change effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM lineorder
    WHERE lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format rather than via the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used in an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

    column display_name format a30
    SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM lineorder
    WHERE lo_shipmode = 5;

    SELECT display_name FROM v$statname
    WHERE display_name IN ('IM scan CUs columns accessed',
                           'IM scan segments minmax eligible',
                           'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows; The application takes in a “dataset” that consists of numerous text files containing related data An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files which basically indicating key information on the files to show how they are related as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem Given the fact the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following; perform all the validation whilst the data was in memory relate it using an object model Start DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have; x columns by y rows to write, where x can by 256+ and rows are often into the tens of thousands. Once all the data is written without error (can take several minutes for large datasets), Commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seen datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though. Especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). 
    Use of staging tables to get data into the database, and only performing the transaction to copy data from the staging area to the actual destination tables. Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on the topic of applications that are affected more by memory performance than by processor performance. I've got a lot written regarding the gap between the two; however, I can't seem to find anything about the applications that might be affected more by memory performance than by processor speed. I suppose these are applications that make a large number of memory references, but I have no idea what kind of applications would make enough references for it to stand out. Perhaps databases? Can you please give me any pointers on how to proceed, or some links to papers? I'm really stuck.
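    As a toy illustration of the kind of workload the question is about, here is a hedged C++ sketch contrasting a memory-bound access pattern (chasing a random cycle of indices through an array far larger than the caches) with a compute-bound loop over a working set that fits in registers; on typical hardware the first is dominated by memory latency and the second by the processor. The array size and iteration counts are arbitrary choices, not taken from the original post.

        #include <algorithm>
        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <numeric>
        #include <random>
        #include <vector>

        int main() {
            using clock = std::chrono::steady_clock;
            const std::size_t N = 1 << 24;                  // ~16M entries, well beyond cache

            // Sattolo's algorithm: a single N-cycle, so the chase visits every slot once.
            std::vector<std::size_t> next(N);
            std::iota(next.begin(), next.end(), 0);
            std::mt19937_64 rng{42};
            for (std::size_t i = N - 1; i > 0; --i) {
                std::uniform_int_distribution<std::size_t> d(0, i - 1);
                std::swap(next[i], next[d(rng)]);
            }

            // Memory-bound: each step is a dependent load that usually misses the cache.
            auto t0 = clock::now();
            std::size_t pos = 0;
            for (std::size_t i = 0; i < N; ++i) pos = next[pos];
            auto t1 = clock::now();

            // Compute-bound: the same number of iterations, but the data stays in registers.
            double x = 1.0001;
            for (std::size_t i = 0; i < N; ++i) x = x * 1.0000001 + 0.5;
            auto t2 = clock::now();

            auto ms = [](auto a, auto b) {
                return (long long)std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
            };
            std::printf("pointer chase: %lld ms (pos=%zu)\n", ms(t0, t1), pos);
            std::printf("arithmetic:    %lld ms (x=%f)\n", ms(t1, t2), x);
            return 0;
        }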

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging out, with all the programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and gnome-system-monitor open as user and as root. I notice that console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice would be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.

    Read the article

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object. MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow(); //has a byte[] column named BackgroundImageByteArray using (MemoryStream stream = new MemoryStream()) { this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp); myDataRow.BackgroundImageByteArray = stream.ToArray(); } Everything works fine, there is no out of memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads saved data, the following code throws an Out of Memory Exception: using (MemoryStream stream = new MemoryStream(myDataRow.BackGroundImageByteArray)) { this.Panel.BackgroundImage = Image.FromStream(stream); } The streams are the same length. I don't understand how one throws an out of memory exception and the other doesn't. How can I load this bitmap? P.S. I've also tried using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length)) { stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length); //throw OoM exception here. }

    Read the article

  • Memory allocation in case of static variables

    - by eSKay
    I am always confused about static variables, and the way memory allocation happens for them. For example: int a = 1; const int b = 2; static const int c = 3; int foo(int &arg){ arg++; return arg; } How is the memory allocated for a,b and c? What is the difference (in terms of memory) if I call foo(a), foo(b) and foo(c)?
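    A small experiment along the lines of the question may help make this concrete. This is only a hedged C++ sketch: the exact placement of each object is implementation- and platform-specific, so the printed addresses merely illustrate that the globals are allocated once for the whole run while locals live on the stack. Note also that, as written in the question, foo(b) and foo(c) will not compile, because foo takes a non-const int& and b and c are const.

        #include <cstdio>

        int a = 1;                 // global, initialized: typically the .data segment
        const int b = 2;           // const global: typically read-only data (.rodata)
        static const int c = 3;    // like b, but with internal linkage (this file only)

        int foo(int &arg) { arg++; return arg; }

        int main() {
            int local = 4;         // automatic storage: lives in main's stack frame

            // Addresses are implementation-specific; the point is the different regions.
            std::printf("&a     = %p\n", static_cast<const void*>(&a));
            std::printf("&b     = %p\n", static_cast<const void*>(&b));
            std::printf("&c     = %p\n", static_cast<const void*>(&c));
            std::printf("&local = %p\n", static_cast<const void*>(&local));

            return foo(a);         // foo(b) / foo(c) would be compile errors: b and c are const
        }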

    Read the article

  • Is there an NSCFTimer memory leak?

    - by mystify
    I tracked down a memory leak with Instruments. I always end up with the information that the responsible library is Foundation. When I track that down in my code, I end up here, but there's nothing wrong with my memory management: - (void)setupTimer { // stop timer if still there [self stopAnimationTimer]; NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.2 target:self selector:@selector(step:) userInfo:nil repeats:YES]; self.animationTimer = timer; // retain property, -release in -dealloc method } The animationTimer property retains the timer, and in -dealloc I -release it. So does that look like a framework bug? I checked with iPhone OS 3.0 and 3.1; both have this problem every time I use NSTimer like this. Any idea what else could be the problem? (My memory leak scan interval was 0.1 seconds, but it's the same with 5 seconds.)

    Read the article

  • UIWebView memory management

    - by wolfrevo
    Hello, I have a problem with memory management. I am developing an application that makes heavy use of UIWebView. This app dynamically generates lots of UIWebViews while loading content from my server. Some of these UIWebViews are quite large and contain a lot of pictures. If I use Instruments to detect leaks, I do not detect any. However, lots of objects are allocated, and I suspect that has to do with the UIWebViews. When the web views are released because they are no longer needed, it appears that not all of the memory is released. I mean, after a request to my server the app creates a UITableView and many web views (Instruments says about 8 MB). When the user taps back, all of them are released, but memory usage only drops by about 2-3 MB, and after 5-10 minutes of using the app it crashes. Am I missing something? Does anyone know what could be happening? Thank you!

    Read the article

  • Does it take time to deallocate memory?

    - by jm1234567890
    I have a C++ program which, during execution, will allocate about 3-8 GB of memory to store a hash table (I use tr1/unordered_map) and various other data structures. However, at the end of execution, there is a long pause before returning to the shell. For example, at the very end of my main function I have std::cout << "End of execution" << std::endl; but the execution of my program goes something like this:
    $ ./program
    do stuff...
    End of execution
    [long pause of maybe 2 min]
    $   -- returns to shell
    Is this expected behavior, or am I doing something wrong? I'm guessing that the program is deallocating the memory at the end. But commercial applications which use large amounts of memory (such as Photoshop) do not exhibit this pause when you close them. Please advise :)

    Read the article

  • How does VirtualBox's memory usage work?

    - by DrFredEdison
    I've been running several VMs with VirtualBox and looking at the memory usage reported from various perspectives, and I'm having trouble figuring out how much memory my VMs actually use. Here is an example: I have a VM running Windows 7 (as the guest OS) on my Windows XP host machine.
    The host machine has 3 GB of RAM.
    The guest VM is set up to have a base memory of 1 GB.
    If I run Task Manager on the guest OS, I see memory usage of 430 MB.
    If I run Task Manager on the host OS, I see 3 processes that seem to belong to VirtualBox: VirtualBox.exe (1), using 60 MB of memory (this one seems to have the most CPU usage); VirtualBox.exe (2), using 20 MB of memory; and VBoxSvc.exe, using 11.5 MB of memory.
    While running the VM, the host OS's memory usage is about 2 GB.
    When I shut down the VM, the host OS's memory usage goes back down to about 900 MB.
    So clearly, there are some huge differences here. I really don't understand how the guest OS can use 400+ MB while the host OS only shows about 75 MB allocated to the VM. Are there other processes used by VirtualBox that aren't as obviously named? Also, I'd like to know: if I run a machine with 1 GB, is that going to take 1 GB away from my host OS, or only the amount of memory the guest machine is currently using? Update: Someone expressed distrust of my memory usage numbers, and I'm not sure whether that distrust was directed at me or at my host OS's Task Manager's reporting (which is perhaps the culprit), but for any skeptics, here is a screenshot of those processes on the host machine:

    Read the article

  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (Main Windowless) Ubuntu system for my uses I will detail questions below, I understand this might be the wrong place to ask these questions. If so, my apologies and I thank you so much for your patience. Thanks to all the volenteers that have helped me learn ubuntu over the years (Since 5.10) This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have wrote this up in a format for easy navigation to important points Hopefully to less annoy your eyes. You're welcome :) or i'm sorry i annoy you. :( If you would be so kind, Please format answers as follows: question 1: _ _ _ _ _ or question 1-a: _ _ _ _ _ If you want to simply link me to relevant information, rather than type up something really detailed; that would be more than awesome! Memory Specific Questions Goal: Maximizing memory bandwith to better perform in Virtualization, and Large file compression. (Possible conflict?) Ganged vs Unganged "which is better?"** is relative, i know. But what about ganged vs unganged - With or without Bank/channel interleaving? a: Speculation - If i understand correctly, "channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half duplex operation.(probably wrong) but wouldn't ganged channels make this irrelevant? Memory Interleaving(bank). Does it have a down side? Does it require a ratio of clocks? (If I run 4x4gig ddr3) a. If im reading correctly(trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation. b. However it seems to me that it has to be: divisible by fractions of a master clock? So if i run memory at 1333mhz, then the mean between 2 (physical) banks would operate every (roughly) 600Mhz? Warning! Possibly utter nonsense: (1333/2 interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2x channels@4) c. which makes me wonder if there would be left over clock cycles the system would have to... "truncate/balance" or something? But I'm certain theres a feature somewhere i don't understand. Virtualization Questions AMD-V - Option of IOMMU Turned it on, why do i have extra option of "64MB"? If IOMMU is on, but "64MB" is "disabled", Is it on? (have scoured google, I still dont know) a. I think i understand that its supposed to (kind of) "set aside" a part of ram to act as a faster interactive zone for "stuff" (usb, Graphics, and... what?) b. I am using Nvidia graphics on AMD (Used kernel option "iommu=pt iommu=1, pt "passthrough"? No idea what they do, found it on google to solve boot up issue) c. Will this option help me use low latency sound hardware, like my midi keyboard? Can you recommend any additional tweaks? a. sysctl settings? b. swap settings? Grats, youve reached the end. Thanks for Reading.

    Read the article
