Search Results

Search found 12988 results on 520 pages for 'performance'.


  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request, the security details of the user must be checked against the area/controller/action being accessed to see if the user is allowed to view it. The security information is stored in the database across tables such as User, Permission, UserPermission, Action, and ActionPermission. A "Permission" is a token applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table), they have the token and can therefore access the action.

    I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting the database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem I constantly run into is that I'm using lazy loading and dynamic proxies with Entity Framework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated; for example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify).

    One thing I know is that an Entity Framework DbContext is a unit of work that performs 1st-level caching: for the duration of the unit of work, everything that is accessed is cached in memory. So I want to create a read-only DbContext that exists indefinitely and is used only to read permission data. Upon testing, this worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals, or simply leave it to refresh when the application pool is recycled. Basically it behaves like a cache. (The rest of the application will interact with other per-request contexts as normal.) Is there any disadvantage to this approach? Could I be doing something different?
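
    Stripped of the Entity Framework specifics, the design described here is a long-lived, read-only snapshot that is rebuilt on a schedule rather than queried per request. Below is a minimal sketch of that pattern in Python, where load_permissions is a hypothetical loader standing in for the fully populated database query:

        import threading
        import time

        class ReadOnlySnapshotCache:
            """Long-lived read-only snapshot, rebuilt wholesale at an interval.

            All reads are served from memory; the backing store is only hit
            when the snapshot is stale, mirroring the 'static read-only unit
            of work' idea from the question.
            """

            def __init__(self, loader, refresh_seconds=300):
                self._loader = loader                  # hypothetical: returns fully populated data
                self._refresh_seconds = refresh_seconds
                self._lock = threading.Lock()
                self._snapshot = loader()
                self._loaded_at = time.monotonic()

            def get(self):
                if time.monotonic() - self._loaded_at > self._refresh_seconds:
                    with self._lock:                   # one thread rebuilds; others keep reading
                        if time.monotonic() - self._loaded_at > self._refresh_seconds:
                            self._snapshot = self._loader()
                            self._loaded_at = time.monotonic()
                return self._snapshot

        def load_permissions():
            # Stand-in for the real query; would return user -> permissions data.
            return {"alice": {"CanEditOrders"}}

        cache = ReadOnlySnapshotCache(load_permissions, refresh_seconds=600)
        print("CanEditOrders" in cache.get()["alice"])   # served from memory

    The double check inside the lock keeps concurrent requests from rebuilding the snapshot twice, and the interval-based refresh matches the "refresh at certain intervals or on app-pool recycle" behaviour described above.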

    Read the article

  • Should I return iterators or more sophisticated objects?

    - by Erik
    Say I have a function that creates a list of objects. If I want to return an iterator, I'll have to return iter(a_list). Should I do this, or just return the list as it is? My motivation for returning an iterator is that this would keep the interface smaller -- what kind of container I create to collect the objects is essentially an implementation detail. On the other hand, the user of my function may have to recreate the same container from the iterator, which would be wasteful and bad for performance.
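
    For illustration, a sketch of the two interfaces under discussion (the make_objects_* factories are hypothetical): returning the list exposes the container but lets callers index and re-iterate for free, while returning an iterator hides the container at the cost of the caller possibly rebuilding one:

        def make_objects_list(n):
            # The concrete container is part of the interface: callers can
            # index, take len(), and iterate repeatedly.
            return [object() for _ in range(n)]

        def make_objects_iter(n):
            # A generator hides the container, but it is single-pass: a caller
            # who needs a list must materialize it, paying the copy once.
            return (object() for _ in range(n))

        objs = make_objects_list(3)
        print(len(objs))                 # 3 -- it's a real list

        it = make_objects_iter(3)
        materialized = list(it)          # caller rebuilds the container
        print(len(materialized))         # 3
        print(list(it))                  # [] -- the iterator is exhausted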

    Read the article

  • SQL Saturday #300 - Kansas City

    SQL Saturday is coming to Kansas City on September 13, 2014. Our very own Steve Jones will be presenting, alongside other big names like Glenn Berry, Kathi Kellenberger, Sean and Jen McCown, Jason Strate, and many more. Register while space is available.

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity, and variety; mobile and social computing need to operate on such data as well. Replicating and transferring large swaths of data is one challenge in the field of data integration, but aggregating smaller chunks of data from a variety of sources presents an even more interesting challenge for the industry. Over the past few decades, technology trends have focused on best user experience, operating systems, high-performance computing, high-performance web sites, analysis of warehouse data, service-oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can turn dark data into useful data in a single day. Useful data can be characterized by contextual, personalized, and on-time delivery.

    In most cases, data from a single source does not complete the picture. Data has to be combined and computed from various sources, where it may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how it is retrieved and computed; they are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files, and XML). The ODSI engine is built on XQuery, so an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency in repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • What are the differences between Bigloo and ECL?

    - by Pubby
    I've been looking to embed Lisp in some C++ code. Two options I'm interested in are Bigloo Scheme and ECL. Reading through the docs, they seem to support a very similar feature set. Obviously Bigloo is Scheme and ECL is Common Lisp, but what other differences do they have? In particular I'm interested in the following criteria: ease of embedding (for C++, not just C), performance, style of coding, size, and tail call support. I'm targeting this question towards someone who has used both.

    Read the article

  • The DevTouch Pro: New Mobile Application Development Tool Saves Developers and Managers Time and Money

    Montreal – 1 December 2010 – Amyuni Technologies, a leading vendor of high-performance development tools, announced today the release of the DevTouch Pro, a revolutionary software deployment tool designed for mobile application developers. The DevTouch Pro is a color touchscreen tablet designed to provide mobile application developers and product managers with a customizable development, testing, and deployment platform.

    Read the article

  • A Small Blog About Huge Pages

    - by rickramsey
    Video Interview: What Are Linux Huge Pages?, by Ed Whalen, Oracle ACE
    Blog: There's Been a Change In How Huge Pages Are Allocated, by Tanel Poder, Oracle ACE Director
    Blog: Performance Issues with Transparent Huge Pages (thank you, Bjoern Rost!)
    Web: About the Car, by Smart Ridez LLC, of Woodland Hills, California
    - Rick

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems
    There are key characteristics of memory:
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of transfer – the number of bits read out of or written into memory at a time
    - Access method – sequential, direct, random or associative
    From a user's perspective, the two most important characteristics of memory are capacity and performance (access time, memory cycle time, transfer rate). The trade-offs for memory happen along three axes:
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time
    This leads to a tiered approach to memory. As one goes down the hierarchy, the following occur:
    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor
    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 above apply. A variety of technologies allow us to accomplish this, so it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.
    A portion of main memory can be used as a buffer to temporarily hold data that is to be written out to disk. This is sometimes referred to as a disk cache and improves performance in two ways:
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers, which improves disk performance and minimizes processor involvement.
    - Some data marked for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles
    Cache memory is substantially faster than main memory. A caching system works as follows: when the processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if not, a block of main memory consisting of a fixed number of words is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design
    While there are a large number of cache implementations, a few basic design elements serve to classify and differentiate cache architectures: cache addresses, cache size, mapping function, replacement algorithm, write policy, line size, and number of caches.

    Cache Addresses – Almost all non-embedded processors support virtual memory, which in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory. The disadvantage of a virtual (logical) cache is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size – We would like the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function – Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:
    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – allows each main memory block to be loaded into any line of the cache
    - Set associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148–154).

    Replacement Algorithm – For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache is replaced by a new block. There are four common approaches: LRU (least recently used), FIFO (first in, first out), LFU (least frequently used), and random selection.

    Write Policy – When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, it can simply be discarded. If a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including:
    - Write through – all writes to the cache are done to main memory as well, at the point of the change
    - Write back – when a block is replaced, it is written back to main memory if its dirty bit is set
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size – When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases further, more useful data are brought into the cache, but the hit ratio will begin to decrease as the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no single approach is judged to be best in all circumstances.

    Pentium 4 and ARM cache organizations
    The processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources
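
    As a concrete illustration of the direct mapping function above, here is a small Python sketch (the cache geometry is an arbitrary assumption) that splits an address into tag, line, and word fields and simulates hits and misses:

        # Direct-mapped cache simulation: each memory block maps to exactly one
        # line, line = (block address) mod (number of lines). Sizes are illustrative.
        LINE_COUNT = 16        # number of cache lines
        WORDS_PER_BLOCK = 4    # block (line) size in words

        cache = [None] * LINE_COUNT   # each entry holds the tag of the cached block

        def lookup(address):
            """Return True on a cache hit, loading the block on a miss."""
            block = address // WORDS_PER_BLOCK   # which memory block the word lives in
            line = block % LINE_COUNT            # the only line this block may occupy
            tag = block // LINE_COUNT            # distinguishes blocks sharing the line
            if cache[line] == tag:
                return True                      # hit: word served from the cache
            cache[line] = tag                    # miss: fetch block, evicting the old one
            return False

        # Locality of reference in action: a second access near the first one hits.
        print(lookup(100))                                 # False - cold miss loads the block
        print(lookup(101))                                 # True  - same block, same line
        print(lookup(100 + WORDS_PER_BLOCK * LINE_COUNT))  # False - conflicting block evicts it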

    Read the article

  • Oracle E-Business Suite 12.2.4 is Available for Download!

    - by Brian Kerr - EBS Support Engineer - Oracle
    This Release Update Pack (RUP) for the EBS 12.2 release codeline includes new features as well as statutory and regulatory updates and error corrections for stability, performance, and security. It is a consolidated, suite-wide patch set: Release 12.2.4 is cumulative, including both new updates and updates previously made available in one-off patches for prior 12.2 releases. The details for downloading and applying the Oracle E-Business Suite 12.2.4 Release Update Pack can be found in the Oracle E-Business Suite Release 12.2.4 Readme (Doc ID 1617458.1).

    Read the article

  • Is there going to be Twinview ( or alternative) implemented for nouveau ?

    - by lisak
    As I've had serious issues with the nvidia driver regarding the performance of basic X window operations (window moving, resizing, scrolling), I switched to the nouveau driver. But I lost the dual-screen setup I previously had thanks to nvidia's TwinView feature. I'd rather have fluid X than dual screen, but having dual screen would be nice, so I'm wondering whether there is already a nouveau alternative to nvidia's TwinView, or whether one is going to be implemented.

    Read the article

  • SAP wants to transform the traditional database approach with SAP NetWeaver BW and a development platform for HANA

    SAP wants to transform the traditional approach to databases with SAP NetWeaver BW and a new development platform for HANA. "The latest SAP HANA innovations produce supercharged data warehouse environments that deliver customer data in real time. SAP HANA can also power an online network, and offers an open platform to developers." With these words, SAP announced that the SAP NetWeaver Business Warehouse component (SAP NetWeaver BW) will be powered by the SAP HANA platform. HANA (High-Performance Analytic Appliance) is meant to considerably improve the performance of queries...

    Read the article

  • Oracle’s Visual CRM Solution

    Visual CRM adds the powerful visualization and document-centric collaboration capabilities of Oracle's AutoVue to Oracle's best-in-class CRM solutions. By introducing a visual aspect to call center, field service, and ordering processes, Visual CRM helps teams provide faster responses to customer issues, optimize field service performance, and shorten ordering cycles while minimizing order errors. With Visual CRM, organizations can achieve improved customer service levels and field service operations, which help drive margin, top-line revenue, and customer retention.

    Read the article

  • Sybase IQ 15.4 announced: Sybase bets on Hadoop and MapReduce, and challenges its parent company?

    Sybase IQ 15.4 announced for the end of November: Sybase wants to push the limits of Big Data with Hadoop and MapReduce. While SAP's annual gathering, SAPPHIRE NOW, was in full swing, the German vendor's subsidiary Sybase independently announced the release of Sybase IQ 15.4, its high-performance column-oriented analytics server for managing "big data". Meanwhile, SAP is promoting HANA, its new in-memory data caching technology ("In-Memory Computing") for accelerating processing speed...

    Read the article

  • Did Ubuntu 10.04 Achieve Its Ten Second Boot Goal?

    Phoronix: "Canonical expressed their plans to achieve a ten-second boot time in June of last year for Ubuntu 10.04 LTS, with their reference system being a Dell Mini 9 netbook. In February, we last checked on Ubuntu's boot performance and found it close, but not quite there yet..."

    Read the article

  • Optimizing Transaction Log Throughput

    As a DBA, it is vital to manage transaction log growth explicitly, rather than let SQL Server auto-growth events "manage" it for you. If you undersize the log, and then let SQL Server auto-grow it in small increments, you'll end up with a very fragmented log. Examples in the article, extracted from SQL Server Transaction Log Management by Tony Davis and Gail Shaw, demonstrate how this can have a significant impact on the performance of any SQL Server operations that need to read the log.
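
    To make the fragmentation effect concrete, here is a back-of-the-envelope sketch in Python. The VLFs-per-growth rule used is the commonly documented pre-SQL Server 2014 behaviour and is an assumption for illustration, not taken from the article; exact behaviour varies by version:

        import math

        # Each log growth event adds a handful of virtual log files (VLFs).
        # Assumed rule (pre-2014): <=64MB -> 4 VLFs, <=1GB -> 8, >1GB -> 16.
        def vlfs_per_growth(growth_mb):
            if growth_mb <= 64:
                return 4
            if growth_mb <= 1024:
                return 8
            return 16

        def total_vlfs(initial_mb, target_mb, growth_mb):
            # VLFs from the initial allocation plus VLFs from every growth event.
            growths = math.ceil((target_mb - initial_mb) / growth_mb)
            return vlfs_per_growth(initial_mb) + growths * vlfs_per_growth(growth_mb)

        # Undersized log grown in 10MB steps vs. sized up front in 8GB chunks:
        print(total_vlfs(10, 50_000, 10))        # 20000 VLFs - heavily fragmented
        print(total_vlfs(8_192, 50_000, 8_192))  # 112 VLFs - healthy

    The orders-of-magnitude gap in VLF counts is what slows down operations that must read the log, which is why presizing the log beats relying on small auto-growth increments.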

    Read the article

  • Speaking About SQL Server

    - by AllenMWhite
    There's a lot of excitement in the SQL Server world right now, with the RTM (Release to Manufacturing) release of SQL Server 2012 , and the availability of SQL Server Data Tools (SSDT) . My personal speaking schedule has exploded as well. Just this past Saturday I presented a session called Gather SQL Server Performance Data with PowerShell . There are a lot of events coming up, and I hope to see you at one or more of them. Here's a list of what's scheduled so far: First, I'll be presenting a session...(read more)

    Read the article

  • An innovative architecture to develop business web forms (3) - Configure GridView

    This is the third article in a series introducing an innovative architecture for developing web forms in enterprise software that offers better performance, productivity, configurability, and maintainability than writing ASPX/MVC code directly. The article shows how to configure a GridView for search results...

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran can handle these "multi-client" transactions at speed with no loss of ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG: highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, see http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • DBA Reporting Presentation - Cambridge UG

    - by NeilHambly
    I'm now able to Report (sorry for the pun!) on the DBA Reporting presentation I gave at the user group on 25th November at the Red Gate offices in Cambridge. I have attached the presentation in PDF format for you all to replay and view if you weren't able to attend. Here are a few links you may also want to check out on some of the products we discussed, various ones like SQL NEXUS / DIAG / PAL / Internals Viewer http://www.codeplex.com/ and the SQL Server 2005 Performance Dashboard Reports http://www...(read more)

    Read the article

  • Efficient SQL Server Indexing by Design

    Having a good set of indexes on your SQL Server database is critical to performance. Efficient indexes don't happen by accident; they are designed to be efficient. Greg Larsen discusses whether primary keys should be clustered, when to use filtered indexes and what to consider when using the Fill Factor.

    Read the article

  • ISV Exastack program: IBIS, Performix, Cardtek

    - by Javier Puerta
    Impact Business Information Solutions (IBIS) accelerates insights for Health Sciences decision-makers, achieving new levels of productivity using Oracle's extreme-performance system, Oracle Exadata Database Machine. Read more. Perfomix Inc achieves Oracle Exadata Optimized status. Read more. Cardtek Group Company SmartSoft's payment processing solution achieves Oracle Exadata Optimized status. Read more.

    Read the article

  • Wait, Chrome Dev Tools could do THAT?

    Your browser is one of the most richly instrumented development platforms available -- you may just not realize it yet. In this episode we take a whirlwind tour of how to analyze network performance and the rendering and layout pipeline, as well as how to detect memory leaks in your JavaScript code and use audits and extensions to build faster and better apps! From: GoogleDevelopers. Time: 33:40.

    Read the article

  • Gain Visibility

    This Industry AppsCast discusses the importance of visibility across all projects enterprise-wide, and how Oracle's Primavera PPM solutions provide transparency into project status and performance across all projects in your portfolio.

    Read the article
