Search Results

Search found 9117 results on 365 pages for 'systems analysis'.


  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems
    The key characteristics of memory are:
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of transfer – the number of bits read out of or written into memory at a time
    - Access method – sequential, direct, random or associative
    From a user's perspective, the two most important characteristics of memory are capacity and performance (access time, memory cycle time, transfer rate). The trade-offs for memory happen along three axes:
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time
    This leads people to use a tiered approach to memory. As one goes down the hierarchy, the following occur:
    1. Decreasing cost per bit
    2. Increasing capacity
    3. Increasing access time
    4. Decreasing frequency of access of the memory by the processor
    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies allow us to accomplish this, so it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above (a small numeric sketch of the resulting average access time follows this section).
    A portion of main memory can also be used as a buffer to temporarily hold data that is to be written out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways:
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers. This improves disk performance and minimizes processor involvement.
    - Some data destined for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.
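    To make the two-level argument concrete, the average access time can be computed directly from the hit ratio. A minimal sketch in C (the 10 ns cache / 100 ns main memory figures are assumed, purely for illustration):

        #include <stdio.h>

        /* Average access time of a two-level memory:
           Ts = H*T1 + (1 - H)*(T1 + T2),
           where H is the hit ratio of the faster level, T1 its access
           time, and T2 the access time of the slower level. */
        static double avg_access_ns(double hit_ratio, double t1_ns, double t2_ns)
        {
            return hit_ratio * t1_ns + (1.0 - hit_ratio) * (t1_ns + t2_ns);
        }

        int main(void)
        {
            /* A 10 ns cache in front of 100 ns main memory with a 95% hit
               ratio averages 15 ns -- much closer to the cache than to
               main memory. */
            printf("%.1f ns\n", avg_access_ns(0.95, 10.0, 100.0));
            return 0;
        }

    The hierarchy only pays off while the hit ratio stays high, which is exactly what the locality conditions above are meant to guarantee.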
    Cache Memory Principles
    Cache memory is substantially faster than main memory. A caching system works as follows: when the processor attempts to read a word of memory, a check is made to see whether the word is in the cache. If it is, the data is supplied from the cache. If it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design
    While there are a large number of cache implementations, a few basic design elements serve to classify and differentiate cache architectures:
    - Cache addresses
    - Cache size
    - Mapping function
    - Replacement algorithm
    - Write policy
    - Line size
    - Number of caches

    Cache Addresses
    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory. The disadvantage of a virtual-address cache is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch, or extra bits must be added to each cache line to identify which virtual address space the address refers to.

    Cache Size
    We would like the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function
    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used (a sketch of the direct technique follows this section):
    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – allows each main memory block to be loaded into any line of the cache
    - Set associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148–154).

    Replacement Algorithm
    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:
    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy
    When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, it can simply be discarded. If a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including:
    - Write through – all writes to the cache are done to main memory as well, at the point of the change
    - Write back – updates are made only in the cache; when a block is replaced, it is written back to main memory only if it is dirty
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size
    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases further, more useful data are brought into the cache, but the hit ratio will begin to decrease as the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no one approach is judged best in all circumstances.
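    To make the direct mapping concrete, here is a minimal sketch in C of how a direct-mapped cache decomposes a 32-bit address into tag, line and offset fields. The field widths (16-byte lines, 1024 lines) are assumed purely for illustration:

        #include <stdint.h>
        #include <stdio.h>

        /* Direct mapping: line = block address mod number of lines.
           With power-of-two sizes this is just bit slicing:
           address = [ tag | line | byte offset ]                    */
        enum { OFFSET_BITS = 4,    /* 16-byte lines (assumed) */
               LINE_BITS   = 10 }; /* 1024 lines   (assumed) */

        static uint32_t cache_line(uint32_t addr)
        {
            return (addr >> OFFSET_BITS) & ((1u << LINE_BITS) - 1u);
        }

        static uint32_t cache_tag(uint32_t addr)
        {
            return addr >> (OFFSET_BITS + LINE_BITS);
        }

        int main(void)
        {
            uint32_t addr = 0x0001ABCDu;
            printf("addr 0x%08X -> line %u, tag 0x%X\n",
                   (unsigned)addr, (unsigned)cache_line(addr),
                   (unsigned)cache_tag(addr));
            return 0;
        }

    On a hit, the tag stored in that line must match cache_tag(addr); any two blocks that share the same line bits contend for the same cache line, which is exactly the weakness the associative approaches address.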
    Pentium 4 and ARM cache organizations
    The processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit also schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • C programming in 2011

    - by Duncan Bayne
    Many moons ago I cut C code for a living, primarily while maintaining a POP3 server that supported a wide range of OSs (Linux, *BSD, HP-UX, VMS ...). I'm planning to polish the rust off my C skills and learn a bit about language implementation by coding a simple FORTH in C. But I'm wondering how (or whether?) things have changed in the C world since 2000. When I think C, I think ...
    - comp.lang.c
    - ANSI C wherever possible (but C89, as C99 isn't that widely supported)
    - gcc -Wall -ansi -pedantic in lieu of static analysis tools
    - Emacs
    - Ctags
    - Autoconf + make (and see point 2 for VMS, HP-UX etc. goodness)
    Can anyone who's been writing in C for the past eleven years let me know what (if anything ;-) ) has changed over the years? (In other news, holy crap, I've been doing this for more than a decade.)
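    For what it's worth, the biggest change visible in the source itself is that C99 is now widely enough supported to rely on. A small, hypothetical illustration of features a strict C89 codebase couldn't use (compile with gcc -std=c99 -Wall -Wextra -pedantic):

        #include <inttypes.h>  /* fixed-width types and formats: new in C99 */
        #include <stdbool.h>   /* bool: new in C99 */
        #include <stdio.h>

        struct point { int32_t x, y; };

        int main(void)
        {
            // C99: line comments, and declarations mixed with code
            struct point p = { .x = 1, .y = 2 };  /* designated initializers */
            for (int i = 0; i < 3; i++)           /* loop-scoped index */
                printf("%d: (%" PRId32 ", %" PRId32 ")\n", i, p.x, p.y);
            bool ok = true;
            return ok ? 0 : 1;
        }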

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx
    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.
    · As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
    · As a DBA I am using consistent names for my fields which match the naming standards of my organization
    · As a DBA my model supports simple CRUD operations against all the entities
    · As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    · As an Application Architect the database model has been validated against the UI
    · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
    · As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    · As an Application Architect we have a navigation diagram that describes the typical application flow
    · As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
    · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    · As a Project Manager all team members understand the goals of each release and iteration as they are planned
    · As a Project Manager all team members understand their role and the roles of others
    · As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing
    · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces to work with the application
    · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
    · As a Test/QA Analyst we have access to a test environment that is isolated from the production and staging environments
    · As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    · As a Test/QA Analyst I am able to automate portions of the test process
    Thoughts? -mike

    Read the article

  • Apress Books - 3 - Pro ASP.NET 4 CMS (ISBN 978-1-4302-2712-0) - Final comments

    - by TATWORTH
    This book is more than just a book about an ASP.NET CMS system - it has much practical advice and many examples for the .NET web developer. I liked the use of jQuery to detect that JavaScript was not enabled. One chapter is about Memcached - this one chapter alone could justify the price of the book if you run a server farm and need to improve performance. Some links to get you started are:
    - Windows Memcached at http://code.jellycan.com/memcached/
    - .NET access library at http://sourceforge.net/projects/memcacheddotnet/
    The chapters on scripting, performance analysis and search engine optimisation all provide excellent examples. This certainly is a book that should be part of every .NET web development team's library. Congratulations to the author and to Apress for publishing this book!

    Read the article

  • What's typical in terms of royalties? [closed]

    - by Matt Phillips
    I'm a developer negotiating compensation for a commercialized version of some data analysis software I wrote (see my profile if you like). This is a completely new experience for me. I want per-unit royalties, but I don't have the slightest idea what the standard amount is. I also want to be compensated for my time, so that's an upfront R&D cost for the company I'm negotiating with, but distribution cost to them is presumably virtually nothing once it's out there. But then there's support costs. What sorts of deals have you folks negotiated?

    Read the article

  • " this kernel required an X86-64 CPU, but only detected a i686 CPU"

    - by jy19
    I recently decided to use VirtualBox to run Ubuntu, but I get the message "This kernel required an x86-64 CPU, but only detected an i686 CPU". I've already enabled virtualization in the BIOS, but that doesn't seem to help. Many other answers suggest that I should download the 32-bit version, not the 64-bit one. I'm not sure about that though, since my computer clearly says "64-bit operating system" under System properties. But I might just be mistaken.
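    Whether the 64-bit image can run at all depends on the CPU (and on what VirtualBox exposes to the guest), not on whether the host OS is 64-bit. A minimal, x86-only sketch in C that checks the long-mode (x86-64) bit via GCC's <cpuid.h>:

        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID leaf 0x80000001: EDX bit 29 is the long-mode (LM)
               flag, i.e. whether the CPU supports x86-64. */
            if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
                puts("extended CPUID leaf not supported");
                return 1;
            }
            puts((edx >> 29) & 1 ? "CPU supports x86-64"
                                 : "32-bit CPU (i686) only");
            return 0;
        }

    Note that even on a 64-bit CPU, VirtualBox presents a 32-bit virtual CPU to the guest unless VT-x/AMD-V is enabled and the VM's OS type is set to a 64-bit variant.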

    Read the article

  • Is acoustic fingerprinting too broad for one audio file only?

    - by IBG
    So we were looking for some topics related to audio analysis and found acoustic fingerprinting. As it is, its most famous application seems to be the identification of music. Enter our manager, who asked us to research and possibly find an algorithm or existing code that we can use for this very simple approach (like it's easy - source code doesn't show up like mushrooms):
    - Always-on app for listening
    - Compare the audio patterns to a single audio file (assume the sound is a simple beep)
    - If the beep is detected, send a notification to a server
    With a flow this simple, do you think acoustic fingerprinting is too broad an approach to use? Should we stop and take another approach? Where is the best place to start? We haven't started anything yet (on the development side) in this regard, so I want to get other opinions on whether this pursuit is worth it or moot.
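    If the target really is a single known beep, full acoustic fingerprinting is probably heavier than needed; detecting one tone is a classic single-bin DFT problem. A minimal sketch in C of the Goertzel algorithm (the 1 kHz beep frequency, 8 kHz sample rate and threshold are assumed, purely for illustration):

        #include <math.h>
        #include <stdio.h>

        /* Goertzel: magnitude-squared of one frequency bin over n samples. */
        static double goertzel_power(const float *x, int n,
                                     double freq_hz, double sample_rate)
        {
            double coeff = 2.0 * cos(2.0 * M_PI * freq_hz / sample_rate);
            double s1 = 0.0, s2 = 0.0;
            for (int i = 0; i < n; i++) {
                double s0 = x[i] + coeff * s1 - s2;
                s2 = s1;
                s1 = s0;
            }
            return s1 * s1 + s2 * s2 - coeff * s1 * s2;
        }

        int main(void)
        {
            float block[256] = { 0 }; /* fill from the microphone in practice */
            double p = goertzel_power(block, 256, 1000.0, 8000.0);
            if (p > 1e4)              /* threshold must be calibrated */
                puts("beep detected -> notify server");
            return 0;
        }

    Run this over successive blocks of the always-on capture and fire the notification when the power at the beep's frequency crosses a calibrated threshold; fingerprinting only becomes necessary if the sound to match is more complex than a tone.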

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realize the benefits of the software. That's when we actually celebrate.
    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:
    Retail Reference Model
    We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.
    Retail Reference Architecture
    When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retailer's enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.
    Retail Semantic Glossary
    Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks a consistent language.
    These three offerings are downloaded from My Oracle Support separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • This Week on the Green Data Center Management Front

    Among the big news this week for those looking to make their data center more environmentally friendly: two IBM POWER7-based servers become the first four-processor systems in the industry to qualify for Energy Star status; NetApp announces plans to have execs and others on hand to discuss green computing at SNW Spring 2010; and the feds are examining how the cloud will save money and energy.

    Read the article

  • Diplomatically point out the obvious problem in a product

    - by exiter2000
    As we all know, every piece of software has bugs in it; discovering them is only a matter of time. Suppose you just found that your product has a potentially big issue, and the code was not developed by you. How would you deal with it? I usually speak up with some data and analysis, even if it is not my part of the code. I am wondering if that is too offensive, because I often face some resistance (depending on the issue), which eventually goes away.

    Read the article

  • How do I compile & install the newest version of Transmission?

    - by Codemonkey
    I'm trying to install Transmission 2.51 on Ubuntu 10.04. Compiling the source goes fine, but I can't seem to get it to compile the GUI as well. This is the configure output:
        Configuration:
           Source code location:                    .
           Compiler:                                g++
           Build libtransmission:                   yes
              * optimized for low-resource systems: no
              * µTP enabled:                        yes
           Build Command-Line client:               yes
           Build GTK+ client:                       no (GTK+ none)
              * libappindicator for an Ubuntu-style tray: no
           Build Daemon:                            yes
           Build Mac client:                        no
    How do I get it to build the GTK+ client?

    Read the article

  • Well-tested libraries for player ratings?

    - by Lucky
    It's common in games to implement some sort of numerical ranking system -- the Elo system is usually used in chess. I could implement this system naively using Wikipedia's description, but I suspect that this would open up a whole box of problems that have already been solved: rating inflation, etc. -- for instance, the Elo system has a K constant that's 'fudged' according to rating, duration, pairings, statistics, ... What are some libraries (I'm looking at Python, but anything is okay) that implement rating systems? It doesn't have to be Elo, either.
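    For a sense of scale, the core Elo update itself is only a few lines; what a well-tested library adds is the accumulated tuning (K-factor schedules, inflation control, provisional ratings). A minimal sketch in C (the question asks for Python, but the formula translates directly; the ratings and K = 32 are assumed for illustration):

        #include <math.h>
        #include <stdio.h>

        /* Expected score of player A against player B under Elo. */
        static double elo_expected(double ra, double rb)
        {
            return 1.0 / (1.0 + pow(10.0, (rb - ra) / 400.0));
        }

        /* New rating for A: s is the actual score (1 win, 0.5 draw,
           0 loss); k is the K-factor -- the part that gets 'fudged'. */
        static double elo_update(double ra, double rb, double s, double k)
        {
            return ra + k * (s - elo_expected(ra, rb));
        }

        int main(void)
        {
            /* A 1500-rated player beats a 1700-rated player, K = 32. */
            printf("new rating: %.1f\n",
                   elo_update(1500.0, 1700.0, 1.0, 32.0));
            return 0;
        }

    Choosing K per rating band, game count and so on is exactly where an existing, well-tested library earns its keep.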

    Read the article

  • SQL Profiler Through StreamInsight Sample Solution

    In this post I show how you can use StreamInsight to take events coming from SQL Server Profiler in real time and do some analytics whilst the data is in flight. Here is the solution for that post. The download contains:
    - a project that reads events from a previously recorded trace file
    - a project that starts a trace and captures events in real time from a custom trace definition file (included)
    It is a very simple solution and could be extended. Whilst this example traces against SQL Server, it would be trivial to change it so it profiles events in Analysis Services. Enjoy.

    Read the article

  • What software do you use to help plan your team work, and why?

    - by Alex Feinman
    Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have you had with a particular tool?

    Read the article

  • What is the ideal laptop for creative coding applications?

    - by Jason
    Hi, I am a creative coder using C++ (Cinder and openFrameworks). I am looking to upgrade from my MacBook, which slowed down to about 3 fps this morning. My project involves particle systems and fluids reacting to audio analysis data and computer vision data in real time. SD or HD? No biggie. I have asked many people what computer I need. Ideally, I want a MacBook Pro. But is that enough power?
    - I've been told that I need a desktop for what I am doing, though I'd rather stay portable
    - I've been told that I should go PC/Linux to get the most power, but I'd rather stay on a Mac
    - I've been told that RAM is more of a bottleneck than processor speed
    - I've been told that the graphics card is more important than the CPU, and that code optimizations such as using trees over lists, proper threading, and sending tasks to the GPU make a bigger difference than the hardware!!!
    What's true?! What do I need? Any suggestions are greatly appreciated.

    Read the article

  • Enable Seamless Transformation and Effective Adoption of Change with Oracle User Productivity Kit

    Organizations go through continuous transformation and change - whether it is through mergers and acquisitions, standardizations of systems, a rollout of a new application or business process improvements. With Oracle User Productivity Kit, project teams can capture and deploy best practices to streamline efficiency, reduce cost, and ensure successful change adoption. Discover how organizations can leverage the multiple outputs of Oracle UPK for all phases of the project from blueprinting/design/configuration to testing/training/go-live as well as maintenance and support.

    Read the article

  • How do you go about checking your open source libraries for keystroke loggers?

    - by asd
    A random person on the internet told me that a technology was secure(1), safe to use and didn't contain keyloggers because it is open source. While I can trivially detect the keystroke logger in this open source application, what can developers(2) do to protect themselves against rogue committers to open source projects? Doing a back-of-the-envelope threat analysis: if I were a rogue developer, I'd fork a branch on git and promote its download, since it would have Twitter support (and a secret keystroke logger). If it were an SVN repo, I'd just create a new project. Even better would be to put the malicious code in the automatic update routines.
    (1) I won't mention which, because I can only deal with one kind of zealot at a time.
    (2) Ordinary users are at the mercy of their virus and malware detection software -- it's absurd to expect grandma to read the source code of her open source word processor to find the keystroke logger.

    Read the article

  • Cloud risks outweigh its benefits, for the IT managers surveyed by ISACA

    Update of 08/04/10 [Comments on this update start here]
    The risks of Cloud Computing are greater than its benefits, according to the IT managers surveyed by ISACA. Nearly half (45%) of the IT managers surveyed in a study by ISACA (the Information Systems Audit and Control Association) consider that the risks of Cloud Computing outweigh its benefits, 38% think that the risks and benefits balance out, and only 12% think that...

    Read the article

  • SEO For Ecommerce Sites

    Ecommerce is a fancy name for just about any business that can be conducted via the internet or other computer systems. The global internet explosion has changed the face of the ecommerce industry for good, increasing both potential profits and the ferocity of competition across virtually every consumer market.

    Read the article

  • Change Management and Source Control

    So, given the many good reasons for using Version Control systems for managing the changes in database applications, how does one go about the rather different routines of team development, such as testing, continuous integration, and managing data? What are the issues that you're likely to face?

    Read the article

  • What is the value of an Enterprise Resource Planning (ERP) System?

    According to PwC.com, ERP systems can add tremendous value to a company's core business functionality. PwC.com summarizes the primary value that an ERP can add to a company as follows: ERPs are a collection of business applications that coordinate the resources, information, and activities required for core business processes. ERPs are strategic tools used to reduce costs, improve business processes, and achieve healthier risk management.

    Read the article
