Search Results

Search found 10438 results on 418 pages for 'power architecture'.

Page 87 of 418

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 13

    - by MarkPearl
    Learning Outcomes: explain the advantages of using a large number of registers; discuss the way in which compilers optimize register usage; discuss the evolution of CISC machines; describe the characteristics of RISC architecture; discuss the RISC vs. CISC controversy; describe the way in which RISC and CISC design principles can be combined.
    Instruction Execution Characteristics. To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include operations, operands and procedure calls; all three can be studied in depth in the textbook at pages 503-505. A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs are best supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, three main characteristics emerged to improve performance: use a large number of registers (or use a compiler to optimize register usage), pay careful attention to the design of instruction pipelines, and use a simplified (reduced) instruction set.
    The use of a large register file. One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory; the design task is to organize the registers in such a fashion that this goal is realized. Read pages 507-510 for a detailed explanation.
    Compiler-based register optimization.
    Reduced Instruction Set Architecture. There are two advantages to smaller programs. First, because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive). Second, smaller programs should improve performance, and this happens in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults. Certain characteristics are common to RISC processors: one instruction per cycle, register-to-register operations, simple addressing modes and simple instruction formats.
    RISC vs. CISC. After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and CISC designs may benefit from the inclusion of some RISC features.

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project I would like to know what choices in Architecture, Design, Algorithms and Data Structures we need in order to provide effective and efficient RDF Reasoning and Querying in a Big Data Environment. Basically I want to get information on the following points: What systems and software give an appropriate architecture? What kind of API layer(s) would we need on top of the Big Data stores to make this possible? The indexing structures we will need. The appropriate algorithms, including algorithms for query planning across Big Data stores. The performance analysis and cost models we will need to justify the design decisions made along the way. Can anyone please provide pointers? Thanks, David
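
    As a small-scale starting point (not a Big Data architecture in itself), the sketch below uses the Python rdflib library to show the kind of query layer that would sit on top of whichever store is chosen; the data.ttl file and the FOAF query are purely illustrative.

        # Minimal sketch: load a small RDF graph and run a SPARQL query with rdflib.
        # For a real Big Data deployment this layer would front a distributed
        # triple store; rdflib is used here only to make the API shape concrete.
        from rdflib import Graph

        g = Graph()
        g.parse("data.ttl", format="turtle")   # hypothetical input file

        query = """
            SELECT ?person ?name
            WHERE { ?person <http://xmlns.com/foaf/0.1/name> ?name . }
        """
        for row in g.query(query):
            print(row.person, row.name)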

    Read the article

  • AHCI, Power Mode S3 and HPET 64bit - please help with these settings!

    - by thetattoo
    My problem is that for a Hackintosh (unofficial PC running OS X) I needed to set AHCI, Power Mode S3 and HPET 64bit. Now, I want to install Ubuntu (when 12.04 comes out) and was wondering if these settings will be right for Ubuntu. I read that AHCI is how the hard disks are accessed, and that Power Mode S3 is how suspend-to-RAM works, but I couldn't figure out whether HPET 64bit makes any difference compared to HPET 32bit. What I would like is an explanation of what these BIOS settings do and what is best for Ubuntu. Thank you very much
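
    Not an explanation of the BIOS options themselves, but a rough way to check from inside a running Ubuntu system what the firmware handed over; the sysfs/procfs paths below assume a standard Linux layout.

        # Rough sketch: verify AHCI, suspend-to-RAM (S3) and HPET from Linux.
        def read(path):
            try:
                with open(path) as f:
                    return f.read()
            except OSError:
                return ""

        # AHCI: the ahci kernel module is in use when the SATA controller runs in AHCI mode.
        print("AHCI driver loaded:", "ahci" in read("/proc/modules"))

        # S3: "mem" listed in /sys/power/state means suspend-to-RAM is available.
        print("Suspend-to-RAM available:", "mem" in read("/sys/power/state").split())

        # HPET: shows whether the kernel is actually using the HPET as its clocksource.
        print("Current clocksource:",
              read("/sys/devices/system/clocksource/clocksource0/current_clocksource").strip())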

    Read the article

  • Patterns for non-layered applications

    - by Paul Stovell
    In Patterns of Enterprise Application Architecture, Martin Fowler writes: "This book is thus about how you decompose an enterprise application into layers and how those layers work together. Most nontrivial enterprise applications use a layered architecture of some form, but in some situations other approaches, such as pipes and filters, are valuable. I don't go into those situations, focussing instead on the context of a layered architecture because it's the most widely useful." What patterns exist for building non-layered applications/parts of an application? Take a statistical modelling engine for a financial institution. There might be a layer for data access, but I expect that most of the code would be in a single layer. Would you still expect to see Gang of Four patterns in such a layer? How about a domain model? Would you use OO at all, or would it be purely functional? The quote mentions pipes and filters as alternate models to layers. I can easily imagine such an engine using pipes as a way to break down the data processing. What other patterns exist? Are there common patterns for areas like task scheduling, results aggregation, or work distribution? What are some alternatives to MapReduce?
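
    To make the pipes-and-filters alternative concrete, here is a minimal sketch in Python; the filters (parsing, outlier removal, moving average) are invented for illustration, but the shape - a flat chain of generators rather than layers - is the point.

        # Each filter is a generator that consumes a stream of records and yields
        # transformed records; the pipeline is composed by nesting the calls.
        def parse(lines):
            for line in lines:
                yield [float(x) for x in line.split(",")]

        def drop_outliers(rows, limit=1000.0):
            for row in rows:
                if all(abs(x) <= limit for x in row):
                    yield row

        def moving_average(rows, window=3):
            buf = []
            for row in rows:
                buf.append(sum(row) / len(row))
                if len(buf) > window:
                    buf.pop(0)
                yield sum(buf) / len(buf)

        raw = ["1.0,2.0,3.0", "10.0,20.0,30.0", "5000.0,1.0,1.0", "4.0,5.0,6.0"]
        print(list(moving_average(drop_outliers(parse(raw)))))   # [2.0, 11.0, 9.0]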

    Read the article

  • How to understand other people's CSS architectures?

    - by John
    I am reasonably good with CSS. However, when working with someone else's CSS, it's difficult for me to see the "bigger picture" in their architecture (but I have no problem when working with a CSS sheet I wrote myself). For example, I have no problems using Firebug to isolate and fix cross-browser compatibility issues, or fixing a floating issue, or changing the height on a particular element. But if I'm asked to do something drastic such as, "I want the right sidebars of pages A, B, C and D to have a red border. I want the right sidebars of pages E, F and G to have a blue border if and only if the user mouses over", then it takes me a long time to map out all the CSS inheritance rules to see the "bigger picture". For some reason, I don't encounter the same difficulty with backend code. After a quick debriefing of how a feature works, and a quick inspection of the controller and model code, I will feel comfortable with the architecture. I will think, "it's reasonable to assume that there will be an Employee class that inherits from the Person class that's used by a Department controller". If I discover inconvenient details that aren't consistent with the overall architectural style, I am confident that I can hammer things back in place. With someone else's CSS work, it's much harder for me to see the "relationships" between different classes, and when and how the classes are used. When there are many inheritance rules, I feel overwhelmed. I'm having trouble articulating my question and issues... All I want to know is, why is it so much harder for me to see the bigger picture in someone else's CSS architecture compared to someone else's business logic layer? Does it have anything to do with CSS being a relatively new technology for which there aren't many popular design patterns?

    Read the article

  • Block diagram containing computer buses, motherboard...

    - by learnerforever
    Hi all, I am trying to understand computer architecture. In particular: the physical view, i.e. what is packed inside the motherboard and what sits outside it; and the conceptual view, i.e. how the processor, memory and peripherals are connected. I am getting confused by the various buses: local bus, PCI bus, SCSI bus, ISA bus, USB bus. I am looking for block diagrams. How is the USB port ultimately connected to the processor? Through the PCI bus, etc.? Why do we have so many buses? What was it like before SCSI/IDE came along? Does the diagram at http://www.vaughns-1-pagers.com/computer/pc-block-diagram.htm look correct? It shows no connection between the PCI bus and the SCSI bus. Is that correct? I would greatly appreciate any other links, especially to block diagrams/anatomy rather than just textual write-ups. Thanks,

    Read the article

  • Physically moving a hard drive from older iMac (C2D) to new iMac (i7)?

    - by Inshim
    Instead of my usual habit of using superduper to mirror my drive to a new computer, I just physically moved the hard drive from an older iMac to a new one. But... it now doesn't boot, getting stuck at the apple logo screen. Since the hard drive that came with the new iMac works well, and my old drive works well when I return it to the older iMac, I conclude that there is some problem at the system/kernel level due to the different hardware. In the past I did similar things (e.g. starting a C2D machine from a Core Duo in target disk mode), so perhaps the change in architecture to the i5/i7 is too problematic? The main point: do you know of any way to get the system to rebuild for itself the proper versions of the system components when booting? Are there certain directories that I can safely delete to make that happen? Thanks

    Read the article

  • Storage servers architectural solution for backup. What is the best way? (pics inside)

    - by Kirzilla
    Hello, What is the best architecture for a storage server array? Needs: a) an easy way to add one more server to the array; b) we don't have a single backup server; c) we need to have one backup for each "web" part of each server. Group #1 is a cross-server backup scheme; the main disadvantage is that we can't add just one server, we have to add two servers at a time. Group #2 is like Group #1, but with three or more servers. It also has a disadvantage: to add one more server we have to move an existing backup onto it. Any suggestions? Thank you.
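
    One possible answer to requirement (a) is a ring (chain) layout rather than pairwise cross-backups: each server backs up onto its neighbour, so adding a single server only rewires two links. A small sketch with placeholder hostnames:

        # server i backs up its "web" data onto server (i + 1) % N
        def ring_backup_plan(servers):
            n = len(servers)
            return {servers[i]: servers[(i + 1) % n] for i in range(n)}

        servers = ["web1", "web2", "web3"]
        print(ring_backup_plan(servers))   # {'web1': 'web2', 'web2': 'web3', 'web3': 'web1'}

        servers.append("web4")             # adding one more server...
        print(ring_backup_plan(servers))   # ...only web3's target changes, plus the new web4 -> web1 link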

    Read the article

  • Fresh install of 64-bit Ubuntu needs Flash but Adobe doesn't have a version for me.

    - by DJTripleThreat
    I need Flash to watch YouTube videos. YouTube said "You need to upgrade your Adobe Flash Player to watch this video." with a link to download Flash. I'm running 10.04, so I see these possible choices for myself: 1) a .deb file for Ubuntu 8.04+, or APT (what's this?) for Ubuntu 9.04. I downloaded the .deb file and when I opened it in the installer it said that I have the wrong architecture. Anyone have an idea about how to work around this?

    Read the article

  • What hardware is at physical address 0x80000000 on a PowerPC New World Macintosh?

    - by tinkerer
    The Open Firmware device tree gives no clue what device might decode at physical address 0x80000000 to 0x8008200 on a G4 New World Macintosh. The MMU has three adjacent Virtual=Real translations for that block. They are the only address translations reported between the top of physical DRAM at 20000000 and the start of the PCI bridges at f0000000. (A possible clue is that frame-buffer-addr is reported as 9c008000 by Open Firmware, and that is not in the reported translation table either.) I believe the architecture has been around since about 1999.

    Read the article

  • Difference between “system-on-chip” and “CPU”

    - by Tim
    Very confused. Some websites have this line: "iPhone 5s CPU: Apple A7". Other websites say: "iPhone 5s System-on-chip: Apple A7, CPU: 1.3 GHz 64-bit dual core". Other sources say: "iPhone 5s System-on-chip: Apple A7, CPU: 1.3 GHz 64-bit dual core Apple A7". Wikipedia says: "The Apple A7 is a 64-bit system on a chip (SoC) designed by Apple Inc. It first appeared in the iPhone 5S, which was introduced on September 10, 2013. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. While not the first 64-bit ARM CPU, it is the first to ship in a consumer smartphone or tablet computer." There are two sentences there: "The Apple A7 is a 64-bit system on a chip (SoC)" and "While not the first 64-bit ARM CPU". Wikipedia also says "The A7 features an Apple-designed 64-bit 1.3–1.4 GHz ARMv8-A dual-core CPU, called Cyclone". So is a system on a chip also a CPU? Very confused.

    Read the article

  • What does it mean for a computer to be an "IBM Compatible PC"?

    - by Jon
    A couple of questions about this: 1) Is this term even relevant any more? 2) Does this mean anything from a developer's standpoint? It is not exactly clear to me whether this refers to a BIOS, an architecture, a bus, or a combination. A piece of software I'm working on expects to see a "Description" of the system, and currently Windows machines report "AT/AT Compatible". Having been tasked to port this to Mac, I really don't know what a proper "Description" would be. This will most likely be omitted, but I was wondering if anyone could provide some insight on the modern usage of this term.
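
    Purely as an illustration of what a portable "Description" string could look like, a minimal sketch using Python's standard platform module (the string format is made up and the example output is assumed):

        import platform

        description = f"{platform.system()} {platform.release()} ({platform.machine()})"
        print(description)   # e.g. "Darwin 22.6.0 (arm64)" on a Mac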

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now, I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc. as best I can, putting on my Architect hat. For a few weeks I've been doing this for a huge project: designing whole swathes of interfaces etc. for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it hasn't worked so well, but after a few proofs of concept and R&D projects this new platform is now growing nicely and is already proving itself.
    Then somebody else gets involved in the project: he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word. I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place.
    Now don't get me wrong: he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children); as far as he's concerned we don't need to worry about it because we can always refactor.
    The problem is, the culture of test-write-refactor is all very well and good, but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.
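
    A purely hypothetical sketch of the alternative being argued for here, with made-up names: the implementation-specific dependency is injected into the concrete class, and the shared core interface stays untouched.

        from abc import ABC, abstractmethod

        class OrderService(ABC):                 # the shared, business-agnostic contract
            @abstractmethod
            def place_order(self, order_id: str) -> None: ...

        class RetailOrderService(OrderService):
            # The extra dependency lives here (or behind a narrower, retail-specific
            # interface), rather than being pushed up into OrderService itself.
            def __init__(self, loyalty_gateway):
                self._loyalty = loyalty_gateway

            def place_order(self, order_id: str) -> None:
                self._loyalty.award_points(order_id)   # business-specific behaviour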

    Read the article

  • Can't access Dell BMC IPMI Over IP

    - by Bobb
    I have a Dell R210 with iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is at a colocation facility and I didn't set it up before it was sent there... So I asked the remote hands to set up IPMI Over IP. They enabled it, set the IP and everything. The IP is different from the main box IP. Also, the box is cabled to NIC1 and the BMC is supposed to share it (am I right?). I can see the new IP in Open Server Administrator (installed on the box). I tried the Supermicro IPMI tool and I tried Dell's ipmish.exe command like this: ipmish -ip xxx -u root -p calvin sysinfo which gives "BMC is not detected". What could be wrong? Is there a diagnostics tool I can try? It must be something obvious; I just never used things like that before.... P.S. I read something about encryption keys in the Dell docs. But I understand that is for encrypted IPMI 2.0, and ipmish can use IPMI 1.5 without encryption.

    Read the article

  • Dell PSU Compatibility: Dell Inspiron 530 (Desktop)

    - by ashes999
    I have an Inspiron 530 with a stock PSU. I need to upgrade it to meet my video card's needs (AMD HD6770), which demands at least 450W, to potentially fix BSODs with the latest version of the drivers (so claims AMD support). Now, I've heard conflicting reports about whether Dell uses special/proprietary PSUs. (Examples for aye and nay to special PSUs.) How exactly can I determine if a PSU is compatible with my PC, before buying it? I assume I will not be able to return it if it doesn't fit, or makes my computer explode in a fireball of doom.

    Read the article

  • What are good values for PSU voltages?

    - by earlz
    Hello, I have an odd computer I'm trying to fix that will crash only during the setup of an OS (it crashes on every OS I've tried so far). It's not overheating, it is stripped down as much as possible, I've tried multiple hard drives, and memtest86+ can run on it for 3 hours without a crash or failure. So, I was a bit stumped and was looking in the BIOS for possible causes, and found a hardware monitor that shows PSU voltages. They are: VCORE: 1.432V, 3.3V: 3.136V, 5V: 5.273V, 12V: 12.144V. I thought the 3.3V looked a little low, but I'm not really sure how "bad" that is. So, what are the good ranges for the voltages on each PSU rail?
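
    A quick worked check of those readings against the commonly quoted ±5% ATX tolerance for the main rails (the tolerance figure is assumed here, and BIOS sensor readouts are themselves only approximate):

        readings = {"3.3V": 3.136, "5V": 5.273, "12V": 12.144}
        nominals = {"3.3V": 3.3, "5V": 5.0, "12V": 12.0}

        for rail, value in readings.items():
            low, high = nominals[rail] * 0.95, nominals[rail] * 1.05
            status = "OK" if low <= value <= high else "outside +/-5%"
            print(f"{rail}: {value:.3f}V (allowed {low:.3f}-{high:.3f}V) -> {status}")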

    Read the article

  • Macbook Pro with Windows 7 - GPU always on

    - by Joonas Pulakka
    Übergizmo is reporting an issue with the new MacBook Pros' GeForce 330M GPU being always "on" under Windows 7, thus almost halving the battery life compared to OS X (which is somehow able to suspend that GPU and use the low-end integrated GPU to do the light work). Any solutions, or rumors of coming solutions?

    Read the article

  • PSU Suggestions for SLI EVGA GTX 680 + i7

    - by JMH
    So I am putting together a new machine and am trying to determine the best PSU to go with for this setup. Here is the build I am thinking about:
    Processor: Intel Core i7-3770K Ivy Bridge 3.5GHz
    Motherboard: ASRock Z77 Professional
    Graphics Card (2 in SLI): EVGA 02G-P4-2680-KR
    Sound Card: HT | OMEGA eClaro 7.1
    Fan/Heat Sink: ZALMAN CNPS9500A-LED
    Case: COOLER MASTER HAF X RC-942-KKN1
    RAM: G.SKILL Ripjaws X Series 32GB (4 x 8GB)
    SSD: of some sort
    So my guess is that I need a 750-1000W PSU, but which brand, how much wattage, etc.? I'm really in the woods here. Thanks -Josh
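
    A back-of-the-envelope estimate (the TDP figures below are approximate vendor numbers, not measurements, and the allowance for the rest of the system is a guess) suggests the 750-1000W range is about right:

        gpu_tdp   = 195    # approx. TDP of one GTX 680, watts
        cpu_tdp   = 77     # approx. TDP of an i7-3770K
        other     = 120    # rough allowance for board, RAM, SSD, fans, sound card
        peak_draw = 2 * gpu_tdp + cpu_tdp + other
        headroom  = 1.3    # ~30% headroom for efficiency, ageing, overclocking

        print(f"Estimated peak draw: {peak_draw} W")                 # 587 W
        print(f"Suggested PSU size:  {peak_draw * headroom:.0f} W")  # ~763 W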

    Read the article

  • Laptop wakes from sleep, once, due to audio controller (Windows 7)

    - by stijn
    The laptop is a recent Dell XPS 15z and the problem is as follows (reproducible in about 90% of tries):
    1) Put the laptop to sleep using either Start-Sleep or closing the lid.
    2) The laptop goes to sleep after about 5 seconds, but instantly wakes again showing a black screen (touching the keyboard or moving the mouse shows the login screen one normally gets after wake).
    3) Log in again and put the laptop to sleep.
    4) The laptop stays in sleep mode.
    The output of powercfg -lastwake after the first instant wake shows the audio controller is responsible. Why would that be, why only on the first try, and how do I fix this?
        Wake History Count - 1
        Wake History [0]
        Wake Source Count - 1
        Wake Source [0]
        Type: Device
        Instance Path: PCI\VEN_8086&DEV_1C20&SUBSYS_04461028&REV_05\3&11583659&0&D8
        Friendly Name:
        Description: High Definition Audio Controller
        Manufacturer: Microsoft

    Read the article

  • PC case and PSU screw question....

    - by user32569
    Hi, I have maybe a funny one to ask.... This Christmas I bought a new PC. When I started to assemble it, I found that my case (Arctic Cooling Silentium T11) has 12 screws for the HDD, DVD etc., and 6 screws for the expansion cards. Well, the first thing that surprised me was: why only 6 screws for expansion cards, when the case actually has 7 slots? And second, which part are the PSU screws supposed to come with? The PSU, the case, or nothing? Because neither the PSU nor the case had them. The PSU is an Evolve Storm 600W. Well, I know the case and PSU are not high-end devices, but still, would it hurt to add 1 screw for the expansion cards and 4 for the PSU? So, my question is, is this situation normal, and which one (case or PSU) do the screws normally come with? Thanks.

    Read the article

  • MacBook Air: USB Hub Compatible with the MBA SuperDrive

    - by _ande_turner_
    A little background for those who may think this question is too specific: the MacBook Air SuperDrive draws 1A versus the 500mA of a normal USB device, and therefore you can't use a standard USB hub, powered or unpowered, because each port supplies 500mA, not 1A... Have any MacBook Air users found a USB hub which can accommodate the MBA SuperDrive and another peripheral?

    Read the article

  • DPMS, keep screen off when lid shut

    - by Evan Teran
    I have a laptop running Linux. In my xorg configuration, I have DPMS set up so that the screen automatically turns off during several events. In addition to that, I have the following script tied to ACPI lid open/close events:
        #!/bin/sh
        for i in $(pidof X); do
            CMD=$(ps --no-heading $i)
            XAUTH="$(echo $CMD | sed -n 's/.*-auth \(.*\)/\1/p')"
            DISPLAY="$(echo $CMD | sed -n 's/.* \(:[0-9]\) .*/\1/p')"
            # turn the display off or back on
            export XAUTHORITY=$XAUTH
            /usr/bin/xset -display $DISPLAY dpms force $1
        done
    Basically, this script takes one parameter ("on" or "off"), then iterates through all of my running X sessions and either turns on or turns off the monitor. Here's my issue: when I close the lid of the laptop, the screen goes off as expected, but if a mouse event occurs (like if something bumps into the table...) then the screen turns back on even though the lid is closed (I can see the light through the side of the laptop). Is there a way to prevent the screen from turning on during a mouse event if the lid is closed?

    Read the article

  • What is preventing my computer from going idle?

    - by brianberns
    When I first boot my Windows 7 computer, it will go idle if I stop using it: first the screensaver comes on, then the computer goes to sleep after a certain amount of time. This is the expected behavior. However, after I've used the computer for a while without rebooting (about a day or so), I've noticed that it stops going idle: the screensaver won't come on, and the computer won't sleep, no matter how long it sits unused. I've confirmed that the idle timer is increasing as expected via GetLastInputInfo. However, it looks like something is interfering with the results from CallNtPowerInformation. Every 14 or 16 seconds, the TimeRemaining value jumps back up to its maximum value when I query SystemPowerInformation. I've used the Sysinternals Process Monitor to detect any unusual events that might be triggering this reset, but have come up empty. Does anyone know exactly what the possible causes of TimeRemaining resetting to its maximum value are? I'm fairly sure that it's not my mouse, keyboard, or network sending spurious events, because I've disabled each one and the problem continues to occur. (That would also reset the GetLastInputInfo timer, which is not happening.) I'm looking for something that affects SystemPowerInformation TimeRemaining, but does not affect GetLastInputInfo. Thanks.
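
    To watch the reset happen side by side with the raw input idle time, a Windows-only sketch using ctypes could poll both counters; the structure layout follows the documented SYSTEM_POWER_INFORMATION, but treat all of this as an unverified sketch rather than a diagnostic tool.

        import ctypes
        import ctypes.wintypes as wt
        import time

        class SYSTEM_POWER_INFORMATION(ctypes.Structure):
            _fields_ = [("MaxIdlenessAllowed", wt.ULONG),
                        ("Idleness", wt.ULONG),
                        ("TimeRemaining", wt.ULONG),     # seconds until the idle timeout
                        ("CoolingMode", ctypes.c_ubyte)]

        class LASTINPUTINFO(ctypes.Structure):
            _fields_ = [("cbSize", wt.UINT), ("dwTime", wt.DWORD)]

        SystemPowerInformation = 12   # POWER_INFORMATION_LEVEL value

        def idle_seconds():
            lii = LASTINPUTINFO(cbSize=ctypes.sizeof(LASTINPUTINFO))
            ctypes.windll.user32.GetLastInputInfo(ctypes.byref(lii))
            return (ctypes.windll.kernel32.GetTickCount() - lii.dwTime) / 1000.0

        def time_remaining():
            spi = SYSTEM_POWER_INFORMATION()
            ctypes.windll.powrprof.CallNtPowerInformation(
                SystemPowerInformation, None, 0, ctypes.byref(spi), ctypes.sizeof(spi))
            return spi.TimeRemaining

        last = time_remaining()
        while True:
            tr, idle = time_remaining(), idle_seconds()
            if tr > last:
                print(f"TimeRemaining reset to {tr}s while input has been idle {idle:.0f}s")
            last = tr
            time.sleep(5)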

    Read the article
