Search Results

Search found 3266 results on 131 pages for 'dell dimension'.


  • System speakers not recognized

    - by Kyle Maxwell
    Since upgrading to Xubuntu 13.10, sound has not functioned properly (e.g. screeching when playing Skype notifications). Now, however, it does not function at all. pavucontrol only shows Dummy Output and does not recognize the built-in speakers on my Dell Precision M4600. Possibly related, the sound indicator applet does not come up when I click on it, only showing a small white bar underneath it. I have purged and reinstalled pulseaudio. lspci -v shows:

      00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        Subsystem: Dell Precision M4600
        Flags: bus master, fast devsel, latency 0, IRQ 56
        Memory at f2560000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel

      01:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
        Subsystem: Dell Device 14a3
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at f0080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel

    The "Capabilities: <access denied>" line makes me wonder if there's a permissions issue, as the Log Out applet now shows "Restart" and "Shutdown" grayed out. groups shows me in: kmaxwell adm dialout cdrom sudo dip plugdev fuse lpadmin netdev sambashare vboxusers

    Read the article

  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad, and how do I tell the BIOS to stop it?

    Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine the first several hours until a problem that plagued me when WinXP was on that machine happened -- it froze. Completely froze. None of the myriad of ways I have found here on askubuntu helped me to regain control except a long-press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up, and with all the odd blocks of yellow and flecks of color showing on my screen (both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing me these odd issues. (A good canned-air dusting of the entire interior fixed the color fleck problem.) Again, through much googling and numerous answers found on askubuntu, I somehow stumbled my way onto the lshw command. After going through it, line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never fully troubleshot those issues, so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (it wasn't there during WinXP) has been periodic moments of my monitor getting no signal from Ubuntu during boot. The first couple of days, it would disappear after the Dell boot screen and reappear at Ubuntu login. Now, it disappears after the Dell boot screen and doesn't return at all -- I have to hit F12 where I can load a safe mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?

    Read the article

  • ubuntu 12.04 LTS hangs on boot when external monitor is connected

    - by vladeta
    I have a problem with my freshly installed Ubuntu 12.04 LTS on my Dell M4600:

      nVidia Quadro 2000M
      i7-2860
      16GB RAM
      128GB SSD
      Dell/Samsung 750GB HDD
      IPS RGB laptop display

    When it is connected via DP++ to the external Dell U2311H monitor, it hangs on boot or when waking from suspend. If I detach the DP cable it boots normally. I have tried all the combinations that I have found, such as adding to grub: "no splash", "boot=pci", "acpi=off", etc. I have also changed in nVidia X settings that the external monitor is the primary one, and also tried to delete the monitor.xml file. There is no change; it hangs each time after grub. It starts to load daemons, then both screens go blank, and then it completely hangs with a beep sound. What I discovered is that if I detach the cable, wait for about 2 sec after grub starts booting, and then physically connect the DP cable while Ubuntu is still booting, everything works normally and I have a picture on my external screen while the laptop screen is off, just as I wanted. Do you maybe know how to solve this issue? Thank you.

    Read the article

  • 12.04 Taking forever for no apparent reason

    - by Sam
    First off, I'd just like to say how much I'm loving Ubuntu so far. It's one of my first steps into Linux, but so far it's blown away Windows in almost every regard. Now, onto my problem. My brothers have some old Dell Dimensions that are barely clinging to life. They were both running XP Home Premium. The 2400 installed just fine, no issues at all. Now, the other computer, the 3000, isn't clearing the screen with the Ubuntu logo and the pulsing dots. I've tried multiple discs, including the exact same one that gave me no issue in the other computer, and I'm at a loss. Does anyone have any suggestions as to where the problem might lie? They're both 32-bit Intel processors, and it's the correct version of Ubuntu. Is it a bad disc drive? Hardware incompatibility? Thanks for any assistance that can be provided. Dell Dimension 2400 Dell Dimension 3000

    Read the article

  • Dual displays not working - NVidia - Ubuntu 12.04 - Second Monitor - Two Screens

    - by user75105
    Graphics Card: NVidia 460 GTX. Driver: NVIDIA accelerated graphics driver (version current) I have one DVI monitor, an old Dell LCD from 2005, and one VGA monitor, an Asus ML238H from 2010 whose HDMI port broke. The Asus is plugged into my graphics card's primary monitor slot and is the better monitor even though it is VGA but my computer defaults to the Dell. This happens when I boot as well; the loading screens, the motherboard brand image, etc. are all displayed on the Dell monitor until Windows loads. Then both monitors work. The same thing happened when I booted up Ubuntu 12.04 but I did not see the second monitor when the log-in screen popped up, nor did I when I logged in. I went to System Settings/Displays and my Asus monitor is not an option. I clicked Detect Displays and the Asus is not detected. I looked at the other questions regarding NVIDIA drivers and recalled my problems with Ubuntu a few years ago and decided to check the driver. I went to Additional Drivers to install the proprietary driver and it looks like it's installed and active but I'm still having this problem. There is another driver option, the post-release NVIDIA driver, but that does not fix the problem either. Also, under System Details/Graphics the graphics device is listed as Unknown, which might indicate that it is using an open source generic driver and not the proprietary NVidia driver. But under Additional Drivers it says that I am using the NVidia driver. Any help is appreciated.

    Read the article

  • Oracle VM Forum report (Japanese; text garbled by encoding)

    - by rika.tokumichi
    [The original Japanese text of this OTN blog post did not survive the character-encoding conversion and cannot be recovered here. The surviving fragments indicate a report on the Oracle VM Forum held on May 18, 2010, covering Oracle VM for x86 (Xen 4.0) demonstrations on a Dell PowerEdge R910 (32 CPU cores, 64GB of memory) and a Dell PowerEdge R810, memory techniques such as Self-Ballooning and Transcendent Memory, and sessions on Oracle Virtual Assembly Builder and Oracle WebLogic Server on JRockit Virtual Edition.]

    Read the article

  • Performance using T-SQL PIVOT vs SSIS PIVOT Transformation Component.

    - by Nev_Rahd
    Hi, I am in the process of building a Dimension from the EDW (source), wherein I need to pivot source columns to load the Dimension. Currently, most of the pivoting I am doing is with T-SQL PIVOT, which then gets used in my SSIS package to merge with the Dim table. This pivoting can also be achieved with the SSIS PIVOT Transformation component. In regards to performance, which approach would be best? Thanks

    Read the article

  • Java JFrame method pack() problem

    - by Oliver
    Hi, I have a frame with 4 JPanels and 1 JScrollPane; the 4 panels are in a BorderLayout at north, east, south and west, and the scrollpane is in the center. I have been trying to get the pack method for a frame functioning, but when run you just get the title bar of the window. Any ideas? Thank you in advance.

      JFrame conFrame;
      JPanel panel1;
      JPanel panel2;
      JPanel panel3;
      JPanel panel4;
      JScrollPane listPane;
      JList list;
      Object namesAr[];
      ...
      ...
      ...
      namesAr = namesA.toArray();
      list = new JList(namesAr);
      list.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
      list.setLayoutOrientation(JList.HORIZONTAL_WRAP);
      list.setVisibleRowCount(-3);
      list.addListSelectionListener(this);
      listPane = new JScrollPane(list);
      panel1 = new JPanel();
      panel2 = new JPanel();
      panel3 = new JPanel();
      panel4 = new JPanel();
      conFrame.setLayout(new BorderLayout());
      panel1.setPreferredSize(new Dimension(100, 100));
      panel2.setPreferredSize(new Dimension(100, 100));
      panel3.setPreferredSize(new Dimension(100, 100));
      panel4.setPreferredSize(new Dimension(100, 100));
      panel1.setBackground(Color.red);
      panel2.setBackground(Color.red);
      panel3.setBackground(Color.red);
      panel4.setBackground(Color.red);
      conFrame.pack();
      conFrame.add(panel1, BorderLayout.NORTH);
      conFrame.add(panel2, BorderLayout.EAST);
      conFrame.add(panel3, BorderLayout.SOUTH);
      conFrame.add(panel4, BorderLayout.WEST);
      conFrame.add(listPane, BorderLayout.CENTER);
      conFrame.setVisible(true);

    Read the article

  • Cube project doesn't work because of permissions

    - by sms
    I'm doing a "Multidimensional Project" with MS SQL Server 2012 (Server Data Tools - Visual Studio 2010 Shell). I can't run (debug) it. If the data source's impersonation information is set to "use the service account", this error occurs:

      Error 2 Internal error: The operation terminated unsuccessfully. 0 0
      Error 3 OLE DB error: OLE DB or ODBC error: Login failed for user 'NT Service\MSSQLServerOLAPService'.; 28000. 0 0
      Error 4 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Data Warehouse', Name of 'Data Warehouse'. 0 0
      Error 5 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Items', Name of 'Items' was being processed. 0 0
      Error 6 Errors in the OLAP storage engine: An error occurred while the 'Id' attribute of the 'Items' dimension from the 'Warehouse_MultidimensionalProject_Cube' database was being processed. 0 0
      Error 7 Server: The current operation was cancelled because another operation in the transaction failed. 0 0

    I guessed that this account has no permissions, but (1) I couldn't even add this account (it seems that it doesn't exist), and (2) how is it even possible for it not to have built-in permissions? When I set impersonation to "use the credentials of the current user" (which is the owner of the data source, btw.), another error occurs:

      Error 2 Internal error: The operation terminated unsuccessfully. 0 0
      Error 3 The datasource, 'Data Warehouse', contains an ImpersonationMode that is not supported for processing operations. 0 0
      Error 4 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Data Warehouse', Name of 'Data Warehouse'. 0 0
      Error 5 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Items', Name of 'Items' was being processed. 0 0
      Error 6 Errors in the OLAP storage engine: An error occurred while the 'Id' attribute of the 'Items' dimension from the 'Warehouse_MultidimensionalProject_Cube' database was being processed. 0 0
      Error 7 Server: The current operation was cancelled because another operation in the transaction failed. 0 0

    Any help?

    Read the article

  • How do I pack multiple rectangles in a 2D box, Tetris style

    - by mglmnc
    I have a number of rectangles of various widths and heights. I have a larger rectangular platform to put them on. I want to pack them on one side of the platform so they spread in the lengthwise (X) dimension but keep the widthwise (Y) dimension to a minimum. That is, to place them like a Tetris game. There can be no overlaps, but there can be gaps. Is there an algorithm out there to do this?
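
    This is not from the original post, but a common first pass at exactly this "spread along X, keep Y minimal" layout is shelf (row) packing: sort the rectangles by height, fill rows along the platform's X length, and open a new row only when a piece no longer fits. Below is a minimal Python sketch of that heuristic; the names (shelf_pack, platform_len) are illustrative and it makes no claim to optimality.

      # A minimal first-fit "shelf" packer: rows run along X, new rows stack in Y.
      # Rectangles are (width_x, height_y) tuples; platform_len is the platform's X length.
      def shelf_pack(rects, platform_len):
          placements = []                      # (x, y, w, h) for each placed rectangle
          shelves = []                         # each shelf: [y_offset, shelf_height, x_cursor]
          y_cursor = 0
          # Placing tall pieces first tends to waste less vertical space per shelf.
          for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
              for shelf in shelves:
                  if shelf[2] + w <= platform_len and h <= shelf[1]:
                      placements.append((shelf[2], shelf[0], w, h))
                      shelf[2] += w
                      break
              else:                            # no existing shelf fits: open a new one
                  shelves.append([y_cursor, h, w])
                  placements.append((0, y_cursor, w, h))
                  y_cursor += h
          return placements, y_cursor          # y_cursor = total widthwise (Y) extent used

      # Example: three boxes on a platform 10 units long in X.
      print(shelf_pack([(4, 2), (3, 5), (6, 3)], 10))

    The classic 2D bin-packing heuristics (first-fit decreasing height, guillotine, MaxRects) generally do better; this is just the simplest member of that family.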

    Read the article

  • UIImage resize and crop to fit

    - by Amit Hagin
    I read a lot, also here, but couldn't find a simple way to do it: In objective c - I have a big UIImage and a small UIImageView. I want to programmatically shrink the content of a UIImage just enough to fit the smaller dimension within the UIImageView. The larger dimension will be cropped, and the result will be the maximum I can get from an image without changing the proportion. can you please help me?
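
    Not from the post itself, but the geometry it asks for ("aspect fill": scale so the smaller relative dimension exactly fills the view, then centre-crop the excess of the other one) reduces to a few lines of arithmetic. The Python sketch below only illustrates that arithmetic with hypothetical names; the actual drawing would still be done in Objective-C with UIKit or Core Graphics.

      # Illustrative aspect-fill math only (not Objective-C): scale so the smaller
      # relative dimension fills the view, then crop the overflow of the other one.
      def aspect_fill(src_w, src_h, view_w, view_h):
          scale = max(view_w / src_w, view_h / src_h)   # fill, never letterbox
          scaled_w, scaled_h = src_w * scale, src_h * scale
          crop_x = (scaled_w - view_w) / 2              # centre the crop
          crop_y = (scaled_h - view_h) / 2
          return scale, (crop_x, crop_y, view_w, view_h)

      # Example: a 1200x800 image into a 100x100 view -> scale 0.125,
      # 25 px cropped from each horizontal side, nothing cropped vertically.
      print(aspect_fill(1200, 800, 100, 100))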

    Read the article

  • R: optimal way of computing the "product" of two vectors

    - by Musa
    Hi, Let's assume that I have a vector r <- rnorm(4) and a matrix W of dimension 20000*200 for example: W <- matrix(rnorm(20000*200),20000,200) I want to compute a new matrix M of dimension 5000*200 such that m11 <- r%*%W[1:4,1], m21 <- r%*%W[5:8,1], m12 <- r%*%W[1:4,2] etc. (i.e. grouping rows 4-by-4 and computing the product). What's the optimal (speed,memory) way of doing this? Thanks in advance.
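
    Written as one general formula (this merely restates the examples in the question; it does not appear in the original post):

      $$ m_{ij} \;=\; \sum_{k=1}^{4} r_k \, W_{4(i-1)+k,\; j}, \qquad i = 1,\dots,5000, \quad j = 1,\dots,200 $$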

    Read the article

  • The enterprise vendor con - connecting SSD's using SATA 2 (3Gbits) thus limiting their performance

    - by tonyrogerson
    When comparing SSD against Hard drive performance it really makes me cross when folk think comparing an array of SSD running on 3GBits/sec to hard drives running on 6GBits/second is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives they compare four SSD drives connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits [Tony slaps forehead while shouting DOH!]. It is true in the case of hard drives it probably doesn’t make much difference 3Gbit or 6Gbit because SAS and SATA are both end to end protocols rather than shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads, in my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO equating to 44.5K IOps, ok, if that was 3GBits it would be less – around 280MiBytes per second, oh, but wait a minute [...fingers tap desk] You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6GBits) interface, SSD’s are fast not only low latency and high IOps but they also offer a very large sustained transfer rate, consider the OCZ Agility 3 it so happens that in my masters dissertation I did the same test but on a difference box, I got 374MiBytes per second at an average latency of 2.67ms per IO equating to 47.9K IOps – cost of an 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive set in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes per second thus losing almost 100MiBytes per second throughput and a ton of IOps too. So why the hell are “enterprise” vendors still only connecting SSD’s at 3GBits? Well, my conspiracy states that they have no interest in you moving to SSD because they’ll lose so much money, the argument that they use SATA 2 doesn’t wash, SATA 3 has been out for some time now and all the commodity stuff you buy uses it now. Consider the cost, not in terms of price per GB but price per IOps, SSD absolutely thrash Hard Drives on that, it was true that the opposite was also true that Hard Drives thrashed SSD’s on price per GB, but is that true now, I’m not so sure – a 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms) which equates to £1.09 per GB compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops) which equates to £0.88 per GB. Ok, I compared an “enterprise” hard drive with a “commodity” SSD, ok, so things get a little more complicated here, most “enterprise” SSD’s are SLC and most commodity are MLC, SLC gives more performance and wear, I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing, SATA 2 (3Gbit) just doesn’t cut it, SSD need 6Gbit to breath and even that SSD’s are pushing. 
Alas, SSD’s are connected using SATA so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
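
    As a hedged sanity check of the arithmetic in the post (none of this code is from the article), the price-per-GB figures and the link-rate ceilings can be recomputed directly. The 80% payload factor below reflects the 8b/10b encoding used on SATA/SAS links, which is a general property of the interfaces rather than something the post states.

      # Re-running the post's arithmetic (prices and capacities quoted from the article).
      def price_per_gb(price_gbp, capacity_gb):
          return price_gbp / capacity_gb

      print(round(price_per_gb(329.76, 300), 2))   # 300GB 15Krpm SAS -> 1.1 (the post truncates to 1.09)
      print(round(price_per_gb(422.10, 480), 2))   # 480GB OCZ Agility 3 -> 0.88

      # SATA/SAS links use 8b/10b encoding, so usable payload is roughly 80% of the line rate.
      def link_payload_mb_s(line_rate_gbit):
          return line_rate_gbit * 1000 * 0.8 / 8

      print(link_payload_mb_s(3))   # SATA 2: ~300 MB/s ceiling, ~280 MB/s observed in the post
      print(link_payload_mb_s(6))   # SATA 3: ~600 MB/s ceiling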

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
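
    As an illustration only (this is not the actual OBIA/Informatica mapping logic), the "compress daily rates into range records" step described for GL_DAILY_RATES and W_EXCH_RATE_G amounts to collapsing consecutive days that carry the same rate for a given from-currency, to-currency and rate type. A minimal Python sketch of that idea, using the USD to INR figures from the PeopleSoft example above:

      from datetime import date, timedelta

      # Illustrative only: collapse per-day rates (as in GL_DAILY_RATES) into range
      # records of the kind W_EXCH_RATE_G holds -- one row per stretch of days with
      # the same rate. Not the real OBIA mapping.
      def compress_to_ranges(daily_rates):
          """daily_rates: list of (day, rate) sorted by day, one entry per day."""
          ranges = []
          for day, rate in daily_rates:
              if ranges and ranges[-1]["rate"] == rate and ranges[-1]["to"] == day - timedelta(days=1):
                  ranges[-1]["to"] = day                # extend the open range
              else:
                  ranges.append({"from": day, "to": day, "rate": rate})
          return ranges

      # USD->INR example from the PeopleSoft discussion: 45 from 1 Jan, 46 from 31 Jan.
      days = [(date(2010, 1, d), 45) for d in range(1, 31)] + [(date(2010, 1, 31), 46)]
      for r in compress_to_ranges(days):
          print(r["from"], "-", r["to"], r["rate"])
      # 2010-01-01 - 2010-01-30 45
      # 2010-01-31 - 2010-01-31 46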

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One in my office has an MD1220 attached. Another one, in the datacenter of my hosting services vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

      Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3GB/s - dd
      Seq. writes on MD3200 at my vendor's: 240MB/s - bonnie++, 310MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my good-performing environment is cheaper than the bad-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high. To summarize, the question here is: is it reasonable to expect about 100MB/s of sequential write throughput per each couple of drives in RAID10 on the MD3200? Is there any trick to enable such performance in the MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?

    More details about the configurations.

    A good one in my office: Dell R710, 2 CPU X5650 @ 2.67GHz, 12 cores, 96GB DDR3. OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64. 20x300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host.

    Not so good one at my vendor's: Dell R710, 2 CPU L5520 @ 2.27GHz, 8 cores, 144GB DDR3. OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64. 20x146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1GB cache each.

    Additional information. I've also run the same tests on the same vendor's host, but the storage was two RAIDs of 14x146GB 15K RPM drives in RAID 10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6i) I got about 128MB/s seq. writes. Just two internal drives gave me about half of the 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor

    Read the article

  • Drivers not installing and drives not being recognized

    - by jab818
    Dell OptiPlex 755 running Win7-32. Two problems, not sure if they're related.

    1: Two drivers are missing when viewing Device Manager: PCI Serial Port and PCI Simple Communications Controller. I've searched Dell's site, and the internet as a whole, and have been wholly unable to find the drivers.

    2: I'm unable to connect any external hard drives, but any flash drives plugged in will be recognized and accessible by the computer without issue.

    Any help would be appreciated.

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in Full Screen mode, any applications on my second monitor (such as youtube videos, videos, not app specific) drop their frame-rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280FPS when the framerate is uncapped. If i cap it at 60FPS using the in-game option, it has no affect on the performance of the background app. Summary Operating System Windows 8 Pro 64-bit CPU Intel Core i7 3820 @ 3.60GHz 42 °C Sandy Bridge-E 32nm Technology RAM 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20) Motherboard Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0) 37 °C Graphics DELL U2713HM (2560x1440@59Hz) DELL U2713HM (2560x1440@59Hz) 1280MB NVIDIA GeForce GTX 570 (Gigabyte) 58 °C Hard Drives 212GB Volume0 (RAID) 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 36 °C 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 34 °C Optical Drives No optical disk drives detected Audio ASUS Xonar Essence STX Audio Device Operating System Windows 8 Pro 64-bit Computer type: Desktop Graphics Monitor 1 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Secondary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY4\Monitor0 Monitor 2 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Primary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY5\Monitor0 NVIDIA GeForce GTX 570 Manufacturer NVIDIA Model GeForce GTX 570 GPU GF110 Device ID 10DE-1086 Revision A2 Subvendor Gigabyte (1458) Series GeForce GTX 500 Current Performance Level Level 3 Current GPU Clock 845 MHz Current Memory Clock 1900 MHz Current Shader Clock 1690 MHz Voltage 0.988 V Technology 40 nm Die Size 520 mm² Release Date Dec 07, 2010 DirectX Support 11.0 OpenGL Support 5.0 Bus Interface PCI Express x16 Temperature 57 °C Driver version 9.18.13.2018 BIOS Version 70.10.55.00.01 ROPs 40 Shaders 512 unified Memory Type GDDR5 Memory 1280 MB Bus Width 64x5 (320 bit) Filtering Modes 16x Anisotropic Noise Level Moderate Max Power Draw 219 Watts Count of performance levels : 3 Level 1 - "Default" GPU Clock 50 MHz Memory Clock 135 MHz Shader Clock 101 MHz Level 2 - "2D Desktop" GPU Clock 405 MHz Memory Clock 324 MHz Shader Clock 810 MHz Level 3 - "3D Applications" GPU Clock 845 MHz Memory Clock 1900 MHz Shader Clock 1690 MHz Things I've tried: 1) Updating the graphics driver 2) Setting windows power mode to High Performance 3) Reset Nvidia Global Performance settings to default

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share: [root@lnxutil1 ~]# parted -s /dev/sda unit s print Model: DELL PERC H700 (scsi) Disk /dev/sda: 134217599s Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 63s 465884s 465822s primary ext2 boot 2 465885s 134207009s 133741125s primary lvm [root@lnxutil1 ~]# parted -s /dev/sdb unit s print Model: DELL PERC H700 (scsi) Disk /dev/sdb: 5720768639s Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 5720768606s 5720768573s lvm Edit 1 Using the cfq IO scheduler (default for CentOS 5.6): # cat /sys/block/sd{a,b}/queue/scheduler noop anticipatory deadline [cfq] noop anticipatory deadline [cfq] Chunk size is the same as strip size, right? If so, then 64kB: # /opt/MegaCli -LDInfo -Lall -aALL -NoLog Adapter #0 Number of Virtual Disks: 2 Virtual Disk: 0 (target id: 0) Name:os RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:65535MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 ... physical disk info removed for brevity ... Virtual Disk: 1 (target id: 1) Name:share RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:2793344MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
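
    One way to put a first number on this (this check is mine, not from the question) is simply to test whether each partition's starting byte offset is a multiple of the stripe element. The sketch below uses the start sectors from the parted output above, 512-byte logical sectors as parted reports, and the 64kB stripe size reported by MegaCli.

      # Quick arithmetic check (not a benchmark): is each partition's start offset a
      # multiple of the RAID stripe element?
      SECTOR = 512          # bytes per logical sector, per the parted output
      STRIPE = 64 * 1024    # bytes, matching "Stripe Size: 64kB" from MegaCli

      def check_alignment(name, start_sector):
          offset = start_sector * SECTOR
          ok = offset % STRIPE == 0
          print(f"{name}: starts at byte {offset}, offset % stripe = {offset % STRIPE}"
                f" -> {'aligned' if ok else 'MISALIGNED'}")

      check_alignment("sda1 (boot)", 63)
      check_alignment("sda2 (LVM)", 465885)
      check_alignment("sdb1 (LVM)", 34)
      # All three leave a non-zero remainder, so stripe-sized writes can straddle two
      # stripe elements -- the usual condition people quantify as a mis-alignment penalty.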

    Read the article

  • MegaCLI always returns blank output

    - by JamesHannah
    This server is a Dell R200 running Ubuntu 8.04 LTS, using an LSI SAS1068E RAID card supplied by Dell. I suspect that there might be some kind of RAID issue with the hardware RAID built into the motherboard, but I can't seem to get MegaCli to return any useful output:

      root@81 $ ./MegaCli -AdpAllInfo -aALL
      root@81 $ ./MegaCli -PDList -aALL
      root@81 $

    The disks work and AFAIK the RAID software is installed correctly. I've also seen this issue on RedHat in the past. The RAID was initially set up through the BIOS on this server and appears to be functioning fine apart from this.

    Read the article

  • How to set up dual quadro cards on RHEL 5.5?

    - by Alex J. Roberts
    I have a RHEL 5 workstation with 2 nvidia Quadro FX4500 cards, with one display attached to each card. After doing a clean install of RHEL 5.5, the second display doesnt work (it worked ok in RHEL 5.2). Neither separate X screens nor Xinerama are working. The kernel version is 2.6.18-194.el5 I've tried nvidia drivers 185.18.36 (the ones that i was using on 5.2) and the latest 260.19.36 and neither works. My xorg.conf is as follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 1.0 (buildmeister@builder58) Fri Aug 14 18:34:43 PDT 2009 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" FontPath "unix/:7100" EndSection Section "ServerFlags" Option "Xinerama" "1" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from data in "/etc/sysconfig/keyboard" Identifier "Keyboard0" Driver "kbd" Option "XkbLayout" "us" Option "XkbModel" "pc105" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 3007WFP" HorizSync 49.3 - 98.5 VertRefresh 60.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 3007WFP" HorizSync 49.3 - 98.5 VertRefresh 60.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 4500" BusID "PCI:10:0:0" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 4500" BusID "PCI:129:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection And the Xorg Log: X Window System Version 7.1.1 Release Date: 12 May 2006 X Protocol Version 11, Revision 0, Release 7.1.1 Build Operating System: Linux 2.6.18-164.11.1.el5 x86_64 Red Hat, Inc. Current Operating System: Linux blur.svsdsde 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 Build Date: 06 March 2010 Build ID: xorg-x11-server 1.1.1-48.76.el5 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Module Loader present Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. 
(==) Log file: "/var/log/Xorg.0.log", Time: Fri Feb 18 09:52:08 2011 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (**) | |-->Device "Device0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (**) | |-->Device "Device1" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) FontPath set to: unix/:7100 (==) RgbPath set to "/usr/share/X11/rgb" (==) ModulePath set to "/usr/lib64/xorg/modules" (**) Option "Xinerama" "1" (**) Xinerama: enabled (==) Max clients allowed: 512, resource mask: 0xfffff (II) Open ACPI successful (/var/run/acpid.socket) (II) Module ABI versions: X.Org ANSI C Emulation: 0.3 X.Org Video Driver: 1.0 X.Org XInput driver : 0.6 X.Org Server Extension : 0.3 X.Org Font Renderer : 0.5 (II) Loader running on linux (II) LoadModule: "bitmap" (II) Loading /usr/lib64/xorg/modules/fonts/libbitmap.so (II) Module bitmap: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Bitmap (II) LoadModule: "pcidata" (II) Loading /usr/lib64/xorg/modules/libpcidata.so (II) Module pcidata: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Video Driver, version 1.0 (++) using VT number 7 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 10de,005e card 103c,1500 rev a3 class 05,80,00 hdr 00 (II) PCI: 00:01:0: chip 10de,0051 card 103c,1500 rev a3 class 06,01,00 hdr 80 (II) PCI: 00:01:1: chip 10de,0052 card 103c,1500 rev a2 class 0c,05,00 hdr 80 (II) PCI: 00:02:0: chip 10de,005a card 103c,1500 rev a2 class 0c,03,10 hdr 80 (II) PCI: 00:02:1: chip 10de,005b card 103c,1500 rev a3 class 0c,03,20 hdr 80 (II) PCI: 00:04:0: chip 10de,0059 card 103c,1500 rev a2 class 04,01,00 hdr 00 (II) PCI: 00:06:0: chip 10de,0053 card 103c,1500 rev f2 class 01,01,8a hdr 00 (II) PCI: 00:07:0: chip 10de,0054 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:08:0: chip 10de,0055 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:09:0: chip 10de,005c card 0000,0000 rev a2 class 06,04,01 hdr 01 (II) PCI: 00:0a:0: chip 10de,0057 card 103c,1500 rev a3 class 06,80,00 hdr 00 (II) PCI: 00:0e:0: chip 10de,005d card 0000,0000 rev a3 class 06,04,00 hdr 01 (II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 05:05:0: chip 104c,8023 card 103c,1500 rev 00 class 0c,00,10 hdr 00 (II) PCI: 0a:00:0: chip 10de,009d card 10de,02af rev a1 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:1:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set) (II) Subtractive PCI-to-PCI bridge: (II) Bus 5: bridge is at (0:9:0), (0,5,5), BCTRL: 0x0206 (VGA_EN is cleared) (II) Bus 5 non-prefetchable memory range: [0] -1 0 0xf5000000 - 0xf50fffff (0x100000) MX[B] (II) PCI-to-PCI bridge: (II) Bus 10: bridge is at (0:14:0), (0,10,10), BCTRL: 0x000a (VGA_EN is set) (II) 
Bus 10 I/O range: [0] -1 0 0x00003000 - 0x00003fff (0x1000) IX[B] (II) Bus 10 non-prefetchable memory range: [0] -1 0 0xf3000000 - 0xf4ffffff (0x2000000) MX[B] (II) Bus 10 prefetchable memory range: [0] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B] (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:24:0), (0,0,10), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (--) PCI:*(10:0:0) nVidia Corporation Quadro FX 4500 rev 161, Mem @ 0xf3000000/24, 0xc0000000/28, 0xf4000000/24, I/O @ 0x3000/7 (II) Addressable bus resource ranges are [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] [1] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) OS-reported resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) Active PCI resource ranges: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) Active PCI resource ranges after removing overlaps: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) OS-reported resource ranges after removing overlaps with PCI: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) All system resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 
0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) LoadModule: "extmod" (II) Loading /usr/lib64/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension SHAPE (II) Loading extension MIT-SUNDRY-NONSTANDARD (II) Loading extension BIG-REQUESTS (II) Loading extension SYNC (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XC-MISC (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-Misc (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension TOG-CUP (II) Loading extension Extended-Visual-Information (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib64/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "glx" (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so (II) Module glx: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Server Extension (II) NVIDIA GLX Module 185.18.36 Fri Aug 14 18:27:24 PDT 2009 (II) Loading extension GLX (II) LoadModule: "freetype" (II) Loading /usr/lib64/xorg/modules/fonts/libfreetype.so (II) Module freetype: vendor="X.Org Foundation & the After X-TT Project" compiled for 7.1.1, module version = 2.1.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font FreeType (II) LoadModule: "type1" (II) Loading /usr/lib64/xorg/modules/fonts/libtype1.so (II) Module type1: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.2 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Type1 (II) LoadModule: "record" (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading sub module "drm" (II) LoadModule: "drm" (II) Loading /usr/lib64/xorg/modules/linux/libdrm.so (II) Module drm: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading extension XFree86-DRI (II) LoadModule: "nvidia" (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so (II) Module nvidia: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Video Driver (II) LoadModule: "kbd" (II) Loading /usr/lib64/xorg/modules/input/kbd_drv.so (II) Module kbd: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.0 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) LoadModule: "mouse" 
(II) Loading /usr/lib64/xorg/modules/input/mouse_drv.so (II) Module mouse: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.1 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) NVIDIA dlloader X Driver 185.18.36 Fri Aug 14 17:51:02 PDT 2009 (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs (II) Primary Device is: PCI 0a:00:0 (--) Chipset NVIDIA GPU found (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib64/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.3 (II) Loading sub module "wfb" (II) LoadModule: "wfb" (II) Loading /usr/lib64/xorg/modules/libwfb.so (II) Module wfb: vendor="NVIDIA Corporation" compiled for 7.1.99.2, module version = 1.0.0 (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Loading /usr/lib64/xorg/modules/libramdac.so (II) Module ramdac: vendor="X.Org Foundation" compiled for 7.1.1, module version = 0.1.0 ABI class: X.Org Video Driver, version 1.0 (II) resource ranges after xf86ClaimFixedResources() call: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) resource ranges after probing: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] 
-1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) NVIDIA(0): Option "TwinView" "0" (**) NVIDIA(0): Option "MetaModes" "nvidia-auto-select +0+0" (**) NVIDIA(0): Enabling RENDER acceleration (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) NVIDIA(0): enabled. (II) NVIDIA(0): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:10:0:0 (GPU-0) (--) NVIDIA(0): Memory: 524288 kBytes (--) NVIDIA(0): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(0): Detected PCI Express Link width: 16X (--) NVIDIA(0): Interlaced video modes are supported on this GPU (--) NVIDIA(0): Connected display device(s) on Quadro FX 4500 at PCI:10:0:0: (--) NVIDIA(0): DELL 3007WFP (DFP-0) (--) NVIDIA(0): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(0): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Assigned Display Device: DFP-0 (II) NVIDIA(0): Validated modes: (II) NVIDIA(0): "nvidia-auto-select+0+0" (II) NVIDIA(0): Virtual screen size determined to be 2560 x 1600 (--) NVIDIA(0): DPI set to (101, 101); computed from "UseEdidDpi" X config (--) NVIDIA(0): option (WW) NVIDIA(0): UBB is incompatible with the Composite extension. Disabling (WW) NVIDIA(0): UBB. (==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp (II) do I need RAC? No, I don't. 
(II) resource ranges after preInit: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) NVIDIA(GPU-1): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:129:0:0 (GPU-1) (--) NVIDIA(GPU-1): Memory: 524288 kBytes (--) NVIDIA(GPU-1): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X (--) NVIDIA(GPU-1): Interlaced video modes are supported on this GPU (--) NVIDIA(GPU-1): Connected display device(s) on Quadro FX 4500 at PCI:129:0:0: (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0) (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Initialized GPU GART. (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Loading extension NV-GLX (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized (==) NVIDIA(0): Disabling shared memory pixmaps (II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture (==) NVIDIA(0): Backing store disabled (==) NVIDIA(0): Silken mouse enabled (**) Option "dpms" (**) NVIDIA(0): DPMS enabled (II) Loading extension NV-CONTROL (==) RandR enabled (II) Setting vga for screen 0. 
(II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-APPGROUP (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension XFree86-Bigfont (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE (II) Initializing built-in extension XEVIE (II) Initializing extension GLX (WW) Disabling Composite since Xinerama is enabled (**) Option "CoreKeyboard" (**) Keyboard0: Core Keyboard (**) Option "Protocol" "standard" (**) Keyboard0: Protocol: standard (**) Option "AutoRepeat" "500 30" (**) Option "XkbRules" "xorg" (**) Keyboard0: XkbRules: "xorg" (**) Option "XkbModel" "pc105" (**) Keyboard0: XkbModel: "pc105" (**) Option "XkbLayout" "us" (**) Keyboard0: XkbLayout: "us" (**) Option "CustomKeycodes" "off" (**) Keyboard0: CustomKeycodes disabled (**) Option "Protocol" "auto" (**) Mouse0: Device: "/dev/input/mice" (**) Mouse0: Protocol: "auto" (**) Option "CorePointer" (**) Mouse0: Core Pointer (**) Option "Device" "/dev/input/mice" (**) Option "Emulate3Buttons" "no" (**) Option "ZAxisMapping" "4 5" (**) Mouse0: ZAxisMapping: buttons 4 and 5 (**) Mouse0: Buttons: 9 (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE) (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD) (--) Mouse0: PnP-detected protocol: "ExplorerPS/2" (II) Mouse0: ps2EnableDataReporting: succeeded (II) Open ACPI successful (/var/run/acpid.socket) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Mouse0: ps2EnableDataReporting: succeeded (the snipped part can be changed if necessary) Any help at all would be appreciated. Cheers, Alex
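For anyone hitting the same setup, the layout this log describes (two Quadro FX 4500 cards, one DELL 3007WFP on each, stitched together with Xinerama) would come from an xorg.conf roughly like the sketch below. This is only a reconstruction from the BusIDs and options visible in the log (PCI:10:0:0 and PCI:129:0:0, TwinView off, Xinerama on), not the poster's actual file, so the identifiers and MetaModes lines are assumptions:

    Section "ServerLayout"
        Identifier  "Layout0"
        Screen  0   "Screen0" 0 0
        Screen  1   "Screen1" RightOf "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0"    "CorePointer"
        Option      "Xinerama"  "1"
    EndSection

    Section "Device"
        Identifier  "Device0"
        Driver      "nvidia"
        BusID       "PCI:10:0:0"     # first Quadro FX 4500 (GPU-0 in the log)
    EndSection

    Section "Device"
        Identifier  "Device1"
        Driver      "nvidia"
        BusID       "PCI:129:0:0"    # second Quadro FX 4500 (GPU-1 in the log)
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Device0"
        Monitor      "Monitor0"
        DefaultDepth 24
        Option       "TwinView"  "0"
        Option       "MetaModes" "nvidia-auto-select +0+0"
    EndSection

    Section "Screen"
        Identifier   "Screen1"
        Device       "Device1"
        Monitor      "Monitor1"
        DefaultDepth 24
        Option       "TwinView"  "0"
        Option       "MetaModes" "nvidia-auto-select +0+0"
    EndSection

Note that the "(WW) Disabling Composite since Xinerama is enabled" warning in the log is expected behaviour: X servers of that generation cannot run Composite and Xinerama together, so compositing effects are unavailable in this configuration.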

    Read the article

  • WinXP: File Record Segment nnnn is unreadable?

    - by chris
    I'm pretty sure the drive is toast, except that this error only shows up on one partition (the drive is out of a Dell computer and has a couple of Dell partitions, which still boot and work fine). I've already purchased another hard drive and reinstalled WinXP, but when I reconnect the old drive and reboot, I get thousands of these errors. Is there any chance of recovering any files off it? And should I stop XP from trying to resolve the problems on its own?
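    "File Record Segment ... is unreadable" is chkdsk/autochk reporting that it cannot read entries in that partition's NTFS Master File Table, which usually points to failing sectors. Before letting XP attempt any repair, the safer route is to image the partition and recover files from the copy, since repeated repair passes can finish off a marginal drive. A rough sketch using GNU ddrescue from a Linux live CD follows; the device name /dev/sdb3 and the output paths are placeholders, not details from the question:

        ddrescue -f -n /dev/sdb3 /mnt/backup/old_dell.img /mnt/backup/old_dell.log    # first pass: copy everything readable, skip bad areas
        ddrescue -d -f -r3 /dev/sdb3 /mnt/backup/old_dell.img /mnt/backup/old_dell.log   # second pass: retry the bad areas up to 3 times
        mount -o loop,ro /mnt/backup/old_dell.img /mnt/recovered                      # mount the image read-only and copy files off it

    Keeping the drive disconnected (or cancelling the chkdsk pass at boot) until such a copy exists is generally the answer to the second question.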

    Read the article
