Search Results

Search found 8303 results on 333 pages for 'knowledge modules'.

  • "ODM" - One of the Support team's most valued acronyms

    - by graham.mckendry(at)oracle.com
    If you submit technical service requests (SRs) through the My Oracle Support portal, you may often see the term "ODM" used in updates from our Support team. ODM is an acronym for "Oracle Diagnostic Methodology", which defines a standard problem-solving approach that all of Oracle Support uses for every technical SR. ODM provides a number of benefits to SRs - both for the Support organization and for the customer - including a consistent approach, higher quality, justified solutions, and ultimately faster resolution.

    Screenshot: Example of an ODM "Issue Clarification" activity in a service request

    The Oracle Diagnostic Methodology applies to both categories of technical SRs: Consultative (question-answer topics) and Problem-Solution. There are a few KM Notes that describe the steps of ODM; however, to keep things simple (and since those KM Notes appear to be a bit outdated), I'll summarize the ODM stages here as follows:

    Consultative ODM - three mandatory stages:

    - ODM Question: Clarification of the customer's exact question.
    - ODM Answer: Thorough answer to the customer's question.
    - ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    Problem-Solution ODM - eight mandatory stages:

    - ODM Issue Clarification: Clarification of the reported issue, including the symptoms, the steps to reproduce, and an outline of the business impact.
    - ODM Issue Verification: Confirmation of the issue based on proof provided by the customer, such as screenshots, log files, or reproducing the issue during an Oracle Web Conference.
    - ODM Cause Determination: Succinct outline of the root cause of the issue.
    - ODM Cause Justification: Explanation of why the root cause applies to this particular situation.
    - ODM Proposed Solution(s): Succinct outline of the potential solution(s) to resolve the issue.
    - ODM Proposed Solution(s) Justification: Explanation of why the proposed solution(s) will in fact resolve the issue.
    - ODM Solution Action Plan: Detailed, numbered instructions on how to execute the proposed solutions.
    - ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    During these stages, you may see other optional ODM-related activities such as "ODM Data Collection", "ODM Action Plan", "ODM Research", and "ODM Test Case". Again, these structured tags help ensure a uniform methodology across your SRs. With this knowledge you should be able to develop better predictability of what's coming next in your SRs, as well as what you can do to help expedite the resolution process.

    Read the article

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, sometimes referred to collectively as business system testing:

    1. Unit Testing
    2. Integration Testing
    3. System Testing
    4. Systems Integration Testing

    See if you can match each of the following definitions with the appropriate type above.

    A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

    B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

    C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

    D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that each individual custom component developed to satisfy a given requirement (e.g. screen, program, report, etc.) functions as designed.

    Check your answers below:

    1. Unit Testing - D
    2. Integration Testing - B
    3. System Testing - C
    4. Systems Integration Testing - A

    If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.
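
    For readers who like to see the distinction in code, here is a loose illustration of unit versus integration testing with two hypothetical custom components (the components and tests are mine, not OUM's):

        import unittest

        def parse_amount(text):            # hypothetical custom component 1
            return round(float(text), 2)

        def apply_discount(amount, rate):  # hypothetical custom component 2
            return round(amount * (1 - rate), 2)

        class UnitTest(unittest.TestCase):
            # Unit testing: one custom component, verified in isolation (D).
            def test_parse_amount(self):
                self.assertEqual(parse_amount("19.999"), 20.0)

        class IntegrationTest(unittest.TestCase):
            # Integration testing: several components developed for a
            # requirement, verified interacting with one another (B).
            def test_parse_then_discount(self):
                self.assertEqual(apply_discount(parse_amount("100.00"), 0.1), 90.0)

        if __name__ == "__main__":
            unittest.main()

    System and systems integration testing have no neat single-file analogue: they exercise the assembled SuD and its external interfaces rather than individual custom components.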

    Read the article

  • How to install chrome autosave extension?

    - by Oguz Can Sertel
    I would like to install the Chrome autosave plugin on Ubuntu. When I tried to install it with these steps https://github.com/NV/chrome-devtools-autosave-server , I got some errors... node and npm were not installed out of the box on Ubuntu 12.10, so I installed them with these commands:

        sudo apt-get install npm
        sudo apt-get install node

    Then I tried to install autosave. Here is the output:

        sudo npm install -g autosave
        npm http GET https://registry.npmjs.org/autosave
        npm http 304 https://registry.npmjs.org/autosave
        npm http GET https://registry.npmjs.org/commander
        npm http 304 https://registry.npmjs.org/commander
        /usr/local/bin/autosave -> /usr/local/lib/node_modules/autosave/bin/autosave
        > [email protected] install /usr/local/lib/node_modules/autosave
        > node ./scripts/install.js
        npm ERR! error installing [email protected]
        npm WARN This failure might be due to the use of legacy binary "node"
        npm WARN For further explanations, please read
        npm WARN /usr/share/doc/nodejs/README.Debian
        npm WARN
        npm ERR! [email protected] install: `node ./scripts/install.js`
        npm ERR! `sh "-c" "node ./scripts/install.js"` failed with 1
        npm ERR!
        npm ERR! Failed at the [email protected] install script.
        npm ERR! This is most likely a problem with the autosave package,
        npm ERR! not with npm itself.
        npm ERR! Tell the author that this fails on your system:
        npm ERR! node ./scripts/install.js
        npm ERR! You can get their info via:
        npm ERR! npm owner ls autosave
        npm ERR! There is likely additional logging output above.
        npm ERR!
        npm ERR! System Linux 3.5.0-17-generic
        npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "autosave"
        npm ERR! cwd /home/naczu
        npm ERR! node -v v0.6.19
        npm ERR! npm -v 1.1.4
        npm ERR! code ELIFECYCLE
        npm ERR! message [email protected] install: `node ./scripts/install.js`
        npm ERR! message `sh "-c" "node ./scripts/install.js"` failed with 1
        npm ERR! errno {}
        npm ERR!
        npm ERR! Additional logging details can be found in:
        npm ERR! /home/naczu/npm-debug.log
        npm not ok

    And here is README.Debian:

        nodejs for Debian
        =================

        packaged modules
        ----------------
        The global search path for modules is /usr/lib/nodejs. Future packages of node modules will use that directory, so it should be used wisely.

        user modules
        ------------
        Node looks for modules in the ./node_modules directory first; please read the node#modules documentation carefully for more information. Node does not look for modules in /usr/local/lib/node_modules, where npm puts them. Please read npm-link(1) of the npm package to understand how to properly use npm-installed modules in a project. Note that require.paths is not supported in future node versions. See also node(1) for more information about NODE_PATH.

        nodejs command
        --------------
        The upstream name for the Node.js interpreter command is "node". In Debian the interpreter command has been changed to "nodejs". This was done to prevent a namespace collision: other commands use the same name in their upstreams, such as ax25-node from the "node" package. Scripts calling Node.js as a shell command must be changed to instead use the "nodejs" command.

    Read the article

  • Why won't my Broadcom BCM4312 LP-PHY work with the STA driver?

    - by Jackson Taylor
    I tried the steps here for a 4312: https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx

    Both of these:

        sudo modprobe -r b43 ssb wl
        sudo modprobe wl

    return:

        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    (this one is only for the second one actually). I tried the broadcom-sta, didn't work. What's confusing is down below, in the next steps for STA with internet access, it says to use the bcmwl one. So I install that and it succeeds, but with some errors:

        sudo apt-get install bcmwl-kernel-source
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          module-assistant
        Use 'apt-get autoremove' to remove it.
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,181 kB of archives.
        After this operation, 3,609 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 168005 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ...
        Loading new bcmwl-5.100.82.112+bdcom DKMS files...
        Building only for 3.5.0-21-generic
        Building for architecture x86_64
        Module build for the currently running kernel was skipped since the
        kernel source for this kernel does not seem to be installed.
        ERROR: Module b43 does not exist in /proc/modules
        ERROR: Module b43legacy does not exist in /proc/modules
        ERROR: Module ssb does not exist in /proc/modules
        ERROR: Module bcm43xx does not exist in /proc/modules
        ERROR: Module brcm80211 does not exist in /proc/modules
        ERROR: Module brcmfmac does not exist in /proc/modules
        ERROR: Module brcmsmac does not exist in /proc/modules
        ERROR: Module bcma does not exist in /proc/modules
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.5.0-21-generic
        jtaylor991@jtaylor991-whiteHP:~$ sudo modprobe wl
        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    Then I do the modprobe wl commands listed above and it gives the above listed errors. It didn't work with the broadcom-sta driver either. I installed the b43 ones but nothing happened, and I don't know why, so those are still installed. firmware-b43legacy-installer, b43-fwcutter and firmware-b43-lpphy-installer (yes, it is a LP-PHY) are currently installed.

    If I go into System Settings > Software Sources > Additional Drivers, it says "Using Broadcom 802.11 Linux STA wireless driver source from bcmwl-kernel-source (proprietary)". But bcmwl-kernel-source isn't installed. I could try again, but I remember rebooting and it still said this.

    What's funny is it found wireless networks during the Ubuntu setup/installation; I don't remember if I got it to connect or not though. I think it kept asking for a password when I put it in (yes, it was right, I showed password and looked at it), so I just ignored it. But right now the Enable Wireless option in the top right is just gone; it's just Enable Networking, and I'm on ethernet on this HP Pavilion dv4-1435dx right here.

    If I run rfkill list it shows:

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no

    It was hard blocked at the beginning, but unblocking it makes no change. Also, it's a touch-sensitive button, and it appears to be always orange no matter if it's enabled or not, because when I touch it the hard blocked changes between yes and no in rfkill list. I think it was blue for a minute at one point. What is going on?!?! Help me! Lol, thanks for any and all of your time guys. Oh yeah, this is Ubuntu 12.10, fresh install.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and often it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight, so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as professionals, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • How to get started on working with WebTrends locally using a DEV account access? [on hold]

    - by Knowledge Craving
    I'm a newbie developer on WebTrends Analytics, although I have worked extensively with Adobe SiteCatalyst. Currently I want to integrate WebTrends with the web pages of a local project on my local JBoss EAP application server, so that I can run it locally and see how WebTrends works, along with how the Analytics reports show up. Later on, I may customize the report data as per the project requirements. However, I can't find anything on how to start working on a local development process using DEV account access. I'm mainly trying to get data on the number of user visits per day, as of now, which will cater to a multichannel experience. And later on, I will start pulling more data based on targeting and segmentation. Can any one of you please help and guide me in the right direction? Any help is highly appreciated. Thank you!

    Read the article

  • How to set up dual quadro cards on RHEL 5.5?

    - by Alex J. Roberts
    I have a RHEL 5 workstation with 2 nvidia Quadro FX4500 cards, with one display attached to each card. After doing a clean install of RHEL 5.5, the second display doesn't work (it worked ok in RHEL 5.2). Neither separate X screens nor Xinerama are working. The kernel version is 2.6.18-194.el5. I've tried nvidia drivers 185.18.36 (the ones that I was using on 5.2) and the latest 260.19.36, and neither works.

    My xorg.conf is as follows:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildmeister@builder58) Fri Aug 14 18:34:43 PDT 2009

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        Section "ServerFlags"
            Option "Xinerama" "1"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/input/mice"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from data in "/etc/sysconfig/keyboard"
            Identifier "Keyboard0"
            Driver "kbd"
            Option "XkbLayout" "us"
            Option "XkbModel" "pc105"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL 3007WFP"
            HorizSync 49.3 - 98.5
            VertRefresh 60.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "DELL 3007WFP"
            HorizSync 49.3 - 98.5
            VertRefresh 60.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Quadro FX 4500"
            BusID "PCI:10:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Quadro FX 4500"
            BusID "PCI:129:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    And the Xorg log:

    X Window System Version 7.1.1 Release Date: 12 May 2006 X Protocol Version 11, Revision 0, Release 7.1.1 Build Operating System: Linux 2.6.18-164.11.1.el5 x86_64 Red Hat, Inc. Current Operating System: Linux blur.svsdsde 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 Build Date: 06 March 2010 Build ID: xorg-x11-server 1.1.1-48.76.el5 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Module Loader present Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Fri Feb 18 09:52:08 2011 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (**) | |-->Device "Device0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (**) | |-->Device "Device1" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) FontPath set to: unix/:7100 (==) RgbPath set to "/usr/share/X11/rgb" (==) ModulePath set to "/usr/lib64/xorg/modules" (**) Option "Xinerama" "1" (**) Xinerama: enabled (==) Max clients allowed: 512, resource mask: 0xfffff (II) Open ACPI successful (/var/run/acpid.socket) (II) Module ABI versions: X.Org ANSI C Emulation: 0.3 X.Org Video Driver: 1.0 X.Org XInput driver : 0.6 X.Org Server Extension : 0.3 X.Org Font Renderer : 0.5 (II) Loader running on linux (II) LoadModule: "bitmap" (II) Loading /usr/lib64/xorg/modules/fonts/libbitmap.so (II) Module bitmap: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Bitmap (II) LoadModule: "pcidata" (II) Loading /usr/lib64/xorg/modules/libpcidata.so (II) Module pcidata: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Video Driver, version 1.0 (++) using VT number 7 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 10de,005e card 103c,1500 rev a3 class 05,80,00 hdr 00 (II) PCI: 00:01:0: chip 10de,0051 card 103c,1500 rev a3 class 06,01,00 hdr 80 (II) PCI: 00:01:1: chip 10de,0052 card 103c,1500 rev a2 class 0c,05,00 hdr 80 (II) PCI: 00:02:0: chip 10de,005a card 103c,1500 rev a2 class 0c,03,10 hdr 80 (II) PCI: 00:02:1: chip 10de,005b card 103c,1500 rev a3 class 0c,03,20 hdr 80 (II) PCI: 00:04:0: chip 10de,0059 card 103c,1500 rev a2 class 04,01,00 hdr 00 (II) PCI: 00:06:0: chip 10de,0053 card 103c,1500 rev f2 class 01,01,8a hdr 00 (II) PCI: 00:07:0: chip 10de,0054 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:08:0: chip 10de,0055 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:09:0: chip 10de,005c card 0000,0000 rev a2 class 06,04,01 hdr 01 (II) PCI: 00:0a:0: chip 10de,0057 card 103c,1500 rev a3 class 06,80,00 hdr 00 (II) PCI: 00:0e:0: chip 10de,005d card 0000,0000 rev a3 class 06,04,00 hdr 01 (II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 05:05:0: chip 104c,8023 card 103c,1500 rev 00 class 0c,00,10 hdr 00 (II) PCI: 0a:00:0: chip 10de,009d card 10de,02af rev a1 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:1:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set) (II) Subtractive PCI-to-PCI bridge: (II) Bus 5: bridge is at (0:9:0), (0,5,5), BCTRL: 0x0206 (VGA_EN is cleared) (II) Bus 5 non-prefetchable memory range: [0] -1 0 0xf5000000 - 0xf50fffff (0x100000) MX[B] (II) PCI-to-PCI bridge: (II) Bus 10: bridge is at (0:14:0), (0,10,10), BCTRL: 0x000a (VGA_EN is set) (II) 
Bus 10 I/O range: [0] -1 0 0x00003000 - 0x00003fff (0x1000) IX[B] (II) Bus 10 non-prefetchable memory range: [0] -1 0 0xf3000000 - 0xf4ffffff (0x2000000) MX[B] (II) Bus 10 prefetchable memory range: [0] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B] (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:24:0), (0,0,10), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (--) PCI:*(10:0:0) nVidia Corporation Quadro FX 4500 rev 161, Mem @ 0xf3000000/24, 0xc0000000/28, 0xf4000000/24, I/O @ 0x3000/7 (II) Addressable bus resource ranges are [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] [1] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) OS-reported resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) Active PCI resource ranges: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) Active PCI resource ranges after removing overlaps: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) OS-reported resource ranges after removing overlaps with PCI: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) All system resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 
0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) LoadModule: "extmod" (II) Loading /usr/lib64/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension SHAPE (II) Loading extension MIT-SUNDRY-NONSTANDARD (II) Loading extension BIG-REQUESTS (II) Loading extension SYNC (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XC-MISC (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-Misc (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension TOG-CUP (II) Loading extension Extended-Visual-Information (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib64/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "glx" (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so (II) Module glx: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Server Extension (II) NVIDIA GLX Module 185.18.36 Fri Aug 14 18:27:24 PDT 2009 (II) Loading extension GLX (II) LoadModule: "freetype" (II) Loading /usr/lib64/xorg/modules/fonts/libfreetype.so (II) Module freetype: vendor="X.Org Foundation & the After X-TT Project" compiled for 7.1.1, module version = 2.1.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font FreeType (II) LoadModule: "type1" (II) Loading /usr/lib64/xorg/modules/fonts/libtype1.so (II) Module type1: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.2 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Type1 (II) LoadModule: "record" (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading sub module "drm" (II) LoadModule: "drm" (II) Loading /usr/lib64/xorg/modules/linux/libdrm.so (II) Module drm: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading extension XFree86-DRI (II) LoadModule: "nvidia" (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so (II) Module nvidia: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Video Driver (II) LoadModule: "kbd" (II) Loading /usr/lib64/xorg/modules/input/kbd_drv.so (II) Module kbd: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.0 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) LoadModule: "mouse" 
(II) Loading /usr/lib64/xorg/modules/input/mouse_drv.so (II) Module mouse: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.1 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) NVIDIA dlloader X Driver 185.18.36 Fri Aug 14 17:51:02 PDT 2009 (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs (II) Primary Device is: PCI 0a:00:0 (--) Chipset NVIDIA GPU found (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib64/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.3 (II) Loading sub module "wfb" (II) LoadModule: "wfb" (II) Loading /usr/lib64/xorg/modules/libwfb.so (II) Module wfb: vendor="NVIDIA Corporation" compiled for 7.1.99.2, module version = 1.0.0 (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Loading /usr/lib64/xorg/modules/libramdac.so (II) Module ramdac: vendor="X.Org Foundation" compiled for 7.1.1, module version = 0.1.0 ABI class: X.Org Video Driver, version 1.0 (II) resource ranges after xf86ClaimFixedResources() call: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) resource ranges after probing: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] 
-1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) NVIDIA(0): Option "TwinView" "0" (**) NVIDIA(0): Option "MetaModes" "nvidia-auto-select +0+0" (**) NVIDIA(0): Enabling RENDER acceleration (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) NVIDIA(0): enabled. (II) NVIDIA(0): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:10:0:0 (GPU-0) (--) NVIDIA(0): Memory: 524288 kBytes (--) NVIDIA(0): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(0): Detected PCI Express Link width: 16X (--) NVIDIA(0): Interlaced video modes are supported on this GPU (--) NVIDIA(0): Connected display device(s) on Quadro FX 4500 at PCI:10:0:0: (--) NVIDIA(0): DELL 3007WFP (DFP-0) (--) NVIDIA(0): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(0): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Assigned Display Device: DFP-0 (II) NVIDIA(0): Validated modes: (II) NVIDIA(0): "nvidia-auto-select+0+0" (II) NVIDIA(0): Virtual screen size determined to be 2560 x 1600 (--) NVIDIA(0): DPI set to (101, 101); computed from "UseEdidDpi" X config (--) NVIDIA(0): option (WW) NVIDIA(0): UBB is incompatible with the Composite extension. Disabling (WW) NVIDIA(0): UBB. (==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp (II) do I need RAC? No, I don't. 
(II) resource ranges after preInit: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) NVIDIA(GPU-1): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:129:0:0 (GPU-1) (--) NVIDIA(GPU-1): Memory: 524288 kBytes (--) NVIDIA(GPU-1): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X (--) NVIDIA(GPU-1): Interlaced video modes are supported on this GPU (--) NVIDIA(GPU-1): Connected display device(s) on Quadro FX 4500 at PCI:129:0:0: (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0) (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Initialized GPU GART. (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Loading extension NV-GLX (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized (==) NVIDIA(0): Disabling shared memory pixmaps (II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture (==) NVIDIA(0): Backing store disabled (==) NVIDIA(0): Silken mouse enabled (**) Option "dpms" (**) NVIDIA(0): DPMS enabled (II) Loading extension NV-CONTROL (==) RandR enabled (II) Setting vga for screen 0. 
(II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-APPGROUP (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension XFree86-Bigfont (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE (II) Initializing built-in extension XEVIE (II) Initializing extension GLX (WW) Disabling Composite since Xinerama is enabled (**) Option "CoreKeyboard" (**) Keyboard0: Core Keyboard (**) Option "Protocol" "standard" (**) Keyboard0: Protocol: standard (**) Option "AutoRepeat" "500 30" (**) Option "XkbRules" "xorg" (**) Keyboard0: XkbRules: "xorg" (**) Option "XkbModel" "pc105" (**) Keyboard0: XkbModel: "pc105" (**) Option "XkbLayout" "us" (**) Keyboard0: XkbLayout: "us" (**) Option "CustomKeycodes" "off" (**) Keyboard0: CustomKeycodes disabled (**) Option "Protocol" "auto" (**) Mouse0: Device: "/dev/input/mice" (**) Mouse0: Protocol: "auto" (**) Option "CorePointer" (**) Mouse0: Core Pointer (**) Option "Device" "/dev/input/mice" (**) Option "Emulate3Buttons" "no" (**) Option "ZAxisMapping" "4 5" (**) Mouse0: ZAxisMapping: buttons 4 and 5 (**) Mouse0: Buttons: 9 (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE) (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD) (--) Mouse0: PnP-detected protocol: "ExplorerPS/2" (II) Mouse0: ps2EnableDataReporting: succeeded (II) Open ACPI successful (/var/run/acpid.socket) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Mouse0: ps2EnableDataReporting: succeeded (the snipped part can be changed if necessary) Any help at all would be appreciated. Cheers, Alex

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated, knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services and subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base from the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.

    For my current address correction review, I needed to first sign up with a reference data provider on the Azure Datamarket site. For this example, I used Melissa Data's Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired provider, I navigated my browser to the Account Keys page within My Account to view the generated account key, which I then inserted into the DQS Client - Configuration under the Administration area.

    Step by Step Guide

    That was all it took to hook the subscribed provider - Melissa Data - directly into my DQS Client. The next step was to attach and map the reference data from the newly acquired provider to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen. Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data:

    - Name
    - Address
    - Suite
    - City
    - State
    - Zip

    I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third-party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider - Melissa Data's Address Check - and mapped each of my domains to the provider's schema.

    Now that my composite domain has been married to the reference data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with some progress indicators showing that it was working. When the process concluded, a helpful set of tabs placed the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Link Maven OSGi to Maven NetBeans Platform Project

    - by mxro
    I am using NetBeans 6.9 Beta and I would like to accomplish the following:

    - Set up a project representing the main application using Maven (for instance "Maven Project" or "Maven NetBeans Application").
    - Ideally, the project should only contain the necessary libraries to run in Apache Felix (I would like to be able to right-click the project and select "Run in Felix"). I do not want the project to contain all the NetBeans Platform APIs.
    - I would prefer to implement the modules using OSGi, for instance "Maven OSGi Bundle" or "Maven NetBeans Module" + OSGi.

    These are the problems which I have at the moment:

    - The standard Maven archetype ("Maven NetBeans Application") seems always to select all APIs, and I have not found a way to deselect APIs - in normal NetBeans Platform Applications that can be accomplished by going to the project properties and deselecting the platform modules. I guess it has something to do with the NetBeans repository (http://bits.netbeans.org/maven2)? Do I have to create another repository?
    - When creating normal "NetBeans Modules" with OSGi support, the modules contain both NetBeans Module and OSGi metadata, which is nice. But the "Maven NetBeans Modules" have only NetBeans metadata, and the Maven OSGi Bundles have only OSGi metadata.
    - I figured out how to add modules to the project by using project / new and then placing the modules in the Maven project folder. However, I do not quite know yet how I could link to modules from other locations (NetBeans uses Maven modules, which have to be in the same directory as the project?).

    Below are some useful links for Maven + OSGi in NetBeans (sorry about the missing protocol, but I couldn't post the message otherwise):

    - wiki.netbeans.org/STS_69_Maven_OSGI - NetBeans Maven OSGi Test Specification
    - platform.netbeans.org/tutorials/nbm-maven-quickstart.html - NetBeans Platform Quick Start Using Maven (6.9)
    - wiki.netbeans.org/MavenBestPractices - NetBeans Maven Best Practices
    - maven.apache.org/pom.html#Aggregation - Maven Documentation: Multi-Module Projects

    Read the article

  • I'd like to rebuild my web server without web management software; what knowledge, skills, and tools will I require? [closed]

    - by Joe Zeng
    I've been using Webmin on the web server that runs my personal website and a host of other websites for a while now, and I feel I should be able to manage the server more directly. I haven't even touched Webmin for the past year or so, and I feel like it has more functionality than I need to click through whenever I want to access it or create a new subdomain or database on my site. I want to try something lighter and more wholly manageable, now that I'm more comfortable with using ssh and command-line tools. I've decided that I'm going to try using Django as a framework, but obviously that's only part of the picture. What sort of knowledge will I require?

    Read the article

  • How do I use beta Perl modules from beta Perl scripts?

    - by DVK
    My Perl code has a production code location and a "beta" code location (e.g. production Perl scripts are in /usr/code/scripts and BETA Perl scripts are in /usr/code/beta/scripts; production Perl libraries are in /usr/code/lib/perl and BETA versions of those libraries are in /usr/code/beta/lib/perl). Is there an easy way for me to achieve such a setup? The exact requirements are:

    - The code must be THE SAME in production and BETA locations. To clarify, to promote any code (library or script) from BETA to production, the ONLY thing which needs to happen is literally issuing a cp command from the BETA to the prod location - both the file name AND file contents must remain identical.
    - BETA versions of scripts must call other BETA scripts and BETA libraries (if they exist) or production libraries (if BETA libraries do not exist).
    - The code paths must be the same between BETA and production, with the exception of the base directory (/usr/code/ vs /usr/code/beta/).

    I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.
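
    For comparison, here is the shape of one common solution, sketched in Python for brevity (the question itself is about Perl, where FindBin typically supplies what __file__ supplies below; the directory layout is the question's own, the rest is illustrative):

        import os
        import sys

        # Resolve the base directory (/usr/code or /usr/code/beta) from where
        # this script actually lives, so the identical file works in both trees.
        script_dir = os.path.dirname(os.path.abspath(__file__))  # .../scripts
        base_dir = os.path.dirname(script_dir)                   # .../code or .../code/beta
        sibling_lib = os.path.join(base_dir, "lib", "perl")
        prod_lib = "/usr/code/lib/perl"

        # Search the sibling (possibly BETA) tree first, production second, so
        # each library falls back to production when no BETA copy exists.
        sys.path.insert(0, prod_lib)
        if sibling_lib != prod_lib:
            sys.path.insert(0, sibling_lib)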

    Read the article

  • How to import modules that are used in both the main code and a module correctly?

    - by Deniz
    Let's assume I have a main script, main.py, that imports another Python file with 'import coolfunctions' and another with 'import chores'. Now, suppose coolfunctions also uses stuff from chores, hence I declare 'import chores' inside coolfunctions. Since both main.py and coolfunctions import chores ~ is this redundant? Is there any other way of doing this? Am I doing it correctly? I'm confused about how Python projects should be structured in general. I have a "conf.py" file that I import for a bunch of variables ~ is this a module or not? I load this conf file in multiple places as well.
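
    For what it's worth, a minimal sketch of why the double import is neither redundant nor harmful (file names are the question's; the behavior shown is standard Python module caching):

        # chores.py
        print("chores.py is executing")   # appears only once per process
        BROOM = "broom"

        # coolfunctions.py
        import chores                     # a second import elsewhere is cheap:
                                          # Python returns the cached module
        def sweep():
            return "sweeping with the " + chores.BROOM

        # main.py
        import sys
        import chores
        import coolfunctions

        print(coolfunctions.sweep())              # sweeping with the broom
        print(sys.modules["chores"] is chores)    # True: one shared module object

    So importing chores in both places is the correct style, and a conf.py imported for its variables is indeed just another module.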

    Read the article

  • Strange behavior with Powershell scriptblock variable scope and modules, any suggestions?

    - by DanMan
    NOTE: I'm using PowerShell 2.0 on Windows Vista. I'm trying to add support for specifying build arguments to psake, but I've run into some strange PowerShell variable scoping behavior dealing specifically with calling functions that have been exported using Export-ModuleMember (which is how psake exposes its main method). Following is a simple PowerShell module to illustrate (named repoCase.psm1):

        function Test {
            param(
                [Parameter(Position=0,Mandatory=0)]
                [scriptblock]$properties = {}
            )
            $defaults = {$message = "Hello, world!"}
            Write-Host "Before running defaults, message is: $message"
            . $defaults
            # At this point, $message is correctly set to "Hello, world!"
            Write-Host "After running defaults, message is: $message"
            . $properties
            # At this point, I would expect $message to be set to whatever was
            # passed in, which in this case is "Hello from properties!", but it isn't.
            Write-Host "After running properties, message is: $message"
        }
        Export-ModuleMember -Function "Test"

    To test the module, run the following sequence of commands (be sure you're in the same directory as repoCase.psm1):

        Import-Module .\repoCase.psm1
        # Note that $message should be null
        Write-Host "Before execution - In global scope, message is: $message"
        Test -properties { "Executing properties, message is $message"; $message = "Hello from properties!"; }
        # Now $message is set to the value from the script block.
        # The script block affected only the global scope.
        Write-Host "After execution - In global scope, message is: $message"
        Remove-Module repoCase

    The behavior I expected was for the script block I passed to Test to affect the local scope of Test. It is being dot-sourced in, so any changes it makes should be within the scope of the caller. However, that's not what's happening; it seems to be affecting the scope where it was declared. Here's the output:

        Before execution - In global scope, message is:
        Before running defaults, message is:
        After running defaults, message is: Hello, world!
        Executing properties, message is
        After running properties, message is: Hello, world!
        After execution - In global scope, message is: Hello from properties!

    Interestingly, if I don't export Test as a module and instead just declare the function and invoke it, everything works just like I would expect it to. The script block affects only Test's scope, and does not modify the global scope. I'm not a PowerShell guru, but can someone explain this behavior to me?
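    The usual explanation is that a script block stays bound to the session state where it was created, so dot-sourcing it inside a module-exported function runs it back in its home (global) scope. One commonly suggested workaround - offered here as a hedged sketch, not the asker's accepted fix - is to re-create the block as an unbound script block, which then binds to the scope where it is invoked:

        # Inside Test, instead of:  . $properties
        # re-create the block from its text; the copy is unbound and therefore
        # executes in the current (module function) scope when dot-sourced:
        $unbound = [scriptblock]::Create($properties.ToString())
        . $unbound
        # $message assigned by the caller's block is now visible in Test's own scope.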

    Read the article

  • Looking for an easy CMS system with flexible plugins, modules, blocks, themes?

    - by judi
    Hi, I'm a big user of WordPress; however, it's not ideal for CMS sites. I'm currently looking at Pixie CMS, but if, say, I want three columns, only one column can be allowed to change across pages, and I can't apply specific templates to pages. I'm also looking at ImpressCMS, which looks confusing in the admin although it has good reviews. Does anyone have any suggestions for a flexible CMS for a designer who only needs to edit the HTML and CSS and maybe add a news feed in one column? Thanks for your help. Regards, Judi

    Read the article

  • Python modules not updating after restarting the main module.

    - by Ian
    I've recently come back to a project after having had to stop for about 6 months, and after reinstalling my operating system and coming back to it I'm having all kinds of crazy things happen. I made sure to install the same version (2.6) of Python that I was using before. It started by giving me a strange Tkinter error that I hadn't had trouble with before; the program is relatively simple, and the 2 or 3 bugs that were left when I quit were documented and weren't related to the interface. Things got even weirder when the same error would pop up even after I had removed the offending section of code. In fact, the traceback pointed to a line that didn't even exist in the module it was referencing, e.g. line 262 when the module was only 200 lines long. After starting a completely new file for the main module and copy/pasting, it finally recognized that the offending code was gone and I stopped getting the error, only to find that any updates to the code I made in another module didn't show up when I restarted the program through the shell. (I didn't forget to save.) After fiddling with this, of course, the old interface error came back, only in a different section of code that had been working previously. In fact, if I revert back to the files I had six months ago, the program works fine. As soon as I change anything in the main module, however, the interface bug comes back. Here's the original error:

        Exception in Tkinter callback
        Traceback (most recent call last):
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
            return self.func(*args)
          File "C:\PyStuff\interface.py", line 202, in dispOne
            __main__.top.destroy()
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1938, in destroy
            self.tk.call('destroy', self._w)
        TclError: can't invoke "destroy" command: application has been destroyed

    I'm guessing something else is going on here other than my own poor programming. Anyone have any ideas?
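    Tracebacks that point at line numbers past the end of a file are a classic symptom of stale compiled bytecode (.pyc) being loaded instead of the edited source. A hedged diagnostic sketch - the project path is taken from the traceback above and is an assumption:

        # Remove stale compiled bytecode so the next run recompiles from the
        # current .py sources. Works on Python 2.6; project_root is assumed.
        import os

        project_root = r"C:\PyStuff"
        for dirpath, dirnames, filenames in os.walk(project_root):
            for name in filenames:
                if name.endswith((".pyc", ".pyo")):
                    full = os.path.join(dirpath, name)
                    os.remove(full)
                    print("removed %s" % full)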

    Read the article

  • How important is it to use short names for Python packages and modules?

    - by Dan
    PEP 8 says that Python package and module names should be short, since some file systems will truncate long names. And I'm trying to follow Python conventions in a new project. But I really like long, descriptive names. So I'm wondering: how short do names need to be to comply with PEP 8? And does anyone really worry about this anymore? I'm tempted to ignore this recommendation and use longer names, thinking this isn't all that relevant anymore. Does anyone think this recommendation is still worth following? If yes, why? And how short is short enough?

    Read the article

  • How to create a cross-platform application, doing the interface modules (Mac/Qt/GTK+) in a totally

    - by Somebody still uses you MS-DOS
    I'm amazed at Transmission, a BT client. It has a Mac, a GTK+, a Qt, a web client, and a CLI interface. I tried reading some of its source to understand how the developer creates all these interfaces, but no luck. Does the developer create them using a single IDE? Or does he create the interface logic in each specific environment (especially Mac), "export" this window code, and integrate it with the main logic? Is it possible to create that Mac interface in another OS using an IDE? How did the developers create this software with so many interfaces, in an independent way?

    Read the article

  • Modules vs. Classes and their influence on descendants of ActiveRecord::Base

    - by Chris
    Here's a Ruby OO head-scratcher for ya, brought about by this Rails scenario:

        class Product < ActiveRecord::Base
          has_many(:prices)
          # define private helper methods
        end

        module PrintProduct
          attr_accessor(:isbn)
          # override methods in ActiveRecord::Base
        end

        class Book < Product
          include PrintProduct
        end

    Product is the base class of all products. Books are kept in the products table via STI. The PrintProduct module brings some common behavior and state to descendants of Product. Book is used inside fields_for blocks in views. This works for me, but I found some odd behavior: after form submission, inside my controller, if I call a method on a book that is defined in PrintProduct, and that method calls a helper method defined in Product, which in turn calls the prices method defined by has_many, I'll get an error complaining that Book#prices is not found. Why is that? Book is a direct descendant of Product! More interesting is the following. As I developed this hierarchy, PrintProduct started to become more of an abstract ActiveRecord::Base, so I thought it prudent to redefine everything as such:

        class Product < ActiveRecord::Base
        end

        class PrintProduct < Product
        end

        class Book < PrintProduct
        end

    All method definitions, etc. are the same. In this case, however, my web form won't load, because the attributes defined by attr_accessor (which are "virtual attributes" referenced by the form but not persisted in the DB) aren't found. I'll get an error saying that there is no method Book#isbn. Why is that? I can't see a reason why the attr_accessor attributes are not found inside my form's fields_for block when PrintProduct is a class, but they are found when PrintProduct is a module. Any insight would be appreciated. I'm dying to know why these errors are occurring!
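    As a point of reference, in plain Ruby (no ActiveRecord involved) both arrangements put the definer of isbn on Book's ancestor chain, which is why the difference above is surprising. A minimal, runnable sketch with illustrative names:

        # Including a module and subclassing both insert the definer into the
        # method lookup chain; outside ActiveRecord, attr_accessor behaves the
        # same either way.
        module AsModule
          attr_accessor :isbn
        end

        class ViaInclude
          include AsModule
        end

        class Parent
          attr_accessor :isbn
        end

        class ViaSubclass < Parent
        end

        p ViaInclude.ancestors.first(3)        # [ViaInclude, AsModule, Object]
        p ViaInclude.new.respond_to?(:isbn)    # true
        p ViaSubclass.new.respond_to?(:isbn)   # true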

    Read the article

  • Change default global installation directory for node.js modules in Windows?

    - by gremo
    In my Windows installation, PATH includes C:\Program Files\nodejs, where the executable node.exe is. I'm able to launch node from the shell, as well as npm. I'd like new executables to be installed in C:\Program Files\nodejs as well, but it seems impossible to achieve. Setting the NODE_PATH and NODE_MODULES variables doesn't change anything: things are still installed in %appdata%\npm by default. How can I change the global installation path?
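    For context, npm takes its global install location from its own prefix setting rather than from NODE_PATH (which only affects require() resolution at runtime). A hedged sketch of the usual approach - some-package is a placeholder, and writing under Program Files may require an elevated shell:

        :: Show where global installs currently go (typically %appdata%\npm):
        npm config get prefix

        :: Point the prefix at the desired directory:
        npm config set prefix "C:\Program Files\nodejs"

        :: Subsequent global installs land under the new prefix:
        npm install -g some-package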

    Read the article

  • How can I profile a subroutine without using modules?

    - by Zaid
    I'm tempted to relabel this question 'Look at this brick. What type of house does it belong to?' Here's the situation: I've effectively been asked to profile some subroutines while having access to neither profilers (not even Devel::DProf) nor Time::HiRes. The purpose of this exercise is to 'locate' bottlenecks. At the moment, I'm sprinkling print statements at the beginning and end of each sub that log entries and exits to a file, along with the result of the time function. Not ideal, but it's the best I can do given the circumstances. At the very least it'll allow me to see how many times each sub is called. The code is running under Unix. The closest thing I see to my need is perlfaq8, but that doesn't seem to help (I don't know how to make a syscall, and I'm wondering if it'll affect the code timing unpredictably). Not your typical everyday SO question...
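    A hedged sketch of the print-based approach described above, using only core Perl: the built-in times function reports CPU seconds at (typically) 1/100-second granularity, which is finer than time's whole seconds. The subroutine name is illustrative:

        # Wrap the body of each sub under suspicion; times() is a core builtin,
        # so no modules are needed. Resolution is typically 0.01s of CPU time.
        sub some_sub {
            my ($u0, $s0) = times;
            print STDERR "ENTER some_sub\n";

            # ... original body of the sub ...

            my ($u1, $s1) = times;
            printf STDERR "LEAVE some_sub: %.2fs user, %.2fs system CPU\n",
                $u1 - $u0, $s1 - $s0;
        }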

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >