Search Results

Search found 353 results on 15 pages for 'recompile'.


  • How to change the BIOS splash screen on HP Pavilion G4 1303AU?

    - by avirk
    My cousin got (legally) a laptop, model HP Pavilion G4 1303AU, from the government of my state. The laptop dual-boots Ubuntu and Windows 7. Everything is great except the BIOS logo at boot, which I would like to change but don't know how. I tried to flash a new BIOS firmware, but this didn't change the logo. I found this link for the same model and the same problem, but no success there either. I also saw this blog post, which wasn't helpful except for one comment that mentioned an image in .ROM format. Not wishing to decompile and recompile the BIOS, can someone help me out with a step-by-step guide?

    Read the article

  • VirtualBox error VERR_SUPDRV_COMPONENT_NOT_FOUND

    - by ant2009
    I am using Fedora 12, with kernel stv-fedora 2.6.31.12-174.2.22.fc12.i686. I have installed VirtualBox OSE v3.1.4-1.fc12. When I start VirtualBox I get the following error: Failed to start the virtual machine WinXP32. Failed to open/create the internal network 'HostInterfaceNetworking-eth0' (VERR_SUPDRV_COMPONENT_NOT_FOUND). One of the kernel modules was not successfully loaded. Make sure that no kernel modules from an older version of VirtualBox exist. Then try to recompile and reload the kernel modules by executing '/etc/init.d/vboxdrv setup' as root (VERR_SUPDRV_COMPONENT_NOT_FOUND). However, if I run /etc/init.d/vboxdrv setup I get this error: -bash: /etc/init.d/vboxdrv: No such file or directory. How do I get rid of the error and run VirtualBox?
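
    A likely explanation is that the /etc/init.d/vboxdrv setup script ships only with the upstream virtualbox.org packages, while Fedora's OSE build delivers its kernel modules as separate kmod packages (an assumption worth verifying). A first check is whether those modules are installed and loadable at all (a sketch):

        rpm -qa | grep -i virtualbox    # is a kmod package for the running kernel installed?
        lsmod | grep vbox               # are the VirtualBox modules loaded?
        sudo modprobe vboxdrv           # main driver
        sudo modprobe vboxnetflt        # provides host-interface networking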

    Read the article

  • How to completely disable Mysqli

    - by Boon
    It seems the new hacker tool refref has been launched, and apparently it abuses a bug in the mysqli extension. Now I do not use mysqli at all for my websites, so I thought the best way to fight off this refref tool was to completely disable mysqli. These are the settings I have in my php.ini. Is there a way I can disable mysqli completely without having to recompile PHP?

        ;extension=php_mysqli.dll
        [MySQLi]
        mysqli.max_persistent = -1
        ;mysqli.allow_local_infile = On
        mysqli.allow_persistent = On
        mysqli.max_links = -1
        mysqli.cache_size = 2000
        mysqli.default_port = 3306
        mysqli.default_socket =
        mysqli.default_host =
        mysqli.default_user =
        mysqli.default_pw =
        mysqli.reconnect = Off
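
    If mysqli was built as a shared extension rather than compiled into PHP, disabling it is a matter of removing the line that loads it (a sketch; the config paths are assumptions for a typical Linux install):

        php -m | grep -i mysqli                      # is the extension currently active?
        grep -r mysqli /etc/php.ini /etc/php.d/      # find where it gets loaded
        # comment out the matching extension= line, restart the web server, then:
        php -m | grep -i mysqli                      # should print nothing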

    Read the article

  • Damaged XenServer Storage LVM partition table

    - by Fiolek
    I have a home server running under XenServer control with 3x1TB discs inside: one for XenServer and two mirrored (using Intel's fakeRAID and dmraid) for VMs and user data (though now I think the RAID didn't actually work). I tried to pass a PCI card to a VM using PCI passthrough, and I read somewhere that I need to recompile the kernel with the pciback module, but something went wrong (I made a mistake in /boot/extlinux.conf and the server couldn't boot) and I had to use a GParted LiveCD (I already had it on a USB key) to correct this. But when I re-ran the server, all VDIs were gone. I have completely no idea what could have gone wrong. I tried to repair the RAID using dmraid -R in the hope that everything would return to normal, but now I think this did more harm than good (and corrupted the rest of the LVM table...). Is there any possibility to recover this SR, or at least the data from one (~100GB) of the VDIs? I also want to apologise for my English; I'm not from an English-speaking country and I'm only 16 years old, so I haven't had much chance to learn it properly.
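
    Before writing anything else to the discs, it is worth seeing what LVM metadata is still visible; LVM keeps text backups of the volume group layout that vgcfgrestore can replay (a sketch of read-only checks; the VG name is an assumption, XenServer SRs are normally named VG_XenStorage-<uuid>):

        pvscan; vgscan; lvscan                     # read-only: what does LVM still see?
        ls /etc/lvm/backup /etc/lvm/archive        # automatic metadata backups, if any survived
        vgcfgrestore --list VG_XenStorage-<uuid>   # list restorable metadata versions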

    Read the article

  • List all devices of system

    - by pareshverma91
    In my understanding, Linux can list only those devices which it can understand, i.e. for which the drivers have been installed. I think lspci is the command for that. But how can one figure out if there exists some device in the system for which the drivers are not installed, and perhaps get some hint about what that device is for and what driver would suffice for it? I would like to know this so as to be able to recompile my Linux kernel down to a minimum, and would like to avoid a trial-and-error approach.
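
    In fact lspci enumerates every PCI device whether or not a driver is bound; the -k flag shows which kernel driver (if any) claimed each one, which is exactly the gap this question is after (a sketch):

        lspci -k     # per device: no "Kernel driver in use:" line means nothing is bound
        lsusb        # the USB equivalent
        lspci -nn    # vendor:device IDs, useful for looking up which driver/config option fits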

    Read the article

  • PHP5 module makes Apache 1.3 segfault

    - by tehalive
    I am trying to get PHP4 and PHP5 to work with Apache 1.3. PHP4 is compiled as a module and currently works fine, although Apache displays the following warning upon starting: Loaded DSO libphp4.so uses plain Apache 1.3 API, this module might crash under EAPI! (please recompile it with -DEAPI). So I compiled PHP5 from the latest source. I get the same warning, now twice, once for each PHP module, but then Apache gets a segmentation fault when loading the PHP4 + PHP5 modules together. I have tried compiling PHP5 with apxs and without. It does appear to be using the -DEAPI flag, so maybe this isn't related to the segfault. The flags I am using to configure PHP5: ./configure --with-mysql --with-zlib --disable-cgi --with-apxs=/www/bin/apxs
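
    A quick way to confirm whether the EAPI flag is really injected at build time is to query apxs itself, since PHP's build picks up its compiler flags from there (a sketch):

        /www/bin/apxs -q CFLAGS   # on an EAPI (mod_ssl-patched) Apache this should include -DEAPI
        /www/bin/apxs -q CC       # and which compiler the module gets built with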

    Read the article

  • CENTOS: unsupported dictionary type: sqlite in POSTFIX

    - by Ferdinand
    Oct 30 09:24:15 postfix postfix/smtpd[1622]: fatal: unsupported dictionary type: sqlite
    Oct 30 09:24:16 postfix postfix/master[1165]: warning: process /usr/libexec/postfix/smtpd pid 1622 exit status 1
    Oct 30 09:24:16 postfix postfix/master[1165]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    I'm trying to use sqlite with Postfix, but I get the error above. I'm using CentOS 6.4 x64, and I have sqlite and sqlite-devel installed too. I'm assuming the Postfix from BASE (the CentOS repo) comes without sqlite support? I've not been able to recompile it with sqlite support using this: http://www.postfix.org/SQLITE_README.html. Is there another way to get it to work?
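
    Whether a Postfix binary supports a dictionary type can be checked directly, and the SQLITE_README build boils down to regenerating the makefiles with the right flags (a sketch; extra include/lib paths may be needed on your system):

        postconf -m                    # lists supported map types; 'sqlite' must appear here
        # rebuilding from source per the README:
        make makefiles CCARGS='-DHAS_SQLITE' AUXLIBS='-lsqlite3 -lpthread'
        make && make install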

    Read the article

  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    On openSUSE 11.1 I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling (make uninstall) and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw- Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from the rpm? Note that I don't necessarily care a lot about ImageMagick (since it now works), but generally about this kind of behaviour, because in one way or another I've seen this happen on other occasions as well.
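
    The usual culprit is that configure could not find the JPEG headers and silently built without the delegate; --with-jpeg=yes does not install libjpeg, it only asks for it. Two checks worth making (a sketch; the -devel package name is an assumption for openSUSE):

        identify -list configure | grep -i delegates   # what the running binary was built with
        zypper install libjpeg-devel                   # headers must exist before ./configure
        ./configure --prefix=/usr/local/ImageMagick 2>&1 | grep -i jpeg   # summary says yes/no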

    Read the article

  • PHP5 without PHP4 compatibility

    - by Serty Oan
    This is the opposite question of this one. My point is that when you write only PHP5 code, there is no need for PHP4 compatibility. It is not helpful at all, and due to the differences between the object models of those two versions, it must penalize interpreter performance (or am I wrong?). So my questions are: is it possible to recompile PHP5 without backward compatibility with PHP4, or to disable it in any other way to gain performance? If it is not possible, is there a project somewhere which aims at that goal?

    Read the article

  • Fedora12 Slow USB 2.0 Write Speed, ehci_hcd module is missing

    - by MA1
    I am using Fedora 12; the problem I am facing is USB 2.0 write speed. I have a dual-boot system with Windows XP and Fedora 12, and the USB 2.0 write speed in Windows XP is much faster than what I am getting in Fedora 12. After searching Google I came to know that the ehci_hcd module is missing/not present on my system: ehci_hcd is neither loaded nor present in the list of available modules. Can someone guide me how to fix this issue? Does ehci_hcd have something to do with USB 2.0 write speed? Do I have to recompile the kernel and add/enable the ehci_hcd module?
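
    ehci_hcd is the USB 2.0 host controller driver, so without it every port falls back to the 12 Mbit/s uhci/ohci path, which would match the slow writes. On many kernels it is built in rather than a module, which would also explain its absence from lsmod; these checks distinguish the cases (a sketch):

        grep -i ehci /boot/config-$(uname -r)   # CONFIG_USB_EHCI_HCD=y (built in) or =m (module)
        dmesg | grep -i ehci                    # did an EHCI controller get claimed at boot?
        lspci | grep -i usb                     # does the hardware expose an EHCI controller at all?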

    Read the article

  • Why is Mac OS X 10.6 using /usr/lib to start Apache when I compiled PHP using /opt/local/lib?

    - by Anthony
    PHP 5.3.3 compiled on Mac OS X 10.6 is using /usr/lib when trying to start Apache, rather than the /opt/local/lib specified when PHP was configured. Why is it trying to load from /usr/lib when I specified in my configure not to?

        httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/libphp5.so into server: dlopen(/usr/libexec/apache2/libphp5.so, 10): Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /usr/libexec/apache2/libphp5.so
          Reason: Incompatible library version: libphp5.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    The error message above refers to /opt/local/lib, but when I run:

        otool -LD /opt/local/lib/libiconv.2.dylib

    I get:

        /opt/local/lib/libiconv.2.dylib:
        /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0)

    This shows a different version than the one httpd is erring out on. I have a feeling I need to recompile Apache using newer libraries, but the error message still doesn't make much sense to me.
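
    The mismatch suggests the dynamic linker is resolving libiconv.2.dylib from somewhere other than /opt/local/lib at runtime; these commands show what the module links against and which copy actually gets loaded (a sketch):

        otool -L /usr/libexec/apache2/libphp5.so | grep iconv   # the install name baked into the module
        otool -L /usr/lib/libiconv.2.dylib | head -2            # the system copy's version (likely 7.x)
        sudo DYLD_PRINT_LIBRARIES=1 /usr/sbin/httpd -X 2>&1 | grep iconv   # which copy dyld really loads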

    Read the article

  • Kernel panic error

    - by cioby23
    We have a dedicated server with software RAID 1, and one of the disks failed recently. The disk was replaced, but after rebuilding the array and rebooting, the server freezes with a kernel panic message:

        No filesystem could mount root, tried: reiserfs ext3 ext2 cramfs msdos vfat iso9660 romfs fuseblk xfs
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,1)

    The filesystem on both disks is ext4, and ext4 is missing from the list the kernel tried, so it seems the kernel can't load ext4 support. Is there any way to add ext4 support, or do I need to recompile a new kernel? The interesting point is that before the disk replacement, all was fine. The kernel is a stock kernel, bzImage-2.6.34.6-xxxx-grs-ipv6-64, from our provider OVH. Kind regards,
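
    Whether ext4 is compiled in can be read straight from the kernel config, if one is available for that image (a sketch, run from a rescue system; the config path is an assumption):

        grep EXT4 /boot/config-2.6.34.6-xxxx-grs-ipv6-64   # =y means built in, =m needs an initrd
        # unknown-block(9,1) is md1 (major 9 is software RAID), so also confirm the array assembled:
        cat /proc/mdstat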

    Read the article

  • Full HD video playback acceleration with mplayer on Ubuntu Lucid

    - by pts
    I know that for an NVIDIA card I can sudo apt-get install nvidia-current mplayer, reboot, and then use mplayer -vo vdpau -vc ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau FILE.mkv to get accelerated playback of H.264 and other codecs, so even full-HD videos can be played back with very little CPU. (And there are many other options; e.g. XBMC also supports VDPAU.) But how do I get accelerated video playback if I have a recent ATI or Intel video card on Ubuntu Lucid? How do I figure out if my video card has acceleration built in? The solution has to work with mplayer or mplayer2. It's OK for me to recompile mplayer(2), but I'd prefer installing both the kernel and the X.org X server from a binary package repository.
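
    On Intel and ATI hardware the rough equivalent of VDPAU is VA-API; whether the card and driver expose any decode profiles can be probed first (a sketch; the vainfo package's availability on Lucid and a VA-API-enabled mplayer build are both assumptions):

        sudo apt-get install vainfo
        vainfo                                   # lists supported decode profiles, if any
        mplayer -vo vaapi -va vaapi FILE.mkv     # needs an mplayer built with the mplayer-vaapi patches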

    Read the article

  • How do you change the "scan this dir for additional ini files" path?

    - by amvx
    I managed to get the custom INI to load, but it's still loading other .ini files from the default location. I created an fcgi wrapper that passed the ini value as a parameter; that worked. Now these other ini files need to be loaded from the same dir as my custom ini. The problem is that the other .ini files are overriding the settings in my custom php.ini =/ I realize the problem now is that the php.fcgi was compiled with a custom path parameter, so that's a problem. I might have to recompile it using a different location, or none at all. I'd hate to have to compile an fcgi for each domain =/
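
    The compiled-in scan directory can be inspected, and in reasonably recent PHP versions overridden per environment without recompiling, via the PHP_INI_SCAN_DIR variable (a sketch, assuming the fcgi wrapper can export environment variables; the per-domain path is hypothetical):

        php -i | grep -i "additional .ini"       # shows the scan dir the binary was compiled with
        export PHP_INI_SCAN_DIR=                 # empty value disables scanning extra ini files
        export PHP_INI_SCAN_DIR=/etc/php.d/example.com   # or point it at a per-domain directory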

    Read the article

  • reinstall PHP as apache module

    - by Ewan
    Hi, sorry for the newbie question; I am not a sysadmin by any means. I recently set up a VPS with Webmin/Virtualmin. Our CMS of choice says that PHP needs to be run as an Apache module in order to work, and that I should: Recompile PHP with the flag --with-apxs. I've googled but can't find how to do this specifically. I think I installed PHP using "yum install php" or through Webmin, but can't remember exactly. I'd appreciate it if anyone can point me in the right direction.
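
    On a yum-based VPS the stock php package usually already ships mod_php, so it may just need enabling; failing that, a source build needs apxs, which comes with httpd-devel (a sketch; the paths and the Apache 2 assumption are mine):

        yum list installed | grep php        # is the distro mod_php already there?
        httpd -M 2>/dev/null | grep php      # is php_module loaded into Apache?
        yum install httpd-devel              # provides apxs for a source build
        ./configure --with-apxs2=/usr/sbin/apxs   # Apache 2 uses --with-apxs2, not --with-apxs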

    Read the article

  • Clean Install: Ubuntu 8.04, 10.04 or something else?

    - by Robert S. Barnes
    I'm faced with a pending clean install due to a dying hard drive in my old Dell Inspiron 8600 laptop. I've been running Ubuntu 8.04 on it since... 8.04 and I've been more or less happy with it except that it's a PITA to recompile the kernel or do any other kernel related work. I mostly do software dev on it, gcc, gvim, c/c++/perl/php/mysql and running vmware server 2.0. I've heard mixed reviews of 10.04, and am wondering what to put on the new HD. I'm even considering just sticking with 8.04 as it seems to mostly meet my needs. Any suggestions?

    Read the article

  • Help: My SSL Certificate expired but I can't renew this weekend, is there a way I can disable SSL through IIS?

    - by shogun
    Or perhaps force a redirect to HTTP when they request HTTPS? I tried removing the SSL port settings under 'Advanced', but it broke the login page. I don't want to do a deploy/recompile right now, but I am able to edit the VIEWS. However, I am thinking there may be a way for IIS to do this, but then again I think the .NET code tries to force SSL. Crap. This is bull crap because I was bugging people about getting the new certificate since December and they were like, oh, we will take care of it... I think it may have even gotten to the point where they were getting annoyed with me hassling them about it! (rant over.) And yes, the key is that more support tickets will be caused by the browser giving a security error than if SSL was removed entirely for one day, because users would very likely not even notice it being gone.

    Read the article

  • Slow speed for UFS mounted drive in Linux

    - by Incredible
    Hi, I have a disk from a Sun OS machine (UFS filesystem), and I want to mount it on my Debian machine in read/write mode. Since by default Linux doesn't support writing to UFS filesystems, I had to recompile the kernel with the flag CONFIG_UFS_FS_WRITE=y. Now I am able to write to the filesystem, but the read/write speed is very slow, around 120 KB/s. Any idea what is wrong and how to resolve this issue? Thank you in advance.
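
    UFS has several on-disk flavours, and mounting a Solaris-created filesystem without the right ufstype option can hurt both correctness and speed; it is worth checking how the disk is being mounted (a sketch; the device name is hypothetical, and 'sun' assumes a SPARC Solaris filesystem, x86 Solaris uses ufstype=sunx86):

        mount -t ufs -o ufstype=sun,rw /dev/sdb1 /mnt/sunos   # name the UFS flavour explicitly
        dmesg | tail                                          # the ufs driver logs what it detected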

    Read the article

  • Visual Studio 2010 locking referenced Assembly

    - by cunningdave
    I have a sandbox app that is built from the simple WPF Application template. This sandbox references an assembly that I am also building which contains the definition of a UserControl (WPF). I am instantiating this user control in the sandbox, to test the control's behaviour. The point of all this is to speed up development. This worked fine, but recently the .Vshost.exe paired with the sandbox process won't shut down. This prevents me from recompiling the Controls library, though ironically I can recompile the sandbox application. I can't kill the vshost process with Task Manager... only restarting VS2010 will clear it out. But every time I run the application from VS, the process just hangs there, blocking my workflow. I'm at a loss. Any ideas what could be causing this? Or does someone have any proposed workaround (mega-kill switch, perhaps?)

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

    In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan joins all four tables with Nested Loops (the plan image appears in the original post).

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate. Every join in the plan estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post.

    In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.

    We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit – I’ll come back to that later on).

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.

    The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term ‘eager’ means that the spool consumes all of its input rows when it starts up. The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing.

    The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course).

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing.

    I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Look at the hash join plan again: two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

    Ok, so now we understand the problems, what can we do to fix it? We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It’s there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad’s original post for more details. My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Avoid Memory Leaks in SharePoint2010 Development

    - by ybbest
    When you develop SharePoint solutions in code, you need to dispose of SPWeb objects appropriately to avoid memory leaks. The general guidelines are:

        Dispose:          SPWeb returned by OpenWeb
        Do not dispose:   SPWeb obtained by enumerating Webs or AllWebs
                          ParentWeb
                          RootWeb
                          SPWeb from SPContext

    There are more rules than the ones listed above, and as a smart SharePoint developer you do not have to memorize them all: there is a tool called SharePoint Dispose Checker (SPDisposeCheck) which can help you find potential memory leaks. To use SPDisposeCheck in your solution, download the tool from the MSDN Code Gallery and install it on your development machine as follows:

    1. Run the installer with elevated privileges.
    2. Accept the agreement and click Next.
    3. Select those two options and click Next.
    4. Select Everyone and click Next.
    5. Go to Tools -> SharePoint Dispose Check to configure SPDisposeCheck.
    6. You can change "Treat problems as Errors" to "Warnings".
    7. After clicking Save, you are all set to use the tool. Recompiling my project, I get the result below.

    References: SharePoint 2007/2010 "Do Not Dispose Guidance" + SPDisposeCheck

    Read the article

  • Unable to Install VirtualBox Due to Missing Kernel Module

    - by SoftTimur
    I am trying to install VirtualBox on my Ubuntu system. I first tried sudo apt-get install virtualbox-ose in a terminal, but after the configuration step it fails with an error: No suitable module for running kernel found. When proceeding with starting VirtualBox, I get this error:

        WARNING: The character device /dev/vboxdrv does not exist.
        Please install the virtualbox-ose-dkms package and the appropriate
        headers, most likely linux-headers-generic.
        You will not be able to start VMs until this problem is fixed.

    So I tried the package from http://www.virtualbox.org/, but starting VirtualBox fails with:

        WARNING: The vboxdrv kernel module is not loaded. Either there is no module
        available for the current kernel (2.6.38-8-generic-pae) or it failed to load.
        Please recompile the kernel module and install it by
          sudo /etc/init.d/vboxdrv setup
        You will not be able to start VMs until this problem is fixed.

    So I ran sudo /etc/init.d/vboxdrv setup, but it fails too:

        * Stopping VirtualBox kernel modules [ OK ]
        * Uninstalling old VirtualBox DKMS kernel modules [ OK ]
        * Trying to register the VirtualBox kernel modules using DKMS
        Error! Your kernel headers for kernel 2.6.38-8-generic-pae cannot be found at
        /lib/modules/2.6.38-8-generic-pae/build or /lib/modules/2.6.38-8-generic-pae/source.
        * Failed, trying without DKMS
        * Recompiling VirtualBox kernel modules
        * Look at /var/log/vbox-install.log to find out what went wrong

    The contents of /var/log/vbox-install.log. As I am stuck, I also tried to install kernel-devel with yum, still fruitless:

        root@ubuntu# yum install kernel-devel
        Setting up Install Process
        No package kernel-devel available.
        Nothing to do

    Now I have no idea how to correct this. Any ideas?
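
    The DKMS error says it outright: the headers for the running kernel are missing, and on Ubuntu they come from apt, not yum (a sketch):

        sudo apt-get install dkms linux-headers-$(uname -r)   # headers matching 2.6.38-8-generic-pae
        sudo /etc/init.d/vboxdrv setup                        # the module build should now succeed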

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, having between 5 and 20 years of Oracle experience, into 5 groups, and each group has to complete its own special challenge, such as moving a database from 10.2 to Exadata V2 or upgrading from single instance 10.2 to Real Application Clusters 11.2 with the new Grid Infrastructure. We actually start this training with a few presentation pieces about upgrades, Real Application Testing and Golden Gate. And one topic I always point out: keep your database tidy before the upgrade!!! Clean up all invalid objects, especially in the SYS and SYSTEM schemas, BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Do this always BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well. I just came across this today, as one group had ~240 invalid objects in the database, and because the original system was still there, they could prove that the objects had been invalid before. Good job, BUT ... :-)
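
    A minimal pre-upgrade check along those lines, run as SYSDBA (a sketch; utlrp.sql lives under $ORACLE_HOME/rdbms/admin, which the @? shorthand expands to):

        sqlplus / as sysdba <<'EOF'
        SELECT owner, COUNT(*) FROM dba_objects
        WHERE status = 'INVALID' GROUP BY owner;   -- what is invalid, and where
        @?/rdbms/admin/utlrp.sql                   -- recompile everything recompilable
        SELECT comp_id, status FROM dba_registry;  -- components should all be VALID
        EOF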

    Read the article

  • #mvvmlight V4 update for Win8 RTM

    - by Laurent Bugnion
    With Windows 8 RTM out of the door (at least for some of us), it was also time to create an update to MVVM Light. I selected the V4 RTM to do this (V4.0.23). This RTM version was released a few weeks ago without many bells and whistles, because I was just too busy to write much about it. Now, after some vacation, I will resume blogging on all my favorite topics, including of course MVVM Light.

    Upgrade

    Upgrading the installation should not require an uninstall, so try to simply run the MSI downloaded from the Download section at http://mvvmlight.codeplex.com/. Should you encounter any issue, try to uninstall the old version first, following the steps at http://galasoft.ch/mvvm/cleaning/.

    Upgrading current apps from Win8 RP to Win8 RTM

    I didn't recompile the assemblies of MVVM Light, so if you had a version running with V4.0.23 on Windows 8 RP, you should be able to use the same DLLs on Windows 8 RTM. If you were using earlier versions, however, I would recommend doing an upgrade. I noticed a warning regarding the signing certificate; it is due to the PFX key, which appears to be outdated after the upgrade to Windows 8 RTM. I solved this warning by replacing the old PFX key with a new one copied from a new project. The warning did not cause the build to fail, though.

    About MVVM Light V4 RTM

    The RTM finalizes quite a lot of issues. The change log is available at http://galasoft.ch/mvvm/installing/changes/. There are some issues that are either still open or that popped up since then, and I am working on V4.1, to be released in the next few weeks. In addition to that, I have plans to support Windows Phone 8 (when it comes) and have a nice list of ideas for V5 with a few new components.

    Thanks again for your continuous support of MVVM Light!

    Laurent Bugnion (GalaSoft)

    Read the article
