Search Results

Search found 7257 results on 291 pages for 'eager loading'.


  • Does the Ubuntu One sync work?

    - by bisi
    I have been on this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I removed all devices, removed stored passwords, killed all processes of u1, logged out and back in online... and still, the tick in the file browser (Synchronize this folder) is loading and loading and loading. I have also logged out and rebooted countless times. And this is after me somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether it is a problem on my side, or a problem on your side.

    EDIT: In the meantime, here is what has changed, on top of what is mentioned just above.
    • My files went from 0MB to 71.9MB, and the figure is still rising.
    • My first folder of 400.2MB is being filled with the data as I write this. The second folder has the folder sub-structure in place.
    • Both folders now show in the File Browser that they will be synchronized.

    I believe that right now, it is all back to normal and working fine, and I guess that's what a good night's sleep can do ;). We're now only back to the point where synchronizing is slow, but that should pick up with the release of Natty (https://wiki.ubuntu.com/UbuntuOne/FAQ/WhyIsItTakingSoLongForMyFilesToSync). But to get to the questions: My About dialog says I use 11.04, Natty Narwhal, but I am quite sure the last distribution I installed was 10.10. Folder A is 400.2MB, and Folder B is 29.5MB. I am on a DSL line, behind a regular fritz.box setup. No proxy servers in use, and I did not install any particular firewall features. No physical firewall, just the router (on which I have a TV signal as well), and 2 switches to get to this floor. Status: inactive. The ubuntuone-indicator opens the same window as when I click on my name in the top-right corner and select Ubuntu One, or choose Ubuntu One in the Control Center. It wasn't supposed to go further than that, was it?


  • PHP, when to use iterators, how to buffer results?

    - by Jon L.
    When is it best to use Iterators in PHP, and how can they be implemented to best avoid loading all objects into memory simultaneously? Specifically:
    • Do any constructs exist in PHP so that we can queue up the results of an operation for use with an Iterator, while again avoiding loading all objects into memory simultaneously? An example would be a curl HTTP request against a REST server.
    • In the case of an HTTP request that returns all results at once (a la curl), would we be better off to go with streaming results, and if so, are there any limitations or pitfalls to be aware of?
    • If using streaming, is it better to replace curl with a PHP native stream/socket?
    My intention is to implement Iterators for a REST client, and separately for a document ORM that I'm maintaining, but only if I can do so while gaining benefits such as reduced memory usage, increased performance, etc. Thanks in advance for any responses :-)
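    A generator-based sketch of the streaming idea above (hedged: generators need PHP 5.5+, and the endpoint plus its line-delimited JSON format are assumptions, not part of the question; on earlier PHP the same shape takes an Iterator implementation with more boilerplate):

        <?php
        // Hypothetical sketch: stream a REST response one record at a time,
        // so the full result set never has to sit in memory at once.
        // Requires allow_url_fopen; the URL and line-delimited JSON format
        // are assumptions for illustration.
        function streamRecords($url)
        {
            $handle = fopen($url, 'r');   // PHP native HTTP stream wrapper
            if ($handle === false) {
                throw new RuntimeException("Cannot open $url");
            }
            try {
                while (($line = fgets($handle)) !== false) {
                    $record = json_decode($line, true);
                    if ($record !== null) {
                        yield $record;    // hand back one decoded record
                    }
                }
            } finally {
                fclose($handle);
            }
        }

        foreach (streamRecords('https://api.example.com/items') as $item) {
            // process one record; it becomes eligible for collection afterwards
        }

    Because fgets() reads from the socket as data arrives, a native stream also lets you start consuming before the server finishes sending, which a plain curl_exec() call that returns the whole body does not.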


  • HIB Games (Aquaria & Penumbra) cannot find libGL.so.1 even though it exists

    - by aberration
    I'm trying to play some Humble Indie Bundle (HIB) games, but I'm getting errors with Aquaria and Penumbra: Overture that are related to the libGL.so.1 file. Aquaria gives this error on launch:
        Message: SDL_GL_LoadLibrary Error: Failed loading libGL.so.1
    And Penumbra: Overture gives this error on launch:
        ./penumbra.bin: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
    I know that the file libGL.so.1 does exist (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1). From past errors like this, I'm guessing that you need to symlink the library into another directory, but I can't figure out which one.
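    A hedged sketch of the symlink approach the question describes; the target directory is an assumption (/usr/lib is on the default loader path), and note that if the game binary is 32-bit — common for HIB titles on a 64-bit install — a symlink to the 64-bit library will not help:

        # Check whether the game binary is 32-bit or 64-bit first
        file ./penumbra.bin

        # If 64-bit: expose the existing library on the default search path
        # (target directory is an assumption)
        sudo ln -s /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so.1
        sudo ldconfig

        # If 32-bit: the 32-bit Mesa library is what's missing, e.g.
        # sudo apt-get install libgl1-mesa-glx:i386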


  • Tool Review: Telerik JustDecompile

    - by Sam Abraham
    In the next few lines, I will provide a brief review of Telerik's JustDecompile, a free .NET decompiler and assembly browser. Using Telerik's 2012 Q3 JustDecompile release, one can see many great features. First off, I loved the built-in options for loading .NET assemblies automatically using the Open->Load Framework menu option. Other options enable loading assemblies from the GAC, a XAP URL or locally from disk. The ability to create an "Assembly List" is quite handy for grouping and saving a "list" of DLLs to load. All loaded assemblies are shown in the left panel of a split-panel screen. Clicking an assembly expands all namespaces within it. Drilling further to class level displays the actual source code in the right panel in either IL, C# or Visual Basic. In conclusion, JustDecompile has grown and quickly matured into an indispensable, handy tool for us developers. Telerik's effort in maintaining and updating JustDecompile, as well as the company's commitment to keeping it free, is much appreciated and valued.


  • How do I install graphviz 2.29 in 12.04?

    - by bidur
    In my Ubuntu 12.04, graphviz is not the latest version (2.29), and I need some features available only in the latest version. Graphviz 2.29 requires libgraphviz4 (= 2.18), so I installed libgraphviz4 anyhow and then installed graphviz 2.29; for that I had to remove the packages libcdt4 and libpathplan4. Now whenever I try to generate a graph, I run into problems. For example:
        dot -Kfdp -n -Tpng -o samplePOS.png forcePOS.dot
    says:
        dot: error while loading shared libraries: libgvc.so.6: cannot open shared object file: No such file or directory
    and:
        neato -Tps -o sample_1.ps sourcedot.gv
    says:
        neato: error while loading shared libraries: libgvc.so.6: cannot open shared object file: No such file or directory
    So I am looking for some way to run graphviz 2.29 on my Ubuntu 12.04.
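    Since libgvc.so.6 ships with graphviz itself, the error suggests the dynamic loader simply cannot find where the 2.29 build put its libraries. A hedged diagnostic sketch (the /usr/local/lib path is an assumption; adjust to wherever the new packages actually installed):

        # Is libgvc.so.6 known to the dynamic loader at all?
        ldconfig -p | grep libgvc

        # Where did the new build actually put it?
        find /usr/local/lib /usr/lib -name 'libgvc.so*' 2>/dev/null

        # If it exists outside the cached paths, add the directory and refresh:
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/graphviz.conf
        sudo ldconfig

        # Quick per-session test without touching the cache:
        LD_LIBRARY_PATH=/usr/local/lib dot -V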


  • Google I/O 2012 - How we Make JavaScript Widgets Scream

    Speakers: Malte Ubl, John Hjelmstad. When loading websites every millisecond counts. Social widgets should enhance a website experience and they should definitely not slow it down. We'll walk through the unique challenges of loading social widgets such as the +1 button and how we made sure that they load as fast as possible -- yes, there will be war stories! While we'll focus on widget performance, many of the techniques we used have wider applicability and we'll show how they can make your website faster, too. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers; length 51:44)


  • Ubuntu 12.10 installation: hit-or-miss booting

    - by Robert
    I just recently downloaded and installed Ubuntu 12.10, after completely wiping the laptop. The Ubuntu OS will only boot part of the time, and never on the first try. The first start-up always goes straight to a purple screen and stays there; the first time, I waited 45 minutes and nothing changed. Once I held down the power button to turn it off and turned it back on, I saw the Asus screen followed by the GRUB menu. I can select "Ubuntu", and then there is a white box blinking in the top-left corner. From there it will either keep blinking or move on to the Ubuntu loading screen (everything works fine if it gets to the loading screen). This never happened with prior versions of Ubuntu. Any ideas are helpful. Thanks!


  • What simple offline GUI database should I use for this application?

    - by gcc
    I am looking for an open source application. The application should have:
    • database support (create two or three tables)
    • a GUI (so that what I have created can be seen)
    Example: assume that I have created a table, X_table, with columns | A | B | C | D |. After creating the table, I load data into it:

        | A | B  | C | D |
        | 1 | 11 | b | f |
        | 3 | 12 | a | o |
        | 4 | 13 | r | o |

    When I later open the application (not to load data), I want to see this data in a graphical interface. Are there any open source applications with the features above? The application can be very simple:
    • no internet connection
    • support for only one database
    • static table creation (once created, never changed)
    It should run on Ubuntu 12.04 and/or Windows. In other words, I am after a database viewer and editor.
    EDIT: I should also be able to load a PDF file, an image, etc., or give the application the path of the file. This link can be a reference to my question. (The interface should be like this, just a list.)


  • Error after installing Ubuntu 12.04 using Wubi

    - by KJ50
    After using the Windows Ubuntu Installer from within Windows, I am prompted to restart, so I follow the directions. When I try to start Ubuntu after restarting, the desktop background appears, but then a loading bar appears with this title:
        Verifying the installation configuration...
    While this is loading, an error window pops up that says:
        No root file system is defined
        Please correct this from the partitioning menu
    There is only an 'Ok' button available to click, and if I click it the same error window reappears. I do not know how to get to the "partitioning menu" from this state, so the only option I have is to shut down my computer. What can I do so that Ubuntu finds a "root file system"? Can I diagnose this problem via Windows? Does anyone have any insight? FYI - I am using a new ultrabook with 6GB RAM, a 3rd-gen Intel i7 processor, and no CD/DVD drive.


  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance and one dedicated background thread for loading all content. I wonder: would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask because this article makes the following statement:
        If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures.
    I think what the author is saying here is that if I access one ContentManager simultaneously from two threads, I'll get garbage. But what if I have a separate ContentManager instance for each thread? If no one knows the answer already from experience, I'll go ahead and try it and see what happens.


  • ASUS U24a can't boot without live disk

    - by user98965
    I recently picked up a new ASUS U24a while travelling in Asia. I've been through hell with the UEFI setup, and finally have a working GRUB. If I boot the live CD/USB (only in BIOS legacy mode), I get a wonderful, working Ubuntu. I finally managed to get a UEFI install onto the hard drive (there is no option for legacy BIOS boot, or I'd be there in a flash!) and can boot in UEFI mode into GRUB2. But I can't manage to get past "Loading initial ramdisk". It appears that the disk drivers are failing (there is no disk activity after this point). Ideas? The pastebin from boot-repair is at: http://paste.ubuntu.com/1290011/ best, -tony


  • 12.10 Unity GUI Not Displaying

    - by lolajl
    I had 12.04 installed and had no problems at all, having had it set up on my Compaq Presario CQ62 for about 2 weeks (I'm new to Ubuntu and I had a spare laptop to experiment with). Last night, I installed 12.10 through the update manager. Now, if I select Default or Ubuntu at login, I'm not seeing the Unity GUI at all, just the Eclipse launcher which I had created sitting on the desktop. Even hitting the Windows key to bring up the Dash doesn't work. But when I select GNOME during login, I'm able to access everything in the GUI, including the menu folders for games, internet, system settings, etc. There were a couple of error messages saying that a system file wasn't loading properly, but I forgot to write them down, and now these error messages don't appear when I restart. Will I need to wipe clean and reinstall?


  • Azure VS Tools and SDK - systray already running…

    - by Shawn Cicoria
    If you are getting a "Systray already running…" message when you start the Compute Emulator from within Visual Studio, one fix is to check what image name the process is loading under. For some reason, on 2 of my machines the image was loading in the 8.3 format. This caused the logic in the VS tools to not find the process. So, to fix it, I just did a little copy/rename magic:

        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor.exe csmonitor-a.exe
                1 file(s) copied.
        C:\Program Files\Windows Azure SDK\v1.3\bin>del csmonitor.exe
        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor-a.exe csmonitor.exe
                1 file(s) copied.

    If you bring up Task Manager and see something like CSMON~1.EXE in the Image Name column, you probably have this issue.


  • Accidentally deleted symlink libc.so.6 in CentOS 6.4. How to get sudo privilege to re-create it?

    - by Eric
    I accidentally deleted the symbolic link /lib64/libc.so.6 -> /lib64/libc-2.12.so with:
        $ sudo rm libc.so.6
    After that I could not use anything, including the ls command. The same error appears for any command I type:
        ls: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    I've tried:
        $ export LD_PRELOAD=/lib64/libc-2.12.so
    After this I can use ls and ln ..., but still cannot use sudo ln ..., sudo -E ln ..., sudo su or even su. I always get this error:
        sudo: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    or:
        su: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    It seems LD_PRELOAD works only for the current shell session under my account, not for a new account like root or a new session. It's a remote server, so I cannot use a live CD. I have one ssh bash session still alive but cannot establish new ones. I have sudo privileges but do not have the root password. So currently my problem is: I need to run sudo sln -s libc-2.12.so libc.so.6 to re-create the symlink libc.so.6, but I cannot run sudo without libc.so.6. How can I fix it? Thanks~
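    For context, a hedged sketch of the usual way to run binaries in a session like this: invoking the dynamic loader directly (the loader path is an assumption for 64-bit CentOS 6). It also illustrates why sudo cannot be repaired from a normal account this way — a setuid binary started through the loader does not receive its root privilege:

        # Run a command through the loader, bypassing the missing symlink
        # (loader path assumed for CentOS 6 x86_64):
        /lib64/ld-linux-x86-64.so.2 --library-path /lib64 /bin/ls /lib64

        # Starting sudo this way launches it as an ordinary process, so the
        # kernel never applies the setuid-root bit; the repair still needs
        # root obtained some other way:
        /lib64/ld-linux-x86-64.so.2 --library-path /lib64 /usr/bin/sudo -V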


  • Xen hypervisor 4.1 Kernel Panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install following this guide. I used the LVM option across the whole disk and made 2 LVs:
        /dev/mapper/vg-root  /     (80GB)
        vg-swap              swap  (4GB)
    Then I installed Xen with:
        apt-get install xen-hypervisor-4.1-amd64
    and configured /etc/default/grub as in the guide, adding:
        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M"
    After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message:
        Domain-0 allocation is too small for kernel image
    My questions are: what is this error about, and where can I grow this allocation to avoid it?
    grub.cfg:
        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod part_gpt
            insmod ext2
            set root='(hd0,gpt2)'
            search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
            echo 'Loading Linux 3.2.0-29-generic ...'
            module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
            echo 'Loading initial ramdisk ...'
            module /initrd.img-3.2.0-29-generic
        }
    Note: I've followed this guide too.


  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get into WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified by restarting or by using Startup Repair or CHKDSK from WindowsRE). The problem is that after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head into Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk; however, it hung. I have since been unable to access the recovery environment after restarting, using any of these methods:
    • Access via the recovery partition (pressing F10 on boot)
    • Access via recovery DVD (created using the same computer when it was healthy)
    • Access via a Windows Vista installation DVD
    All three methods produce the same results:
    • The computer acknowledges the boot attempt.
    • The computer successfully gets past the "Windows is loading files" screen.
    • The computer successfully gets past the Windows loading screen.
    • The computer then stalls at a black screen, while showing HDD activity (via the indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the oversized cursor used in WindowsRE appears on the black screen.
    The actual recovery environment, however, never appears, even after leaving the computer in that state overnight. What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. While running perfectly normally, MemTest was able to produce a plethora of errors while testing my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of boot loading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?


  • Is Gmail Being Blocked by my ISP (wait till you read this)?

    - by James
    This is the strangest thing I have ever encountered. I have a desktop on which I cannot access Gmail, and also YouTube sign-in (I believe since YouTube is owned by Google they both use the same sign-in system). So okay, maybe my ISP is blocking these for some reason, or maybe my firewall is, or maybe there is something wrong with my connectivity, right? No. On other computers that use the same connection via a wireless router, I can access both Gmail and YouTube sign-in just fine. On this computer, which doesn't have a wireless card (so I have to connect via Ethernet cable through a USB converter, since the Ethernet port doesn't work anymore), I can access all sites and services, including things like AOL and Hotmail. Only when it comes to Gmail do I get complete and utter throttling. I even turned off my AV and firewall momentarily, and no luck. The Gmail page starts to load and by the midpoint it just stays there loading and loading and loading... it never ends. I tried everything: I reset the modem and router multiple times. I even reinstalled my operating system, going from Vista to Windows 7, hoping a complete reinstall would solve the issue, but no luck. So can anyone for the life of them figure out why this could be? And yes, I am going to call my ISP, but not to solve this issue - to cancel them; I want to upgrade from DSL to cable anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will). P.S. All this happened in one day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special happening on that day prior to this. The only thing I can think of is that my ISP or Google itself is blocking this computer based on its MAC address, but I don't know if that's even done.
    Additional info:
    • PC: Windows 7 Ultimate 32-bit
    • Connection type: DSL
    • Connecting medium: Ethernet cable via USB converter


  • Apache memory allocation error message

    - by la_f0ka
    I'm trying to set up a medium-sized Drupal 7 website on my miniserver, but I keep getting a 500 error message. This is what I found in Apache's error log:

        [Wed Sep 12 15:02:04 2012] [notice] SSL FIPS mode disabled
        [Wed Sep 12 15:02:04 2012] [warn] No JkShmFile defined in httpd.conf. Using default /usr/local/apache/logs/jk-runtime-status
        [Wed Sep 12 15:02:04 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_jk/1.2.35 configured -- resuming normal operations
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] File does not exist: /home/brighton/public_html/favicon.ico
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php

    I contacted support and they just told me I should upgrade my package (right now I have a 512MB account), but I am not sure I'm buying it... even if I try to access a file that contains only phpinfo(); I still get the 500. Any help would be much appreciated, and if any other information is needed please let me know and I'll update the question. I compiled Apache with Tomcat because I intend to use Solr... not sure if this is relevant or not.


  • Heaps of Trouble?

    - by Paul White NZ
    If you're not already a regular reader of Brad Schulz's blog, you're missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

    In this post, I'm going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there's going to be some overlap between our entries, and while you don't necessarily need to read Brad's post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won't cover again here.

    The Example

    We'll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL,
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan produced for the test query looks like this: [plan image omitted]

    The Problem

    The problem here is one of cardinality estimation - the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer's decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint results in a new estimated plan [plan image omitted] that produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit - I'll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I've also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan [plan image omitted] runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term 'eager' means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index - not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage - it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course.)

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the 'outer reference') hasn't changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise - I can't imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Looking at the hash join plan again [plan image omitted]: two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don't preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix them?
    We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That's around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server's optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can't really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It's there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad's original post for more details. My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ


  • iPhone HUD style progress bar

    - by Alexander
    I've been wanting to create a HUD-style loading bar like the one the SMS app on the iPhone used to have (http://www.jonokane.com/images/blog/iphoneFixProblem.jpg), but I don't know how. I was wondering if anyone has done this before, or if there is a tutorial somewhere for it? I just think it looks so nice, and I would like to use it rather than the built-in loading progress bar. Thanks a bunch!


  • Can't see why JavaMail doesn't work in a Spring MVC application on Tomcat

    - by Kartoch
    I have written a small web application using Spring MVC. It runs on Tomcat and uses the JavaMail library. At the present time I can't find out why the mails are not sent, and I have no pertinent log messages in the Tomcat log files to locate the problem. My logging is currently configured in a log4j.properties file in the root of my CLASSPATH:

        log4j.rootLogger=DEBUG, CONSOLE
        log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
        log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
        log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

    How can I see the JavaMail log messages in debug mode? I think it is related to the JDK logging system (as I'm using Sun's JavaMail), but I don't know how to configure it.

    Edit: Well, one problem solved and another arises. I changed my bean definition to include debug support for JavaMail:

        mail.debug=true

    But there is still no pertinent info in the output:

        DEBUG: JavaMail version 1.4.1ea-SNAPSHOT
        DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers
        DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers (No such file or directory)
        DEBUG: !anyLoaded
        DEBUG: not loading resource: /META-INF/javamail.providers
        DEBUG: successfully loaded resource: /META-INF/javamail.default.providers
        DEBUG: Tables of loaded providers
        DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]}
        DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]}
        DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map
        DEBUG: !anyLoaded
        DEBUG: not loading resource: /META-INF/javamail.address.map
        DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map
        DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map (No such file or directory)
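    One relevant detail: JavaMail's debug trace goes to the console (System.out by default) rather than through log4j, so on Tomcat it typically lands in catalina.out. A hedged sketch of wiring the debug output explicitly through Spring's JavaMailSenderImpl (the method names are real Spring/JavaMail API; the host name and the bean-construction style are illustrative assumptions):

        import java.util.Properties;
        import org.springframework.mail.javamail.JavaMailSenderImpl;

        public class MailDebugConfig {
            public static JavaMailSenderImpl debuggingSender() {
                JavaMailSenderImpl sender = new JavaMailSenderImpl();
                sender.setHost("smtp.example.com");      // hypothetical host
                Properties props = new Properties();
                props.put("mail.debug", "true");         // protocol-level trace
                sender.setJavaMailProperties(props);
                // Redirect the session's debug stream somewhere you watch,
                // e.g. stderr, instead of the default System.out:
                sender.getSession().setDebugOut(System.err);
                return sender;
            }
        }

    If no SMTP protocol lines (EHLO, MAIL FROM, ...) ever appear in that stream, the send is likely failing before JavaMail opens a connection - worth checking that the sending code actually runs and that exceptions are not being swallowed.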


  • Key-Value Coding/Observing (KVO) example for iPhone

    - by ReduxDJ
    I'm trying to implement KVO in an application, yet even though I've followed the documentation provided by Apple, I can't get it to work. I'm hoping to see a bare-bones example of how to use this with my NSObjects. My use case: I want one item in a table cell to update without reloading the entire tableView, because I am loading images from URLs and I don't want to reload all of the images while I am polling a server. Thanks,


  • FBML elements rendering delay

    - by jeffreyveon
    I am using FBML for rendering certain elements on the page, such as the name of the user, their profile pic, etc. However, when there are many FBML elements on the page, there is a slight delay before they are rendered - that's expected, since the JS FB library makes AJAX calls to the server to fetch the data. However, I want to hide the container DIV holding these elements until they have finished loading, so is there any way to specify a JS callback function which gets fired when the FBML data has finished loading?
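    One hedged possibility, sketched below: the Facebook JS SDK fires an event when XFBML rendering completes. The event name 'xfbml.render' is an assumption to verify against the SDK version in use, and the container id is illustrative:

        // Hide the container until Facebook reports XFBML rendering done.
        var container = document.getElementById('fb-container'); // hypothetical id
        container.style.display = 'none';
        FB.Event.subscribe('xfbml.render', function () {
            container.style.display = 'block';   // reveal once widgets are in
        });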

