Search Results

Search found 48390 results on 1936 pages for 'space used'.


  • How to rotate FBX files by 90 degrees while running on a path with iTween in Unity 3D

    - by Jack Dsilva
    I am making a racing game in which I use the iTween path system to smooth out the camera on turns, and the path system itself works fine (special thanks to Bob Berkebile). First I used a cube to follow the path, and it turned correctly on the corners. But when I use an FBX character instead of the cube to follow the path, the character will not move when a turn comes. That is my problem. (The original post included two screenshots: one showing the problem and one showing the desired result.) How do I solve this?

    Read the article

  • Everything turning black when pitching down

    - by Gordon
    Just a quick question about something that's occurring in my world. Every time I pitch my camera downward, everything starts turning black, and if I pitch upward, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normal and light direction in world space everything ends up fine. I thought putting them both in view space would not cause these weird things to happen?
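    One thing worth double-checking, purely as a guess from the description: for the lighting math to be stable under camera motion, the normal and the light direction must end up in the same space, and a direction must only pick up rotations, never translation. For view-space normals and a world-space directional light, that would be something like
    $n_{\text{view}} = \operatorname{normalize}\!\big(((V M)^{-1})^{\top}\, n\big), \qquad \ell_{\text{view}} = \operatorname{normalize}\!\big(\operatorname{mat3}(V)\,\ell_{\text{world}}\big)$
    i.e. the light direction goes through the rotational part of the view matrix only, not through the full model-view matrix. If it is instead multiplied by the model-view matrix (or sent through as a 4-component point so it picks up translation), the dot product between normal and light changes as the camera pitches, which would match the darkening described.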

    Read the article

  • How can I convince IE to honor my explicit instructions to make a table column X pixels wide? [migrated]

    - by AnthonyWJones
    Please consider this small but complete chunk of HTML:
    <!DOCTYPE html>
    <html>
    <head>
      <title>Test</title>
      <style type="text/css">
        span { overflow: hidden; white-space: nowrap; }
        td { overflow: hidden; text-overflow: ellipsis; }
      </style>
    </head>
    <body>
      <table cellspacing="0">
        <tbody>
          <tr>
            <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap;"><span>column 1</span></td>
            <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap;"><span>column 2</span></td>
            <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap;"><span>column 3</span></td>
          </tr>
        </tbody>
      </table>
    </body>
    </html>
    If you render the above in Chrome you'll see the effect I'm looking for. However, render it in IE8 or 9 and the width and/or max-width is ignored. So my question is: how do I get IE to simply let me specify the width of a cell explicitly? BTW, I've tried various combinations of table-layout:fixed and using colgroup with cols and all sorts; nothing I've tried convinces IE to do what I'm clearly and explicitly asking it to do. If I had any hair before starting this I wouldn't have any left by now.

    Read the article

  • How to Improve Website SEO Ranking With Proper Keyword Research

    The right use of keywords not only allows a website to rank higher but also makes it more likely that people will return to it again and again. For businesses, the appropriate keywords used on their websites not only let people reach them but also help convince them to use their products. So words, when used precisely, can take a site to the top.

    Read the article

  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform an inverse projection in the shader from a u/v of a render target. What I do is:
    Get NDC as 2*(u, v, depth) - 1
    Then world space as tmp = (P*V)^-1 * (NDC, 1.0); world space = tmp/tmp.w;
    This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by some w somewhere (since in the "forward" pipeline there is the perspective division)? Thank you, Faye
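    For what it's worth, the algebra behind that division is just the perspective divide showing up again on the inverse path (a generic derivation, independent of any particular engine or API):
    $\text{clip} = P\,V\,x_{\text{world}}, \qquad \text{NDC} = \text{clip}_{xyz} / \text{clip}_w, \qquad x_{\text{world}} = (x, y, z, 1)$
    $t = (P\,V)^{-1}(\text{NDC}, 1) = (P\,V)^{-1}\,\frac{\text{clip}}{\text{clip}_w} = \frac{x_{\text{world}}}{\text{clip}_w}$
    $t_w = \frac{1}{\text{clip}_w} \;\Rightarrow\; \frac{t}{t_w} = x_{\text{world}}$
    So no explicit multiplication is needed: the unknown scale 1/clip.w that the forward divide throws away is carried along in the w component of the unprojected vector, and dividing by tmp.w (which restores w = 1) is exactly what undoes it.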

    Read the article

  • Media Archive System with branches?

    - by Ian McEwen
    In short, how can I get VCS features (revisioning, branching, and deduplication) for a media collection that's far too large for most/all VCS systems?
    Background
    I have a 300GB music folder; unfortunately, I only have the hard drive space for this on my desktop system. However, a good portion of my collection is FLAC; therefore, I could theoretically have a space-optimized version in which I transcode all the FLAC to mp3 or some other lossy format, and use only that version on the laptop. However, a portion of my collection isn't FLAC. And that which isn't FLAC shouldn't be transcoded to an equivalent format; there won't be any space savings, which is the point. Moreover, it shouldn't be duplicated: the mp3/ogg portions of the collection should probably be exactly the same files.
    Thoughts
    One solution is to have format-specific organization of my music folders, and use some script to transcode the FLAC directory to mp3 or such into another directory. Another is some sort of hack using entirely separate copies and symbolic links for deduplication, or something similar. But these also have the disadvantage of lacking versioning; I'd like to be able to reorganize my music collection, retag things, etc. and save history. This isn't key, but would be awfully nice. I can't see it as entirely unreasonable to set up VCS hooks or something equivalent to keep the directory structure synced between two copies, update tags, and transcode FLAC automatically into the space-optimized copy. Basically, the system I really want is a version control system. Two branches: one archival/desktop branch including the FLAC, one space-optimized/laptop branch without it; most VCSes would deal with whole chunks being the same files by compressing in a reasonable way (i.e. not keeping two copies of the same data). I could also do a lot of what I talk about above with hooks. But I don't know of any VCS that would deal with a 300GB repository with almost 20k files. Many of them would not even initialize the whole affair; others would just do it inexpressibly slowly or otherwise badly. checkpoint looks like it's designed for something close (it's at least for media), but wouldn't do deduplication well (and I'm not convinced I'd be able to script it to do things like automatic transcoding and directory-structure syncing). So: is there anything out there that can do all this, or should I consider it a programming project?

    Read the article

  • Should I install ubuntu on USB instead of HDD dual-boot?

    - by user2147243
    I had Ubuntu 12.04 installed as a dual-boot OS on top of Vista on my laptop. I hacked the grub settings to default to Vista on startup (instead of the default Ubuntu -- a pain), and all was OK for occasional Ubuntu use for the past 6 months. Then last week I got a strange message about 'lack of disk space' (~50MB free) when installing pxyplot, even though there was still about 6GB of free disk space when I checked later. Then today Ubuntu wouldn't load at all, and checking the HDD partitions in Vista it looked like the 15GB Ubuntu partition was now three smaller partitions! So, I got rid of those partitions and expanded the Vista partition to use the reclaimed space. Now I can't restart ('grub rescue' appears and doesn't 'rescue' anything), so I'll have to do a boot recovery using a Vista installation CD. (Not a particularly user-friendly failure mode of the dual-boot installation!) I now have to decide whether to a) try installing Ubuntu on the HDD again, though I don't want to stuff up my Vista ever again, as that is my most used OS, or b) install Ubuntu on a 16GB USB 3.0 stick. Apparently performance from USB won't be as good as from HDD, and running an OS from a USB stick does lots of r/w, so the stick may fail after a few years! Perhaps installing Ubuntu as a live USB set up to then run in RAM would alleviate the performance/USB-lifespan problems? If I create a live USB for Ubuntu, will it boot off that when I restart the laptop with it plugged in? Or will I have to change the laptop's boot-order setting whenever I want to boot Ubuntu instead of Vista? (That would be even more painful than the grub default boot order putting Ubuntu ahead of the existing Vista OS!)
    Update: I recovered my Vista setup using the Iolo SystemMechanic Disaster Recovery Tool, and created a bootable USB of Ubuntu 13.10 on an 8GB USB 3.0 pendrive, with 4GB of 'persistence' to allow saving settings, installing some packages, etc. It worked OK for a couple of test boots, but once I changed the time and desktop wallpaper, the next Ubuntu reboot crashed and I then couldn't get it to boot successfully. So I decided to install Ubuntu 12.04 LTS as a dual-boot again, but this time, instead of partitioning the HDD and installing from an ISO DVD, I used the wubi.exe tool to install Ubuntu as a dual-boot. It worked very well, although one oddity was that, despite being asked how big to make the partition (20GB), the installed Ubuntu appears to be happily installed somewhere within the Vista NTFS file system (no partition shows up in Windows disk manager, and in the Ubuntu disk management tool the entire 133 GB of HDD is showing, with ~40GB free space). A nice feature of installing the dual-boot using wubi is that the laptop now uses the Windows boot manager on startup, with Vista as the default OS and Ubuntu happily listed second on the list. So far so good.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.
    Hardware Configuration
    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.
    YCSB Scaling Problem
    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of achieving horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.
    Scalability Results
    In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses; hence 90 threads per YCSB process for a total of 360 threads in the first row. The client processes were distributed across 10 client machines.
    Shards | KVS Size (records) | Clients | Threads   | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                            | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                            | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                          | 0.88/2/6                     | 4.47/23/53

    Read the article

  • Partition resize

    - by borax12
    I have a dual-boot system with:
    1. C: drive with Windows, 227 GB
    2. E: drive in Windows, 185 GB
    3. Ext4 Ubuntu partition, 38 GB
    4. Linux swap, 4 GB
    I want to shrink the E: drive from 185 GB to about 160 GB and assign the 25 GB reclaimed by the resize to the ext4 partition, so that my Ubuntu home has more space. I was told that doing a resize in GParted could cause some boot problems; please tell me a safe way to achieve this resizing.

    Read the article

  • What is the term for a 'decoy' feature or intentional bug?

    - by Freiheit
    I have forgotten a slang programming term. This thing is an intentional bug or a decoy feature used as a distraction. An example usage: "Hey Bob, QA is doing a review today. Put a $THING into the module so they actually have a problem to find." This can be used negatively, to have a very obvious intentional flaw to discover as a distraction from a real problem. It can also be used positively. It's like how you always let rescue dogs 'find' a victim when searching a disaster area. It can also be used to verify that a QA process is actually catching flaws. What is the term I am looking for?

    Read the article

  • What does Ubuntu do when I signal undocking to a laptop?

    - by Seppo Erviälä
    It seems that Ubuntu runs some script or command when I signal that I want to undock my laptop by pressing the undock button on the dock. The most visible thing that happens is that the resolution on the external display is changed. After preparing for undock my laptop is still connected to power, VGA output and audio jacks through the dock, but not to any USB devices or the optical drive. I'm running 11.04 on a ThinkPad X61s with an X6 UltraBase. What happens when I signal undocking? This is what dmesg says after pressing the undock button:
    [81459.990682] ata1.00: disabled
    [81459.990727] ata1.00: detaching (SCSI 0:0:0:0)
    [81459.991722] ACPI: \_SB_.GDCK - undocking
    [81460.009462] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020252] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xfe226c00-0xfe226fff] (PCI address [0xfe226c00-0xfe226fff])
    [81460.020265] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020281] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
    [81460.020309] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
    [81460.020338] ehci_hcd 0000:00:1a.7: PME# disabled
    [81460.020346] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020352] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020363] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 22 (level, low) -> IRQ 22
    [81460.020372] ehci_hcd 0000:00:1a.7: setting latency timer to 64
    [81460.020432] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040071] ehci_hcd 0000:00:1d.7: BAR 0: set to [mem 0xfe227000-0xfe2273ff] (PCI address [0xfe227000-0xfe2273ff])
    [81460.040085] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040104] ehci_hcd 0000:00:1d.7: restoring config space at offset 0xf (was 0x400, writing 0x40b)
    [81460.040133] ehci_hcd 0000:00:1d.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
    [81460.040170] ehci_hcd 0000:00:1d.7: PME# disabled
    [81460.040178] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040184] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040195] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19
    [81460.040204] ehci_hcd 0000:00:1d.7: setting latency timer to 64
    [81460.040503] ehci_hcd 0000:00:1d.7: PCI INT D disabled
    [81460.040552] ehci_hcd 0000:00:1d.7: PME# enabled
    [81460.061657] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D3
    [81460.200414] usb 1-4: USB disconnect, address 14
    [81462.220088] ehci_hcd 0000:00:1a.7: PCI INT C disabled
    [81462.220169] ehci_hcd 0000:00:1a.7: PME# enabled
    [81462.240115] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D3

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place:
    1. A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2. A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3. Drivers for the graphics card that support 10-bit output.
    If applications that support 10-bit output and color profiles are used, I would expect them to correctly display images that were saved using different color spaces. For example, both an sRGB and an AdobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
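    The quantization argument in the question can be written out explicitly, using the question's own illustrative values of 246 and 987 for the gamut-mapped red:
    $f_{8}(i) = \operatorname{round}\!\big(i \cdot \tfrac{246}{255}\big), \qquad f_{10}(i) = \operatorname{round}\!\big(i \cdot \tfrac{987}{255}\big), \qquad i \in \{0, \dots, 255\}$
    f_8 squeezes 256 input levels into at most 247 output values, so by the pigeonhole principle some neighbouring sRGB levels must collapse to the same output (visible as banding in gradients), while f_10 has a slope of about 987/255 ≈ 3.87 > 1 and is therefore injective, so all 256 sRGB levels survive the correction.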

    Read the article

  • 12 days to go for Messenger!

    - by TATWORTH
    In just over twelve days from now, the Messenger space probe will go into orbit around our innermost planet, Mercury. See http://messenger.jhuapl.edu/index.php for the latest mission timings. After 2405 days in space and 15+ circuits of the Sun (see http://messenger.jhuapl.edu/whereis/index.php), it is about to go into orbit around Mercury. It has flown by Earth, Venus and Mercury in order to change velocity sufficiently to be able to go into orbit without requiring a massive amount of propellant.

    Read the article

  • Feature Usage Reporting in Early Access Programs

    After doing Web development, you can get very used to the luxury of having basic information about your users' machines and browsers. With their permission, you can also get the same information from an application, and can even get more targeted anonymous information that will tell you how the features are used. Kevin explains how this can be used with early access builds to improve the reliability and usability of applications.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package-processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and then I will give some answers I have already found.
    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course package dependencies could force me to have programs installed which I will never use.
    Notes:
    - Find the packages which consume much space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird) aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages
    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example the g++ package was marked as not used for 203, but I had actually used it seconds ago for compiling Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.

    Read the article

  • Deletes that Split Pages and Forwarded Ghosts

    - by Paul White
    Can DELETE operations cause pages to split? Yes. It sounds counter-intuitive on the face of it; deleting rows frees up space on a page, and page splitting occurs when a page needs additional space. Nevertheless, there are circumstances when deleting rows causes them to expand before they can be deleted. The mechanism at work here is row versioning (extract from Books Online below): Isolation Levels The relationship between row-versioning isolation levels (the first bullet point) and page splits is...(read more)

    Read the article

  • Pseudo-magnet implementation with chipmunk

    - by Eimantas
    How should I go about implementing a "natural" magnet for a certain body in a Chipmunk space? The context is simple bodies lying in the space (think chessboard). When one of the figures is activated as a magnet, the others should start moving towards it. Currently I'm applying force (cpBodyApplyForce) to the other figures, with the vector calculated towards the activated figure. However, this doesn't really feel "natural". Are there any known algorithms for imitating magnets?
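    A common way to make the pull feel more "natural" than a constant-magnitude force is to scale it with the inverse square of the distance (roughly how gravitational and magnetic attraction fall off) and to recompute it every step. Below is a minimal sketch of that idea, assuming the Chipmunk 6.x C API that the question's cpBodyApplyForce call implies; the function name, the strength parameter and the 1-unit distance clamp are my own illustrative choices, not part of Chipmunk.
    #include <chipmunk/chipmunk.h>

    /* Pull "piece" toward "magnet" with an inverse-square force: F = strength / r^2. */
    static void apply_magnet_force(cpBody *magnet, cpBody *piece, cpFloat strength)
    {
        cpVect delta = cpvsub(cpBodyGetPos(magnet), cpBodyGetPos(piece));
        cpFloat dist = cpvlength(delta);
        if (dist < 1.0f) dist = 1.0f;                 /* clamp so the force doesn't blow up as r -> 0 */

        cpVect dir = cpvmult(delta, 1.0f / dist);     /* unit vector from the piece toward the magnet */
        cpVect force = cpvmult(dir, strength / (dist * dist));
        cpBodyApplyForce(piece, force, cpvzero);      /* apply at the body's center of gravity */
    }
    Since applied forces persist in Chipmunk until they are reset, resetting each body's forces and reapplying the pull before every cpSpaceStep call keeps it from accumulating across frames. For a stronger "snap" near the magnet you can steepen the falloff to 1/r^3, and a little velocity damping on the attracted bodies usually makes the motion feel less floaty.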

    Read the article

  • How do I fix this Synaptic manager error?

    - by mick
    Synaptic manager is giving me the following error:
    Failed to fetch cdrom://Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)/kubuntu/dists/natty/main/binary-i386/Packages
    Please use apt-cdrom to make this CD-ROM recognised by APT. apt-get update cannot be used to add new CD-ROMs
    Failed to fetch cdrom://Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)/kubuntu/dists/natty/restricted/binary-i386/Packages
    Please use apt-cdrom to make this CD-ROM recognised by APT. apt-get update cannot be used to add new CD-ROMs
    Failed to fetch cdrom://Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)/xubuntu/dists/natty/main/binary-i386/Packages
    Please use apt-cdrom to make this CD-ROM recognised by APT. apt-get update cannot be used to add new CD-ROMs
    Failed to fetch cdrom://Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)/xubuntu/dists/natty/restricted/binary-i386/Packages
    Please use apt-cdrom to make this CD-ROM recognised by APT. apt-get update cannot be used to add new CD-ROMs

    Read the article

  • What is Pseudocode?

    - by Jae
    I've seen a lot of mentions of pseudocode lately, on this site and others. But I don't get it: what is pseudocode? For example, the Wikipedia article says "It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading." Does this mean that it isn't actually used to make programs? Why is it used? How is it used? Is it considered a programming language? (See the Wikipedia quote above.) Is it commonly known/used? Anything else... I honestly don't know where to start with this. I have Googled it and I've seen the Wikipedia article on the topic, but I still don't fully understand what it is.
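    To make the idea concrete, here is a small made-up example (mine, not from the Wikipedia article): the same routine written first as pseudocode and then as real, compilable C. The pseudocode communicates the algorithm to a human reader; only the C version can actually be built and run.
    function largest(list):
        max <- first element of list
        for each remaining element x in list:
            if x > max then max <- x
        return max
    #include <stddef.h>

    /* The same routine as real C: unlike the pseudocode, it has to spell out
       types, array bounds and syntax precisely so a compiler can translate it. */
    int largest(const int *list, size_t n)
    {
        int max = list[0];
        for (size_t i = 1; i < n; i++) {
            if (list[i] > max) max = list[i];
        }
        return max;
    }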

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that they perform poorly, and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times. The execution times went down from 4 seconds to 2 milliseconds!
    E-commerce solution based on Oracle ATG – Endeca
    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discounts cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below.
    Starting point
    The initial query was:
    String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
        "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
        "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
    Map<String, Object> params = new HashMap<String, Object>(10);
    // Fill all parameters.
    params.put("iLevelId", xxxx);
    // Executing filter.
    Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);
    With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.
    First round of optimizations
    The first optimisation step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:
    globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);
    After adding these indexes the query execution time was reduced to between 450 ms and 1s. However, these execution times were still not good enough.
    Second round of optimizations
    In this optimisation phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".
    Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values, it was more optimal to have a different query with a different order of attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant cases:
    String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
    String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
    String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.
    Third and last round of optimizations
    The third and final optimization was to introduce a composite index. This did mean, however, that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running a scenario with an 8-attribute composite filter and 2 single-attribute filters.
    The composite index created was as follows:
    ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
        new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
        new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
        new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
    MultiExtractor me = new MultiExtractor(ve);
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    globalDiscountsCache.addIndex(me, false, null);
    And the final query was:
    ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
        new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
        new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
        new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
    MultiExtractor me = new MultiExtractor(ve);
    // Fill composite parameters.
    String SalesCompanyId = xxxx;
    ...
    AndFilter composite = new AndFilter(new EqualsFilter(me,
            Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
        new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
    AndFilter finalFilter = new AndFilter(composite,
        new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);
    Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • How to Convert HTML to PDF Using PHP?

    - by Meng Longlong
    PDF, or Portable Document Format, is a popular file type that is often used for online documents. It's great for distributing downloadable written content, and is frequently used by governments and businesses alike. Because it's a format that's familiar to all, many applications allow the user to convert other document types to the PDF format. PHP is one programming language that, with readily available libraries and extensions, can generate PDF output. PHP scripts can be used to transform file types such as HTML into PDF files.

    Read the article

  • EntityDataSource Control Basics

    The Entity Framework can be easily used to create websites based on ASP.NET. The EntityDataSource control, which is one of a set of Web Server Datasource controls, can be used to bind an Entity Data Model (EDM) to data-bound controls on the page. These controls can be editable grids, forms, drop-down list controls and master-detail pages, which can then be used to create, read, update, and delete data. Joydip tells you what you need to get started.

    Read the article

  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting the hang of it, and my questions might sound stupid, especially because I am a learner in terms of techie things as well. Because of the nature of my work, where everyone uses stupid Windows and Microsoft, I need access to MS Office 2007/2010, as documents with too many tables or images open all haywire in LibreOffice (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayOnLinux, but have been unsuccessful so far. I downloaded an MS Office 2007 package from Pirate Bay, which I extracted into a folder. I tried numerous different ways to install it through WINE and PlayOnLinux, but will discuss the one which seems to be getting me somewhere: http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html
    Initially, when I clicked on the install button of MS Office, I got a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me as I don't have drives named C, Z, etc. I went to configure WINE and, under the drives tab, created a drive named A with the path location /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. The space thing is also confusing, as I have at least 450GB of unused space on my computer. Anyway, when I selected the A drive for installation, the installation starts, but soon I get the following error message: "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source". The part saying "OfficeLR.Cab" has said different things after the "Office" bit every time I have made an attempt. When I select the Office.en-us sub-folder, or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"! I have been trying to get this sorted for 15hrs now (addictive!) and have learnt loads of things in the process, but have not managed to crack it. It might be something stupidly simple that I am not aware of that is stopping it. I would really appreciate some help! Thanks a lot. Also, I am still getting used to the language, so I might have many questions. I am using Ubuntu 11.04 (tag 11.04). Also, I think I don't have Windows any more: when my friend installed Ubuntu on my new laptop, which had Windows 7, he was trying to keep Windows in a separate partition, but something happened and Windows was not there! Looking forward to some support! Again, thanks a lot.

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows.
    (1) I'd like to mount /home on its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?!
    (2) I'm assuming that it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, concerns with that?
    (3) As far as partitioning schemes go, nothing too complicated, but not plain Jane either:
    - Boot partition, 512 MB, in case I want to install other OSes
    - Ubuntu & non-/home file system, 187.5 GB
    - Swap partition, 12 GB = RAM x 2
    - /home partition, 120 GB
    I don't have any bulky media data (I don't have music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-intensive operations. Any issues, problems, concerns, suggestions?
    (4) I was thinking about using ext4; it seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?
    (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs the "Development" user? Thanks!

    Read the article
