Search Results



  • Understanding G1 GC Logs

    - by poonam
    The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with -XX:+UnlockDiagnosticVMOptions option - G1PrintRegionLivenessInfo that prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle and G1PrintHeapRegions that provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options. Option -XX:+PrintGCDetails Here's a sample log of G1 collection generated with PrintGCDetails. 0.522: [GC pause (young), 0.15877971 secs] [Parallel Time: 157.1 ms] [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] [Clear CT: 0.1 ms] [Other: 1.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 0.3 ms] [Ref Enq: 0.0 ms] [Free CSet: 0.3 ms] [Eden: 12M(12M)->0B(10M) Survivors: 0B->2048K Heap: 13M(64M)->9739K(64M)] [Times: user=0.59 sys=0.02, real=0.16 secs] This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks indicated by the indentation in the log entries. Here's is the top most line that gets printed for the Evacuation Pause. 0.522: [GC pause (young), 0.15877971 secs] This is the highest level information telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are Young i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well. In which case the set of regions selected include all of the young regions as well as some old regions. 1.730: [GC pause (mixed), 0.32714353 secs] Let's take a look at all the sub-tasks performed in this Evacuation Pause. [Parallel Time: 157.1 ms] Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms. [GC Worker Start (ms): 522.1 522.2 522.2 522.2Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] The first line tells us the start time of each of the worker thread in milliseconds. 
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1ms and thread 1 started at 522.2ms from the start of the process. The second line tells the Avg, Min, Max and Diff of the start times of all of the worker threads. [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5 ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads. [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that were not able to be processed concurrently, and updates the corresponding remembered sets of all regions. [Processed Buffers : 2 2 3 2Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] This tells us the number of Update Buffers (mentioned above) processed by each worker thread. [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] These are the times each worker thread had spent in scanning the Remembered Sets. Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set. [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions. [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] Termination time is the time spent by the worker thread offering to terminate. But before terminating, it checks the work queues of other threads and if there are still object references in other work queues, it tries to steal object references, and if it succeeds in stealing a reference, it processes that and offers to terminate again. [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] This gives the number of times each thread has offered to terminate. [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] These are the times in milliseconds at which each worker thread stopped. [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] These are the total lifetimes of each worker thread. [GC Worker Other (ms): 0.3 0.3 0.3 0.3Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] These are the times that each worker thread spent in performing some other tasks that we have not accounted above for the total Parallel Time. [Clear CT: 0.1 ms] This is the time spent in clearing the Card Table. This task is performed in serial mode. [Other: 1.5 ms] Time spent in the some other tasks listed below. The following sub-tasks (which individually may be parallelized) are performed serially. [Choose CSet: 0.0 ms] Time spent in selecting the regions for the Collection Set. [Ref Proc: 0.3 ms] Total time spent in processing Reference objects. 
[Ref Enq: 0.0 ms] Time spent in enqueuing references to the ReferenceQueues. [Free CSet: 0.3 ms] Time spent in freeing the collection set data structure. [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)] This line gives the details on the heap size changes with the Evacuation Pause. This shows that Eden had the occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy got reduced to 0 since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point. This value is the target size of the Eden. Regions are added to Eden as the demand is made and when the added regions reach to the target size, we start the next collection. Similarly, Survivors had the occupancy of 0 bytes and it grew to 2048K after the collection. The total heap occupancy and capacity was 14M and 64M receptively before the collection and it became 9739K and 64M after the collection. Apart from the evacuation pauses, G1 also performs concurrent-marking to build the live data information of regions. 1.416: [GC pause (young) (initial-mark), 0.62417980 secs] ….... 2.042: [GC concurrent-root-region-scan-start] 2.067: [GC concurrent-root-region-scan-end, 0.0251507] 2.068: [GC concurrent-mark-start] 3.198: [GC concurrent-mark-reset-for-overflow] 4.053: [GC concurrent-mark-end, 1.9849672 sec] 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.090: [GC concurrent-cleanup-start] 4.091: [GC concurrent-cleanup-end, 0.0002721] The first phase of a marking cycle is Initial Marking where all the objects directly reachable from the roots are marked and this phase is piggy-backed on a fully young Evacuation Pause. 2.042: [GC concurrent-root-region-scan-start] This marks the start of a concurrent phase that scans the set of root-regions which are directly reachable from the survivors of the initial marking phase. 2.067: [GC concurrent-root-region-scan-end, 0.0251507] End of the concurrent root region scan phase and it lasted for 0.0251507 seconds. 2.068: [GC concurrent-mark-start] Start of the concurrent marking at 2.068 secs from the start of the process. 3.198: [GC concurrent-mark-reset-for-overflow] This indicates that the global marking stack had became full and there was an overflow of the stack. Concurrent marking detected this overflow and had to reset the data structures to start the marking again. 4.053: [GC concurrent-mark-end, 1.9849672 sec] End of the concurrent marking phase and it lasted for 1.9849672 seconds. 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] This corresponds to the remark phase which is a stop-the-world phase. It completes the left over marking work (SATB buffers processing) from the previous phase. In this case, this phase took 0.0030184 secs and out of which 0.0000254 secs were spent on Reference processing. 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] Cleanup phase which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M and after the live data counting it was found that the total live data size dropped down from 117M to 106M. 
4.090: [GC concurrent-cleanup-start] This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase. 4.091: [GC concurrent-cleanup-end, 0.0002721] Concurrent cleanup phase took 0.0002721 secs to free up the empty regions. Option -XX:G1PrintRegionLivenessInfo Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle. 26.896: [GC cleanup ### PHASE Post-Marking @ 26.896### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576 Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process and this live data information is being printed after the marking phase. Committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes. ### type address-range used prev-live next-live gc-eff### (bytes) (bytes) (bytes) (bytes/ms) This is the header of the output that tells us about the type of the region, address-range of the region, used space in the region, live bytes in the region with respect to the previous marking cycle, live bytes in the region with respect to the current marking cycle and the GC efficiency of that region. ### FREE 0x02e00000-0x02f00000 0 0 0 0.0 This is a Free region. ### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0 Old region with address-range from 0x02f00000 to 0x03000000. Total used space in the region is 1048576 bytes, live bytes as per the previous marking cycle are 1038592 and live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0. ### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0 This is an Eden region. ### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0 These are the continuous set of regions called Humongous regions for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the humongous regions set. ### SURV 0x09300000-0x09400000 16384 16384 16384 0.0 This is a Survivor region. ### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 % At the end, a summary is printed listing the capacity, the used space and the change in the liveness after the completion of concurrent marking. In this case, G1 heap capacity is 208MB, total used space is 150.16MB which is 72.19% of the total heap size, live data in the previous marking was 149.78MB which was 72.01% of the total heap size and the live data as per the current marking is 142.82MB which is 68.66% of the total heap size. Option -XX:+G1PrintHeapRegions G1PrintHeapRegions option logs the regions related events when regions are committed, allocated into or are reclaimed. 
COMMIT/UNCOMMIT events G1HR COMMIT [0x6e900000,0x6ea00000]G1HR COMMIT [0x6ea00000,0x6eb00000] Here, the heap is being initialized or expanded and the region (with bottom: 0x6eb00000 and end: 0x6ec00000) is being freshly committed. COMMIT events are always generated in order i.e. the next COMMIT event will always be for the uncommitted region with the lowest address. G1HR UNCOMMIT [0x72700000,0x72800000]G1HR UNCOMMIT [0x72600000,0x72700000] Opposite to COMMIT. The heap got shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order i.e. the next UNCOMMIT event will always be for the committed region with the highest address. GC Cycle events G1HR #StartGC 7G1HR CSET 0x6e900000G1HR REUSE 0x70500000G1HR ALLOC(Old) 0x6f800000G1HR RETIRE 0x6f800000 0x6f821b20G1HR #EndGC 7 This shows start and end of an Evacuation pause. This event is followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process. G1HR #StartFullGC 17G1HR UNCOMMIT [0x6ed00000,0x6ee00000]G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58G1HR #EndFullGC 17 Shows start and end of a Full GC. This event is also followed by the same GC counter as above. This is the 17th GC since the start of the process. ALLOC events G1HR ALLOC(Eden) 0x6e800000 The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region and allocated into by a mutator thread. G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000 Regions being used for the allocation of Humongous object. The object spans over two regions. G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010 Single region being used for the allocation of Humongous object. G1HR COMMIT [0x6ee00000,0x6ef00000]G1HR COMMIT [0x6ef00000,0x6f000000]G1HR COMMIT [0x6f000000,0x6f100000]G1HR COMMIT [0x6f100000,0x6f200000]G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010 Here, Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. Thus new regions are committed and then allocated into for the Humongous object. G1HR ALLOC(Old) 0x6f800000 Old region started being used for allocation during GC. G1HR ALLOC(Survivor) 0x6fa00000 Region being used for copying old objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries and Old and Survivor ALLOC events are generated inside the GC boundaries. Other Events G1HR RETIRE 0x6e800000 0x6e87bd98 Retire and stop using the region having bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated or we reach the start or end of a GC(depending on the region). So for Eden regions: For example: 1. ALLOC(Eden) Foo2. ALLOC(Eden) Bar3. StartGC At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we will have a RETIRE event: 1. ALLOC(Eden) Foo2. RETIRE Foo top3. ALLOC(Eden) Bar4. StartGC G1HR CSET 0x6e900000 Region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated). 
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet. G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858 POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions, as we have collapsed the young generation, so we should not have eden and survivors. POST-COMPACTION events are generated within the Full GC boundary. G1HR CLEANUP 0x6f400000G1HR CLEANUP 0x6f300000G1HR CLEANUP 0x6f200000 These regions were found empty after remark phase of Concurrent Marking and are reclaimed shortly afterwards. G1HR #StartGC 5G1HR CSET 0x6f400000G1HR CSET 0x6e900000G1HR REUSE 0x6f800000 At the end of a GC we retire the old region we are allocating into. Given that its not full, we will carry on allocating into it during the next GC. This is what REUSE means. In the above case 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of the previous GC. G1HR ALLOC-FORCE(Eden) 0x6f800000 A specialization of ALLOC which indicates that we have reached the max desired number of the particular region type (in this case: Eden), but we decided to allocate one more. Currently it's only used for Eden regions when we extend the young generation because we cannot do a GC as the GC-Locker is active. G1HR EVAC-FAILURE 0x6f800000 During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects. When Heap Regions are reclaimed ? It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exception to that are regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC some regions get reclaimed (the ones from which we moved the objects out). But that is not shown explicitly, instead the non-empty regions that are left in the heap are printed out with the POST-COMPACTION events.
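For reference, a minimal sketch of a command line that produces this kind of output, assuming JDK 7u4 as used in the post; MyApp and gc.log are placeholders for your own application and log file:

    java -XX:+UseG1GC -XX:+PrintGCDetails \
         -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo -XX:+G1PrintHeapRegions \
         -Xloggc:gc.log MyApp

PrintGCDetails is a product flag; the other two print options take effect only when UnlockDiagnosticVMOptions is also specified, as described above.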

    Read the article

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster. I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture: Basically, the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World::SpatialBase* through SpatialInfo objects, owned by the bodies themselves. There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. Here's how its architecture looks: The Grid class owns a 2D array of Cell*. The Cell class contains two collections of (not owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in. As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells. After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. How broad-phase collision detection works:

    Part 1: spatial info update. For each Body body: the top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and filled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell). body is now guaranteed to know what cells it occupies. For a performance boost, it stores a pointer to every map<int, vector<Body*>> of every cell it occupies, where the int is a group from body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>. body is now guaranteed to have a pointer to every vector<Body*> of bodies of the groups it needs to check collision against. These pointers are stored in gridInfo->queries.

    Part 2: actual collision checks. For each Body body: body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking whether bodiesToCheck already contains the body we're trying to add.

        const vector<Body*>& GridInfo::getBodiesToCheck()
        {
            bodiesToCheck.clear();
            for(const auto& q : queries)
                for(const auto& b : *q)
                    if(!contains(bodiesToCheck, b))
                        bodiesToCheck.push_back(b);
            return bodiesToCheck;
        }

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks whether the vector already contains a body, using std::find.
Collision is checked and resolved for every body in bodiesToCheck. That's it. So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned or I make assumptions about the simulation that are later proven false. My question is: how can I optimize the broad-phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned to allow for more performance? Actual implementation: SSVSCollsion: Body.h, Body.cpp, World.h, World.cpp, Grid.h, Grid.cpp, Cell.h, Cell.cpp, GridInfo.h, GridInfo.cpp
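One direction that is commonly used for exactly this duplicate-elimination problem (a sketch only, not from the original post) is to replace the std::find scan with a per-query stamp: each Body remembers the id of the last query that visited it, so duplicate detection becomes a single integer comparison instead of a linear search. queryStamp and lastQueryStamp below are assumed new members that do not exist in the original code:

    // Sketch: GridInfo gains an unsigned queryStamp member (starts at 0),
    // and Body gains an unsigned lastQueryStamp member (starts at 0).
    const vector<Body*>& GridInfo::getBodiesToCheck()
    {
        bodiesToCheck.clear();
        ++queryStamp;                        // unique id for this query
        for(const auto& q : queries)         // q points at one group's vector<Body*>
            for(Body* b : *q)
                if(b->lastQueryStamp != queryStamp)
                {
                    b->lastQueryStamp = queryStamp;   // seen in this query, skip next time
                    bodiesToCheck.push_back(b);
                }
        return bodiesToCheck;
    }

This keeps the grouped-bodies approach intact and turns getBodiesToCheck into a single linear pass over the group vectors, at the cost of one extra integer per Body.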

    Read the article

  • Incorrect colour blending when using a pixel shader with XNA

    - by MazK
    I'm using XNA 4.0 to create a 2D game, and while implementing a layer-tinting pixel shader I noticed that when the texture's alpha value is anything between 0 and 1 the end result is different than expected. The tinting works by selecting a colour and setting the amount of tint. This is achieved via the shader, which first works out the starting colour (for each of r, g, b and a): float red = texCoord.r * vertexColour.r; and then the final tinted colour: output.r = red + (tintColour.r - red) * tintAmount; The alpha value isn't tinted and is left as: output.a = texCoord.a * vertexColour.a; The picture in the link below shows different backdrops against an energy ball object whose outer glow hasn't blended as I would like it to. The middle two are incorrect: the second, non-tinted one should not show a glow against a white background, and the third should be entirely invisible. The blending function is NonPremultiplied. Why is the alpha value interfering with the final colour?

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description: Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine. If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article

  • gpgpu vs. physX for physics simulation

    - by notabene
    Hello. First, a theoretical question: what is better (faster), developing your own GPGPU techniques for physics simulation (cloth, fluids, collisions...) or using PhysX? (By "develop" I mean implement existing algorithms like Navier-Stokes...) I don't care about what will take more time to develop; what will be faster for the end user? As I understand it, PhysX is accelerated through PPU units on the GPU; does that mean the physics simulation can run in parallel with rasterization? Are PPUs different units from the unified shader units used as vertex/geometry/pixel/GPGPU shader units? And a little non-theoretical question: is PhysX able to do sophisticated simulation equal to, let's say, Autodesk Maya's fluid solver? Are there any C++ GPU-accelerated physics frameworks to try? (I am interested in both PhysX and GPGPU; commercial engines are OK too.)

    Read the article

  • 301 redirect to 404 page or set status code to 404 and stay on page?

    - by WPRookie82
    I have a number of pages on my website that only administrators can access; access to these pages is granted if a querystring value is found and correctly set. For example: http://www.mydomain.com/show-daily-statistics?key=abc The above link will show the content of the page, but anything else, such as the below, will not: http://www.mydomain.com/show-daily-statistics Now I was thinking about what to do if search engines and/or non-admin users somehow land on these hidden pages. I can of course either change the status code of the page to 404 or else 301 redirect to: http://www.mydomain.com/404-error What's the best solution with respect to Google and SEO?

    Read the article

  • Will Ubuntu break my RAID 0 array?

    - by Chad
    I am upgrading an older machine today with a new motherboard, RAM, and CPU. Then I am going to do a fresh install of Ubuntu 64-bit. Currently the old machine has an 80 GB system drive and a 4 TB RAID 0 array. The old motherboard has no SATA ports, so I used a SATA card. Ubuntu set up the old RAID array; will it still recognize the array on a newer machine? Are there any steps I should take to ensure the array isn't damaged? It's non-crucial data, but I would rather not start over if it can be avoided. Thanks.
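    Not part of the original question, but if the array is Linux software RAID (mdadm), the array metadata lives on the member disks themselves, so a fresh install can usually reassemble it. A sketch of the usual steps; the device names are examples only:

        sudo apt-get install mdadm
        sudo mdadm --examine /dev/sdb /dev/sdc    # inspect the RAID superblocks on the old member disks
        sudo mdadm --assemble --scan              # detect and assemble existing arrays
        cat /proc/mdstat                          # confirm the array came up

    If the array was instead created by the SATA card's own BIOS (fakeRAID), the equivalent tooling is dmraid rather than mdadm.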

    Read the article

  • What functionality does dynamic typing allow?

    - by Justin984
    I've been using python for a few days now and I think I understand the difference between dynamic and static typing. What I don't understand is under what circumstances it would be preferred. It is flexible and readable, but at the expense of more runtime checks and additional required unit testing. Aside from non-functional criteria like flexibility and readability, what reasons are there to choose dynamic typing? What can I do with dynamic typing that isn't possible otherwise? What specific code example can you think of that illustrates a concrete advantage of dynamic typing?
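    As one small illustration (a sketch, not tied to any particular framework) of what dynamic typing permits: any object with a matching method can be passed in, with no interface declared up front, whereas a statically typed language would typically require a shared interface or generics for the same call site.

        # Duck typing: show() accepts anything with a .render() method.
        class HtmlReport:
            def render(self):
                return "<h1>report</h1>"

        class PlainReport:
            def render(self):
                return "report"

        def show(report):          # no type annotation or interface required
            print(report.render())

        for r in (HtmlReport(), PlainReport()):
            show(r)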

    Read the article

  • Visualize flowchart diagram with multiple end symbols

    - by platzhirsch
    I am looking for a standardized way to visualize the following hierarchical logic: the state of the thread is represented by the answers to a hierarchical set of questions. You can read this listing like a flowchart: you iterate over the questions, decide, go one step deeper, and so on. Therefore I thought the best way to visualize it was a flowchart. The problem is that in this hierarchical set it is possible to end in more than one state, and that is totally valid. I have never seen a flowchart where you can end in more than one state. Is it still possible and I am just missing the right symbol to represent this logic, or are flowcharts not a fit anyway? What other graphical representation could I use; is there something fitting in UML? A non-deterministic state machine does not seem intuitive enough, transferring it into a deterministic state machine would result in too many states, and so on.

    Read the article

  • SpringSource acquires Rabbit Technologies, the company behind the AMQP solution RabbitMQ

    Yet another acquisition for SpringSource! Not content with already having a solid footing in the messaging space by employing several committers on ActiveMQ, SpringSource is acquiring Rabbit Technologies. Rabbit Technologies is the company behind RabbitMQ, an implementation of an AMQP server. Unlike JMS, whose only goal is to provide an access API, this protocol aims to standardize the entire messaging system, from the various transmission modes down to the message format. Thanks to that, any AMQP-compatible servers can communicate with one another, regardless of the language they are implemented in or the system they run on.

    Read the article

  • How to get Spotify running?

    - by Dante Ashton
    A while ago Spotify (the streaming music service) came out with a Linux preview of their client. I had successfully run it throughout 10.04. Now that I'm on 10.10, I can't seem to find it in the package manager, let alone install it. Software Sources gives me this:
    Failed to fetch http://repository.spotify.com/dists/stable/Release Unable to find expected entry non-free/source/Sources in Meta-index file (malformed Release file?)
    Some index files failed to download, they have been ignored, or old ones used instead.
    So... as I'm paying for Spotify, what... umm... do I do? :P
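    For what it's worth, the repository line Spotify documented at the time was a binary-only entry; the "non-free/source/Sources" error is what apt prints when a deb-src line points at this repository, which publishes no source packages. A sources.list sketch, keeping only the deb line and no matching deb-src line:

        deb http://repository.spotify.com stable non-free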

    Read the article

  • Loading dynamic content and rewrite URL on Hashchange event with Jquery Mobile

    - by user3611500
    I'm building a mobile version of my website using the jQuery Mobile API. The framework provides automatic AJAX navigation processing, but as far as I know it requires "real" pages for loading purposes. What I want to do is override its automatic navigation and process the hashchange on my own. But I can't rewrite the URL using the window hashchange handler, which runs fine on my non-mobile website version:
        $(function () {
            $(window).off().hashchange(function () {
                if (location.hash.length > 1) {
                    PageSelect();
                }
            });
            $(window).hashchange();
        });
    I only want to take advantage of the jQuery Mobile interface; I don't want anything to do with its automatic AJAX navigation! I tried to disable it using ajaxEnabled() but had no luck.
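    One possible approach (a sketch, not from the original question) is to keep the jQuery Mobile widgets but switch its navigation off globally. These settings must be applied on the mobileinit event, i.e. in a script loaded after jQuery but before jquery.mobile.js:

        $(document).on("mobileinit", function () {
            $.mobile.ajaxEnabled = false;           // no Ajax page loading
            $.mobile.hashListeningEnabled = false;  // jQuery Mobile stops handling hashchange
            $.mobile.pushStateEnabled = false;      // no history.pushState URL rewriting
        });

    With those disabled, your own $(window).hashchange handler should be the only thing reacting to URL changes.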

    Read the article

  • What are the benefits and drawback of documentation vs tutorials vs video tutorials [closed]

    - by Cat
    Which types of learning resources do you find the most helpful, for which kinds of learning and/or perhaps at specific times? Some examples of types of learning you could consider: When starting to integrate a new SDK inside an existing codebase When learning a new framework without having to integrate legacy code When digging deeper into an already-used SDK that you may not know very well yet For example - (video) tutorials are usually very easy to follow and tells a story from beginning to end to get results, but will nearly always assume starting from scratch or a previous tutorial. Therefore such a resource is useful for quick learning if you don't have legacy code around, but less so if you have to search for the best-fit to the code you already have. SDK Documentation on the other hand is well-structured but does not tell a story. It is more difficult to get to a specific larger result with documentation alone, but it is a better fit when you do have legacy code around and are searching for perhaps non-obvious ways of employing the SDK or library. Are there other forms of resources that you find useful, such as interactive training?

    Read the article

  • How to setup Thinkpad features on Thinkpad T500

    - by gijoemike
    I have an IBM-Lenovo Thinkpad T500. I was previously an exclusive Windows user, but recently installed Ubuntu and am loving it because of the speed and interface. The only thing is that I don't get some features that I came to enjoy in Windows. I need help setting these up:
    - Hard-drive protection: active protection software that pauses the drive when there is movement.
    - My printer doesn't work (can't find the driver for this one): Canon iP2600.
    - A way to change which graphics chip to use while in the OS. I have both the integrated and non-integrated chips (dual graphics). (If it's not easy to set up in the OS, I know there's a way to do it before it boots, but I don't know how.)
    - CPU performance level: in Windows you can pick "high performance", "power saver", etc. to save the battery.
    - My integrated camera w/light: it works, but I need an app where I can record videos, take snapshots, etc.; I can't find one that works.
    Thanks!

    Read the article

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!     Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work. The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :) I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time, I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc. I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, Jazz Guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we're tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed in a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking was the brainchild of my buddy Michael Washington. 
What I blog/shoutout After some time doing posts, I decided there were things that probably have no need to be searchable, but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place. Notes to bloggers Remember I said spinning throught the Big List-o-BlogsTM is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand-recognition. Far be it for me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get get some recognition, you are going to want your name to be available somewhere. I can think right off the top of my head of a couple good blogs that I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blog-o-sphere, being able to be name-recognized is as important as getting your brand out there. Kick my tires Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting. Thanks! Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

  • Best PHP-based web development 'stack' of 2011

    - by Jens Roland
    I have been building PHP-based websites for many years, and lately it seems I discover another interesting new tool or method every few weeks. This raises the question: what is the current state of the art in PHP development stacks for the seasoned coder? I'm specifically interested in the following:
    - High-performance web server
    - Database
    - MVC framework
    - Build tool
    - Revision control
    - Continuous integration
    - Automated testing
    - Non-persistent caching
    I'd like to optimize my stack for scalability and rapid development. I'm not looking for personal preference here; I'm looking for real, quantifiable reasons to pick this over that.

    Read the article

  • SQL SERVER – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    I receive a lot of emails every day. I try to answer each and every email and comments on Facebook and Twitter. I prefer communication on social media as this gives opportunities to others to read the questions and participate along with me. There is always some question which everyone likes to read and remember. Here is one of the questions which I received in email. I believe the same question will be there any many developers who are beginning with SQL Server. I decided to blog about it so everyone can read it and participate. “I am beginner in SQL Server. I have a very interesting situation and need your help. I am beginner to SQL Server and that is why I do not have access to the production server and I work entirely on the development server. The project I am working on is also in the infant stage as well. In product I had to create a multiple tables and every table had few columns. Later on I have written Stored Procedures using those tables. During a code review my manager has requested to change one of the column which I have used in the table. As per him the naming convention was not accurate. Now changing the columname in the table is not a big issue. I figured out that I can do it very quickly either using T-SQL script or SQL Server Management Studio. The real problem is that I have used this column in nearly 50+ stored procedure. This looks like a very mechanical task. I believe I can go and change it in nearly 50+ stored procedure but is there a better solution I can use. Someone suggested that I should just go ahead and find the text in system table and update it there. Is that safe solution? If not, what is your solution. In simple words, How to replace a column name in multiple stored procedure efficiently and quickly? Please help me here with keeping my experience and non-production server in mind.” Well, I found this question very interesting. Honestly I would have preferred if this question was asked on my social media handles (Facebook and Twitter) as I am very active there and quite often before I reach there other experts have already answered this question. Anyway I am now answering the same question on the blog so all of us can participate here and come up with an appropriate answer. Here is my answer - “My Friend, I do not advice to touch system table. Please do not go that route. It can be dangerous and not appropriate. The issue which you faced today is what I used to face in early career as well I still face it often. There are two sets of argument I have observed – there are people who see no value in the name of the object and name objects like obj1, obj2 etc. There are sets of people who carefully chose the name of the object where object name is self-explanatory and almost tells a story. I am not here to take any side in this blog post – so let me go to a quick solution for your problem. Note: Following should not be directly practiced on Production Server. It should be properly tested on development server and once it is validated they should be pushed to your production server with your existing deployment practice. The answer is here assuming you have regular stored procedures and you are working on the Development NON Production Server. Go to Server Note >> Databases >> DatabaseName >> Programmability >> Stored Procedure Now make sure that Object Explorer Details are open (if not open it by clicking F7). You will see the list of all the stored procedures there. Now you will see a list of all the stored procedures on the right side list. 
Select either all of them or the ones which you believe are relevant to your query. Now right-click on the stored procedures >> select DROP and CREATE To >> then select New Query Editor Window or Clipboard. Paste the complete script into a new window if you selected the Clipboard option. Now press Ctrl+H, which will bring up the Find and Replace screen. In this screen, put the column to be replaced in the "Find What" box and the new column name in the "Replace With" box. Now execute the whole script. As we selected DROP and CREATE To, it will drop the old procedure and create the new one. Another method does all of the same steps, but instead of DROP and CREATE you manually replace the CREATE keyword with ALTER. There is a small advantage in doing this: if for any reason an error comes up which prevents the new stored procedure from being created, you will still have your old stored procedure in the system as it is. " Well, this was my answer to the question which I received. Do you see any other workaround or solution? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
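As a quick sanity check before and after the replace, a query along these lines (not part of the original post; 'OldColumnName' is a placeholder) lists which stored procedures still mention the old column name in their definitions:

    -- List procedures whose definition contains the old column name
    SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
           OBJECT_NAME(object_id)        AS ProcedureName
    FROM   sys.sql_modules
    WHERE  definition LIKE '%OldColumnName%'
      AND  OBJECTPROPERTY(object_id, 'IsProcedure') = 1;

Running it once before the find-and-replace gives the list of procedures to script out, and running it again afterwards confirms nothing was missed.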

    Read the article

  • Google I/O 2012 - Maps for Good

    Google I/O 2012 - Maps for Good Rebecca Moore, Dave Thau Developers are behind many cutting-edge map applications that make the world a better place. In this session we'll show you how developers are using Google Earth Builder, Google Earth Engine, Google Maps API and Android apps for applications as diverse as ethno-mapping of indigenous cultural sites, monitoring deforestation of the Amazon and tracking endangered species migrations around the globe. Come learn about how you can partner with a non-profit to apply for a 2012 Developer Grant and make a positive impact with your maps. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 739 7 ratings Time: 54:23 More in Science & Technology

    Read the article

  • How to change System default language on GNOME3?

    - by Vor
    I just installed GNOME 3 on my Ubuntu. Everything worked fine until I restarted the computer. Then I received a message asking if I wanted to rename my folders to names in some other language (I don't know what it is, but it looks like Chinese). I pressed 'keep old names', but it still changed all my folder names! And also the rest of the names (like Settings, and all that stuff). So could you give me directions on where to click? All the English names changed to non-English and I simply don't know what any of them means!
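    If the session language itself needs to be forced back to English, one way (assuming the en_US locale is the one you want and is generated) is from a terminal (Ctrl+Alt+T), followed by logging out and back in:

        sudo update-locale LANG=en_US.UTF-8 LANGUAGE=en_US:en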

    Read the article

  • How do I report a missing package dependency during an upgrade?

    - by crasic
    A friend of mine (somewhat new to Linux) recently upgraded from 10.10 to 11.04 and his OS broke from the upgrade. A few minutes of troubleshooting showed that the culprit was the PAE kernel that the upgrade decided to install, since it determined he had 4 GB of physical RAM. More specifically, the upgrade forgot to install the linux-headers-generic-pae package required by the closed-source NVIDIA drivers. I'm not entirely sure how to report this bug to the devs. It's an easy fix (after booting into the non-PAE kernel and installing the package, everything worked), but they are encouraging users to use the built-in bug reporting system and I'm not entirely certain how to report upgrade bugs.
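    For reference, Ubuntu's built-in reporting tool can be driven from the command line with ubuntu-bug, filing the report against the package that handled the upgrade (update-manager in that release); this is a sketch of the usual invocation rather than anything from the original question:

        ubuntu-bug update-manager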

    Read the article

  • Why not Green Threads?

    - by redjamjar
    Whilst I know questions on this have been covered already (e.g. http://stackoverflow.com/questions/5713142/green-threads-vs-non-green-threads), I don't feel like I've got a satisfactory answer. The question is: why don't JVMs support green threads anymore? It says this on the code-style Java FAQ: "A green thread refers to a mode of operation for the Java Virtual Machine (JVM) in which all code is executed in a single operating system thread." And this over on java.sun.com: "The downside is that using green threads means system threads on Linux are not taken advantage of and so the Java virtual machine is not scalable when additional CPUs are added." It seems to me that the JVM could have a pool of system processes equal to the number of cores, and then run green threads on top of that. This could offer some big advantages when you have a very large number of threads which block often (mostly because current JVMs cap the number of threads). Thoughts?

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem, especially when there is no problem. When you look at other people's code you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing or, even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should happen only once in 100 000 years. 100 000 years are a very short time when the first customer tries to run your heavily distributed networking application over a slow WIFI network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model assumed a low-latency, high-bandwidth LAN connection. If this assumption becomes wrong, the system has a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a Theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                {"Default", new Theme { Color = Color.AliceBlue }},
                {"Theme12", new Theme { Color = Color.Aqua }},
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache creates font objects only once, since on first retrieval of the Theme object the font is added to it. When we let the application run it should print "Creating new font" only once. Right? Wrong!
The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) in order not to put too much pressure on the garbage collector. The code

    Theme cached = _Cache[theme];
    if (cached.Font == null)
    {
        Console.WriteLine("Creating new font");
        cached.Font = new Font("Arial", 8);
    }

works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache. The funny thing is that I found this out with a profiler, by looking at which objects were finalized: I found way too many Font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that the cache was never needed (I found no perf issue due to the creation of font objects), the cache was never profiled to see whether it brought any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
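A minimal sketch of the second fix described above, keeping Theme a struct but writing the mutated copy back into the dictionary so the created Font is not lost:

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
            _Cache[theme] = cached;   // store the filled-in copy back; otherwise Font stays null in the dictionary
        }
        return cached;
    }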

    Read the article

  • Do professional software developers still dream of creating industry/world-changing apps?

    - by Andrew Heath
    I'm a hobby programmer. The absence of real world deadlines, customer feedback, or performance reviews leaves me free to daydream about having and implementing The Next Great Idea That Changes the World. Of course I'm aware I probably have a better chance of winning the lottery, but it's fun to imagine knocking out some fully-homebrewed app that destroys the status quo. I know many professional programmers have side projects, some for profit others not. I was wondering on the way to work this morning (non-IT boring work) if having to code for your food tended to dampen the dreaming? Does greater experience leave you jaded and more focused on the projects at hand? Not trying to be a downer, just interested in the mindset of the real software professional :-)

    Read the article

  • CUDA 4.0: Release Candidate of NVIDIA's parallel computing architecture is out

    CUDA 4.0: Release Candidate of NVIDIA's parallel computing architecture. This week's release to registered CUDA developers is the culmination of thousands of hours dedicated to this project. The technology may seem young (its first version, 1.0, was only released in 2007), but its effect on the world has without any doubt not been nil. Nor is it the only avenue of progress for massively parallel technologies on NVIDIA hardware: APIs such as OpenCL and DirectCompute are also supported and improved, in addition to direct access to the GPU from C, C++ and Fortran. As always, NVIDIA is looking...

    Read the article

  • Firefox 4: extensions will update automatically, but not the browser

    Firefox 4: extensions will update automatically, but not the browser. Update of 07/12/10. As the Mozilla Foundation's browser works its way through the betas (7 to date; see above), the final version of Firefox 4 is taking shape. The latest new feature, revealed by Jenny Boriss, who is in charge of the UI and user experience, is the automatic (and silent) updating of extensions. Until now, Mozilla had chosen to tell the user that an update was available for a given add-on; the user could then choose whether or not to apply it. Better still, beta 7 allows...

    Read the article
