Search Results

Search found 3402 results on 137 pages for 'statistical analysis soft'.

Page 100/137

  • Using * in SELECT Query

    - by libregeek
    I am currently porting an application written in MySQL 3 and PHP 4 to MySQL 5 and PHP 5. On analysis I found several SQL queries that use "select * from tablename" even though only one column (field) is processed in PHP. The table has almost 60 columns and a primary key. In most cases, the only column used is id, which is the primary key. Will there be any performance boost if I use queries in which the column names are explicitly mentioned instead of *? (In this application there is only one method for which we need all the columns; all other methods return only a subset of the columns.)
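    For illustration, a minimal before/after sketch (the tablename and WHERE clause are hypothetical placeholders):

        -- Before: pulls all ~60 columns across the wire, even when only id is used
        SELECT * FROM tablename WHERE category = 'books';

        -- After: pulls only the primary key; since id is the primary key,
        -- this query can often be answered from the index alone
        SELECT id FROM tablename WHERE category = 'books';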

  • How to output KML by GAE

    - by Niklas R
    Hi. I use KML for a Google map where entities have a GeoPt coordinate in the datastore, and the soft memory limit was exceeded with 213.465 MB after servicing 1 requests total. The log says:

        /list.kml 200 13130ms 10211cpu_ms 4238api_cpu_ms

    The file list.kml, which outputs about 455.7 KB, is rendered from a template as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2"
             xmlns:gx="http://www.google.com/kml/ext/2.2"
             xmlns:kml="http://www.opengis.net/kml/2.2"
             xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>{% for a in list %}
            <Placemark>
              <name> </name>
              <description><![CDATA[<a href="http://{{host}}/{{a.key.id}}">{{ a.title }}</a><br/>{{a.text}}]]></description>
              <Style>
                <IconStyle>
                  <Icon>
                    <href>http://www.google.com/intl/en_us/mapfiles/ms/icons/green-dot.png</href>
                  </Icon>
                </IconStyle>
              </Style>
              <Point>
                <coordinates>{{a.geopt.lon|floatformat:2}},{{a.geopt.lat|floatformat:2}}</coordinates>
              </Point>
            </Placemark>
          {% endfor %}</Document>
        </kml>

    Is there a memory leak in the template or in the Python that passes the list variable? Could I improve things by using another template engine or a framework other than the default? Is KMZ compression a good idea in this case? Thanks in advance for any suggestion on where or how to change the code.

  • C#: What's the fastest way to make an integer positive

    - by maxima120
    I asked the wrong question previously and was swamped with negative votes... Let me try again... What is the absolute fastest way to make an int positive (given a 50/50 distribution of pos/neg over time)? To be nominated for an answer, I will require MSIL analysis and not a guess or a measurement of time with granny's watch... P.S. As one of the variations I proposed i * i, not because I wanted to do Sqrt(i * i) afterwards, but because i will be used only once, to be compared to a const. And if i * i wins the competition, I simply square the const. Hence the following solution is valid:

        int trigger = realTrigger * realTrigger;
        i = SomeCalcs();
        i = i * i;
        if (i < trigger) DoSomething();

    P.P.S. Pointless rants are not acceptable, like: why do you need this, it's BS! C# cannot tolerate developers like you!
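    For reference, a sketch of the usual candidates (i is hypothetical; the branch-free variant relies on two's-complement arithmetic, and all of them misbehave for int.MinValue one way or another):

        int a = Math.Abs(i);        // library call; throws OverflowException for int.MinValue
        int b = i < 0 ? -i : i;     // explicit branch
        int m = i >> 31;            // arithmetic shift: 0 if i >= 0, -1 if i < 0
        int c = (i + m) ^ m;        // branch-free two's-complement trick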

  • Clean up domain list in Excel - regex / macros?

    - by Tim
    I have a huge spreadsheet of domains that I need to clean up as follows:

    1. Remove all http:// (simple replace-all of "http://" with "").
    2. Remove any www. (simple replace-all of "www." with "").
    3. Delete any sub-domains (delete the whole row, not just the subdomain from the URL).
    4. Remove anything after the domain extension, i.e. website.com/blah/blahbah/ becomes just website.com (simple replace-all of "/*" with "", then of "/" with "").

    So what I'm left with is just a spreadsheet of clean domains like "website.com". I think I've got 1, 2 and 4 sorted (as above), but I'm really struggling with 3. Any ideas? Can I do this with regex/VBA, and actually delete the row completely? Sample data:

        http://www.scholastic.com/kids/stacks/games/
        http://imgworld.teamworkonline.com/
        http://topfreegraphics.com/
        http://www.workcircle.co.uk/
        http://www.healthycanadians.gc.ca/index-eng.php
        http://gsociology.icaap.org/methods/soft.html

    Steps 1, 2 and 4 would leave me with:

        scholastic.com
        imgworld.teamworkonline.com
        topfreegraphics.com
        workcircle.co.uk
        healthycanadians.gc.ca
        gsociology.icaap.org

    It's those pesky sub-domains I need to delete completely - just delete the row. I've realised I can't simply search for two occurrences of ".", because plenty of domain extensions (i.e. .co.uk) include that. Any help appreciated; a macro along the lines of the sketch below is what I have in mind.
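    A rough VBA sketch of step 3 (assumptions: the domains are in column A, and the two-part-TLD check is a crude placeholder list - a real run would need a fuller public-suffix list):

        Sub DeleteSubdomainRows()
            Dim r As Long, host As String, parts() As String, needed As Long
            For r = Cells(Rows.Count, 1).End(xlUp).Row To 1 Step -1
                host = Cells(r, 1).Value
                host = Replace(host, "http://", "")
                If Left(host, 4) = "www." Then host = Mid(host, 5)
                If InStr(host, "/") > 0 Then host = Left(host, InStr(host, "/") - 1)
                parts = Split(host, ".")
                needed = 2                          ' plain .com / .org / .uk etc.
                If host Like "*.co.uk" Or host Like "*.gc.ca" Then needed = 3
                If UBound(parts) + 1 > needed Then
                    Rows(r).Delete                  ' sub-domain: drop the whole row
                Else
                    Cells(r, 1).Value = host        ' cleaned domain
                End If
            Next r
        End Sub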

  • Any info about book "Unix Internals: The New Frontiers" by Uresh Vahalia 2nd edition (Jan 2010)

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand code and read from here and there, but the thing is I want to make the most of my time, and reading books is best for this. From my search I found that these two books, "The Design and Implementation of the 4.4BSD Operating System" and "Unix Internals: The New Frontiers" by Uresh Vahalia, are the established books on UNIX OS internals. But the thing is, these books are pretty much outdated. Yay! Lucky me: "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010), has been released. I've been searching for information on this book. Sadly, Amazon says "Out of Print--Limited Availability", and I couldn't find any other info regarding this book. This is the information I'm looking for:

    1. Table of contents.
    2. What's new in this edition?
    3. Where the hell can I buy a soft copy of this book? I really cannot afford a hardcopy.
    4. How can I contact the author?

    I have a lot of hopes and expectations for this book. I've been waiting for its release for a long time. I've sent random mails to & & requesting a proper website for this book. I even contacted the publisher for further information, but got no replies from anyone. Also, if you have any other books that you think will help me, please suggest them. I repeat: I want to get the maximum possible out of these 2.5 months of summer.

  • What is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of Lanczos and other more sophisticated methods for even higher-quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing in the past, but not in relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling, but it didn't have nearly enough detail for me.) Thanks a lot!
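    For reference, the kernel usually meant by "Lanczos" is a sinc windowed by a wider sinc, with support parameter a (commonly 2 or 3); resampling convolves the source samples with this kernel:

        L(x) = \begin{cases}
                 \operatorname{sinc}(x)\,\operatorname{sinc}(x/a) & |x| < a \\
                 0 & \text{otherwise}
               \end{cases}
        \qquad \text{where } \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}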

  • Google App Engine or Amazon EC2 for RESTful services and direct access to the datastore

    - by imran
    I'm thinking of building a RESTful app, developed in Java, on either App Engine or EC2, and I'm interested in opinions on, or experience with, the two options for this. The primary purpose is to create web services to write and retrieve data from a mobile device - basically creating an API for the service I want to build. It seems to me it would be quicker and cheaper in the beginning to go with Google App Engine, using either Restlet or Grails. But I also think that I could run into problems in the future when I want to do something more advanced and might be restricted by App Engine's environment. I also want to be able to do data analysis on the data in the datastore. It seems that with App Engine this would be hard, as I don't have direct access to the datastore (on Amazon I could still have access to the underlying DB if I go with MySQL).

  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a user process in a Linux crash dump. The normal steps to get into the crash dump are:

    1. Go to the path where the dump is located.
    2. Use the command crash kernel_link dump.201104181135, where kernel_link is a soft link I have created for the vmlinux image.
    3. Now you will be at the crash prompt.
    4. Run the command foreach <PID of the process> bt.

    For example:

        crash> foreach 6920 bt
        PID: 6920  TASK: ffff88013caaa800  CPU: 1  COMMAND: "climmon"
         #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a
         #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7
         #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a
         #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45
         #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c
         #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a
         #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592
         #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba
         #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e
         #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f
        #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69
        #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560
        #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5
            RIP: 00007fd02b7e9071  RSP: 0000000040e86ea0  RFLAGS: 00010202
            RAX: 0000000000000000  RBX: 0000000000000000  RCX: 00007fd02b7e9071
            RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000040e86ec0
            RBP: 0000000040e87140  R8:  0000000000000800  R9:  0000000000000000
            R10: 0000000000000000  R11: 0000000000000202  R12: 00007fff16ec43d0
            R13: 00007fd02bcadf00  R14: 0000000040e87950  R15: 0000000000001000
            ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

    If you check the backtrace above, it shows the kernel functions used for scheduling and page-fault handling, but not the functions that were executed in the user process (here, climmon). So I am not able to debug this process, because I cannot see the functions executed in it. Can anyone help me with this case?

  • How to store an inventory using hashtables?

    - by Harm De Weirdt
    Hello everyone. For an assignment in college we have to write a script in Perl that lets us manage an inventory for an e-store (the example given was Amazon). Users can make orders in a fully text-based environment, and the inventory must be updated when an order is completed. Every item in the inventory has 3 to 4 attributes: a product code, a title, a price, and for some an amount (MP3s, for example, do not have this attribute). Since this is my first encounter with Perl, I don't really know how to start. My main problem is how I should implement the inventory in the program. One of the functions of the program is searching through the titles. Another is making an order, where the user gives a product code. My first idea was a hashtable with the product code as key. But if I wanted to search the titles, that could be a problem: the hash key would be something like DVD-123, and the information belonging to that key could be "The Green Mask 12" (without the quotes), where the 12 indicates how many of this DVD are currently in stock, so I'd have to find a way to ignore the 12 at the end. Another solution was to use the title as the hash key, but that would prove cumbersome too, I think. Is there a way to make a hashtable with two keys, so that when I give only one it returns an array with the other values (including the other key and the other information)? That way I could use either key depending on what info I need from my inventory. We have to read the default inventory from a txt file looking like this:

        MP3-72|Lady Gaga - Kiss and Run (Fear of Commitment Monster)|0.99
        CD-400|Kings of Leon - Only By The Night|14.50|2
        MP3-401|Kings of Leon - Closer|0.85
        DVD-144|Live Free or Die Hard|14.99|2
        SOFT-864|Windows Vista|49.95

    Any help would be appreciated very much :) PS: I am sorry for my bad grammar; English isn't my native language.
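    A minimal sketch of one common approach, assuming the file format above: a hash of hashes keyed by product code, so the stock count lives in its own field instead of being glued to the title, and titles can be searched with grep:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my %inventory;
        open my $fh, '<', 'inventory.txt' or die "Cannot open inventory: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($code, $title, $price, $amount) = split /\|/, $line;
            $inventory{$code} = {
                title  => $title,
                price  => $price,
                amount => $amount,   # undef for items like MP3s with no stock count
            };
        }
        close $fh;

        # Look up by product code:
        print $inventory{'CD-400'}{title}, "\n";

        # Search through the titles:
        my @hits = grep { $inventory{$_}{title} =~ /kings of leon/i } keys %inventory;
        print "$_: $inventory{$_}{title}\n" for @hits;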

  • Java: Best practices for turning foreign horror-code into clean API...?

    - by java.is.for.desktop
    Hello, everyone! I have a project (related to graph algorithms). It was written by someone else, and the code is horrible:

    - public fields, no getters/setters
    - huge methods, all public
    - some classes have over 20 fields
    - some classes have over 5 constructors (which are also huge)
    - some of those constructors just leave many fields null (so I can't make some fields final, because then every second constructor signals errors)
    - methods and classes rely on each other in both directions

    I have to rewrite this into a clean and understandable API. The problem is: I myself don't understand anything in this code. Please give me hints on analyzing and understanding such code. I was thinking that perhaps there are tools which perform static code analysis and give me call graphs and things like that.

  • How to name an event handler of a private variable in Vb.Net following FxCop rules and Vb.Net standards

    - by SoMoS
    Hello. On one side, in VB.Net, when you add an event handler to an object, the created method is named <NameOfTheObject>_<NameOfTheEvent>. As I like consistent syntax, I always follow this rule when creating event handlers by hand. On the other side, when I create private variables, I prefix them with m_, as this is a common convention in the community (in C#, people tend to put _ at the beginning of a variable, but that is not CLS-compliant). In the end, when I create event handlers for events raised by private variables, I end up with Subs named like m_myVariable_MyEvent. Code Analysis (FxCop) complains about this way of naming, both because the method does not start with an uppercase letter and because of the _. So the question is: what naming standards do you follow, if any, when creating event handlers by hand that satisfy the FxCop rules? Thanks in advance.

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile that processes some data. The makefile gets the inputs for the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files to a local directory; i.e., the makefile contains:

        tmp/%.txt: tmp
            ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

        INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have, e.g.:

        %.out: %.txt
            some command

    as well as targets to merge results depending on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints are welcome.
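    One possible shape for this, sketched under the assumption that the input basenames are unique: vpath tells make where to find each .txt, so the symlink step disappears, and since every .out depends only on its own .txt, parallel builds with -j stay correct (recipe lines must be indented with real tabs):

        INPUTS := /foo/bar.txt /zot/snarf.txt
        NAMES  := $(notdir $(INPUTS))      # bar.txt snarf.txt
        OUTS   := $(NAMES:.txt=.out)       # bar.out snarf.out

        vpath %.txt $(sort $(dir $(INPUTS)))

        all: merged.out

        %.out: %.txt
            some command $< > $@

        merged.out: $(OUTS)
            cat $^ > $@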

  • Solving a SQL Server Deadlock situation

    - by mjh41
    I am trying to find a solution that will resolve a recurring deadlock situation in SQL Server. I have done some analysis on the deadlock graph generated by the profiler trace and have come up with this information. The first process (spid 58) is running this query:

        UPDATE cds.dbo.task_core
        SET nstate = 1
        WHERE nmboxid = 89
          AND ndrawerid = 1
          AND nobjectid IN (SELECT nobjectid
                            FROM (SELECT nobjectid, count(nobjectid) AS counting
                                  FROM cds.dbo.task_core
                                  GROUP BY nobjectid) task_groups
                            WHERE task_groups.counting > 1)

    The second process (spid 86) is running this query:

        INSERT INTO task_core (…) VALUES (…)

    spid 58 is waiting for a shared (S) page lock on CDS.dbo.task_core (spid 86 holds a conflicting intent exclusive (IX) lock). spid 86 is waiting for an intent exclusive (IX) page lock on CDS.dbo.task_core (spid 58 holds a conflicting update (U) lock).

  • Reducing time in C# Forms Control.set_Text(string) function

    - by awshepard
    Hoping for a quick answer (which SO seems to be pretty good for)... I just ran a performance analysis with VS2010 on my app, and it turns out that I'm spending about 20% of my time in the Control.set_Text(string) function, as I'm updating labels in quite a few places in my app. The window has a timer object (a Forms timer, not a Threading timer) with a timer1_Tick callback, which updates one label every tick (to give a stopwatch sort of effect) and updates about 15 labels once each second. Does anyone have quick suggestions for reducing the amount of time spent updating text on a form, other than increasing the update interval? Are there other structures or functions I should be using?
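    One cheap sketch worth trying (the label names and helper methods here are hypothetical): skip the setter when the text hasn't actually changed, since assigning Text triggers a repaint even for an identical string.

        private int _lastSlowUpdate;

        private static void SetLabelText(Label label, string text)
        {
            if (label.Text != text)   // avoid set_Text (and the repaint) when unchanged
                label.Text = text;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            SetLabelText(stopwatchLabel, FormatElapsed());   // the every-tick label

            // Update the other ~15 labels only once per second.
            if (Environment.TickCount - _lastSlowUpdate >= 1000)
            {
                _lastSlowUpdate = Environment.TickCount;
                UpdateSlowLabels();
            }
        }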

  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have two tables, one in each of two databases: one is a read-only feeds database, and the other is the T-SQL read/write production source. There are a few key columns in common between the two. What I am doing to set up is this:

        List<model.AutoWithImage> feedProductList =
            _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList();
        List<model.vwCompanyDetails> companyDetailList =
            _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed and an existing (target) list of products from my prod DB, I'd like to do three things:

    1. Find all SKUs in the feed that are not in the target.
    2. Find all SKUs that are in both and are active feed products, and update the target.
    3. Find all SKUs that are in both and are inactive, and soft-delete them from the target.

    What are the best practices for doing this without running a double loop? I'd prefer a LINQ to Objects solution (see the sketch below), as I already have my objects. EDIT: BTW, I will need to transfer info from feed rows to target rows in the first two instances, and just set a flag in the last instance. TIA
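    A rough LINQ to Objects sketch of the three buckets (the Sku and IsActive property names are hypothetical stand-ins for the shared key columns):

        // SKUs already in the target, for fast membership tests
        var targetSkus = new HashSet<string>(productList.Select(p => p.Sku));

        // 1. In the feed but not in the target -> inserts
        var toInsert = feedProductList.Where(f => !targetSkus.Contains(f.Sku)).ToList();

        // 2. In both and active -> updates (join pairs each feed row with its target row)
        var toUpdate = (from f in feedProductList
                        join p in productList on f.Sku equals p.Sku
                        where f.IsActive
                        select new { Feed = f, Target = p }).ToList();

        // 3. In both and inactive -> soft deletes
        var toDelete = (from f in feedProductList
                        join p in productList on f.Sku equals p.Sku
                        where !f.IsActive
                        select p).ToList();

    Each join builds a hash lookup internally, so this avoids the quadratic double loop.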

  • MDX: How To Aggregate Hierarchy Level Members With Same Name

    - by Dave Frautnick
    Greetings, I am new to MDX and am having trouble understanding how to perform an aggregation on a hierarchy level whose members have the same names. This query is particular to Microsoft Analysis Services 2000 cubes. I have a hierarchy dimension with levels defined as follows:

        [Segment].[Flow].[Segment Week]

    Within the [Segment Week] level, I have the following members:

        [Week- 1]
        [Week- 2]
        [Week- 3]
        ...
        [Week- 1]
        [Week- 2]
        [Week- 3]

    The members have the same names but are aligned with different [Flow] members in the parent level: the first occurrence of [Week- 1] falls under [Flow].[A], while the second occurrence of [Week- 1] falls under [Flow].[B]. What I am trying to do is aggregate all the members within the [Segment Week] level that have the same name. In SQL terms, I want to GROUP BY the member names within the [Segment Week] level. I am unsure how to do this. Thank you. Dave
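    One shape such a query can take, shown purely as a sketch (the cube and measure names are made up, and SSAS 2000 syntax details may differ): filter the level's members by name, then sum over the resulting set.

        WITH MEMBER [Measures].[Week 1 Total] AS
          'Sum(Filter([Segment].[Segment Week].Members,
                      [Segment].CurrentMember.Name = "Week- 1"),
               [Measures].[Unit Count])'
        SELECT {[Measures].[Week 1 Total]} ON COLUMNS
        FROM [MyCube]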

  • JJIL Android Java Problem

    - by Danny_E
    Hey guys, long-time reader, never posted until now. I'm having some trouble with Android. I'm using a library called JJIL, an open-source imaging library. My problem is this: I need to run some analysis on an image, and to do so I need to have it in jjil.core.Image format; once those processes are complete, I need to convert the changed image from jjil.core.Image to java.awt.Image. I can't seem to find a method for doing this. Does anyone have any ideas, or any experience with this? I would be grateful for any help. Danny

  • How to improve workflow for creating a Lua-based Wireshark dissector

    - by piyo
    I've finally created a dissector for my UDP protocol in Lua for Wireshark, but the workflow is just horrendous. It consists of editing my custom Lua file in my editor, then double-clicking my example capture file to launch Wireshark and see the changes. If there was an error, Wireshark informs me via dialogs or a red line in the tree-analysis sub-pane. I then re-edit my custom Lua file, close that Wireshark instance, and double-click my example capture file again. It's like compiling a C file and only seeing one compiler error at a time. Is there a better (faster) way of looking at my changes, without having to restart Wireshark all the time? At the time, I was using Wireshark 1.2.9 for Windows with Lua enabled.
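    One possible shortcut, assuming the capture and dissector live in the current directory: tshark, the console front end, accepts an ad-hoc Lua script via -X, so each run loads the script fresh and errors show up in the terminal without restarting the GUI.

        tshark -r example.pcap -X lua_script:my_dissector.lua -V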

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, only the interpretation of the effects of individual predictors. One way to spot collinearity is to take each predictor as a dependent variable with the other predictors as independent variables, determine R2, and if it's larger than .9 (or .95), consider that predictor redundant. This is one "method"... what about other approaches? Some of them are time-consuming, like excluding predictors from the model one at a time and watching for changes in the b-coefficients - they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
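    The R2-per-predictor screen described above is exactly what variance inflation factors package up: VIF_j = 1 / (1 - R2_j), so R2 > .9 corresponds to VIF > 10. A quick screening sketch in Python (the DataFrame of predictors is hypothetical):

        import pandas as pd
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        X = df[["x1", "x2", "x3"]]           # predictors only, from some DataFrame df
        vifs = {col: variance_inflation_factor(X.values, i)
                for i, col in enumerate(X.columns)}
        print(vifs)                          # flag anything above ~10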

  • Better way to make a bash script self-tracing?

    - by Kevin Little
    I have certain critical bash scripts that are invoked by code I don't control, and where I can't see their console output. I want a complete trace of what these scripts did for later analysis. To do this I want to make each script self-tracing. Here is what I am currently doing:

        #!/bin/bash

        # if last arg is not '_worker_', relaunch with stdout and stderr
        # redirected to my log file...
        if [[ "$BASH_ARGV" != "_worker_" ]]; then
            $0 "$@" _worker_ >>/some_log_file 2>&1   # add tee if console output wanted
            exit $?
        fi

        # rest of script follows...

    Is there a better, cleaner way to do this?
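    A common alternative sketch, assuming the same placeholder log path: redirect the running shell's own stdout/stderr once with exec, and optionally turn on xtrace so every command is logged, with no self-relaunch needed.

        #!/bin/bash

        # send everything this script prints to the log
        exec >>/some_log_file 2>&1

        # trace each command with a timestamp and source location
        export PS4='+ $(date +%T) ${BASH_SOURCE}:${LINENO}: '
        set -x

        # rest of script follows...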

  • How can I make keyword order more relevant in my search?

    - by Atomiton
    In my database, I have a keywords field that stores a comma-delimited list of keywords. For example, a Shrek doll might have the following keywords:

        ogre, green, plush, hero, boys' toys

    A Beanie Baby doll (that happens to be an ogre) might have:

        beanie baby, kids toys, beanbag toys, soft, infant, ogre

    (That's a completely contrived example.) What I'd like is that if the consumer searches for "ogre", the Shrek doll comes up higher in the search results. My content administrator feels that if the keyword is earlier in the list, it should get a higher ranking. (This makes sense to me, and it makes it easy for me to let them control the search-result relevance.) Here's a simplified query:

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , 100 AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )

    I'm thinking of something along the lines of replacing the Rank with:

        , 200 - INDEXOF(@SearchTerm) AS Rank

    This "should" rank the keyword results by their relevance. I know INDEXOF isn't a SQL command... but it's something like that I would like to accomplish. Am I approaching this the right way? Is it possible to do something like this? Does this make sense?
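    In T-SQL, CHARINDEX plays the role of INDEXOF, so one sketch of that idea (keeping the query above otherwise intact):

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , CASE WHEN CHARINDEX(@SearchTerm, p.ProductKeywords) > 0
                    THEN 200 - CHARINDEX(@SearchTerm, p.ProductKeywords)
                    ELSE 0
               END AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )
        ORDER BY Rank DESC

    Earlier keywords give a smaller CHARINDEX and hence a larger Rank; 0 means the term matched only via full-text, not literally in the list.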

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary, and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

    1. The user clicks a button.
    2. The server retrieves a list of images in the form of a data table.
    3. Each row in the table contains 8 data cells, each of which in turn contains one hyperlink.
    4. Each request by the user can return up to 50 rows (I can change this number if need be).

    That means the table contains upwards of 800 individual DOM elements. My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4 to 6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with massive numbers of DOM elements that need to be removed or replaced. I hope my explanation helps.
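    One sketch that sometimes helps here, assuming tableCloneObject is a plain DOM node and the old rows carry no jQuery event handlers or data (otherwise this leaks them): jQuery's empty() and replaceWith() walk every descendant to clean up data and events, while raw DOM replacement skips that bookkeeping entirely.

        // Swap the whole table in one DOM operation, bypassing jQuery's
        // per-element cleanup of the 800+ descendants.
        var table = document.getElementById("dataTable");
        table.parentNode.replaceChild(tableCloneObject, table);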

  • XNA or C# Pop-up progress bar for the LoadContent() method

    - by Warlax
    Hey people, we wrote a small game using Microsoft's XNA Game Studio 3.1. LoadContent() takes a long time because, besides loading models and config files, we're also running some one-time (per-run) terrain analysis. We are not C# or XNA programmers - we're Java programmers - and we want to give the user some feedback that the system is loading. Preferably, this would be a simple pop-up with a progress bar that says something like "loading, please wait". The progress bar doesn't have to go from 0 to 1; it can be one of those back-and-forth indeterminate bars. I was hoping for some quick copy-paste-ready code to do just that, as this is not a central piece of our project, nor do we want to delve into too much documentation. I appreciate your time, effort, and possible donation. Thanks.
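    A rough sketch of the usual XNA pattern (LoadEverything is a hypothetical stand-in for the slow work, and spriteBatch is the standard Game1 template field): run the slow loading on a background thread and let Draw() paint a "loading" message until it finishes. Caveat: in XNA 3.1, touching the GraphicsDevice from another thread is risky, so this fits best when the slow part is the terrain analysis rather than GPU resource creation.

        private volatile bool loaded;
        private SpriteFont loadingFont;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            loadingFont = Content.Load<SpriteFont>("LoadingFont");  // quick item first

            new System.Threading.Thread(() =>
            {
                LoadEverything();      // models, config files, terrain analysis
                loaded = true;
            }) { IsBackground = true }.Start();
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            if (!loaded)
            {
                spriteBatch.Begin();
                spriteBatch.DrawString(loadingFont, "Loading, please wait...",
                                       new Vector2(100, 100), Color.White);
                spriteBatch.End();
                return;
            }
            // ... normal drawing ...
            base.Draw(gameTime);
        }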

  • Merge overlapping date intervals

    - by leoinfo
    Is there a better way of merging overlapping date intervals? The solution I came up with is so simple that now I wonder if someone else has a better idea of how this could be done.

        /***** DATA EXAMPLE *****/
        DECLARE @T TABLE (d1 DATETIME, d2 DATETIME)
        INSERT INTO @T (d1, d2)
        SELECT '2010-01-01','2010-03-31' UNION
        SELECT '2010-04-01','2010-05-31' UNION
        SELECT '2010-06-15','2010-06-25' UNION
        SELECT '2010-06-26','2010-07-10' UNION
        SELECT '2010-08-01','2010-08-05' UNION
        SELECT '2010-08-01','2010-08-09' UNION
        SELECT '2010-08-02','2010-08-07' UNION
        SELECT '2010-08-08','2010-08-08' UNION
        SELECT '2010-08-09','2010-08-12' UNION
        SELECT '2010-07-04','2010-08-16' UNION
        SELECT '2010-11-01','2010-12-31' UNION
        SELECT '2010-03-01','2010-06-13'

        /***** INTERVAL ANALYSIS *****/
        WHILE (1=1)
        BEGIN
            UPDATE t1
            SET t1.d2 = t2.d2
            FROM @T AS t1
            INNER JOIN @T AS t2
                ON DATEADD(day, 1, t1.d2) BETWEEN t2.d1 AND t2.d2
                -- AND t1.d2 <= t2.d2  /***** this condition is useless *****/
            IF @@ROWCOUNT = 0 BREAK
        END

        /***** RESULT *****/
        SELECT StartDate = MIN(d1)
             , EndDate = d2
        FROM @T
        GROUP BY d2
        ORDER BY StartDate, EndDate

        /***** OUTPUT *****/
        /*****
        StartDate   EndDate
        2010-01-01  2010-06-13
        2010-06-15  2010-08-16
        2010-11-01  2010-12-31
        *****/

    EDIT: I realized that the t1.d2 <= t2.d2 condition is not really useful.

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically, affinity) when using TBB? Doing a high-level analysis of a simple parallel-for application, it seems that TBB is setting the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even when a different core is left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.
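    For experimenting, TBB does expose a hook where you can pin threads yourself: a task_scheduler_observer's on_scheduler_entry callback runs in each worker thread as it joins the pool. A Linux-flavored C++ sketch (the round-robin policy is just an example, not a recommendation):

        #include <tbb/task_scheduler_observer.h>
        #include <pthread.h>
        #include <sched.h>
        #include <unistd.h>

        class PinningObserver : public tbb::task_scheduler_observer {
        public:
            PinningObserver() { observe(true); }   // start receiving callbacks

            /*override*/ void on_scheduler_entry(bool /*is_worker*/) {
                // Pin the calling thread to the next core, round-robin.
                static volatile long next = 0;
                long cpu = __sync_fetch_and_add(&next, 1)
                           % sysconf(_SC_NPROCESSORS_ONLN);
                cpu_set_t mask;
                CPU_ZERO(&mask);
                CPU_SET(cpu, &mask);
                pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
            }
        };

        // Instantiate one observer before the parallel_for runs:
        // PinningObserver pin;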
