Search Results

Search found 2347 results on 94 pages for 'slightly frustrated'.


  • How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    - by PeterDC
    Background: I'm a 3D artist (as a hobby) and have recently started using Ubuntu 12.04 LTS as a dual-boot with Windows 7. It's running on a fairly new 64-bit Toshiba laptop with an nVidia GeForce GT 540M GPU (graphics card). It also, however, has Intel Integrated Graphics (which I suspect Ubuntu has been using). So, when I render my 3D scenes to images on Windows, I am able to choose between using my CPU or my nVidia GPU (faster). From the 3D application, I can set the GPU to use either CUDA or OpenCL. In Ubuntu, there's no GPU option. After doing (too much?) research on the issues with Linux and the nVidia Optimus technology, I am slightly more enlightened, but a lot more confused. I don't care one bit about the Optimus technology, as battery life is not by any means an issue for me. Here's my question: What can I do to be able to use CUDA-utilizing programs (such as Blender) on my nVidia GPU in Ubuntu? Will I need nVidia drivers? (I have heard they don't play nicely with Optimus setups on Linux.) Is there at least a way to use OpenCL on my GPU in Ubuntu?
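
    Around the 12.04 timeframe, the usual route for Optimus laptops was the Bumblebee project rather than the stock nVidia driver alone. A rough sketch of the steps – the PPA and package names here are from memory, so treat them as assumptions and check the Bumblebee documentation before running anything:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # run a program on the discrete nVidia GPU instead of the Intel IGP
        optirun blender

    Whether CUDA works under optirun depends on which nVidia driver version Bumblebee pulls in, so this is a starting point rather than a guarantee.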

    Read the article

  • Ubuntu 12.04 gets overheated, whereas I have no heating problems on Windows 7 whatsoever

    - by G K
    I bought a new laptop which came pre-installed with Windows 7. I love working on Ubuntu and hence installed it. I can work on Windows for 6 hours at a stretch and the laptop only gets slightly warm, but 15 minutes into running Ubuntu my laptop is too hot. The battery also drains very quickly on Ubuntu: about 1.5 hours of battery backup compared to 5-6 hours on Windows. I previously owned a Dell Inspiron N5010 and everything ran smoothly on that, with no heating issues. It came with an Intel i3 processor, so I'm wondering whether this problem has something to do with the processor (AMD A8)? Specs: HP Pavilion G6-2005AX Laptop (APU Quad Core A8/ 4GB/ 500GB/ Win7 HB/ 1.5GB Graph), 1 GB AMD Radeon HD 7670M dedicated, 512 MB AMD Radeon HD 7640G integrated graphics. Is there any fix for this problem? Thanks in advance. Edit: I've installed the ATI proprietary drivers suggested by Ubuntu.

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute or a column of your data set, and it has a child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node have a list (array, vector, etc.) of the column/row pairs (probably tuples) that are valid for that node. I know there should be a better way to do this, any suggestions? UPDATE: What I need is something like this. In the beginning I have this data:

        Paris     4 5.0 True
        New York  7 1.3 True
        Tokio     2 9.1 False
        Paris     9 6.8 True
        Tokio     0 8.4 False

    But for the second node I just need this data:

        Paris     4 5.0
        New York  7 1.3
        Paris     9 6.8

    And for the third node:

        Tokio     2 9.1
        Tokio     0 8.4

    But with a table of millions of records with up to hundreds of columns. What I have in mind is to keep all the data in a matrix, and then for each node keep the info of the current columns and rows. Something like this:

        Paris     4 5.0 True
        New York  7 1.3 True
        Tokio     2 9.1 False
        Paris     9 6.8 True
        Tokio     0 8.4 False

        Node 2: columns = [0,1,2]  rows = [0,1,3]
        Node 3: columns = [0,1,2]  rows = [2,4]

    This way, in the worst-case scenario, I just have to waste sizeof(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
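
    A minimal C++ sketch of that index-based layout (the names and the use of double cells are assumptions for illustration; the real table would hold mixed types):

        #include <cstddef>
        #include <memory>
        #include <vector>

        // One shared, read-only copy of the training data.
        struct DataSet {
            std::vector<std::vector<double>> cells;        // cells[row][col]
        };

        // Each node only stores indices into the shared matrix, never the cells themselves.
        struct Node {
            std::vector<std::size_t> rows;                 // rows still valid at this node
            std::vector<std::size_t> cols;                 // attributes not yet consumed
            std::vector<std::unique_ptr<Node>> children;
        };

        // Building a child is just filtering the parent's row indices; no cell data is copied.
        Node makeChild(const DataSet& data, const Node& parent,
                       std::size_t splitCol, double splitValue) {
            Node child;
            child.cols = parent.cols;                      // optionally drop splitCol here
            for (std::size_t r : parent.rows)
                if (data.cells[r][splitCol] == splitValue)
                    child.rows.push_back(r);
            return child;
        }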

    Read the article

  • Upgraded from 11.10 to 12.04 now no network access

    - by MadeTheLeap
    A few weeks ago I decided I should enter the Linux world and read that Ubuntu is the most widely used release. I installed version 11.10 and it worked perfectly. Just this past week I decided I would do the upgrade to 12.04. The upgrade process itself worked fine. However, when I logged in I no longer had a network connection. I am running an AMD-based PC with a D-LINK DFE-530TXS network card and, as I said, it worked fine in 11.10. I have scoured the Internet and come across a thousand slightly varying solutions, but they are too convoluted for someone new to Linux. Not because I can't follow the steps, but because most of the tools/utilities that are referenced (e.g. to compile, install, etc.) are not available when I follow the stated steps in the solutions. So... should I re-install 11.10, or is there hope of getting this version to use the NIC that I know works? I have the latest driver from D-Link for my NIC but I have no idea how to 'install' it for Ubuntu 12.04 to use. I know you will require additional information, but I wasn't sure what you would need. Thanks in advance.

    Read the article

  • Drawing chunks, and positioning the camera

    - by Troubleshoot
    I've seen many questions and answers regarding how to draw tiled maps but I can't really get my head around it. Many answers suggest either loading the visible part of the map, or loading and unloading chunks of the map. I've decided the best option would be to load chunks, but I'm slightly confused as to how this would be implemented. Currently I'm loading the full map into a 2D array of buffered images, then drawing it every time repaint is called. Q1: If I were to load chunks of the map, would I load the map as a whole and then draw the necessary chunk(s), or load & unload the chunks as the player moves along, and if so, how? My second question regards the camera. I want the player to be in the centre of the X axis and the camera to follow it. I've thought of drawing everything in relation to the map and calculating the position of the camera in relation to the player's coordinates on the map. So, to calculate the camera's X position I understand that I should use cameraX = playerX - (canvasWidth/2), but how should I calculate the Y position? I want the camera to only move up when the player reaches cameraHeight/2 but to move down when the player reaches 3/4(cameraHeight). Q2: Should I check for this in the same way I check for collision, and move the camera relative to the movement of the player until the player stops moving, or am I thinking about it in the wrong way?
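
    A rough sketch of those camera rules (Java-style; the field names are made up):

        // assumed fields: cameraX, cameraY, canvasWidth, cameraHeight (ints, in world units)
        private void updateCamera(int playerX, int playerY) {
            // X: keep the player horizontally centred
            cameraX = playerX - canvasWidth / 2;

            // Y: only follow when the player leaves the band between 1/2 and 3/4 of the view height
            int playerScreenY = playerY - cameraY;
            if (playerScreenY < cameraHeight / 2) {
                cameraY = playerY - cameraHeight / 2;           // player too high: camera moves up
            } else if (playerScreenY > cameraHeight * 3 / 4) {
                cameraY = playerY - cameraHeight * 3 / 4;       // player too low: camera moves down
            }
            // everything visible is then drawn at (worldX - cameraX, worldY - cameraY)
        }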

    Read the article

  • Docking CDialogBar Horizontally with CToolbar [migrated]

    - by PSU
    I need to display a CToolbar (m_wndToolBar) and a CDialogBar (m_wndDlgBarSid1) horizontally (i.e. next to each other, not above one another). The parent frame is derived from CMDIFrameWnd. I've tried all sorts of variations to get this to work. While I can properly position the CDialogBar to the right of the CToolbar, I cannot persist the positioning, although the WINDOWPLACEMENT mechanism is working correctly (the registry is written on program exit); whenever the program is run, the CToolbar shows up docked left, and the CDialogBar shows up below it, also docked left. I'm using (perforce) MFC and Visual C++ 6.0. Here's the code, slightly redacted to remove debug printouts and the like:

        int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
        {
            if (CMDIFrameWnd::OnCreate(lpCreateStruct) == -1) {
                return -1;
            }
            if (!m_wndToolBar.Create(this) || !m_wndToolBar.LoadToolBar(IDR_MAINFRAME)) {
                return -1;      // fail to create
            }
            if (!m_wndDlgBarSid1.Create(this, IDD_DIALOGBAR_SID1, CBRS_ALIGN_TOP, AFX_IDW_DIALOGBAR)) {
                return -1;      // fail to create
            }

            WINDOWPLACEMENT wp;
            CString sSection = "DialogBarSettings";
            CString sEntry = "Sid1";
            if (ReadWindowPlacement(&wp, sSection, sEntry)) {
                BOOL bSWP = m_wndDlgBarSid1.SetWindowPlacement(&wp);
                RecalcLayout();
            }

            m_wndToolBar.SetBarStyle(m_wndToolBar.GetBarStyle() | CBRS_TOOLTIPS | CBRS_FLYBY | CBRS_SIZE_DYNAMIC);
            m_wndToolBar.GetToolBarCtrl().ModifyStyle(0, TBSTYLE_FLAT, 0);
            m_wndDlgBarSid1.SetBarStyle(m_wndToolBar.GetBarStyle() | CBRS_SIZE_DYNAMIC | CBRS_TOP |
                                        CBRS_GRIPPER | CBRS_TOOLTIPS | CBRS_FLYBY);

            m_wndToolBar.EnableDocking(CBRS_ALIGN_ANY);
            EnableDocking(CBRS_ALIGN_ANY);
            DockControlBar(&m_wndToolBar);
            m_wndDlgBarSid1.EnableDocking(CBRS_ALIGN_TOP | CBRS_ALIGN_BOTTOM);
            DockControlBar(&m_wndDlgBarSid1, AFX_IDW_DOCKBAR_TOP);

            return 0;
        }

    Any thoughts?
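
    One technique that is often suggested for same-row docking (it is not in the code above, so treat it as a sketch) is to dock the second bar into the rectangle occupied by the first. For persisting the arrangement between runs, CFrameWnd::SaveBarState()/LoadBarState() is usually a better fit for control bars than WINDOWPLACEMENT:

        void CMainFrame::DockControlBarLeftOf(CControlBar* pBar, CControlBar* pLeftOf)
        {
            CRect rect;
            RecalcLayout(TRUE);                    // make sure pLeftOf has its final rectangle
            pLeftOf->GetWindowRect(&rect);
            rect.OffsetRect(1, 0);                 // nudge right so pBar docks after pLeftOf

            DWORD dw = pLeftOf->GetBarStyle();
            UINT n = 0;
            n = (dw & CBRS_ALIGN_TOP)            ? AFX_IDW_DOCKBAR_TOP    : n;
            n = ((dw & CBRS_ALIGN_BOTTOM) && !n) ? AFX_IDW_DOCKBAR_BOTTOM : n;
            n = ((dw & CBRS_ALIGN_LEFT)   && !n) ? AFX_IDW_DOCKBAR_LEFT   : n;
            n = ((dw & CBRS_ALIGN_RIGHT)  && !n) ? AFX_IDW_DOCKBAR_RIGHT  : n;

            DockControlBar(pBar, n, &rect);        // same dock bar, same row, to the right
        }

        // in OnCreate, instead of the two separate DockControlBar calls:
        //   DockControlBar(&m_wndToolBar);
        //   DockControlBarLeftOf(&m_wndDlgBarSid1, &m_wndToolBar);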

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are ones that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot selling items?

    One of my customers tested the below example on a 24 core server with various MAXDOP settings and here are the results:

        MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores

    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than the serial query. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected]. Well, you get the point, so let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes.

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show that the rows are unevenly distributed over the 4 threads, and there are Sort Warnings in SQL Server Profiler.

    Intermediate Summary: The reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes.

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show that the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate Summary: When employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster, since it can use Bitmap Filtering. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with a limited set of Zip codes.

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
              from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. There are Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes.

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
             inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see that the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel queries bring their own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General
    Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I'm going to cover briefly in this post: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort and Heapsort (not complete).

    Selection Sort
    Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this: go through the entire array of data and find the lowest value, then place that value at the front of the array. The second iteration would go something like this: go through the array from position two (position one has already been sorted with the smallest value), find the next lowest value in the array, and place it at the second position in the array. This process is repeated until the entire array has been sorted. A positive about selection sort is that it does not make many item movements; in fact, in the worst-case scenario every item is only moved once. Selection sort is however a comparison-intensive sort. If you had 10 items in a collection, just to parse the collection you would have 10+9+8+7+6+5+4+3+2=54 comparisons, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection that was already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted, leaving one with a best-case and a worst-case scenario for comparisons. Likewise, different approaches have different levels of item movement. Depending on what is more expensive – a comparison or an item move – one may give priority to one approach over the other.

    Insertion Sort
    Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" if things are already sorted. Assume you had a collection of numbers in the following order:

        10 18 25 30 23 17 45 35

    There are 8 elements in the list. If we start at the front of the list, 10, 18, 25 & 30 are already sorted. Element 5 (23), however, is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder and then shifting the elements before it up by one. So:

        Element 5 is copied to a temporary holder:   10 18 25 30 23 17 45 35 – T 23
        Element 4 shifts to element 5:               10 18 25 30 30 17 45 35 – T 23
        Element 3 shifts to element 4:               10 18 25 25 30 17 45 35 – T 23
        Element 2 (18) is smaller than the temporary holder,
        so we put the temporary holder value into element 3:
                                                     10 18 23 25 30 17 45 35 – T 23

    We now have a sorted list up to element 5, and so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
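
    To make the mechanics concrete, here is a minimal C++ sketch of the array-based insertion sort just described (illustrative only):

        #include <vector>

        void insertionSort(std::vector<int>& a) {
            for (std::size_t i = 1; i < a.size(); ++i) {
                int value = a[i];                  // the "temporary holder"
                std::size_t j = i;
                // shift larger elements up one position
                while (j > 0 && a[j - 1] > value) {
                    a[j] = a[j - 1];
                    --j;
                }
                a[j] = value;                      // drop the held value into its slot
            }
        }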
    As you can see, one major setback for this technique is shifting the values up by one – this is because up to now we have been considering the collection to be an array. If however the collection were a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and "reinsert" it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical of most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n comparisons would need to be performed to verify that it is sorted. It's important to note that insertion sort (array based) performs a number of item moves – every time an item is "out of place", several items before it get shifted up.

    Shellsort – Diminishing Increment Sort
    So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements:

        10 20 15 45 36 48 7 60 18 50 2 19 43 30 55

    First we may view the collection as 7 sub-collections and sort each sublist, let's say at intervals of 7:

        10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30
        10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted)

    We then sort each sublist at a smaller interval – let's say 4:

        10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30
        10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted)

    We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort):

        10 18 55 60 2 15 20 50 19 36 43 45 7 30 48
        2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (Sorted)

    The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn't any definitive method, and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even-based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort in terms of the number of comparisons and item movements is hard to determine, however it is considered to be considerably better than the normal insertion sort.

    Quicksort
    Quicksort uses a divide-and-conquer approach to sort a collection of items. The collection is divided into two sub-collections, and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The algorithm in general pseudo code is below:
        Divide the collection into two sub-collections
        Quicksort the lower sub-collection
        Quicksort the upper sub-collection
        Combine the lower & upper sub-collections together
    As hinted at above, quicksort uses recursion in its implementation.
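
    The gapped version of that same insertion step gives Shellsort; a small C++ sketch using Knuth's 1, 4, 13, 40, … increment sequence (one choice among many, avoiding the purely even sequences mentioned above):

        #include <vector>

        void shellSort(std::vector<int>& a) {
            std::size_t n = a.size();
            std::size_t gap = 1;
            while (gap < n / 3) gap = 3 * gap + 1;     // Knuth's 1, 4, 13, 40, ... sequence
            for (; gap > 0; gap /= 3) {
                // gapped insertion sort: each pass sorts the gap-spaced sub-collections
                for (std::size_t i = gap; i < n; ++i) {
                    int value = a[i];
                    std::size_t j = i;
                    while (j >= gap && a[j - gap] > value) {
                        a[j] = a[j - gap];
                        j -= gap;
                    }
                    a[j] = value;
                }
            }
        }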
    The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one partitions into sub-collections and then repeats the process on each sub-collection until the base case is reached. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple, since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection.

    Mergesort
    With quicksort, the average-case complexity was O(n log2 n), however the worst-case complexity was still O(n*n). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide-and-conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows:
        Divide the collection into two sub-collections
        Mergesort the first sub-collection
        Mergesort the second sub-collection
        Merge the first sub-collection and the second sub-collection
    As you can see, it still pretty much looks like quicksort – so let's see where it differs. Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot, mergesort partitions each sub-collection based on size, so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is how we put all these sub-collections back together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let's have a look at a few examples.

    Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smaller of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20.

    Let's assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So, again we would:
        Compare 10 with 15 – 10 is the winner, so we add it to our sorted collection (10), leaving us with 20 & 15 19
        Compare 20 with 15 – 15 is the winner, so we add it to our sorted collection (10 15), leaving us with 20 & 19
        Compare 20 with 19 – 19 is the winner, so we add it to our sorted collection (10 15 19), leaving us with 20 & _
        20 is by default the winner, so our sorted collection is 10 15 19 20.
    Make sense?

    Heapsort (still needs to be completed)
    So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn't we just skip to Mergesort & Quicksort?
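
    The merge step described above, as a small C++ sketch:

        #include <vector>

        // Merge two already-sorted sub-collections into one sorted collection.
        std::vector<int> merge(const std::vector<int>& lower, const std::vector<int>& upper) {
            std::vector<int> sorted;
            sorted.reserve(lower.size() + upper.size());
            std::size_t i = 0, j = 0;
            while (i < lower.size() && j < upper.size()) {
                // compare the front of each sub-collection and take the smaller
                if (lower[i] <= upper[j]) sorted.push_back(lower[i++]);
                else                      sorted.push_back(upper[j++]);
            }
            // one side is exhausted; the remainder of the other side is already sorted
            while (i < lower.size()) sorted.push_back(lower[i++]);
            while (j < upper.size()) sorted.push_back(upper[j++]);
            return sorted;
        }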
    I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn't the best way of implementing things and that you don't need to implement it in future. Anyhow, luckily this is going to be the last of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins. So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: a heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists). Does that make sense? At first glance I'm thinking "what the heck???", but then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way. Although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it must be "semi sorted". Let me try and explain a bit further with an example.

    Example 1 of potential heap data: assume we had a collection of numbers as follows:

        1[1] 2[2] 3[3] 4[4] 5[5] 6[6]

    For this to be a valid heap, the element with the value 1 at position [1] needs to be greater than or equal to the element at position [3] (2k + 1) and position [4] (2k + 2). So in the above example, the collection of numbers is not a valid heap.

    Example 2 of potential heap data: let's look at another collection of numbers:

        6[1] 5[2] 4[3] 3[4] 2[5] 1[6]

    Is this a valid heap? Well, the element with the value 6 at position 1 must be greater than or equal to the elements at position [3] and position [4]. Is 6 > 4 and 6 > 3? Yes it is. Let's look at element 5 at position 2. It must be greater than the values at [4] & [5]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
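
    For what it's worth, the heap check is easy to express in code if the collection is treated as a 0-based array, where the children of index k sit at 2k + 1 and 2k + 2 – a small illustrative sketch:

        #include <vector>

        bool isHeap(const std::vector<int>& a) {
            for (std::size_t k = 0; k < a.size(); ++k) {
                std::size_t left = 2 * k + 1, right = 2 * k + 2;
                if (left  < a.size() && a[k] < a[left])  return false;   // parent smaller than left child
                if (right < a.size() && a[k] < a[right]) return false;   // parent smaller than right child
            }
            return true;
        }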

    Read the article

  • UISplitViewController. Can we hide/show the master view?

    - by dugla
    I would like to use a UISplitViewController in a slightly different way than is common. Because my iPad app is a fullscreen app, I would like the ability to hide/show the master view when in landscape mode. Portrait view is not an issue since the master view turns into a popover which can be hidden via the toolbar button. Is there a method on UISplitViewController that will nicely hide the master view and expand the detail view? Any insight would be most appreciated. Cheers. Thanks, Doug

    Read the article

  • Performance of Silverlight Datagrid in Silverlight 3 vs Silverlight 4 on a mac

    - by Simon
    I'm using Silverlight 4 Beta for a LOB application. After finding out today that I'll have to wait perhaps 4 months to be able to develop with SL4 on Visual Studio 2010, I'm thinking I need to downgrade my application to SL3, but that's another question. The problem is I'm noticing absolutely abysmal performance for simple datagrids that work just fine on a PC when I'm running on a Mac. These grids contain only 5-10 columns and maybe 50 rows. Paging up and down sometimes takes about 1-2 seconds. I would appreciate anybody's experience in which of the following is the best solution: reverting to Silverlight 3 and hoping the DataGrid is faster; switching to a 3rd-party datagrid such as Telerik; or forgetting Silverlight altogether. I was hoping that the SL4 runtime might be updated, but that probably won't happen for 3-4 months. Just a reminder - this is specifically a Mac issue. Performance on my PC, while slightly slow to populate the grid initially, is fine.

    Read the article

  • How to disable floating tabs in Visual Studio 2010

    - by md1337
    I now use the new Visual Studio 2010 and I experience something very annoying that wasn't happening before with Visual Studio 2008. Something changed with the way it handles the floating of tabs and I can't stand it. Every once in a while, I would somehow trigger the floating of a tab instead of just switching to it. It may have to do with the way I click (maybe a very fast double click gets sent), or maybe I very slightly drag the mouse when clicking the tab. I don't know. All I know is that I was fine with Visual Studio 2008. Is there a way to disable this somewhere? I want to either un-register the double click as a floating tab trigger, or remove the floating option altogether. How can I do that? Thanks.

    Read the article

  • WPF FlowDocument - Absolute Character Position

    - by Alan Spark
    I have a WPF RichTextBox that I am typing some text into and then parsing the whole of the text to do processing on. During this parse, I have the absolute character positions of the start and end of each word. I would like to use these character positions to apply formatting to certain words. However, I have discovered that the FlowDocument uses TextPointer instances to mark positions in the document. I have found that I can create a TextRange by constructing it with start and end pointers. Once I have the TextRange I can easily apply formatting to the text within it. I have been using GetPositionAtOffset to get a TextPointer for my character offset but suspect that its offset is different from mine because the selected text is in a slightly different position from what I expect. My question is, how can I accurately convert an absolute character position to a TextPointer?
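
    For what it's worth, one common source of that mismatch is that GetPositionAtOffset counts symbols (element tags, embedded objects) rather than plain characters. A sketch of mapping a character offset to a TextPointer by walking only the text runs (the method name is made up):

        // assumes: using System.Windows.Documents;
        TextPointer GetPointerAtCharOffset(FlowDocument doc, int charOffset)
        {
            TextPointer navigator = doc.ContentStart;
            int counted = 0;
            while (navigator != null)
            {
                if (navigator.GetPointerContext(LogicalDirection.Forward) == TextPointerContext.Text)
                {
                    string run = navigator.GetTextInRun(LogicalDirection.Forward);
                    if (counted + run.Length >= charOffset)
                        return navigator.GetPositionAtOffset(charOffset - counted);   // inside this run
                    counted += run.Length;
                }
                navigator = navigator.GetNextContextPosition(LogicalDirection.Forward);
            }
            return doc.ContentEnd;    // offset lies past the end of the document
        }

        // usage sketch:
        // var range = new TextRange(GetPointerAtCharOffset(doc, wordStart),
        //                           GetPointerAtCharOffset(doc, wordEnd));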

    Read the article

  • Adding array of images to Firebase using AngularFire

    - by user2833143
    I'm trying to allow users to upload images and then store the images, base64 encoded, in Firebase. I'm trying to make my Firebase structured as follows:

        |--Feed
        |----Feed Item
        |------username
        |------epic
        |---------name, etc.
        |------images
        |---------image1, image2, etc.

    However, I can't get the remote data in Firebase to mirror the local data in the client. When I print the array of images to the console in the client, it shows that the uploaded images have been added to the array of images... but these images never make it to Firebase. I've tried doing this multiple ways to no avail. I tried using implicit syncing, explicit syncing, and a mixture of both. I can't for the life of me figure out why this isn't working and I'm getting pretty frustrated. Here's my code:

        $scope.complete = function(epicName){
          for (var i = 0; i < $scope.activeEpics.length; i++){
            if($scope.activeEpics[i].id === epicName){
              var epicToAdd = $scope.activeEpics[i];
            }
          }
          var epicToAddToFeed = {epic: epicToAdd, username: $scope.currUser.username,
                                 userImage: $scope.currUser.image, props: 0, images: ['empty']};
          //connect to feed data
          var feedUrl = "https://epicly.firebaseio.com/feed";
          $scope.feed = angularFireCollection(new Firebase(feedUrl));
          //add epic
          var added = $scope.feed.add(epicToAddToFeed).name();
          //connect to added epic in firebase
          var addedUrl = "https://epicly.firebaseio.com/feed/" + added;
          var addedRef = new Firebase(addedUrl);
          angularFire(addedRef, $scope, 'added').then(function(){
            // for each image uploaded, add image to the added epic's array of images
            for (var i = 0, f; f = $scope.files[i]; i++) {
              var reader = new FileReader();
              reader.onload = (function(theFile) {
                return function(e) {
                  var filePayload = e.target.result;
                  $scope.added.images.push(filePayload);
                };
              })(f);
              reader.readAsDataURL(f);
            }
          });
        }

    EDIT: Figured it out, had to connect to "https://epicly.firebaseio.com/feed/" + added + "/images"

    Read the article

  • C: WinAPI CreateDIBitmap() from byte[] problem

    - by zipcodeman
    I have been working on this problem for a while now. I am trying to add JPEG support to a program with libjpeg. For the most part, it is working fairly well, But for some JPEGs, they show up like the picture on the left. It may not be obvious, but the background shows up with alternating red green and blue rows. If anyone has seen this behavior before and knows a probable cause, I would appreciate any input. I have padded the rows to be multiples of four bytes, and it only slightly helped the issue.
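
    Two usual suspects when copying libjpeg output into a DIB are the channel order and the per-row stride: libjpeg hands back R,G,B while a 24-bit DIB expects B,G,R, and every DIB row must be padded to a multiple of 4 bytes. A sketch of the per-scanline copy (the buffer names are assumptions):

        /* src: one libjpeg scanline (RGB); dst: the matching DIB row (BGR) */
        unsigned int stride = ((width * 3) + 3) & ~3u;   /* DIB rows are 4-byte aligned */
        unsigned int x;
        for (x = 0; x < width; ++x) {
            dst[x * 3 + 0] = src[x * 3 + 2];             /* B */
            dst[x * 3 + 1] = src[x * 3 + 1];             /* G */
            dst[x * 3 + 2] = src[x * 3 + 0];             /* R */
        }
        /* advance dst by stride (not width * 3) between scanlines, and remember that a
           positive biHeight means the DIB is stored bottom-up */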

    Read the article

  • Scrollbar problem with jquery ui dialog in Chrome and Safari

    - by alexis.kennedy
    I'm using the jquery ui dialog with modal=true. In Chrome and Safari, this disables scrolling via the scroll bar and cursor keys (scrolling with the mouse wheel and page up/down still works). This is a problem if the dialog is too tall to fit on one page - users on a laptop get frustrated. Someone raised this three months ago on the jquery bug tracker - http://dev.jqueryui.com/ticket/4671 - it doesn't look like fixing it is a priority. :) So does anyone (i) have a fix for this? (ii) have a suggested workaround that would give a decent usability experience? I'm experimenting with mouseover / scrollto on bits of the form, but it's not a great solution :( EDIT: props to Rowan Beentje (who's not on SO afaict) for finding a solution to this. jQueryUI prevents scrolling by capturing the mouseup / mousedown events. So this:

        $("dialogId").dialog({
            open: function(event, ui) {
                window.setTimeout(function() {
                    jQuery(document)
                        .unbind('mousedown.dialog-overlay')
                        .unbind('mouseup.dialog-overlay');
                }, 100);
            },
            modal: true
        });

    seems to fix it. Use at own risk, I don't know what other unmodal behaviour unbinding this stuff might allow.

    Read the article

  • opencv fails to open avi with ffdshow codec

    - by Seb
    Hey everyone, I am currently using OpenCV to try and open an AVI file that was made using ffdshow. The program manages to open the video file and play it; however, the video is in black and white and is slightly skewed. VLC and Windows Media Player can run it fine. Is there anything I am able to do to install the ffdshow codec into OpenCV, or do I have to convert each file that used ffdshow into an appropriate OpenCV codec format? Thank you in advance for your help.

    Read the article

  • Good examples of MVVM Template

    - by jwarzech
    I am currently working with the Microsoft MVVM template and find the lack of detailed examples frustrating. The included ContactBook example shows very little Command handling, and the only other example I've found is from an MSDN Magazine article where the concepts are similar but the approach is slightly different and still lacks any real complexity. Are there any decent MVVM examples that at least show basic CRUD operations and dialog/content switching? Everyone's suggestions were really useful and I will start compiling a list of good resources.

        Frameworks/Templates: WPF Model-View-ViewModel Toolkit, MVVM Light Toolkit, Prism, Caliburn, Cinch

        Useful Articles: Data Validation in .NET 3.5; Using a ViewModel to Provide Meaningful Validation Error Messages; Action based ViewModel and Model validation; Dialogs; Command Bindings in MVVM; More than just MVC for WPF; MVVM + Mediator Example Application

        Additional Libraries: WPF Disciples' improved Mediator Pattern implementation (I highly recommend this for applications that have more complex navigation)

    Read the article

  • Async friendly DispatcherTimer wrapper/subclass

    - by Simon_Weaver
    I have a DispatcherTimer running in my code that fires every 30 seconds to update system status from the server. The timer fires in the client even if I'm debugging my server code, so if I've been debugging for 5 minutes I may end up with a dozen timeouts in the client. I finally decided I needed to fix this, so I'm looking to make a more async/await friendly DispatcherTimer. Code running in the DispatcherTimer must be configurable as to whether it is reentrant or not (i.e. if the task is already running it should not try to run it again). It should be task based (whether or not this requires I actually expose Task at the root is a gray area). It should be able to run async code and await on tasks to complete. Whether it wraps or extends DispatcherTimer probably doesn't really matter, but wrapping it may be slightly less ambiguous if you don't know how to use it. It could possibly expose bindable properties such as IsRunning for the UI.
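
    A minimal sketch along those lines (the class, property and callback names are made up):

        // assumes: using System; using System.Threading.Tasks; using System.Windows.Threading;
        public sealed class AsyncDispatcherTimer
        {
            private readonly DispatcherTimer _timer = new DispatcherTimer();
            private readonly Func<Task> _tick;

            public bool AllowReentrancy { get; set; }     // configurable reentrancy
            public bool IsRunning { get; private set; }   // could be made bindable via INotifyPropertyChanged

            public AsyncDispatcherTimer(TimeSpan interval, Func<Task> tick)
            {
                _tick = tick;
                _timer.Interval = interval;
                _timer.Tick += async (s, e) =>
                {
                    if (IsRunning && !AllowReentrancy) return;   // previous run still in flight: skip this tick
                    IsRunning = true;
                    try { await _tick(); }                       // await the async work on the dispatcher
                    finally { IsRunning = false; }
                };
            }

            public void Start() { _timer.Start(); }
            public void Stop() { _timer.Stop(); }
        }

        // usage sketch:
        // var statusTimer = new AsyncDispatcherTimer(TimeSpan.FromSeconds(30),
        //                       () => UpdateSystemStatusAsync());
        // statusTimer.Start();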

    Read the article

  • How to record an iPad screencast

    - by hgpc
    How do you record an iPad screencast at full scale? I have an iMac with maximum resolution 1680x1050 and the simulator doesn't fit the screen in portrait orientation. It does fit in landscape orientation. Reducing the scale to 50% is not an option because the end result is too small. If the scale could be reduced slightly it would be fine, but not 50%. Is it possible to put the simulator in landscape orientation and still keep the app in portrait mode? Then I could simply rotate the resulting video to get a portrait screencast.

    Read the article

  • ExtJS: Ext.data.DataReader: #realize was called with invalid remote-data

    - by TomH
    I'm receiving a "Ext.data.DataReader: #realize was called with invalid remote-data" error when I create a new record via a POST request. Although similar to the discussion at this SO conversation, my situation is slightly different: My server returns the pk of the new record and additional information that is to be associated with the new record in the grid. My server returns the following: {'success':true,'message':'Created Quote','data': [{'id':'610'}, {'quoteNumber':'1'}]} Where id is the PK for the record in the mysql database. quoteNumber is a db generated value that needs to be added to the created record. Other relevant bits: var quoteRecord = Ext.data.Record.create([{name:'id', type:'int'},{name:'quoteNumber', type:'int'},{name:'slideID'}, {name:'speaker'},{name:'quote'}, {name:'metadataID'}, {name:'priorityID'}]); var quoteWriter = new Ext.data.JsonWriter({ writeAllFields:false, encode:true }); var quoteReader = new Ext.data.JsonReader({id:'id', root:'data',totalProperty: 'totalitems', successProperty: 'success',messageProperty: 'message',idProperty:'id'}, quoteRecord); I'm stumped. Anyone?? thanks tom

    Read the article

  • Wrapping/warping a CALayer/UIView (or OpenGL) in 3D (iPhone)

    - by jbrennan
    I've got a UIView (and thus a CALayer) which I'm trying to warp or bend slightly in 3D space. That is, imagine my UIView is a flat label which I want to partially wrap around a beer bottle (not 360 degrees around, just on one "side"). I figured this would be possible by applying a transform to the view's layer, but as far as I can tell, this transform is limited to rotation, scale and translation of the layer uniformly. I could be wrong here, as my linear algebra is foggy at this point, to say the least. How can I achieve this?

    Read the article

  • Controlling FPU behavior in an OpenMP program?

    - by STingRaySC
    I have a large C++ program that modifies the FPU control word (using _controlfp()). It unmasks some FPU exceptions and installs a SEHTranslator to produce typed C++ exceptions. I am using VC++ 9.0. I would like to use OpenMP (v.2.0) to parallelize some of our computational loops. I've already successfully applied it to one, but the numerical results are slightly different (though I understand it could also be due to calculations being performed in a different order). I'm assuming this is because the FPU state is thread-specific. Is there some way to have the OpenMP threads inherit that state from the master thread? Or is there some way to specify using OpenMP that new threads execute a particular function that sets up the correct state? What is the idiomatic way to deal with this situation?
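
    The control word is per-thread state, so one workaround is simply to re-apply it at the start of every parallel region – a sketch, assuming the same flags your existing _controlfp() call unmasks (note that a translator installed with _set_se_translator is likewise per-thread and needs the same treatment):

        #include <float.h>

        void parallel_compute(double* data, int n)
        {
            unsigned int masterCw = _controlfp(0, 0);          // read the master thread's control word

            #pragma omp parallel
            {
                unsigned int cw;
                _controlfp_s(&cw, masterCw, _MCW_EM);          // re-apply the exception mask on this thread

                #pragma omp for
                for (int i = 0; i < n; ++i) {
                    data[i] = data[i] * 2.0;                   // placeholder for the real numeric work
                }
            }
        }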

    Read the article

  • NetBeans behaves differently if project is run via "Run Project" or build.xml>run

    - by Rogach
    I slightly modified the build-impl.xml file of my NetBeans project. (Specifically, I made it insert the build time into the program code.) If I run the project via the build.xml "run" target, I get the behavior I expect - the program displays the build time and date. But if I run the project using the standard (and most obvious, always used) "Run Main Project" button, I get a totally different result (no build date). Moreover, if I insert any code into build.xml, I still get the result if I run the target explicitly and no result if it is run simply by NetBeans. This leads me to the conclusion that this button uses another method to run my application. My question is: what does that button do? What method does it call? And can it be configured to run the needed target of the build file?
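
    If memory serves, the Run Main Project button bypasses the Ant targets entirely when Compile on Save is active, which would explain the difference. The switch lives in Project Properties → Build → Compiling, or in nbproject/project.properties (the property name below is from memory, so verify it against your project):

        # nbproject/project.properties
        compile.on.save=false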

    Read the article

  • Validation in asp.net MVC - validation only to happen when "asked"

    - by jeriley
    I've got a slightly different validation requirement than the usual "when saving, validate!". The system should allow someone to update a bunch of times without being "bothered" with a validation listing UNTIL they say it's complete. I thought I might be able to pull a fast one and put [HandleError] on the method on which this would fire, but that validates every save. Is there an attribute I can put on my custom validator that can handle this, or am I going to have to write my own HandleError attribute?
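
    One low-tech alternative to a custom attribute is to let the form post say whether this is a "complete" submission and only consult ModelState then – a sketch with made-up model, action and repository names:

        [HttpPost]
        public ActionResult Save(ProjectModel model, bool markComplete = false)
        {
            if (!markComplete)
                ModelState.Clear();            // plain save: skip the validation listing entirely

            if (markComplete && !ModelState.IsValid)
                return View(model);            // "complete" submission: show the full listing

            _repository.Save(model);
            return RedirectToAction("Index");
        }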

    Read the article

  • No scrolling occurs when UITableView scrollToRowAtIndexPath is invoked

    - by theactiveactor
    My view contains a simple TableView with 6 rows and a button that invokes doScroll when clicked. My objective for doScroll is simply to scroll to the 5th cell so that it's at the top of the table view.

        - (void)doScroll:(id)sender {
            NSIndexPath *index = [NSIndexPath indexPathForRow:4 inSection:1];
            [m_tableView scrollToRowAtIndexPath:index atScrollPosition:UITableViewScrollPositionTop animated:YES];
        }

    However, when doScroll is invoked, the table view scrolls only slightly. I suspect this has something to do with the size of the scroll view, so I tried increasing the height of m_tableView.contentSize before scrolling. After I do this, however, no scrolling occurs at all... The view controller is a simple UIViewController and refers to the table view via an IBOutlet. For some reason, scrolling works as expected for the default Navigation Based Application, where the controller is a UITableViewController.

    Read the article
