Search Results

Search found 14383 results on 576 pages for 'double checked locking'.


  • How to wrap the strings that command1 returns in single/double quotation marks (' or ") to feed them to the next command2?

    - by infantcoder
    For example, I want to use mplayer to play the music in several directories by typing something like this in bash: $ find './l_music/La Scala Concert 03 03 03' './l_music/Echoes The Einaudi Collection' './l_music/Ludovico Einaudi - The Royal Albert Hall Concert [2 CD] (2010)' -name '*.mp3' | xargs mplayer The problem is that the paths find returns contain directory and file names with spaces, so mplayer, on the receiving end of the pipe, does not accept those mp3 paths. I think the problem would be solved if the strings returned by find were wrapped in single or double quotation marks (' or ") before being fed to mplayer. How can I do this using only bash commands, without writing a bash or Perl script? A Perl one-liner using Perl's command-line options would also be fine.

    Read the article
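
    The usual whitespace-safe answer to the question above is to stop passing the paths as shell words at all. A minimal sketch with the same find expression (shortened here to one directory); both forms are standard GNU find/xargs features:

        # NUL-delimit the output so xargs never splits paths on spaces:
        find './l_music/La Scala Concert 03 03 03' -name '*.mp3' -print0 | xargs -0 mplayer

        # or let find build the argument list itself, with no pipe and no quoting worries:
        find './l_music/La Scala Concert 03 03 03' -name '*.mp3' -exec mplayer {} +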

  • Script or Utility to convert .nab to .csv without importing double entries in Outlook.

    - by Chris
    Our environment is migrating from GroupWise 7 to Outlook 2003, and we have multiple users with mission-critical outside contacts in their frequent-contacts lists that will have to be imported into Outlook. Currently our only solution is to export the GroupWise contacts to a .nab, import them into Excel to scrub out the contacts in our own domain (to avoid double entries), and convert to .csv. This will require a lot of man-hours of hand-holding, because most of our users are not technically savvy AT ALL and are frankly too busy to do this themselves. Does anyone know of a tool or script to assist with this?

    Read the article
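
    A rough sketch of the scrubbing step described above, assuming the GroupWise export has already been saved as a comma-delimited file with one column holding the e-mail address (the file names, column name and domain here are placeholders, not anything from the question):

        import csv

        OWN_DOMAIN = "@ourdomain.com"   # placeholder for the internal domain to scrub out

        with open("exported_contacts.csv", newline="") as src, \
             open("outlook_import.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                # keep only outside contacts; internal addresses would double up in Outlook
                if OWN_DOMAIN not in row.get("E-mail Address", "").lower():
                    writer.writerow(row)

    The resulting .csv can then be mapped once in Outlook's import wizard, which removes the per-user Excel step.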

  • TFS 2010 - TF14040 The Folder may not be checked out.

    - by Patricker
    I have a .NET 4 website in VS2010 stored in a TFS 2010 team project. I need to add a reference to System.Data.Linq.dll to the website. I am referencing a LINQ DataContext that is defined in another project and I get build errors saying that I need the reference to System.Data.Linq. I go up to the "Add Reference" menu option and add it like I would any normal reference, and it even shows up in the Web.config and in the Properties pages for the website... BUT if I build I still get the same error. So I found a place in my code where I was referencing the LINQ count function and it told me it was invalid because I was missing a reference and it offered to add the reference automatically. I told it to add the reference automatically and it is at this point that I get the error mentioned in the subject: TF14040: The folder $/Folder/Subfolder may not be checked out. No items were checked out I've done some research online but I haven't been able to find much. I saw on a blog that making the folder not readonly fixed the issue for him, but it didn't seem to work for me unless I misunderstood something. I tried loading up the project from source control onto a fresh computer where that project had never been loaded before and I can reproduce the issue the same way. Help would be greatly appreciated.

    Read the article
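
    For the read-only remedy mentioned in the question above, these are the two commands usually tried from a Visual Studio command prompt; the paths are placeholders for the local folder mapped to $/Folder/Subfolder, and this is a sketch of the approach rather than a guaranteed fix for TF14040:

        rem Clear the read-only attribute TFS leaves on files that are not checked out:
        attrib -R "C:\local\path\to\Subfolder\*" /S

        rem Or explicitly check the folder out so the IDE is allowed to edit it:
        tf checkout /recursive "C:\local\path\to\Subfolder"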

  • Read specific line from text file, according to Checked Listbox selection number.

    - by Manolis
    Heya, i want to create an application which will read a specific line from a text file and show it in a textbox. The line will be chosen according to the number of the listbox selection i will make. Here's the code: Public Class Form1 Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load End Sub Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim i As Integer For i = 0 To Me.CheckedListBox1.CheckedIndices.Count - 1 Me.CheckedListBox1.SetItemChecked(Me.CheckedListBox1.CheckedIndices(0),False) Next i End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click If CheckedListBox1.CheckedItems.Count <> 0 Then Dim reader As New System.IO.StreamReader(CurDir() & "\" & "READ.txt") Dim x As Integer Dim s As String = "" For x = 0 To CheckedListBox1.CheckedItems.Count - 1 s = s & "Answer " & (x + 1).ToString & ") " & CheckedListBox1.CheckedItems(x).ToString & ControlChars.CrLf & reader.ReadLine() & ControlChars.CrLf & ControlChars.CrLf Next x Answer.Text = (s) Else MessageBox.Show("Please select questions.", "Error", _ MessageBoxButtons.OK, _ MessageBoxIcon.Information) Return End If End Sub End Class So lets say i 'check' the first, second, and fifth items from the checked listbox, i want it to read from the text file the first, second, and fifth lines of text and show them in the textbox. The current code just reads line 1, 2, 3 (...) in order, no matter what item i have 'checked'. Thanks in advance!

    Read the article
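
    A minimal sketch of the indexed approach for the question above (assuming the same CheckedListBox1, Answer textbox and READ.txt): read the file into an array once, then use each checked index to pick the matching line instead of calling ReadLine sequentially.

        Dim lines As String() = System.IO.File.ReadAllLines(CurDir() & "\READ.txt")
        Dim s As String = ""
        For x As Integer = 0 To CheckedListBox1.CheckedItems.Count - 1
            'CheckedIndices(x) is the position of the checked item, used here as the line number
            Dim lineIndex As Integer = CheckedListBox1.CheckedIndices(x)
            If lineIndex < lines.Length Then
                s &= "Answer " & (x + 1).ToString() & ") " & CheckedListBox1.CheckedItems(x).ToString() & _
                     ControlChars.CrLf & lines(lineIndex) & ControlChars.CrLf & ControlChars.CrLf
            End If
        Next
        Answer.Text = s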

  • I'm getting fatal errors... can anyone help me fix my program?

    - by user350217
    The errors i am getting are: Error 1 error LNK2019: unresolved external symbol "double __cdecl getDollarAmt(void)" (? getDollarAmt@@YANXZ) referenced in function _main hid.obj Error 2 fatal error LNK1120: 1 unresolved externals this is my program: #include<iostream> #include<cmath> #include<string> using namespace std; double getDollarAmt(); void displayCurrencies(); char getCurrencySelection (float amtExchanged); bool isSelectionValid(char selection); double calcExchangeAmt (float amtExchanged, char selection); void displayResults(double newAmount, float amtExchanged, char selection, char yesNo); const double russianRubles = 31.168; const double northKoreanWon = .385; const double chineseYuan = 6.832; const double canadianDollar = 1.1137; const double cubanPeso = 1.0; const double ethiopianBirr = 9.09; const double egyptianPound = 5.6275; const double tunisianDinar = 1.3585; const double thaiBaht = 34.4; /****** I changed the variables to global variables so you don't have to worry about accidentally setting them to 0 or assigning over a value that you need ********/ float amtEchanged = 0.0; char selection; char yesNo; double newAmount; int main() { float amtExchanged = 0.0; selection = 'a'; yesNo = 'y'; newAmount = 0.0; getDollarAmt (); displayCurrencies(); getCurrencySelection (amtExchanged); isSelectionValid(selection);/**** you only need to use the selection variable ****/ calcExchangeAmt (amtExchanged, selection); displayResults(newAmount, amtExchanged, selection, yesNo); return 0; } double getDollarAmt (float amtExchanged) // promt user for eachange amount and return it to main { float amtExchanged0;//created temporary variable to set amtExchanged to cout<< "Please enter the total dollar amount to exchange: "; cin>> amtExchanged0; amtExchanged = amtExchanged0;//setting amtExchanged to the right value return amtExchanged; } void displayCurrencies() // display list of currencies { cout<<"A Russian Ruble"<<endl <<"B North Korean Won"<<endl <<"C Chinese Yuan"<<endl <<"D Cuban Peso"<<endl <<"E Ethiopian Birr"<<endl <<"F Thai Baht"<<endl <<"G Canadian Dollars"<<endl <<"H Tunisian Dinar"<<endl <<"I Egyptian Pound"<<endl; } char getCurrencySelection (float amtExchanged) // make a selection and return to main { char selection0;//again, created a temporary variable for selection cout<<"Please enter your selection: "; cin>>selection0; selection = selection0;//setting the temporary variable to the actual variable you use /***** we are now going to see if isSelectionValid returns false. if it returns false, that means that their selection was not character A-H. if it is false we keep calling getCurrencySelection *****/ if(isSelectionValid(selection)==false) { cout<<"Sorry, the selection you chose is invalid."<<endl; getCurrencySelection(amtExchanged); } return selection; } bool isSelectionValid(char selection) // this fuction is supposed to be called from getCurrencySelection, the selction // must be sent to isSelectionValid to determine if its valid // if selection is valid send it bac to getCurrencySelection // if it is false then it is returned to getCurrencySelection and prompted to // make another selection until the selection is valid, then it is returned to main. 
{ /**** i'm not sure if this is what you mean, all i am doing is making sure that their selection is A-H *****/ if(selection=='A' || selection=='B' || selection=='C' || selection=='D' || selection=='E' || selection=='F' || selection=='G' || selection=='H' || selection=='I') return true; else return false; } double calcExchangeAmt (float amtExchanged,char selection) // function calculates the amount of money to be exchanged { switch (toupper(selection)) { case 'A': newAmount =(russianRubles) * (amtExchanged); break; case 'B': newAmount = (northKoreanWon) * (amtExchanged); break; case 'C': newAmount = (chineseYuan) * (amtExchanged); break; case 'D': newAmount = (canadianDollar) * (amtExchanged); break; case 'E': newAmount = (cubanPeso) * (amtExchanged); break; case 'F': newAmount = (ethiopianBirr) * (amtExchanged); break; case 'G': newAmount = (egyptianPound) * (amtExchanged); break; case 'H': newAmount = (tunisianDinar) * (amtExchanged); break; case 'I': newAmount = (thaiBaht) * (amtExchanged); break; } return newAmount; } void displayResults(double newAmount, float amtExchanged, char selection, char yesNo) // displays results and asked to repeat. IF they want to repeat it clears the screen and starts over. { switch(toupper(selection)) { case 'A': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Russian Rubles."<<endl<<endl; break; case 'B': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" North Korean Won."<<endl<<endl; break; case 'C': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Chinese Yuan."<<endl<<endl; break; case 'D': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Cuban Pesos."<<endl<<endl; break; case 'E': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Ethiopian Birr."<<endl<<endl; break; case 'F': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Thai Baht."<<endl<<endl; break; case 'G': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Canadian Dollars."<<endl<<endl; break; case 'H': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Tunisian Dinar."<<endl<<endl; break; case 'I': cout<<"$"<<amtExchanged<<" is "<<newAmount<<" Egyptian Pound."<<endl<<endl; break; } cout<<"Do you wish to continue? (Y for Yes / N for No)"; cin>>yesNo; if(yesNo=='y' || yesNo=='Y') { getDollarAmt(); } else { system("cls"); } }

    Read the article
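
    The LNK2019 in the question above comes from the prototype and the definition of getDollarAmt disagreeing: main calls the parameterless version that was declared, but only getDollarAmt(float) is ever defined, so the linker finds nothing for "double __cdecl getDollarAmt(void)". A sketch of a matched pair (the rest of the program can stay as posted):

        #include <iostream>

        double getDollarAmt();              // prototype: no parameters

        double getDollarAmt()               // definition must match the prototype exactly
        {
            double amtExchanged = 0.0;
            std::cout << "Please enter the total dollar amount to exchange: ";
            std::cin >> amtExchanged;
            return amtExchanged;            // hand the value back instead of writing to a parameter
        }

    In main the returned value then needs to be stored, e.g. amtExchanged = getDollarAmt();, since the current call discards the result.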

  • Question regarding parent/child relationships with dynamically generated checkboxes using jquery

    - by Jeff
    I have a form of checkboxes, that is dynamically generated based on the users content. Sections of checkboxes are broken up by categories, and each category has projects within. For database purposes, the category values have checkboxes, that are hidden. IF a category has sub items that have checkboxes that are checked, THEN the category checkbox is checked as well. I have gotten this working ok using the JQuery .click(), but I can't figure out how to do it when the page loads. Here is my code for when a checkbox is actually clicked: $(".project").click(function() { if($(this).is(":checked") && $(this).hasClass("project")) { $(this).parent().parent().children(':first').attr('checked', true); }else if(!$(this).parent().children('.project').is(":checked")){ $(this).parent().parent().children(':first').attr('checked', false); } }); Now when I am editing the status of these boxes (meaning after they have been saved to the db) the category boxes are not showing up as checked even though their children projects are checked. What can I do to make it so that my category box will be checked at load time if that category's child is checked? Part of the problem I think is with the dynamically changing parent child setup, how can I find the parent box in order to have it checked? Thanks!

    Read the article
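
    A sketch of the load-time pass for the question above, reusing the question's own traversal (so it assumes the same markup, with the hidden category checkbox as the first child of the .project checkbox's grandparent):

        $(function () {
            // on page load, walk every already-checked project box and mark its category
            $('.project:checked').each(function () {
                $(this).parent().parent().children(':first').attr('checked', true);
            });
        });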

  • Simple C++ program on multidimensional arrays - Getting C2143 error among others. Not sure why?

    - by noobzilla
    Here is my simple multidimensional array program. The first error occurs where I declare the function addmatrices and then a second one where it is implemented. I am also getting an undefined variable error for bsize. What am I doing incorrectly? #include <iostream> #include <fstream> #include <string> #include <iomanip> using namespace std; //Function declarations void constmultiply (double matrixA[][4], int asize, double matrixC[][4], int bsize, double multiplier); //Pre: The address of the output file, the matrix to be multiplied by the constant, the matrix in which // the resultant values will be stored and the multiplier are passed in. //Post: The matrix is multiplied by the multiplier and the results are displayed on screen and written to the // output file. int addmatrices (double matrixA[][4], int asize, double matrixB[]4], int bsize, double matrixC[][4], int csize); //Pre: The addresses of three matrices are passed in //Post: The values in each of the two matrices are added together and put into a third matrix //Error Codes int INPUT_FILE_FAIL = 1; int UNEQUAL_MATRIX_SIZE = 2; //Constants const double multiplier = 2.5; const int rsize = 4; const int csize = 4; //Main Driver int main() { //Declare the two matrices double matrix1 [rsize][csize]; double matrix2 [rsize][csize]; double matrix3 [rsize][csize]; //Variables double temp; string filename; //Declare filestream object ifstream infile; //Ask the user for the name of the input file cout << "Please enter the name of the input file: "; cin >> filename; //Open the filestream object infile.open(filename.c_str()); //Verify that the input file opened correctly if (infile.fail()) { cout << "Input file failed to open" <<endl; exit(INPUT_FILE_FAIL); } //Begin reading in data from the first matrix for (int i = 0; i <= 3; i++)//i = row { for (int j = 0; j <= 3; j++)// j = column { infile >> temp; matrix1[i][j] = temp; } } //Begin reading in data from the second matrix for (int k = 0; k <= 3; k++)// k = row { for (int l = 0; l <= 3; l++)// l = column { infile >> temp; matrix2[k][l] = temp; } } //Notify user cout << "Input file open, reading matrices...Done!" << endl << "Read in 2 matrices..."<< endl; //Output the values read in for Matrix 1 for (int i = 0; i <= 3; i++) { for (int j = 0; j <= 3; j ++) { cout << setprecision(1) << matrix1[i][j] << setw(8); } cout << "\n"; } cout << setw(40)<< setfill('-') << "-" << endl ; //Output the values read in for Matrix 2 for (int i = 0; i <= 3; i++) { for (int j = 0; j <= 3; j ++) { cout << setfill(' ') << setprecision(2) << matrix2[i][j] << setw(8); } cout << "\n"; } cout << setw(40)<< setfill('-') << "-" << endl ; //Multiply matrix 1 by the multiplier value constmultiply (matrix1, rsize, matrix3, rsize, multiplier); //Output matrix 3 values to screen for (int i = 0; i <= 3; i++) { for (int j = 0; j <= 3; j ++) { cout << setfill(' ') << setprecision(2) << matrix3[i][j] << setw(8); } cout << "\n"; } cout << setw(40)<< setfill('-') << "-" << endl ; // //Add matrix1 and matrix2 // addmatrices (matrix1, 4, matrix2, 4, matrix3, 4); // //Finished adding. 
Now output matrix 3 values to screen // for (int i = 0; i <= 3; i++) // { //for (int j = 0; j <= 3; j ++) //{ // cout << setfill(' ') << setprecision(2) << matrix3[i][j] << setw(8); //} //cout << "\n"; // } // cout << setw(40)<< setfill('-') << "-" << endl ; //Close the input file infile.close(); return 0; } //Function implementation void constmultiply (double matrixA[][4], int asize, double matrixC[][4], int bsize, double multiplier) { //Loop through each row and multiply the value at that location with the multiplier for (int i = 0; i < asize; i++) { for (int j = 0; j < 4; j++) { matrixC[i][j] = matrixA[i][j] * multiplier; } } } int addmatrices (double matrixA[][4], int asize, double matrixB[]4], int bsize, double matrixC[][4], int csize) { //Remember that you can only add two matrices that have the same shape - i.e. They need to have an equal //number of rows and columns. Let's add some error checking for that: if(asize != bsize) { cout << "You are attempting to add two matrices that are not equal in shape. Program terminating!" << endl; return exit(UNEQUAL_MATRIX_SIZE); } //Confirmed that the matrices are of equal size, so begin adding elements for (int i = 0; i < asize; i++) { for (int j = 0; j < bsize; j++) { matrixC[i][j] = matrixA[i][j] + matrixB[i][j]; } } }

    Read the article
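
    The C2143 in the question above (and the "undefined bsize" that follows from it) is a missing bracket in the matrixB parameter, in both the declaration and the definition; the return exit(...) line is a second, separate error because exit returns void. A sketch of the corrected pieces:

        #include <iostream>
        #include <cstdlib>      // for exit()

        int addmatrices(double matrixA[][4], int asize,
                        double matrixB[][4], int bsize,    // was: double matrixB[]4]
                        double matrixC[][4], int csize);

        // ...inside the definition, report the error and exit (not "return exit(...)"):
        if (asize != bsize)
        {
            std::cout << "Matrices must have the same shape. Program terminating!" << std::endl;
            exit(UNEQUAL_MATRIX_SIZE);
        }

    The function is also declared to return int but never returns a value on the success path; either add a return 0; at the end or make it void.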

  • Where is the virtual function call overhead?

    - by Semen Semenych
    Hello everybody, I'm trying to benchmark the difference between a function pointer call and a virtual function call. To do this, I have written two pieces of code, that do the same mathematical computation over an array. One variant uses an array of pointers to functions and calls those in a loop. The other variant uses an array of pointers to a base class and calls its virtual function, which is overloaded in the derived classes to do absolutely the same thing as the functions in the first variant. Then I print the time elapsed and use a simple shell script to run the benchmark many times and compute the average run time. Here is the code: #include <iostream> #include <cstdlib> #include <ctime> #include <cmath> using namespace std; long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p) { return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) - ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec); } void function_not( double *d ) { *d = sin(*d); } void function_and( double *d ) { *d = cos(*d); } void function_or( double *d ) { *d = tan(*d); } void function_xor( double *d ) { *d = sqrt(*d); } void ( * const function_table[4] )( double* ) = { &function_not, &function_and, &function_or, &function_xor }; int main(void) { srand(time(0)); void ( * index_array[100000] )( double * ); double array[100000]; for ( long int i = 0; i < 100000; ++i ) { index_array[i] = function_table[ rand() % 4 ]; array[i] = ( double )( rand() / 1000 ); } struct timespec start, end; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start); for ( long int i = 0; i < 100000; ++i ) { index_array[i]( &array[i] ); } clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end); unsigned long long time_elapsed = timespecDiff(&end, &start); cout << time_elapsed / 1000000000.0 << endl; } and here is the virtual function variant: #include <iostream> #include <cstdlib> #include <ctime> #include <cmath> using namespace std; long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p) { return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) - ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec); } class A { public: virtual void calculate( double *i ) = 0; }; class A1 : public A { public: void calculate( double *i ) { *i = sin(*i); } }; class A2 : public A { public: void calculate( double *i ) { *i = cos(*i); } }; class A3 : public A { public: void calculate( double *i ) { *i = tan(*i); } }; class A4 : public A { public: void calculate( double *i ) { *i = sqrt(*i); } }; int main(void) { srand(time(0)); A *base[100000]; double array[100000]; for ( long int i = 0; i < 100000; ++i ) { array[i] = ( double )( rand() / 1000 ); switch ( rand() % 4 ) { case 0: base[i] = new A1(); break; case 1: base[i] = new A2(); break; case 2: base[i] = new A3(); break; case 3: base[i] = new A4(); break; } } struct timespec start, end; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start); for ( int i = 0; i < 100000; ++i ) { base[i]->calculate( &array[i] ); } clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end); unsigned long long time_elapsed = timespecDiff(&end, &start); cout << time_elapsed / 1000000000.0 << endl; } My system is LInux, Fedora 13, gcc 4.4.2. The code is compiled it with g++ -O3. The first one is test1, the second is test2. Now I see this in console: [Ignat@localhost circuit_testing]$ ./test2 && ./test2 0.0153142 0.0153166 Well, more or less, I think. And then, this: [Ignat@localhost circuit_testing]$ ./test2 && ./test2 0.01531 0.0152476 Where are the 25% which should be visible? 
How can the first executable be even slower than the second one? I'm asking this because I'm doing a project which involves calling a lot of small functions in a row like this in order to compute the values of an array, and the code I've inherited does a very complex manipulation to avoid the virtual function call overhead. Now where is this famous call overhead?

    Read the article
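
    One likely reason no difference shows up in the numbers above, offered as a sketch rather than a diagnosis of this particular machine: sin/cos/tan/sqrt cost far more than either call mechanism, so the dispatch overhead is lost in the arithmetic. Giving the called code trivial work and repeating the pass makes the call itself the dominant cost:

        #include <cstdio>
        #include <ctime>

        struct Op  { virtual double apply(double d) const { return d + 1.0; } virtual ~Op() {} };
        struct Op2 : Op { double apply(double d) const { return d * 1.0000001; } };

        int main() {
            Op a; Op2 b;
            const Op* ops[2] = { &a, &b };
            double acc = 1.0;
            std::clock_t start = std::clock();
            for (int pass = 0; pass < 1000; ++pass)        // repeat to rise above timer noise
                for (int i = 0; i < 100000; ++i)
                    acc = ops[i & 1]->apply(acc);          // virtual dispatch is now most of the work
            std::clock_t end = std::clock();
            std::printf("%g in %g s\n", acc, double(end - start) / CLOCKS_PER_SEC);  // print acc so the loop is not optimized away
        }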

  • Segmentation fault when trying to write to a file

    - by user1286390
    I am trying in C language to use the method of bisection to find the roots of some equation, however when I try to write every step of this process in a file I get the problem "Segmentation fault". This might be an idiot fault that I did, however I have been trying to solve this for a long time. I am compiling using gcc and that is the code: #include <stdio.h> #include <stdlib.h> #include <math.h> #define R 1.0 #define h 1.0 double function(double a); void attractor(double *a1, double *a2, double *epsilon); void attractor(double *a1, double *a2, double *epsilon) { FILE* bisection; double a2_copia, a3, fa1, fa2; bisection = fopen("bisection-part1.txt", "w"); fa1 = function(*a1); fa2 = function(*a2); if(function(*a1) - function(*a2) > 0.0) *epsilon = function(*a1) - function(*a2); else *epsilon = function(*a2) - function(*a1); fprintf(bisection, "a1 a2 fa1 fa2 epsilon\n"); a2_copia = 0.0; if(function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) { a3 = *a2 - (*a2 - *a1); a2_copia = *a2; *a2 = a3; if(function(*a1) - function(*a2) > 0.0) *epsilon = function(*a1) - function(*a2); else *epsilon = function(*a2) - function(*a1); if(function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) { fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", *a1, *a2, function(*a1), function(*a1), *epsilon); attractor(a1, a2, epsilon); } else { *a2 = a2_copia; *a1 = a3; if(function(*a1) - function(*a2) > 0) *epsilon = function(*a1) - function(*a2); else *epsilon = function(*a2) - function(*a1); if(function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) { fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", *a1, *a2, function(*a1), function(*a1), *epsilon); attractor(a1, a2, epsilon); } } } fa1 = function(*a1); fa2 = function(*a2); if(function(*a1) - function(*a2) > 0.0) *epsilon = function(*a1) - function(*a2); else *epsilon = function(*a2) - function(*a1); fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", a1, a2, fa1, fa2, epsilon); } double function(double a) { double fa; fa = (a * cosh(h / (2 * a))) - R; return fa; } int main() { double a1, a2, fa1, fa2, epsilon; a1 = 0.1; a2 = 0.5; fa1 = function(a1); fa2 = function(a2); if(fa1 - fa2 > 0.0) epsilon = fa1 - fa2; else epsilon = fa2 - fa1; if(epsilon >= 0.00001) { fa1 = function(a1); fa2 = function(a2); attractor(&a1, &a2, &epsilon); fa1 = function(a1); fa2 = function(a2); if(fa1 - fa2 > 0.0) epsilon = fa1 - fa2; else epsilon = fa2 - fa1; } if(epsilon < 0.0001) printf("Vanish at %f", a2); else printf("ERROR"); return 0; } Thanks anyway and sorry if this question is not suitable.

    Read the article
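
    Two things in the posted code are undefined behavior, and either one can produce the segfault: the last fprintf in attractor passes the double pointers a1, a2 and epsilon where %.4f expects double values, and the log file is re-opened with "w" on every recursive call and never closed, so fopen eventually starts returning NULL. A sketch of the shape of the fix (open once in main, pass the FILE* down, dereference before printing):

        /* in main(): open the log a single time and close it when done */
        FILE *bisection = fopen("bisection-part1.txt", "w");
        if (bisection == NULL)
            return 1;
        attractor(bisection, &a1, &a2, &epsilon);
        fclose(bisection);

        /* in attractor(FILE *bisection, double *a1, double *a2, double *epsilon):
           print values, not pointers */
        fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n",
                *a1, *a2, function(*a1), function(*a2), *epsilon);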

  • How to break out of a loop depending on a CheckBox Checked value while the loop is running?

    - by Rayner Vaz
    Let's say I want to call a function repeatedly while the user has a CheckBox named "repeat" selected, but the calls to the function should stop as soon as the user unchecks the CheckBox. I tried using the 'Checked', 'Click' and 'Tap' events of the CheckBox, but once the loop starts it does not sense any changes in the checkbox's state. I even tried using a loop inside another button's _Click method, but that just locks the Button in its 'pressed' state. Any ideas or alternative suggestions?

    Read the article
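
    A loop run on the UI thread blocks message processing, which is why the CheckBox never appears to change; letting a timer make the repeated calls keeps the UI responsive and gives a natural stopping point. A WinForms-flavored sketch (control and method names are made up; on Silverlight/Windows Phone, where the Tap event lives, the same idea works with a DispatcherTimer):

        using System;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private readonly Timer repeatTimer = new Timer { Interval = 100 };

            public MainForm()
            {
                InitializeComponent();
                repeatTimer.Tick += (s, e) =>
                {
                    if (repeatCheckBox.Checked)
                        DoRepeatedWork();      // the function to call repeatedly
                    else
                        repeatTimer.Stop();    // stops as soon as the user unchecks the box
                };
                repeatCheckBox.CheckedChanged += (s, e) =>
                {
                    if (repeatCheckBox.Checked) repeatTimer.Start();
                };
            }

            private void DoRepeatedWork() { /* ... */ }
        }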

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer. Let’s look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we’ll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we’re using, would require adding locking. We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.
This leads to two observations I’ll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and help the overall speed.  In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is communitive, we do not need to perform this in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, “merge” together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewriting our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.  
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation. Now that I’ve explained how this works, lets look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObj) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.

    Read the article
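
    The Interlocked-based merge mentioned at the end of the article is worth seeing in code. A minimal sketch (the items array is hypothetical, and a sum is used because Interlocked has no atomic minimum for doubles): the per-element body stays lock-free, and the only atomic operation happens once per thread in the localFinally delegate.

        using System.Threading;
        using System.Threading.Tasks;

        static class Aggregation
        {
            public static long Sum(long[] items)
            {
                long total = 0;
                Parallel.For<long>(0, items.Length,
                    () => 0L,                                          // local state init, per thread
                    (i, loopState, subtotal) => subtotal + items[i],   // per-element body, no locking
                    subtotal => Interlocked.Add(ref total, subtotal)); // one atomic merge per thread
                return total;
            }
        }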

  • C# Neural Networks with Encog

    - by JoshReuben
    Neural Networks ·       I recently read a book Introduction to Neural Networks for C# , by Jeff Heaton. http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell. Not the 1st ANN book I've perused, but a nice revision.   ·       Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network , http://en.wikipedia.org/wiki/Category:Machine_learning ·       Problems Not Suited to a Neural Network Solution- Programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, problems in which you must know exactly how the solution was derived. ·       Problems Suited to a Neural Network – pattern recognition, classification, series prediction, and data mining. Pattern recognition - network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification - take input samples and classify them into fuzzy groups. ·       As far as machine learning approaches go, I thing SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine ) - a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are black box. ·       In this post, I'm not going to go into internals (believe me I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized. ·       Under the hood, there is very little maths. In a nutshell - Some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) - transposed from input column vector into a row vector, these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a boolean match result. For backpropogation training, a derivative function is required. In learning, hill climbing mechanisms such as Genetic Algorithms and Simulated Annealing are used to escape local minima. For unsupervised training, such as found in Self Organizing Maps used for OCR, Hebbs rule is applied. ·       The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API - Encog   Encog ·       Encog is a neural network API ·       Links to Encog: http://www.encog.org , http://www.heatonresearch.com/encog, http://www.heatonresearch.com/forum ·       Encog requires .Net 3.5 or higher – there is also a Silverlight version. Third-Party Libraries – log4net and nunit. ·       Encog supports feedforward, recurrent, self-organizing maps, radial basis function and Hopfield neural networks. ·       Encog neural networks, and related data, can be stored in .EG XML files. ·       Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can generate code. Synapses and layers ·       the primary building blocks - Almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer. 
·       To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of a neural network, and receive the solution through the output layer of a neural network. ·       The Input Layer - For each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData based class will be returned from the neural network's output layer. ·       convert a double array into an INeuralData object : INeuralData data = new BasicNeuralData(= new double[10]); ·       the Output Layer- The neural network outputs an array of doubles, wraped in a class based on the INeuralData interface. ·        The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted. ·       Hidden Layers– optional. between the input and output layers. very much a “black box”. If the structure of the hidden layer is too simple it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers. The input layer may be directly connected to the output layer. Further, some neural networks have only a single layer. A single layer neural network has the single layer self-connected. ·       connections, called synapses, contain individual weight matrixes. These values are changed as the neural network learns. Constructing a Neural Network ·       the XOR operator is a frequent “first example” -the “Hello World” application for neural networks. ·       The XOR Operator- only returns true when both inputs differ. 0 XOR 0 = 0 1 XOR 0 = 1 0 XOR 1 = 1 1 XOR 1 = 0 ·       Structuring a Neural Network for XOR  - two inputs to the XOR operator and one output. ·       input: 0.0,0.0 1.0,0.0 0.0,1.0 1.0,1.0 ·       Expected output: 0.0 1.0 1.0 0.0 ·       A Perceptron - a simple feedforward neural network to learn the XOR operator. ·       Because the XOR operator has two inputs and one output, the neural network will follow suit. Additionally, the neural network will have a single hidden layer, with two neurons to help process the data. The choice for 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error. ·       Neuron Diagram for the XOR Network ·       ·       The Encog workbench displays neural networks on a layer-by-layer basis. ·       Encog Layer Diagram for the XOR Network:   ·       Create a BasicNetwork - Three layers are added to this network. the FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers. var network = new BasicNetwork(); network.AddLayer(new BasicLayer(2)); network.AddLayer(new BasicLayer(2)); network.AddLayer(new BasicLayer(1)); network.Structure.FinalizeStructure(); network.Reset(); ·       Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off. Sometimes it may be necessary to reset the weights again, if training is ineffective. These weights make up the long-term memory of the neural network. 
Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values. ·       Now that the neural network has been created, it must be trained. Training a Neural Network ·       construct a INeuralDataSet object - contains the input array and the expected output array (of corresponding range). Even though there is only one output value, we must still use a two-dimensional array to represent the output. public static double[][] XOR_INPUT ={ new double[2] { 0.0, 0.0 }, new double[2] { 1.0, 0.0 }, new double[2] { 0.0, 1.0 }, new double[2] { 1.0, 1.0 } };   public static double[][] XOR_IDEAL = { new double[1] { 0.0 }, new double[1] { 1.0 }, new double[1] { 1.0 }, new double[1] { 0.0 } };   INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL); ·       Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training. Resilient Propagation (RPROP) - general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network to be below 1%. Once the neural network has been trained, it is ready for use. ITrain train = new ResilientPropagation(network, trainingSet);   for (int epoch=0; epoch < 10000; epoch++) { train.Iteration(); Debug.Print("Epoch #" + epoch + " Error:" + train.Error); if (train.Error > 0.01) break; } Executing a Neural Network ·       Call the Compute method on the BasicNetwork class. Console.WriteLine("Neural Network Results:"); foreach (INeuralDataPair pair in trainingSet) { INeuralData output = network.Compute(pair.Input); Console.WriteLine(pair.Input[0] + "," + pair.Input[1] + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]); } ·       The Compute method accepts an INeuralData class and also returns a INeuralData object. Neural Network Results: 0.0,0.0, actual=0.002782538818034049,ideal=0.0 1.0,0.0, actual=0.9903741937121177,ideal=1.0 0.0,1.0, actual=0.9836807956566187,ideal=1.0 1.0,1.0, actual=0.0011646072586172778,ideal=0.0 ·       the network has not been trained to give the exact results. This is normal. Because the network was trained to 1% error, each of the results will also be within generally 1% of the expected value.

    Read the article

  • Applications: The mathematics of movement, Part 1

    - by TechTwaddle
    Before you continue reading this post, a suggestion; if you haven’t read “Programming Windows Phone 7 Series” by Charles Petzold, go read it. Now. If you find 150+ pages a little too long, at least go through Chapter 5, Principles of Movement, especially the section “A Brief Review of Vectors”. This post is largely inspired from this chapter. At this point I assume you know what vectors are, how they are represented using the pair (x, y), what a unit vector is, and given a vector how you would normalize the vector to get a unit vector. Our task in this post is simple, a marble is drawn at a point on the screen, the user clicks at a random point on the device, say (destX, destY), and our program makes the marble move towards that point and stop when it is reached. The tricky part of this task is the word “towards”, it adds a direction to our problem. Making a marble bounce around the screen is simple, all you have to do is keep incrementing the X and Y co-ordinates by a certain amount and handle the boundary conditions. Here, however, we need to find out exactly how to increment the X and Y values, so that the marble appears to move towards the point where the user clicked. And this is where vectors can be so helpful. The code I’ll show you here is not ideal, we’ll be working with C# on Windows Mobile 6.x, so there is no built-in vector class that I can use, though I could have written one and done all the math inside the class. I think it is trivial to the actual problem that we are trying to solve and can be done pretty easily once you know what’s going on behind the scenes. In other words, this is an excuse for me being lazy. The first approach, uses the function Atan2() to solve the “towards” part of the problem. Atan2() takes a point (x, y) as input, Atan2(y, x), note that y goes first, and then it returns an angle in radians. What angle you ask. Imagine a line from the origin (0, 0), to the point (x, y). The angle which Atan2 returns is the angle the positive X-axis makes with that line, measured clockwise. The figure below makes it clear, wiki has good details about Atan2(), give it a read. The pair (x, y) also denotes a vector. A vector whose magnitude is the length of that line, which is Sqrt(x*x + y*y), and a direction θ, as measured from positive X axis clockwise. If you’ve read that chapter from Charles Petzold’s book, this much should be clear. Now Sine and Cosine of the angle θ are special. Cosine(θ) divides x by the vectors length (adjacent by hypotenuse), thus giving us a unit vector along the X direction. And Sine(θ) divides y by the vectors length (opposite by hypotenuse), thus giving us a unit vector along the Y direction. Therefore the vector represented by the pair (cos(θ), sin(θ)), is the unit vector (or normalization) of the vector (x, y). This unit vector has a length of 1 (remember sin²(θ) + cos²(θ) = 1?), and a direction which is the same as vector (x, y). Now if I multiply this unit vector by some amount, then I will always get a point which is a certain distance away from the origin, but, more importantly, the point will always be on that line. For example, if I multiply the unit vector with the length of the line, I get the point (x, y). Thus, all we have to do to move the marble towards our destination point, is to multiply the unit vector by a certain amount each time and draw the marble, and the marble will magically move towards the click point. Now time for some code.
The application, uses a timer based frame draw method to draw the marble on the screen. The timer is disabled initially and whenever the user clicks on the screen, the timer is enabled. The callback function for the timer follows the standard Update and Draw cycle. private double totLenToTravelSqrd = 0; private double startPosX = 0, startPosY = 0; private double destX = 0, destY = 0; private void Form1_MouseUp(object sender, MouseEventArgs e) {     destX = e.X;     destY = e.Y;     double x = marble1.x - destX;     double y = marble1.y - destY;     //calculate the total length to be travelled     totLenToTravelSqrd = x * x + y * y;     //store the start position of the marble     startPosX = marble1.x;     startPosY = marble1.y;     timer1.Enabled = true; } private void timer1_Tick(object sender, EventArgs e) {     UpdatePosition();     DrawMarble(); } Form1_MouseUp() method is called when ever the user touches and releases the screen. In this function we save the click point in destX and destY, this is the destination point for the marble and we also enable the timer. We store a few more values which we will use in the UpdatePosition() method to detect when the marble has reached the destination and stop the timer. So we store the start position of the marble and the square of the total length to be travelled. I’ll leave out the term ‘sqrd’ when speaking of lengths from now on. The time out interval of the timer is set to 40ms, thus giving us a frame rate of about ~25fps. In the timer callback, we update the marble position and draw the marble. We know what DrawMarble() does, so here, we’ll only look at how UpdatePosition() is implemented; private void UpdatePosition() {     //the vector (x, y)     double x = destX - marble1.x;     double y = destY - marble1.y;     double incrX=0, incrY=0;     double distanceSqrd=0;     double speed = 6;     //distance between destination and current position, before updating marble position     distanceSqrd = x * x + y * y;     double angle = Math.Atan2(y, x);     //Cos and Sin give us the unit vector, 6 is the value we use to magnify the unit vector along the same direction     incrX = speed * Math.Cos(angle);     incrY = speed * Math.Sin(angle);     marble1.x += incrX;     marble1.y += incrY;     //check for bounds     if ((int)marble1.x < MinX + marbleWidth / 2)     {         marble1.x = MinX + marbleWidth / 2;     }     else if ((int)marble1.x > (MaxX - marbleWidth / 2))     {         marble1.x = MaxX - marbleWidth / 2;     }     if ((int)marble1.y < MinY + marbleHeight / 2)     {         marble1.y = MinY + marbleHeight / 2;     }     else if ((int)marble1.y > (MaxY - marbleHeight / 2))     {         marble1.y = MaxY - marbleHeight / 2;     }     //distance between destination and current point, after updating marble position     x = destX - marble1.x;     y = destY - marble1.y;     double newDistanceSqrd = x * x + y * y;     //length from start point to current marble position     x = startPosX - (marble1.x);     y = startPosY - (marble1.y);     double lenTraveledSqrd = x * x + y * y;     //check for end conditions     if ((int)lenTraveledSqrd >= (int)totLenToTravelSqrd)     {         System.Console.WriteLine("Stopping because destination reached");         timer1.Enabled = false;     }     else if (Math.Abs((int)distanceSqrd - (int)newDistanceSqrd) < 4)     {         System.Console.WriteLine("Stopping because no change in Old and New position");         timer1.Enabled = false;     } } Ok, so in this function, first we subtract the current marble 
position from the destination point to give us a vector. The first three lines of the function construct this vector (x, y). The vector (x, y) has the same length as the line from (marble1.x, marble1.y) to (destX, destY) and is in the direction pointing from (marble1.x, marble1.y) to (destX, destY). Note that marble1.x and marble1.y denote the center point of the marble. Then we use Atan2() to get the angle which this vector makes with the positive X axis and use Cosine() and Sine() of that angle to get the unit vector along that same direction. We multiply this unit vector with 6, to get the values which the position of the marble should be incremented by. This variable, speed, can be experimented with and determines how fast the marble moves towards the destination. After this, we check for bounds to make sure that the marble stays within the screen limits and finally we check for the end condition and stop the timer. The end condition has two parts to it. The first case is the normal case, where the user clicks well inside the screen. Here, we stop when the total length travelled by the marble is greater than or equal to the total length to be travelled. Simple enough. The second case is when the user clicks on the very corners of the screen. Like I said before, the values marble1.x and marble1.y denote the center point of the marble. When the user clicks on the corner, the marble moves towards the point, and after some time tries to go outside of the screen, this is when the bounds checking comes into play and corrects the marble position so that the marble stays inside the screen. In this case the marble will never travel a distance of totLenToTravelSqrd, because of the correction is its position. So here we detect the end condition when there is not much change in marbles position. I use the value 4 in the second condition above. After experimenting with a few values, 4 seemed to work okay. There is a small thing missing in the code above. In the normal case, case 1, when the update method runs for the last time, marble position over shoots the destination point. This happens because the position is incremented in steps (which are not small enough), so in this case too, we should have corrected the marble position, so that the center point of the marble sits exactly on top of the destination point. I’ll add this later and update the post. This has been a pretty long post already, so I’ll leave you with a video of how this program looks while running. Notice in the video that the marble moves like a bot, without any grace what so ever. And that is because the speed of the marble is fixed at 6. In the next post we will see how to make the marble move a little more elegantly. And also, if Atan2(), Sine() and Cosine() are a little too much to digest, we’ll see how to achieve the same effect without using them, in the next to next post maybe. Ciao!

    Read the article
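
    For the trig-free alternative hinted at in the closing paragraph, the same unit vector falls out of dividing the vector by its own length, with no Atan2, Cos or Sin at all. A sketch using the variable names from UpdatePosition():

        double x = destX - marble1.x;
        double y = destY - marble1.y;
        double length = Math.Sqrt(x * x + y * y);     // magnitude of the vector (x, y)

        if (length > 0)
        {
            incrX = speed * (x / length);             // (x/length, y/length) is the unit vector,
            incrY = speed * (y / length);             // identical to (Cos(angle), Sin(angle))
            marble1.x += incrX;
            marble1.y += incrY;
        }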

  • Tile engine Texture not updating when numbers in array change

    - by Corey
    I draw my map from a txt file. I am using java with slick2d library. When I print the array the number changes in the array, but the texture doesn't change. public class Tiles { public Image[] tiles = new Image[5]; public int[][] map = new int[64][64]; public Image grass, dirt, fence, mound; private SpriteSheet tileSheet; public int tileWidth = 32; public int tileHeight = 32; public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); fence = tileSheet.getSprite(2, 0); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = fence; tiles[3] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.dat")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); x = 0; for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); x++; } //System.out.println(""); y++; } in.close(); } public void update(GameContainer gc) { } public void render(GameContainer gc) { for(int y = 0; y < map.length; y++) { for(int x = 0; x < map[0].length; x ++) { int textureIndex = map[x][y]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } } Mouse Picking Where I change the number in the array Input input = gc.getInput(); gc.getInput().setOffset(cameraX-400, cameraY-300); float mouseX = input.getMouseX(); float mouseY = input.getMouseY(); double mousetileX = Math.floor((double)mouseX/tiles.tileWidth); double mousetileY = Math.floor((double)mouseY/tiles.tileHeight); double playertileX = Math.floor(playerX/tiles.tileWidth); double playertileY = Math.floor(playerY/tiles.tileHeight); double lengthX = Math.abs((float)playertileX - mousetileX); double lengthY = Math.abs((float)playertileY - mousetileY); double distance = Math.sqrt((lengthX*lengthX)+(lengthY*lengthY)); if(input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) { System.out.println("Clicked"); if(tiles.map[(int)mousetileX][(int)mousetileY] == 1) { tiles.map[(int)mousetileX][(int)mousetileY] = 0; } } I never ask a question until I have tried to figure it out myself. I have been stuck with this problem for two weeks. It's not like this site is made for asking questions or anything. So if you actually try to help me instead of telling me to use a debugger thank you. You either get told you have too much or too little code. Nothing is never enough for the people on here it's as bad as something like reddit. Idk what is wrong all my textures work when I render them it just doesn't update when the number in the array changes. I am obviously debugging when I say that I was printing the array and the number is changing like it should, so it's not a problem with my mouse picking code. It is a problem with my textures, but I don't know what because they all render correctly. That is why I need help.

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way: protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException { synchronized (getClassLoadingLock(name)) { // First, check if the class has already been loaded Class c = findLoadedClass(name); if (c == null) { long t0 = System.nanoTime(); try { if (parent != null) { c = parent.loadClass(name, false); } else { c = findBootstrapClassOrNull(name); } } catch (ClassNotFoundException e) { // ClassNotFoundException thrown if class not found // from the non-null parent class loader } if (c == null) { // If still not found, then invoke findClass in order // to find the class. long t1 = System.nanoTime(); c = findClass(name); // this is the defining class loader; record the stats sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0); sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1); sun.misc.PerfCounter.getFindClasses().increment(); } } if (resolve) { resolveClass(c); } return c; } } Where getClassLoadingLock simply does: protected Object getClassLoadingLock(String className) { Object lock = this; if (parallelLockMap != null) { Object newLock = new Object(); lock = parallelLockMap.putIfAbsent(className, newLock); if (lock == null) { lock = newLock; } } return lock; } This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per-classloader. As per the code above under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent and so on till the boot loader is invoked for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew before hand which loader would actually load the class the locking would only need to be performed in that loader. As it stands the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry is completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class. 
It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class.

Ultimately when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully.

Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!).

There are a number of obvious things we can try to solve this problem and they basically take three forms:

1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically).

There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

Object lock1 = loader.getClassLoadingLock(name);
loader.loadClass(name);
Object lock2 = loader.getClassLoadingLock(name);
assert lock1 == lock2;

Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders?

So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading.

Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

Proposal: Fully Concurrent ClassLoaders

The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders.

Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

Preliminary webrevs:
http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

Please direct all comments to the mailing list [email protected].
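For reference, in outline a fully concurrent loadClass is just the Java 7 code shown earlier with the per-name synchronized block removed - a sketch only, not the code in the webrevs - relying on the VM's find_or_define_class (the "parallel define class" mechanism) to resolve the race when two threads reach the definition step for the same class:

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    Class<?> c = findLoadedClass(name);
    if (c == null) {
        try {
            if (parent != null) {
                c = parent.loadClass(name, false);
            } else {
                c = findBootstrapClassOrNull(name);
            }
        } catch (ClassNotFoundException e) {
            // not found by the parent; fall through to findClass
        }
        if (c == null) {
            c = findClass(name);   // may race with another thread; the VM keeps a single definition
        }
    }
    if (resolve) {
        resolveClass(c);
    }
    return c;
}

Two threads may still both locate and read the class file, which is the residual performance penalty mentioned above, but neither sees a redefinition error and both end up with the same Class object.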

    Read the article
