Search Results

Search found 18096 results on 724 pages for 'let'.


  • Mootools 1.2.4 delegation not working in IE8...?

    - by michael
    Hey there everybody-- So I have a listbox next to a form. When the user clicks an option in the select box, I make a request for the related data, returned in a JSON object, which gets put into the form elements. When the form is saved, the request goes thru and the listbox is rebuilt with the updated data. Since it's being rebuilt I'm trying to use delegation on the listbox's parent div for the onchange code. The trouble I'm having is with IE8 (big shock) not firing the delegated event. I have the following HTML: <div id="listwrapper" class="span-10 append-1 last"> <select id="list" name="list" size="20"> <option value="86">Adrian Franklin</option> <option value="16">Adrian McCorvey</option> <option value="196">Virginia Thomas</option> </select> </div> and the following script to go with it: window.addEvent('domready', function() { var jsonreq = new Request.JSON(); $('listwrapper').addEvent('change:relay(select)', function(e) { alert('this doesn't fire in IE8'); e.stop(); var status= $('statuswrapper').empty().addClass('ajax-loading'); jsonreq.options.url = 'de_getformdata.php'; jsonreq.options.method = 'post'; jsonreq.options.data = {'getlist':'<?php echo $getlist ?>','pkey':$('list').value}; jsonreq.onSuccess = function(rObj, rTxt) { status.removeClass('ajax-loading'); for (key in rObj) { status.set('html','You are currently editing '+rObj['cname']); if ($chk($(key))) $(key).value = rObj[key]; } $('lalsoaccomp-yes').set('checked',(($('naccompkey').value > 0)?'true':'false')); $('lalsoaccomp-no').set('checked',(($('naccompkey').value > 0)?'false':'true')); } jsonreq.send(); }); }); (I took out a bit of unrelated stuff). So this all works as expected in firefox, but IE8 refuses to fire the delegated change event on the select element. If I attach the change function directly to the select, then it works just fine. Am I missing something? Does IE8 just not like the :relay? Sidenote: I'm very new to mootools and javascripting, etc, so if there's something that can be improved code-wise, please let me know too.. Thanks!

    Read the article

  • Need some constructive criticism on my SSE/Assembly attempt

    - by Brett
    Hello, I'm working on converting a bit of code to SSE, and while I have the correct output it turns out to be slower than standard C++ code. The bit of code that I need to do this for is:

        float ox = p2x - (px * c - py * s)*m;
        float oy = p2y - (px * s - py * c)*m;

    What I've got for SSE code is:

        void assemblycalc(vector4 &p, vector4 &sc, float &m, vector4 &xy)
        {
            vector4 r;
            __m128 scale = _mm_set1_ps(m);
            __asm
            {
                mov eax, p              //Load into CPU reg
                mov ebx, sc
                movups xmm0, [eax]      //move vectors to SSE regs
                movups xmm1, [ebx]
                mulps xmm0, xmm1        //Multiply the Elements
                movaps xmm2, xmm0       //make a copy of the array
                shufps xmm2, xmm0, 0x1B //shuffle the array
                subps xmm0, xmm2        //subtract the elements
                mulps xmm0, scale       //multiply the vector by the scale
                mov ecx, xy             //load the variable into cpu reg
                movups xmm3, [ecx]      //move the vector to the SSE regs
                subps xmm3, xmm0        //subtract xmm3 - xmm0
                movups [r], xmm3        //Save the return vector, and use elements 0 and 3
            }
        }

    Since it's very difficult to read the code, I'll explain what I did:

        loaded vector4,   xmm0 = p  = [px, py, px, py]
        mult. by vector4, xmm1 = cs = [c,  c,  s,  s ]
        ------------------------------------------------- mult
        result,           xmm0 = [px*c, py*c, px*s, py*s]
        reuse result,     xmm0 = [px*c, py*c, px*s, py*s]
        shuffle result,   xmm2 = [py*s, px*s, py*c, px*c]
        ------------------------------------------------- subtract
        result,           xmm0 = [px*c-py*s, py*c-px*s, px*s-py*c, py*s-px*c]
        reuse result,     xmm0 = [px*c-py*s, py*c-px*s, px*s-py*c, py*s-px*c]
        load m vector4,   scale = [m, m, m, m]
        ------------------------------------------------- mult
        result,           xmm0 = [(px*c-py*s)*m, (py*c-px*s)*m, (px*s-py*c)*m, (py*s-px*c)*m]
        load xy vector4,  xmm3 = [p2x, p2x, p2y, p2y]
        reuse,            xmm0 = [(px*c-py*s)*m, (py*c-px*s)*m, (px*s-py*c)*m, (py*s-px*c)*m]
        ------------------------------------------------- subtract
        result,           xmm3 = [p2x-(px*c-py*s)*m, p2x-(py*c-px*s)*m, p2y-(px*s-py*c)*m, p2y-(py*s-px*c)*m]

    Then ox = xmm3[0] and oy = xmm3[3], so I essentially don't use xmm3[1] or xmm3[2]. I apologize for the difficulty reading this, but I'm hoping someone might be able to provide some guidance for me, as the standard C++ code runs in 0.001444ms and the SSE code runs in 0.00198ms. Let me know if there is anything I can do to further explain/clean this up a bit. The reason I'm trying to use SSE is because I run this calculation millions of times, and it is a part of what is slowing down my current code. Thanks in advance for any help! Brett
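
    For comparison, a sketch of the same sequence written with SSE compiler intrinsics instead of inline assembly, which avoids the extra mov/movups round trips for the pointer arguments. It assumes vector4 is just four contiguous floats laid out as described above (p = [px, py, px, py], sc = [c, c, s, s], xy = [p2x, p2x, p2y, p2y]); the function and parameter names are illustrative, and whether it actually closes the gap with the scalar version would still need measuring.

        #include <xmmintrin.h>  /* SSE intrinsics */

        /* Sketch: same math as the inline-asm version above, using intrinsics.
         * Assumes p = {px, py, px, py}, sc = {c, c, s, s},
         * xy = {p2x, p2x, p2y, p2y}; results land in r[0] (ox) and r[3] (oy). */
        static void assemblycalc_sketch(const float *p, const float *sc,
                                        float m, const float *xy, float *r)
        {
            __m128 prod = _mm_mul_ps(_mm_loadu_ps(p), _mm_loadu_ps(sc));
            __m128 rev  = _mm_shuffle_ps(prod, prod, 0x1B);   /* reverse lanes */
            __m128 scl  = _mm_mul_ps(_mm_sub_ps(prod, rev), _mm_set1_ps(m));
            _mm_storeu_ps(r, _mm_sub_ps(_mm_loadu_ps(xy), scl));
        }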

    Read the article

  • Parse lines of integers in C

    - by Jérôme
    This is a classical problem, but I cannot find a simple solution. I have an input file like:

        1 3 9 13 23 25 34 36 38 40 52 54 59
        2 3 9 14 23 26 34 36 39 40 52 55 59 63 67 76 85 86 90 93 99 108 114
        2 4 9 15 23 27 34 36 63 67 76 85 86 90 93 99 108 115
        1 25 34 36 38 41 52 54 59 63 67 76 85 86 90 93 98 107 113
        2 3 9 16 24 28
        2 3 10 14 23 26 34 36 39 41 52 55 59 63 67 76

    Lines have a different number of integers separated by a space. I would like to parse them into an array and separate each line with a marker, let's say -1. The difficulty is that I must handle integers and line returns. Here is my existing code; it loops forever on the scanf loop (because scanf cannot begin at a given position).

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            if (argc != 4) {
                fprintf(stderr, "Usage: %s <data file> <nb transactions> <nb items>\n", argv[0]);
                return 1;
            }

            FILE * file;
            file = fopen (argv[1],"r");
            if (file==NULL) {
                fprintf(stderr, "Error: can not open %s\n", argv[1]);
                fclose(file);
                return 1;
            }

            int nb_trans = atoi(argv[2]);
            int nb_items = atoi(argv[3]);

            int *bdd = malloc(sizeof(int) * (nb_trans + nb_items));
            char line[1024];
            int i = 0;
            while ( fgets(line, 1024, file) ) {
                int item;
                while ( sscanf (line, "%d ", &item )){
                    printf("%s %d %d\n", line, i, item);
                    bdd[i++] = item;
                }
                bdd[i++] = -1;
            }

            for ( i = 0; i < nb_trans + nb_items; i++ ) {
                printf("%d ", bdd[i]);
            }
            printf("\n");
        }
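
    A sketch of one way around the stuck inner loop: strtol with an advancing end pointer, so each call resumes exactly where the previous number ended and variable-length lines fall out naturally. The output buffer and its capacity are assumptions for illustration.

        #include <stdlib.h>

        /* Sketch: parse one line of whitespace-separated integers with strtol,
         * then append -1 as the end-of-line marker. Returns how many ints were
         * written to out. */
        static int parse_line(const char *line, int *out, int capacity)
        {
            int n = 0;
            const char *p = line;
            char *end;

            for (;;) {
                long v = strtol(p, &end, 10);
                if (end == p)              /* no more digits on this line */
                    break;
                if (n < capacity)
                    out[n++] = (int)v;
                p = end;                   /* resume right after the number read */
            }
            if (n < capacity)
                out[n++] = -1;             /* line separator marker */
            return n;
        }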

    Read the article

  • Alert on gridview edit based on permission

    - by Vicky
    I have a gridview with edit option at the start of the row. Also I maintain a seperate table called Permission where I maintain user permissions. I have three different types of permissions like Admin, Leads, Programmers. These all three will have access to the gridview. Except admin if anyone tries to edit the gridview on clicking the edit option, I need to give an alert like This row has important validation and make sure you make proper changes. When I edit, the action with happen on table called Application. The table has a column called Comments. Also the alert should happen only when they try to edit rows where the Comments column have these values in them. ManLog datas Funding Approved Exported Applications My try so far. public bool IsApplicationUser(string userName) { return CheckUser(userName); } public static bool CheckUser(string userName) { string CS = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); DataTable dt = new DataTable(); using (SqlConnection connection = new SqlConnection(CS)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select * from Permissions where AppCode='Nest' and UserID = '" + userName + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(dt); } if (dt.Rows.Count >= 1) return true; else return true; } protected void Details_RowCommand(object sender, GridViewCommandEventArgs e) { string currentUser = HttpContext.Current.Request.LogonUserIdentity.Name; string str = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); string[] words = currentUser.Split('\\'); currentUser = words[1]; bool appuser = IsApplicationUser(currentUser); if (appuser) { DataSet ds = new DataSet(); using (SqlConnection connection = new SqlConnection(str)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select Role_Cd from User_Role where AppCode='PM' and UserID = '" + currentUser + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(ds); } if (e.CommandName.Equals("Edit") && ds.Tables[0].Rows[0]["Role_Cd"].ToString().Trim() != "ADMIN") { int index = Convert.ToInt32(e.CommandArgument); GridView gvCurrentGrid = (GridView)sender; GridViewRow row = gvCurrentGrid.Rows[index]; string strID = ((Label)row.FindControl("lblID")).Text; string strAppName = ((Label)row.FindControl("lblAppName")).Text; Response.Redirect("AddApplication.aspx?ID=" + strID + "&AppName=" + strAppName + "&Edit=True"); } } } Kindly let me know if I need to add something. Thanks for any suggestions.

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair. Or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp. I went to implement the idea and had something like this in my Map object: List<Tile>[,] Grid; But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind I think there has to be a better way. Now this IS pre-mature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex. One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects? object[,] Grid; So here's my criteria: A 2D grid of tile locations Each tile location has a minimum of 1 tile, but can optionally have more Each extra tile can optionally have an x and y offset for pinpoint placement Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background here's roughly what my Map and Tile objects amount to: public struct Map { public Texture2D Texture; public List<Rectangle> Sources; //Source Rectangles for where in Texture to get the sprite public List<Tile>[,] Grid; } public struct Tile { public int Index; //Where in Sources to find the source Rectangle public int X, Y; //Optional offsets }
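
    The "only pay for it when needed" idea can be kept without 40,000 List allocations by giving each cell one inline base tile plus an optional overflow pointer that stays null in the common case. A rough sketch (written as C structs for brevity; the field names are illustrative, and the same shape translates directly to C# structs and arrays):

        /* Sketch: one base tile per cell; extras are allocated only for the
         * few cells that actually stack more than one tile. */
        typedef struct {
            int   index;          /* which source rectangle to draw      */
            short x_off, y_off;   /* pixel offsets, usually 0            */
        } tile;

        typedef struct {
            tile  base;           /* always present, no extra allocation */
            tile *extra;          /* NULL unless the cell holds more     */
            int   extra_count;
        } cell;

        /* The map itself is then a flat cell[width * height] array. */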

    Read the article

  • How to access a matrix in a matlab struct's field from a mex function?

    - by B. Ruschill
    I'm trying to figure out how to access a matrix that is stored in a field in a matlab structure from a mex function. That's awfully long winded... Let me explain: I have a matlab struct that was defined like the following:

        matrixStruct = struct('matrix', {4, 4, 4; 5, 5, 5; 6, 6, 6})

    I have a mex function in which I would like to be able to receive a pointer to the first element in the matrix (matrix[0][0], in c terms), but I've been unable to figure out how to do that. I have tried the following:

        /* Pointer to the first element in the matrix (supposedly)... */
        double *ptr = mxGetPr(mxGetField(prhs[0], 0, "matrix"));

        /* Incrementing the pointer to access all values in the matrix */
        for(i = 0; i < 3; i++){
            printf("%f\n", *(ptr + (i * 3)));
            printf("%f\n", *(ptr + 1 + (i * 3)));
            printf("%f\n", *(ptr + 2 + (i * 3)));
        }

    What this ends up printing is the following:

        4.000000
        0.000000
        0.000000
        0.000000
        0.000000
        0.000000
        0.000000
        0.000000
        0.000000

    I have also tried variations of the following, thinking that perhaps it was something wonky with nested function calls, but to no avail:

        /* Pointer to the first location of the mxArray */
        mxArray *fieldValuePtr = mxGetField(prhs[0], 0, "matrix");

        /* Get the double pointer to the first location in the matrix */
        double *ptr = mxGetPr(fieldValuePtr);

        /* Same for loop code here as written above */

    Does anyone have an idea as to how I can achieve what I'm trying to, or what I am potentially doing wrong? Thanks!

    Edit: As per yuk's comment, I tried doing similar operations on a struct that has a field called array which is a one-dimensional array of doubles. The struct containing the array is defined as follows:

        arrayStruct = struct('array', {4.44, 5.55, 6.66})

    I tried the following on the arrayStruct from within the mex function:

        mptr = mxGetPr(mxGetField(prhs[0], 0, "array"));
        printf("%f\n", *(mptr));
        printf("%f\n", *(mptr + 1));
        printf("%f\n", *(mptr + 2));

    ...but the output followed what was printed earlier:

        4.440000
        0.000000
        0.000000
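
    For reference, a sketch of reading the field with the element counts taken from the mxArray itself rather than hard-coded; MATLAB stores numeric data column-major, so element (row, col) of an M-by-N matrix sits at ptr[col*M + row]. Note also that struct('matrix', {4, 4, 4; 5, 5, 5; 6, 6, 6}) builds a 3-by-3 struct array whose elements each hold a scalar, which may be why only the first value comes back as expected; struct('matrix', [4 4 4; 5 5 5; 6 6 6]) keeps the whole matrix in one field.

        #include "mex.h"

        /* Sketch: walk the numeric matrix stored in struct field "matrix".
         * MATLAB numeric arrays are column-major. */
        void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
        {
            const mxArray *field = mxGetField(prhs[0], 0, "matrix");
            mwSize m, n, row, col;
            double *ptr;

            if (field == NULL)
                mexErrMsgTxt("field 'matrix' not found");

            ptr = mxGetPr(field);
            m = mxGetM(field);    /* rows    */
            n = mxGetN(field);    /* columns */

            for (row = 0; row < m; row++)
                for (col = 0; col < n; col++)
                    mexPrintf("%f\n", ptr[col * m + row]);
        }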

    Read the article

  • Setting up a "to-many" relationship value dependency for a transient Core Data attribute

    - by Greg Combs
    I've got a relatively complicated Core Data relationship structure and I'm trying to figure out how to set up value dependencies (or observations) across various to-many relationships. Let me start out with some basic info. I've got a classroom with students, assignments, and grades (students X assignments). For simplicity's sake, we don't really have to focus much on the assignments yet. StudentObj <--->> ScoreObj <<---> AssignmentObj Each ScoreObj has a to-one relation with the StudentObj and the AssignmentObj. ScoreObj has real attributes for the numerical grade, the turnInDate, and notes. AssignmentObj.scores is the set of Score objects for that assignment (N = all students). AssignmentObj has real attributes for name, dueDate, curveFunction, gradeWeight, and maxPoints. StudentObj.scores is the set of Score objects for that student (N = all assignments). StudentObj also has real attributes like name, studentID, email, etc. StudentObj has a transient (calculated, not stored) attribute called gradeTotal. This last item, gradeTotal, is the real pickle. it calculates the student's overall semester grade using the scores (ScoreObj) from all their assignments, their associated assignment gradeWeights, curves, and maxPoints, and various other things. This gradeTotal value is displayed in a table column, along with all the students and their individual assignment grades. Determining the value of gradeTotal is a relatively expensive operation, particularly with a large class, therefore I want to run it only when necessary. For simplicity's sake, I'm not storing that gradeTotal value in the core data model. I don't mind caching it somewhere, but I'm having a bitch of a time determining where and how to best update that cache. I need to run that calculation for each student whenever any value changes that affects their gradeTotal. If this were a simple to-one relationship, I know I could use something like keyPathsForValuesAffectingGradeTotal ... but it's more like a many-to-one-to-many relationship. Does anyone know of an elegant (and KVC correct) solution? I guess I could tear through all those score and assignment objects and tell them to register their students as observers. But this seems like a blunt force approach.

    Read the article

  • How to change quicksort to output elements in descending order?

    - by masato-san
    Hi, I wrote a quicksort algorithm however, I would like to make a change somewhere so that this quicksort would output elements in descending order. I searched and found that I can change the comparison operator (<) in partition() to other way around (like below).

        //This is snippet from partition() function
        while($array[$l] < $pivot) {
            $l++;
        }
        while($array[$r] > $pivot) {
            $r--;
        }

    But it is not working.. If I quicksort the array below,

        $array = (3,9,5,7);

    should be:

        $array = (9,7,5,3)

    But actual output is:

        $array = (3,5,7,9)

    Below is my quicksort which trying to output elements in descending order. How should I make change to sort in descending order? If you need any clarification please let me know. Thanks!

        $array = (3,9,5,7);
        $app = new QuicksortDescending();
        $app->quicksort($array, 0, count($array));
        print_r($array);

        class QuicksortDescending {
            public function partitionDesc(&$array, $left, $right) {
                $l = $left;
                $r = $right;
                $pivot = $array[($right+$left)/2];

                while($l <= $r) {
                    while($array[$l] > $pivot) {
                        $l++;
                    }
                    while($array[$r] < $pivot) {
                        $r--;
                    }
                    if($l <= $r) { // if L and R haven't cross
                        $this->swap($array, $l, $r);
                        $l ++;
                        $j --;
                    }
                }
                return $l;
            }

            public function quicksortDesc(&$array, $left, $right) {
                $index = $this->partition($array, $left, $right);
                if($left < $index-1) { //if there is more than 1 element to sort on right subarray
                    $this->quicksortDesc($array, $left, $index-1);
                }
                if($index < $right) { //if there is more than 1 element to sort on right subarray
                    $this->quicksortDesc($array, $index, $right);
                }
            }
        }
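
    For what it's worth, here is a minimal descending partition sketched in C: the only change from the ascending version is the direction of the two comparisons against the pivot, with the recursion left untouched. (In the posted class, $j-- looks like it is meant to be $r--, and the calls go through quicksort()/partition() rather than quicksortDesc()/partitionDesc(); those look like transcription slips but are worth checking.)

        #include <stdio.h>

        /* Sketch of a descending Hoare-style quicksort. */
        static void quicksort_desc(int *a, int left, int right)
        {
            int l = left, r = right;
            int pivot = a[(left + right) / 2];

            while (l <= r) {
                while (a[l] > pivot) l++;   /* '>' instead of '<' */
                while (a[r] < pivot) r--;   /* '<' instead of '>' */
                if (l <= r) {
                    int tmp = a[l]; a[l] = a[r]; a[r] = tmp;
                    l++;
                    r--;
                }
            }
            if (left < r)  quicksort_desc(a, left, r);
            if (l < right) quicksort_desc(a, l, right);
        }

        int main(void)
        {
            int a[] = {3, 9, 5, 7};
            quicksort_desc(a, 0, 3);
            for (int i = 0; i < 4; i++)
                printf("%d ", a[i]);        /* prints: 9 7 5 3 */
            printf("\n");
            return 0;
        }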

    Read the article

  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is an "can it be improved"-question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 01010101 01010101 01010101 01010101 Data (user defined) Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 01010101 01010101 01010101 01010101 How can I improve this? (Both performance- and memory-consumption wise)

    Read the article

  • Using fft2 with reshaping for an RGB filter

    - by Mahmoud Aladdin
    I want to apply a filter on an image, for example, blurring filter [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]. Also, I'd like to use the approach that convolution in Spatial domain is equivalent to multiplication in Frequency domain. So, my algorithm will be like. Load Image. Create Filter. convert both Filter & Image to Frequency domains. multiply both. reconvert the output to Spatial Domain and that should be the required output. The following is the basic code I use, the image is loaded and displayed as cv.cvmat object. Image is a class of my creation, it has a member image which is an object of scipy.matrix and toFrequencyDomain(size = None) uses spf.fftshift(spf.fft2(self.image, size)) where spf is scipy.fftpack and dotMultiply(img) uses scipy.multiply(self.image, image) f = Image.fromMatrix([[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]) lena = Image.fromFile("Test/images/lena.jpg") print lena.image.shape lenaf = lena.toFrequencyDomain(lena.image.shape) ff = f.toFrequencyDomain(lena.image.shape) lenafm = lenaf.dotMultiplyImage(ff) lenaff = lenafm.toTimeDomain() lena.display() lenaff.display() So, the previous code works pretty well, if I told OpenCV to load the image via GRAY_SCALE. However, if I let the image to be loaded in color ... lena.image.shape will be (512, 512, 3) .. so, it gives me an error when using scipy.fttpack.ftt2 saying "When given, Shape and Axes should be of same length". What I tried next was converted my filter to 3-D .. as [[[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]] And, not knowing what the axes argument do, I added it with random numbers as (-2, -1, -1), (-1, -1, -2), .. etc. until it gave me the correct filter output shape for the dotMultiply to work. But, of course it wasn't the correct value. Things were totally worse. My final trial, was using fft2 function on each of the components 2-D matrices, and then re-making the 3-D one, using the following code. # Spiltting the 3-D matrix to three 2-D matrices. for i, row in enumerate(self.image): r.append(list()) g.append(list()) b.append(list()) for pixel in row: r[i].append(pixel[0]) g[i].append(pixel[1]) b[i].append(pixel[2]) rfft = spf.fftshift(spf.fft2(r, size)) gfft = spf.fftshift(spf.fft2(g, size)) bfft = spf.fftshift(spf.fft2(b, size)) newImage.image = sp.asarray([[[rfft[i][j], gfft[i][j], bfft[i][j]] for j in xrange(len(rfft[i]))] for i in xrange(len(rfft))] ) return newImage Any help on what I made wrong, or how can I achieve that for both GreyScale and Coloured pictures.

    Read the article

  • Error implementing Dynamic Tab with ListView within each Tab - Tabs don't show up

    - by Jon
    I am trying to create a dyanmic tab application that has tabs on the left for each type of Food (e.g. Pasta, Dairy, etc. including those dynamically defined by user) and for each tab, have a listview connected to a SQLite database that updates. This is in the onCreate method: setContentView(R.layout.foodCabinet_layout); dbHelper = new FoodStorageDBHelper(this); dbHelper.open(); Cursor foodTypes = dbHelper.getAllFoodTypes(); ArrayList<String> typeOfFood = new ArrayList<String>(); typeOfFoods.add(foodTypes.getString(1)); while(foodTypes.moveToNext()){ typeOfFoods.add(foodTypes.getString(1)); } final TabHost tbh = (TabHost)findViewById(R.id.food_tabhost_cabinet); tbh.setup(); for(final String s : typeOfFood){ TabSpec nts = tbh.newTabSpec(s); nts.setIndicator(s.replace('_', ' ')); nts.setContent(new TabHost.TabContentFactory(){ public View createTabContent(String tag) { ListView foodCabinetLV = new ListView(FoodCabinet.this); Cursor mCursor = dbHelper.getAllFoodsWithType(s); // Create an array to specify the fields we want to display in the list (only TITLE) String[] from = new String[]{FoodStorageDBHelper.KEY_NAME, FoodStorageDBHelper.KEY_YEAR,FoodStorageDBHelper.KEY_RANK}; // and an array of the fields we want to bind those fields to (in this case just text1) int[] to = new int[]{R.id.childname,R.id.childyear,R.id.childrank}; // Now create a simple cursor adapter and set it to display SimpleCursorAdapter notes = new SimpleCursorAdapter(FoodCabinet.this, R.layout.food_row, mCursor, from, to); foodCabinetLV.setAdapter(notes); startManagingCursor(mCursor); foodCabinetLV.setOnItemClickListener( new AdapterView.OnItemClickListener() { public void onItemClick(AdapterView arg0, View arg1, int arg2, long arg3) { Intent i = new Intent(); i.setClass(arg0.getContext(), EditFoodDialog.class); i.putExtra(FoodStorageDBHelper.KEY_ROWID, Long.toString(arg3)); startActivityForResult(i, LoadFoodOrganizer.ACTIVITY_EDIT); } }); return foodCabinetLV; } }); } For some reason, it doesn't show anything... it's a blank screen. Any help would be very much appreciated. Let me know! Thanks! Jon

    Read the article

  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then substracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite compiling having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, but also miss macro definitions. cannot find definitions: Pretty self-explanatory - often times, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggest, I would be also happy to know how to set-up Eclipse to do what I need While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are they're gotchas that I'm missing (e.g. INSERTS must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
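
    To put rough numbers on the pre-buffering idea, a sketch of sizing the playback buffer from a measured worst-case latency (the 400 ms figure is purely an assumed measurement): at 44.1 kHz, 16-bit stereo, the raw stream works out to roughly 172 KiB/s, which is the baseline any compression would be reducing.

        #include <stdio.h>

        /* Sketch: size the playback pre-buffer from a measured worst-case
         * latency, for raw 16-bit stereo at 44.1 kHz. */
        int main(void)
        {
            const int sample_rate     = 44100;   /* Hz                  */
            const int bytes_per_frame = 2 * 2;   /* 16-bit stereo       */
            const int max_latency_ms  = 400;     /* assumed measurement */

            long frames = (long)sample_rate * max_latency_ms / 1000;
            long bytes  = frames * bytes_per_frame;

            printf("pre-buffer %ld frames (%ld bytes); raw stream is %.1f KiB/s\n",
                   frames, bytes, sample_rate * bytes_per_frame / 1024.0);
            return 0;
        }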

    Read the article

  • C strange array behaviour

    - by LukeN
    After learning both that strncmp is not what it seems to be and that strlcpy is not available on my operating system (Linux), I figured I could try and write it myself. I found a quote from Ulrich Drepper, the libc maintainer, who posted an alternative to strlcpy using mempcpy. I don't have mempcpy either, but its behaviour was easy to replicate. First off, this is the testcase I have:

        #include <stdio.h>
        #include <string.h>

        #define BSIZE 10

        void insp(const char* s, int n)
        {
            int i;
            for (i = 0; i < n; i++) printf("%c ", s[i]);
            printf("\n");
            for (i = 0; i < n; i++) printf("%02X ", s[i]);
            printf("\n");
            return;
        }

        int copy_string(char *dest, const char *src, int n)
        {
            int r = strlen(memcpy(dest, src, n-1));
            dest[r] = 0;
            return r;
        }

        int main()
        {
            char b[BSIZE];
            memset(b, 0, BSIZE);

            printf("Buffer size is %d", BSIZE);
            insp(b, BSIZE);

            printf("\nFirst copy:\n");
            copy_string(b, "First", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            printf("\nSecond copy:\n");
            copy_string(b, "Second", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            return 0;
        }

    And this is its result:

        Buffer size is 10
        00 00 00 00 00 00 00 00 00 00

        First copy:
        F i r s t   b   =
        46 69 72 73 74 00 62 20 3D 00
        b = 'First'

        Second copy:
        S e c o n d
        53 65 63 6F 6E 64 00 00 01 00
        b = 'Second'

    You can see in the internal representation (the lines insp() created) that there's some noise mixed in, like the printf() format string in the inspection after the first copy, and a foreign 0x01 in the second copy. The strings are copied intact and it correctly handles too-long source strings (let's ignore the possible issue with passing 0 as length to copy_string for now, I'll fix that later). But why are there foreign array contents (from the format string) inside my destination? It's as if the destination was actually RESIZED to match the new length.
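
    The stray bytes are consistent with memcpy(dest, src, n-1) always copying n-1 = 9 bytes: for the 6-byte literal "First" that reads 3 bytes past the end of the literal, so whatever sits next in the binary (here, part of a format string) gets copied along. A bounded copy in the spirit of strlcpy might look like the sketch below: it copies at most n-1 characters, always NUL-terminates for n > 0, and returns the source length so truncation can be detected.

        #include <string.h>

        /* Sketch of a strlcpy-style bounded copy. */
        static size_t copy_string(char *dest, const char *src, size_t n)
        {
            size_t srclen = strlen(src);

            if (n > 0) {
                size_t copylen = (srclen < n - 1) ? srclen : n - 1;
                memcpy(dest, src, copylen);
                dest[copylen] = '\0';
            }
            return srclen;   /* a return value >= n means the copy was truncated */
        }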

    Read the article

  • xml append issue in ie,chrome

    - by 3gwebtrain
    Hi, I am creating a html page, using xml data. in which i am using the following function. It works fine with firefox,opera,safari. but in case of ie7,ie8, and chrome the data what i am getting from xml, is not appending properly. any one help me to solve this issue? in case any special thing need to concentrate on append funcation as well let me know.. $(function(){ var thisPage; var parentPage; $('ul.left-navi li a').each(function(){ $('ul.left-navi li a').removeClass('current'); var pathname = (window.location.pathname.match(/[^\/]+$/)[0]); var currentPage = $(this).attr('href'); var pathArr = new Array(); pathArr = pathname.split("."); var file = pathArr[pathArr.length - 2]; thisPage = file; if(currentPage==pathname){ $(this).addClass("active"); } }) $.get('career-utility.xml',function(myData){ var myXml = $(myData).find(thisPage); parentPage = thisPage; var overviewTitle = myXml.find('overview').attr('title'); var description = myXml.find('discription').text(); var mainsublinkTitle = myXml.find('mainsublink').attr('title'); var thisTitle = myXml.find("intro").attr('title'); var thisIntro = myXml.find("introinfo").text(); $('<h3>'+overviewTitle+'</h3>').appendTo('.overViewInfo'); $('<p>'+description+'</p>').appendTo('.overViewInfo'); var sublinks = myXml.find('mainsublink').children('sublink'); $('#intro h3').append(thisTitle); $('#intro').append(thisIntro); sublinks.each(function(numsub){ var newSubLink = $(this); var sublinkPage = $(this).attr('pageto'); var linkInfo = $(this).text(); $('ul.career-link').append('<li><a href="'+sublinkPage+'">'+linkInfo+'</a></li>'); }) $(myXml).find('listgroup').each(function(index){ var count = index; var listGroup = $(this); var listGroupTitle = $(this).attr('title'); var shortNote = $(this).attr('shortnote'); var subLink = $(this).find('sublist'); var firstList = $(this).find('list'); $('.grouplist').append('<div class="list-group"><h3>'+listGroupTitle+'</h3><ul class="level-one level' + count + '"></ul></div>'); firstList.each(function(listnum) { $(this).wrapInner('<li>') .find('sublistgroup').wrapInner('<ul>').children().unwrap() .find('sublist').wrapInner('<li>').children().unwrap(); $('ul.level'+count).append($(this).children()); }); }); }); }) Thanks for advance..

    Read the article

  • How can I prevent segmentation faults in my program?

    - by worlds-apart89
    I have a C assignment. It is a lot longer than the code shown below, and we are given the function prototypes and instructions only. I have done my best at writing code, but I am stuck with segmentation faults. When I compile and run the program below on Linux, at "735 NaN" it will terminate, indicating a segfault occurred. Why? What am I doing wrong? Basically, the program does not let me access table->list_array[735]->value and table->list_array[735]->key. This is of course the first segfault. There might be more following index 735.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;
        struct list_node {
            char *key;
            int value;
            list_node_t *next;
        };

        typedef struct count_table count_table_t;
        struct count_table {
            int size;
            list_node_t **list_array;
        };

        count_table_t* table_allocate(int size)
        {
            count_table_t *ptr = malloc(sizeof(count_table_t));
            ptr->size = size;

            list_node_t *nodes[size];
            int k;
            for(k=0; k<size; k++){
                nodes[k] = NULL;
            }
            ptr->list_array = nodes;
            return ptr;
        }

        void table_addvalue(count_table_t *table)
        {
            int i;
            for(i=0; i<table->size; i++) {
                table->list_array[i] = malloc(sizeof(list_node_t));
                table->list_array[i]->value = i;
                table->list_array[i]->key = "NaN";
                table->list_array[i]->next = NULL;
            }
        }

        int main()
        {
            count_table_t *table = table_allocate(1000);
            table_addvalue(table);

            int i;
            for(i=0; i<table->size; i++)
                printf("%d %s\n", table->list_array[i]->value, table->list_array[i]->key);

            return 0;
        }
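
    The likely culprit is that table_allocate() points list_array at nodes, a local array that ceases to exist as soon as the function returns, so every later access walks reused stack memory and eventually faults. A heap-backed sketch of the allocator (error handling added for illustration):

        #include <stdlib.h>

        /* Sketch: allocate the bucket array on the heap so it outlives
         * table_allocate(). */
        count_table_t *table_allocate(int size)
        {
            count_table_t *ptr = malloc(sizeof(count_table_t));
            if (ptr == NULL)
                return NULL;

            ptr->size = size;
            ptr->list_array = calloc((size_t)size, sizeof(list_node_t *));
            if (ptr->list_array == NULL) {
                free(ptr);
                return NULL;
            }
            return ptr;
        }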

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I got the bulk of it. I'm using google app engine for the backend which lets me use java servlets. Here is some pseudo-code to demo: public class MyServlet { public void handleRequest() { if (getThreadLocalRequest().getSession().getAttribute("user") != null) { // User already has session running for them. } else { // No session, but check if they chose 'remember me' during // their initial login, if so we can have them 'auto log in' // now. Cookie[] cookies = getThreadLocalRequest().getCookies(); if (cookies.find("rememberMePlz").exists()) { // The value of this cookie is the cookie id, which is a // unique string that is in no way based upon the user's // name/email/id, and is hard to randomly generate. String cookieid = cookies.find("rememberMePlz").value(); // Get the user object associated with this cookie id from // the data store, would probably be a two-step process like: // // select * from cookies where cookieid = 'cookieid'; // select * from users where userid = 'userid fetched from above select'; User user = DataStore.getUserByCookieId(cookieid); if (user != null) { // Start session for them. getThreadLocalRequest().getSession() .setAttribute("user", user); } else { // Either couldn't find a matching cookie with the // supplied id, or maybe we expired the cookie on // our side or blocked it. } } } } } // On first login, if user wanted us to remember them, we'd generate // an instance of this object for them in the data store. We send the // cookieid value down to the client and they persist it on their side // in the "rememberMePlz" cookie. public class CookieLong { private String mCookieId; private String mUserId; private long mExpirationDate; } Alright, this all makes sense. The only frightening thing is what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser and access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie - they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right? Scary stuff, I feel like I should at least include the username in the client cookie such that when it presents itself to the server, I won't auto-login unless the username+cookieid match in the DataStore. Any comments would be great, I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same, Thanks

    Read the article

  • JSF 2.0: Preserving component state across multiple views

    - by tlind
    The web application I am developing using MyFaces 2.0.3 / PrimeFaces 2.2RC2 is divided into a content and a navigation area. In the navigation area, which is included into multiple pages using templating (i.e. <ui:define>), there are some widgets (e.g. a navigation tree, collapsible panels etc.) of which I want to preserve the component state across views. For example, let's say I am on the home page. When I navigate to a product details page by clicking on a product in the navigation tree, my Java code triggers a redirect using navigationHandler.handleNavigation(context, null, "/detailspage.jsf?faces-redirect=true") Another way of getting to that details page would be by directly clicking on a product teaser that is shown on the home page. The corresponding <h:link> would lead us to the details page. In both cases, the expansion state of my navigation tree (a PrimeFaces tree component) and my collapsible panels is lost. I understand this is because the redirect / h:link results in the creation of a new view. What is the best way of dealing with this? I am already using MyFaces Orchestra in my project along with its conversation scope, but I am not sure if this is of any help here (since I'd have to bind the expansion/collapsed state of the widgets to a backing bean... but as far as I know, this is not possible). Is there a way of telling JSF which component states to propagate to the next view, assuming that the same component exists in that view? I guess I could need a pointer into the right direction here. Thanks! Update 1: I just tried binding the panels and the tree to a session-scoped bean, but this seems to have no effect. Also, I guess I would have to bind all child components (if any) manually, so this doesn't seem like the way to go. Update 2: Binding UI components to non-request scoped beans is not a good idea (see link I posted in a comment below). If there is no easier approach, I might have to proceed as follows: When a panel is collapsed or the tree is expanded, save the current state in a session-scoped backing bean (!= the UI component itself) The components' states are stored in a map. The map key is the component's (hopefully) unique, relative ID. I cannot use the whole absolute component path here, since the IDs of the parent naming containers might change if the view changes, assuming these IDs are generated programmatically. As soon as a new view gets constructed, retrieve the components' states from the map and apply them to the components. For example, in case of the panels, I can set the collapsed attribute to a value retrieved from my session-scoped backing bean.

    Read the article

  • Opinions on collision detection objects with a moving scene

    - by Evan Teran
    So my question is simple, and I guess it boils down to how anal you want to be about collision detection. To keep things simple, lets assume we're talking about 2D sprites defined by a bounding box. In addition, let's assume that my sprite object has a function to detect collisions like this: S.collidesWith(other); Finally the scene is moving and "walls" in the scene can move, an object may not touch a wall. So a simple implementation might look like this (psuedo code): moveWalls(); moveSprite(); foreach(wall as w) { if(s.collidesWith(w)) { gameover(); } } The problem with this is that if the sprite and wall move towards each other, depending on the circumstances (such as diagonal moment). They may pass though each other (unlikely but could happen). So I may do this instead. moveWalls(); foreach(wall as w) { if(s.collidesWith(w)) { gameover(); } } moveSprite(); foreach(wall as w) { if(s.collidesWith(w)) { gameover(); } } This takes care of the passing through each other issue, but another rare issue comes up. If they are adjacent to each other (literally the next pixel) and both the wall and the sprite are moving left, then I will get an invalid collision since the wall moves, checks for collision (hit) then the sprite is moved. Which seems unfair. In addition, to that, the redundant collision detection feels very inefficient. I could give the player movement priority alleviating the first issue but it is still checking twice. moveSprite(); foreach(wall as w) { if(s.collidesWith(w)) { gameover(); } } moveWalls(); foreach(wall as w) { if(s.collidesWith(w)) { gameover(); } } Am I simply over thinking this issue, should this just be chalked up to "it'll happen rare enough that no one will care"? Certainly looking at old sprite based games, I often find situations where the collision detection has subtle flaws, but I figure by now we can do better :-P. What are people's thoughts?
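
    One way to avoid both the double pass and the ordering bias is to test the boxes swept over the whole frame: enlarge each box so it covers its start and end positions for this frame's movement, then intersect the enlarged boxes. It is conservative (a fast diagonal pass can register as a hit when the true paths just miss), but it catches the pass-through case in a single check. A sketch with axis-aligned boxes; the field names are illustrative.

        #include <stdbool.h>

        /* Sketch: axis-aligned box plus its velocity for the current frame. */
        typedef struct { float x, y, w, h, vx, vy; } box;

        static bool overlap(box a, box b)
        {
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }

        /* Box covering the start and end positions of this frame's move. */
        static box sweep(box b)
        {
            box s = b;
            if (b.vx < 0) s.x += b.vx;
            s.w += (b.vx < 0) ? -b.vx : b.vx;
            if (b.vy < 0) s.y += b.vy;
            s.h += (b.vy < 0) ? -b.vy : b.vy;
            return s;
        }

        static bool collides_this_frame(box sprite, box wall)
        {
            return overlap(sweep(sprite), sweep(wall));
        }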

    Read the article

  • Just a small problem regarding javascript BOM question

    - by caramel1991
    The question is this: Create a page with a number of links. Then write code that fires on the window onload event, displaying the href of each of the links on the page. And this is my solution <html> <body language="Javascript" onload="displayLink()"> <a href="http://www.google.com/">First link</a> <a href="http://www.yahoo.com/">Second link</a> <a href="http://www.msn.com/">Third link</a> <script type="text/javascript" language="Javascript"> function displayLink() { for(var i = 0;document.links[i];i++) { alert(document.links[i].href); } } </script> </body> </html> This is the answer provided by the book <html> <head> <script language=”JavaScript” type=”text/javascript”> function displayLinks() { var linksCounter; for (linksCounter = 0; linksCounter < document.links.length; linksCounter++) { alert(document.links[linksCounter].href); } } </script> </head> <body onload=”displayLinks()”> <A href=”link0.htm” >Link 0</A> <A href=”link1.htm”>Link 2</A> <A href=”link2.htm”>Link 2</A> </body> </html> Before I get into the javascript tutorial on how to check user browser version or model,I was using the same method as the example,by acessing the length property of the links array for the loop,but after I read through the tutorial,I find out that I can also use this alternative ways,by using the method that the test condition will evalute to true only if the document.links[i] return a valid value,so does my code is written using the valid method??If it's not,any comment regarding how to write a better code??Correct me if I'm wrong,I heard some of the people say "a good code is not evaluate solely on whether it works or not,but in terms of speed,the ability to comprehend the code,and could posssibly let others to understand the code easily".Is is true??

    Read the article

  • What regular expression(s) would I use to remove escaped html from large sets of data?

    - by Elizabeth Buckwalter
    Our database is filled with articles retrieved from RSS feeds. I was unsure of what data I would be getting, and how much filtering was already setup (WP-O-Matic Wordpress plugin using the SimplePie library). This plugin does some basic encoding before insertion using Wordpress's built in post insert function which also does some filtering. I've figured out most of the filters before insertion, but now I have whacko data that I need to remove. This is an example of whacko data that I have data in one field which the content I want in the front, but this part removed which is at the end: <img src="http://feeds.feedburner.com/~ff/SoundOnTheSound?i=xFxEpT2Add0:xFbIkwGc-fk:V_sGLiPBpWU" border="0"></img> <img src="http://feeds.feedburner.com/~ff/SoundOnTheSound?d=qj6IDK7rITs" border="0"></img> &lt;img src=&quot;http://feeds.feedburner.com/~ff/SoundOnTheSound?i=xFxEpT2Add0:xFbIkwGc-fk:D7DqB2pKExk&quot; Notice how some of the images are escape and some aren't. I believe this has to do with the last part being cut off so as to be unrecognizable as an html tag, which then caused it to be html endcoded. Another field has only this which is now filtered before insertion, but I have to get rid of the others: &lt;img src=&quot;http://farm3.static.flickr.com/2183/2289902369_1d95bcdb85.jpg&quot; alt=&quot;post_img&quot; width=&quot;80&quot; (all examples are on one line, but broken up for readability) Question: What is the best way to work with the above escaped html (or portion of an html tag)? I can do it in Perl, PHP, SQL, Ruby, and even Python. I believe Perl to be the best at text parsing, so that's why I used the Perl tag. And PHP times out on large database operations, so that's pretty much out unless I wanted to do batch processing and what not. PS One of the nice things about using Wordpress's insert post function, is that if you use php's strip_tags function to strip out all html, insert post function will insert <p> at the paragraph points. Let me know if there's anything more that I can answer. Some article that didn't quite answer my questions. (http://stackoverflow.com/questions/2016751/remove-text-from-within-a-database-text-field) (http://stackoverflow.com/questions/462831/regular-expression-to-escape-html-ampersands-while-respecting-cdata)

    Read the article

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "Revision control" (e.g: "Source control"), and I am curious to know: How do you combine "Revision control" with your statistical analysis WorkFlow? Two (very) interesting discussions talk about how to deal with the WorkFlow. But neither of them refer to the revision control element: http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing A Long Update To The Question: Following some of the people's answers, and Dirk's question in the comment, I would like to direct my question a bit more. After reading the Wiki article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is to build a development structure of his code. This structure either leads to a "final product" or to several branches. When building something like, let's say, a website. There is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (to my view) is different. Sometimes you know where you want to get to. But more often, you explore. Explore cleaning the dataset. Explore different methods for statistical analysis, and ask various questions of your data (and I am writing this, knowing how Frank Harrell, and other experience statisticians feels about Data dredging). That is way the WorkFlow question with statistical programming is (in my view) a serious and deep question, raising many issues, The simpler ones are technical: Which revision control software do you use (and why) ? Which IDE do you use(and why) ? The more interesting question are about work process: How do you structure your files? What do you keep as a separate file and what as a revision? or asking in a different way - What should be a "branch" and what should be a "sub project" in your code? For example: When starting to explore your data, should a plot be creating and then erased because it didn't lead any where (but kept as a revision) or should there be a backup file of that path? How you solve this tension was my initial curiosity. The second question is "what might I be missing?". What rules (of thumb) should one follow so to avoid common pitfalls doing statistical programming with version control? In my intuition, I feel that statistical programming is inherently different then software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's way I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal

    Read the article

  • Stuck in Infinite Loop while PostInvalidating

    - by Nicholas Roge
    I'm trying to test something, however, the loop I'm using keeps getting stuck while running. It's just a basic lock thread while doing something else before continuing kind of loop. I've double checked that I'm locking AND unlocking the variable I'm using, but regardless it's still stuck in the loop. Here are the segments of code I have that cause the problem: ActualGame.java: Thread thread=new Thread("Dialogue Thread"){ @Override public void run(){ Timer fireTimer=new Timer(); int arrowSequence=0; gameHandler.setOnTouchListener( new OnTouchListener(){ @Override public boolean onTouch(View v, MotionEvent me) { //Do something. if(!gameHandler.fireTimer.getActive()){ exitLoop=true; } return false; } } ); while(!exitLoop){ while(fireTimer.getActive()||!gameHandler.drawn); c.drawBitmap(SpriteSheet.createSingleBitmap(getResources(), R.drawable.dialogue_box,240,48),-48,0,null); c.drawBitmap(SpriteSheet.createSingleBitmap(getResources(),R.drawable.dialogue_continuearrow,32,16,8,16,arrowSequence,0),-16,8,null); gameHandler.drawn=false; gameHandler.postInvalidate(); if(arrowSequence+1==4){ arrowSequence=0; exitLoop=true; }else{ arrowSequence++; } fireTimer.startWait(100); } gameHandler.setOnTouchListener(gameHandler.defaultOnTouchListener); } }; thread.run(); And the onDraw method of GameHandler: canvas.scale(scale,scale); canvas.translate(((screenWidth/2)-((terrainWidth*scale)/2))/scale,((screenHeight/2)-((terrainHeight*scale)/2))/scale); canvas.drawColor(Color.BLACK); for(int layer=0;layer(less than)tiles.length;layer++){ if(layer==playerLayer){ canvas.drawBitmap(playerSprite.getCurrentSprite(), playerSprite.getPixelLocationX(), playerSprite.getPixelLocationY(), null); continue; } for(int y=0;y(less than)tiles[layer].length;y++){ for(int x=0;x(less than)tiles[layer][y].length;x++){ if(layer==0&&tiles[layer][y][x]==null){ tiles[layer][y][x]=nullTile; } if(tiles[layer][y][x]!=null){ runningFromTileEvent=false; canvas.drawBitmap(tiles[layer][y][x].associatedSprite.getCurrentSprite(),x*tiles[layer][y][x].associatedSprite.spriteWidth,y*tiles[layer][y][x].associatedSprite.spriteHeight,null); } } } } for(int i=0;i(less than)canvasEvents.size();i++){ if(canvasEvents.elementAt(i).condition(this)){ canvasEvents.elementAt(i).run(canvas,this); } } Log.e("JapaneseTutor","Got here.[1]"); drawn=true; Log.e("JapaneseTutor","Got here.[2]"); If you need to see the Timer class, or the full length of the GameHandler or ActualGame classes, just let me know.

    Read the article
