Search Results

Search found 8219 results on 329 pages for 'less'.

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area that shows the time. Also, the digit color (white/black) keeps changing over the duration of the video. http://i55.tinypic.com/2j5gca8.png Please point me in the right direction for approaching this problem. I am a Java programmer, so I would prefer an approach through Java. EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library, and its performance is far below our requirement. Since the OCR performance is less than desired, I was planning to build a character set using screen grabs of all the digits, then use some image/pixel comparison library to compare each frame's time region against the character set, giving a probabilistic result. So I am looking for a good image comparison library (I would be OK with a non-Java library that I can run from the command line). Any advice on the above approach would also be really helpful.
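
    A minimal Java sketch of that template-matching idea, assuming the digit region has already been cropped from a frame and saved next to ten same-sized template images (all file names here are illustrative):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class DigitMatcher {
            // Mean absolute pixel difference over one channel; lower = more similar.
            static double difference(BufferedImage a, BufferedImage b) {
                double sum = 0;
                for (int y = 0; y < a.getHeight(); y++)
                    for (int x = 0; x < a.getWidth(); x++)
                        sum += Math.abs((a.getRGB(x, y) & 0xFF) - (b.getRGB(x, y) & 0xFF));
                return sum / (a.getWidth() * a.getHeight());
            }

            public static void main(String[] args) throws Exception {
                BufferedImage frameDigit = ImageIO.read(new File("frame_digit.png"));
                int best = -1;
                double bestScore = Double.MAX_VALUE;
                for (int d = 0; d <= 9; d++) {
                    BufferedImage template = ImageIO.read(new File("digit_" + d + ".png"));
                    double score = difference(frameDigit, template);
                    if (score < bestScore) { bestScore = score; best = d; }
                }
                System.out.println("Best match: " + best + " (score " + bestScore + ")");
            }
        }

    Since the digits flip between white and black, comparing each template and its inverse (255 minus each channel value) and keeping the lower score should make the match color-independent.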

  • Creating a C++ client app for an abstract Windows server - how to manage TCP connection speed to the server?

    - by Kabumbus
    So we have a server with an address, port, and IP. We are developing that server, so we can implement whatever we need on it to help. What are the standard/best practices for managing data transfer speed between a C++ Windows client app and the (C++) server? My main point is how to find out how much data can be uploaded/downloaded from/to the client over his low-speed network to my relatively super-fast server (I need it to set the bit rate of his live audio/video stream). My attempt at explaining point 3: We do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can actually provide. We develop both the server side and the client side. We need a way of finding out how much the user can upload per second at the moment (so the value can change dynamically over time).
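
    One common approach, sketched below with POSIX sockets (the same logic applies with Winsock on Windows): periodically time a fixed-size probe upload on the existing TCP connection and feed the measured rate into the ffmpeg bit-rate setting. The probe size and function name are assumptions:

        #include <chrono>
        #include <string>
        #include <sys/socket.h>
        #include <sys/types.h>

        // Estimate upstream bandwidth by timing a fixed-size probe upload over an
        // already-connected TCP socket. The 256 KiB probe size is an assumption; a
        // real client would repeat this periodically and smooth the results.
        double estimateUploadKbps(int fd) {
            const size_t probeBytes = 256 * 1024;
            std::string probe(probeBytes, 'x');
            auto start = std::chrono::steady_clock::now();
            size_t sent = 0;
            while (sent < probeBytes) {
                ssize_t n = send(fd, probe.data() + sent, probeBytes - sent, 0);
                if (n <= 0) return -1.0;                    // connection problem
                sent += static_cast<size_t>(n);
            }
            // send() only means "reached the kernel buffer", so a real probe should
            // wait here for a short application-level ACK before stopping the clock.
            auto seconds = std::chrono::duration<double>(
                std::chrono::steady_clock::now() - start).count();
            return (sent * 8.0 / 1000.0) / seconds;         // kilobits per second
        }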

  • C: file manipulation - can't figure out how to simplify this code

    - by Bon_chan
    Hey guys, I have been working on this code but I can't find out what is wrong. The program compiles and runs, but it ends up with a fatal error. I have a file called myFile.txt with the following content:

        James------ 07.50
        Anthony--- 17.00

    And here is the code:

        int main()
        {
            int n = 2, valueTest = 0, count = 0;
            FILE* file = NULL;
            float temp = 00.00f, average = 00.00f, flTen = 10.00f;
            float *totalNote = (float*)malloc(n*sizeof(float));
            int position = 0;
            char selectionNote[5+1], nameBuffer[10+1], noteBuffer[5+1];

            file = fopen("c:\\myFile.txt","r");
            fseek(file,10,SEEK_SET);
            while(valueTest<2)
            {
                fscanf(file,"%5s",&selectionNote);
                temp = atof(selectionNote);
                totalNote[position] = temp;
                position++;
                valueTest++;
            }
            for(int counter=0;counter<2;counter++)
            {
                average += totalNote[counter];
            }
            printf("The total is : %f \n",average);
            rewind(file);
            printf("here is the one with less than 10.00 :\n");
            while(count<2)
            {
                fscanf(file,"%10s",&nameBuffer);
                fseek(file,10,SEEK_SET);
                fscanf(file,"%5s",&noteBuffer);
                temp = atof(noteBuffer);
                if(temp<flTen)
                {
                    printf("%s who has %f\n",nameBuffer,temp);
                }
                fseek(file,1,SEEK_SET);
                count++;
            }
            fclose(file);
        }

    I am pretty new to C and find it more difficult than C# or Java, and I would like some suggestions to help me get better. I think this code could be simpler. Do you think the same?
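
    On the "could this be simpler" question, a minimal sketch under the assumption that every line really is a name field followed by a grade: let fscanf split each line instead of seeking to fixed byte offsets, and accumulate while reading.

        #include <stdio.h>

        int main(void)
        {
            FILE *file = fopen("c:\\myFile.txt", "r");
            if (file == NULL) { perror("fopen"); return 1; }

            char name[32];
            float note, total = 0.0f;

            /* fscanf skips whitespace between fields, so no fseek is needed. */
            while (fscanf(file, "%31s %f", name, &note) == 2) {
                total += note;
                if (note < 10.0f)
                    printf("%s has less than 10.00: %.2f\n", name, note);
            }
            printf("The total is: %.2f\n", total);
            fclose(file);
            return 0;
        }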

  • Can someone tell me why I am seg faulting in this simple C program?

    - by user299648
    I keep getting a seg fault after I end my first for loop, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I'm only trying to scanf strings that are less than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main( int argc, char *argv[] )
        {
            char* string = malloc( 15*sizeof(char) );
            char** picks = malloc( 15*sizeof(char*) );
            FILE* pick_file = fopen( argv[1], "r" );
            int num_picks;

            for( num_picks=0 ; fgets( string, MAX_LENGTH, pick_file ) != NULL ; num_picks++ )
            {
                scanf( "%s", picks+num_picks );
            }

            //this is where i seg fault
            int x;
            for(x=0; x<num_picks; x++)
                printf("s\n", picks+x);
        }
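
    For reference, a sketch of a corrected reading loop. The likely crash causes: the 15-byte string buffer is read with a 100-byte limit, scanf writes into the picks pointer array itself (the individual picks[i] buffers are never allocated), and the final printf is missing its %:

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_LINES 18
        #define MAX_LENGTH 15

        int main(int argc, char *argv[])
        {
            FILE *pick_file = fopen(argv[1], "r");
            if (pick_file == NULL) { perror("fopen"); return 1; }

            char **picks = malloc(MAX_LINES * sizeof(char *));
            char line[MAX_LENGTH + 2];          /* room for '\n' and '\0' */
            int num_picks = 0;

            while (num_picks < MAX_LINES && fgets(line, sizeof line, pick_file)) {
                picks[num_picks] = malloc(MAX_LENGTH + 1);  /* allocate each row */
                sscanf(line, "%14s", picks[num_picks]);     /* strips the newline */
                num_picks++;
            }

            for (int x = 0; x < num_picks; x++)
                printf("%s\n", picks[x]);                   /* '%s', not 's' */

            fclose(pick_file);
            return 0;
        }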

  • Incorrect sizing of a JPanel in a JScrollPane in Java 1.5

    - by Coder
    Hi, I am making an image-loading component which consists of a JPanel containing a JScrollPane, which in turn contains another JPanel. This component allows images to be dropped on top of it, at which point the image is loaded and the innermost JPanel is set to the size of the dropped image. This in turn causes the scroll bars to show up, and the user can scroll the image. This all works fine. The problem comes in when I try to auto-shrink the image to the maximum visible area in the outer JPanel. In this case I do a uniform scale of the image to be less than or equal to the width and height of the outer JPanel. What happens now is that both the horizontal and vertical scroll bars show up, indicating that the inner JPanel is bigger than the visible area (which should not be the case). I verified that the image is scaled to the proper dimensions (i.e. the maximum width and height are respected). I also verified that if I decrease the maximum height by 3 pixels, then no scroll bars appear. What I believe the problem is, is that panel.getWidth() and panel.getHeight() don't actually return the visible (maximum) area that sub-components can take up; i.e. there is likely some extra width and height taken up by the border around the JPanel or something like that. My question is, how do I get around this problem? Functionally, all I want is to determine the maximum size a JPanel can be in a JScrollPane, then set the panel to that size, paint an image over top of it, and be assured that the scroll bars of the scroll pane will not show up. Right now the scroll bars are set to AS_NEEDED. Thanks!
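
    One thing worth trying (a sketch; scrollPane, innerPanel and image stand in for the question's own components): size the inner panel from the scroll pane's viewport rather than from the outer panel, since the viewport's extent size already excludes borders and insets.

        // The extent size is the area the viewport can actually display.
        java.awt.Dimension visible = scrollPane.getViewport().getExtentSize();
        double scale = Math.min(visible.getWidth()  / image.getWidth(null),
                                visible.getHeight() / image.getHeight(null));
        innerPanel.setPreferredSize(new java.awt.Dimension(
                (int) Math.floor(image.getWidth(null)  * scale),
                (int) Math.floor(image.getHeight(null) * scale)));
        scrollPane.revalidate();

    Math.floor keeps the scaled size at or under the extent, so AS_NEEDED scroll bars should stay hidden.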

  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working on Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements: this is a migration project, where data from one application is moved to another, and we need to compare the data from the existing application against the new one. Solution: I have developed a comparison engine in Ruby for the above requirement:

        a) Get the data, de-duplicated and sorted, from both DBs.
        b) Put the data in a text file with "||" as the delimiter.
        c) Use the key columns (numbers) that uniquely identify a record in the DB
           to compare the two files.

    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1-5 are key columns. I use these key columns and compare 6 and 7, which results in a fail. Issue: the major issue we are facing is that if the mismatch rate is more than 70% for 100,000 records or more, the comparison time is large; if the mismatch rate is less than 40%, the comparison time is OK. diff and diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time when the mismatch rate is more than 70% for 100,000 records or more? Thanks
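
    A sketch of one way to keep the comparison roughly O(n) regardless of the mismatch rate (the column positions and file names are assumptions): build a hash index of one file keyed on the key columns, then stream the other file against it, so each record costs one hash lookup instead of a scan.

        KEY_COLS  = 0..4          # columns 1-5 are the unique key
        DELIMITER = '||'

        def load_index(path)
          index = {}
          File.foreach(path) do |line|
            cols = line.chomp.split(DELIMITER)
            index[cols[KEY_COLS]] = cols[5..-1]   # key columns -> data columns
          end
          index
        end

        old_index  = load_index('file1.txt')
        mismatches = 0

        File.foreach('file2.txt') do |line|
          cols = line.chomp.split(DELIMITER)
          mismatches += 1 unless old_index[cols[KEY_COLS]] == cols[5..-1]
        end

        puts "#{mismatches} mismatching records"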

  • Terminal-based snake game: input thread manipulates output

    - by enlightened
    I'm writing a snake game for the terminal, i.e. output via print. The following works just fine:

        while status[snake_monad] do
          print to_string draw canvas, compose_all([
            frame, specs, snake_to_hash(snake[snake_monad])
          ])
          turn! snake_monad, get_dir
          move! snake_monad, specs
          sleep 0.25
        end

    But I don't want the turn!ing to block, of course. So I put it into a new Thread and let it loop:

        Thread.new do
          loop do
            turn! snake_monad, get_dir
          end
        end

        while status[snake_monad] do
          ...  # no turn! here
          ...
        end

    Which also works logically (the snake is turning), but the output is somehow interspersed with newlines. As soon as I kill the input thread (^C) it looks normal again. So why and how does the thread have any effect on my output? And how do I work around this issue? (I don't know much about threads, even less about them in Ruby. Input and output concurrently on the same terminal make the matter worse, I guess...) Also (not really important): wanting my program to be as pure as possible, would it be somewhat easily possible to get the input non-blockingly while passing everything around? Thank you!
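
    One way around the interleaving (a sketch reusing the question's own helpers): let the input thread only push directions onto a Queue, so the main loop is the single writer to the terminal.

        require 'thread'

        input = Queue.new

        Thread.new do
          loop { input << get_dir }   # read keys, but never print here
        end

        while status[snake_monad] do
          turn!(snake_monad, input.pop) until input.empty?   # drain pending turns
          move! snake_monad, specs
          print to_string draw canvas, compose_all([
            frame, specs, snake_to_hash(snake[snake_monad])
          ])
          sleep 0.25
        end

    The stray newlines themselves are most likely the terminal echoing the keys the blocking read consumes; putting the terminal into raw/noecho mode while reading (e.g. with io/console or curses) should stop that as well.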

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question not relating to any language. I'm a bit torn between going for minimum code and optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say Tab 1 reads in a file with a specific layout, Tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1, I may need to use Layout A and perform some extra concatenation; if it contains a 2, I may need to use Layout B and do no concatenation but add two integer fields; etc. There could be 10+ codes that I will be looking at. Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required? Creating an individual path for each code would make my code extremely easy to follow at a glance, which in turn will help me out later on down the road when debugging or making changes. The downside is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 may be exactly the same for every single code). Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would create less code by placing conditionals only at the steps that are unique. I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL 2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking: why am I 'paying' to de-aggregate the data for each query? Why don't I cache that? The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported and then select their combined, cached, de-aggregated data. To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
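
    A sketch of that caching idea (the Respondent table, CachedAnswers column, CompletedOn filter and @from/@to variables are assumptions, and the exact XPath depends on the shape FOR XML AUTO produces for the real tables):

        -- Refresh the cached XML once per respondent (e.g. on survey completion).
        UPDATE r
        SET    CachedAnswers = (SELECT d.QuestionId, d.Answer
                                FROM   Data AS d
                                WHERE  d.RespondentId = r.RespondentId
                                FOR XML AUTO, TYPE)
        FROM   Respondent AS r;

        -- Export: filter respondents first, then shred only their cached XML.
        SELECT r.RespondentId,
               r.CachedAnswers.value('(/d[@QuestionId=1]/@Answer)[1]',
                                     'nvarchar(50)') AS [Question 1]
        FROM   Respondent AS r
        WHERE  r.CompletedOn BETWEEN @from AND @to;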

  • Java number exceeds Long.MAX_VALUE - how to detect?

    - by jurchiks
    I'm having problems detecting if the sum/multiplication of two numbers exceeds the maximum value of a long integer. Example code:

        long a = 2 * Long.MAX_VALUE;
        System.out.println("long.max * smth > long.max... or is it? a=" + a);

    This gives me -2, while I would expect it to throw a NumberFormatException... Is there a simple way of making this work? Because I have some code that does multiplications in nested IF blocks or additions in a loop, and I would hate to add more IFs to each IF or inside the loop. EDIT: oh well, it seems that this answer from another question is the most appropriate for what I need: http://stackoverflow.com/a/9057367/540394 I don't want to do boxing/unboxing as it adds unnecessary overhead, and this way is very short, which is a huge plus to me. I'll just write two short functions to do these checks and return the min or max long. EDIT 2: here is the function for limiting a long to its min/max value, following the answer linked above:

        /**
         * @param a : one of the two numbers added/multiplied
         * @param b : the other of the two numbers
         * @param c : the result of the addition/multiplication
         * @return the minimum or maximum value of a long integer if the
         *         addition/multiplication of a and b is less than Long.MIN_VALUE
         *         or more than Long.MAX_VALUE
         */
        public static long limitLong(long a, long b, long c)
        {
            return (((a > 0) && (b > 0) && (c <= 0)) ? Long.MAX_VALUE
                  : (((a < 0) && (b < 0) && (c >= 0)) ? Long.MIN_VALUE : c));
        }

    Tell me if you think this is wrong.
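
    For reference, on Java 8 or newer the standard library ships this check, so the hand-written guard is only needed on older runtimes:

        long result;
        try {
            result = Math.multiplyExact(2L, Long.MAX_VALUE);   // throws on overflow
        } catch (ArithmeticException e) {
            result = Long.MAX_VALUE;    // clamp, mirroring limitLong's behavior
        }

    Math.addExact is the addition counterpart.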

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table in a MySQL database. The first column contains a fingerprint while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34 "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically two operations on the database: inserting/updating a record, and retrieving a record according to a fingerprint match. The table-definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               (","+newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but have little knowledge of it. Please suggest a better way to tackle this problem.
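
    One quick thing to check before switching technologies: the UPDATE above has to find rows by fp, and without an index on fp every statement is a full table scan. A sketch (use a PRIMARY KEY instead if each fingerprint appears at most once):

        -- Makes the WHERE fp=%s lookup O(log n) instead of a full scan.
        ALTER TABLE `fingerprint` ADD INDEX idx_fp (fp);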

  • Minimum L sum in an mxn matrix - 2

    - by hilal
    Here is my first question about maximum L sum, and this is a different and harder version of it. Problem: given an mxn *positive* integer matrix, find the minimum L sum from the 0th row to the m'th row. An L (4 items) is like a chess knight's move. Example:

        M = 3x3
        0 1 2
        1 3 2
        4 2 1

    Possible L moves are: (0 1 2 2), (0 1 3 2), (0 1 4 2). We should go from the 0th row to the 3rd row with minimum sum. I solved this with dynamic programming, and here is my algorithm:

        1. Take another mxn "Minimum L Moves Sum" array and copy the first row of
           the main matrix. I call it MLMS.
        2. Start from the first cell and look at the up L moves and calculate them.
        3. Insert the result into MLMS if it is less than the existing value.
        4. Repeat step 2 until the m'th row.
        5. Choose the minimum sum in the m'th row.

    Let me explain with my example, step by step:

        M[0][0]: sum(L1 = (0, 1, 2, 2)) = 5; sum(L2 = (0,1,3,2)) = 6; so MLMS[0][1] = 6
                 sum(L3 = (0, 1, 3, 2)) = 6; sum(L4 = (0,1,4,2)) = 7; so MLMS[2][1] = 6
        M[0][1]: sum(L5 = (1, 0, 1, 4)) = 6; sum(L6 = (1,3,2,4)) = 10; so MLMS[2][2] = 6
        ...

    The last MLMS is:

        0 1 2
        4 3 6
        6 6 6

    which means 6 is the minimum L sum that can be reached from row 0 to row m. I think it is O(8*(m-1)*n) = O(m*n). Is there an optimal solution or a dynamic-programming algorithm that fits this problem? Thanks, and sorry for the long question.

  • How to speed up a Python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way to use numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15
        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
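
    A sketch of one vectorized rewrite (Python 3 syntax; edge handling at the tail differs slightly from the original loop): fix the lag j - i instead of the start index, precompute a cumulative volume sum so every window sum becomes a subtraction, and let NumPy evaluate all start positions for a given lag at once.

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.arange(ARRAY_LENGTH, dtype='float64') + 1.0   # +1 avoids div-by-zero
        volume = np.arange(ARRAY_LENGTH, dtype='float64')
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))      # cumvol[k] = volume[:k].sum()

        results = []
        for lag in range(1, INTERVAL_LENGTH):
            i = np.arange(ARRAY_LENGTH - lag)
            j = i + lag
            ret = close[j] / close[i]
            vol = cumvol[j + 1] - cumvol[i + 1]                  # == volume[i+1:j+1].sum()
            hits = np.where((ret > 1.0001) & (ret < 1.5) & (vol > 100))[0]
            results.extend(zip(i[hits], j[hits], ret[hits], vol[hits]))

    This replaces millions of Python-level iterations with INTERVAL_LENGTH - 1 whole-array passes; the only remaining Python loop is over the 14 possible lags.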

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index an associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4,000 pages of about 5 KB average size, each on a different server (but with only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine. The results that are returned must be specific to the site that submitted them. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from. What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale; the description of the product states that it is not really intended for multiple sites; it is in PHP (which is less comfortable to me than Python); and it has a few other shortcomings for my specific situation.

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping Projects-Folders-Files in a tree view on the left with a details view on the right; you then click on each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes - methods - parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both a snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SS, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming that this is a greenfield project and not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks again to anyone who shares their thoughts.

  • JavaScript to fire an event when a key is pressed in the Ajax Toolkit ComboBox

    - by Paul Chapman
    I have the following drop-down list, which uses the Ajax Toolkit to provide a combo box:

        <cc1:ComboBox ID="txtDrug" runat="server" style="font-size:8pt; width:267px;"
            Font-Size="8pt" DropDownStyle="DropDownList"
            AutoCompleteMode="SuggestAppend" AutoPostBack="True"
            ontextchanged="txtDrug_TextChanged" />

    Now I need to load this up with approx 7,000 records, which takes considerable time and affects the response times when the page is posted back and forth. The code which loads these records is as follows:

        dtDrugs = wsHelper.spGetAllDrugs();
        txtDrug.DataValueField = "pkDrugsID";
        txtDrug.DataTextField = "drugName";
        txtDrug.DataSource = dtDrugs;
        txtDrug.DataBind();

    However, if I could get an event to fire when a letter is typed, instead of having to load 7,000 records the list would be reduced to fewer than 50 in most instances. I think this can be done in JavaScript. So the question is how to get an event to fire such that when the form starts there is nothing in the drop-down, but as soon as a key is pressed it searches for those records starting with that letter. The .NET side of things I'm sure about - it is the JavaScript I'm not. Thanks in advance.
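
    One way to avoid preloading entirely (a sketch; the prefix-query helper spGetDrugsByPrefix is hypothetical): swap the preloaded ComboBox for a plain TextBox with the Toolkit's AutoCompleteExtender, whose page method is called on each keystroke with the current prefix:

        <asp:TextBox ID="txtDrugName" runat="server" />
        <cc1:AutoCompleteExtender ID="aceDrug" runat="server"
            TargetControlID="txtDrugName"
            ServiceMethod="GetDrugs"
            MinimumPrefixLength="1"
            CompletionSetCount="50" />

    and in the code-behind:

        [System.Web.Services.WebMethod]
        [System.Web.Script.Services.ScriptMethod]
        public static string[] GetDrugs(string prefixText, int count)
        {
            // Query only rows whose name starts with the typed prefix.
            return wsHelper.spGetDrugsByPrefix(prefixText, count);
        }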

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with Flex Builder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were:

        [enterFrameEvent] - 84% cumulative, 32% self time
        [reap]            - 20% cumulative and self time
        [tincan]          -  8% cumulative and self time
        global.isNaN      -  4% cumulative and self time

    All other methods had less than 1% for both cumulative and self time. From what I've found online, the bracketed methods are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try to track down what's actually happening, but I feel like there has to be a better way.

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this: Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes - G, H, L, O, P, S - and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P. I'm interested in generating a list of permutations that satisfy the conditions, as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type of problem as follows: Given an n-length array A:

        1. How many ways can a set of n unique items be arranged within A?
           E.g. how many ways are there to rearrange ABCDEFG?
        2. If the length of the set of unique items is less than the length of A,
           how many ways can the set be arranged within A if items in the set may
           occur more than once? E.g. ABCDEF -> AABCDEF, ABBCDEF, etc.
        3. How many ways can a set of unique items be arranged within A if the
           items of the set are subject to "blocking conditions"?

    My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations, as sketched below. Thoughts and suggestions are welcome.
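
    For the broadcast puzzle itself, the brute-force itertools route is tiny, since 7! is only 5,040 orderings; a sketch (using 'N' for the news tape):

        from itertools import permutations

        tapes = ['G', 'H', 'L', 'O', 'P', 'S', 'N']

        def valid(order):
            return (order.index('O') == order.index('L') + 1       # L immediately before O
                    and order.index('N') > order.index('L')        # news some time after L
                    and abs(order.index('G') - order.index('P')) == 3)  # exactly two slots between

        solutions = [order for order in permutations(tapes) if valid(order)]
        print(len(solutions), "valid schedules; first:", solutions[0])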

  • App crashes frequently when a UIActionSheet is presented

    - by Jim Hankins
    I am getting the following error intermittently when a call is made to display an action sheet:

        Assertion failure in -[UIActionSheet showInView:]
        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Invalid parameter not satisfying: view != nil'

    Now in this case I've not changed screens. The UIActionSheet is presented when a local notification fires, and I have an observer call a local method on this view, as shown below. I have the property marked as strong. When the action sheet is dismissed I also set it to nil. I am using a storyboard for the UI. It's fairly repeatable to crash it, perhaps in fewer than 5 tries (thankfully I have that going for me). Any suggestions what to try next? I'm really pulling my hair out on this one. Most of the issues I've seen on this topic point to the crash occurring once a selection is made; in my case it's at presentation, and intermittent. Also, for what it's worth, this particular view is several levels deep in an embedded navigation controller: Home > table view > detail select > the view controller in question. The same issue occurs so far in testing on iOS 5.1 and iOS 6. I'm presuming it's something to do with how the showInView: target is being found.

        self.actionSheet = [[UIActionSheet alloc] initWithTitle:@"Select Choice"
                                                       delegate:self
                                              cancelButtonTitle:@"Not Yet"
                                         destructiveButtonTitle:@"Do this Now"
                                              otherButtonTitles:nil];
        [self.actionSheet showInView:self.parentViewController.tabBarController.view];
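
    That assertion says the view passed to showInView: was nil, which fits an intermittent failure of the self.parentViewController.tabBarController chain (e.g. when the notification arrives while this controller isn't where you expect in the hierarchy). A defensive sketch:

        // Fall back to a view that is guaranteed to be attached to a window.
        UIView *target = self.parentViewController.tabBarController.view;
        if (target == nil || target.window == nil) {
            target = self.view.window ? self.view
                                      : [UIApplication sharedApplication].keyWindow;
        }
        [self.actionSheet showInView:target];

    Logging which branch fires should also reveal when and why the original chain comes back nil.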

  • Quicksort / vector / partition issue

    - by xxx
    Hi, I have an issue with the following code:

        class quicksort
        {
        private:
            void _sort(double_it begin, double_it end)
            {
                if ( begin == end ) { return; }
                double_it it = partition(begin, end, bind2nd(less<double>(), *begin));
                iter_swap(begin, it-1);
                _sort(begin, it-1);
                _sort(it, end);
            }
        public:
            quicksort () {}
            void operator()(vector<double> & data)
            {
                double_it begin = data.begin();
                double_it end = data.end();
                _sort(begin, end);
            }
        };

    However, this won't work for too large a number of elements (it works with 10,000 elements, but not with 100,000). Example code:

        int main()
        {
            vector<double> v;
            for(int i = n-1; i >= 0; --i)
                v.push_back(rand());
            quicksort f;
            f(v);
            return 0;
        }

    Doesn't the STL partition function work for such sizes? Or am I missing something? Many thanks for your help.
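
    It is not partition itself: when *begin is the smallest element of the range, partition returns begin, so iter_swap(begin, it-1) touches memory before the range and the recursion stops shrinking. A sketch of the usual fix - partition everything except the pivot, then recurse on ranges that both exclude it (the lambda assumes C++11; bind2nd works the same way):

        void _sort(double_it begin, double_it end)
        {
            if (end - begin < 2) { return; }
            double pivot = *begin;
            double_it it = partition(begin + 1, end,
                                     [pivot](double x) { return x < pivot; });
            iter_swap(begin, it - 1);   // pivot lands between the two halves
            _sort(begin, it - 1);       // elements strictly less than the pivot
            _sort(it, end);             // elements greater than or equal to it
        }

    Because partition starts at begin + 1, the returned iterator is always past begin, and both recursive calls operate on strictly smaller ranges, so the recursion terminates.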

  • Agile methodologies: are they a by-product of mind-control techniques like NLP / Scientology?

    - by Bobb
    The more I read about contemporary methods combining Scrum, TDD and XP, the more I feel like I have already seen these methods. I am not arguing that the agile approach is much more progressive than older rigid structures like waterfall; what I am saying is that agile methodologies seem to me ideal for use as a nest for a brainwashing business. I read a few articles which kept referring to authors whom I checked afterwards, and they call themselves coaches and trainers (the usual thing when NLP specialists are involved) with no apparent software development history. I also met a guy who is a Scrum facilitator (a term widely used in relation to Scientology) at a high-profile company. I talked to him for less than 5 minutes, but I got the complete impression that he is either on drugs or has been programmed by a powerful NLP specialist. His way of talking and his body movements suggested he is not an average, normal person (in terms of the normal distribution :))... Please don't get me wrong, I am not a fan of conspiracy theories. But I had an experience with a member of the Church of Scientology who tried to infiltrate a commercial firm and actually got halfway to the very top in just 3 weeks. I saw his work. For now my impression is that psycho-manipulators are invading the IT industry through the convenient door of agile techniques. Does anyone have the same feeling/thoughts?

  • How can I intelligently group rows of integers for a faceted search?

    - by Alastair
    I'm not even quite sure what terms I should be using for what I want, so any advice on what I'm even asking for would be very welcome. Basically, my web site lists user-generated accommodations. Each has a rent price, which users will be able to query in our new faceted search box. Users search by city, and within each city I'd like to present a different rent grouping. That is to say that in City #1, if we have listings ranging from $200 to $1000, I'd like to present checkboxes for:

        less than $300
        $301 - $500
        $501 - $700
        more than $700

    However, if City #2 has values that range from $500 to $1500, I want the ranges above to change accordingly. So, if I say that I want 5 or 6 range options in each city, I think I have two options:

        1. Take the min and max values and just split the difference. I don't like
           this idea because one listing with a rent of $10,000 will throw the
           whole scale off.
        2. Intelligently calculate the ranges using means, medians, etc.

    Number 2 is what I need help with. I'm a web developer who gets logic but was never strong on math and statistics at school. Can anyone point me towards a guide that'll help me figure this out?
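
    A sketch of a quantile-based version of option 2 (Python/NumPy; the bucket count, rounding step and sample rents are illustrative): taking the edges at the inner quantiles splits the listings into roughly equal-sized groups, so a single $10,000 outlier only stretches the open-ended top range instead of the whole scale.

        import numpy as np

        def bucket_edges(rents, buckets=4, step=50):
            qs = np.linspace(0, 100, buckets + 1)[1:-1]   # inner quantiles, e.g. 25/50/75
            edges = np.percentile(rents, qs)
            # Round to a "nice" step so the checkbox labels stay readable.
            return sorted({int(round(e / step) * step) for e in edges})

        rents = [200, 250, 300, 450, 480, 520, 600, 680, 750, 900, 10000]
        print(bucket_edges(rents))   # three edges a < b < c -> ranges: <a, a-b, b-c, >c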

  • Permuting a binary tree without the use of lists

    - by Banang
    I need to find an algorithm for generating every possible permutation of a binary tree, and need to do so without using lists (this is because the tree itself carries semantics and restraints that cannot be translated into lists). I've found an algorithm that works for trees with a height of three or less, but whenever I get to greater heights, I lose one set of possible permutations per height added. Each node carries information about its original state, so that one node can determine if all possible permutations have been tried for that node. Also, the node carries information on whether or not it has been 'swapped', i.e. if it has seen all possible permutations of its subtree. The tree is left-centered, meaning that the right node should always (except in some cases that I don't need to cover for this algorithm) be a leaf node, while the left node is always either a leaf or a branch. The algorithm I'm using at the moment can be described sort of like this:

        if the left child node has been swapped
            swap my right node with the left child node's right node
            set the left child node as 'unswapped'
            if the current node is back to its original state
                swap my right node with the lowest left node's right node
                swap the lowest left node's two child nodes
                set my left node as 'unswapped'
                set my left child node to use this as its original state
                set this node as 'swapped'
                return null
            return this
        else if the left child has not been swapped
            if the result of trying to permute the left child is null
                return the permutation of this node
            else
                return the permutation of the left child node
        if this node has a left node and a right node that are both leaves
            swap them
            set this node to be 'swapped'

    The desired behaviour of the algorithm would be something like this:

                    branch
                    /    \
                branch    3
                /    \
            branch    2
            /    \
           0      1

                    branch
                    /    \
                branch    3
                /    \
            branch    2
            /    \
           1      0    <-- first swap

                    branch
                    /    \
                branch    3
                /    \
            branch    1    <-- second swap
            /    \
           2      0

                    branch
                    /    \
                branch    3
                /    \
            branch    1
            /    \
           0      2    <-- third swap

                    branch
                    /    \
                branch    0    <-- fourth swap
                /    \
            branch    1
            /    \
           1      2

    and so on... Sorry for the ridiculously long and waddly explanation; I would really, really appreciate any sort of help you guys could offer me. Thanks a bunch!

  • Significance in R

    - by Gemsie
    OK, this is quite hard to explain, but I'm at a complete loss what to do. I'm a relative newcomer to R, and although I can completely admire how powerful it is, I'm not too good at actually using it... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right and left hand lengths of lots of people, as well as some numeric data that shows their sociability. Now I would like to know if people who have significantly different lengths of hand are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable and intelligent, etc.). I have got as far as loading the data into R, then I have no idea where to go from there. How on Earth do I start to separate those who are close to symmetrical from those who aren't, to then start the analysis? OK, using Sasha's great advice, I did cor.test and got the following:

        Pearson's product-moment correlation

        data:  measurements$l.hand - measurements$r.hand and measurements$sociable
        t = 0.2148, df = 150, p-value = 0.8302
        alternative hypothesis: true correlation is not equal to 0
        95 percent confidence interval:
         -0.1420623  0.1762437
        sample estimates:
               cor
        0.01753501

    I have never used this test before, so I am unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you?! :(
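
    One note on the test above (a sketch, with an arbitrary 5-unit threshold for the group split): the signed difference l.hand - r.hand treats +5 and -5 as opposites, so for "symmetry" the absolute difference is usually what you want, either correlated directly or used to split the sample into groups.

        asym <- abs(measurements$l.hand - measurements$r.hand)

        # Correlate asymmetry with sociability directly...
        cor.test(asym, measurements$sociable)

        # ...or compare sociability between 'symmetrical' and 'asymmetrical' groups.
        sym_group  <- measurements$sociable[asym <= 5]
        asym_group <- measurements$sociable[asym > 5]
        t.test(sym_group, asym_group)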

  • How to mimic polymorphism in classes with template methods (C++)?

    - by davide
    In the problem I am facing I need something which works more or less like a polymorphic class, but which would allow for virtual template methods. The point is, I would like to create an array of subproblems, each one being solved by a different technique implemented in a different class but sharing the same interface, then pass a set of parameters (which are functions/functors - this is where templates jump in) to all the subproblems and get back a solution. If the parameters were, e.g., ints, this would be something like:

        struct subproblem
        {
            ...
            virtual void solve (double& solution, double parameter) = 0;
        };

        struct subproblem0: public subproblem
        {
            ...
            virtual void solve (double& solution, double parameter) {...}
        };

        struct subproblem1: public subproblem
        {
            ...
            virtual void solve (double* solution, double parameter) {...}
        };

        int main()
        {
            subproblem* problem[2];
            problem[0] = new subproblem0();
            problem[1] = new subproblem1();
            double argument0(0), argument1(1), sol0[2], sol1[2];

            for(unsigned int i(0); i<2; ++i)
            {
                problem[i]->solve( &(sol0[i]), argument0);
                problem[i]->solve( &(sol1[i]), argument1);
            }
            return 0;
        }

    But the problem is, I need the arguments to be something like Arg<T1,T2> argument0(f1,f2), and thus the solve method to be something of the likes of

        template<typename T1, typename T2>
        void solve (double* solution, Arg<T1,T2> parameter)

    which obviously can't be declared virtual (and so can't be called through a pointer to the base class)... Now I'm pretty stuck and don't know how to proceed...
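
    One standard workaround (a sketch, assuming every functor can be used through one common call signature, here double -> double, and that C++11's std::function is available): erase the functor types at the interface boundary so the virtual method needs no template parameters.

        #include <functional>

        struct Arg
        {
            std::function<double(double)> f1, f2;   // type-erased functors
        };

        struct subproblem
        {
            virtual ~subproblem() {}
            virtual void solve(double* solution, const Arg& parameter) = 0;
        };

        struct subproblem0: public subproblem
        {
            void solve(double* solution, const Arg& parameter)
            {
                *solution = parameter.f1(0.0) + parameter.f2(1.0);   // placeholder math
            }
        };

    Each subproblem still receives callable parameters, but the base-class interface is a plain virtual function; the price is one indirection per functor call. If the call signatures genuinely differ per subproblem, the erasure has to happen at a coarser level, e.g. wrapping the whole "solve this subproblem" action into a std::function<void(double*)> built where the concrete types are still known.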
