Search Results

Search found 6031 results on 242 pages for 'imaginary numbers'.


  • Can MYSQL filter by date if date is stored as text? ex "02/10/1984"

    - by Roeland
    Hello! I am trying to modify an app for a client which already has a database of over 1000 items. The dates are stored as text in the database in the format "02/10/1984". The system allows you to add and remove catalog fields dynamically, and it also lets specific fields be enabled for the advanced search. The problem is that it wasn't designed with dates in mind, so when I set a field as a date and try to search by a range, the query ends up doing AND (cfv0.value >= 01/02/2004 AND cfv0.value <= 05/03/2008). I can make it so that the date range passed in is a numeric time value. Is there a way that, when sending the query, the text fields (holding the dates) are converted to numeric time values, so that at that point I am basically just comparing numbers, which would work fine? I do not have the option of converting all the current dates to numeric values, due to the way the dynamic fields are set up. Thanks guys!
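
    One possible approach (not from the question) is to let MySQL parse the stored text at query time with STR_TO_DATE/UNIX_TIMESTAMP, so the stored data never changes. A minimal Python sketch of building such a clause; the cfv0 alias and value column mirror the question, the '%d/%m/%Y' format is an assumption (swap it for '%m/%d/%Y' if the dates are month-first):

        # Sketch only: convert the text column to a numeric timestamp inside the
        # query, then compare it against the numeric time values the app already
        # produces for the search range.
        def build_range_clause(start_ts, end_ts):
            expr = "UNIX_TIMESTAMP(STR_TO_DATE(cfv0.value, '%d/%m/%Y'))"
            return "({expr} >= {lo} AND {expr} <= {hi})".format(expr=expr, lo=start_ts, hi=end_ts)

        # Example numeric time values supplied by the search form
        print(build_range_clause(1075593600, 1209772800))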

    Read the article

  • Stop VBA Evaluate from calling target function twice

    - by Abiel
    I am having trouble getting VBA's Evaluate() function to only execute once; it seems to always run twice. For instance, consider the trivial example below. If we run the RunEval() subroutine, it will call the EvalTest() function twice. This can be seen by the two different random numbers that get printed in the Immediate window. The behavior would be the same if we were calling another subroutine with Evaluate instead of a function. Can someone explain how I can get Evaluate to execute the target function once instead of twice? Thank you.

        Sub RunEval()
            Evaluate "EvalTest()"
        End Sub

        Public Function EvalTest()
            Debug.Print Rnd()
        End Function

    Read the article

  • Validate a string in a table in SQL Server

    - by Ashish Gupta
    I need to check if a column value (a string) in a SQL Server table starts with a lowercase letter and contains only '_', '-', digits, and letters. I know I can use a SQL Server CLR function for that. However, I am trying to implement the validation using a scalar UDF and have made very little headway... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of the characters - in other words, how to write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance.
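
    The rule itself can be stated as a single pattern. A small Python sketch of the check (the T-SQL comment shows how the same idea is commonly expressed with LIKE character classes, subject to a case-sensitive collation for the first test; neither snippet is from the question):

        # First character: a lowercase letter.  Remaining characters: letters,
        # digits, underscore or hyphen.  In T-SQL the usual shape is something like
        #   value LIKE '[a-z]%' AND value NOT LIKE '%[^a-zA-Z0-9_-]%'
        import re

        VALID = re.compile(r'^[a-z][A-Za-z0-9_-]*$')

        def is_valid(value: str) -> bool:
            return VALID.match(value) is not None

        print(is_valid("abc_12-x"))   # True
        print(is_valid("Abc"))        # False: starts with an uppercase letter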

    Read the article

  • how to apply Discrete wavelet transform on image

    - by abuasis
    I am implementing an Android application that will verify signature images, and I decided to go with the discrete wavelet transform method (symmlet-8). The method requires applying the discrete wavelet transform, separating the image using low-pass and high-pass filters, and retrieving the wavelet transform coefficients. The equations use notation that I can't understand, so I can't do the math easily, and I also don't know how to apply the low-pass and high-pass filters to my x and y points. Is there any tutorial that shows how to apply the discrete wavelet transform to an image, broken out in actual numbers? Thanks a lot in advance.
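
    To make the low-pass/high-pass step concrete, here is a sketch of a single level of the simplest wavelet (Haar, not the symmlet-8 the question ultimately needs) applied to one row of pixel values; a 2-D transform applies the same split to every row and then to every column of the results:

        # Single-level 1-D Haar DWT, purely illustrative.  The low-pass output is
        # the scaled pairwise average (approximation coefficients); the high-pass
        # output is the scaled pairwise difference (detail coefficients).
        # Symmlet-8 works the same way, just with longer filter taps.
        import math

        def haar_dwt_1d(samples):
            assert len(samples) % 2 == 0, "pad to an even length first"
            low, high = [], []
            for a, b in zip(samples[0::2], samples[1::2]):
                low.append((a + b) / math.sqrt(2))    # low-pass: smooth content
                high.append((a - b) / math.sqrt(2))   # high-pass: edges/detail
            return low, high

        row = [96, 100, 104, 120, 120, 118, 60, 58]
        approx, detail = haar_dwt_1d(row)
        print(approx)   # coarse version of the row (half the length)
        print(detail)   # local differences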

    Read the article

  • JavaScript distributed computing project

    - by Ben L.
    I made a website that does absolutely nothing, and I've proven to myself that people like to stay there - I've already logged 11+ hours worth of cumulative time on the page. My question is whether it would be possible (or practical) to use the website as a distributed computing site. My first impulse was to find out if there were any JavaScript distributed computing projects already active, so that I could put a piece of code on the page and be done. Unfortunately, all I could find was a big list of websites that thought it might be a cool idea. I'm thinking that I might want to start with something like integer factorization - in this case, RSA numbers. It would be easy for the server to check if an answer was correct (simply test for modulus equals zero), and also easy to implement. Is my idea feasible? Is there already a project out there that I can use?
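
    The server-side check mentioned above really is that cheap. A tiny sketch (names invented here, not part of the question) of accepting a submitted candidate factor only if it genuinely divides the target number:

        # The "modulus equals zero" test from the question: a candidate factor p
        # of N is valid iff 1 < p < N and N % p == 0.
        def is_valid_factor(target: int, candidate: int) -> bool:
            return 1 < candidate < target and target % candidate == 0

        print(is_valid_factor(91, 7))    # True  (91 = 7 * 13)
        print(is_valid_factor(91, 11))   # False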

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump), and create an Employees object containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors() on Employees.

        import csv
        import re
        import sys

        #class CSVLoader:
        #    """Virtual class to assist with loading in CSV files."""
        #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
        #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        #        employees = []
        #        for row in gd_extract:
        #            curr_employee = Employee(row)
        #            employees.append(curr_employee)
        #        return employees
        #        #self.employees = {row['dbdirid']:row for row in gd_extract}

        # Previously, this was inside a (virtual) class called "CSVLoader".
        # However, according to here (http://tomayko.com/writings/the-static-method-thing) - the
        # idiomatic way of doing this in Python is not with a class-function but with a
        # module-level function.
        def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
            """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
            gd_extract = csv.DictReader(open(input_file), dialect='excel')
            employees = []
            for row in gd_extract:
                employees.append(row)
            return employees

        def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
            """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
            gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                                    'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                                    'SupervisorFirstName', 'SupervisorSurname')
            try:
                gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                              fieldnames=gd_output_fieldnames,
                                              extrasaction='ignore', dialect='excel')
            except IOError:
                print('Unable to open file, IO error (Is it locked?)')
                sys.exit(1)
            headers = {n: n for n in gd_output_fieldnames}
            gd_formatted.writerow(headers)
            for employee in employees_dict.employee_list:
                # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
                gd_formatted.writerow(employee.__dict__)

        class Employee:
            """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
            def __init__(self, employee_attributes):
                """We use the Employee constructor to convert a dictionary into instance attributes."""
                for k, v in employee_attributes.items():
                    setattr(self, k, v)

            def clean_phone_number(self):
                """Perform some rudimentary checks and corrections, to make sure numbers are in the
                right format. Numbers should be in the form 0XYYYYYYYY, where X is the area code,
                and Y is the local number."""
                if self.telephoneNumber is None or self.telephoneNumber == '':
                    return '', 'Missing phone number.'
                else:
                    standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                    if standard_format.search(self.telephoneNumber):
                        result = standard_format.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                    elif extra_zero.search(self.telephoneNumber):
                        result = extra_zero.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. '
                    elif missing_hyphen.search(self.telephoneNumber):
                        result = missing_hyphen.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. '
                    else:
                        return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

        class Employees:
            def __init__(self, import_list):
                self.employee_list = []
                for employee in import_list:
                    self.employee_list.append(Employee(employee))

            def clean_all_phone_numbers(self):
                for employee in self.employee_list:
                    # Should we just set this directly in Employee.clean_phone_number() instead?
                    employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

            # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
            def lookup_all_supervisors(self):
                for employee in self.employee_list:
                    if employee.hrreportsto is not None and employee.hrreportsto != '':
                        for supervisor in self.employee_list:
                            if supervisor.hrid == employee.hrreportsto:
                                (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                                break
                        else:
                            (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

            # Is there a more pythonic way of doing this?
            def print_employees(self):
                for employee in self.employee_list:
                    print(employee.__dict__)

        if __name__ == '__main__':
            db_employees = Employees(import_gd_dump())
            db_employees.clean_all_phone_numbers()
            db_employees.lookup_all_supervisors()
            #db_employees.print_employees()
            write_gd_formatted(db_employees)

    Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class-design or Python point of view? Is the logic/design sound? Anyhow, to the specifics. The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number is O(n), I believe, and lookup_supervisor is O(n^2) - is it OK splitting it into two loops like this?
    In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment - should I be setting this inside clean_phone_number() itself? There are also a few things that I sort of hacked out and am not sure are good practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes. I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them as several separate questions (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less tightly related). Cheers, Victor

    Read the article

  • Drawing text at an angle (e.g. upside down) in Android

    - by Damian
    I'm trying to build a custom clock view in Android (see image: http://twitpic.com/1devk7). So far, to draw the time and hour markers, I have been using the Canvas.rotate method to get the desired effect. However, notice that it is difficult to interpret the numbers in the lower half of the clock (e.g. 6 or 9?) because of the angle at which they are drawn. When using drawText, is it possible to draw the text at 45/90/180 degrees so that all text appears upright when my onDraw method has finished?

    Read the article

  • Rails 3 nested forms with has_many :through, entry in join table doesn't get deleted after update

    - by Hadi S.
    Hi, I have a 'User' model which has a has_many relationship to a 'Number' model through a 'user_number' join model. I use accepts_nested_attributes_for :numbers, :allow_destroy => true in the 'User' model. Everything works fine except that whenever I delete a number from a user in the edit form, the associated number is deleted correctly from the 'number' table, but the entry in the 'user_number' join table is not. In the update controller action I only use this: ... if @user.update_attributes(params[:user]) ... How can I force Rails to also delete the associated entry in the join table?

    Read the article

  • How to suppress a group header based on running-count results?

    - by Sarah
    Hi, I'm trying to suppress a group header when there are no detail results in the following group. I have added a manual running count total which shows correct numbers (such as 0 when no records appear on the report). I've taken this approach since I have various items suppressed within the detail section and don't want them counted. In the header, I'm trying to say not to show the header if there are no corresponding records showing in the detail section, but it's not working. When I say suppress if the display count is 0, it suppresses all of the headers, not just the ones that should be suppressed. How can I fix this? Thanks... Sarah

    Read the article

  • How do I use .NET to find an orange ball in an image?

    - by JohnS
    I'm getting images from a C328R camera attached to a small arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx. Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions? I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
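
    A common approach (not from the question, and in Python standing in for the C# side) is to threshold on hue rather than raw RGB, since hue is relatively stable across lighting changes; the numeric thresholds below are guesses that would need tuning on real camera frames:

        # Rough sketch using Pillow: flag pixels whose hue falls in an "orange"
        # band with reasonable saturation/brightness, and return their centroid.
        from PIL import Image

        def find_orange_centroid(path, hue_lo=8, hue_hi=30, min_sat=100, min_val=80):
            img = Image.open(path).convert("RGB").convert("HSV")
            w, h = img.size
            pixels = img.load()
            xs, ys = [], []
            for y in range(h):
                for x in range(w):
                    hue, sat, val = pixels[x, y]   # hue is on a 0-255 scale in Pillow
                    if hue_lo <= hue <= hue_hi and sat >= min_sat and val >= min_val:
                        xs.append(x)
                        ys.append(y)
            if not xs:
                return None
            return sum(xs) // len(xs), sum(ys) // len(ys)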

    Read the article

  • Generating C++ BackTraces in OS/X (10.5.7)

    - by phillipwei
    I've been using backtrace and backtrace_symbols to generate programmatic stack traces for logging/diagnosis. It roughly works; however, I'm getting a little bit of mangling, and there are no accompanying file/line numbers associated with each function invocation (as I'd expect from a gdb bt call or something). Here's an example:

        1 leonardo 0x00006989 _ZN9ExceptionC2E13ExceptionType + 111
        2 leonardo 0x00006a20 _ZN9ExceptionC1E13ExceptionType + 24
        3 leonardo 0x0000ab64 _ZN5Rules11ApplyActionER16ApplicableActionR9GameState + 1060
        4 leonardo 0x0000ed15 _ZN9Simulator8SimulateEv + 2179
        5 leonardo 0x0000eec9 _ZN9Simulator8SimulateEi + 37
        6 leonardo 0x00009729 main + 45
        7 leonardo 0x000025c6 start + 54

    Am I missing something, doing something silly, or is this all I can expect out of backtrace on OS X? Some other tidbits: I don't see a -rdynamic link option for the g++ version (4.0.1) I'm using; -g/-g3 doesn't make any difference; and abi::__cxa_demangle doesn't seem to do anything.

    Read the article

  • can lapply not modify variables in a higher scope

    - by stevejb
    I often want to do essentially the following:

        mat <- matrix(0, nrow=10, ncol=1)
        lapply(1:10, function(i) { mat[i,] <- rnorm(1, mean=i) })

    I would expect mat to end up holding 10 random numbers, but instead it still holds 0s. (I am not worried about the rnorm part; clearly there is a right way to do that. I am worried about affecting mat from within an anonymous function passed to lapply.) Can I not modify the matrix mat from inside lapply? Why not? Is there a scoping rule of R that is blocking this?

    Read the article

  • Negative logical shift

    - by user320862
    In Java, why does -32 >>> -1 = 1? It's not specific to just -32; it works for all negative numbers as long as they're not too big. I've found that

        x >>> -1 = 1
        x >>> -2 = 3
        x >>> -3 = 7
        x >>> -4 = 15

    given 0 > x > some large negative number. Isn't >>> -1 the same as << 1? But -32 << 1 = -64. I've read up on two's complement, but still don't understand the reasoning.
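
    The key fact (not stated in the question) is that Java masks an int shift count to its low five bits, so shifting by -1 really shifts by 31, and the unsigned shift then brings down only the sign bit. A small Python sketch that mimics Java's >>> on 32-bit values reproduces the pattern:

        # Emulate Java's unsigned right shift (>>>) for 32-bit ints.
        def java_urshift32(x, n):
            n &= 31                         # Java: int shift counts use only the low 5 bits
            return (x & 0xFFFFFFFF) >> n    # view x as its 32-bit two's-complement pattern

        for n in (-1, -2, -3, -4):
            print(f"-32 >>> {n} = {java_urshift32(-32, n)}")   # 1, 3, 7, 15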

    Read the article

  • C# Button Text Unicode characters.

    - by Fossaw
    C# doesn't want to put Unicode characters on buttons. If I put \u2129 in the Text attribute of the button, the button displays the literal text \u2129, not the Unicode character (example - I chose 2129 because I could see it in the font currently active on the machine). I saw this question asked before, but it isn't really answered, just worked around. I am working on applications which are going all over the world, and I don't want to install all the fonts - more than "don't want", there are so many that I doubt the machine I am working on has sufficient disk space. Our overseas sales agents supply the Unicode character "numbers". Is there another way forward with this? As an aside (curiosity), why does it not work?

    Read the article

  • C - how to get fscanf() to determine whether what it read is only digits, and no characters

    - by hatorade
    Imagine I have a CSV where each value is an integer, so the first value is the INTEGER 100. I want fscanf() to read this line and either tell me it's an integer ONLY, or not. So it would pass 100 but fail on 100t. What I've been trying to get to work is "%d," where the comma is the delimiter of my CSV, so the whole call is fscanf(fp, "%d,", &count). Unfortunately, this fails to fail on '100t,', works on '100', and works on 't' - so it just isn't distinguishing between 100 and 100t (all of these numbers are followed by commas, of course).

    Read the article

  • What's the official Microsoft way to track counts of dynamic controls to be reconstructed upon Postback?

    - by John K
    When creating dynamic controls based on a data source of arbitrary and changing size, what is the official way to track exactly how many controls need to be rebuilt into the page's control collection after a Postback operation (i.e. on the server side during the ASP.NET page event lifecycle), specifically at the point at which dynamic controls are supposed to be rebuilt? Where is the arity stored for retrieval and reconstruction? By "official" I mean the Microsoft way of doing it. There exist hacks like Session storage, etc., but I want to know the bona fide, or at least Microsoft-recommended, way. I've been unable to find a documentation page stating this information. Usually code samples work with a set of dynamic controls of known number; it's as if doing otherwise would be tougher. Update: I'm not asking about user controls or controls expressed statically in declarative markup, but about dynamically injecting controls entirely from code-behind, whether they be mine, 3rd-party, or built-in ASP.NET controls.

    Read the article

  • Recommendations on Trimming Large Amounts of Text from a DOM Object

    - by aronchick
    I'm doing some in-browser editing, and I have some content that's on the order of 20k characters long in a <pre>. So it looks something like:

        <pre>
        Text 1
        Text 2
        Text 3
        Text 4
        [...]
        Text 20,000
        </pre>

    I'd like to use jQuery to trim it down when someone hits a button to chop, but I'm having trouble doing it without overloading the browser. Assume I know that the characters to remove are at positions 16,510 - 17,888, and what I'd like to do is trim them out. I was using:

        jQuery('#textsection').html(jQuery('textarea').html().substr(range.start));

    But browsers seem to enjoy crashing when I do this. Alternatives?

    Read the article

  • Dynamic programming: Largest diamond (rhombus) block

    - by Darksody
    I have a small program to do in Java. I have a 2D array filled with 0s and 1s, and I must find the largest rhombus (a square rotated by 45 degrees) made of 1s, and how many of them there are. Example:

        0 1 0 0 0 1
        1 0 1 1 1 0
        1 0 1 1 1 1
        0 1 1 1 1 1
        0 0 1 1 1 1
        1 1 1 1 1 1

    Result:

            1
          1 1 1
        1 1 1 1 1
          1 1 1
            1

    The problem is similar to this SO question. If you have any idea, post it here.
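
    For what it's worth, here is a brute-force sketch (Python standing in for the Java the poster will write) that reproduces the example above: for every cell it grows the rhombus "radius" while the ring of cells at that Manhattan distance is still inside the grid and all 1s. A real solution would likely want a dynamic-programming formulation, but this makes the problem statement concrete:

        # Brute force, not DP: the filled rhombus of radius r around (ci, cj) is
        # all cells at Manhattan distance <= r, so it is all ones exactly when
        # every ring out to distance r is all ones.
        def ring_is_ones(grid, ci, cj, d):
            rows, cols = len(grid), len(grid[0])
            for di in range(-d, d + 1):
                for dj in {d - abs(di), abs(di) - d}:
                    i, j = ci + di, cj + dj
                    if not (0 <= i < rows and 0 <= j < cols) or grid[i][j] != 1:
                        return False
            return True

        def largest_rhombus(grid):
            best_r, best_centre = -1, None
            for ci in range(len(grid)):
                for cj in range(len(grid[0])):
                    if grid[ci][cj] != 1:
                        continue
                    r = 0
                    while ring_is_ones(grid, ci, cj, r + 1):
                        r += 1
                    if r > best_r:
                        best_r, best_centre = r, (ci, cj)
            return best_r, best_centre

        grid = [[0,1,0,0,0,1],
                [1,0,1,1,1,0],
                [1,0,1,1,1,1],
                [0,1,1,1,1,1],
                [0,0,1,1,1,1],
                [1,1,1,1,1,1]]
        print(largest_rhombus(grid))   # (2, (3, 3)): the 13-cell rhombus shown above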

    Read the article

  • View a git diff-tree in a reasonable format

    - by Josh
    Howdy, I'm about to do a git svn dcommit to our svn repo -- and, as is recommended in a number of places, I wanted to figure out exactly what I was going to be committing with a dry run. As such I ran:

        git svn dcommit -n

    This produced the output:

        Committing to http://somerepo/svn/branches/somebranch
        diff-tree 1b937dacb302908602caedf1798171fb1b7afc81~1 1b937dacb302908602caedf1798171fb1b7afc81

    How do I view this in a format that I can consume as a human? A list of modified files comes to mind. This is probably easy, but running git diff-tree on those hashes gives me a reference to a directory and some other hashes, as well as some numbers. Not quite sure what to make of it. Thanks very much, Josh
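
    One way to turn those two hashes into a readable file list (a sketch, not from the article; Python merely wraps the git call) is to ask diff-tree for per-file status instead of raw tree output:

        # Re-run diff-tree with flags that print one line per changed file
        # (A/M/D plus path) rather than raw object hashes.  --stat would give a
        # diffstat instead.  The hashes are the ones from the dry-run output.
        import subprocess

        old = "1b937dacb302908602caedf1798171fb1b7afc81~1"
        new = "1b937dacb302908602caedf1798171fb1b7afc81"

        result = subprocess.run(
            ["git", "diff-tree", "--no-commit-id", "--name-status", "-r", old, new],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)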

    Read the article

  • Solving the water jug problem

    - by Amit
    While reading through some lecture notes on elementary number theory, I came across the solution to the water jug problem (with two jugs), which is summed up thus: using the property of the GCD of two numbers that GCD(A,B) is the smallest positive linear combination of A and B, a quantity Q is measurable by the two jugs iff Q is n*GCD(A,B), since any measurable quantity has the form Q = sA + tB, where:

        n = a positive integer
        A = capacity of jug A
        B = capacity of jug B

    The method of the solution is then discussed. Another model of the solution is to treat the various states as a state-space search problem, as is often done in Artificial Intelligence. My question is: what other known methods exist that model the solution, and how? Google didn't throw up much.
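
    The state-space model mentioned above is easy to make concrete. A small sketch (mine, not from the notes): each state is the pair of current jug contents, the moves are fill, empty, and pour, and a breadth-first search decides whether some reachable state holds the target quantity Q.

        from collections import deque
        from math import gcd

        def reachable(A, B, Q):
            # The number-theoretic shortcut from the notes: Q must be a multiple
            # of gcd(A, B) and must fit in a single jug.
            if Q > max(A, B) or Q % gcd(A, B) != 0:
                return False
            seen, frontier = {(0, 0)}, deque([(0, 0)])
            while frontier:
                a, b = frontier.popleft()
                if a == Q or b == Q:
                    return True
                pour_ab = min(a, B - b)      # how much A can pour into B
                pour_ba = min(b, A - a)      # how much B can pour into A
                moves = [(A, b), (a, B), (0, b), (a, 0),
                         (a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]
                for state in moves:
                    if state not in seen:
                        seen.add(state)
                        frontier.append(state)
            return False

        print(reachable(5, 3, 4))   # True: the classic "measure 4 litres" instance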

    Read the article

  • What do the square brackets in LaTeX logs mean?

    - by stefan-majewsky
    I'm currently working on a parser that reads complete LaTeX logs. Most of the log format is, though weird, easy to figure out, but these square brackets are puzzling me. Here's an example from near the end of one of my logs:

        Overfull \hbox (10.88788pt too wide) in paragraph at lines 40--40
        []$[]$
         []
        [102]) [103] Kapitel 14. (./Thermo-141-GrenzenFundamentalpostulat.tex [104
        ]) (./Thermo-142-Mastergleichung.tex [105]) (./Thermo-143-HTheorem.tex [106pdfTeX warning (ext4): destination with the same identifier (name{equation.14.3.3}) ha
        s been already used, duplicate ignored

    Can anybody give me a hint what these square brackets mean? I can't see any structure in them. I suspect that lines 2/3 above are some kind of ASCII art representing the box layout, though I know too little about bad boxes to justify this or to identify the meaning of the individual characters. The "[104" etc. seem to correspond to page numbers, but I still don't see why there is sometimes something in between the square brackets (like the pdfTeX warning above) and sometimes not.

    Read the article

  • filter to reverse lines of a text file

    - by Greg Hewgill
    I'm writing a small shell script that needs to reverse the lines of a text file. Is there a standard filter command to do this sort of thing? My specific application is that I'm getting a list of Git commit identifiers, and I want to process them in reverse order: git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | reverse The best I've come up with is to implement reverse like this: ... | cat -b | sort -rn | cut -f2- This uses cat to number every line, then sort to sort them in descending numeric order (which ends up reversing the whole file), then cut to remove the unneeded line number. The above works for my application, but may fail in the general case because cat -b only numbers nonblank lines. Is there a better, more general way to do this?
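
    If an external filter is acceptable, GNU coreutils' tac (or tail -r on BSD/macOS) already reverses lines; failing that, a few lines of Python make a portable stand-in (a sketch, not part of the original pipeline):

        #!/usr/bin/env python3
        # reverse: write stdin's lines to stdout in reverse order, blank lines included.
        import sys

        sys.stdout.writelines(reversed(sys.stdin.readlines()))

    Saved as reverse somewhere on PATH and marked executable, it drops into the pipeline above in place of the cat -b | sort -rn | cut -f2- chain.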

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular, in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 1960s and 1970s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes have actually been sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand results in a few tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs, in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you also count the operating system and the program's data, how many lines of code would have fit into half a megabyte of main memory?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... doesn't that contradict one of the claims we started from to begin with???), we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In the time-frame of 20 to 30 years, this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it?

    Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had its own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol. However, all other things being equal, I wonder whether IBM marketing would really have pushed its own development off the market in favor of Cobol on its own machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board, in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded its own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.

    Read the article

  • git: programmatically know by how much the branch is ahead/behind a remote branch

    - by Olivier
    I would like to extract the information that is printed by a git status, which looks like:

        # On branch master
        # Your branch is ahead of 'origin/master' by 2 commits.

    Of course I can parse the output of git status, but this is not recommended, since this human-readable output is liable to change. There are two problems: how to know the remote tracked branch (it is often origin/branch but need not be), and how to get the numbers - whether the branch is ahead or behind, by how many commits, and what about the case where the branches have diverged?
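
    A sketch of one way to get both pieces without parsing git status (the plumbing commands and flags used here are standard git; the Python is just a convenient wrapper):

        # @{u} resolves the configured upstream of the current branch (raises if
        # none is set); rev-list --left-right --count prints "behind<TAB>ahead"
        # relative to that upstream.
        import subprocess

        def run(*args):
            return subprocess.run(["git", *args], capture_output=True,
                                  text=True, check=True).stdout.strip()

        upstream = run("rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{u}")
        behind, ahead = map(int, run("rev-list", "--left-right", "--count",
                                     f"{upstream}...HEAD").split())
        print(f"tracking {upstream}: ahead {ahead}, behind {behind}")
        # ahead > 0 and behind > 0 together mean the branches have diverged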

    Read the article

  • How Do You Profile & Optimize CUDA Kernels?

    - by John Dibling
    I am somewhat familiar with the CUDA visual profiler and the occupancy spreadsheet, although I am probably not leveraging them as well as I could. Profiling and optimizing CUDA code is not like profiling and optimizing code that runs on a CPU, so I am hoping to learn from your experiences how to get the most out of my code. There was a post recently looking for the fastest possible code to identify self numbers, and I provided a CUDA implementation. I'm not satisfied that this code is as fast as it can be, but I'm at a loss to figure out both what the right questions are and which tool I can get the answers from. How do you identify ways to make your CUDA kernels perform faster?

    Read the article
