Search Results

Search found 6031 results on 242 pages for 'imaginary numbers'.

  • Python recursive function error: "maximum recursion depth exceeded"

    - by user283169
    I solved Problem 10 of Project Euler with the following code, which works through brute force: def isPrime(n): for x in range(2, int(n**0.5)+1): if n % x == 0: return False return True def primeList(n): primes = [] for i in range(2,n): if isPrime(i): primes.append(i) return primes def sumPrimes(primelist): prime_sum = sum(primelist) return prime_sum print (sumPrimes(primeList(2000000))) The three functions work as follows: isPrime checks whether a number is a prime; primeList returns a list containing a set of prime numbers for a certain range with limit 'n', and; sumPrimes sums up the values of all numbers in a list. (This last function isn't needed, but I liked the clarity of it, especially for a beginner like me.) I then wrote a new function, primeListRec, which does exactly the same thing as primeList, to help me better understand recursion: def primeListRec(i, n): primes = [] #print i if (i != n): primes.extend(primeListRec(i+1,n)) if (isPrime(i)): primes.append(i) return primes return primes The above recursive function worked, but only for very small values, like '500'. The function caused my program to crash when I put in '1000'. And when I put in a value like '2000', Python gave me this: RuntimeError: maximum recursion depth exceeded. What did I do wrong with my recursive function? Or is there some specific way to avoid a recursion limit?
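
    The recursion in primeListRec uses one stack frame per integer between i and n, so its depth grows linearly with n and soon exceeds CPython's default limit of roughly 1000 frames. A minimal sketch of the two usual remedies, raising the limit as a stopgap or keeping the accumulation in a plain loop (helper names here are illustrative, not from the question):

        import sys

        def is_prime(n):
            # Same trial-division test as the question's isPrime.
            if n < 2:
                return False
            return all(n % x for x in range(2, int(n ** 0.5) + 1))

        # Stopgap: raise the limit. Each frame still costs memory, and a large
        # enough n will crash the interpreter anyway.
        sys.setrecursionlimit(10000)

        # Real fix: accumulate iteratively so no call stack is involved.
        def prime_list_iter(n):
            primes = []
            for i in range(2, n):
                if is_prime(i):
                    primes.append(i)
            return primes

        print(sum(prime_list_iter(2000000)))  # slow but finishes; same result as the brute-force version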

    Read the article

  • How should I Test a Genetic Algorithm

    - by James Brooks
    I have made quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way? To pose the question more generally: how do you test a non-deterministic method/function? Here is what I have thought of: Use a specific seed, which won't help if I make a mistake in the code in the first place but will help find bugs when refactoring. Use a known list of numbers. Similar to the above but I could follow the code through by hand (which would be very tedious). Use a constant number. At least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1. Try to move as much of the non-deterministic code out of the GA as possible, which seems silly as that is the core of its purpose. Links to very good books on testing would be appreciated too.
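
    One common pattern is to inject the random source instead of calling the global generator directly; tests can then pass a seeded (or fake) generator, and the boundary cases need no randomness at all. A minimal sketch in Python (the question names no language, and every name here is illustrative):

        import random

        def mutate(genome, rate, rng=random):
            # Flip each bit with probability `rate`; `rng` is injectable for tests.
            return [1 - g if rng.random() < rate else g for g in genome]

        def test_mutate_is_repeatable():
            # A seeded Random makes the 'non-deterministic' path reproducible.
            a = mutate([0, 0, 0, 0], 0.5, rng=random.Random(42))
            b = mutate([0, 0, 0, 0], 0.5, rng=random.Random(42))
            assert a == b

        def test_mutate_extremes():
            # Boundary cases need no randomness: rate 1.0 flips everything, 0.0 nothing.
            assert mutate([0, 1, 0], 1.0) == [1, 0, 1]
            assert mutate([0, 1, 0], 0.0) == [0, 1, 0]

        test_mutate_is_repeatable()
        test_mutate_extremes()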

    Read the article

  • alphanumeric and special character sorting

    - by Kaushik Gopal
    Hi ppl, I wanted to know the different standards of sorting. To be more specific take the sample set: (Please note there's capitals, small letters, special characters, null values and numbers here) A a 3F Zx - 1Ad NULL How would the Oracle Database sort this by default? How would LINQ sort this by default? How would db2 sort this by default? (the following may get even more vague) How does the Windows platform sort this? (I mean say you have a couple of filenames, by default how would this get treated in a name sort) How does the *nix platform sort this? Is there some sort of standard for alphanumeric/special character sorting? The Windows operating system orders with numbers first, then alphabets. The Oracle database however treats alphabets first. I'm not sure of the *nix platform. It would be nice to have one place to know all these rules for the most common platforms (listed in questions above). Would the gurus throw some light on this topic? Cheers, K
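
    Each system applies its own collation rules (Oracle's default depends on the session's NLS settings, for example, and Windows Explorer's name sort additionally treats runs of digits as numbers), so there is no single standard. Absent any collation, most languages fall back to plain code-point order, which is easy to check; a quick Python sketch over the sample set (NULL is kept as a literal string here just so it can be compared):

        sample = ["A", "a", "3F", "Zx", "-", "1Ad", "NULL"]

        # Plain code-point order: punctuation < digits < uppercase < lowercase.
        print(sorted(sample))
        # ['-', '1Ad', '3F', 'A', 'NULL', 'Zx', 'a']

        # A case-insensitive key, closer to what a CI collation in a database does.
        print(sorted(sample, key=str.casefold))
        # ['-', '1Ad', '3F', 'A', 'a', 'NULL', 'Zx']  (equal keys keep input order)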

    Read the article

  • Mathematically Find Max Value without Conditional Comparison

    - by Cnich
    ----------Updated ------------ codymanix and moonshadow have been a big help thus far. I was able to solve my problem using the equations, and instead of using a right shift I divided by 29, because with 32-bit signed values 2^31 overflows. Which works! Prototype in PHP: $r = $x - (($x - $y) & (($x - $y) / (29))); Actual code for LEADS (you can only do one math function PER LINE!!! AHHHH!!!) DERIVED1 = IMAGE1 - IMAGE2; DERIVED2 = DERIVED1 / 29; DERIVED3 = DERIVED1 AND DERIVED2; MAX = IMAGE1 - DERIVED3; ----------Original Question----------- I don't think this is quite possible with my application's limitations, but I figured it's worth a shot to ask. I'll try to make this simple. I need to find the max value between two numbers without being able to use an IF or any conditional statement. In order to find the MAX values I can only perform the following functions: Divide, Multiply, Subtract, Add, NOT, AND, OR. Let's say I have two numbers A = 60; B = 50; Now if A is always greater than B it would be simple to find the max value: MAX = (A - B) + B; ex. 10 = (60 - 50), 10 + 50 = 60 = MAX. Problem is, A is not always greater than B. I cannot perform ABS, MAX, MIN or conditional checks with the scripting application I am using. Is there any way, using the limited operations above, to find a value VERY close to the max?
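
    The idea behind the branch-free formula is that an arithmetic right shift of (x - y) by 31 gives 0 when x >= y and -1 (all one bits) when x < y, so ANDing the shifted value with (x - y) leaves either nothing or the whole difference to subtract. A small Python sketch of the shift form, valid while |x - y| stays below 2^31 (the LEADS script above divides instead because it has no shift operator):

        def max_no_branch(x, y):
            # max(x, y) using only subtraction, AND and an arithmetic right shift.
            d = x - y
            # d >> 31 is 0 when d >= 0 and -1 (all ones) when d < 0.
            return x - (d & (d >> 31))

        assert max_no_branch(60, 50) == 60
        assert max_no_branch(50, 60) == 60
        assert max_no_branch(-7, 3) == 3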

    Read the article

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other things. I wrote a function to remove these things, but the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast. Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
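
    Given the update (clean the fields first, utility written in C#), the usual shape of the fix is to normalize both sides to bare digits once, up front, so no per-row string manipulation happens during the comparison. A rough sketch of that idea in Python, purely for illustration; the real utility is C#, and the sample numbers are invented:

        import re

        def digits_only(phone):
            # Strip everything but 0-9, e.g. '(555) 123-4567' -> '5551234567'.
            return re.sub(r"\D", "", phone or "")

        # One pass to normalize the existing DB values (however they are fetched);
        # after that, membership tests are hash lookups instead of per-row functions.
        existing = {digits_only(p) for p in ["(555) 123-4567", "555.987.6543"]}

        incoming = "555-123-4567"
        print(digits_only(incoming) in existing)  # True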

    Read the article

  • C# average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance: using System; using System.Linq; class Program { static void Main(string[] args) { var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 }; try { var avg = items.Average(); Console.WriteLine(avg); } catch (OverflowException ex) { Console.WriteLine("can't calculate that!"); } Console.ReadLine(); } } Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) to the Average extension method, as inspected by .NET Reflector is: public static double Average(this IEnumerable<long> source) { if (source == null) { throw Error.ArgumentNull("source"); } long num = 0L; long num2 = 0L; foreach (long num3 in source) { num += num3; num2 += 1L; } if (num2 <= 0L) { throw Error.NoElements(); } return (((double) num) / ((double) num2)); } I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straight forward implementation of calculating the average of integers without an external library. Do you happen to know about such implementation? Thanks!! UPDATE: The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating an average of any set of numbers which might sum to a large number that exceeds the type's max value. Sorry about this confusion. I also changed the question's title to avoid additional confusion. Thanks all!!
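
    One workaround that needs no BigInteger is an incremental (running) mean, which never forms the full sum: avg_k = avg_(k-1) + (x_k - avg_(k-1)) / k. A minimal sketch in Python; the same recurrence could be written as a C# extension method that accumulates in a double:

        def running_mean(values):
            # Incremental mean: never forms the (possibly overflowing) total sum.
            avg = 0.0
            for k, x in enumerate(values, start=1):
                avg += (x - avg) / k
            return avg

        big = 2 ** 63 - 1  # long.MaxValue
        print(running_mean([big - 100, big - 200, big - 300]))
        # ~9.223372036854776e+18 (a double cannot hold all 19 digits exactly anyway)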

    Read the article

  • open flash chart rails x-axis issue

    - by Jimmy
    Hey guys, I am using open flash chart 2 in my rails application. Everything is looking smooth except for the range on my x axis. I am creating a line to represent cell phone plan cost over a specific amount of usage and I'm generate 8 values, 1-5 are below the allowed usage while 6-8 are demonstrations of the cost for usage over the limit. The problem I'm encountering is how to set the range of the X axis in ruby on rails to something specific to the data. Right now the values being displayed are the indexes of the array that I'm giving. When I try to hand a hash to the values the chart doesn't even load at all. So basically I need help getting a way to set the data for my line properly so that it displays correctly, right now it is treating every value as if it represents the x value of the index of the array. Here is a screen shot which may be a better description than what I am saying: http://i163.photobucket.com/albums/t286/Xeno56/Screenshot.png Note that those values are correct just the range on the x-axis is incorrect, it should be something like 100, 200, 300, 400, 500, 600, 700 Code: y = YAxis.new y.set_range(0,100, 20) x_legend = XLegend.new("Usage") x_legend.set_style('{font-size: 20px; color: #778877}') y_legend = YLegend.new("Cost") y_legend.set_style('{font-size: 20px; color: #770077}') chart =OpenFlashChart.new chart.set_x_legend(x_legend) chart.set_y_legend(y_legend) chart.y_axis = y line = Line.new line.text = plan.name line.width = 2 line.color = '#006633' line.dot_size = 2 line.values = generate_data(plan) chart.add_element(line) def generate_data(plan) values = [] #generate below threshold numbers 5.times do |x| usage = plan.usage / 5 * x cost = plan.cost * 10 values << cost end #generate above threshold numbers 3.times do |x| usage = plan.usage + ((plan.usage / 5) * x) cost = plan.cost + (usage * plan.overage) values << cost end return values end

    Read the article

  • Boost.Program_options fixed number of tokens

    - by kloffy
    Boost.Program_options provides a facility to pass multiple tokens via command line arguments as follows: std::vector<int> nums; po::options_description desc("Allowed options"); desc.add_options() ("help", "Produce help message.") ("nums", po::value< std::vector<int> >(&nums)->multitoken(), "Numbers.") ; po::variables_map vm; po::store(po::parse_command_line(argc, argv, desc), vm); po::notify(vm); However, what is the preferred way of accepting only a fixed number of arguments? The only solution I could come is to manually assign values: int nums[2]; po::options_description desc("Allowed options"); desc.add_options() ("help", "Produce help message.") ("nums", "Numbers.") ; po::variables_map vm; po::store(po::parse_command_line(argc, argv, desc), vm); if (vm.count("nums")) { // Assign nums } This feels a bit clumsy. Is there a better solution?

    Read the article

  • Regex query: how can I search PDFs for a phrase where words in that phrase appear on more than one line?

    - by Alison
    I am trying to set up an index page for the weekly magazine I work on. It is to show readers the names of companies mentioned in that week's issue, plus the page numbers they appear on. I want to search all the PDF files for the week, where one PDF = one magazine page (originally made in Adobe InDesign CS3 and Adobe InCopy CS3). I have set up a list of companies I want to search for and, using PowerGREP and using delimited regular expressions, I am able to find most page numbers where a company is mentioned. However, where a company name contains two or more words, the search I am running will not pick up instances where the name appears over more than one line. For example, when looking for "CB Richard Ellis" and "Cushman & Wakefield", I got no result when the text appeared like this: DTZ beat BNP PRE, CB [line break here] Richard Ellis and Cushman & [line break here] Wakefield to secure the contract. [line end here] Could someone advise me on how to write a regular expression that will ignore white space between words and ignore line endings, OR one that will look for the words including all types of white space (i.e. uneven spaces between words; spaces at the end of lines or line endings; and tabs; I am guessing that this info is embedded somehow in PDF files). Here is a sample of the set of terms I have asked PowerGREP to search for: \bCB Richard Ellis\b \bCB Richard Ellis Hotels\b \bCentaur Services\b \bChapman Herbert\b \bCharities Property Fund\b \bChetwoods Architects\b \bChurch Commissioners\b \bClive Emson\b \bClothworkers’ Company\b \bColliers CRE\b \bCombined English Stores Group\b \bCommercial Estates Group\b \bConnells\b \bCooke & Powell\b \bCordea Savills\b \bCrown Estate\b \bCushman & Wakefield\b \bCWM Retail Property Advisors\b [Note that there is a delimited hard return between each \b at the end of each phrase and the beginning of the next phrase.] By the way, I am a production journalist and not usually involved in finding IT-type solutions, and am finding it difficult to get to grips with the technical language on the PowerGREP site. Thanks for your assistance, Alison
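
    The usual cure is to build each term by joining its words with a whitespace class such as \s+, which matches spaces, tabs and line endings alike, instead of a literal space; PowerGREP's regex flavor should accept \s+ just as Python's does. A small sketch of generating and testing such patterns (the sample text is the one from the question):

        import re

        def phrase_pattern(name):
            # Join the words with a whitespace class so the phrase still matches
            # when it wraps onto the next line or has uneven spacing.
            return r"\b" + r"\s+".join(map(re.escape, name.split())) + r"\b"

        text = ("DTZ beat BNP PRE, CB\nRichard Ellis and Cushman &\n"
                "Wakefield to secure the contract.")

        for name in ["CB Richard Ellis", "Cushman & Wakefield"]:
            print(name, "->", bool(re.search(phrase_pattern(name), text)))
        # CB Richard Ellis -> True
        # Cushman & Wakefield -> True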

    Read the article

  • How to write a C program using the fork() system call that generates the Fibonacci sequence in the child process

    - by Ellen
    The problem I am having is that when say for instance the user enters 7, then the display shows: 0 11 2 3 5 8 13 21 child ends. I cannot seem to figure out how to fix the 11 and why is it displaying that many numbers in the sequence! Can anyone help? The number of the sequence will be provided in the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process. Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence. Have the parent invoke the wait() call to wait for the child process to complete before exiting the program. Perform necessary error checking to ensure that a non-negative number is passed on the command line. #include <stdio.h> #include <sys/types.h> #include <unistd.h> int main() { int a=0, b=1, n=a+b,i,ii; pid_t pid; printf("Enter the number of a Fibonacci Sequence:\n"); scanf("%d", &ii); if (ii < 0) printf("Please enter a non-negative integer!\n"); else { pid = fork(); if (pid == 0) { printf("Child is producing the Fibonacci Sequence...\n"); printf("%d %d",a,b); for (i=0;i<ii;i++) { n=a+b; printf("%d ", n); a=b; b=n; } printf("Child ends\n"); } else { printf("Parent is waiting for child to complete...\n"); wait(NULL); printf("Parent ends\n"); } } return 0; }

    Read the article

  • Average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance: using System; using System.Linq; class Program { static void Main(string[] args) { var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 }; try { var avg = items.Average(); Console.WriteLine(avg); } catch (OverflowException ex) { Console.WriteLine("can't calculate that!"); } Console.ReadLine(); } } Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) to the Average extension method, as inspected by .NET Reflector is: public static double Average(this IEnumerable<long> source) { if (source == null) { throw Error.ArgumentNull("source"); } long num = 0L; long num2 = 0L; foreach (long num3 in source) { num += num3; num2 += 1L; } if (num2 <= 0L) { throw Error.NoElements(); } return (((double) num) / ((double) num2)); } I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straight forward implementation of calculating the average of integers without an external library. Do you happen to know about such implementation? Thanks!! UPDATE: The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating an average of any set of numbers which might sum to a large number that exceeds the type's max value. Sorry about this confusion. I also changed the question's title to avoid additional confusion. Thanks all!!

    Read the article

  • Excel Regex, or export to Python? ; "Vlookup" in Python?

    - by victorhooi
    heya, We have an Excel file with a worksheet containing people records. 1. Phone Number Sanitation One of the fields is a phone number field, which contains phone numbers in the format e.g.: +XX(Y)ZZZZ-ZZZZ (where X, Y and Z are integers). There are also some records which have less digits, e.g.: +XX(Y)ZZZ-ZZZZ And others with really screwed up formats: +XX(Y)ZZZZ-ZZZZ / ZZZZ or: ZZZZZZZZ We need to sanitise these all into the format: 0YZZZZZZZZ (or OYZZZZZZ with those with less digits). 2. Fill in Supervisor Details Each person also has a supervisor, given as an numeric ID. We need to do a lookup to get the name and email address of that supervisor, and add it to the line. This lookup will be firstly on the same worksheet (i.e. searching itself), and it can then fallback to another workbook with more people. 3. Approach? For the first issue, I was thinking of using regex in Excel/VBA somehow, to do the parsing. My Excel-fu isn't the best, but I suppose I can learn...lol. Any particular points on this one? However, would I be better off exporting the XLS to a CSV (e.g. using xlrd), then using Python to fix up the phone numbers? For the second approach, I was thinking of just using vlookups in Excel, to pull in the data, and somehow, having it fall through, first on searching itself, then on the external workbook, then just putting in error text. Not sure how to do that last part. However, if I do happen to choose to export to CSV and do it in Python, what's an efficient way of doing the vlookup? (Should I convert to a dict, or just iterate? Or is there a better, or more idiomatic way?) Cheers, Victor
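
    Both halves are compact in Python: one regex pass to normalize the phone formats, and a dict built once from the supervisor rows to play the role of VLOOKUP (a hash lookup rather than an iteration per record). A rough sketch; the two-digit country code handling and every name below are assumptions, not taken from the workbook:

        import re

        def sanitize_phone(raw):
            # '+XX(Y)ZZZZ-ZZZZ' (or messier variants) -> '0YZZZZZZZZ'.
            digits = re.sub(r"\D", "", raw.split("/")[0])  # drop any '/ ZZZZ' tail
            return "0" + digits[2:] if len(digits) > 8 else digits

        print(sanitize_phone("+61(2)9999-8888 / 1234"))  # 0299998888
        print(sanitize_phone("+61(2)999-8888"))          # 029998888

        # The "vlookup": primary sheet first, then the fallback workbook.
        primary  = {101: ("Alice Example", "alice@example.com")}
        fallback = {202: ("Bob Example", "bob@example.com")}

        def lookup_supervisor(sup_id):
            return primary.get(sup_id) or fallback.get(sup_id) or ("not found", "")

        print(lookup_supervisor(202))  # ('Bob Example', 'bob@example.com')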

    Read the article

  • Preg Expression to identify classes/ids in a CSS file that have no contents

    - by dclowd9901
    I'm in the process of updating some old CSS files in our systems, and we have a bunch that have lots of empty classes simply taking up space in the file. I'd love to learn how to write Regular expressions, but I just don't get them. I'm hoping the more I expose myself to them (with a little more cohesive explanation), the more I'll end up understanding them. The Problem That said, I'm looking for an expression that will identify text followed by a '{' (some have spaces in between, and some do not) and if there are no letters or numbers between that bracket and '}' (spaces don't count), it will be identified as a matching string. I suppose I can trim the whitespace out of the doc before I run a regular expression through it, but I don't want to change the basic structure of the text. I'm hoping to return it into a large <textarea>. Bonus points for explaining the characters and their meanings, and also an expression identifying lines in the copy without any text or numbers, as well. I will likely use the final expression in PHP script. tl;dr: Regular Expression to match: .a_class_or #an_id { /* if there aren't any alphanumerics in here, this should be a matching line of text */ }
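
    A pattern along these lines does the job: a selector (anything that is not a brace), an opening brace, then only whitespace or /* ... */ comments before the closing brace, which matches the example above. It is shown with Python's re purely as a sketch; the same pattern should drop into PHP's preg_match_all() with the s modifier. Piece by piece: [^{}]+ matches the selector, \{ and \} are literal braces, \s matches any whitespace including newlines, .*? is a non-greedy "anything", and (?: ... )* repeats the whitespace-or-comment group any number of times.

        import re

        css = ".used { color: red; }\n.a_class_or #an_id {\n/* nothing here */\n}\n.empty2 {}"

        # ([^{}]+)          -> the selector (anything but braces), captured
        # (?:\s|/\*.*?\*/)* -> only whitespace or comments allowed in the body
        empty_rule = re.compile(r"([^{}]+)\{(?:\s|/\*.*?\*/)*\}", re.DOTALL)

        for match in empty_rule.finditer(css):
            print(match.group(1).strip())
        # .a_class_or #an_id
        # .empty2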

    Read the article

  • Assigning console.log to another object (Webkit issue)

    - by Trevor Burnham
    I wanted to keep my logging statements as short as possible while preventing console from being accessed when it doesn't exist; I came up with the following solution: var _ = {}; if (console) { _.log = console.debug; } else { _.log = function() { } } To me, this seems quite elegant, and it works great in Firefox 3.6 (including preserving the line numbers that make console.debug more useful than console.log). But it doesn't work in Safari 4. [Update: Or in Chrome. So the issue seems to be a difference between Firebug and the Webkit console.] If I follow the above with console.debug('A') _.log('B'); the first statement works fine in both browsers, but the second generates a "TypeError: Type Error" in Safari. Is this just a difference between how Firebug and the Safari Web Developer Tools implement console? If so, it is VERY annoying on Apple's Webkit's part. Binding the console function to a prototype and then instantiating, rather than binding it directly to the object, doesn't help. I could, of course, just call console.debug from an anonymous function assigned to _.log, but then I'd lose my line numbers. Any other ideas?

    Read the article

  • mono --aot with MinGW: unknown pseudo-op: `.local'

    - by Jared Updike
    Can I user mono's AOT feature to natively "pre-compile" .NET DLLs (and or EXEs) to make them harder to reverse engineer? If so, how do I get mono/AOT working in Windows 7? (I'm running x64 but the app is targeting x86 explicitly.) I just installed Mono 2.6.3 and MinGW 5.1.6 and I'm trying to AOT compile an exe (or a dll, it doesn't matter). I get screens and screens of error messages: C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:533: Error: junk at end of line, first unrecognized character is `H' C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:539: Error: unknown pseudo-op: `.local' C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:546: Warning: .size pseudo-op used outside of .def/.endef ignored. C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:546: Error: junk at end of line, first unrecognized character is `H' I can open the generated assembly code but I have no idea why the assembler chokes on it: .size HappyForms_TextForm__ctor_string_string_string_bool,.-HappyForms_TextForm__ctor_string_string_string_bool (533) _.Lme_a: .Lme_a: .balign 16 _.Lm_b: .Lm_b: .local HappyForms_TextForm_get_InputValue (539) _HappyForms_TextForm_get_InputValue: HappyForms_TextForm_get_InputValue: .byte 85,139,236,131,236,8,139,69,8,139,128,216,2,0,0,131,236,12,80,139,0,144,144,144,255,144,200,2,0,0,131,196 .byte 16,201,195 .size HappyForms_TextForm_get_InputValue,.-HappyForms_TextForm_get_InputValue (546) (numbers above in parens are line numbers)

    Read the article

  • Recursively adding threads to a Java thread pool

    - by Leith
    I am working on a tutorial for my Java concurrency course. The objective is to use thread pools to compute prime numbers in parallel. The design is based on the Sieve of Eratosthenes. It has an array of n bools, where n is the largest integer you are checking, and each element in the array represents one integer. True is prime, false is non-prime, and the array is initially all true. A thread pool is used with a fixed number of threads (we are supposed to experiment with the number of threads in the pool and observe the performance). A thread is given an integer multiple to process. The thread then finds the first true element in the array that is not a multiple of the thread's integer. The thread then creates a new thread on the thread pool which is given the found number. After a new thread is formed, the existing thread then continues to set all multiples of its integer in the array to false. The main program thread starts the first thread with the integer '2', and then waits for all spawned threads to finish. It then spits out the prime numbers and the time taken to compute. The issue I have is that the more threads there are in the thread pool, the longer it takes, with 1 thread being the fastest. It should be getting faster, not slower! All the stuff on the internet about Java thread pools creates n worker threads, and the main thread then waits for all threads to finish. The method I use is recursive, as a worker can spawn more worker threads. I would like to know what is going wrong, and if Java thread pools can be used recursively.

    Read the article

  • advanced xml? multi level .. php parsing

    - by dave1019
    hi my knowledge of xml and php is somewhat basic but have been able to parse a simple xml document (one level) ~ <numbers> <number>1</number> <number>2</number> </numbers> i'm attempting to parse an xml document from a website providing the latest market prices for gold bullion. I think i'm going to need a paid professional but want to give it a shot the xml file looks like this: <envelope> <message type="MARKET_DEPTH_A" version="0.1"> <market> <pitches> <pitch securityClassNarrative="GOLD" securityId="AUXLN" considerationCurrency="GBP" > <buyPrices> <price actionIndicator="B" quantity="0.153" limit="23477" /> </buyPrices> <sellPrices> <price actionIndicator="S" quantity="0.058" limit="23589" /> </sellPrices> </pitch> </pitches> </market> </message> </envelope> and simply i have no idea how to access the values within the "headings". (whatever the term is) sounds like i'm asking for someone to do it for me, which I don't want, but I don't know what to search for ~ it doesn't look like a regular xml structure to me. thanks!

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what are the best practices, are they good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, according to these numbers apply some complex business rules (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI layer. The Data Layer would access the database The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer The GUI Layer would be a means of communication between the Logic Layer and the User. Now, thinking of this design, I can see that most of the business rules may be implemented in a SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution: Pros: The business logic runs close to the data, meaning less network traffic. Process all data at once, possibly utilizing parallelizm and optimal execution plan. Cons: Scattering of the business logic: some part is here, some part is there. Questionable design solution, may encounter unknown problems. Difficult to implement a progress indicator for the processing task. I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such design? Is it a good thing?

    Read the article

  • BackgroundWorker From ASP.Net Application

    - by Kevin
    We have an ASP.Net application that provides administrators to work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can perform to clean up data for a record (e.g. reformat phone numbers, social security numbers, etc.) When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, the task may take several minutes or longer to complete. So, we want to implement these kinds of tasks using some kind of asynchronous pattern. For example, we want to be able to launch the task, and then use AJAX polling to provide a progress bar and status information. I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this. For example, I understand that the BackgroundWorker will actually use the thread pool from the current application. In my case, the application is an ASP.Net web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours. Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.Net application thread pool be able to handle all of these background jobs efficiently while still performing it's normal request processing? So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach? Thanks and sorry for such a long post! Kevin

    Read the article

  • How to fix a gateway timeout error in PHP?

    - by developer
    I have a PHP file that sends text messages to the mobiles of all the users that I have in a particular table of my database. Now the entries are around 2000 in number and this number will keep on increasing. On my page there is a small form that selects the list of users to whom the message is to be sent from a drop-down, the user writes the text to be sent in a textarea, and then on clicking the submit button the PHP script starts sending the messages to the mobile numbers. Now, while trying to send messages, my browser showed a gateway timeout error, but the script kept on running and the messages were sent to the mobiles, not once but 6 times. I checked my script and my query and all the code is correct. This all happened because of that gateway timeout. Now, does this gateway timeout keep the script running again and again until the browser is closed? Is this the reason that a single message was sent 6 times to the mobile numbers? I mean, how can I prevent my file from hitting this gateway error so that one message is sent only one time to a number?

    Read the article

  • How do I change an attribute in an HTML table's cell if I know the row and column index of the cell?

    - by Mark
    I know nothing about jQuery but am an experienced C++ programmer (not sure if that helps or hurts). I found jQuery code that gives me the row and column index of a cell in an HTML table when a user clicks on that cell. Using such row-column index numbers, I need to change an attribute's value in the previously selected cell and in the cell just clicked. The index numbers are produced and saved with this code: var $trCurrent = 0; // Index of cell selected when page opens var $tdCurrent = 0; // i.e., previously selected cell $(document).ready(function () { $("td").click(function () { // How to clear previously selected cell's attribute here? ('class', 'recent') var oTr = $(this).parents("tr"); $tdCurrent = oTr.children("td").index(this); }); $("tr").click(function () { $trCurrent = $(this)[0].rowIndex; // How to set new attributes here? ('class', 'current'); // and continue work using information from currently selected cell }); }); Any help or hints would be appreciated. I do not even know if this is the way I should get the index of the row and column. Thanks.

    Read the article

  • How can I change the precision of printing with the stl?

    - by mmr
    This might be a repeat, but my google-fu failed to find it. I want to print numbers to a file using the stl with a set number of decimal places, rather than overall precision. So, if I do this: int precision = 16; std::vector<double> thePoint(3); thePoint[0] = 86.3671436; thePoint[1] = -334.8866574; thePoint[2] = 24.2814; ofstream file1(tempFileName, ios::trunc); file1 << std::setprecision(precision) << thePoint[0] << "\\" << thePoint[1] << "\\" << thePoint[2] << "\\"; I'll get numbers like this: 86.36714359999999\-334.8866574\24.28140258789063 What I want is this: 86.37\-334.89\24.28 In other words, truncating at two decimal points. If I set precision to be 4, then I'll get 86.37\-334.9\24.28 i.e., the second number is improperly truncated. I do not want to have to manipulate each number explicitly to get the truncation, especially because I seem to be getting the occasional 9 repeating or 0000000001 or something like that that's left behind. I'm sure there's something obvious, like using printf("%.2f") or something like that, but I'm unsure how to mix that with the stl << and ofstream.

    Read the article

  • SQLDeveloper using over 100MB of PGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32 bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage: SELECT e.SID, e.username, e.status, b.PGA_MEMORY FROM v$session e LEFT JOIN (select y.SID, y.value pga, TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME = 'session pga memory') b ON e.sid=b.sid WHERE (PGA)/1024/1024 > 20 ORDER BY 4 DESC; It seems that the resource usage goes up any time a table is opened in SQLDeveloper, but even when it is closed the memory does not go away. The problem is worse if the table is sorted while it was open as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this? Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.

    Read the article

  • Converting an input text value to a decimal number

    - by vitto
    Hi, I'm trying to work with decimal data in my PHP and MySql practice and I'm not sure how to get an acceptable level of accuracy. I've written a simple function which receives my input text value and converts it to a decimal number ready to be stored in the database. <?php function unit ($value, $decimal_point = 2) { return number_format (str_replace (",", ".", strip_tags (trim ($value))), $decimal_point); } ?> I've resolved something like AbdlBsF5%?nl with some jQuery code for replace and some regex to keep only numbers, dots and commas. In some countries, people use the comma to write decimal numbers, so a number like 72.08 is written as 72,08. I'd like to avoid forcing people to change their usual characters, and I've decided to use jQuery to handle this too. Now, every web developer knows the last validation must be handled by the dynamic page for security reasons. So my question is: should I use something like the unit() function to store data, or should I also check if users insert invalid chars like letters or something else? If I try this and send letters, the query runs without saving the invalid data; I think this isn't bad, but I could easily be wrong because I'm a rookie. What kind of method should I use if I want a number like 99999.99 for my query?

    Read the article

  • Writing to a file in Python inserts null bytes

    - by Javier Badia
    I'm writing a todo list program. It keeps a file with a thing to do per line, and lets the user add or delete items. The problem is that for some reason, I end up with a lot of zero bytes at the start of the file, even though the item is correctly deleted. I'll show you a couple of screenshots to make sure I'm making myself clear. This is the file in Notepad++ before running the program: This is the file after deleting item 3 (counting from 1): This is the relevant code. The actual program is bigger, but running just this part triggers the error. import os TODO_FILE = r"E:\javi\code\Python\todo-list\src\todo.txt" def del_elems(f, delete): """Takes an open file and either a number or a list of numbers, and deletes the lines corresponding to those numbers (counting from 1).""" if isinstance(delete, int): delete = [delete] lines = f.readlines() f.truncate(0) counter = 1 for line in lines: if counter not in delete: f.write(line) counter += 1 f = open(TODO_FILE, "r+") del_elems(f, 3) f.close() Could you please point out where's the mistake?
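
    The zero bytes appear because f.truncate(0) shrinks the file but does not move the file position: after readlines() the position sits at the old end of the file, so the next write lands at that offset and the gap is filled with NUL bytes. Seeking back to the start before truncating fixes it; a minimal sketch of the same function with that change:

        def del_elems(f, delete):
            # Delete the 1-based line numbers in `delete` from the open file `f`.
            if isinstance(delete, int):
                delete = [delete]
            lines = f.readlines()   # position is now at the end of the file
            f.seek(0)               # go back to the start...
            f.truncate()            # ...and truncate from here, so writes start at 0
            for number, line in enumerate(lines, start=1):
                if number not in delete:
                    f.write(line)

        with open("todo.txt", "r+") as f:   # path shortened for the sketch
            del_elems(f, 3)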

    Read the article
