Search Results

Search found 732 results on 30 pages for 'trailing zeros'.

Page 25/30 | < Previous Page | 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Should a Trim method generally be in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database that may have been entered by a user at some point. In my opinion it is better to do such formatting before the data is persisted, so that it is done only once and the data can then be kept in a consistent and reliable state. Unfortunately this is not the case, however, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result, except that the trim will operate on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. As a somewhat philosophical question that may determine the answer, I could ask "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question. (A minimal sketch of the coerce-in-the-domain-layer option follows this entry.) Update: Below are a few links that I thought I should share about the topic of data cleansing. Information service patterns, Part 3: Data cleansing pattern LINQ to SQL - Format a string before saving? How to trim values using Linq to Sql?

    Read the article
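
    For illustration only: the original question is about C#, but the coerce-at-the-domain-boundary option it describes can be sketched in a few lines of Python. The class and property names below are hypothetical, not taken from the question.

    ```python
    class Customer(object):
        """Minimal domain object that normalises its string fields on assignment."""

        def __init__(self, phone_number=""):
            self.phone_number = phone_number  # routed through the setter below

        @property
        def phone_number(self):
            return self._phone_number

        @phone_number.setter
        def phone_number(self, value):
            # Coercive formatting in the domain layer: every caller (data access
            # code, UI, tests) gets the same trimming behaviour, applied once.
            self._phone_number = value.strip() if value is not None else None
    ```

    The trade-off is the one the question names: the data access layer stays a thin mapping layer, but the domain object silently reformats every value assigned to it, not just the ones coming from the inconsistent database.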

  • Introduction to vectorizing in MATLAB - any good tutorials?

    - by Gacek
    I'm looking for any good tutorials on vectorizing (loops) in MATLAB. I have a quite simple algorithm, but it uses two for loops. I know that it should be simple to vectorize it, and I would like to learn how to do it instead of asking you for the solution. But to let you know what problem I have, so you can suggest the best tutorials showing how to solve similar problems, here's the outline of my problem: B = zeros(size(A)); % //A is a given matrix. for i=1:size(A,1) for j=1:size(A,2) H = ... %// take some surrounding elements of the element at position (i,j) (i.e. using a 3x3 mask) B(i,j) = computeSth(H); %// compute something on the selected elements and place it in B end end So, I'm NOT asking for the solution. I'm asking for good tutorials and examples of vectorizing loops in MATLAB. I would like to learn how to do it and do it on my own.

    Read the article

  • What is the problem in the following MATLAB code?

    - by raju
    img=imread('img27.jpg'); %function rectangle=rect_test(img) % edge detection [gx,gy]=gradient(img); gx=abs(gx); gy=abs(gy); g=gx+gy; g=abs(g); % make it 300x300 img=zeros(300); s=size(g); img(1:s(1),1:s(2))=g; figure; imagesc((img)); colormap(gray); title('Edge detection') % take the dct of the image IM=abs(dct2(img)); figure; imagesc((IM)); colormap(gray); title('DCT') % normalize m=max(max(IM)); IM2=IM./m*100; % get rid of the peak size_IM2=size(IM2); IM2(1:round(.05*size_IM2(1)),1:round(.05*size_IM2(2))) = 0; % threshold L=length( find(IM2>20)) ; if( L > 60 ) ret = 1; else ret = 0; end

    Read the article

  • Why does libxml2 quote the starting double slash in CDATA with JavaScript?

    - by Vincenzo
    This is my code: <?php $data = <<<EOL <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <script type="text/javascript"> //<![CDATA[ var a = 123; // JS code //]]> </script> </html> EOL; $dom = new DOMDocument(); $dom->preserveWhiteSpace = false; $dom->formatOutput = false; $dom->loadXml($data); echo '<pre>' . htmlspecialchars($dom->saveXML()) . '</pre>'; This is the result: <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <script type="text/javascript"><![CDATA[ //]]><![CDATA[ var a = 123; // JS code //]]><![CDATA[ ]]></script></html> If and when I remove the DOCTYPE declaration from the XML document, CDATA works properly and the leading/trailing double slashes are not turned into CDATA. What is the problem here? Is this a bug in libxml2? PHP version is 5.2.13 on Linux. Thanks.

    Read the article

  • Iterate with binary structure over numpy array to get cell sums

    - by Curlew
    In the package scipy there is a function to define a binary structure (such as a taxicab (2,1) or a chessboard (2,2)). import numpy from scipy import ndimage a = numpy.zeros((6,6), dtype=numpy.int) a[1:5, 1:5] = 1;a[3,3] = 0 ; a[2,2] = 2 s = ndimage.generate_binary_structure(2,2) # Binary structure #.... Calculate Sum of result_array = numpy.zeros_like(a) What I want is to iterate over all cells of this array with the given structure s. Then I want to apply a function (for example, sum) to the cells covered by the binary structure and store the result in an empty array at the index of the current cell. For example: array([[0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 0], [0, 1, 2, 1, 1, 0], [0, 1, 1, 0, 1, 0], [0, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0]]) # The array a. The value in cell 1,2 is currently one. Given the structure s and an example function such as sum, the value in the resulting array (result_array) becomes 7 (or 6 if the current cell value is excluded). Does anyone have an idea? (A convolution-based sketch follows this entry.)

    Read the article
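
    Assuming a plain neighbourhood sum is the target, one way to get the behaviour described above is to treat the binary structure as convolution weights; for an arbitrary function, scipy's generic_filter takes the same footprint. A minimal sketch:

    ```python
    import numpy as np
    from scipy import ndimage

    a = np.zeros((6, 6), dtype=int)
    a[1:5, 1:5] = 1
    a[3, 3] = 0
    a[2, 2] = 2

    s = ndimage.generate_binary_structure(2, 2)  # 3x3 all-True footprint

    # Sum over each cell's 3x3 neighbourhood, current cell included;
    # cells outside the array count as zero.
    result_incl = ndimage.convolve(a, s.astype(int), mode='constant', cval=0)

    # Excluding the centre cell is just a subtraction of the cell's own value.
    result_excl = result_incl - a

    print(result_incl[1, 2])  # 7, matching the example above
    print(result_excl[1, 2])  # 6

    # General form for any function instead of sum:
    # ndimage.generic_filter(a, np.sum, footprint=s, mode='constant', cval=0)
    ```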

  • Putting together CSV Cells in Excel with a macro

    - by Eric Kinch
    So, I have a macro to export data into CSV format and it's working great (code at the bottom). The problem is the data I am putting into it. When I put the data in question in, it comes out Firstname,Lastname,username,password,description I'd like to change it so I get Firstname Lastname,Firstname,Lastname,username,password,description What I'd like to do is manipulate my existing macro to accomplish this. I'm not so good at VBA, so any input or a shove in the right direction would be fantastic. Thanks! Sub Make_CSV() Dim sFile As String Dim sPath As String Dim sLine As String Dim r As Integer Dim c As Integer r = 1 'Starting row of data sPath = "C:\CSVout\" sFile = "MyText_" & Format(Now, "YYYYMMDD_HHMMSS") & ".CSV" Close #1 Open sPath & sFile For Output As #1 Do Until IsEmpty(Range("A" & r)) 'You can also Do Until r = 17 (get the first 16 cells) sLine = "" c = 1 Do Until IsEmpty(Cells(1, c)) 'Number of Columns - You could use a FOR / NEXT loop instead sLine = sLine & """" & Replace(Cells(r, c), ";", ":") & """" & "," c = c + 1 Loop Print #1, Left(sLine, Len(sLine) - 1) 'Remove the trailing comma r = r + 1 Loop Close #1 End Sub

    Read the article

  • Ruby syntax error: unexpected $end, expecting keyword_end

    - by user2839246
    I am supposed to: Capitalize the first letter of the string. Capitalize every word except articles (the, a, an), conjunctions (and), and prepositions (in). Capitalize i (as in "I am male."). Specify the first word of a string (I actually have no idea what this means. I'm trying to run the spec file to test other functions). Here's my code: class Book def initialize(string) title(string) end def title(string) arts_conjs_preps = %w{ a an the and but or nor for yet so although because since unless despite in to } array = string.downcase.split array.each do |word| if (word == array[0] || word == "i") then word = word.capitalize if arts_conjs_preps !include?(word) then word = word.capitalize end puts array.join(' ') end end puts Book.new("inferno") Ruby says I'm messing up at: puts Book.new("inferno") <--(right after the last line of code) I get exactly the same error message with this test code: def title(string) array = string.downcase.split array.each do |word| if word == array[0] then word = word.capitalize end array.join(' ') end puts title("dante's inferno") The only other Stack Overflow thread regarding this particular syntax error that did not suggest trailing or missing ends or .s as the root of the problem is here. The last comment recommends deleting and recreating the gemset, which sounds scary, and I'm not sure how to do that. Any thoughts? Simple solution? Resources to help? Solution: class Book def initialize(string) title(string) end def title(string) arts_conjs_preps = %w{ a an the and but or nor for yet so although because since unless despite of in to } array = string.downcase.split title = array.map do |word| if (word == array[0] || word == "i") || !arts_conjs_preps.include?(word) word = word.capitalize else word end end puts title.join(' ') end end Book.new("dante's the inferno")

    Read the article

  • I need help editing this data file into SOM_PAK format

    - by Mola
    Hi experts, I am working on a Self Organizing Map (SOM) implementation and I have a microarray dataset which I am trying to read in using the som_read_data function, but I keep getting errors when I edit it to put it in the SOM_PAK form which is recognised by SOM for reading, such as ??? Error using ==> somtoolbox\som_read_data.m Only 69 vector components on input file data line 1 (dimension is 70) Error in ==> SomMainFunction at 3 sD = som_read_data('B_r2.txt'); but when I try to read the data without editing, which is the original file as shown here: http://rapidshare.com/files/376239367/DLBCL.txt.html it indicates "Data read OK", but I have the following error ??? Error using ==> unknown Out of memory. Type HELP MEMORY for your options. Error in ==> somtoolbox\som_bmus.m at 189 Bmus = zeros(dlen,length(which_bmus)); Error in ==> somvis\somvis_p_matrix.m at 41 [dummy dists] = som_bmus (dat, dat, 2:datlen); Error in ==> SomMainFunction at 16 [pheight rad_real perc] = somvis_p_matrix(sM,sD); You can get the datafile from here: http://rapidshare.com/files/376239367/DLBCL.txt.html I need someone to help me correct this data and put it in SOM_PAK format. I have tried getting it into SOM_PAK format, but it still gives me errors:

    Read the article

  • Dealing With Java Default Level Access Specifiers

    - by Tom Tresansky
    I've seen some code in a project recently where some fields in a couple of classes have been using the default access modifier without good reason. It almost looks like a case of "oops, forgot to make these private". Since the classes are used almost exclusively outside of the package they are defined in, the fields are not visible from the calling code, and are treated as private. So the mistake/oversight would not be very noticeable. However, encapsulation is broken. If I wanted to add a new class to the existing package, I could then mess with internal data in objects using fields with default access. So, my questions: Are there any best practices concerning default access specifiers that I should be aware of? Anything that would help prevent this type of accident from re-occurring? Are there any annotations which might say something to the effect of "I really meant for these to be default access"? Using CheckStyle, or any other Eclipse plugins, is there any way to flag instances of default fields, or disallow any not accompanied by, say, a "//default access" comment trailing them?

    Read the article

  • How to handle building and parsing HTTP URL's / URI's / paths in Perl

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in img tags on that page. Given the URL of the original page and the link extracted from the img tag in that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote: sub build_url { my ( $base, $path ) = @_; # if the path is absolute just prepend the domain to it if ($path =~ /^\//) { ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/; return "$base$path"; } my @base = split '/', $base; my @path = split '/', $path; # remove a trailing filename pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/; # check for relative paths my $relcount = $path =~ /(\.\.\/)/g; while ( $relcount-- ) { pop @base; shift @path; } return join '/', @base, @path; } The thing is, I'm surely not the first person solving this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although a core module is preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need. (A sketch of the library-based approach follows this entry.)

    Read the article
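
    As the question suspects, resolving a link against a base URL is a solved problem. On the Perl side the URI distribution on CPAN is the usual suggestion (its new_abs method does this resolution), though that is worth confirming against the CPAN documentation. Purely to illustrate the expected behaviour, here is the same resolution done with Python's standard urllib.parse, using made-up URLs:

    ```python
    from urllib.parse import urljoin

    base = "http://example.com/photos/page1.html"  # hypothetical page URL

    print(urljoin(base, "/images/logo.png"))   # -> http://example.com/images/logo.png
    print(urljoin(base, "thumbs/cat.jpg"))     # -> http://example.com/photos/thumbs/cat.jpg
    print(urljoin(base, "../banners/ad.gif"))  # -> http://example.com/banners/ad.gif
    ```

    The hand-rolled build_url above is essentially re-implementing these resolution rules (RFC 3986, section 5), which is why a library call is the safer route.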

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 X ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall. I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops. for i=1:k [~,m] = size(data{i}); cur_var = ['H' int2str(i)]; %# if i == 1; save('FileName'); end; %# If using a single MAT file and need to create it. eval([cur_var ' = cell(1,k-i);']); for j=i+1:k [~,n] = size(data{j}); eval([cur_var '{i,j} = zeros(m,n,''single'');']); eval([cur_var '{i,j} = compare(data{i},data{j});']); end save(cur_var,cur_var); %# Add '-append' when using a single MAT file. clear(cur_var); end The other thing I have done is to perform the split when mod((i+j-1)/2,max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad because a linear index could be used. Does anyone know/see a better way?

    Read the article

  • Solving a quadratic programming problem using R

    - by user702846
    I would like to solve the following quadratic programming problem using the ipop function from kernlab: min 0.5*x'*H*x + f'*x subject to: A*x <= b Aeq*x = beq LB <= x <= UB where in our example H is a 3x3 matrix, f is 3x1, A is 2x3, b is 2x1, and LB and UB are both 3x1. edit 1 My R code is: library(kernlab) H <- rbind(c(1,0,0),c(0,1,0),c(0,0,1)) f = rbind(0,0,0) A = rbind(c(1,1,1), c(-1,-1,-1)) b = rbind(4.26, -1.73) LB = rbind(0,0,0) UB = rbind(100,100,100) > ipop(f,H,A,b,LB,UB,0) Error in crossprod(r, q) : non-conformable arguments I know from MATLAB that it is something like this: H = eye(3); f = [0,0,0]; nsamples=3; eps = (sqrt(nsamples)-1)/sqrt(nsamples); A=ones(1,nsamples); A(2,:)=-ones(1,nsamples); b=[nsamples*(eps+1); nsamples*(eps-1)]; Aeq = []; beq = []; LB = zeros(nsamples,1); UB = ones(nsamples,1).*1000; [beta,FVAL,EXITFLAG] = quadprog(H,f,A,b,Aeq,beq,LB,UB); and the answer is a 3x1 vector equal to [0.57,0.57,0.57]. However, when I try it in R using the ipop function from the kernlab library, ipop(f,H,A,b,LB,UB,0), I am facing Error in crossprod(r, q) : non-conformable arguments. I appreciate any comments.

    Read the article

  • ndarray field names for both row and column?

    - by Graham Mitchell
    I'm a computer science teacher trying to create a little gradebook for myself using NumPy. But I think it would make my code easier to write if I could create an ndarray that uses field names for both the rows and columns. Here's what I've got so far: import numpy as np num_stud = 23 num_assign = 2 grades = np.zeros(num_stud, dtype=[('assign 1','i2'), ('assign 2','i2')]) #etc gv = grades.view(dtype='i2').reshape(num_stud,num_assign) So, if my first student gets a 97 on 'assign 1', I can write either of: grades[0]['assign 1'] = 97 gv[0][0] = 97 Also, I can do the following: np.mean( grades['assign 1'] ) # class average for assignment 1 np.sum( gv[0] ) # total points for student 1 This all works. But what I can't figure out how to do is use a student id number to refer to a particular student (assume that two of my students have student ids as shown): grades['123456']['assign 2'] = 95 grades['314159']['assign 2'] = 83 ...or maybe create a second view with the different field names? np.sum( gview2['314159'] ) # total points for the student with the given id I know that I could create a dict mapping student ids to indices, but that seems fragile and crufty, and I'm hoping there's a better way than: id2i = { '123456': 0, '314159': 1 } np.sum( gv[ id2i['314159'] ] ) I'm also willing to re-architect things if there's a cleaner design. I'm new to NumPy, and I haven't written much code yet, so starting over isn't out of the question if I'm Doing It Wrong. I am going to need to sum all the assignment points for over a hundred students once a day, as well as run standard deviations and other stats. Plus, I'll be waiting on the results, so I'd like it to run in only a couple of seconds. Thanks in advance for any suggestions. (A labelled-rows sketch using pandas follows this entry.)

    Read the article
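
    If stepping slightly outside plain NumPy is acceptable, labelled rows and columns are exactly what pandas provides. A minimal sketch using the two example student ids from the question (the grade values are made up):

    ```python
    import numpy as np
    import pandas as pd

    student_ids = ['123456', '314159']
    assignments = ['assign 1', 'assign 2']

    grades = pd.DataFrame(np.zeros((len(student_ids), len(assignments)), dtype='i2'),
                          index=student_ids, columns=assignments)

    grades.loc['123456', 'assign 1'] = 97  # label-based assignment
    grades.loc['314159', 'assign 2'] = 83

    print(grades['assign 1'].mean())   # class average for assignment 1
    print(grades.loc['314159'].sum())  # total points for one student
    ```

    Under the hood this is still a NumPy array plus an index, so daily sums and standard deviations over a hundred-odd students stay well inside a couple of seconds.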

  • HTML table manipulation using jQuery

    - by Daniel
    I'm building a site using CakePHP and am currently working on adding data to two separate tables at the same time.. not a problem. The problem I have is that I'm looking to dynamically alter the form that accepts the input values, allowing the click of a button/link to add an additional row of form fields. At the moment I have a table that looks something like this: <table> <thead> <tr> <th>Campus</th> <th>Code</th> </tr> </thead> <tbody> <tr> <td> <select id="FulltimeCourseCampusCode0CampusId" name="data[FulltimeCourseCampusCode][0][campus_id]"> <option value=""></option> <option value="1">Evesham</option> <option value="2">Malvern</option> </select> </td> <td> <input type="text" id="FulltimeCourseCampusCode0CourseCode" name="data[FulltimeCourseCampusCode][0][course_code]"> </td> </tr> </tbody> What I need is for the row within the tbody tag to be replicated, with the minor change of having all the zeros (i.e. such as here FulltimeCourseCampusCode0CampusId and here data[FulltimeCourseCampusCode][0][campus_id]) incremented. I'm very new to jQuery, having done a few minor bits, but nothing this advanced (mostly just copy/paste stuff). Can anyone help? Thank you.

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules: the system has multiple users; we can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable); and I don't mind anyone with superuser access looking at our stored password. Here is what I've got so far: store the password in password.txt; chmod 600 password.txt; the process reads from password.txt when it's needed; the buffer is overwritten with zeros when it's no longer needed. I'm sure there is a better way, though. (A minimal sketch of this read-then-scrub approach follows this entry.)

    Read the article
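
    A minimal sketch of the read-then-scrub approach above, assuming a Python helper is acceptable for the batch job; the file path is hypothetical:

    ```python
    import os

    PASSWORD_FILE = "/path/to/password.txt"  # hypothetical location

    def read_secret(path=PASSWORD_FILE):
        # Refuse to run if the file is group- or world-readable.
        if os.stat(path).st_mode & 0o077:
            raise RuntimeError("password file permissions are too loose")
        with open(path, "rb") as f:
            return bytearray(f.read().strip())

    secret = read_secret()
    try:
        password = secret.decode("ascii")  # hand this to the connect command
        # ... run the database job here ...
    finally:
        for i in range(len(secret)):       # overwrite the mutable buffer
            secret[i] = 0
    ```

    Two caveats: the decoded string cannot be reliably scrubbed (Python strings are immutable), and anything placed on a command line is visible to other users via ps, which is the weakness the question already acknowledges.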

  • Emacs hide/show support for C++ triple-slash Doxygen markup?

    - by jsyjr
    I use Doxygen's triple-slash syntax to markup my C++ code. There are two important cases which arise: 1) block markup comments which are the sole element on the line and may or may not begin flush left; e.g. class foo /// A one sentence brief description of foo. The elaboration can /// continue on for many lines. { ... }; void foo::bar /// A one sentence brief description of bar. The elaboration can /// continue on for many lines. () const { ... } 2) trailing markup comments which always follow some number of C++ tokens earlier on the first line but may still spill over onto subsequent lines; e.g. class foo { int _var1; ///< A brief description of _var1. int _var2; ///< A brief description of _var2 ///< requiring additional lines. } void foo::bar ( int arg1 ///< A brief description of arg1. , int arg2 ///< A brief description of arg2 ///< requiring additional lines. ) const { ... } I wonder what hide/show support exists to deal with these conventions. The most important cases are the block markup comments. Ideally I would like to be able to eliminate these altogether, meaning that I would prefer not to waste a line simply to indicate presence of a folded block markup comment. Instead I would like a fringe marker, a la http://www.emacswiki.org/emacs/hideshowvis.el /john

    Read the article

  • Matlab Coin Toss Simulation

    - by user1772959
    I have to write some code in Matlab that simulates tossing a coin 150 times. I have to count how many times the coin lands on heads and create a vector that gives a running percentage of the heads. Then I have to make a table of the number of trials, random 'flips", and the running percentages of heads. I assume random "flips" means heads or tails for that trial. I also have to create a line graph with trials on the x-axis and probabilities (percentages) on the y-axis. I'm assuming the percentages are just the percentage of getting heads. Sorry if this post was long. I figure giving the details now will make it easier to see what I was trying to do with the code. I didn't create the table or plot yet because I'm not even sure how to code for the actual problem. NUM_TRIALS = 150; trials = 1:NUM_TRIALS; heads = 0; t = rand(NUM_TRIALS,1); for i = trials if (t < 0.5) heads = heads + 1; end z = zeros(NUM_TRIALS,1); percent_h = heads/trials; end

    Read the article

  • cuda 5.0 namespaces for constant memory variable usage

    - by Psypher
    In my program I want to use a structure containing constant variables and keep it on device all long as the program executes to completion. I have several header files containing the declaration of 'global' functions and their respective '.cu' files for their definitions. I kept this scheme because it helps me contain similar code in one place. e.g. all the 'device' functions required to complete 'KERNEL_1' are separated from those 'device' functions required to complete 'KERNEL_2' along with kernels definitions. I had no problems with this scheme during compilation and linking. Until I encountered constant variables. I want to use the same constant variable through all kernels and device functions but it doesn't seem to work. ########################################################################## CODE EXAMPLE ########################################################################### filename: 'common.h' -------------------------------------------------------------------------- typedef struct { double height; double weight; int age; } __CONSTANTS; __constant__ __CONSTANTS d_const; --------------------------------------------------------------------------- filename: main.cu --------------------------------------------------------------------------- #include "common.h" #include "gpukernels.h" int main(int argc, char **argv) { __CONSTANTS T; T.height = 1.79; T.weight = 73.2; T.age = 26; cudaMemcpyToSymbol(d_consts, &T, sizeof(__CONSTANTS)); test_kernel <<< 1, 16 >>>(); cudaDeviceSynchronize(); } --------------------------------------------------------------------------- filename: gpukernels.h --------------------------------------------------------------------------- __global__ void test_kernel(); --------------------------------------------------------------------------- filename: gpukernels.cu --------------------------------------------------------------------------- #include <stdio.h> #include "gpukernels.h" #include "common.h" __global__ void test_kernel() { printf("Id: %d, height: %f, weight: %f\n", threadIdx.x, d_const.height, d_const.weight); } When I execute this code, the kernel executes, displays the thread ids, but the constant values are displayed as zeros. How can I fix this?

    Read the article

  • How to round a number to n decimal places in Java

    - by Alex Spurling
    What I'd like is a method to convert a double to a string which rounds using the half-up method, i.e. if the digit to be dropped is a 5, it always rounds up the previous digit. This is the standard method of rounding most people expect in most situations. I would also like only significant digits to be displayed; that is, there should not be any trailing zeroes. I know one method of doing this is to use the String.format method: String.format("%.5g%n", 0.912385); returns: 0.91239 which is great; however, it always displays numbers with 5 decimal places even if they are not significant: String.format("%.5g%n", 0.912300); returns: 0.91230 Another method is to use DecimalFormat: DecimalFormat df = new DecimalFormat("#.#####"); df.format(0.912385); returns: 0.91238 However, as you can see, this uses half-even rounding, i.e. it will round down if the preceding digit is even. What I'd like is this: 0.912385 -> 0.91239 0.912300 -> 0.9123 What is the best way to achieve this in Java? (A short cross-check of the desired behaviour appears after this entry.)

    Read the article
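
    In Java the usual suggestion for this is BigDecimal, roughly BigDecimal.valueOf(d).setScale(5, RoundingMode.HALF_UP).stripTrailingZeros().toPlainString(), which is worth verifying against the BigDecimal javadoc. Purely as a cross-check of the desired half-up, no-trailing-zeros behaviour, here are the question's two values run through Python's decimal module:

    ```python
    from decimal import Decimal, ROUND_HALF_UP

    def round_half_up(value, places=5):
        """Round half-up to `places` decimals, then drop any trailing zeros."""
        quantum = Decimal('1').scaleb(-places)  # 0.00001 for places=5
        d = Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)
        text = format(d, 'f')                   # plain, non-scientific form
        return text.rstrip('0').rstrip('.') if '.' in text else text

    print(round_half_up(0.912385))  # 0.91239
    print(round_half_up(0.912300))  # 0.9123
    ```

    Going through str(value) keeps the rounding anchored to the decimal literal rather than the underlying binary double, which is often the real culprit when a trailing 5 appears to round the wrong way.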

  • How to achieve the following C++ output formatting?

    - by Yan Cheng CHEOK
    I wish to print out a double according to the following rules: 1) No scientific notation 2) A maximum of 3 decimal places 3) No trailing zeros. For example: 0.01 formatted to "0.01" 2.123411 formatted to "2.123" 2.11 formatted to "2.11" 2.1 formatted to "2.1" 0 formatted to "0" By using .precision(3) and std::fixed, I can only achieve rule 1) and rule 2), but not rule 3): 0.01 formatted to "0.010" 2.123411 formatted to "2.123" 2.11 formatted to "2.110" 2.1 formatted to "2.100" 0 formatted to "0" A code example is below: #include <iostream> int main() { std::cout.precision(3); std::cout << std::fixed << 0.01 << std::endl; std::cout << std::fixed << 2.123411 << std::endl; std::cout << std::fixed << 2.11 << std::endl; std::cout << std::fixed << 2.1 << std::endl; std::cout << std::fixed << 0 << std::endl; getchar(); } Any idea?

    Read the article

  • Random Loss of precision in Python ReadLine()

    - by jackyouldon
    Hi all, we have a process which takes a very large CSV (1.6 GB) and breaks it down into pieces (in this case 3). This runs nightly and normally doesn't give us any problems. When it ran last night, however, the first of the output files had lost precision on the numeric fields in the data. The active ingredient in the script is these lines: while lineCounter <= chunk: oOutFile.write(oInFile.readline()) lineCounter = lineCounter + 1 and the normal output might be something like StringField1; StringField2; StringField3; StringField4; 1000000; StringField5; 0.000054454 etc. On this one occasion, and in this one output file, the numeric fields were all output with 6 zeros at the end, i.e. StringField1; StringField2; StringField3; StringField4; 1000000.000000; StringField5; 0.000000 We are using Python v2.6 (and don't want to upgrade unless we really have to) but we can't afford to lose this data. Does anyone have any idea why this might have happened? If readline is doing some kind of implicit conversion, is there a way to do a binary read, because we really just want this data to pass through untouched? It is very weird to us that this only affected one of the output files generated by the same script, and when it was rerun the output was as expected. Thanks, Jack (A binary-mode copy sketch follows this entry.)

    Read the article
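
    For what it's worth, readline() returns the raw line as a string and performs no numeric conversion, so the reformatting most likely happened before or after this loop rather than inside it. To make the pass-through guarantee explicit, though, the copy can be done in binary mode. A minimal Python 2.6-compatible sketch; the file names and chunk size are made up:

    ```python
    chunk = 500000  # hypothetical number of lines per output piece

    # Copying in binary mode writes exactly the bytes that were read, so the
    # numeric fields cannot be reformatted by this script.
    infile = open('big_input.csv', 'rb')
    outfile = open('piece_1.csv', 'wb')
    try:
        lineCounter = 1
        while lineCounter <= chunk:
            line = infile.readline()
            if not line:
                break  # stop cleanly at end of file
            outfile.write(line)
            lineCounter = lineCounter + 1
    finally:
        infile.close()
        outfile.close()
    ```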

  • Creating a new variable in C from only part of an existing u_char

    - by Alex Kloss
    I'm writing some C code to parse IEEE 802.11 frames, but I'm stuck trying to create a new variable whose length depends on the size of the frame itself. Here's the code I currently have: int frame_body_len = pkt_hdr->len - radio_hdr->len - wifi_hdr_len - 4; u_char *frame_body = (u_char *) (packet + radio_hdr->len + wifi_hdr_len); Basically, the frame consists of a header, a body, and a checksum at the end. I can calculate the length of the frame body by taking the length of the packet and subtracting the length of the two headers that appear before it (radio_hdr->len and wifi_hdr_len respectively), plus 4 bytes at the end for the checksum. However, how can I create the frame_body variable without the trailing checksum? Right now, I'm initializing it with the contents of the packet starting at the position after the two headers, but is there some way to start at that position and end 4 bytes before the end of packet? packet is a pointer to a u_char, if it helps. I'm a new C programmer, so any and all advice about my code you can give me would be much appreciated. Thanks!

    Read the article

  • How do I display core data on second view controller?

    - by jon
    I am working on my first core data iPhone application. I am using a navigation controller, and the root view controller displays 4 rows. Clicking the first row takes me to a second table view controller. However, when I click the back button, repeat the row tap, click the back button again, and tap the row a third time, I get an error. I have been researching this for a week with no success. I can reproduce the error easily: Create a new Navigation-based Application, use Core Data for storage, call it MyTest which creates MyTestAppDelegate and RootViewController. Add new UIViewController subclass, with UITableViewController and xib, call it ListViewController. Copy code from RootViewController.h and .m to ListViewController.h and .m., changing the file names appropriately. To simplify the code, I removed the trailing “_” from all variables. In RootViewController, I added #import ListViewController.h, set up an array to display 4 rows and navigate to ListViewController when clicking the first row. In ListViewController.m, I added #import MyTestAppDelegate.h” and the following code: - (void)viewDidLoad { [super viewDidLoad]; if (managedObjectContext == nil) { managedObjectContext = [(MyTestAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext]; } .. } The sequence that causes the error is tap row, return, tap row, return, tap row - error. managedObjectContext is synthesized for the third time. I appreciate your patience and your help, as this makes no sense to me. ADDENDUM: I may have a partial solution. http://www.iphonedevsdk.com/forum/iphone-sdk-development/41688-accessing-app-delegates-managed-object-context.html If I do not release the managedObjectContext in the .m file, the error goes away. Is that ok or will that cause me issues? - (void)dealloc { [fetchedResultsController release]; // [managedObjectContext release]; [super dealloc]; } ADDENDUM 2: See solution below. Sorry for the formatting issues - this was my first post.

    Read the article
