Search Results

Search found 2134 results on 86 pages for 'numeric limits'.

Page 70/86 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Formvalidator from the iframe.

    - by basit74
    Hello, I have the following form validator function in my document:

        function formValidator(formid) {
            var form = cic.$(formid);
            if (!form) return ('');
            var errors = [];
            var len = form.elements.length;
            for (var elementIdx = 0; elementIdx < len; elementIdx++) {
                var element = form.elements[elementIdx];
                if (!element && !element.getAttribute('validationtype')) return ('');
                switch (element.getAttribute('validationtype')) {
                    case 'text':
                        if (cic.getValue(element).strip() == "")
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'email':
                        if (!cic.isEmail(cic.getValue(element)))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'numeric':
                        if (isNaN(cic.getValue(element).replace(',', '.')))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'confirm':
                        if (cic.getValue(cic.$(element.getAttribute('sourcefield'))) !== cic.getValue(element))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                }
            }
            return (errors.length > 0) ? '<li>' + errors.uniq().join("<li>") : '';
        }

    It works fine. Now I have an iframe in my document, and that iframe contains the form to validate. What would be the best practice for changing this function so that it can validate forms in the document and in the iframe simultaneously? Thanks

    Read the article

  • atol(), atof(), atoi() function behaviours, is there a stable way to convert from/to string/integer

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(). From a blog post I found a tutorial and applied it; here are my results:

        void main() {
            char a[10], b[10];
            puts("Enter the value of a");
            gets(a);
            puts("Enter the value of b");
            gets(b);
            printf("%s+%s=%ld and %s-%s=%ld", a, b, (atol(a)+atol(b)), a, b, (atol(a)-atol(b)));
            getch();
        }

    There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. To see the difference between the three, I checked this code:

        main() {
            char a[] = {"2545.965"};
            printf("atol=%ld\t atof=%f\t atoi=%d\t\n", atol(a), atof(a), atoi(a));
        }

    The output will be:

        atol=2545  atof=2545.965000  atoi=2545

    Now change it to char a[] = {"heyyou"}; and run the program; the following will be the output (why? is there any solution to convert pure strings to an integer?):

        atol=0  atof=0  atoi=0

    The string should contain a numeric value. Now modify the program to char a[] = {"007hey"}; the output in this case (tested on Red Hat) will be:

        atol=7  atof=7.000000  atoi=7

    so the functions have taken only the 007, not the remaining part (why?). Now consider char a[] = {"hey007"}; the output of the program will be:

        atol=0  atof=0.000000  atoi=0

    So I just want to convert my strings to numbers and then back to the same text. I played with these functions and, as you see, I'm getting really interesting results; why is that? Are there any other functions to convert from/to string/integer and vice versa?
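
    For comparison, a minimal sketch using strtol() from <stdlib.h>: unlike atoi()/atol(), it reports where parsing stopped, so "2545", "007hey" and "hey007" can be told apart. The helper name parse_long and the sample strings are illustrative only, not part of the original question.

        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>

        /* Try to parse an entire string as a long; return 0 on success. */
        static int parse_long(const char *s, long *out)
        {
            char *end;
            errno = 0;
            long v = strtol(s, &end, 10);
            if (errno != 0)                 /* overflow or underflow */
                return -1;
            if (end == s || *end != '\0')   /* nothing parsed, or trailing junk */
                return -1;
            *out = v;
            return 0;
        }

        int main(void)
        {
            const char *tests[] = { "2545", "007hey", "hey007" };
            for (int i = 0; i < 3; i++) {
                long v;
                if (parse_long(tests[i], &v) == 0)
                    printf("%-8s -> %ld\n", tests[i], v);
                else
                    printf("%-8s -> not a pure number\n", tests[i]);
            }
            return 0;
        }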

    Read the article

  • How can I get this week's dates in Perl?

    - by ABach
    I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the number of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use DateTime;

        # Calculate numeric value of today and the
        # target day (Monday = 1, Sunday = 7); the
        # target, in this case, is Monday, since that's
        # when I want the week to start
        my $today_dt = DateTime->now;
        my $today    = $today_dt->day_of_week;
        my $target   = 1;

        # Create DateTime copies to act as the "bookends"
        # for the date range
        my ($start, $end) = ($today_dt->clone(), $today_dt->clone());

        if ($today == $target) {
            # If today is the target, "start" is already set;
            # we simply need to set the end date
            $end->add( days => 6 );
        }
        else {
            # Otherwise, we calculate the Monday preceding today
            # and the Sunday following today
            my $delta = ($target - $today + 7) % 7;
            $start->add( days => $delta - 7 );
            $end->add( days => $delta - 1 );
        }

        # I clone the DateTime object again because, for some reason,
        # I'm wary of using $start directly...
        my $cur_date = $start->clone();

        while ($cur_date <= $end) {
            my $date_ymd = $cur_date->ymd;
            print "$date_ymd\n";
            $cur_date->add( days => 1 );
        }

    As mentioned, this works, but is it the quickest or most efficient way? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very much appreciated.
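
    As a cross-language illustration of the same start-of-week arithmetic (a sketch of the algorithm in plain C, not a DateTime-specific answer): struct tm numbers weekdays with 0 = Sunday, so (tm_wday + 6) % 7 is the number of days since Monday, and mktime() normalizes the shifted dates.

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t now = time(NULL);
            struct tm monday = *localtime(&now);

            /* tm_wday: 0 = Sunday ... 6 = Saturday; shift so Monday maps to 0 */
            int days_since_monday = (monday.tm_wday + 6) % 7;
            monday.tm_mday -= days_since_monday;   /* step back to this week's Monday */

            for (int i = 0; i < 7; i++) {
                struct tm day = monday;
                day.tm_mday += i;
                day.tm_isdst = -1;
                mktime(&day);                      /* normalize month/year rollover */

                char buf[11];
                strftime(buf, sizeof buf, "%Y-%m-%d", &day);
                puts(buf);
            }
            return 0;
        }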

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site, as well as database changes.

    Requirements:

    * should be in the database
    * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but it either involves a lot of tables (one per event) so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table, with an event type field determining which parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)...

    Example of entries (the usual things):

        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries via automated process @datetime
        etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-table-per-event solution more than the generic one. Any thoughts?

    Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thanks.

    Stack Overflow says: "the question you're asking appears subjective and is likely to be closed." My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to two, so hopefully one of them will prevail, unless there already is an established solution for these kinds of things.

    Read the article

  • Is it possible to include a Sexpr before the expression has been evaluated in Sweave / R ?

    - by PaulHurleyuk
    Hello, I'm writing a Sweave document, and I want to include a small section that details the R and package versions, the platform, and how long it took to evaluate the document; however, I want to put this in the middle of the document! I was using a \Sexpr{elapsed} to do this (which didn't work), but thought that if I put the code printing elapsed in a chunk that evaluates at the end, I could then include that chunk halfway through, which also fails. My document looks something like this:

        %
        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{longtable}
        \usepackage{geometry}
        \usepackage{Sweave}
        \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in}
        \begin{document}

        <<label=start, echo=FALSE, include=FALSE>>=
        startt<-proc.time()[3]
        @

        Text and Sweave Code in here
        %
        This document was created on \today, with \Sexpr{print(version$version.string)}
        running on a \Sexpr{print(version$platform)} platform.
        It took approx sec to process.

        <<>>=
        <<elapsed>>
        @

        More text and Sweave code in here

        <<label=bye, include=FALSE, echo=FALSE>>=
        odbcCloseAll()
        endt<-proc.time()[3]
        elapsedtime<-as.numeric(endt-startt)
        @

        <<label=elapsed, include=FALSE, echo=FALSE>>=
        print(elapsedtime)
        @

        \end{document}

    But this doesn't seem to work (amazingly!). Does anyone know how I could do this? Thanks, Paul.

    Read the article

  • What's the pattern for a JSONP method that was initiated from a jQuery plugin?

    - by michielvoo
    I'm writing a jQuery plugin to render data retrieved from another domain in an element on the page. I follow the typical pattern for my jQuery plugin:

        $(selector).Plugin(options);

    In the plugin I get external data using jQuery.getScript(url, [success]). The external data source allows me to define the name of a method and it will wrap the data in a call to that method (JSONP):

        $.getScript("http://www.example.com/data?callback=global_callback", instance_callback);

    This effectively results in:

        <script type="text/javascript">
            global_callback(data);
        </script>

    The scope of global_callback limits what the Plugin instance can do with the data, and the global_callback method has no knowledge of the selector or options that the plugin was instantiated with. I was thinking that global_callback would just store the data, and the plugin would retrieve the data in instance_callback. But I need to make sure that instance_callback will retrieve the correct data; I foresee a problem with multiple instances of the Plugin. How can I handle this? Thanks!

    Read the article

  • more problems with the LAG function in SAS

    - by SAS_learner
    The following bit of SAS code is supposed to read from a dataset which contains a numeric variable called Radvalue. Radvalue is the temperature of a radiator: if a radiator is switched off but its temperature increases by 2 or more, that's a sign that it has come on, and if it is on but its temperature decreases by 2 or more, that's a sign that it has gone off. Radstate is a new variable in the dataset which indicates for every observation whether the radiator is on or off, and it's this I'm trying to fill in automatically for the whole dataset.

    So I'm trying to use the LAG function, initialising the first row (which doesn't have a dif_radvalue) and then applying the algorithm I just described from row 2 onwards. Any idea why the columns Radstate and l_radstate come out completely blank? Thanks ever so much!! Let me know if I haven't explained the problem clearly.

        Data work.heating_algorithm_b;
          Input ID Radvalue;
          Datalines;
        1  15.38
        2  15.38
        3  20.79
        4  33.47
        5  37.03
        ;

        DATA temp.heating_algorithm_c;
          SET temp.heating_algorithm_b;
          DIF_Radvalue = Radvalue - lag(Radvalue);
          l_Radstate = lag(Radstate);

          if missing(dif_radvalue) then do;
            dif_radvalue = 0;
            radstate = "off";
          end;
          else if l_Radstate = "off" & DIF_Radvalue > 2 then Radstate = "on";
          else if l_Radstate = "on" & DIF_Radvalue < -2 then Radstate = "off";
          else Radstate = l_Radstate;
        run;

    Read the article

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

    * 2 characters long
    * 1st character is always numeric (0-9)
    * 2nd character is (0-9) but also includes "X"
    * values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until the total 110 values were inserted. My code (just to accomplish it, not to minimize it or make it efficient) was:

        use strict;
        use DBI;

        my ($db1,$sql,$sth,%dbattr);
        %dbattr=(ChopBlanks => 1,RaiseError => 0);
        $db1=DBI->connect('DBI:mysql:','','',\%dbattr);

        my @code;

        for(0..9)
        {
            $code[0]=$_;
            for(0..9)
            {
                $code[1]=$_;
                insert(@code);
            }
            insert($code[0],"X");
        }

        sub insert
        {
            my $skip=0;
            foreach(@_)
            {
                if($skip==0)
                {
                    $sql="insert into table (code) values ('".$_[0].$_[1]."');";
                    $sth=$db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                }
                else
                {
                    $skip--;
                }
            }
        }

        exit;

    I'm just interested to see a really succinct and precise version of this logic.
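
    Not a golfed Perl answer, but a cross-language sketch of the underlying logic in C (added for illustration; the table and column names are the hypothetical ones from the question): the 110 codes are simply 10 possible first characters times 11 possible second characters, so two plain loops over those character sets are enough.

        #include <stdio.h>

        int main(void)
        {
            const char second[] = "0123456789X";        /* 11 possible second characters */

            for (int first = 0; first <= 9; first++)    /* 10 possible first characters  */
                for (int j = 0; second[j] != '\0'; j++)
                    printf("insert into table (code) values ('%d%c');\n", first, second[j]);

            return 0;                                   /* 10 * 11 = 110 statements */
        }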

    Read the article

  • Structuring input of data for localStorage

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor as part of his research. The goal is to convert it from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging into his code might take some time.

    Question: I need to store files and state (last placement of cursor and active file). I have already done so by implementing the recommendations in another Stack Overflow thread, but I would like your input on how to structure the content.

    My current solution is a hashmap-like solution with JavaScript objects:

        files = {};
        // later, saving
        files[fileName] = data;

    And then storing in localStorage using some recommendations:

        localStorage.setObject(files);

    Currently I'm also considering using some type of numeric id, so that names can be changed without any hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?

    Read the article

  • Standard and reliable mouse-reporting with GLUT

    - by Victor
    Hello! I'm trying to use GLUT (freeglut) in my OpenGL application, and I need to register some callbacks for mouse wheel events. I managed to dig out a fairly undocumented function: api documentation. But the man page and the API entry for this function both state the same thing:

        Note: Due to lack of information about the mouse, it is impossible to implement this
        correctly on X at this time. Use of this function limits the portability of your
        application. (This feature does work on X, just not reliably.) You are encouraged to
        use the standard, reliable mouse-button reporting, rather than wheel events.

    Fair enough, but how do I use this standard, reliable mouse-button reporting? And how do I know which is the standard? Do I just use glutMouseFunc() and use button values like 4 and 5 for the scroll up and down values respectively, say if 1, 2 and 3 are the left, middle and right buttons? Is this the reliable method?

    Bonus question: it seems the `xev' tool is reporting different values for my buttons. My mouse buttons are numbered from 1 to 5 with xev, but GLUT is reporting buttons from 0 to 4, i.e. an off-by-one. Is this common?
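
    A minimal sketch of the button-callback approach, assuming a typical freeglut build on X11 where the wheel arrives through glutMouseFunc() as buttons 3 and 4 (one less than the 4 and 5 that xev reports, matching the off-by-one observed above). The exact numbering is platform- and driver-dependent, so treat these constants as an assumption to verify on your own setup.

        #include <GL/freeglut.h>
        #include <stdio.h>

        /* Assumed mapping: X wheel buttons 4/5 show up in GLUT as 3/4. */
        enum { WHEEL_UP = 3, WHEEL_DOWN = 4 };

        static void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glutSwapBuffers();
        }

        static void mouse(int button, int state, int x, int y)
        {
            if (state != GLUT_DOWN)          /* wheel "clicks" arrive as press events */
                return;

            if (button == WHEEL_UP)
                printf("wheel up at (%d, %d)\n", x, y);
            else if (button == WHEEL_DOWN)
                printf("wheel down at (%d, %d)\n", x, y);
            else
                printf("button %d at (%d, %d)\n", button, x, y);
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("wheel demo");
            glutDisplayFunc(display);
            glutMouseFunc(mouse);
            glutMainLoop();
            return 0;
        }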

    Read the article

  • Varchar columns: Nullable or not.

    - by NYSystemsAnalyst
    The database development standards in our organization state that varchar fields should not allow null values; they should have a default value of an empty string (""). I know this makes querying and concatenation easier, but today one of my coworkers questioned me about why that standard only exists for varchar types and not other data types (int, datetime, etc). I would like to know if others consider this to be a valid, defensible standard, or if varchar should be treated the same as fields of other data types.

    I believe this standard is valid for the following reason: an empty string and a null value, though technically different, are conceptually the same. An empty, zero-length string is a string that does not exist; it has no value. However, a numeric value of 0 is not the same as NULL. For example, if a field called OutstandingBalance has a value of 0, it means there is $0.00 remaining. However, if the same field is NULL, that means the value is unknown. On the other hand, a field called CustomerName with a value of "" is basically the same as a value of NULL, because both represent the non-existence of the name.

    I read somewhere that an analogy for an empty string vs. NULL is that of a blank CD vs. no CD. However, I believe this to be a false analogy, because a blank CD still physically exists and still has physical data space that does not have any meaningful data written to it. Basically, I believe a blank CD is the equivalent of a string of blank spaces (" "), not an empty string. Therefore, I believe a string of blank spaces to be an actual value separate from NULL, but an empty string to be the absence of value, conceptually equivalent to NULL.

    Please let me know if my beliefs regarding variable-length strings are valid, or please enlighten me if they are not. I have read several blogs / arguments regarding this subject, but still do not see a true conceptual difference between NULLs and empty strings.

    Read the article

  • Sql Server Compact Edition version error.

    - by Tim
    I am working on a .NET ClickOnce project that uses SQL Server 2005 Compact Edition to synchronize remote data through the use of a merge replication. This application has been live for nearly a year now, and while we encounter occasional synchronization errors, things run quite smoothly for the most part. Yesterday a user reported an error that I have never seen before and have yet to find any information for online. Many users synchronize every night, and I haven't received error reports from anyone else, so this issue must be isolated to this particular user / client machine. Here are the full details of the error:

        - Error Code : 80004005
        - Message : The message contains an unexpected replication operation code. The version of SQL Server Compact Edition Client Agent and SQL Server Compact Edition Server Agent should match. [ replication operation code = 31 ]
        - Minor Error : 28526
        - Source : Microsoft SQL Server Compact Edition
        - Numeric Parameters : 31

    One interesting thing that I've found is that his data does get synchronized to the server, so this error must occur after the upload completes. I have yet to determine whether or not changes at the server are still being downloaded to his subscription.

    Thinking that maybe there was some kind of version conflict going on, I had a remote desktop session with this user last night and uninstalled both the application and the SQL Server Compact Edition prerequisite, then reinstalled both from our ClickOnce publication site. I also removed his existing local database file so that upon synchronization an entirely new subscription would be issued to him. Still his errors continue.

    I suppose the error may be somewhat general, and the text in the error message stating that the versions should match may not necessarily reflect the problem at hand. This site contains the only official reference to this error that I've been able to find, and it offers no more detail than the error message itself. Has anyone else encountered this error? Or does anyone know enough about SQL Compact to have a better guess as to what is going on here? Any help / suggestions will be greatly appreciated!

    Read the article

  • MVC4 - how to validate a drop down list?

    - by Grant Roy
    I have a .NET MVC4 model / view with a number of [Required] fields, one of which is selected via a drop-down list, "Content_CreatedBy" (the first code block below). Client-side validation fires on all fields except the DDL (although server-side validation does not allow an empty DDL selection). I have tried validating on the DDL text as well as its numeric value, but neither fires on the client side. Can anyone see what I am doing wrong? Thanks

    Model:

        [Required]
        [Display(Name = "Author")]
        [ForeignKey("ContentContrib")]
        [Range(1, 99, ErrorMessage = "Author field is required.")]
        public virtual int Content_CreatedBy { get; set; }

        [Required]
        [Display(Name = "Date")]
        public virtual DateTime Content_CreatedDate { get; set; }

        [Required]
        [DataType(DataType.MultilineText)]
        [Display(Name = "Source / Notes ")]
        [StringLength(10, MinimumLength = 3)]
        public virtual string Content_Sources { get; set; }

        [Required]
        [Display(Name = "Keywords")]
        [StringLength(50, MinimumLength = 3)]
        public virtual string Content_KeyWords { get; set; }

    View:

        <div class="editor-label">
            @Html.LabelFor(model => model.Content_CreatedBy, new { @class="whitelabel"})
        </div>
        <div class="editor-field">
            @Html.DropDownList("Content_CreatedBy", String.Empty)
            @Html.EditorFor(model => model.Content_CreatedBy)
            @Html.ValidationMessageFor(model => model.Content_CreatedBy)
        </div>

    Read the article

  • Save result of for loop in a vector

    - by hendrik
    I think I'm just too tired to see the mistake. I wrote a function to get the maximal value for two data sets from a for loop:

        plot_zu <- function(x) {
          for (i in 1:x) {
            z = data_raw[grep(a[i], data_raw$Gene.names), ]
            b = data_raw_ace[grep(a[i], data_raw_ace$Gene.names), ]
            p <- vector("numeric", length(1:length(a)))
            p[i] <- max(z$t_test_diff)
            return(p)
          }
        }

    To picture it: a is a vector of names, and the data sets (data_raw and data_raw_ace) are filtered by it. In the end I would like to have all maximum values of the column t_test_diff in a vector. After that I want to add the t_test_diff column values from data_raw_ace as well. The problem is that I get this:

        [1] 1.210213 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        [8] 0.000000 0.000000

    So there is a problem with brackets or something, but I cannot see it (only the first value fits). Sorry for not having a good example, but I think it is understandable and an easy question to solve. If I'm not explaining my problem clearly enough, I will add an example. Thanks a lot! Grateful, Hendrik

    Read the article

  • Populate a column as a result of performing math on two columns in a jqGrid.

    - by HacksawOnRye
    Background: Our company selected a workflow tool that has some "interesting" UI restrictions. jqGrid has been identified as one of the best ways to overcome these restrictions. Consequently, the answers to this question need to be restricted to functionality available within the jqGrid space. It pains me to pose this question, and I know you will be tempted to go down a million other paths, most of which we have already traveled before. :(

    Question: Can jqGrid populate a column as a result of performing math on two other columns? The source of those two columns is in our control, so we can "guarantee" that numeric data will be returned. Also, the result is something that we simply want to display on demand, not store yet. In the example below, is there a function that allows the 'total' column to be populated from the calculation 'amount' * 'tax'?

    Example jqGrid javascript:

        jQuery("#list3").jqGrid({
            url: 'server.php?q=2',
            datatype: "json",
            colNames: ['Inv No', 'Date', 'Client', 'Amount', 'Tax', 'Total', 'Notes'],
            colModel: [
                {name: 'id', index: 'id', width: 60, sorttype: "int"},
                {name: 'invdate', index: 'invdate', width: 90, sorttype: "date"},
                {name: 'name', index: 'name', width: 100},
                {name: 'amount', index: 'amount', width: 80, align: "right", sorttype: "float"},
                {name: 'tax', index: 'tax', width: 80, align: "right", sorttype: "float"},
                {name: 'total', index: 'total', width: 80, align: "right", sorttype: "float"},
                {name: 'note', index: 'note', width: 150, sortable: false}
            ],
            rowNum: 20,
            rowList: [10, 20, 30],
            pager: '#pager3',
            sortname: 'id',
            viewrecords: true,
            sortorder: "desc",
            loadonce: true,
            caption: "Load Once Example"
        });

    Read the article

  • gridview image column problem

    - by jame
    I work in VS2005, C#, ASP.NET. My SQL syntax is:

        if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Images]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
            drop table [dbo].[Images]
        GO

        CREATE TABLE [dbo].[Images] (
            [ID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
            [ImageName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
            [Image] [image] NULL
        ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
        GO

    I want to show the values of this Images table in a GridView. I do that, but the image value does not show. The ASP.NET markup for the GridView is:

        <asp:GridView ID="GridView2" runat="server" AutoGenerateColumns="False">
            <Columns>
                <asp:BoundField DataField="ImageName" HeaderText="ImageName" />
                <asp:TemplateField HeaderText="Image">
                    <EditItemTemplate>
                        <asp:TextBox ID="TextBox1" runat="server" Text='<%# Eval("Image") %>'></asp:TextBox>
                    </EditItemTemplate>
                    <ItemTemplate>
                        <asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("Image") %>' />
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    I write the code below in the Page_Load event; I want the Images table values to be shown when the page loads:

        string strSQL = "Select * From Images";
        DataTable dt = clsDB.getDataTable(strSQL);
        this.GridView2.DataSource = dt;
        this.GridView2.DataBind();

    Why do I not get the image in the image column of the GridView? What is the problem, and how do I solve it?

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational part I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID varchar(15),
            CONTACT_ID varchar(15),
            NAME varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since all is kept in the Identity Map in memory.

    Question: What is a good approach to address this? (Avoiding unnecessary db round trips.)

    Some ideas: retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

        SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, then I would need to do the 1st insert, get the ID and then update the children (Address.CONTACT_IDs), all in the current transaction context.

    Read the article

  • Why can't decimal numbers be represented exactly in binary?

    - by Barry Brown
    There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.

    What I don't understand is why, from a mathematical perspective, the numbers to the right of the decimal point are any more "special" than the ones to the left.

    For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place, and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers; they're just numbers. By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact. What's going on?

    Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:

        ... 1000 100 10 1 1/10 1/100 ...

    In binary, they would be:

        ... 8 4 2 1 1/2 1/4 1/8 ...

    There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
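
    A brief aside on the arithmetic behind this (added for illustration, not part of the original post): a fraction has a terminating expansion in a given base exactly when its reduced denominator divides a power of that base. 6.10 = 61/10 and 10 = 2 * 5; base 2 contains no factor of 5, so the binary expansion repeats (decimal 0.1 is binary 0.000110011001100...), whereas 610, 6100 and so on are integers and need no fractional digits at all. The tiny C program below just makes the rounding visible.

        #include <stdio.h>

        int main(void)
        {
            /* 0.1 and 6.10 cannot be stored exactly in binary floating point,
               so printing extra digits reveals the nearest representable value. */
            printf("0.1  is stored as %.20f\n", 0.1);
            printf("6.10 is stored as %.20f\n", 6.10);

            /* 61.0 and 610.0 are integers, hence exactly representable. */
            printf("61   is stored as %.20f\n", 61.0);
            printf("610  is stored as %.20f\n", 610.0);
            return 0;
        }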

    Read the article

  • Detection of negative integers using bit operations

    - by Nawaz
    One approach to check whether a given integer is negative could be this (using bit operations):

        int num_bits = sizeof(int) * 8;    //assuming 8 bits per byte!
        int sign_bit = given_int & (1 << (num_bits-1));    //sign_bit is either 1 or 0

        if ( sign_bit ) {
            cout << "given integer is negative" << endl;
        }
        else {
            cout << "given integer is positive" << endl;
        }

    The problem with this solution is that the number of bits per byte might not be 8; it could be 9, 10, 11, even 16 or 40 bits per byte. A byte doesn't necessarily mean 8 bits! Anyway, this problem can be easily fixed by writing:

        //CHAR_BIT is defined in limits.h
        int num_bits = sizeof(int) * CHAR_BIT;    //no assumption.

    It seems fine now. But is it really? Is this Standard-conformant? What if the negative integer is not represented as 2's complement? What if its representation is a binary numeration system that doesn't require only negative integers to have a 1 in the most significant bit? Can we write such code that will be both portable and standard-conformant?

    Related topics: "Size of Primitive data types"; "Why is a boolean 1 byte and not 1 bit of size?"
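
    A hedged sketch (in C here) of the alternatives usually suggested for this: for a signed value, plain x < 0 already answers the question without touching the representation, and if the bit itself is wanted, inspecting the value after conversion to unsigned avoids shifting into the sign bit, with the bit position taken from limits.h rather than an assumed 8-bit byte. This is an illustration, not a claim about what the Standard guarantees for exotic representations.

        #include <limits.h>
        #include <stdio.h>

        /* Portable: the comparison is defined for every signed representation. */
        static int is_negative(int x)
        {
            return x < 0;
        }

        /* Bit-level view: convert to unsigned first, so no signed shift is involved. */
        static unsigned int high_bit(int x)
        {
            unsigned int u = (unsigned int)x;
            unsigned int mask = 1u << (sizeof(int) * CHAR_BIT - 1);
            return u & mask;
        }

        int main(void)
        {
            int samples[] = { -5, 0, 42, INT_MIN };
            for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
                printf("%11d  negative=%d  high bit set=%d\n",
                       samples[i], is_negative(samples[i]), high_bit(samples[i]) != 0);
            return 0;
        }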

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed because of a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it, but I am afraid that this will be significantly slower. So, using Java libraries, what is the most efficient method of doing this?

    Edit: For external sort, the usual method is to break a large file up into several chunk files, sort each of the chunks, and then treat the sorted files like buffers: pop the top item from each file, and the smallest of all of those is the global minimum. Then continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.

    Read the article

  • Best way to determine variable type and treat each one differently in F#

    - by James Black
    I have a function that will create a select where clause, but right now everything has to be a string. I would like to look at the variable passed in, determine what type it is, and then treat it properly. For example, numeric values don't have single quotes around them, an option type will either be null or have some value, and a boolean will actually be zero or one.

        member self.BuildSelectWhereQuery (oldUser:'a) =
            let properties = List.zip oldUser.ToSqlValuesList sqlColumnList
            let init = false, new StringBuilder()
            let anyChange, (formatted:StringBuilder) =
                properties
                |> Seq.fold (fun (anyChange, sb) (oldVal, name) ->
                    match(anyChange) with
                    | true -> true, sb.AppendFormat(" AND {0} = '{1}'", name, oldVal)
                    | _ -> true, sb.AppendFormat("{0} = '{1}'", name, oldVal)
                ) init
            formatted.ToString()

    Here is one entity:

        type CityType() =
            inherit BaseType()
            let mutable name = ""
            let mutable stateId = 0

            member this.Name
                with get() = name
                and set restnameval = name <- restnameval

            member this.StateId
                with get() = stateId
                and set stateidval = stateId <- stateidval

            override this.ToSqlValuesList = [this.Name; this.StateId.ToString()]

    So, if name was some other value besides a string, or stateId could be optional, then I have two changes to make:

    1. How do I modify ToSqlValuesList to carry the variable, so I can tell the variable type?
    2. How do I change my select function to handle this?

    I am thinking that I need a new function that does the processing, but what is the best FP way to do this, rather than using something like typeof?

    Read the article

  • PHP Function needed for GENERIC sorting of a recordset array

    - by donbriggs
    Somebody must have come up with a solution for this by now. I wrote a PHP class to display a recordset as an HTML table/datagrid, and I wish to expand it so that we can sort the datagrid by whichever column the user selects. In the example data below, we may need to sort the recordset array by the Name, Shirt, Assign, or Age fields. I will take care of the display part; I just need help with sorting the data array.

    As usual, I query a database to get a result, iterate through the result, and put the records into an associative array, so we end up with an array of arrays (see below). I need to be able to sort by any column in the dataset. However, I will not know the column names at design time, nor will I know whether the columns will hold string or numeric values. I have seen a ton of solutions to this, but I have not seen a GOOD and GENERIC solution.

    Can somebody please suggest a way that I can sort the recordset array that is GENERIC and will work on any recordset? Again, I will not know the field names or data types at design time. The array presented below is ONLY an example.

        Array
        (
            [0] => Array
                (
                    [name] => Kirk
                    [shrit] => Gold
                    [assign] => Bridge
                )
            [1] => Array
                (
                    [name] => Spock
                    [shrit] => Blue
                    [assign] => Bridge
                )
            [2] => Array
                (
                    [name] => Uhura
                    [shrit] => Red
                    [assign] => Bridge
                )
            [3] => Array
                (
                    [name] => Scotty
                    [shrit] => Red
                    [assign] => Engineering
                )
            [4] => Array
                (
                    [name] => McCoy
                    [shrit] => Blue
                    [assign] => Sick Bay
                )
        )

    Read the article

  • How to get the second sub element of every element in a list in R

    - by PaulHurleyuk
    I know I've come across this problem before, but I'm having a bit of a mental block at the moment, and as I can't find it on SO, I'll post it here so I can find it next time.

    I have a dataframe that contains a field representing an ID label. This label has two parts, an alpha prefix and a numeric suffix. I want to split it apart and create two new fields with these values in.

        df <- structure(list(lab = c("N00", "N01", "N02", "B00", "B01", "B02",
            "Z21", "BA01", "NA03")), .Names = "lab", row.names = c(NA, -9L),
            class = "data.frame")

        df$pre <- strsplit(df$lab, "[0-9]+")
        df$suf <- strsplit(df$lab, "[A-Z]+")

    Which gives:

           lab pre   suf
        1  N00   N  , 00
        2  N01   N  , 01
        3  N02   N  , 02
        4  B00   B  , 00
        5  B01   B  , 01
        6  B02   B  , 02
        7  Z21   Z  , 21
        8  BA01  BA , 01
        9  NA03  NA , 03

    So, the first strsplit works fine, but the second gives a list, each element of which has two parts (an empty string and the result I want), and it stuffs them both into the dataframe column. How can I select the second sub-element from each element of the list? (Or is there a better way to do this?) Thanks, Paul.

    Read the article

  • SQL Server 2005 - Enabling both Named Pipes & TCP/IP protocols?

    - by Clinemi
    We have a SQL Server 2005 database, and currently all our users are connecting to it via the TCP/IP protocol. The SQL Server Configuration Manager allows you to "enable" both Named Pipes and TCP/IP connections at the same time. Is this a good idea?

    My question is not whether we should use named pipes instead of TCP/IP, but whether there are problems associated with enabling both. One of our client's IT guys says that enabling database communication with both protocols will limit the bandwidth that either protocol can use, to something like 50% of the total. I would think that the bandwidth that TCP/IP could use would be directly tied (inversely) to the amount of traffic that Named Pipes (or any of the other types of traffic) was occupying on the network at that moment. However, this IT person is indicating that the fact that we have enabled two protocols on the server artificially limits the bandwidth that TCP/IP can use.

    Is this correct? I did Google searches but could not come up with an answer to this question. Any help would be appreciated.

    Read the article
