Search Results

Search found 2074 results on 83 pages for 'arbitrary precision'.

  • Convert arbitrary length to a value between -1.0 and 1.0?

    - by TheDarkIn1978
    How can I convert a length into a value in the range -1.0 to 1.0? Example: my stage is 440px in length and accepts mouse events. I would like to click in the middle of the stage, and rather than an output of X = 220, I'd like it to be X = 0. Similarly, I'd like the real X = 0 to become X = -1.0 and the real X = 440 to become X = 1.0. I don't have access to the stage, so I can't simply center-register it, which would make this process a lot easier. Also, it's not possible to dynamically change the actual size of my stage, so I'm looking for a formula that will translate the mouse's real X coordinate of the stage to evenly fit within a range from -1 to 1.
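
    The translation being asked for is a single linear transform of the mouse coordinate; a minimal sketch (Python, using the 440px stage width from the question):

        def to_signed_unit(x, width=440.0):
            # Map x in [0, width] onto [-1.0, 1.0], with the centre landing on 0.0.
            return x / (width / 2.0) - 1.0

        assert to_signed_unit(0) == -1.0
        assert to_signed_unit(220) == 0.0
        assert to_signed_unit(440) == 1.0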

    Read the article

  • Haskell: How to compose `not` with a function of arbitrary arity?

    - by Hynek -Pichi- Vychodil
    When I have a function of a type like f :: (Ord a) => a -> a -> Bool f a b = a > b I would like to make a function which wraps it with not, e.g. a function like this: g :: (Ord a) => a -> a -> Bool g a b = not $ f a b I can make a combinator like n f = (\a -> \b -> not $ f a b) but it doesn't behave the way I expect: *Main> let n f = (\a -> \b -> not $ f a b) n :: (t -> t1 -> Bool) -> t -> t1 -> Bool *Main> :t n f n f :: (Ord t) => t -> t -> Bool *Main> let g = n f g :: () -> () -> Bool What am I doing wrong? And, as a bonus question, how can I do this for functions with more and fewer parameters, e.g. t -> Bool t -> t1 -> Bool t -> t1 -> t2 -> Bool t -> t1 -> t2 -> t3 -> Bool
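
    For comparison only (Python rather than Haskell, so it sidesteps the typing issue entirely): in a language with variadic arguments, the "wrap any predicate with not" helper the question describes is a one-liner, which may help clarify the behaviour being asked for.

        def negate(predicate):
            # Wrap a predicate of any arity so its boolean result is flipped.
            def wrapped(*args, **kwargs):
                return not predicate(*args, **kwargs)
            return wrapped

        greater = lambda a, b: a > b
        not_greater = negate(greater)
        print(not_greater(1, 2))            # True, since 1 > 2 is False
        print(negate(lambda x: x == 0)(5))  # True, works for one argument too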

    Read the article

  • How to gather arbitrary length list data in ASP.NET MVC.

    - by C. Ross
    I need to gather a list of items associated with another item from my user in an ASP.NET MVC project. I would like to have a controller action like the one below. [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(int x, int y, IEnumerable<int> zKeys) { //Do stuff here } How can I set up my form to pass data in this way? If data of this particular form can't be provided, what's the next best way to pass this type of information in ASP.NET MVC?

    Read the article

  • How to read an arbitrary number of values using std::copy?

    - by Miro Kropacek
    Hi, I'm trying to code the opposite action to this: std::ostream outs; // properly initialized of course std::set<int> my_set; // ditto outs << my_set.size(); std::copy( my_set.begin(), my_set.end(), std::ostream_iterator<int>( outs ) ); It should be something like this: std::istream ins; std::set<int>::size_type size; ins >> size; std::copy( std::istream_iterator<int>( ins ), std::istream_iterator<int>( ins ) ???, std::inserter( my_set, my_set.end() ) ); But I'm stuck on the 'end' iterator -- input iterators can't use std::advance, nor can I use two streams with the same source... Is there an elegant way to solve this? Of course I can use a for loop, but maybe there's something nicer :)
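
    As a side note, the size-prefixed round trip described above can be sketched outside C++ as follows (Python, purely illustrative of the "read the count, then exactly that many values" idea; the idiomatic C++ answer will revolve around a counted copy such as std::copy_n or a plain loop):

        import io

        def dump_set(values, out):
            # Mirror the C++ snippet: element count first, then the elements.
            out.write(str(len(values)) + " ")
            out.write(" ".join(str(v) for v in sorted(values)))

        def load_set(inp):
            # Read the count, then exactly that many values -- no end iterator needed.
            tokens = iter(inp.read().split())
            size = int(next(tokens))
            return {int(next(tokens)) for _ in range(size)}

        buf = io.StringIO()
        dump_set({3, 1, 2}, buf)
        buf.seek(0)
        print(load_set(buf))  # {1, 2, 3}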

    Read the article

  • In Cocoa (or maybe GUI development in general) how do you specify an arbitrary number of things tile

    - by RankWeis
    I'm new to creating GUIs; everything I've done up until this point has used the command line. I'm trying to create a port of Minesweeper to the Macintosh, as an experiment, and I've got the CLI working, but I'm running into walls everywhere with the GUI. The first thing it seems I have to do, however, is be able to tile n x m 'boxes' for the grid - and I'm not sure how to do that. The information is ready to be handed to it, but I don't know where to do it, or how. Also, if anyone has any recommendations for sites/Cocoa development books, feel free to drop them in here... Thanks!

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I ran into a situation that I have run into quite a few times. Someone encounters a machine and the question arises: "Is there a DirectX 11 card in this machine?". Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7, and Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways for figuring out if you have a DirectX11 card, so here are the approaches that you can use, with a bonus right at the end of the post. Run DxDiag WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which, unfortunately, only tells you on the "System" tab what is the highest version of DirectX installed on your machine. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things. Use the Microsoft hardware page There is a Microsoft Windows 7 compatibility center that lists all hardware (tip: use the advanced search) and you could try and locate your device there… good luck. Use Wikipedia or the hardware vendor's website Use the Wikipedia page for the vendor cards, for both NVIDIA and AMD. Often this information will also be in the specifications for the cards on the IHV site, but it is nice that Wikipedia has a single page per vendor that you can search etc. There is a column in the tables for API support where you can see the DirectX version. Check if it is one of these recommended DX11 cards You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable. Some AMD (aka ATI) cards Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570 Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790 High end (capable of double precision): Radeon 5850, 5870, 6950, 6970 Single precision APUs: AMD E-Series APUs AMD A-Series APUs Some NVIDIA cards Low end, inexpensive DX11 hardware: GeForce GT430, GT 440, GT520, GTS 450 Quadro 400, 600 Mid-range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti Quadro 2000 High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595 Quadro 4000, 5000, 6000 Tesla C2050, C2070, C2075 Get the DirectX SDK and run DirectX Caps Viewer Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer", the filename is DXCapsViewer.exe). It will list all your devices (emulated, and real hardware ones) under the first node. Expand the hardware entries and then expand again the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11 which means it supports DirectCompute and C++ AMP. In the following screenshot of one of my old laptops, the card only goes to feature level 10. Run a utility from the web that just tells you! Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. 
However that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11 capable cards. That is exactly what the development lead for C++ AMP has done and he describes and shares that utility at this post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Floating point conversion from Fixed point algorithm

    - by Viks
    Hi, I have an application which is using 24-bit fixed point calculation. I am porting it to hardware which does support floating point, so for speed optimization I need to convert all fixed point based calculations to floating point based calculations. This code snippet calculates the mantissa: for(i=0;i<8207;i++) { // Do n^8/7 calculation and store // it in mantissa and exponent, scaled to // fixed point precision. } So this calculation converts an integer to a mantissa and exponent scaled to fixed point precision (23 bits). When I tried converting it to float, by dividing the mantissa part by the precision bits and subtracting the precision bits from the exponent part, it really doesn't work. Please help by suggesting a better way of doing it.
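
    For reference while untangling that, the usual fixed-point/floating-point conversion with 23 fractional bits is just a scale by 2^23; a minimal sketch under that assumption (the mantissa/exponent packing in the original code may well differ):

        FRACTION_BITS = 23
        SCALE = 1 << FRACTION_BITS  # 2**23

        def fixed_to_float(raw):
            # Interpret a raw integer as a fixed-point value with 23 fractional bits.
            return raw / SCALE

        def float_to_fixed(value):
            return int(round(value * SCALE))

        x = 1.142857  # example value
        print(fixed_to_float(float_to_fixed(x)))  # ~1.142857, error below 2**-23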

    Read the article

  • What Determines the Default Setting of the x87 FPU Control Word?

    - by Rick Regan
    What determines the default setting of the x87 FPU control word -- specifically, the precision control field? Does the compiler set it based on the target processor? Is there a compiler option to change it? Using Microsoft Visual C++ 2008 Express Edition on an Intel Core Duo processor, the default setting for the precision control field is "01b", meaning double (53-bit) precision. I'm wondering -- why is the default not "11b", or extended (64-bit) precision? (I know I can change it using _controlfp.)

    Read the article

  • Please help me to interpret the Naive Bayes result in Weka

    - by resmi
    Can anybody please help me to interpret the following result, generated in Weka for classification using Naive Bayes? Please explain clearly what Normal Distribution, Mean, StandardDev, WeightSum and Precision are. Please help me; I am new to Weka. ** Naive Bayes Classifier Class Normal: Prior probability = 0.5 1374195_at: Normal Distribution. Mean = 218.06 StandardDev = 6.0572 WeightSum = 3 Precision = 36.34333334 1373315_at: Normal Distribution. Mean = 1142.58 StandardDev = 21.1589 WeightSum = 3 Precision = 126.95333339999999
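
    For the numeric attributes shown, each Mean/StandardDev pair describes a normal (Gaussian) distribution fitted to that attribute for that class; a rough sketch of how a likelihood would be read off such a distribution (illustrative only -- not Weka's exact internals, and the WeightSum/Precision fields are not modelled here):

        import math

        def normal_likelihood(x, mean, std_dev):
            # Density of x under a normal distribution with the given mean and std dev.
            coeff = 1.0 / (math.sqrt(2.0 * math.pi) * std_dev)
            return coeff * math.exp(-((x - mean) ** 2) / (2.0 * std_dev ** 2))

        # e.g. likelihood of observing 220 for attribute 1374195_at under class Normal
        print(normal_likelihood(220.0, mean=218.06, std_dev=6.0572))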

    Read the article

  • MPFR Rounding 0.9999 to 1?

    - by Silmaersti
    I'm attempting to store the value 0.9999 in an mpfr_t variable, but 0.9999 is rounded to 1 (or some other value != 0.9999) during storage, no matter the rounding mode (GMP_RNDD, GMP_RNDU, GMP_RNDN, GMP_RNDZ). So what's the best method to store 0.9999 in an mpfr_t variable? Is it possible? Here is my test program; it prints "buffer is: 1" instead of the wanted "buffer is: 0.9999": int main() { size_t precision = 4; mpfr_t mpfrValue; mpfr_init2(mpfrValue, precision); mpfr_set_str(mpfrValue, "0.9999", 10, GMP_RNDN); char *buffer = (char*)malloc((sizeof(char) * precision) + 3); mp_exp_t exponent; mpfr_get_str(buffer, &exponent, 10, precision, mpfrValue, GMP_RNDN); printf("buffer is: %s\n", buffer); free(buffer); mpfr_clear(mpfrValue); return 0; } Thanks for any help !
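
    One detail worth noting when reading the snippet: mpfr_init2 takes its precision in bits, so precision = 4 leaves only four significand bits. A quick back-of-the-envelope check (plain Python, nothing MPFR-specific) of what four bits can represent near 1.0:

        significand_bits = 4
        # Just below 1.0, representable values are spaced 2**-significand_bits apart,
        # so the nearest neighbours of 0.9999 are 15/16 and 1.0 itself.
        spacing = 2.0 ** -significand_bits   # 0.0625
        nearest_below_one = 1.0 - spacing    # 0.9375
        print(nearest_below_one)
        print(abs(0.9999 - 1.0) < abs(0.9999 - nearest_below_one))  # True: rounds to 1.0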

    Read the article

  • Fortran severe (40) Error... Help?!

    - by Taka
    I can compile but when I run I get this error "forrtl: severe (40): recursive I/O operation, unit -1, file unknown" if I set n = 29 or more... Can anyone help with where I might have gone wrong? Thanks. PROGRAM SOLUTION IMPLICIT NONE ! Variable Declaration INTEGER :: i REAL :: dt DOUBLE PRECISION :: st(0:9) DOUBLE PRECISION :: stmean(0:9) DOUBLE PRECISION :: first_argument DOUBLE PRECISION :: second_argument DOUBLE PRECISION :: lci, uci, mean REAL :: exp1, n REAL :: r, segma ! Get inputs WRITE(*,*) 'Please enter number of trials: ' READ(*,*) n WRITE(*,*) dt=1.0 segma=0.2 r=0.1 ! For n Trials st(0)=35.0 stmean(0)=35.0 mean = stmean(0) PRINT *, 'For ', n ,' Trials' PRINT *,' 1 ',st(0) ! Calculate results DO i=0, n-2 first_argument = r-(1/2*(segma*segma))*dt exp1 = -(1/2)*(i*i) second_argument = segma*sqrt(dt)*((1/sqrt(2*3.1416))*exp(exp1)) st(i+1) = st(i) * exp(first_argument+second_argument) IF(st(i+1)<=20) THEN stmean(i+1) = 0.0 st(i+1) = st(i) else stmean(i+1) = st(i+1) ENDIF PRINT *,i+2,' ',stmean(i+1) mean = mean+stmean(i+1) END DO ! Output results uci = mean+(1.96*(segma/sqrt(n))) lci = mean-(1.96*(segma/sqrt(n))) PRINT *,'95% Confidence Interval for ', n, ' trials is between ', lci, ' and ', uci PRINT *,'' END PROGRAM SOLUTION

    Read the article

  • Representing a number in a byte array (Java programming)

    - by Mark Roberts
    I'm trying to represent the port number 9876 (or 0x2694 in hex) in a two-byte array: class foo { public static void main (String args[]) { byte[] sendData = new byte[1]; sendData[0] = 0x26; sendData[1] = 0x94; } } But I get a warning about possible loss of precision: foo.java:5: possible loss of precision found : int required: byte sendData[1] = 0x94; ^ 1 error How can I represent the number 9876 in a two-byte array without losing precision?
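
    For reference, the two hex constants in the snippet really are the high and low 8 bits of 9876; here is that arithmetic checked in Python (illustrative only -- the Java warning itself comes from 0x94 being 148, which does not fit Java's signed byte range of -128..127 without an explicit (byte) cast):

        port = 9876
        high, low = port >> 8, port & 0xFF
        print(hex(high), hex(low))  # 0x26 0x94
        print(low)                  # 148, i.e. outside -128..127
        print(high == 0x26 and low == 0x94)  # True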

    Read the article

  • Is there any use for Bash scripting anymore?

    - by Precision
    I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past. My question is: now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc., why does anyone write Bash scripts? Is there something I'm missing? I know my Linux box has Perl and Python. Are they not ubiquitous enough? Is there really something that's easier to do in Bash than in some other high-level language?

    Read the article

  • Unable to sync time using `ntpdate`, error: "no server suitable for synchronization found"

    - by William Ting
    My ntp.conf file: user@pc[0][07:37:40]:/etc$ cat /etc/ntp.conf idriftfile /var/lib/ntp/ntp.drift server 0.pool.ntp.org server 1.pool.ntp.org server 2.pool.ntp.org server pool.ntp.org Command output: user@pc[0][07:37:24]:/etc$ sudo ntpdate -dv pool.ntp.org 18 Jun 07:37:35 ntpdate[10737]: ntpdate [email protected] Tue Apr 19 07:15:05 UTC 2011 (1) Looking for host pool.ntp.org and service ntp host found : conquest.kjsl.com transmit(198.137.202.16) transmit(216.45.57.38) transmit(64.6.144.6) transmit(198.137.202.16) transmit(216.45.57.38) transmit(64.6.144.6) transmit(198.137.202.16) transmit(216.45.57.38) transmit(64.6.144.6) transmit(198.137.202.16) transmit(216.45.57.38) transmit(64.6.144.6) transmit(198.137.202.16) transmit(216.45.57.38) transmit(64.6.144.6) 198.137.202.16: Server dropped: no data 216.45.57.38: Server dropped: no data 64.6.144.6: Server dropped: no data server 198.137.202.16, port 123 stratum 0, precision 0, leap 00, trust 000 refid [198.137.202.16], delay 0.00000, dispersion 64.00000 transmitted 4, in filter 4 reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 transmit timestamp: d1a71a93.1f16c1e3 Sat, Jun 18 2011 7:37:39.121 filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 delay 0.00000, dispersion 64.00000 offset 0.000000 server 216.45.57.38, port 123 stratum 0, precision 0, leap 00, trust 000 refid [216.45.57.38], delay 0.00000, dispersion 64.00000 transmitted 4, in filter 4 reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 transmit timestamp: d1a71a93.524a05dd Sat, Jun 18 2011 7:37:39.321 filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 delay 0.00000, dispersion 64.00000 offset 0.000000 server 64.6.144.6, port 123 stratum 0, precision 0, leap 00, trust 000 refid [64.6.144.6], delay 0.00000, dispersion 64.00000 transmitted 4, in filter 4 reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 transmitted 4, in filter 4 reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 transmit timestamp: d1a71a93.524a05dd Sat, Jun 18 2011 7:37:39.321 filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 delay 0.00000, dispersion 64.00000 offset 0.000000 server 64.6.144.6, port 123 stratum 0, precision 0, leap 00, trust 000 refid [64.6.144.6], delay 0.00000, dispersion 64.00000 transmitted 4, in filter 4 reference time: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 originate timestamp: 00000000.00000000 Thu, Feb 7 2036 0:28:16.000 transmit timestamp: d1a71a93.857c6fbd Sat, Jun 18 2011 7:37:39.521 filter delay: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 delay 0.00000, dispersion 64.00000 offset 0.000000 18 Jun 07:37:40 ntpdate[10737]: no server suitable for synchronization found

    Read the article

  • Time passage arithmetic explanation

    - by Cyber Axe
    I ported this from http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5 some time ago. However, I'm now wanting to modify it, changing it from floating point to fixed point maths for enhanced efficiency (for those who are going to talk about premature optimization and whatnot: I want to have my entire engine in fixed point, both as a learning process for me and so I can port code more easily to systems in the future that don't have native floating point, such as ARM CPUs). My initial conversion to fixed point just resulted in the cycling getting stuck on either the first or last frame of the cycle. Plus, it would be nice to understand better how it works so I can add more options and so forth in the future; my maths, however, sucks, and the comments are limited, so I don't really know how the maths works for determining the frame it should use (cycleAmount). I was also a beginner when I ported it, as I had no idea about the difference between floating point and integer maths and whatnot. So, in summary, my question is: can anyone give an explanation of the arithmetic used for determining cycleAmount (which determines the "frame" of the cycle)? This is the working floating point maths version of the code: public final void cycle(Colour[] sourceColours, double timeNow, double speedAdjust) { // Cycle all animated colour ranges in palette based on timestamp. sourceColours = sourceColours.clone(); int cycleSize; double cycleRate; double cycleAmount; Cycle cycle; for (int i = 0, len = cycles.length; i < len; ++i) { cycle = cycles[i]; cycleSize = (cycle.HIGH - cycle.LOW) + 1; cycleRate = cycle.RATE / (int) (CYCLE_SPEED / speedAdjust); cycleAmount = 0; if (cycle.REVERSE < 3) { // Standard Cycle cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); if (cycle.REVERSE < 1) { cycleAmount = cycleSize - cycleAmount; // If below 1 make sure its not reversed. } } else if (cycle.REVERSE == 3) { // Ping-Pong cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize << 1); if (cycleAmount >= cycleSize) { cycleAmount = (cycleSize * 2) - cycleAmount; } } else if (cycle.REVERSE < 6) { // Sine Wave cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); cycleAmount = Math.sin((cycleAmount * 3.1415926 * 2) / cycleSize) + 1; if (cycle.REVERSE == 4) { cycleAmount *= (cycleSize / 4); } else if (cycle.REVERSE == 5) { cycleAmount *= (cycleSize >> 1); } } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } if (USE_BLEND_SHIFT) { blendShiftColours(sourceColours, cycle, cycleAmount); } else { shiftColours(sourceColours, cycle, cycleAmount); } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } } colours = sourceColours; } // This utility function allows for variable precision floating point modulus. private double DFLOAT_MOD(final double d, final double b) { return (Math.floor(d * PRECISION) % Math.floor(b * PRECISION)) / PRECISION; }

    Read the article

  • Interesting fact #123423

    - by Tim Dexter
    Question from a customer on an internal mailing list, succinctly answered by RTF Template God, Hok-Min. Q: What's the upper limit for a sum calculation in terms of the largest number BIP can handle? A: Internally, the XSL-T processor uses double precision. Therefore the upper limit and precision will be the same as double (IEEE 754 double-precision binary floating-point format, binary64): approximately 16 significant decimal digits, max is 1.7976931348623157 x 10^308. So, now you know :)
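
    The limits quoted are easy to confirm against any IEEE 754 double implementation; for example (Python used here purely as a cross-check):

        import sys

        print(sys.float_info.max)       # 1.7976931348623157e+308
        print(sys.float_info.mant_dig)  # 53-bit significand
        print(sys.float_info.dig)       # 15 decimal digits always round-trip safely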

    Read the article

  • SQL SERVER – Solution – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I had posted a quick puzzle and I had received a wonderful response to the same. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal Run the following code in SSMS: DECLARE @mymoney MONEY; SET @mymoney = 12345.67; SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt; GO The above code will give the following error: Msg 8115, Level 16, State 8, Line 3 Arithmetic overflow error converting money to data type numeric. Why, and what is the solution? The solution is as follows: DECLARE @mymoney MONEY; SET @mymoney = 12345.67; SELECT CAST(@mymoney AS DECIMAL(7,2)) MoneyInt; GO There were more than 20 valid answers. Here is the reason. The Decimal data type is defined as Decimal (Precision, Scale), in other words Decimal (Total digits, Digits after decimal point). Precision includes Scale. So Decimal (5,2) actually means we can have 3 digits before the decimal and 2 digits after the decimal. To accommodate 12345.67 one needs higher precision. The correct answer would be DECIMAL (7,2) as it can hold all the seven digits. Here is the list of the experts who got the correct answer, and I encourage all of you to read the same over here. Fbncs Piyush Srivastava Dheeraj Abhishek Anil Gurjar Keval Patel Rajan Patel Himanshu Patel Anurodh Srivastava aasim abdullah Paulo R. Pereira Chintak Chhapia Scott Humphrey Alok Chandra Shahi Imran Mohammed SHIVSHANKER The very first answer was provided by Fbncs, and Dheeraj had a very interesting comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
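
    To see the overflow at a glance: precision counts every digit and scale counts only the fractional ones, so the largest value each declaration can hold is simple arithmetic (sketched here outside SQL):

        def decimal_max(precision, scale):
            # Largest value a DECIMAL(precision, scale) column can store.
            return 10 ** (precision - scale) - 10 ** -scale

        print(decimal_max(5, 2))  # 999.99   -- cannot hold 12345.67
        print(decimal_max(7, 2))  # 99999.99 -- can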

    Read the article

  • YouTube: Promotional AgroSense Movie

    - by Geertjan
    Here's a cool YouTube promotional movie on AgroSense created by Ordina in the Netherlands. AgroSense is an open source Java system for the precision agriculture industry, which won the IT Environment Award in the Netherlands last week: If your understanding of Dutch limits your appreciation of the movie above, here's a rough translation, together with the names of the speakers in the movie: Precision agriculture, an innovative form of agriculture in which local variations in soil, crop, and atmosphere are taken into account, is the high-tech sustainable agriculture of tomorrow. The use of fertilizer, water, and energy can in this way be significantly reduced. "If, ten or twenty years from now, we are to continue having our agricultural industry in good shape, and in a continuing state of health, we'll need to register and work with data because if we want to enable crops to provide higher value, we'll need to create higher levels of transparency throughout the agriculture chain." Lenus Hamster, farmer in Nieuwolda Groningen "Industry is becoming increasingly data intensive. By combining pragmatic usefulness with innovative sustainability, AgroSense offers the Netherlands the possibility to continue being a leading player in the agrofood sector." Art Lighthart, Architect at Ordina AgroSense offers an open source solution in which all services for precision agriculture are brought together. In 2012, co-operation is being sought with organizations to make AgroSense available to around 10,000 Dutch farmers in the arable crop sector. By the way, the last sentence above implies the NetBeans Platform will be used by around 10,000 Dutch farmers.

    Read the article

  • boost::function & boost::lambda again

    - by John Dibling
    Follow-up to post: http://stackoverflow.com/questions/2978096/using-width-precision-specifiers-with-boostformat I'm trying to use boost::function to create a function that uses lambdas to format a string with boost::format. Ultimately what I'm trying to achieve is using width & precision specifiers for strings with format. boost::format does not support the use of the * width & precision specifiers, as indicated in the docs: Width or precision set to asterisk (*) are used by printf to read this field from an argument. e.g. printf("%1$d:%2$.*3$d:%4$.*3$d\n", hour, min, precision, sec); This class does not support this mechanism for now. so such precision or width fields are quietly ignored by the parsing. so I'm trying to find other ways to accomplish the same goal. Here is what I have so far, which isn't working: #include <string> #include <boost\function.hpp> #include <boost\lambda\lambda.hpp> #include <iostream> #include <boost\format.hpp> #include <iomanip> #include <boost\bind.hpp> int main() { using namespace boost::lambda; using namespace std; boost::function<std::string(int, std::string)> f = (boost::format("%s") % boost::io::group(setw(_1*2), setprecision(_2*2), _3)).str(); std::string s = (boost::format("%s") % f(15, "Hello")).str(); return 0; } This generates many compiler errors: 1>------ Build started: Project: hacks, Configuration: Debug x64 ------ 1>Compiling... 1>main.cpp 1>.\main.cpp(15) : error C2872: '_1' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(69) : boost::lambda::placeholder1_type &boost::lambda::`anonymous-namespace'::_1' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(43) : boost::arg<I> `anonymous-namespace'::_1' 1> with 1> [ 1> I=1 1> ] 1>.\main.cpp(15) : error C2664: 'std::setw' : cannot convert parameter 1 from 'boost::lambda::placeholder1_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_2' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(70) : boost::lambda::placeholder2_type &boost::lambda::`anonymous-namespace'::_2' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(44) : boost::arg<I> `anonymous-namespace'::_2' 1> with 1> [ 1> I=2 1> ] 1>.\main.cpp(15) : error C2664: 'std::setprecision' : cannot convert parameter 1 from 'boost::lambda::placeholder2_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_3' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(71) : boost::lambda::placeholder3_type &boost::lambda::`anonymous-namespace'::_3' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(45) : boost::arg<I> `anonymous-namespace'::_3' 1> with 1> [ 1> I=3 1> ] 1>.\main.cpp(15) : error C2660: 'boost::io::group' : function does not take 3 arguments 1>.\main.cpp(15) : error C2228: left of '.str' must have class/struct/union 1>Build log was saved at "file://c:\Users\john\Documents\Visual Studio 2005\Projects\hacks\x64\Debug\BuildLog.htm" 1>hacks - 7 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== My fundamental understanding of boost's lambdas and functions is probably lacking. How can I get this to work?

    Read the article

  • Improving performance of a particle system (OpenGL ES)

    - by Jason
    I'm in the process of implementing a simple particle system for a 2D mobile game (using OpenGL ES 2.0). It's working, but it's pretty slow. I start getting frame rate battering after about 400 particles, which I think is pretty low. Here's a summary of my approach: I start with point sprites (GL_POINTS) rendered in a batch just using a native float buffer (I'm in Java-land on Android, so that translates as a java.nio.FloatBuffer). On GL context init, the following are set: GLES20.glViewport(0, 0, width, height); GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); GLES20.glEnable(GLES20.GL_CULL_FACE); GLES20.glDisable(GLES20.GL_DEPTH_TEST); Each draw frame sets the following: GLES20.glEnable(GLES20.GL_BLEND); GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA); And I bind a single texture: GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle); GLES20.glUniform1i(mUniformTextureHandle, 0); Which is just a simple circle with some blur (and hence some transparency) http://cl.ly/image/0K2V2p2L1H2x Then there are a bunch of glVertexAttribPointer calls: mBuffer.position(position); mGlEs20.glVertexAttribPointer(mAttributeRGBHandle, valsPerRGB, GLES20.GL_FLOAT, false, stride, mBuffer); ...4 more of these Then I'm drawing: GLES20.glUniformMatrix4fv(mUniformProjectionMatrixHandle, 1, false, Camera.mProjectionMatrix, 0); GLES20.glDrawArrays(GLES20.GL_POINTS, 0, drawCalls); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0); My vertex shader does have some computation in it, but given that they're point sprites (with only 2 coordinate values) I'm not sure this is the problem: #ifdef GL_ES // Set the default precision to low. precision lowp float; #endif uniform mat4 u_ProjectionMatrix; attribute vec4 a_Position; attribute float a_PointSize; attribute vec3 a_RGB; attribute float a_Alpha; attribute float a_Burn; varying vec4 v_Color; void main() { vec3 v_FGC = a_RGB * a_Alpha; v_Color = vec4(v_FGC.x, v_FGC.y, v_FGC.z, a_Alpha * (1.0 - a_Burn)); gl_PointSize = a_PointSize; gl_Position = u_ProjectionMatrix * a_Position; } My fragment shader couldn't really be simpler: #ifdef GL_ES // Set the default precision to low. precision lowp float; #endif uniform sampler2D u_Texture; varying vec4 v_Color; void main() { gl_FragColor = texture2D(u_Texture, gl_PointCoord) * v_Color; } That's about it. I had read that transparent pixels in point sprites can cause issues, but surely not at only 400 points? I'm running on a fairly new device (12 month old Galaxy Nexus). My question is less about my approach (although I'm open to suggestion) but more about whether there are any specific OpenGL "no no's" that have leaked into my code. I'm sure there's GL master out there facepalming right now... I'd love to hear any critique.

    Read the article

  • What are the best workarounds for known problems with Hibernate's schema validation of floating point

    - by Jason Novak
    I have several Java classes with double fields that I am persisting via Hibernate. For example, I have @Entity public class Node ... private double value; When Hibernate's org.hibernate.dialect.Oracle10gDialect creates the DDL for the Node table, it maps the value field to a "double precision" type. create table MDB.Node (... value double precision not null, ... It would appear that in Oracle, "double precision" is an alias for "float". So, when I try to verify the database schema using the org.hibernate.cfg.AnnotationConfiguration.validateSchema() method, Oracle appears to describe the value column as a "float". This causes Hibernate to throw the following Exception org.hibernate.HibernateException: Wrong column type in DBO.ACL_RULE for column value. Found: float, expected: double precision A very similar problem is listed in Hibernate's JIRA database as HHH-1961 (http://opensource.atlassian.com/projects/hibernate/browse/HHH-1961). I'd like to avoid doing anything that will break MySql, Postgres, and Sql Server support so extending the Oracle10gDialect appears to be the most promising of the workarounds mentioned in HHH-1961. But extending a Dialect is something I've never done before and I'm afraid there may be some nasty gotchas. What is the best workaround for this problem that won't break our compatibility with MySql, Postgres, and Sql Server? Thanks for taking the time to look at this!

    Read the article

  • Insert HTML on Radio Label on Formtastic

    - by Adrian Matteo
    I have a form that has a radio button on it (Formtastic), and it works just fine with the :collection I'm passing. The problem is I want some part of the text to be affected by some CSS, but the :collection attribute does not let me put in HTML. Here's my code: = subscription.input :plan_type_id, :as => :radio, :label => false, :wrapper_html => {:class => "plan_type"}, :collection => { @plans[:premium_yearly][:description]+number_to_currency(@plans[:premium_yearly][:amount], :precision => 2) => @plans[:premium_yearly][:value], @plans[:premium_monthly][:description]+number_to_currency(@plans[:premium_monthly][:amount], :precision => 2) => @plans[:premium_monthly][:value] } As you may see, I build the label I want to show with my @plans variable and the :collection attribute. Is there any way I can modify the way it renders the label? I want to apply some CSS to part of the label. I want something like this: = subscription.input :plan_type_id, :as => :radio, :label => false, :wrapper_html => {:class => "plan_type"}, :collection => { @plans[:premium_yearly][:description]+'<b>'+number_to_currency(@plans[:premium_yearly][:amount], :precision => 2)+'<\\b>' => @plans[:premium_yearly][:value], @plans[:premium_monthly][:description]+'<b>'+number_to_currency(@plans[:premium_monthly][:amount], :precision => 2)+'<\\b>' => @plans[:premium_monthly][:value] } Thanks in advance.

    Read the article
