Search Results

Search found 44 results on 2 pages for 'dvk'.

Page 2/2

  • Should a Perl constructor return an undef or an "invalid" object?

    - by DVK
    Question: What is considered "best practice" - and why - for handling errors in a constructor? "Best practice" can be a quote from Schwartz, or "50% of CPAN modules use it", etc...; but I'm happy with a well-reasoned opinion from anyone, even if it explains why the common best practice is not really the best approach. As far as my own view of the topic (informed by software development in Perl for many years), I have seen three main approaches to error handling in a Perl module (listed from best to worst in my opinion): (1) Construct an object and set an invalid flag (usually an "is_valid" method), often coupled with setting an error message via your class's error handling. Pros: Allows for standard (compared to other method calls) error handling, as it allows $obj->errors() type calls after a bad constructor just like after any other method call. Allows additional info to be passed (e.g. more than one error, warnings, etc...). Allows for lightweight "redo"/"fixme" functionality; in other words, if the object being constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is because someone entered an incorrect date, you can simply do "$obj->setDate()" instead of incurring the overhead of re-executing the entire constructor. This pattern is not always needed, but can be enormously useful in the right design. Cons: None that I'm aware of. (2) Return undef. Cons: Cannot achieve any of the pros of the first solution (per-object error messages outside of global variables, and lightweight "fixme" capability for heavy objects). (3) Die inside the constructor. Outside of some very narrow edge cases, I personally consider this an awful choice, for too many reasons to list on the margins of this question. UPDATE: Just to be clear, I consider the (otherwise very worthy and a great design) solution of having a very simple constructor that can't fail at all plus a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if the initializer sets error flags) or case #3 (if the initializer dies) for the purposes of this question. Obviously, by choosing such a design you automatically reject option #2.
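
    For illustration, a minimal sketch of approach #1 (class name, attribute, and validation rule are all hypothetical):

      package My::Widget;
      use strict;
      use warnings;

      sub new {
          my ($class, %args) = @_;
          my $self = bless { errors => [] }, $class;
          push @{ $self->{errors} }, "date is invalid"
              unless defined $args{date} && $args{date} =~ /^\d{4}-\d{2}-\d{2}$/;
          $self->{date} = $args{date};
          return $self;    # always an object; the caller checks is_valid()
      }

      sub is_valid { my ($self) = @_; return !@{ $self->{errors} } }
      sub errors   { my ($self) = @_; return @{ $self->{errors} } }

      # The lightweight "fixme": repair the one bad attribute instead of
      # re-running the whole (potentially heavy) constructor.
      # (Re-validation of the new value is omitted for brevity.)
      sub setDate {
          my ($self, $date) = @_;
          $self->{date} = $date;
          @{ $self->{errors} } = grep { $_ ne "date is invalid" } @{ $self->{errors} };
          return $self;
      }

      package main;
      my $obj = My::Widget->new(date => "not-a-date");
      print "errors: ", join(", ", $obj->errors), "\n" unless $obj->is_valid;
      $obj->setDate("2010-05-15");    # $obj->is_valid is now true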

    Read the article

  • Perl XML SAX parser emulating XML::Simple record by record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has used XML::Simple since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{rec}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So, it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been.
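
    One frequently suggested direction for exactly this requirement - named plainly, this is XML::Twig rather than a raw SAX wrapper - is to stream the <rec> elements and convert each with simplify(), which is documented to mimic XML::Simple's output. A sketch, with a hypothetical SomeProcessor stub standing in for the question's $obj:

      use strict;
      use warnings;
      use XML::Twig;

      # Minimal stand-in for the question's $obj.
      package SomeProcessor;
      sub new { bless {}, shift }
      sub process_record { my ($self, $rec) = @_; print "f1=$rec->{f1}\n" }

      package main;
      my $obj = SomeProcessor->new;

      my $twig = XML::Twig->new(
          twig_handlers => {
              'root/rec' => sub {
                  my ($t, $rec) = @_;
                  # simplify() mimics XML::Simple; its options (keyattr,
                  # forcearray, ...) may need tuning to match your XMLin()
                  # call's options exactly.
                  $obj->process_record($rec->simplify);
                  $t->purge;    # discard processed records: small memory footprint
              },
          },
      );
      $twig->parsefile('records.xml');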

    Read the article

  • What was your most impressive technical programming achievement performed to impress a romantic interest?

    - by DVK
    OK, so the archetypal human story is for a guy to go out and impress the girl with some wonderful achievement like slaying a dragon, building a monument, or conquering a neighboring tribe. This being the enlightened 21st century on SO, let's morph this into: a StackOverflower performing a feat of programming to impress a romantic interest. There are two ways to do this: Technical achievement: impressing a person with a suitable background/understanding of programming with the actual coding powers you displayed. A dumb movie example would be that kid in the "Hackers" movie showing off his hacking skills in front of Angelina Jolie. Artistic achievement: impressing a person with the result of running said code, whether or not they understand just how incredible the code itself is. An example is the animated ANSI rose (for a guy who actually wrote the ANSI code). This question is only about the first kind (technical achievements) - e.g. the person of interest was presented with impressive code/design that (s)he was able to properly appreciate. Rules (what doesn't qualify): The target audience must have been a person of romantic interest (prospective or present significant other, or random hook-up). E.g. showing your program to your sister who's also a software developer doesn't count. The achievement must have been done specifically with the goal of impressing such a person; in other words, if you would have done it without the romantic interest's knowledge anyway, it doesn't count. However, it is OK if the achievement was done to impress a generic qualifying person, not someone specific. Although... if you write code to impress girls in general, I'd say "get a better idea of the opposite sex". As examples, the following do not count: programming for your job; programming for a coding contest; an open source program that you'd have done anyway. The precise nature of the awesomeness of the achievement is somewhat irrelevant - from learning the entirety of J2EE in 2 days, to writing a fancy game engine, to implementing a Python compiler in LOGO. As long as it's programming/software development related. The achievement should preferably be something other people would rank highly as well. If your date was impressed with your skill at calculating the Fibonacci sequence without recursive function calls, it doesn't mean most developers will be. But it does mean you need to find better things to do on dates ;)

    Read the article

  • What's the best way to do base36 arithmetic in Perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (i.e. digits are 0-9 A-Z), where N is finite, say 9. Provide basic arithmetic, at the very least the following 3: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B). Strictly speaking, I don't really need base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back", or converts to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above. It's "standard" - currently we're using an old homegrown module based on base10 conversion done by hand, which is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution than re-invent the wheel from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast) - something that takes 1 second to add two 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in a DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts, deletes, and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent). Any complete re-ordering of a parent is somewhat expensive, and thus the implementation tries to assign - for a parent with X children - orders which are somewhat evenly distributed between 0 and 36**9-1, so that almost no tree inserts result in a full re-ordering.
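
    For scale: 36**9 is roughly 1.0e14, so 9-digit base36 values (and their sums) fit in Perl's native numbers, and the brute-force "convert and convert back" approach is only a few lines. A hedged sketch with made-up helper names (if the "standard CPAN" route is preferred, Math::Base36 provides ready-made encode/decode functions):

      use strict;
      use warnings;

      my @digits = (0 .. 9, 'A' .. 'Z');
      my %value  = map { $digits[$_] => $_ } 0 .. $#digits;

      sub b36_to_int {
          my ($s) = @_;
          my $n = 0;
          $n = $n * 36 + $value{$_} for split //, uc $s;
          return $n;
      }

      sub int_to_b36 {
          my ($n, $width) = @_;
          $width = 9 unless defined $width;    # fixed-width, per the spec
          my $s = '';
          do { $s = $digits[$n % 36] . $s; $n = int($n / 36) } while $n > 0;
          return ('0' x ($width - length $s)) . $s;
      }

      # The three required operations (positive operands assumed):
      sub b36_add { int_to_b36(b36_to_int($_[0]) + b36_to_int($_[1])) }
      sub b36_sub { int_to_b36(b36_to_int($_[0]) - b36_to_int($_[1])) }
      sub b36_div { int_to_b36(int(b36_to_int($_[0]) / b36_to_int($_[1]))) }

      print b36_add('00000000Z', '000000001'), "\n";    # prints 000000010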

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of its queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails: create procedure test1 as SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2 -- insert rows into #table1 create index my_idx on #table1 (f1) SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11 -- "QUERY X" When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the same code outside the stored procedure, the messages show the following warning: Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead. This can be resolved when running ad-hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after the index creation: create index my_idx on #table1 (f1) go Now the "QUERY X" query plan shows the use of index "my_idx". QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad-hoc copy above. P.S. If it matters, this is on Sybase 12.
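
    A common workaround - sketched here as the usual technique, not guaranteed on every ASE configuration - is to split the procedure in two: the outer proc creates #table1 and the index, then calls an inner proc containing "QUERY X". The inner proc is compiled on its first execution, by which point the index exists:

      -- Create #table1 ad hoc so the inner procedure can be created,
      -- then drop it (ASE resolves the #table at creation time).
      SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
      go

      create procedure test1_inner
      as
          -- Compiled on first execution, i.e. after test1 has created
          -- my_idx, so the optimizer hint can be honored.
          SELECT f1, f2, f3
          FROM #table1 (index my_idx)
          WHERE f1 = 11    -- "QUERY X"
      go

      drop table #table1
      go

      create procedure test1
      as
          SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2

          -- insert rows into #table1

          create index my_idx on #table1 (f1)

          exec test1_inner    -- #table1 is visible to the called procedure
      go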

    Read the article

  • What is a good way to assign order #s to ordered rows in a table in Sybase?

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer "order" column: col1 varchar(30), col2 varchar(30), order int NULL. I also have a way to order the "colN" columns, e.g. SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2. What's the best way to assign - IN SQL - numeric order values 1-N to the "order" column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle ROWNUM) which assigns order values so that SELECT * FROM T ORDER BY order returns 100% the same order of rows as the query above. The query does NOT necessarily need to update table T in place; I'm OK with creating a copy of the table, T2, if that'll make the query simpler. NOTE1: A solution must be a real query or set of queries, not involving a loop or a cursor. NOTE2: Assume that the data is uniquely orderable according to the ORDER BY above - no need to worry about the situation where 2 rows could be assigned the same order at random. NOTE3: I would prefer a generic solution, but if you wish a specific example of an ordering expression, let's say: SELECT * FROM T ORDER BY CASE WHEN col1="" THEN "AAAAAA" ELSE col1 END, ISNULL(col2, "ZZZ")
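
    One cursor-free sketch along the requested lines, using the NOTE3 expressions ("ord" stands in for the reserved word "order", and T2 is the permitted copy): materialize the sort keys once, then set each row's number to the count of rows sorting at or before it. Quadratic in comparisons, but it is just two set-based statements:

      -- Copy the rows with their ordering keys as plain columns.
      SELECT col1, col2,
             k1  = CASE WHEN col1 = "" THEN "AAAAAA" ELSE col1 END,
             k2  = ISNULL(col2, "ZZZ"),
             ord = 0
      INTO   T2
      FROM   T

      -- A row's order number = how many rows sort at or before it;
      -- per NOTE2 the ordering is unique, so this yields exactly 1..N.
      UPDATE T2
      SET    ord = (SELECT COUNT(*)
                    FROM   T2 x
                    WHERE  x.k1 < T2.k1
                       OR (x.k1 = T2.k1 AND x.k2 <= T2.k2))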

    Read the article

  • Are there any widely agreed-upon guidelines for rating your language knowledge on a scale?

    - by DVK
    The question came up after a co-worker spent an hour complaining about some guy who could not answer basic Java questions in an interview after identifying himself as "8 out of 10" on Java. While that was an obvious fib, I personally have always had major trouble placing my specific language skills on a sliding scale unless I'm given specific guidelines (remember 40 standard libraries by heart? Able to solve 10 random Project Euler problems in <30 mins each? Can write implementations of A, B and C data structures from scratch in 30 mins? Know 30% of the standard? Can answer 50% of questions on StackOverflow pertaining to the language?) So, I was wondering - is there some sort of commonly accepted methodology for translating such tangible benchmarks into "rate yourself on a language between 1-10"? "Kernighan gets an A, God gets a B, everyone else gets C or less" type jokes are not helpful :)

    Read the article

  • What was the most refreshingly honest non-technical comment you saw?

    - by DVK
    OK, so we all saw the lists of "funny" or "bad" comments. However, today, while maintaining an old stored procedure, I stumbled upon a comment which I couldn't classify as anything other than "refreshingly brutally honest", left by a previous maintainer next to a really freakish (both performance- and readability-wise) page-long query: -- Feel free to optimize this if you can understand what it means So, in the first (and hopefully only) poll-type question in my Stack Overflow history, I'd like to hear some other "refreshingly brutally honest" code comments you have encountered or written.

    Read the article

  • Why implement DB connection pointer object as a reference counting pointer? (C++)

    - by DVK
    At our company one of the core C++ classes (a database connection pointer) is implemented as a reference-counting pointer. To be clear, the objects are NOT DB connections themselves, but pointers to a DB connection object. The library is very old, and nobody who designed it is around anymore. So far, neither I nor any of the C++ experts in the company that I asked have come up with a good reason why this particular design was chosen. Any ideas? It is introducing some problems (partially due to the awful reference-counting pointer implementation used), and I'm trying to understand whether this design actually has some deep underlying reason. The usage pattern these days seems to be that the DB connection pointer object is returned by a DB connection manager class, and it's somewhat unclear whether DB connection pointers were designed to be usable independently of the DB connection manager.
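
    For concreteness, the shape being described: a reference-counted handle guarantees the underlying connection is closed exactly once, when the last holder lets go - which matters once a connection manager hands the same connection to several callers. A minimal modern sketch (std::shared_ptr standing in for the homegrown pointer class):

      #include <iostream>
      #include <memory>

      // Stand-in for the real DB connection object.
      struct DbConnection {
          ~DbConnection() { std::cout << "connection closed\n"; }
          void query(const char* sql) { std::cout << "run: " << sql << "\n"; }
      };

      // The pattern in question: many holders share one connection, and the
      // connection is torn down exactly once, when the last copy goes away.
      using DbConnectionPtr = std::shared_ptr<DbConnection>;

      struct ConnectionManager {
          DbConnectionPtr acquire() { return std::make_shared<DbConnection>(); }
      };

      int main() {
          ConnectionManager mgr;
          DbConnectionPtr a = mgr.acquire();
          DbConnectionPtr b = a;         // ref count = 2; no double-close later
          b->query("select 1");
      }   // last owner destroyed here -> "connection closed" prints once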

    Read the article

  • MVC (model-view-controller) - can it be explained in simple terms?

    - by DVK
    I need to explain the MVC (model-view-controller) concept to a not-very-technical manager and ran into trouble. The problem is that the explanation needs to be on a "your grandma will get it" level - e.g. even the fairly straightforward explanation offered on the MVC Wikipedia page didn't work, at least with my commentary. Does anyone have a reference to a good MVC explanation in simple terms? It would ideally be done with non-techie metaphor examples (e.g. similar to "the Decorator pattern is like glasses") - one reason I failed was that all the MVC examples I could come up with were development-related. I once saw a list of pattern explanations, but to the best of my memory MVC was not on it. Thanks!

    Read the article

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has used XML::Simple since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{rec}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So, it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is; e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea, as it doesn't catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #2", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it.
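
    One angle that works back to Test::More 0.80: plan() only has to run before the first assertion, not at compile time, so if @test_set is built before any ok() fires, the BEGIN block can go away entirely. A sketch, with a dummy count_tests() standing in for the question's helper:

      use strict;
      use warnings;
      use Test::More;    # deliberately no plan yet

      my @test_set = (
          [ "Test #1", "param1a", "param2a" ],
          [ "Test #2", "param1b", "param2b" ],
      );

      # As in the question: how many ok()s run_test() will emit per test.
      sub count_tests { my ($test) = @_; return 1 }

      my $expected_tests = 0;
      $expected_tests += count_tests($_) for @test_set;
      plan tests => $expected_tests;    # runtime is fine: before any test runs

      sub run_test {
          my ($test) = @_;
          my ($name) = @$test;
          ok(1, $name) || diag("$name failed");
      }

      run_test($_) for @test_set;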

    Read the article

  • What was the most refreshingly honest non-technical comment you saw in the code?

    - by DVK
    OK, so we all saw the lists of "funny" or "bad" comments. However, today, while maintaining an old stored proc, I stumbled upon a comment which I couldn't classify as anything other than "refreshingly brutally honest", left by a previous maintainer next to a really freakish (both performance- and readability-wise) page-long query: -- Feel free to optimize this if you can understand what it means So, in the first (and hopefully only) poll-type question in my Stack Overflow history, I'd like to hear some other "refreshingly brutally honest" code comments you have encountered or written.

    Read the article

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #2", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests=>$total_tests_calculated). I'm not sure whether that is any more idiomatic.

    Read the article

  • CodePlex Daily Summary for Saturday, May 15, 2010

    CodePlex Daily Summary for Saturday, May 15, 2010

    New Projects:
    - BizTalk EDI Guidance: BizTalk EDI Guidance is intended to simplify the delivery of EDI solutions by leveraging the ESB Toolkit. This project is currently Alpha and sh...
    - Continues Integration Sample: I'm providing a series of blog posts to show a complete CI process using CruiseControl.Net and msbuild. The source code for this series is hosted here.
    - DioM2D: My Dragons in our Midst RPG. Runs on my custom Starlight Engine.
    - Ethical Hacking ASP.NET: Security tools and guidelines for white-hat hacking and protecting ASP.NET web applications.
    - Farseer Engine with XNATouch: Farseer is a great engine for game physics. This implementation uses the XNATouch framework.
    - Feature Builder Guidance Extensions: Feature Builder Guidance Extensions are Feature Extensions which extend the guidance for the Feature Building experience. Each FBGX will be suppli...
    - Microsoft Office Document Security: MODS is a plugin for Office 2007 that includes Hash Encryption, Hex Conversion and more. Plugins: MODS For Word still working on (MODS for Excel ...
    - Minimize Engine (XNA): The Minimize Engine is a basic 3D Games Engine created using XNA, with its primary focus around Grid Based games.
    - MSForge TownCrier: This project is meant to build a notification and calling system for MSForge.net User Groups.
    - NatureProtector: Silverlight 4 project.
    - OutSync: OutSync is a free Windows desktop application that syncs photos of your Facebook friends with matching contacts in Microsoft Outlook. It allows you...
    - Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: ClipSa is a very small tool for very quick picture saving. You put some picture into the clipboard (PrintScrn/Alt-PrintScrn/Ctrl-C), ClipSa saves ...
    - ResHelper Manager: Resource strings management tool that creates localization files for any type of localization target (asp.net, wpf and so on...)
    - SecureCookieHttpModule: Secure your session cookie (and other session-based cookies) against replay attacks using this easy to use ASP.NET HttpModule.
    - simpleChMS: A Church Management System (ChMS) designed for churches or ministries like youth groups that want to facilitate better care of their membership. Fo...
    - sMAPtool: -
    - SPDomainObject: mapping strong type objects to sp lists
    - SQL Trim: This project aims at developing a universal trim function for Microsoft SQL Server. It trims: 1) pre spaces 2) post spaces 3) double spaces 4) subs...
    - TurretGunner: mt-experience

    New Releases:
    - BeanProxy: BeanProxy 3.0: BeanProxy is a C# (.NET 3.5) library housing classes that facilitate unit testing. Any non-static, public interface/class or abstract class can be...
    - Blueset Studio Opensource Projects: 蓝色之风记事本 (Blue Wind Notepad) 0.2 Alpha: a super-buggy version...
    - CSharp Intellisense: V2.1: Bug fix (Pascal Casing)
    - DioM2D: DioM2D0.01: http://www.dragonsinourmidst.com/forums/showthread.php?p=690058#post690058
    - Ethical Hacking ASP.NET: Version 1.0.0.1: This is the initial release of the project. Read more about the available tests and features on the Documentation tab. You need the full .NET Frame...
    - Event Scavenger: Collector service update - version 3.2.4: Added check if the database connection string is set up in the config file.
    - Feature Builder Guidance Extensions: FBGX-Binaries: This release consists of a zip file containing all the VSIXs resulting from building each of the FBGX packages found here as source. This will mak...
    - Floe IRC Client: Floe IRC Client 2010-05 R2: Detaching windows (right click on the tabs to detach them); highlight lines with your nick or other patterns; fixed several bugs; tabs can now...
    - Free language translator and file converter: Free Language Translator 1.96: Fixed some minor bugs and improved the UI a bit. If you can not install the msi file you might be missing some prerequisites. You can try running t...
    - Geocache Downloader: release 1.0: This is the first release.
    - kp.net: Alpha release is available: The goal of this alpha release is to try the code in some production scenarios and find out what features should be tuned.
    - Live-Exchange Calendar Sync: Live-Exchange Calendar Sync: May 14, 2010 release of Live-Exchange Calendar Sync 1.0 BETA (Version 45334). Getting Started: info about installat...
    - MAPILab Explorer for SharePoint: MAPILab Explorer for SharePoint ver 2.1.1: 1) Small bug fixed that appeared on first start (when earlier versions weren't installed). How to install: download the ZIP file and extract it on Sha...
    - Microsoft Office Document Security: MODS 4 WORD (SOURCE INCLUDED): Includes Source Code
    - MoonyDesk (windows desktop widgets): MoonyDesk Alpha: MoonyDesk Alpha (some memory improvements)
    - OnTopReplica: Release 2.9.3: Some bugfixes and improvements. Czech translation added (thanks René Mihula).
    - OutSync: OutSync v1.0.100.0: OutSync v1.0.100.0 is the final release by Mel before the move to CodePlex. I have tested it on Windows 7 32bit and 64bit with Office 2007 and it ...
    - Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: Clipsa v 0.1: Download and extract to any place 2 files - clipSa.exe and clipSa.exe.config. Run clipSa.exe. That's all.
    - ResHelper Manager: ResHelperManager: List of changes applied to this version of ResHelper is included in the main download zip package. Example sources: in the Source Code tab are sources of De...
    - Rx Contrib: V1.3: Bug fix - BufferWithTimeOrCount with flexible time period setting whenever the time period elapsed...
    - SharePoint DVK Integration: SharePoint 2007 DVK integration v1.0.3: Fixes: fixed default field bindings (I rebound too many fields on every page load); fixed extension replacing on creating target url (threw it out)...
    - ShoutcastStast for DotNetNuke: DNN_ShoutcastStats alpha 05.00.495: First alpha release of the ShoutcastStats Module for DotNetNuke. This first alpha version is still in devel...
    - SilverPart 2.1: SilverPart 2.1: This interim release fixes some major bugs related to Firefox and anonymous access. Fix for Issue ID 4005 - SilverPart does not w...
    - sMAPtool: sMAPedit v0.7c (Base Release with Maps): Fixed: force a garbage collection update to prevent pictureBox's memory leak. Added: essential map pack with all basic maps in jpg format. Added:...
    - SQL Trim: Trim: Initial release
    - SSIS Multiple Hash: Multiple Hash V1.2.1: This is version 1.2.1 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pa...
    - StreamInsight Yahoo Finance input adapter example: StockTicker_v1_0_RTM: Updated for StreamInsight RTM.
    - Update Controls .NET: 2.1.0.0: Automatic dependency management for WPF and Silverlight data binding. This release combines both the WPF and Silverlight assemblies into one insta...
    - VCC: Latest build, v2.1.30514.0: Automatic drop of latest build

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: patterns & practices – Enterprise Library; Mirror Testing System; Rawr; PHPExcel; BlogEngine.NET; Microsoft Biology Foundation; Customer Portal Accelerator for Microsoft Dynamics CRM; Windows Azure Command-line Tools for PHP Developers; Shake - C# Make; StyleCop

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case - but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value. sp_help MyTable -- Column_name Type Computed Length Prec -- RandomGrouping float no 8 53 Here's my naive attempt: --yields no results SELECT RandomGrouping FROM MyTable WHERE RandomGrouping = 0.867153569942739 And here's an approximately working attempt: --yields 2 records SELECT RandomGrouping FROM MyTable WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001 AND 0.867153569942739 + 0.00000001 -- 0.867153569942739 -- 0.867153569942739 In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making a floating point literal? EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case. EDIT (response to DVK): TSQL is MSSQLServer's dialect of SQL. This script works, and so equality comparison can be performed deterministically between float types: DECLARE @X float SELECT top 1 @X = RandomGrouping FROM MyTable WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001 AND 0.839110948199148 + 0.000000000001 --yields two records SELECT * FROM MyTable WHERE RandomGrouping = @X I said "approximately" because that method tests for a range. With that method I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the boundary between the decimal and float worlds. I'm trying to work with floats only. This isn't about the non-convertibility of decimals to floats.
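
    On the literal-typing question specifically, the T-SQL rules are: a constant with a decimal point is typed decimal/numeric, and scientific (E) notation is what makes a constant float. A sketch (the caveat in the comment is the second EDIT's suspicion):

      -- 0.867153569942739 is a decimal constant; 8.67153569942739E-1 is float.
      SELECT RandomGrouping
      FROM MyTable
      WHERE RandomGrouping = 8.67153569942739E-1    -- float literal

      -- Equivalent: force the type explicitly.
      SELECT RandomGrouping
      FROM MyTable
      WHERE RandomGrouping = CAST(0.867153569942739 AS float)

      -- Caveat: float stores ~15-17 significant digits but tools may display
      -- fewer; if the stored value has more precision than the digits shown,
      -- no literal built from the displayed digits will compare equal.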

    Read the article
