Search Results

Search found 1215 results on 49 pages for 'numeric keypad'.


  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server database by storing perfectly normal data values into a table, but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, Microsoft just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen: SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types".
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
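    As an illustration of the workaround the development team settled on (representing NaN and infinity as NULL), here is a minimal C# sketch of sanitising the values before they reach the ADO.NET provider. The table and column names are hypothetical; this is not the SQL Response code itself.

```csharp
// Minimal sketch (hypothetical table/column names): coerce IEEE special values
// to NULL before they reach SQL Server, i.e. the "represent them as NULL"
// workaround described above.
using System;
using System.Data;
using System.Data.SqlClient;

static class RatioLoader
{
    static object ToDbValue(double value)
    {
        // SQL Server's FLOAT rejects NaN and +/- infinity, so map them to NULL.
        return (double.IsNaN(value) || double.IsInfinity(value))
            ? (object)DBNull.Value
            : value;
    }

    public static void InsertRatio(SqlConnection conn, double ratio)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.MonitoringData (Ratio) VALUES (@ratio)", conn))
        {
            cmd.Parameters.Add("@ratio", SqlDbType.Float).Value = ToDbValue(ratio);
            cmd.ExecuteNonQuery();
        }
    }
}
```

    The obvious cost, as noted above, is that NULL, NaN, +infinity and -infinity all collapse into a single indistinguishable value once stored.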

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following. “We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have very unique situation but your article did not cover it. We are running few queries on our fact table which is working very efficiently but there is one query which earlier was running very fine but after creating this non-clustered columnstore index this query is running very slow. We dropped the columnstore index and suddenly this one query is running fast but other queries which were benefited by this columnstore index it is running slow. Any workaround in this situation?” In summary, the question in simple words is: “How can we ignore using the columnstore index in selective queries?” Very interesting question – I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. The SQL Server engine will use whichever other index is best after ignoring the columnstore index. Here is a quick script to prove the same. We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created we will write a simple query. This query will use the columnstore index. We will then show the usage of the query hint. USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO Now that we have created the columnstore index, if we run the following query it will certainly use that index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as shown in the following query, and it will not use the columnstore index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) GO Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO Again, make sure that you use the hint sparingly and understand its proper implications.
Make sure that you test it with and without the hint and select the best option after a review with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Linq To SQL: Behaviour for table field which is NotNull and having Default value or binding

    - by kaushalparik27
    I found something interesting while wandering around the community which I would like to share. The post is all about this: the DBML does not consider a table field's "Default Value or Binding" setting for a NOT NULL field. In other words, a field which cannot be null but has a default value set needs to be marked IsDbGenerated = true in the DBML file explicitly. Consider this situation: There is a simple tblEmployee table with the below structure: The fields are simple. EmployeeID is a Primary Key with Identity Specification = True and Identity Seed = 1 to auto-generate a numeric value for this field. EmployeeName and EmailAddress are stored in the remaining two fields. And the last one is "DateAdded", a DateTime field which doesn't allow NULL but has a Default Value/Binding of "GetDate()". That means if we don't pass any value to this field then SQL will insert the current date into the "DateAdded" field. So, I started with a new website, added a DBML file and dropped the said table onto it to generate the LINQ to SQL context class. Finally, I wrote a simple code snippet to insert data into the tblEmployee table; BUT I am not passing any value to the "DateAdded" field, because I am relying on SQL Server's "Default Value or Binding (GetDate())" setting for this field and expect that SQL will insert the current date into it.        using (TestDatabaseDataContext context = new TestDatabaseDataContext())        {            tblEmployee tblEmpObjet = new tblEmployee();            tblEmpObjet.EmployeeName = "KaushaL";            tblEmpObjet.EmployeeEmailAddress = "[email protected]";            context.tblEmployees.InsertOnSubmit(tblEmpObjet);            context.SubmitChanges();        } Here comes the twist, when the application gives me the below error: This is something I was not expecting! The error clearly shows that LINQ is passing a NULL value to the "DateAdded" field, while according to my understanding it should respect SQL Server's "Default Value or Binding" setting for this field. A bit of googling and I found something very interesting related to this problem. When we make a field a primary key with the "Identity Specification" property set to true, the DBML sets one important property, "IsDbGenerated=true", for this field. BUT, when we set the "Default Value or Binding" property for some field, we need to explicitly tell DBML/LINQ that this field has a default binding on the database side that needs to be respected if we don't pass any value. So, the solution is: you need to explicitly set "IsDbGenerated=true" for such a field to tell LINQ that the field has a default value or binding on the SQL Server side, so it should not worry if no value is passed for it. You can select the field and set this property from the property window in the DBML designer, or write the property in the DBML.Designer.cs file directly. I have attached a working example with the required table script to this post. I hope this will be helpful for someone hunting for the same. Happy Discovery!
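    For reference, here is a minimal sketch of the shape the relevant generated property takes once IsDbGenerated is set for the DateAdded column. The real DBML.Designer.cs fragment contains additional change-tracking plumbing and its attribute values may differ slightly; only the attribute placement matters here.

```csharp
// Illustrative fragment of the generated entity class (names mirror the post);
// IsDbGenerated = true tells LINQ to SQL that SQL Server supplies this value
// (via the column's default binding) rather than the application.
public partial class tblEmployee
{
    private System.DateTime _DateAdded;

    [global::System.Data.Linq.Mapping.Column(Storage = "_DateAdded",
        DbType = "DateTime NOT NULL",
        IsDbGenerated = true)]
    public System.DateTime DateAdded
    {
        get { return this._DateAdded; }
        set { this._DateAdded = value; }
    }
}
```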

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore Indexes, and both of them got very interesting readership. In fact, just recently I got a query on my previous article on Columnstore Index. Read the following two articles to get familiar with the Columnstore Index. They will give you a reference to the question which was asked by a certain reader: SQL SERVER – Fundamentals of Columnstore Index SQL SERVER – How to Ignore Columnstore Index Usage in Query Here is the reader’s question: ” When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?” When a Columnstore index is created on a table, the table becomes a read-only table and it does not allow any insert/update/delete on the table. The basic understanding is that a Columnstore Index will be created on a table that is very large and holds lots of data. If a table is small enough, there is no need to create a Columnstore index; a regular index should serve it just fine. The reason why the Columnstore index was needed is because the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge by itself. If the Columnstore Index is created on the table, and the table needs to be updated, you need to know that there are various ways to update it. The easiest way is to disable the index and then rebuild it. Consider the following code: USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO -- Attempt to Update the table UPDATE [dbo].[MySalesOrderDetail] SET OrderQty = OrderQty +1 WHERE [SalesOrderID] = 43659 GO /* It will throw following error Msg 35330, Level 15, State 1, Line 2 UPDATE statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the UPDATE statement, then rebuilding the columnstore index after UPDATE is complete. */ A similar error also shows up for insert/delete operations. Here is the workaround: disable the Columnstore Index, perform the update, and then rebuild the Columnstore Index: -- Disable the Columnstore Index ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE GO -- Attempt to Update the table UPDATE [dbo].[MySalesOrderDetail] SET OrderQty = OrderQty +1 WHERE [SalesOrderID] = 43659 GO -- Rebuild the Columnstore Index ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD GO This time it will not throw an error, and the update of the table completes successfully.
Let us do a cleanup of our tables using this code: -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO In the next post we will see how we can use Partition to update the Columnstore Index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • HSSFS Part 2.1 - Parsing @@VERSION

    - by Most Valuable Yak (Rob Volk)
    For Part 2 of the Handy SQL Server Function Series I decided to tackle parsing useful information from the @@VERSION function, because I am an idiot.  It turns out I was confused about CHARINDEX() vs. PATINDEX() and it pretty much invalidated my original solution.  All is not lost though, this mistake turned out to be informative for me, and hopefully for you. Referring back to the "Version" view in the prelude I started with the following query to extract the version number: SELECT DISTINCT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, 12) VerNum FROM VERSION I used PATINDEX() to find the first hyphen "-" character in the string, since the version number appears 2 positions after it, and got these results: SQLVersion VerNum ----------- ------------ 2000 8.00.2055 (I 2005 9.00.3080.00 2005 9.00.4053.00 2008 10.50.1600.1 As you can see it was good enough for most of the values, but not for the SQL 2000 @@VERSION.  You'll notice it has only 3 version sections/octets where the others have 4, and the SUBSTRING() grabbed the non-numeric characters after.  To properly parse the version number will require a non-fixed value for the 3rd parameter of SUBSTRING(), which is the number of characters to extract. The best value is the position of the first space to occur after the version number (VN), the trick is to figure out how to find it.  Here's where my confusion about PATINDEX() came about.  The CHARINDEX() function has a handy optional 3rd parameter: CHARINDEX (expression1 ,expression2 [ ,start_location ] ) While PATINDEX(): PATINDEX ('%pattern%',expression ) Does not.  I had expected to use PATINDEX() to start searching for a space AFTER the position of the VN, but it doesn't work that way.  Since there are plenty of spaces before the VN, I thought I'd try PATINDEX() on another character that doesn't appear before, and tried "(": SELECT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, PATINDEX('%(%',VersionString)) FROM VERSION Unfortunately this messes up the length calculation and yields: SQLVersion VerNum ----------- --------------------------- 2000 8.00.2055 (Intel X86) Dec 16 2008 19:4 2005 9.00.3080.00 (Intel X86) Sep 6 2009 01: 2005 9.00.4053.00 (Intel X86) May 26 2009 14: 2008 10.50.1600.1 (Intel X86) Apr 2008 10.50.1600.1 (X64) Apr 2 20 Yuck.  The problem is that PATINDEX() returns position, and SUBSTRING() needs length, so I have to subtract the VN starting position: SELECT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, PATINDEX('%(%',VersionString)-PATINDEX('%-%',VersionString)) VerNum FROM VERSION And the results are: SQLVersion VerNum ----------- -------------------------------------------------------- 2000 8.00.2055 (I 2005 9.00.4053.00 (I Msg 537, Level 16, State 2, Line 1 Invalid length parameter passed to the LEFT or SUBSTRING function. Ummmm, whoops.  Turns out SQL Server 2008 R2 includes "(RTM)" before the VN, and that causes the length to turn negative. So now that that blew up, I started to think about matching digit and dot (.) patterns.  Sadly, a quick look at the first set of results will quickly scuttle that idea, since different versions have different digit patterns and lengths. At this point (which took far longer than I wanted) I decided to cut my losses and redo the query using CHARINDEX(), which I'll cover in Part 2.2.  
So to do a little post-mortem on this technique: PATINDEX() doesn't have the flexibility to match the digit pattern of the version number; PATINDEX() doesn't have a "start" parameter like CHARINDEX(), that allows us to skip over parts of the string; The SUBSTRING() expression is getting pretty complicated for this relatively simple task! This doesn't mean that PATINDEX() isn't useful, it's just not a good fit for this particular problem.  I'll include a version in the next post that extracts the version number properly. UPDATE: Sorry if you saw the unformatted version of this earlier, I'm on a quest to find blog software that ACTUALLY WORKS.
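    For the curious, a CHARINDEX()-based version might look something like the sketch below. This is just one possible shape of the fix, not necessarily the one that will appear in Part 2.2; it uses the same "Version" view as above and relies on CHARINDEX()'s start_location parameter to find the first space after the version number.

```sql
-- Possible CHARINDEX()-based rewrite (a sketch, not the official Part 2.2 code):
-- find the hyphen, then use CHARINDEX's start_location parameter to locate the
-- first space AFTER the version number, and subtract to get the length.
SELECT SQLVersion,
       SUBSTRING(VersionString,
                 CHARINDEX('-', VersionString) + 2,
                 CHARINDEX(' ', VersionString, CHARINDEX('-', VersionString) + 2)
                   - (CHARINDEX('-', VersionString) + 2)) AS VerNum
FROM VERSION;
```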

    Read the article

  • Ruby 1.9.3 segmentation fault on rackspace

    - by user531065
    I'm having a similar problem as "ruby 1.9 segmentation fault on exit." I followed the solution and updated libselinux with yum and have the following version installed: Package libselinux-2.0.96-6.fc14.1.x86_64 already installed and latest version I am running a RackSpace machine, kernel version: 2.6.35.4-rscloud. My C backtrace is: -- C level backtrace information ------------------------------------------- /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x18910a) [0x7f3cdb6c010a] vm_dump.c:796 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x5eb04) [0x7f3cdb595b04] error.c:258 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_bug+0xb8) [0x7f3cdb5969d8] error.c:277 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x119865) [0x7f3cdb650865] signal.c:609 /lib64/libpthread.so.0() [0x328540eeb0] /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x179272) [0x7f3cdb6b0272] ./include/ruby/ruby.h:1306 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0xab821) [0x7f3cdb5e2821] numeric.c:3204 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_ary_each+0x54) [0x7f3cdb5622e4] array.c:1478 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_iseq_eval+0x1f0) [0x7f3cdb6bbae0] vm.c:1447 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x67bfd) [0x7f3cdb59ebfd] load.c:310 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_require_safe+0x6ef) [0x7f3cdb5a02df] load.c:619 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_ary_each+0x54) [0x7f3cdb5622e4] array.c:1478 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_iseq_eval_main+0xaf) [0x7f3cdb6bbbdf] vm.c:1461 
/usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x6471a) [0x7f3cdb59b71a] eval.c:204 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(ruby_exec_node+0x1d) [0x7f3cdb59c56d] eval.c:251 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(ruby_run_node+0x1e) [0x7f3cdb59e60e] eval.c:244 ruby() [0x4008eb] /lib64/libc.so.6(__libc_start_main+0xfd) [0x3284c1ee5d] ruby() [0x4007d9]

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
    Possible Duplicate: Is it ever worthwhile using goto? In a recent article, Andrew Koenig writes: When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments. He then shows two program fragments, one without a goto and one with a goto: if (n < 0) n = 0; Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment: if (n >= 0) goto nonneg; n = 0; nonneg: ; In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program. I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: You cannot jump to a label that is defined in a different function. You cannot jump over a variable initialization. Now consider composing your code of tiny functions that adhere to the single responsibility principle: int clamp_to_zero(int n) { if (n >= 0) goto n_is_not_negative; n = 0; n_is_not_negative: return n; } The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee! Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the first one, we would at least have to insert a comment, or even better yet, an assertion: int clamp_to_zero(int n) { if (n < 0) n = 0; // n is not negative at this point assert(n >= 0); return n; } Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation. It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code. So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one. 1 Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true. 1: Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything. Note to Others About 10 years ago I stumbled across this bit of information just when I needed it and it saved my project.  Then for some reason, a few years later when it would have been nice, but not critical, for some reason I could not find it again anywhere.  Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way. Here’s the story…  When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task) and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently) then you can get funny results where the query reports back that a cell value is null even when you know it contains data. What happens is that Excel doesn’t really have data types.  While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.) that is not really defining the cell as being of a certain type like we think of when working with databases.  But, presumably, to make things more convenient for the user (programmer) when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner.  This is all well and good IF your data is consistent in every row and matches what the processor guessed.  And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting). Now here’s the problem, suppose that your spreadsheet contains information about clothing, and one of the columns is Size.  Now suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL.  What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank.  Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE!  WHY doesn’t the query see it?!?!”  And the hair-pulling begins. So, what’s a developer to do?  One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero).  Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains.  And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there. Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit.  You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there.  
That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
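    For context, here is a minimal C# sketch of the kind of OLEDB query where this bites, with the registry key noted in a comment. The file path, sheet name and column layout are placeholders, not a recommendation for any particular workbook.

```csharp
// Minimal sketch of reading an Excel sheet through the Jet 4.0 OLEDB provider,
// the scenario where TypeGuessRows matters. Path and sheet name are placeholders.
using System.Data;
using System.Data.OleDb;

static class ExcelReader
{
    public static DataTable ReadSheet(string path)
    {
        // Column types are guessed from the first TypeGuessRows rows (default 8).
        // Setting HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows = 0
        // makes Jet scan every row before guessing.
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         "Data Source=" + path + ";" +
                         "Extended Properties=\"Excel 8.0;HDR=Yes\"";

        var table = new DataTable();
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
        {
            adapter.Fill(table); // Fill opens and closes the connection itself
        }
        return table;
    }
}
```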

    Read the article

  • T-SQL Tuesday #025 &ndash; CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks.  And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them.  You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted.  Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway. Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such.  CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables.  The idea was to use a CHECK constraint in a way that didn't depend on the actual data: -- create a table that cannot accept data CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY, CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1)) INSERT dbo.JustTryIt VALUES(1)   I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted.  I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people.  I come up with a lot of examples like that. However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful.  I'll let the names describe what they do instead of explaining them all: CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa')) CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0)) CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL)) CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0)) CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0)) CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16)) CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1)) GO -- log in as sa or a sysadmin server role member, and try this: INSERT NoSA VALUES(1) INSERT NoSysAdmin VALUES(1) -- note the difference when using sa vs. non-sa -- then try it again with a non-sysadmin login -- see if this works: INSERT NoAdHoc VALUES(1) INSERT NoAdHoc2 VALUES(1) GO -- then try this: CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS SET NOCOUNT ON; INSERT NoAdHoc VALUES(@val1) INSERT NoAdHoc2 VALUES(@val2) GO EXEC NotAdHoc 2,2 -- which values got inserted? SELECT * FROM NoAdHoc SELECT * FROM NoAdHoc2   -- and this one just makes me happy :) INSERT NoCursors VALUES(1) DECLARE curs CURSOR FOR SELECT 1 OPEN curs INSERT NoCursors VALUES(2) CLOSE curs DEALLOCATE curs INSERT NoCursors VALUES(3) SELECT * FROM NoCursors   I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own, I think you get the idea.  
(Also take a look at the NoCursors example, notice anything interesting?)  The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects.  I'm sure you can see how and why this would come up when discussing Revenge: The SQL.;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things.  There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay

    Read the article

  • perl comparing 2 data file as array 2D for finding match one to one [migrated]

    - by roman serpa
    I'm doing a program that uses combinations of variables ( combiData.txt 63 rows x different number of columns) for analysing a data table ( j1j2_1.csv, 1000filas x 19 columns ) , to choose how many times each combination is repeated in data table and which rows come from (for instance, tableData[row][4]). I have tried to compile it , however I get the following message : Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34. Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56. Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56. Use of uninitialized value in subtraction (-) at rowInData.pl line 56. Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56. nothing This is my code: #!/usr/bin/perl use strict; use warnings; my $line_match; my $countTrue; open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n"; my @tableCombi; while(<FILE1>) { my @row = split(' ', $_); push(@tableCombi, \@row); } close FILE1 || die $!; open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n"; my @tableData; while(<FILE2>) { my @row2 = split(/\s*,\s*/, $_); push(@tableData, \@row2); } close FILE2 || die $!; #function transform combiData.txt variable (position ) to the real value that i have to find in the data table. sub trueVal($){ my ($val) = $_[0]; if($val == 7){ return ('nonsynonymous_SNV'); } elsif( $val == 14) { return '1'; } elsif( $val == 15) { return '1';} elsif( $val == 16) { return '1'; } elsif( $val == 17) { return '1'; } elsif( $val == 18) { return '1';} elsif( $val == 19) { return '1';} else { print 'nothing'; } } #function IntToStr ( ) , i'm not sure if it is necessary) that transforms $ to strings , to use the function <eq> in the third loop for the array of combinations compared with the data array . sub IntToStr { return "$_[0]"; } for my $combi (@tableCombi) { $line_match = 0; for my $sheetData (@tableData) { $countTrue=0; for my $cell ( @$combi) { #my $temp =\$tableCombi[$combi][$cell] ; #if ( trueVal($tableCombi[$combi][$cell] ) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ){ #if ( IntToStr(trueVal($$temp )) eq IntToStr( $tableData[$sheetData][ $$temp-1] ) ){ if ( IntToStr(trueVal($tableCombi[$combi][$cell]) ) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] -1]) ){ $countTrue++;} if ($countTrue==@$combi){ $line_match++; #if ($line_match < 50){ print $tableData[$sheetData][4]." "; #} } } } print $line_match." \n"; }

    Read the article

  • CodePlex Daily Summary for Friday, December 14, 2012

    CodePlex Daily Summary for Friday, December 14, 2012Popular ReleasesCommand Line Parser Library: 1.9.3.31 rc0: Main assembly CommandLine.dll signed. Removed old commented code. Added missing XML documentation comments. Two (very) minor code refactoring changes.BlackJumboDog: Ver5.7.4: 2012.12.13 Ver5.7.4 (1)Web???????、???????????????????????????????????????????VFPX: ssClasses A1.0: My initial release. See https://vfpx.codeplex.com/wikipage?title=ssClasses&referringTitle=Home for a brief description of what is inside this releasesb0t v.5: sb0t 5.01 alpha 1: GUI support for Spanish language (Thank you Di3go and Oysterhead) Captcha database updates Bug fix where some hosts could not joinHome Access Plus+: v8.6: v8.6.1213.1220 Added: Group look up to the visible property of the Booking System Fixed: Switched to using the outlook/exchange thumbnailPhoto instead of jpegPhoto Added: Add a blank paragraph below the tiles. This means that the browser displays a vertical scroller when resizing the window. Previously it was possible for the bottom edge of a tile not to be visible if the browser window was resized. Added: Booking System: Only Display Day+Month on the booking Home Page. This allows for the cs...Layered Architecture Solution Guidance (LASG): LASG 1.0.0.8 for Visual Studio 2012: PRE-REQUISITES Open GAX (Please install Oct 4, 2012 version) Microsoft® System CLR Types for Microsoft® SQL Server® 2012 Microsoft® SQL Server® 2012 Shared Management Objects Microsoft Enterprise Library 5.0 (for the generated code) Windows Azure SDK (for layered cloud applications) Silverlight 5 SDK (for Silverlight applications) THE RELEASE This release only works on Visual Studio 2012. Known Issue If you choose the Database project, the solution unfolding time will be slow....Fiskalizacija za developere: FiskalizacijaDev 2.0: Prva prava produkcijska verzija - Zakon je tu, ova je verzija uskladena sa trenutno važecom Tehnickom specifikacijom (v1.2. od 04.12.2012.) i spremna je za produkcijsko korištenje. Verzije iza ove ce ovisiti o naknadnim izmjenama Zakona i/ili Tehnicke specifikacije, odnosno, o eventualnim greškama u radu/zahtjevima community-a za novim feature-ima. Novosti u v2.0 su: - That assembly does not allow partially trusted callers (http://fiskalizacija.codeplex.com/workitem/699) - scheme IznosType...Simple Injector: Simple Injector v1.6.1: This patch release fixes a bug in the integration libraries that disallowed the application to start when .NET 4.5 was not installed on the machine (but only .NET 4.0). The following packages are affected: SimpleInjector.Integration.Web.dll SimpleInjector.Integration.Web.Mvc.dll SimpleInjector.Integration.Wcf.dll SimpleInjector.Extensions.LifetimeScoping.dllBootstrap Helpers: Version 1: First releasesheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.2.0: Main featuresOptimizations for intersectionsThe main purpose of this release was to further optimize rendering performance by skipping object intersections with other sheets. From now by default an object's sheets will only intersect its own sheets and never other static or dynamic sheets. This is the usual scenario since objects will never bump into other sheets when using collision detection. DocumentationMany of you have been asking for proper documentation, so here it goes. 
Check out the...DirectX Tool Kit: December 11, 2012: December 11, 2012 Ex versions of DDSTextureLoader and WICTextureLoader Removed use of ATL's CComPtr in favor of WRL's ComPtr for all platforms to support VS Express editions Updated VS 2010 project for official 'property sheet' integration for Windows 8.0 SDK Minor fix to CommonStates for Feature Level 9.1 Tweaked AlphaTestEffect.cpp to work around ARM NEON compiler codegen bug Added dxguid.lib as a default library for Debug builds to resolve GUID link issuesDNN Flash Viewer: Source (same as default release): SourceArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.1 Final for 10.1: We are proud to announce the release of ArcGIS Editor for OpenStreetMap version 2.1. This download is compatible with ArcGIS 10.1, and includes setups for the Desktop Component, Desktop Component when 64 bit Background Geoprocessing is installed, and the Server Component. Important: if you already have ArcGIS Editor for OSM installed but want to install this new version, you will need to uninstall your previous version and then install this one. This release includes support for the ArcGIS 1...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8.2: This release just contains some fixes that have been done since the last release. Plus, this is strong named as well. I apologize for the lack of updates but my free time is less these days.Media Companion: MediaCompanion3.511b release: Two more bug fixes: - General Preferences were not getting restored - Fanart and poster image files were being locked, preventing changes to themVodigi Open Source Interactive Digital Signage: Vodigi Release 5.5: The following enhancements and fixes are included in Vodigi 5.5. Vodigi Administrator - Manage Music Files - Add Music Files to Image Slide Shows - Manage System Messages - Display System Messages to Users During Login - Ported to Visual Studio 2012 and MVC 4 - Added New Vodigi Administrator User Guide Vodigi Player - Improved Login/Schedule Startup Procedure - Startup Using Last Known Schedule when Disconnected on Startup - Improved Check for Schedule Changes - Now Every 15 Minutes - Pla...VidCoder: 1.4.10 Beta: Added progress percent to the title bar/task bar icon. Added MPLS information to Blu-ray titles. Fixed the following display issues in Windows 8: Uncentered text in textbox controls Disabled controls not having gray text making them hard to identify as disabled Drop-down menus having hard-to distinguish white on light-blue text Added more logging to proxy disconnect issues and increased timeout on initial call to help prevent timeouts. Fixed encoding window showing the built-in pre...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.400: Version 2.5.0.400 (Release): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete Update the documentation. InfoMan: Write the documentation. Other Downloads Downloads OverviewBee OPOA Platform: Bee OPOA Demo V1.0.001: Initial version.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.78: Fix for issue #18924 - using -pretty option left in ///#DEBUG blocks. Fix for issue #18980 - bad += optimization caused bug in resulting code. 
Optimization has been removed pending further review.New Projects.net demo: This is a demo project. 12141325: tesing12141327: testingAW User Applications: This project is used to enable AccessWeb User Applications for Database Application Users. Outline: The degree of professionalism permissions access options OkBattery Life Live Tile: Windows 8 app that displays the battery percentage in a live tile on the homescreen with the help of a system tray application. only available on intel devices.CMS .NET: CMS .net HTML5 multilingual+FORUM+GALLERY+Responsive Web Design+WIKI+COMMUNITY: Easy user-friendly, fast 10X,;multi site in different domain; multi servercookieTerm: A simple BBS terminal that can run in unicode environmentDevCow: This is a location for all of the community projects that help support DevCow.comDevville.NET: The project contains some of a very useful helpers and extension methods for .NET and SharePoint.Echo Garden: Echo Garden is a modification of Alphalabs' Windows Phone app, Node Garden, that represents the nodes through sound.FluentNavigationCoercion: Simple library that makes navigation coercion on WP simple and readableGet Music: Internet radio's parser. Parse radio logs, store received data to store. Make statistic analyze of radio. jKinect - Kinectify any web site.: Provide a unique Kinect User Experience for your website. With jKinect, turn any web site into a Kinect enabled web application.Lagos Single Mothers: This is a web2.0 site that will help single mothers in the city of Lagos (in Nigeria) share ideas on how to raise children, as single mothers. LIF11: Projet logique classiqueMalkiSum: Malki SumMyLittleAdressBook: Just a little project to test MVVM pattern on a multiview solution with authentificationNopCommerce 2.7x Multi Store version: NopCommerce 7.x Multi Store / Vendor versionPodcastToMp3: Automated tool to make MP3s from M4A chapters.Primer Demo ClickOnce: Demo ClickOncePulsus: A simple .NET logging library for modern applications.Sharp6800 - ET-3400 Microprocessor Trainer Emulator: A Heathkit ET-3400 Microprocessor Trainer Emulator. It features a 6800 emulator core and simulated 7-segment display and keypad. Written in 100% C#TEdit: TEdit is a source code editor that mainly used for InfoBasic Programming language. Running in Windows platform. Developed using Scintilla & ScintillaNet.VsPackageUtils: VsPackageUtils is a basic utility helper class for common operations in a VsPackage.WiFiShare: WiFiShare shares your LAN connection to WiFi using Internet Connection Sharing(ICS) from your Windows operating system.Windows 8 RSS App Kit: A Windows 8 App "kit" that allows you to build a fixed-list RSS reader with auto-image-detection in feeds.

    Read the article

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char).  Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character.  However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets.  If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing.  However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make. Avoid Premature Optimization If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar.  More generally, for small tables, you won't see any significant benefit.  Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can.  You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char. Obviously the main reason to make this change is to reduce the amount of space required by each row.  This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc.  If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MBytes or 1GB.  If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5*1 = 5 bytes per row, and only 500MB.  Of course, if it turns out that it only ever stores the values true and false then you could go further and replace it with a bit data type which uses only 1 byte per row (100MB total). Detecting Whether Unicode Is In Use So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char but you're not sure whether you're currently using Unicode in these columns.  By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters.  Instead, you need a query.  It's actually very simple: SELECT DISTINCT(CategoryName) FROM Categories WHERE CategoryName <> CONVERT(varchar, CategoryName) Summary Thanks to Gregg Stark for the tip.
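    As a hedged follow-up sketch: once the query above comes back empty for a column (no Unicode in use), the switch itself is a one-line schema change. The table, column and length below are hypothetical; test on a copy first and remember to rebuild any indexes that include the column.

```sql
-- Hypothetical example only: if the detection query above returns no rows for
-- CategoryName, nothing non-ASCII is stored and the column can be narrowed.
-- Use the same length as the existing nvarchar column.
ALTER TABLE dbo.Categories
    ALTER COLUMN CategoryName varchar(15) NOT NULL;
```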

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed, while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit, an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving into the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. 
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
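    To make the root cause concrete, here is an illustrative sketch, with hypothetical table and column names, of the kind of join-predicate mismatch described above and the schema fix that removes the implicit conversion.

```sql
-- Illustrative sketch with hypothetical names: a join predicate mixing types
-- forces an implicit conversion on every row and defeats index seeks.
-- Before: FactLoad.SourceKey is varchar(50), DimSource.SourceKey is int.
SELECT f.LoadID, d.SourceName
FROM dbo.FactLoad AS f
JOIN dbo.DimSource AS d
    ON f.SourceKey = d.SourceKey;  -- CONVERT_IMPLICIT appears in the plan here

-- Aligning the data types removes the conversion (after verifying the data):
ALTER TABLE dbo.FactLoad
    ALTER COLUMN SourceKey int NOT NULL;
```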

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray: Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj; Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 dir = p2 - p1; dir.Normalize(); Ray ray = Ray(p1, dir); Then get the Y-intercept by using algebra: float t = -ray.Position.Y / ray.Direction.Y; Vector3 p = ray.Position + t * ray.Direction; The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the project position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly abusing the fact that floating point calculations are done at 80-bit precision on x86, however before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can do to improve this? EDIT: A little snooping around in .NET Reflector on SlimDX.dll: public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection) { Vector3 coordinate = new Vector3(); Matrix result = new Matrix(); Matrix.Invert(ref worldViewProjection, out result); coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0); coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0); coordinate.Z = (vector.Z - minZ) / (maxZ - minZ); TransformCoordinate(ref coordinate, ref result, out coordinate); return coordinate; } // ... public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result) { Vector3 vector; Vector4 vector2 = new Vector4 { X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41, Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42, Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43 }; float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44)); vector2.W = num; vector.X = vector2.X * num; vector.Y = vector2.Y * num; vector.Z = vector2.Z * num; result = vector; } ...which seems to be a pretty standard method of unprojecting a point from a projection matrix, however this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.

    Read the article

  • DataBinding a dictionary to a combobox in WPF.

    - by Ashish Ashu
    I have a Dictionary which is bound to a combobox; I use the dictionary so that the enum values can be displayed with spaces.

    public enum Option { Enter_Value, Select_Value };
    Dictionary<Option, string> Options;

    <ComboBox x:Name="optionComboBox" SelectionChanged="optionComboBox_SelectionChanged" SelectedValuePath="Key" DisplayMemberPath="Value" SelectedItem="{Binding Path=SelectedOption}" ItemsSource="{Binding Path=Options}" />

    This works fine. My queries: 1. I am not able to set the initial value of the combo box. In the XAML snippet above, the line SelectedItem="{Binding Path=SelectedOption}" is not working. I have declared SelectedOption in my view model; it is of type string and I have initialized it as below: SelectedOption = Options[Option.Enter_Value].ToString(); 2. I want the combo box to become editable when the user selects Option.Enter_Value, so that a numeric value can be typed into it. If the user selects the second option, Option.Select_Value, a dialog should pop up that lets the user pick from predefined values, and the combo box should become read-only and show the selected value. Please help!
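    One way to make the initial selection stick (a sketch, not the poster's code; the view model class and its member names are assumptions): since DisplayMemberPath and SelectedValuePath are already set, bind SelectedValue instead of SelectedItem and type the bound property as the Option enum itself so it matches SelectedValuePath="Key". The XAML would then read SelectedValue="{Binding SelectedOption, Mode=TwoWay}".

    // Sketch of the view-model half.
    using System.Collections.Generic;
    using System.ComponentModel;

    public enum Option { Enter_Value, Select_Value }

    public class OptionsViewModel : INotifyPropertyChanged
    {
        private Option _selectedOption = Option.Enter_Value;   // initial selection shown by the combo box

        public OptionsViewModel()
        {
            Options = new Dictionary<Option, string>
            {
                { Option.Enter_Value, "Enter Value" },
                { Option.Select_Value, "Select Value" }
            };
        }

        public Dictionary<Option, string> Options { get; private set; }

        public Option SelectedOption
        {
            get { return _selectedOption; }
            set
            {
                if (_selectedOption == value) return;
                _selectedOption = value;
                OnPropertyChanged("SelectedOption");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }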

    Read the article

  • How do I filter out NaN FLOAT values in Teradata SQL?

    - by Paul Hooper
    With the Teradata database, it is possible to load values of NaN, -Inf, and +Inf into FLOAT columns through Java. Unfortunately, once those values get into the tables, they make life difficult when writing SQL that needs to filter them out. There is no IsNaN() function, nor can you "CAST ('NaN' as FLOAT)" and use an equality comparison. What I would like to do is, SELECT SUM(VAL**2) FROM DTM WHERE NOT ABS(VAL) > 1e+21 AND NOT VAL = CAST ('NaN' AS FLOAT) but that fails with error 2620, "The format or data contains a bad character.", specifically on the CAST. I've tried simply "... AND NOT VAL = 'NaN'", which also fails for a similar reason (3535, "A character string failed conversion to a numeric value."). I cannot seem to figure out how to represent NaN within the SQL statement. Even if I could represent NaN successfully in an SQL statement, I would be concerned that the comparison would fail. According to the IEEE 754 spec, NaN = NaN should evaluate to false. What I really seem to need is an IsNaN() function. Yet that function does not seem to exist.
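    One workaround worth trying (a sketch, not a verified Teradata recipe): the IEEE 754 rule that NaN never compares equal to anything, itself included, can be turned around and used as the missing IsNaN() test, provided the engine's FLOAT comparison honours that rule for stored NaN bit patterns.

    -- VAL = VAL is false only when VAL is NaN, so the self-comparison screens NaN out;
    -- the ABS() test screens out +Inf and -Inf at the same time.
    SELECT SUM(VAL ** 2)
    FROM DTM
    WHERE VAL = VAL
      AND ABS(VAL) < 1E300;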

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution and it works perfectly for single controllers; however, I'm having trouble making it more flexible. What I'm looking for is this:

    1. Dynamic column names in the JSON. Instead of "Column1: 'value', ..." and "Column2: 'value', ...", I'd like to use, for example, "id: 'value', ..." and "place: 'value', ..." for one stored procedure, and "animal" and "type" in another.
    2. A dynamic number of columns depending on which stored procedure is called. Some stored procedures I'll want to read more than 2 rows from; is there a smart way of accomplishing that?
    3. Numeric values (floats and integers) from the database presented inside the JSON without quotes, like this (name and age): { Column1: "John", Column2: 53 }.

    I would be very grateful for any feedback and suggestions / code examples, even imperfect ones.
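    One shape this can take (a sketch, not Darin Dimitrov's original solution; the controller, action, procedure and connection-string names are made up for illustration): read whatever columns the stored procedure returns into dictionaries keyed by column name, then let MVC's Json() serialize them. The JSON property names and the number of columns are then both dynamic, and numeric columns stay unquoted because they are boxed as int/double rather than string.

    using System.Collections.Generic;
    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web.Mvc;

    public class ResultsController : Controller
    {
        public ActionResult Places()
        {
            // Each stored procedure gets its own small action, but they all share the generic reader.
            return Json(ExecuteToRows("dbo.GetPlaces"), JsonRequestBehavior.AllowGet);
        }

        private static List<Dictionary<string, object>> ExecuteToRows(string procedureName)
        {
            var rows = new List<Dictionary<string, object>>();
            var connectionString = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(procedureName, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var row = new Dictionary<string, object>();
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            // Column names come straight from the result set, so the JSON keys are
                            // "id", "place", "animal", ... whatever the procedure returns, and
                            // numeric columns serialize as JSON numbers, not strings.
                            row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                        }
                        rows.Add(row);
                    }
                }
            }
            return rows;
        }
    }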

    Read the article

  • function not working R

    - by user3722828
    I've never programmed before and am trying to learn. I'm following the Coursera R programming course from Johns Hopkins that I've seen other people post about. Anyway, this was supposed to be my first function, yet it doesn't work, even though it runs just fine when I type out all the steps individually. Can anyone tell me why?

    > pollutantmean <- function(directory, pollutant, id = 1:332){
    +   x <- list.files("/Users/mike******/Desktop/directory", full.names=TRUE)
    +   y <- lapply(x, read.csv)
    +   z <- do.call(rbind.data.frame, y[id])
    +   mean(z$pollutant, na.rm=TRUE)
    + }
    > pollutantmean(specdata, nitrate, 1:10)
    [1] NA
    Warning message:
    In mean.default(z$pollutant, na.rm = TRUE) :
      argument is not numeric or logical: returning NA

    Typing the same steps out individually:

    > x <- list.files("/Users/mike******/Desktop/specdata", full.names=TRUE)
    > y <- lapply(x, read.csv)
    > z <- do.call(rbind.data.frame, y[1:10])
    > mean(z$nitrate, na.rm=TRUE)
    [1] 0.7976266
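    For anyone hitting the same wall, a sketch of the same function with the two likely culprits addressed: the directory argument is ignored in favour of a hard-coded path, and z$pollutant looks for a column literally named "pollutant" instead of using the argument's value. The call-site path below is a placeholder, not the poster's real one.

    pollutantmean <- function(directory, pollutant, id = 1:332) {
      x <- list.files(directory, full.names = TRUE)    # use the argument, not a hard-coded path
      y <- lapply(x[id], read.csv)                     # only read the requested files
      z <- do.call(rbind.data.frame, y)
      mean(z[[pollutant]], na.rm = TRUE)               # [[ ]] uses the value of 'pollutant'; $pollutant does not
    }

    # Both arguments are character strings, so they are quoted at the call site:
    pollutantmean("path/to/specdata", "nitrate", 1:10)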

    Read the article

  • How do I shrink a matrix using an array mask in MATLAB?

    - by Pyrolistical
    This seems to be a very common problem of mine:

    data = [1 2 3; 4 5 6];
    mask = [true false true];
    mask = repmat(mask, 2, 1);
    data(mask)   ==> [1; 4; 3; 6]

    What I wanted was [1 3; 4 6]. Yes, I can just reshape it to the right size, but that seems like the wrong way to do it. Is there a better way? Why doesn't data(mask) return a matrix when the result is actually rectangular? I understand that in the general case it may not be, but in my case, since my original mask is a single row, it always will be. Corollary: thanks for the answer; I also wanted to point out that this works with anything that returns a numeric index, like ismember, sort, or unique. I used to take the second return value from sort and apply it to every column manually, when this approach does it in one shot.
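    A sketch of the column-indexing route: when the mask applies to whole columns, index the column dimension directly instead of expanding the mask with repmat, and the result stays rectangular.

    % Logical indexing on the second dimension keeps the matrix shape.
    data = [1 2 3; 4 5 6];
    mask = [true false true];

    data(:, mask)                              % -> [1 3; 4 6]

    % The repmat form still works, but the result has to be reshaped back:
    reshape(data(repmat(mask, 2, 1)), 2, [])   % -> [1 3; 4 6]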

    Read the article

  • NHibernate query with Projections.Cast to DateTime

    - by stiank81
    I'm experimenting with using a string for storing different kinds of data types in a database. When I query, I need to cast the strings to the right type in the query itself. I'm using .NET with NHibernate, and was glad to learn that there is functionality for this. Consider the simple class:

    public class Foo { public string Text { get; set; } }

    I successfully use Projections.Cast to cast to numeric values; e.g. the following query correctly returns all Foos with an integer between 1 and 10 stored in Text:

    var result = Session.CreateCriteria<Foo>()
        .Add(Restrictions.Between(Projections.Cast(NHibernateUtil.Int32, Projections.Property("Text")), 1, 10))
        .List<Foo>();

    Now, if I try the same thing for DateTime, I'm not able to make it work no matter what I try. Why?!

    var date = new DateTime(2010, 5, 21, 11, 30, 00);
    AddFooToDb(new Foo { Text = date.ToString() }); // Will add it to the database...

    var result = Session
        .CreateCriteria<Foo>()
        .Add(Restrictions.Eq(Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date))
        .List<Foo>();
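    One thing worth ruling out (a sketch, not a confirmed fix; AddFooToDb and Session are the question's own helpers): date.ToString() produces a culture-specific string, but the CAST happens inside the database, which parses the text by its own rules. Storing a culture-invariant, sortable value takes that mismatch out of play.

    using System.Globalization;

    var date = new DateTime(2010, 5, 21, 11, 30, 00);
    // "2010-05-21T11:30:00" - the ISO 8601 "s" format, unambiguous for the database to cast back
    AddFooToDb(new Foo { Text = date.ToString("s", CultureInfo.InvariantCulture) });

    var result = Session
        .CreateCriteria<Foo>()
        .Add(Restrictions.Eq(Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date))
        .List<Foo>();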

    Read the article

  • online quiz using ASP dot NET

    - by wacky_coder
    Hi, I need to develop an online quiz website with multiple-choice questions (MCQs). I want one question to appear per page, with a numeric pager so that the user can go back and forth. I am using a FormView to display the questions and RadioButtons, and I created a class, QANS, that holds the answer the user selected for each question they answered, so that I can later sum up the total score. The problem I'm facing is this: suppose the user selects a RadioButton, say R, on a page index, say I, and then goes to some other page, say K. When the user returns to page I, none of the RadioButtons is selected. I tried adding code in Page_Load() to set the Checked property of the RadioButton to true, but it didn't help. I also tried the same in the PageIndexChanged and PageIndexChanging events, but the RadioButton doesn't get checked. The RadioButton that was selected for a particular question has been stored, and I am able to print which one it was using Response.Write(), but I'm not able to keep the RadioButton checked. How should this be done?
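    A direction that often works in this situation (a sketch under assumptions: the FormView is named questionFormView, the saved answers live in Session as a Dictionary mapping question index to the selected RadioButton's ID, and the RadioButton IDs are known): restore the selection after the FormView has re-bound for the new page, i.e. in its DataBound event, because the freshly created RadioButtons overwrite anything set earlier in Page_Load.

    // Sketch of a code-behind handler; control IDs and the Session key are assumptions, not the poster's code.
    // Wired up in markup: <asp:FormView ID="questionFormView" runat="server" OnDataBound="questionFormView_DataBound" ... />
    using System;
    using System.Collections.Generic;
    using System.Web.UI.WebControls;

    public partial class Quiz : System.Web.UI.Page
    {
        protected void questionFormView_DataBound(object sender, EventArgs e)
        {
            var answers = Session["answers"] as Dictionary<int, string>;
            if (answers == null) return;

            string selectedId;
            if (!answers.TryGetValue(questionFormView.PageIndex, out selectedId)) return;

            // Re-apply the saved answer for the question on the current page.
            var selected = questionFormView.Row.FindControl(selectedId) as RadioButton;
            if (selected != null)
            {
                selected.Checked = true;
            }
        }
    }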

    Read the article

  • Complex regex question, data may or may not be in brackets

    - by martinpetts
    I need to extract data from a source that presents it in one of two ways. The data may be formatted like this:

    Francis (Lab) 18,077 (60.05%); Waller (LD) 4,140 (13.75%); Evans (PC) 3,545 (11.78%); Rees-Mogg (C) 3,064 (10.18%); Wright (Veritas) 768 (2.55%); La Vey (Green) 510 (1.69%)

    Or like this:

    Lab 8,994 (33.00%); C 7,924 (29.07%); LD 5,197 (19.07%); PC 3,818 (14.01%); Others 517 (1.90%); Green 512 (1.88%); UKIP 296 (1.09%)

    The data I need to extract is the percentage and the party (these are election results); the party is either in brackets (first example) or is the only non-numeric text. So far I have this:

    preg_match('/(.*)\(([^)]*)%\)/', $value, $match);

    which gives me the following matches (for the first example):

    Array ( [0] => Francis (Lab) 18,077 (60.05%) [1] => Francis (Lab) 18,077 [2] => 60.05 )

    So I have the percentage, but I also need the party label, which may or may not be in brackets and may or may not be the only text. Can anyone help?
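    One way to handle both layouts (a sketch; the function name is invented here): split the line into result chunks first, then pull the percentage and the party from each chunk, taking the party from brackets when a non-percentage bracket exists and from the leading non-numeric text otherwise.

    <?php
    function parse_results($line) {
        $results = array();
        foreach (preg_split('/;\s*/', trim($line)) as $chunk) {
            if (!preg_match('/\(([\d.]+)%\)/', $chunk, $pct)) {
                continue;   // no percentage in this chunk, skip it
            }
            if (preg_match('/\(([^)%]+)\)/', $chunk, $party)) {
                // "Francis (Lab) 18,077 (60.05%)" -> party sits in brackets
                $label = $party[1];
            } elseif (preg_match('/^([^\d(]+)/', $chunk, $party)) {
                // "Lab 8,994 (33.00%)" -> party is the leading non-numeric text
                $label = trim($party[1]);
            } else {
                continue;
            }
            $results[$label] = (float) $pct[1];
        }
        return $results;
    }

    print_r(parse_results('Lab 8,994 (33.00%); C 7,924 (29.07%); LD 5,197 (19.07%)'));
    // Array ( [Lab] => 33 [C] => 29.07 [LD] => 19.07 )
    ?>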

    Read the article

  • How can I use Convert.ChangeType to convert string into numerics with group separator?

    - by Loic
    Hello, I want to make a generic string-to-numeric converter and provide it as a string extension, so I wrote the following code:

    public static bool TryParse<T>( this string text, out T result, IFormatProvider formatProvider ) where T : struct
    {
        try { result = (T)Convert.ChangeType( text, typeof( T ), formatProvider ); return true; } catch(...

    I call it like this:

    int value;
    var ok = "123".TryParse(out value, NumberFormatInfo.CurrentInfo);

    It works fine until I want to use a group separator. As I live in France, where the thousand separator is a space and the decimal separator is a comma, the string "1 234 567,89" should equal 1234567.89 (in the invariant culture). But the function crashes! When I try to perform a non-generic conversion, like double.Parse(...), I can use an overload which accepts a NumberStyles parameter; I specify NumberStyles.Number and this time it works! So, my questions are: 1. Why does the parsing not respect my NumberFormatInfo (where NumberGroupSeparator is set to a space, as configured in my OS)? 2. How can I make the generic version work with Convert.ChangeType, given that it has no overload which accepts a NumberStyles parameter?
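    One way around the missing NumberStyles overload (a sketch that assumes the targets are numeric types; it is named TryParseNumber here to keep it apart from the original): parse the text into a decimal first, using the Parse overload that does take a NumberStyles, and only then hand the already-numeric value to Convert.ChangeType. For integer targets at least, the style Convert applies to strings is NumberStyles.Integer, which rejects group separators, so giving the style explicitly is what restores them.

    using System;
    using System.Globalization;

    public static class StringParsingExtensions
    {
        // Sketch: restricted to numeric targets; non-numeric structs would need another path.
        public static bool TryParseNumber<T>(this string text, out T result, IFormatProvider formatProvider)
            where T : struct
        {
            try
            {
                // NumberStyles.Number allows group separators, a decimal point and a sign.
                decimal number = decimal.Parse(text, NumberStyles.Number, formatProvider);
                result = (T)Convert.ChangeType(number, typeof(T), formatProvider);
                return true;
            }
            catch (Exception)
            {
                result = default(T);
                return false;
            }
        }
    }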

    Read the article

  • jquery Ajax call resulting in Undefined Error in Firefox

    - by Kamal
    Hi I've been pulling my hair out for the last few hours with this problem. And the googling has been hampered by the very vagueness of this. So let me apologise for that first. Basically I'm using jquery and ajax (with C#) to return data from the backend and display that to the screen. The code works perfectly for firefox and IE. But when the data gets too large (??) (1500+ table rows) all I get is an undefined popup. Debugging in firefox (3.6) it doesn't even go into the success method. Worse still it doesn't even go into the error method. A lot of superfluous information there, but I'd rather show everything I'm doing. The Code $j.ajax( { type: "POST", url: "AdminDetails.aspx/LoadCallDetails", data: "{" + data + "}", contentType: "application/json;charset=utf-8", dataType: "json", success: function(msg) { $j("#CallDetailsHolder").html(msg.d); $j(".pointingHand").hide(); var oTable = $j('#dt').dataTable({ "bProcessing": true, "bPaginate": true, "bSort": true, "bAutoWidth": false, "aoColumns": [ { "sType": 'html' }, { "sType": 'custdate' }, { "sType": 'html-numeric' }, { "sType": 'ariary' }, { "sType": 'html' }, { "sType": 'html' } ], "oLanguage": { "sProcessing": "Traitement...", "sLengthMenu": "_MENU_ Montrer", "sZeroRecords": "Aucun enregistrement", "sInfo": "_START_ à _END_ de _TOTAL_", "sInfoEmpty": "0 à 0 de 0", "sInfoFiltered": "(filtrée à partir de _MAX_ )", "sInfoPostFix": "", "sSearch": "Rechercher", "sUrl": "", "oPaginate": { "sFirst": "premier", "sPrevious": "Précédent", "sNext": "suivant", "sLast": "dernier" } }, "sDom": 'T<"clear">lfrtip' }); $j('#CompteBlocRight0').unblock(); $j('#btnRangeSearch').click(function() { oTable.fnDraw(); }); }, error: function(msg) { DisplayError(msg); $j('#CompteBlocRight0').unblock(); } }); //$.ajax } The code definitely works. And even displays in IE without any issues. Any help???
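    One suspect worth checking, offered as a guess rather than a diagnosis: ASP.NET page methods such as AdminDetails.aspx/LoadCallDetails serialize their JSON response through JavaScriptSerializer, and the configured limit for script services (maxJsonLength, 102,400 characters by default) makes the call fail once the response grows past it, which would fit the "works until the data gets large" pattern and the unhelpful popup. Raising the limit in web.config is a quick way to test that theory, and logging the raw response in the error callback (its arguments are the XHR object, a status string, and the thrown error) helps confirm what actually came back.

    <!-- Sketch: raise the JSON serialization limit for script services / page methods.
         The value below (4 MB) is an arbitrary test value, not a recommendation. -->
    <configuration>
      <system.web.extensions>
        <scripting>
          <webServices>
            <jsonSerialization maxJsonLength="4194304" />
          </webServices>
        </scripting>
      </system.web.extensions>
    </configuration>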

    Read the article

  • Specifying formatting for csv.writer in Python

    - by user248237
    I am using csv.DictWriter to output csv files from a set of dictionaries, with the following function:

    def dictlist2file(dictrows, filename, fieldnames, delimiter='\t', lineterminator='\n'):
        out_f = open(filename, 'w')
        # Write out header
        header = delimiter.join(fieldnames) + lineterminator
        out_f.write(header)
        # Write out dictionary
        data = csv.DictWriter(out_f, fieldnames, delimiter=delimiter, lineterminator=lineterminator)
        data.writerows(dictrows)
        out_f.close()

    where dictrows is a list of dictionaries and fieldnames provides the headers that should be serialized to the file. Some of the values in dictrows are numeric, e.g. floats, and I'd like to control how they are formatted. For example, I might want floats to be serialized with "%.2f" rather than at full precision. Ideally, I'd like to specify some kind of mapping that says how to format each type, e.g. {float: "%.2f"}, meaning that if you see a float, format it with %.2f. Is there an easy way to do this? I don't want to subclass DictWriter or anything complicated like that; this seems like very generic functionality. How can this be done? The only other solution I can think of is, instead of adjusting the formatting in DictWriter, to use the decimal package to fix the precision of the values at two decimal places so that they are serialized that way. I don't know if that is a better solution. Thanks very much for your help.
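    A sketch of the mapping idea (Python 3 shown; FORMATS and format_row are names invented here): apply the per-type format to each row just before it is handed to DictWriter, so the writer itself needs no subclassing.

    import csv

    FORMATS = {float: "%.2f"}   # type -> format string

    def format_row(row, formats=FORMATS):
        # Replace each value whose type appears in the mapping with its formatted string.
        return {key: (formats[type(value)] % value) if type(value) in formats else value
                for key, value in row.items()}

    def dictlist2file(dictrows, filename, fieldnames, delimiter='\t', lineterminator='\n'):
        with open(filename, 'w', newline='') as out_f:
            writer = csv.DictWriter(out_f, fieldnames,
                                    delimiter=delimiter, lineterminator=lineterminator)
            writer.writeheader()                                   # same header the original wrote by hand
            writer.writerows(format_row(row) for row in dictrows)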

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >