Search Results

Search found 321 results on 13 pages for 'scalar'.

Page 6/13

  • How to define a "complicated" ComputedColumn in SQL Server?

    - by Slauma
    SQL Server Beginner question: I'm trying to introduce a computed column in SQL Server (2008). In the table designer of SQL Server Management Studio I can do this, but the designer only offers me one single edit cell to define the expression for this column. Since my computed column will be rather complicated (depending on several database fields and with some case differentiations) I'd like to have a more comfortable and maintainable way to enter the column definition (including line breaks for formatting and so on). I've seen there is an option to define functions in SQL Server (scalar value or table value functions). Is it perhaps better to define such a function and use this function as the column specification? And what kind of function (scalar value, table value)? To make a simplified example: I have two database columns: DateTime1 (smalldatetime, NULL) DateTime2 (smalldatetime, NULL) Now I want to define a computed column "Status" which can have four possible values. In Dummy language: if (DateTime1 IS NULL and DateTime2 IS NULL) set Status = 0 else if (DateTime1 IS NULL and DateTime2 IS NOT NULL) set Status = 1 else if (DateTime1 IS NOT NULL and DateTime2 IS NULL) set Status = 2 else set Status = 3 Ideally I would like to have a function GetStatus() which can access the different column values of the table row which I want to compute the value of "Status" for, and then only define the computed column specification as GetStatus() without parameters. Is that possible at all? Or what is the best way to work with "complicated" computed column definitions? Thank you for tips in advance!
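
    A minimal sketch of one common approach: a scalar-valued function taking the two columns as parameters. (A computed column definition cannot call a parameterless function that reads the current row, so the columns are passed in explicitly; the table name dbo.MyTable is assumed.)

        CREATE FUNCTION dbo.GetStatus
        (
            @DateTime1 smalldatetime,
            @DateTime2 smalldatetime
        )
        RETURNS tinyint
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN CASE
                WHEN @DateTime1 IS NULL AND @DateTime2 IS NULL THEN 0
                WHEN @DateTime1 IS NULL THEN 1
                WHEN @DateTime2 IS NULL THEN 2
                ELSE 3
            END;
        END;
        GO
        -- the computed column just forwards the row's columns to the function
        ALTER TABLE dbo.MyTable
            ADD Status AS dbo.GetStatus(DateTime1, DateTime2);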

  • Ruby Alias and module_function

    - by Jesse J
    I'm trying to debug someone else's code and having trouble figuring out what's wrong. When I run rake, one of the errors I get is: 2) Error: test_math(TestRubyUnits): NoMethodError: undefined method `unit_sin' for CMath:Module /home/user/ruby-units/lib/ruby_units/math.rb:21:in `sin' This is the function that calls the method: assert_equal Math.sin(pi), Math.sin("180 deg".unit) And this is what the class looks like: module Math alias unit_sin sin def sin(n) Unit === n ? unit_sin(n.to('radian').scalar) : unit_sin(n) end alias unit_cos cos def cos(n) Unit === n ? unit_cos(n.to('radian').scalar) : unit_cos(n) end ... module_function :unit_sin module_function :sin module_function :unit_cos module_function :cos ... end (The ellipsis means "more of the same"). As far as I can see, this is valid Ruby code. Is there something I'm missing here that's causing the error, or could the error be coming from something else? Update: I'm wondering if the problem has to do with namespaces. This code is attempting to extend CMath, so perhaps the alias and/or module_function isn't actually getting into CMath, or something like that....
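
    A hedged sketch of the namespace theory: module_function on Math does not propagate into CMath, which resolves methods through its own table, so CMath would need the same patch applied to it directly (Unit is assumed to come from ruby-units):

        require 'cmath'

        module CMath
          alias unit_sin sin   # alias also finds methods pulled in via include
          def sin(n)
            Unit === n ? unit_sin(n.to('radian').scalar) : unit_sin(n)
          end
          module_function :unit_sin, :sin
        end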

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction. Data sources in this project are varied and swappable, and the business logic shouldn't care which data source is currently in use. The identity is the toughest part because its implementation can change per kind of data source, whereas other fields like name, address, etc. are consistently scalar values. What I'm searching for is a good way to abstract the concept of identity, whether that is an existing library, a software pattern, or just a solid idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic and passed back to the data source to specify records, entities and/or documents to affect, so the data source must be able to parse back out the details of its own compound ids. Data Source Examples: This serves to provide an idea of what I mean by various data sources having different identity implementations. A relational data source might express a piece of content with an integer identifier plus a language-specific code. For example:

        content_id   language   (other columns expressing details of the content)
        1            en_us      ...
        1            fr_ca      ...

    The identity of the first record in the above example is: 1 + en_us. However, when a NoSQL data source is substituted, it might represent each piece of content with a GUID string 936DA01F-9ABD-4d9d-80C7-02AF85C822A8 plus a language code of a different standardization, and a third kind of data source might use just a simple scalar value. So on and so forth, you get the idea.
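
    A sketch of one possible shape for this, in C# since assemblies are mentioned (all names invented for illustration): the business layer holds an opaque, comparable token, and only the issuing data source knows how to mint and parse it:

        // An opaque compound identity the business layer can compare and
        // pass around without knowing its internal structure.
        public sealed class Identity : IEquatable<Identity>
        {
            private readonly string _token; // source-specific encoding, e.g. "1|en_us"

            public Identity(string token) { _token = token; }

            public bool Equals(Identity other)
            {
                return other != null && _token == other._token;
            }
            public override bool Equals(object obj) { return Equals(obj as Identity); }
            public override int GetHashCode() { return _token.GetHashCode(); }
            public override string ToString() { return _token; }
        }

    The relational source would mint Identity("1|en_us"), the NoSQL source Identity("936DA01F-...|en-US"), and each parses its own tokens back apart on the way in.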

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however, there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_value column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

  • "Marching cubes" voxel terrain - triplanar texturing with depth?

    - by Dan the Man
    I am currently working on a voxel terrain that uses the marching cubes algorithm for polygonizing the scalar field of voxels. I am using a triplanar texturing shader for texturing. Say I have a grass texture set to the Y axis and a dirt texture for both the X and Z axes. Now, when my player digs downwards, it still appears as grass. How would I make it appear as dirt? I have been thinking about this for a while, and the only thing I can think of to achieve this effect would be to mark vertices that have been dug with a certain vertex color. When a vertex has that color, the shader would apply the dirt texture to it. Is there a better method?
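
    A hedged sketch of that vertex-colour idea, in GLSL for concreteness (triplanar projection omitted for brevity; the "dug" weight is assumed to be painted into the red channel of the vertex colour when a voxel is modified):

        #version 330 core
        in vec2 vUV;
        in vec4 vColor;               // r = 1.0 where the player has dug
        uniform sampler2D grassTex;
        uniform sampler2D dirtTex;
        out vec4 fragColor;

        void main() {
            vec4 grass = texture(grassTex, vUV);
            vec4 dirt  = texture(dirtTex,  vUV);
            fragColor  = mix(grass, dirt, vColor.r);   // blend toward dirt where dug
        }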

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems like you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling it with whatever you need to scale it with (e.g. light color intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?
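
    For context, the classic result this builds on (Ramamoorthi & Hanrahan, 2001) is that diffuse irradiance is so low-frequency that an expansion over the first three bands captures it almost exactly:

        E(n) ≈ Σ_{l=0..2} Σ_{m=-l..l} Â_l · L_lm · Y_lm(n)

    Nine coefficients per colour channel suffice, so instead of integrating incoming light over the hemisphere at each shading point, you evaluate a short polynomial in the normal direction — the savings show up with many lights or with area/environment lighting, which a single BRDF evaluation per light cannot handle cheaply.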

  • Speaking in Omaha: December 7, 2011

    - by Bill Graziano
    I’m presenting in Omaha on Writing Faster SQL at 6PM on December 7th.  You can find meeting details on the Omaha SQL Server User Group page. The meeting location requires an RSVP so building security has a list of attendees. The presentation is a series of suggestions on improving performance.  It ranges from simple things like comparing indexed columns to scalar values up to tips for reducing query compiles and asynchronous processing patterns.  Nearly all of these come from specific issues I’ve encountered working on poorly performing SQL Servers.

  • Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath

    How do you bind a ComboBox to a collection of objects, and then bind a property from the selected object to some other scalar property? I received this question today from a friend of mine (a variation of this question). I decided to walk through the scenario here in case anyone else runs into it. This is one of those things that can be confusing: it is simple, but it is much easier shown than explained. This post lays out the scenario and you can download the sample code at the end. When we...
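
    For reference, the usual Silverlight/WPF pattern looks like this (a sketch; the property names People, Name, Id, and SelectedPersonId are invented):

        <ComboBox ItemsSource="{Binding Path=People}"
                  DisplayMemberPath="Name"
                  SelectedValuePath="Id"
                  SelectedValue="{Binding Path=SelectedPersonId, Mode=TwoWay}" />

    DisplayMemberPath controls what each item shows, SelectedValuePath picks which property of the selected object feeds SelectedValue, and SelectedValue is the scalar you bind to elsewhere.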

  • Detect rotated rectangle collision

    - by handyface
    I'm trying to implement a script that detects whether two rotated rectangles collide for my game. I used the method explained in the following article for my implementation in Google Dart: 2D Rotated Rectangle Collision. I tried to implement this code into my game. Basically, what I understood is that I have two rectangles, and these two rectangles can produce four axes (two per rectangle) by subtracting adjacent corner coordinates. Then all the corners from both rectangles need to be projected onto each axis, multiplying the coordinates of each corner by the axis coordinates (point.x*axis.x + point.y*axis.y) to make a scalar value, and checking whether the ranges of both rectangles' projections overlap. When all the axes have overlapping projections, there's a collision. First of all, I'm wondering whether my comprehension of this algorithm is correct. If so, I'd like to get some pointers on where my implementation (written in Dart, which is very readable for people comfortable with C-syntax) goes wrong. Thanks!
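
    That comprehension matches the separating axis theorem. A minimal Dart sketch of the per-axis test, with a hand-rolled vector type since no library is assumed:

        class Vec {
          final double x, y;
          const Vec(this.x, this.y);
        }

        double dot(Vec a, Vec b) => a.x * b.x + a.y * b.y;

        // Project both corner sets onto [axis]; the rectangles can only
        // collide if the projected ranges overlap on every axis tested.
        bool overlapOnAxis(List<Vec> a, List<Vec> b, Vec axis) {
          double minA = double.infinity, maxA = double.negativeInfinity;
          for (final c in a) {
            final p = dot(c, axis);
            if (p < minA) minA = p;
            if (p > maxA) maxA = p;
          }
          double minB = double.infinity, maxB = double.negativeInfinity;
          for (final c in b) {
            final p = dot(c, axis);
            if (p < minB) minB = p;
            if (p > maxB) maxB = p;
          }
          return minA <= maxB && minB <= maxA;
        }

        void main() {
          final axis = Vec(1, 0);
          final a = [Vec(0, 0), Vec(2, 0), Vec(2, 1), Vec(0, 1)];
          final b = [Vec(1.5, 0.5), Vec(3, 0.5), Vec(3, 2), Vec(1.5, 2)];
          print(overlapOnAxis(a, b, axis)); // true: ranges [0,2] and [1.5,3] overlap
        }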

  • Memory allocation problem with SVMs in OpenCV

    - by worksintheory
    Hi, I've been using OpenCV happily for a while, but now I have a problem which has bugged me for quite some time. The following code is a reasonably minimal example of my problem: #include <cv.h> #include <ml.h> using namespace cv; int main(int argc, char **argv) { int sampleCountForTesting = 2731; //BROKEN: Breaks svm.train_auto(...) for values of 2731 or greater! Mat trainingData( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); Mat trainingResponses( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); for(int j = 0; j < 6; j++) { trainingData.at<float>( j, 0 ) = (float) (j%2); trainingResponses.at<float>( j, 0 ) = (float) (j%2); //Setting a few values so I don't get a "single class" error } CvSVMParams svmParams( 100, //100 is CvSVM::C_SVC, 2, //2 is CvSVM::RBF, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, NULL, TermCriteria( TermCriteria::MAX_ITER | TermCriteria::EPS, 2, 1.0 ) ); CvSVM svm = CvSVM(); svm.train_auto( trainingData, trainingResponses, Mat(), Mat(), svmParams ); return 0; } I just create matrices to hold the training data and responses, then set a few entries to some value other than zero, then run the SVM. But it breaks whenever there are 2731 rows or more: OpenCV Error: One of arguments' values is out of range (requested size is negative or too big) in cvMemStorageAlloc, file [omitted]/opencv/OpenCV-2.2.0/modules/core/src/datastructs.cpp, line 332 With fewer rows, it seems to be fine and a classifier trained in a similar manner to the above seems to be giving reasonable output. Am I doing something wrong? I'm pretty sure it's not actually anything to do with lack of memory, as I've got 6GB and also the code works fine when the data has 2730 rows and 10000 columns, which is a much bigger allocation. I'm running OpenCV 2.2 on OSX 10.6 and initially I thought the problem might be related to this bug if for some reason the fix wasn't included in the MacPorts version. Now I've also tried downloading the most recent stable version from the OpenCV site and building with cmake and using that, but I still get the same error, and the fix is definitely included in that version. Any help would be much appreciated! Thanks,

  • How to handle an Oracle stored procedure call with Oracle Types as input or output using EclipseLink

    - by MAK
    Hi, I'm doing a proof of concept to figure out how efficient it is to call a stored procedure using EclipseLink. I was able to call an Oracle stored procedure using EclipseLink with scalar/primitive data types (like Integer, varchar, etc.). I wanted to understand how I can handle an Oracle stored procedure from EclipseLink with collections (Oracle Types/user-defined types) as input or output parameters. I would really appreciate it if someone could help me understand this with an example. Thanks MAK

  • Modifying documents in memory in yaml-cpp

    - by Mike Mueller
    I want to read a YML document, filter it by modifying some nodes in memory, and then spit it back out with an emitter. The problem is that YAML::Node appears to be designed to be read-only. Is there a way to replace a node's value (with a scalar in this case) that I'm missing?
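
    For what it's worth: in the old yaml-cpp API the answer is essentially no — YAML::Node is read-only there. The rewritten API (yaml-cpp 0.5 and later) made nodes mutable; a sketch against that newer API:

        #include <yaml-cpp/yaml.h>
        #include <iostream>

        int main() {
            YAML::Node doc = YAML::Load("name: old\ncount: 1");
            doc["name"] = "new";        // replace a scalar value in memory

            YAML::Emitter out;
            out << doc;                 // spit the filtered document back out
            std::cout << out.c_str() << "\n";
            return 0;
        }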

  • How can I tell if a set of parens in perl code will act as grouping parens or form a list?

    - by Ryan Thompson
    In Perl, parentheses are used for overriding precedence (as in most programming languages) as well as for creating lists. How can I tell if a particular pair of parens will be treated as a grouping construct or a one-element list? For example, I'm pretty sure this is a scalar and not a one-element list: (1 + 1) But what about more complex expressions? Is there an easy way to tell?
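
    A small demonstration of the usual answer — the parens themselves are neutral, and the surrounding context decides:

        use strict;
        use warnings;

        my $x = (1 + 1);          # scalar context: grouping, $x is 2
        my @a = (1 + 1);          # list context: one-element list (2)
        my $n = () = (1 + 1);     # count the elements: $n is 1
        print "$x $a[0] $n\n";    # prints "2 2 1"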

  • Updating with Related Entities - Entity Framework v4

    - by Vincent BOUZON
    Hi, I'm using Entity Framework v4 and I want to update a Customer who has Visits. My code: EntityKey key; object originalItem; key = this._modelContainer.CreateEntityKey("Customers", customer); if (this._modelContainer.TryGetObjectByKey(key, out originalItem)) { this._modelContainer.ApplyCurrentValues(key.EntitySetName, customer); } this._modelContainer.SaveChanges(); It works for scalar properties only. The customer.Visits collection is not updated. Best Regards :)
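
    That is expected: ApplyCurrentValues copies scalar properties only. A hedged sketch of synchronizing the related collection by hand, assuming an entity set named "Visits" and an ObjectSet property of the same name:

        foreach (var visit in customer.Visits)
        {
            EntityKey visitKey = this._modelContainer.CreateEntityKey("Visits", visit);
            object existingVisit;
            if (this._modelContainer.TryGetObjectByKey(visitKey, out existingVisit))
            {
                // existing visit: copy its scalar values across
                this._modelContainer.ApplyCurrentValues(visitKey.EntitySetName, visit);
            }
            else
            {
                // brand-new visit: attach it as an addition
                this._modelContainer.Visits.AddObject(visit);
            }
        }
        this._modelContainer.SaveChanges();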

  • What's FirstOrDefault for DateTime in Linq?

    - by Keltex
    If I have a query that returns a DateTime, what's the value of FirstOrDefault? Is there a generic way to get the default value of a C# scalar? Example: var list = (from item in db.Items where item.ID==1234 select item.StartDate).FirstOrDefault(); Edit: Assume that the column StartDate can't be null.
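
    FirstOrDefault falls back to default(T); since DateTime is a value type (it can't be null), that default is DateTime.MinValue:

        using System;

        class Demo
        {
            static void Main()
            {
                DateTime d = default(DateTime);
                Console.WriteLine(d == DateTime.MinValue);  // True
                Console.WriteLine(d);                       // 1/1/0001 12:00:00 AM (culture-dependent)
            }
        }

    default(T) is the generic way to get the default of any C# scalar: 0 for numeric types, false for bool, and so on.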

  • How can I stop my program from crashing in C++?

    - by Rachel
    I'm very new to programming and I am trying to write a program that adds and subtracts polynomials. My program sometimes works, but most of the time, it randomly crashes and I have no idea why. It's very buggy and has other problems I'm trying to fix, but I am unable to really get any further coding done since it crashes. I'm completely new here but any help would be greatly appreciated. Here's the code: #include <iostream> #include <cstdlib> using namespace std; int getChoice(void); class Polynomial10 { private: double* coef; int degreePoly; public: Polynomial10(int max); //Constructor for a new Polynomial10 int getDegree(){return degreePoly;}; void print(); //Print the polynomial in standard form void read(); //Read a polynomial from the user void add(const Polynomial10& pol); //Add a polynomial void multc(double factor); //Multiply the poly by scalar void subtract(const Polynomial10& pol); //Subtract polynom }; void Polynomial10::read() { cout << "Enter degree of a polynom between 1 and 10 : "; cin >> degreePoly; cout << "Enter space separated coefficients starting from highest degree" << endl; for (int i = 0; i <= degreePoly; i++) { cin >> coef[i]; } } void Polynomial10::print() { for(int i=0;i<=degreePoly;i++) { if(coef[i] == 0) { cout << ""; } else if(i>=0) { if(coef[i] > 0 && i!=0) { cout<<"+"; } if((coef[i] != 1 && coef[i] != -1) || i == degreePoly) { cout << coef[i]; } if((coef[i] != 1 && coef[i] != -1) && i!=degreePoly ) { cout << "*"; } if (i != degreePoly && coef[i] == -1) { cout << "-"; } if(i != degreePoly) { cout << "x"; } if ((degreePoly - i) != 1 && i != degreePoly) { cout << "^"; cout << degreePoly-i; } } } } void Polynomial10::add(const Polynomial10& pol) { for(int i = 0; i<degreePoly; i++) { int degree = degreePoly; coef[degreePoly-i] += pol.coef[degreePoly-(i+1)]; } } void Polynomial10::subtract(const Polynomial10& pol) { for(int i = 0; i<degreePoly; i++) { coef[degreePoly-i] -= pol.coef[degreePoly-(i+1)]; } } void Polynomial10::multc(double factor) { //int degreePoly=0; //double coef[degreePoly]; cout << "Enter the scalar multiplier : "; cin >> factor; for(int i = 0; i<degreePoly; i++) { coef[i] *= factor; } }; Polynomial10::Polynomial10(int max) { degreePoly=max; coef = new double[degreePoly]; for(int i; i<degreePoly; i++) { coef[i] = 0; } } int main() { int choice; Polynomial10 p1(1),p2(1); cout << endl << "CGS 2421: The Polynomial10 Class" << endl << endl << endl; cout << "0. Quit\n" << "1. Enter polynomial\n" << "2. Print polynomial\n" << "3. Add another polynomial\n" << "4. Subtract another polynomial\n" << "5. Multiply by scalar\n\n"; int choiceFirst = getChoice(); if (choiceFirst != 1) { cout << "Enter a Polynomial first!"; } if (choiceFirst == 1) {choiceFirst = choice;} while(choice != 0) { switch(choice) { case 0: return 0; case 1: p1.read(); break; case 2: p1.print(); break; case 3: p2.read(); p1.add(p2); cout << "Updated Polynomial: "; p1.print(); break; case 4: p2.read(); p1.subtract(p2); cout << "Updated Polynomial: "; p1.print(); break; case 5: p1.multc(10); cout << "Updated Polynomial: "; p1.print(); break; } choice = getChoice(); } return 0; } int getChoice(void) { int c; cout << "\nEnter your choice : "; cin >> c; return c; }
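
    Two likely crash causes stand out, sketched here as drop-in replacements for the corresponding member functions above (hedged — untested against the full program): the constructor's loop counter is never initialized and a degree-n polynomial needs n+1 coefficients, and read() raises degreePoly without reallocating coef, writing past the original allocation:

        Polynomial10::Polynomial10(int max)
        {
            degreePoly = max;
            coef = new double[degreePoly + 1];      // degree n needs n+1 slots
            for (int i = 0; i <= degreePoly; i++)   // i was uninitialized before
                coef[i] = 0;
        }

        void Polynomial10::read()
        {
            cout << "Enter degree of a polynom between 1 and 10 : ";
            cin >> degreePoly;
            delete[] coef;                          // resize for the new degree
            coef = new double[degreePoly + 1];
            cout << "Enter space separated coefficients starting from highest degree" << endl;
            for (int i = 0; i <= degreePoly; i++)
                cin >> coef[i];
        }

    (main() also reads choice before ever assigning it — choiceFirst and choice look swapped — but the memory errors above are the probable crashes.)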

  • delete row from selected gridview and database

    - by user175084
    I am trying to delete a row from the GridView and the database... It should be deleted when a delete LinkButton is clicked in the GridView. I am getting the row index as follows: protected void LinkButton1_Click(object sender, EventArgs e) { LinkButton btn = (LinkButton)sender; GridViewRow row = (GridViewRow)btn.NamingContainer; if (row != null) { LinkButton LinkButton1 = (LinkButton)sender; // Get reference to the row that hold the button GridViewRow gvr = (GridViewRow)LinkButton1.NamingContainer; // Get row index from the row int rowIndex = gvr.RowIndex; string str = rowIndex.ToString(); //string str = GridView1.DataKeys[row.RowIndex].Value.ToString(); RemoveData(str); //call the delete method } } Now I want to delete it... but I am having problems with this code. I get an error: Must declare the scalar variable "@original_MachineGroupName"... any suggestions? private void RemoveData(string item) { SqlConnection conn = new SqlConnection(@"Data Source=JAGMIT-PC\SQLEXPRESS; Initial Catalog=SumooHAgentDB;Integrated Security=True"); string sql = "DELETE FROM [MachineGroups] WHERE [MachineGroupID] = @original_MachineGroupID"; SqlCommand cmd = new SqlCommand(sql, conn); cmd.Parameters.AddWithValue("@original_MachineGroupID", item); conn.Open(); cmd.ExecuteNonQuery(); conn.Close(); } <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:SumooHAgentDBConnectionString %>" SelectCommand="SELECT MachineGroups.MachineGroupID, MachineGroups.MachineGroupName, MachineGroups.MachineGroupDesc, MachineGroups.TimeAdded, MachineGroups.CanBeDeleted, COUNT(Machines.MachineName) AS Expr1, DATENAME(month, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + SPACE(1) + DATENAME(d, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + ', ' + DATENAME(year, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) AS Expr2 FROM MachineGroups FULL OUTER JOIN Machines ON Machines.MachineGroupID = MachineGroups.MachineGroupID GROUP BY MachineGroups.MachineGroupID, MachineGroups.MachineGroupName, MachineGroups.MachineGroupDesc, MachineGroups.TimeAdded, MachineGroups.CanBeDeleted" DeleteCommand="DELETE FROM [MachineGroups] WHERE [MachineGroupID] =@original_MachineGroupID" > <DeleteParameters> <asp:Parameter Name="@original_MachineGroupID" Type="Int16" /> <asp:Parameter Name="@original_MachineGroupName" Type="String" /> <asp:Parameter Name="@original_MachineGroupDesc" Type="String" /> <asp:Parameter Name="@original_CanBeDeleted" Type="Boolean" /> <asp:Parameter Name="@original_TimeAdded" Type="Int64" /> </DeleteParameters> </asp:SqlDataSource> I still get an error: Must declare the scalar variable "@original_MachineGroupID"
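
    One likely cause, hedged: SqlDataSource parameter names are declared without the leading "@" (the "@" is prepended when the command runs), so the DeleteParameters block would look like:

        <DeleteParameters>
            <asp:Parameter Name="original_MachineGroupID" Type="Int16" />
        </DeleteParameters>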

  • How to write a sum calculation based on RIA Services?

    - by KentZhou
    When using RIA Services for a Silverlight app, I can issue the following async call to get a group of entities. LoadOperation<Person> ch = this.AMSContext.Load(this.AMSContext.GetPersonQuery().Where(a => a.PersonID == this.performer.PersonID)); But I want to get some calculation done, for example sum(Commission) or sum(Salary), where the result is not an entity, just a scalar value. How can I do this?
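
    A hedged sketch of the usual route: expose the aggregate as an invoke operation on the domain service, since query methods return entities but invoke operations can return scalars (the entity and property names here are assumed from the question):

        [Invoke]
        public decimal GetCommissionTotal(int personId)
        {
            return this.ObjectContext.Persons
                       .Where(p => p.PersonID == personId)
                       .Sum(p => p.Commission);
        }

    On the client this surfaces as an InvokeOperation whose Value is the scalar result.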

  • Filter list as a parameter in a compiled query

    - by JK
    I have the following compiled query that I want to return a list of "groups" that don't have a "GroupID" contained in a filtered list: CompiledQuery.Compile((ConfigEntities context, List<int> csList) => (from c in context.Groups where !csList.Contains(c.GroupID) select c).ToList()); However I'm getting the following run-time error: The specified parameter 'categories' of type 'System.Collections.Generic.List`1[[System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c261364e126]]' is not valid. Only scalar parameters (such as Int32, Decimal, and Guid) are supported. Any ideas?
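
    The usual workaround, sketched: leave this particular query uncompiled, since compiled queries accept only scalar parameters, and let Contains translate into an IN clause on each call (EF4 supports this translation):

        public static List<Group> GetGroupsNotIn(ConfigEntities context, List<int> csList)
        {
            return (from c in context.Groups
                    where !csList.Contains(c.GroupID)
                    select c).ToList();
        }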

  • How do I send information to an HLSL effect in DirectX 10?

    - by pypmannetjies
    I'd like to send my view vector to an ID3D10Effect variable in order to calculate specular lighting. How do I send a vector or even just scalar values to the HLSL from the running DirectX program? I want to do something like render() { //do transformations D3DXMatrix view = camera->getViewMatrix(); basicEffect.setVariable(viewVector, view); //render stuff }
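
    The D3D10 effects API does this through typed views of a named effect variable — a hedged sketch, assuming basicEffect is an ID3D10Effect* and the HLSL side declares float4 viewVector:

        // HLSL side (assumed): float4 viewVector; float time;
        ID3D10EffectVectorVariable* viewVar =
            basicEffect->GetVariableByName("viewVector")->AsVector();

        D3DXVECTOR4 eye(0.0f, 2.0f, -5.0f, 1.0f);
        viewVar->SetFloatVector((float*)&eye);

        // scalars work the same way through AsScalar()
        basicEffect->GetVariableByName("time")->AsScalar()->SetFloat(elapsed);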

  • How do I call a function name that is stored in a hash in Perl?

    - by Ether
    I'm sure this is covered in the documentation somewhere but I have been unable to find it... I'm looking for the syntactic sugar that will make it possible to call a method on a class whose name is stored in a hash (as opposed to a simple scalar): use strict; use warnings; package Foo; sub foo { print "in foo()\n" } package main; my %hash = (func => 'foo'); Foo->$hash{func}; If I copy $hash{func} into a scalar variable first, then I can call Foo->$func just fine... but what is missing to enable Foo->$hash{func} to work? (EDIT: I don't mean to do anything special by calling a method on class Foo -- this could just as easily be a blessed object (and in my actual code it is); it was just easier to write up a self-contained example using a class method.) EDIT 2: Just for completeness re the comments below, this is what I'm actually doing (this is in a library of Moose attribute sugar, created with Moose::Exporter): # adds an accessor to a sibling module sub foreignTable { my ($meta, $table, %args) = @_; my $class = 'MyApp::Dir1::Dir2::' . $table; my $dbAccessor = lcfirst $table; eval "require $class" or do { die "Can't load $class: $@" }; $meta->add_attribute( $table, is => 'ro', isa => $class, init_arg => undef, # don't allow in constructor lazy => 1, predicate => 'has_' . $table, default => sub { my $this = shift; $this->debug("in builder for $class"); ### here's the line that uses a hash value as the method name my @args = ($args{primaryKey} => $this->${\$args{primaryKey}}); push @args, ( _dbObject => $this->_dbObject->$dbAccessor ) if $args{fkRelationshipExists}; $this->debug("passing these values to $class -> new: @args"); $class->new(@args); }, ); } I've replaced the marked line above with this: my $pk_accessor = $this->meta->find_attribute_by_name($args{primaryKey})->get_read_method_ref; my @args = ($args{primaryKey} => $this->$pk_accessor); PS. I've just noticed that this same technique (using the Moose meta class to look up the coderef rather than assuming its naming convention) cannot also be used for predicates, as Class::MOP::Attribute does not have a similar get_predicate_method_ref accessor. :(
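
    The sugar being asked for is a scalar-reference dereference, which hands the parser a single scalar expression to use as the method name — the same trick the EDIT 2 code uses:

        use strict;
        use warnings;

        package Foo;
        sub foo { print "in foo()\n" }

        package main;
        my %hash = (func => 'foo');

        Foo->${ \$hash{func} }();   # deref a ref to the hash value: calls Foo->foo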

  • What's behind the 'system' function in Perl?

    - by JohnJohnGa
    I thought that it would open a shell, execute the parameter (a shell command), and return the result in a scalar. But executing the 'system' function in a Perl script is faster than running the same command in a shell. Does it call the command in C? If yes, what's the difference between rmdir foo and system('rmdir foo')? Thanks,
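
    A hedged sketch of the distinction: Perl's built-in rmdir is a direct syscall wrapper, while system() may or may not spawn a shell depending on how it is called:

        use strict;
        use warnings;

        rmdir 'foo' or warn "rmdir failed: $!";   # built-in: no shell at all

        system('rmdir foo');                      # one string, no shell metachars:
                                                  # exec'd directly; with metachars
                                                  # Perl runs /bin/sh -c '...' instead

        system('rmdir', 'foo') == 0               # list form: never goes via a shell
            or warn "system failed: $?";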
