Search Results

Search found 2727 results on 110 pages for 'operator overloading'.


  • SCORM 2004 Sequencing: What Am I doing wrong?

    - by Van
    This quiz is the last SCO in a grouping of 4 SCOs. SCOs 1, 2 and 3 have to be completed before this quiz becomes available. The problem is that when 1, 2 and 3 are completed, the menu skips right over this quiz and goes to the first page in the next module. This quiz stays grayed out the entire time. I think it has to do with the precondition logic or the objectives, but I've tried everything I can think of and nothing works.

    <item identifier="quiz1_100" identifierref="res-quiz1" isvisible="true">
      <title>Quiz 1</title>
      <imsss:sequencing>
        <imsss:controlMode choice="true" choiceExit="false" flow="true" forwardOnly="false" useCurrentAttemptObjectiveInfo="false" useCurrentAttemptProgressInfo="false" />
        <imsss:sequencingRules>
          <imsss:preConditionRule>
            <imsss:ruleConditions conditionCombination="any">
              <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
            </imsss:ruleConditions>
            <imsss:ruleAction action="disabled" />
          </imsss:preConditionRule>
          <imsss:preConditionRule>
            <imsss:ruleConditions conditionCombination="any">
              <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
              <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
            </imsss:ruleConditions>
            <imsss:ruleAction action="skip" />
          </imsss:preConditionRule>
          <imsss:preConditionRule>
            <imsss:ruleConditions conditionCombination="all">
              <imsss:ruleCondition condition="completed" />
            </imsss:ruleConditions>
            <imsss:ruleAction action="skip" />
          </imsss:preConditionRule>
        </imsss:sequencingRules>
        <imsss:objectives>
          <imsss:primaryObjective objectiveID="quiz_primary" satisfiedByMeasure="true">
            <imsss:minNormalizedMeasure>0.8</imsss:minNormalizedMeasure>
            <imsss:mapInfo targetObjectiveID="quiz_complete" writeNormalizedMeasure="true" writeSatisfiedStatus="true" />
          </imsss:primaryObjective>
          <imsss:objective satisfiedByMeasure="false" objectiveID="obj_1000_VHKP_test">
            <imsss:mapInfo targetObjectiveID="gObj_1000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
          </imsss:objective>
          <imsss:objective satisfiedByMeasure="false" objectiveID="obj_2000_VHKP_test">
            <imsss:mapInfo targetObjectiveID="gObj_2000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
          </imsss:objective>
          <imsss:objective satisfiedByMeasure="false" objectiveID="obj_3000_VHKP_test">
            <imsss:mapInfo targetObjectiveID="gObj_3000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
          </imsss:objective>
          <!-- <imsss:objective satisfiedByMeasure="false" objectiveID="obj_quiz1">
            <imsss:mapInfo targetObjectiveID="quiz_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
          </imsss:objective> -->
          <imsss:objective satisfiedByMeasure="false" objectiveID="course_complete">
            <imsss:mapInfo targetObjectiveID="obj_EJBOWNADV_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
          </imsss:objective>
        </imsss:objectives>
        <imsss:deliveryControls tracked="true" completionSetByContent="true" objectiveSetByContent="false" />
      </imsss:sequencing>
    </item>
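
    One thing worth checking (a sketch of a possible cause, not a verified fix): the "disabled" and "skip" rules above use conditionCombination="any" and also test the quiz's own primary objective, quiz_primary. That objective's status is never known until the quiz has actually been attempted, so at least one condition is always true and the rules can keep the quiz permanently disabled and skipped. A reduced rule set that tests only the three prerequisite objectives would look roughly like this:

        <imsss:sequencingRules>
          <!-- Disable the quiz only while any prerequisite SCO is still unsatisfied;
               the quiz's own primary objective is deliberately not referenced here. -->
          <imsss:preConditionRule>
            <imsss:ruleConditions conditionCombination="any">
              <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="satisfied" />
              <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="satisfied" />
              <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="satisfied" />
            </imsss:ruleConditions>
            <imsss:ruleAction action="disabled" />
          </imsss:preConditionRule>
        </imsss:sequencingRules>

    Whether "satisfied" or "objectiveStatusKnown" is the right condition depends on what the first three SCOs actually write to the global objectives; the key point of the sketch is dropping quiz_primary from the precondition rules.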

    Read the article

  • a question in NS c programming

    - by bahar
    Hi, I added a new patch to my NS and I've seen these two errors. Does anyone know what I can do?

    error: specialization of 'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _AlgorithmTime]' in different namespace from definition of 'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _AlgorithmTime]'

    and the errors are from this code:

        typedef struct _AlgorithmTime {
            // Round.
            int round;
            // Phase.
            int fase;
            // Maximum phase value.
            int last_fase;
        public:
            _AlgorithmTime() { round = 0; fase = 0; last_fase = 0; }
            // Constructor.
            _AlgorithmTime(int r, int f, int l) { round = r; fase = f; last_fase = l; }
            // Constructor.
            _AlgorithmTime(const _AlgorithmTime & t) { round = t.round; fase = t.fase; last_fase = t.last_fase; }
            // Equality operator.
            bool operator== (struct _AlgorithmTime & t) { return ((t.fase == fase) && (t.round == round)); }
            // Less-than operator.
            bool operator < (struct _AlgorithmTime & t) {
                if (round < t.round) return true;
                if (round > t.round) return false;
                if (fase < t.fase) return true;
                return false;
            }
            // Greater-than operator.
            bool operator > (struct _AlgorithmTime & t) {
                if (round > t.round) return true;
                if (round < t.round) return false;
                if (fase > t.fase) return true;
                return false;
            }
            void operator++ () { if (fase == last_fase) { round++; fase = 0; return; } fase++; }
            void operator-- () { if (fase == 0) { round--; fase = last_fase; return; } fase--; }
        } AlgorithmTime;

        template<>
        bool std::less<AlgorithmTime>::operator()(const AlgorithmTime & t1, const AlgorithmTime & t2) const {
            if (t1.round < t2.round) return true;
            if (t1.round > t2.round) return false;
            if (t1.fase < t2.fase) return true;
            return false;
        }

    Thanks
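
    The usual reading of that error (a sketch, assuming the intent is "order by round, then fase"): the explicit specialization of std::less was written at global scope, but an explicit specialization has to be declared in the namespace of the primary template, i.e. inside namespace std. The simplest fix is often to skip the specialization entirely and give the struct a const operator< taking a const reference, so the default std::less<AlgorithmTime> just works:

        // Option 1: a const member operator< (inside _AlgorithmTime), so the
        // default std::less<AlgorithmTime> needs no specialization at all.
        bool operator<(const _AlgorithmTime& t) const {
            if (round != t.round) return round < t.round;
            return fase < t.fase;
        }

        // Option 2: keep a specialization, but put it inside namespace std.
        #include <functional>
        namespace std {
            template<>
            struct less<AlgorithmTime> {
                bool operator()(const AlgorithmTime& a, const AlgorithmTime& b) const {
                    if (a.round != b.round) return a.round < b.round;
                    return a.fase < b.fase;
                }
            };
        }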

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
    To create the Database Connection artifact, I must know the location of, or instance name of, the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine-tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
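
    For readers who just want to see the equivalent conversions spelled out, here is a rough T-SQL-only sketch of the three transformations described above (a two-digit year resolved to a century, a datetime that keeps its time component, and a currency string reduced to a decimal). This is purely for comparison and is not part of expressor Studio; the literals are taken from the sample file:

        SELECT  CAST('102' AS INT)                    AS EmployeeID,
                CONVERT(DATETIME, '13-JAN-93')        AS StartingDate,    -- resolves to 1993-01-13
                CONVERT(DATETIME, '24-JUL-98 17:00')  AS TerminationDate, -- keeps the time portion
                CAST(REPLACE(REPLACE('$85,000', '$', ''), ',', '') AS DECIMAL(10, 2)) AS FinalSalary;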

    Read the article

  • LEMP Stack on Ubuntu Server 13.04 not parsing PHP Switch Statement Properly

    - by schester
    On my Ubuntu 12.04 Server LTS on nginx 1.1.19, the following PHP code works properly: switch($_SESSION['user']['permissions']) { case 9: echo "Super Admin Privileges"; break; case 0: echo "Operator Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator Privileges"; } However, I have a backup server running Ubuntu Server 13.04 on nginx 1.4.1 which has the exact same copy of the script (synced), but instead of breaking on the break; command, it echoes the whole PHP script. The output on the 12.04 box is similar to this: You are logged in with Super Admin Privileges But on the 13.04 box, the output is like this: You are logged in logged in with Super Admin Privileges"; break; case 0: echo "Operator Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator Privileges"; } ?> I have also tried changing the script from a switch statement to if statements, but with the same results. Any idea what is wrong?
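
    In case it helps with diagnosis: seeing the remainder of the PHP source in the browser is the classic symptom of nginx serving the .php file as plain text instead of handing it to PHP-FPM, so comparing the fastcgi location block (and the php5-fpm setup) between the 12.04 and 13.04 servers is a cheap first check. A conventional nginx 1.4.x block looks roughly like the sketch below; the socket path is an assumption, not taken from the question. If the page uses <? rather than <?php, the short_open_tag setting of the newer PHP version is another difference worth comparing.

        # Sketch of a typical PHP-FPM handoff for nginx 1.4.x (paths assumed)
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;   # or 127.0.0.1:9000
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }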

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName)) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. 
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form: from win in xs.SnapshotWindow() from y in MyUdo(win) select y Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream: from win in xs.SnapshotWindow() select MyUdo(win) The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime: from win in xs.SnapshotWindow() select MyUdo(win) + win.Count() The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax New alternate syntax from win in xs.SnapshotWindow() select win.UdoProxy(1) from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y -or- from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOuput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method. UDSO syntax UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before After xs.Scan(new Udso()) xs.Scan(() => new Udso()) Name changes ShiftEventTime => AlterEventStartTime: The alter event lifetime overload taking a new start time value has been renamed. CountByStartTimeWindow => CountWindow

    Read the article

  • Would it be a good idea to work on letting people add arrays of numbers in javascript?

    - by OneThreeSeven
    I am a very mathematically oriented programmer, and I happen to be doing a lot of JavaScript these days. I am really disappointed in the math aspects of JavaScript: the Math object is almost a joke because it has so few methods; you can't use ^ for exponentiation; and the + operator is very limited, since you can't add arrays of numbers or do scalar multiplication on arrays. Now I have written some pretty basic extensions to the Math object and have considered writing a library of advanced math features. Amazingly, there doesn't seem to be any sort of standard library out there, even for calculus, although there is one for vectors and matrices that I was able to find. The notation for working with vectors and matrices is really bad when you can't use the + operator on arrays and you can't do scalar multiplication. For example, here is a hideous expression for subtracting two vectors, A - B: Math.vectorAddition(A,Math.scalarMultiplication(-1,B)); I have been looking for some kind of open-source project to contribute to for a while, and even though my C++ is a bit rusty I would very much like to get into the code for the V8 engine and extend the + operator to work on arrays, to get scalar multiplication to work, and possibly to get the ^ operator to work for exponentiation. These things would greatly enhance the utility of any mathematical JavaScript framework. I really don't know how to get involved in something like the V8 engine other than to download the code and start working on it. Of course I'm afraid that since V8 is Chrome-specific, without cross-browser compatibility a fundamental change of this type is likely to be rejected for V8. I was hoping someone could either tell me why this is a bad idea, or else give me some pointers about how to proceed at this point to get some kind of approval to add these features. Thanks!
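
    Just to illustrate the gap the question is pointing at: even without any V8 changes, the A - B example can be made readable with a few userland helpers (a sketch; the names are made up):

        // Element-wise helpers over plain arrays (assumes equal lengths).
        function vAdd(A, B)   { return A.map(function (a, i) { return a + B[i]; }); }
        function vScale(k, A) { return A.map(function (a)    { return k * a;    }); }
        function vSub(A, B)   { return vAdd(A, vScale(-1, B)); }

        var d = vSub([3, 4, 5], [1, 1, 1]);   // [2, 3, 4]

    The ergonomic argument in the question is that A - B should not require any of this, which is exactly what extending the + and - operators to arrays would address.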

    Read the article

  • Coldfusion Report Builder - How can you set different datasources externally between prod/staging/de

    - by Smooth Operator
    Coldfusion Report Builder is great. One small issue. We use ANT+CFANT to deploy. When we create the report, say in a datasource called MyApp_dev on a dev box. Everything works great when the report is created. We deploy the report to our staging server, which has a datasource of MyApp_Staging. That server also, may or may not, have the live app working under MyApp_Live. Ant pushes the update to Staging just great. Run the report, crashes and burns. Why? It seems the report is looking for the MyApp_Dev data_source, even though the application is using the MyApp_Staging datasource. In digging around I found a few approaches, I would like to do this one, final, ideal way from the beginning instead of having to go back to do dozens of reports differently when I have a new Aha! moment. 1) Obvious: Pass in the datasource in to the cfreport tag. Doesn't work for ColdFusion Builder Reports as of v8, or v9 as tested on Linux. 2) Most realistic option (but painful) so far: Pass in the query as an object into the ColdFusion Builder report. Let's think about this: Create the Report with the report builder to my heart's content using the RDS, etc on my local box. When I'm done, copy the query into a snippet of code, or into a database column to be dynamically be injected at runtime with correct datasource. Modify my "run report" event to find the query from the database column, insert it into another dynamic cfquery and potentially... evaluate (!?!) it? Fun side is I can set the cfquery datasource to what I would need for each environment. When I modify the report's columns in CF Report Builder, I always have to update the query in the database. Is there a snippet of code that can extract this for me? Hmm. 3) Less than ideal. Suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) to let it seem similar. Are there any eloquent ways to accomplish the above? Thanks in Advance!
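
    A rough sketch of option 2, for concreteness (untested; application.dsn and the template path are placeholders, not taken from the question): run the query in CFML against whatever datasource the current environment defines, then hand the result to the report, so the .cfr file never needs to know about MyApp_Dev versus MyApp_Staging:

        <!--- Datasource comes from per-environment application settings. --->
        <cfquery name="reportData" datasource="#application.dsn#">
            SELECT id, name, total
            FROM   myReportView
        </cfquery>

        <cfreport template="reports/myReport.cfr" query="reportData" format="PDF" />

    The maintenance cost the question calls out still applies: whenever the report's columns change in Report Builder, the CFML query has to be updated to match.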

    Read the article

  • How can I setup ANT with Subversion and ColdFusion Builder (eclipse) to check out a local build to w

    - by Smooth Operator
    I am not sure if there's an answer for this already -- couldn't find one for this (hopefully common) setup: I recently converted one of my ColdFusion projects to deploy via ANT. I have a local ant script that instructs a remote server to check out the code, and run the application's specific build file, remotely on the server. I have a few endpoints: Live - production (on the production server) Staging - on the production server, different datasource, etc. dev - on the local box. What I have run into it seems is a simple and common problem. I now need ANT to create any build, even locally. Fine, created a local endpoint and it configures for my box. Issue? How do I get it to show up as a project (automatically if possible) in Eclipse/ColdFusion builder. What I envision is instead of checking out a branch via the subversion plugin in CFBuilder/Eclipse, I now use ANT to do that for me. Since I use ColdFusion Builder (Eclipse + Adobe's plugin), I have all of eclipse's tools and plugins available to solve the problem of : how can I best call ANT from within Eclipse/ColdFusion Builder, to setup the local build as a project that I can develop and work on? I think when I check the code back in from the local box, I'd have to be sure not to check in any files with local config paths, etc. I hope this is a detailed and clear enough explanation, if not, please ask. Thanks in advance!
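
    One way to sketch the "local build" piece (the repository URL and paths are placeholders): an Ant target that shells out to the svn client and checks the branch out into a folder beside the workspace, which can then be imported into Eclipse/ColdFusion Builder via File > Import > Existing Projects into Workspace. Local-only settings live in a file that is copied in after checkout and kept out of version control, so it never gets committed back:

        <target name="checkout-dev" description="Check out the dev branch for local work">
            <exec executable="svn" failonerror="true">
                <arg value="checkout"/>
                <arg value="http://svn.example.com/myapp/branches/dev"/>
                <arg value="${basedir}/../myapp-dev"/>
            </exec>
            <!-- Overlay local-only config; this file is not under version control. -->
            <copy file="local/settings.local.cfm"
                  tofile="${basedir}/../myapp-dev/config/settings.cfm"
                  overwrite="true"/>
        </target>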

    Read the article

  • C++ non-member functions for nested template classes

    - by beldaz
    I have been writing several class templates that contain nested iterator classes, for which an equality comparison is required. As I believe is fairly typical, the comparison is performed with a non-member (and non-friend) operator== function. In doing so, my compiler (I'm using Mingw32 GCC 4.4 with flags -O3 -g -Wall) fails to find the function and I have run out of possible reasons. In the rather large block of code below there are three classes: a Base class, a Composed class that holds a Base object, and a Nested class identical to the Composed class except that it is nested within an Outer class. Non-member operator== functions are supplied for each. These classes are in templated and untemplated forms (in their own respective namespaces), with the latter equivalent to the former specialised for unsigned integers. In main, two identical objects for each class are compared. For the untemplated case there is no problem, but for the templated case the compiler fails to find operator==. What's going on? #include <iostream> namespace templated { template<typename T> class Base { T t_; public: explicit Base(const T& t) : t_(t) {} bool equal(const Base& x) const { return x.t_==t_; } }; template<typename T> bool operator==(const Base<T> &x, const Base<T> &y) { return x.equal(y); } template<typename T> class Composed { typedef Base<T> Base_; Base_ base_; public: explicit Composed(const T& t) : base_(t) {} bool equal(const Composed& x) const {return x.base_==base_;} }; template<typename T> bool operator==(const Composed<T> &x, const Composed<T> &y) { return x.equal(y); } template<typename T> class Outer { public: class Nested { typedef Base<T> Base_; Base_ base_; public: explicit Nested(const T& t) : base_(t) {} bool equal(const Nested& x) const {return x.base_==base_;} }; }; template<typename T> bool operator==(const typename Outer<T>::Nested &x, const typename Outer<T>::Nested &y) { return x.equal(y); } } // namespace templated namespace untemplated { class Base { unsigned int t_; public: explicit Base(const unsigned int& t) : t_(t) {} bool equal(const Base& x) const { return x.t_==t_; } }; bool operator==(const Base &x, const Base &y) { return x.equal(y); } class Composed { typedef Base Base_; Base_ base_; public: explicit Composed(const unsigned int& t) : base_(t) {} bool equal(const Composed& x) const {return x.base_==base_;} }; bool operator==(const Composed &x, const Composed &y) { return x.equal(y); } class Outer { public: class Nested { typedef Base Base_; Base_ base_; public: explicit Nested(const unsigned int& t) : base_(t) {} bool equal(const Nested& x) const {return x.base_==base_;} }; }; bool operator==(const Outer::Nested &x, const Outer::Nested &y) { return x.equal(y); } } // namespace untemplated int main() { using std::cout; unsigned int testVal=3; { // No templates first typedef untemplated::Base Base_t; Base_t a(testVal); Base_t b(testVal); cout << "a=b=" << testVal << "\n"; cout << "a==b ? " << (a==b ? "TRUE" : "FALSE") << "\n"; typedef untemplated::Composed Composed_t; Composed_t c(testVal); Composed_t d(testVal); cout << "c=d=" << testVal << "\n"; cout << "c==d ? " << (c==d ? "TRUE" : "FALSE") << "\n"; typedef untemplated::Outer::Nested Nested_t; Nested_t e(testVal); Nested_t f(testVal); cout << "e=f=" << testVal << "\n"; cout << "e==f ? " << (e==f ? "TRUE" : "FALSE") << "\n"; } { // Now with templates typedef templated::Base<unsigned int> Base_t; Base_t a(testVal); Base_t b(testVal); cout << "a=b=" << testVal << "\n"; cout << "a==b ? " << (a==b ? 
"TRUE" : "FALSE") << "\n"; typedef templated::Composed<unsigned int> Composed_t; Composed_t c(testVal); Composed_t d(testVal); cout << "c=d=" << testVal << "\n"; cout << "d==c ? " << (c==d ? "TRUE" : "FALSE") << "\n"; typedef templated::Outer<unsigned int>::Nested Nested_t; Nested_t e(testVal); Nested_t f(testVal); cout << "e=f=" << testVal << "\n"; cout << "e==f ? " << (e==f ? "TRUE" : "FALSE") << "\n"; // Above line causes compiler error: // error: no match for 'operator==' in 'e == f' } cout << std::endl; return 0; }

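    For what it's worth, the failing case has a well-known explanation: in the free function template taking const typename Outer<T>::Nested&, the type T appears only in a non-deduced context, so template argument deduction can never succeed and that operator== is never considered. The usual idiom is to define the operator as a friend inside the nested class, where argument-dependent lookup finds it; here is a sketch of the templated Outer with only that change:

        template<typename T>
        class Outer {
        public:
            class Nested {
                typedef Base<T> Base_;
                Base_ base_;
            public:
                explicit Nested(const T& t) : base_(t) {}
                bool equal(const Nested& x) const { return x.base_ == base_; }

                // Defined in-class as a friend: found via ADL, no deduction needed.
                friend bool operator==(const Nested& x, const Nested& y) {
                    return x.equal(y);
                }
            };
        };

    The untemplated version works because its operator== is an ordinary (non-template) function, so no deduction is involved.
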
    Read the article

  • SQL Concatenate

    - by Bunch
    Concatenating output from a SELECT statement is a pretty basic thing to do in SQL. The main ways to perform this are to use either the CONCAT() function, the || operator or the + operator. It really all depends on which version of SQL you are using. The following examples use T-SQL (MS SQL Server 2005), so they use the + operator, but other SQL dialects have similar syntax. If you wanted to join two fields together for a full name: SELECT (lname + ', ' + fname) AS Name FROM tblCustomers To add some static text to a value: SELECT (lname + ' - SS') AS Name FROM tblPlayers WHERE PlayerPosition = 6 Or to select some text and an integer together: SELECT (lname + CAST(playerNumber AS varchar)) AS Name FROM tblPlayers Technorati Tags: SQL
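
    One caveat the examples above don't show: with the default CONCAT_NULL_YIELDS_NULL setting, any NULL field turns the whole + concatenation into NULL, so it is common to guard each piece:

        SELECT (ISNULL(lname, '') + ', ' + ISNULL(fname, '')) AS Name
        FROM tblCustomers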

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

    SET NOCOUNT ON;
    SET STATISTICS XML, IO, TIME OFF;

    DECLARE @PK INTEGER, @StartTime DATETIME;
    SET @StartTime = GETUTCDATE();

    DECLARE curUpdate CURSOR
        LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
    FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;

    OPEN curUpdate;

    WHILE (1 = 1)
    BEGIN
        FETCH NEXT FROM curUpdate INTO @PK;

        IF @@FETCH_STATUS = -1 BREAK;
        IF @@FETCH_STATUS = -2 CONTINUE;

        UPDATE dbo.LOBtest
        SET some_value = some_value - 1
        WHERE CURRENT OF curUpdate;
    END;

    CLOSE curUpdate;
    DEALLOCATE curUpdate;

    SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!).  I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead.

Clustered Indexes

A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

    IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
        DROP TABLE dbo.LOBtest;
    GO
    CREATE TABLE dbo.LOBtest
    (
        pk INTEGER NOT NULL,
        some_value INTEGER NULL,
        lob_data VARCHAR(MAX) NULL,
        another_column CHAR(5) NULL,
        CONSTRAINT [PK dbo.LOBtest pk]
            PRIMARY KEY CLUSTERED (pk ASC)
    );
    GO
    DECLARE @Data VARCHAR(MAX);
    SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

    WITH Numbers (n) AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1, master.sys.columns C2
    )
    INSERT LOBtest WITHH (TABLOCKX)
        (pk, some_value, lob_data)
    SELECT TOP (1000)
        N.n, N.n, @Data
    FROM Numbers N
    WHERE N.n <= 1000;

Now here’s a query to modify the cluster keys:

    UPDATE dbo.LOBtest
    SET pk = pk + 1;

As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan.  The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

    Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
    lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

    Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
    lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

    SQL Server Execution Times:
    CPU time = 483 ms, elapsed time = 17884 ms.

Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns.  A unique non-clustered index (on a heap) works well too.  Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient.)

There are lots more fun combinations to try that I don’t have space for here.

Final Thoughts

The behaviour shown in this post is not limited to LOB data by any means.
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor’s data integration platform coming out in March of this year.  It will be version 3.6, and I look forward to using it and telling everyone about it.  Let me describe a little bit more about what will be so great in expressor 3.6:

      • Greatly enhanced user interface
      • Parallel processing
      • Bulk artifact upgrading

    The User Interface

    First let me cover the most obvious enhancements.  The expressor Studio user interface (UI) has had some significant work done.  Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive.  The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows.

      • Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function by the icon.
      • Operator shapes and highlighting depict the current function and status: Disabled, enabled, complete, incomplete, and error.  Each status displays an appropriate message in the message panel with correction suggestions.
      • Floating or docking property panels provide descriptive tool tips for each property, as well as auto-resizing when adjusting the canvas, without having to search Help or scroll around to get access to the property.
      • Progress and status indicators let you know when an operation is working.
      • A “no limit” canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow.
      • The inline tool bar offers quick access to pan, zoom, fit and overview functions.
      • Selecting multiple artifacts with a right-click context allows you to easily manage your workspace more efficiently.

    Partitioning and Parallel Processing

    Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set.  This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running.  Partitions can exist in most individual operators.  Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times.  With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure.

    Bulk Artifact Upgrading

    Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio.  In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when opened.  This was a cumbersome and repetitive process.  Now with bulk artifact upgrading, a user can easily select which artifact or group of artifacts to upgrade all at once.

    As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient.  I hope I’m not the only one who is excited about all these new upgrades, and I hope that you try expressor and share your experience with me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • constructing paramater query SQL - LIKE % in .cs and in grid view

    - by Indranil Mutsuddy
    Hello friends, I am trying to implement an admin login and operator logins (India, Australia, America).  Operator IDs start with 'IND123', 'AUS123' and 'AM123'.  When the India operator logs in, he should see only the members whose IDs match 'IND%'; the same applies for the Australian and American users.  When the admin logs in, he can see members whose IDs match 'IND%', 'AUS%' or 'AM%'.  I have a table that defines the role (admin or operator) and its prefix (IND, AUS, and so on).

    In the login page I created sessions for the role and prefix:

        PREFIX = myReader["Prefix"].ToString();
        Session["LoginRole"] = myReader["Role"].ToString();
        Session["LoginPrefix"] = String.Concat(PREFIX + "%");

    That works fine.  In the main page (after login) I have to count the number of members, so I wrote:

        string prefix = Session["LoginPrefix"].ToString();
        string role = Session["LoginRole"].ToString();
        if (role.Equals("ADMIN"))
            StrMemberId = "select count(*) from MemberList";
        else
            StrMemberId = "select count(*) from MemberList where MemberID like '" + prefix + "'";

    That works fine too.  Problem:

    1. I want to construct a parameterized query, something like:

        StrMemberId = "select count(*) from MemberList where MemberID like '@prefix+'";
        myCommd.Parameters.AddWithValue("@prefix", prefix);

       This is not working.

    2. When displaying the members in the grid view, I need to apply the same condition (something like: if the role equals "ADMIN", show all members; otherwise show only the members matching the operator's prefix), so that the list differs between operator mode and admin mode.  Where do I put the condition for the grid view, and how do I apply it?  Please suggest something.

    Regards,
    Indranil

  • Official names for pointer operators

    - by FredOverflow
    What are the official names for the operators * and & in the context of pointers?  They seem to be frequently called the dereference operator and the address-of operator respectively, but unfortunately, the section on unary operators in the standard does not name them.

    I really don't want to call & "address-of" anymore, because & returns a pointer, not an address.  (A pointer is a language mechanism, while an address is an implementation detail.  Addresses are untyped, while pointers aren't, except for void*.)  The standard is very clear about this:

        The result of the unary & operator is a pointer to its operand.

    Symmetry suggests naming & the "reference operator", which is a little unfortunate because of the collision with references in C++.  The fact that & returns a pointer suggests "pointer operator".  Are there any official sources that would confirm these (or other) names?
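    For what it's worth, here is a minimal sketch of the behaviour the naming debate is about (the variable names are illustrative only): unary & yields a typed pointer, and unary * performs indirection through that pointer.

        #include <iostream>

        int main() {
            int value = 42;

            int* ptr = &value;    // unary &: the result is a pointer to the operand, with type int*
            int copy = *ptr;      // unary *: indirection through the pointer, yielding the int it points to

            void* untyped = ptr;  // only after conversion to void* is the type information gone

            std::cout << copy << ' ' << untyped << '\n';
            return 0;
        }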

  • How to do a STAR search in LINQ

    - by aNui
    I'm not sure "star" is the clearest word for this.  I'm implementing a search method using LINQ-to-Objects in C#, and I want to support the * (star) wildcard operator the way most search apps and web searches do.  For example, if I type "p*", the results should be everything starting with "p".  It should work for a prefix star, a suffix star, or a star in the middle.  It would also be great if the search could handle a "-" (minus/NOT) operator, a "+" (plus/OR) operator, or an "AND" operator.  Thanks in advance.

  • Sorting objects in Python

    - by Curious2learn
    I want to sort objects by one of their attributes.  As of now, I am doing it in the following way:

        USpeople.sort(key=lambda person: person.utility[chosenCar], reverse=True)

    This works fine, but I have read that using operator.attrgetter() might be a faster way to achieve this sort.  First, is this correct?  Assuming that it is, how do I use operator.attrgetter() to achieve this sort?  I tried:

        keyFunc = operator.attrgetter('utility[chosenCar]')
        USpeople.sort(key=keyFunc, reverse=True)

    However, I get an error saying that there is no attribute 'utility[chosenCar]'.  The problem is that the attribute by which I want to sort is a dictionary.  For example, the utility attribute has the following form:

        utility = {chosenCar: 25000, anotherCar: 24000, yetAnotherCar: 24500}

    I want to sort by the utility of the chosenCar using operator.attrgetter().  How could I do this?  Thanks in advance.

  • Containers of reference_wrappers (comparison operators required?)

    - by kloffy
    If you use STL containers together with reference_wrappers of POD types, the following code works just fine:

        int i = 3;
        std::vector< boost::reference_wrapper<int> > is;
        is.push_back(boost::ref(i));
        std::cout << (std::find(is.begin(), is.end(), i) != is.end()) << std::endl;

    However, if you use non-POD types such as (contrived example):

        struct Integer
        {
            int value;

            bool operator==(const Integer& rhs) const { return value == rhs.value; }
            bool operator!=(const Integer& rhs) const { return !(*this == rhs); }
        };

    it doesn't suffice to declare those comparison operators; instead you have to declare:

        bool operator==(const boost::reference_wrapper<Integer>& lhs, const Integer& rhs)
        {
            return boost::unwrap_ref(lhs) == rhs;
        }

    and possibly also:

        bool operator==(const Integer& lhs, const boost::reference_wrapper<Integer>& rhs)
        {
            return lhs == boost::unwrap_ref(rhs);
        }

    in order to get the equivalent code to work:

        Integer j = { 0 };
        std::vector< boost::reference_wrapper<Integer> > js;
        js.push_back(boost::ref(j));
        std::cout << (std::find(js.begin(), js.end(), j) != js.end()) << std::endl;

    Now, I'm wondering if this is really the way it's meant to be done, since it seems impractical.  It just seems there should be a simpler solution, e.g. templates:

        template<class T>
        bool operator==(const boost::reference_wrapper<T>& lhs, const T& rhs)
        {
            return boost::unwrap_ref(lhs) == rhs;
        }

        template<class T>
        bool operator==(const T& lhs, const boost::reference_wrapper<T>& rhs)
        {
            return lhs == boost::unwrap_ref(rhs);
        }

    There's probably a good reason why reference_wrapper behaves the way it does (possibly to accommodate non-POD types without comparison operators?).  Maybe there already is an elegant solution and I just haven't found it.
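    One way to sidestep the extra operator== overloads entirely is to search with a predicate that unwraps each element explicitly.  A minimal sketch, assuming C++11 and std::reference_wrapper rather than the Boost wrapper used above:

        #include <algorithm>
        #include <functional>
        #include <iostream>
        #include <vector>

        struct Integer {
            int value;
            bool operator==(const Integer& rhs) const { return value == rhs.value; }
        };

        int main() {
            Integer j{0};
            std::vector<std::reference_wrapper<Integer>> js;
            js.push_back(std::ref(j));

            // The lambda unwraps each reference_wrapper, so no operator== taking
            // reference_wrapper arguments is needed.
            auto it = std::find_if(js.begin(), js.end(),
                                   [&j](const std::reference_wrapper<Integer>& r) {
                                       return r.get() == j;
                                   });

            std::cout << (it != js.end()) << std::endl;  // prints 1
            return 0;
        }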

  • Turn class "Interfaceable"

    - by scooterman
    Hi folks,

    On my company's system, we use a class to represent beans.  It is just a holder of information using boost::variant and some serialization/deserialization stuff.  It works well, but we have a problem: it is not behind an interface, and since we modularize through DLLs, building an interface for it is getting very complicated.  It is used in almost every part of our app, and sadly interfaces (abstract classes) in C++ have to be accessed through pointers, which makes it almost impossible to refactor the entire system.

    Our structure is:

      • dll A: interface definition through an abstract class
      • dll B: interface implementation

    Is there a painless way to achieve this (maybe using templates, I don't know), or should I forget about making this work and simply link everything with dll B?  Thanks.

    Edit: Here is my example.  This is in dll A.  BeanProtocol is a holder of N DataProtocol items, which are accessed by an index.

        class DataProtocol;

        class UTILS_EXPORT BeanProtocol
        {
        public:
            virtual DataProtocol& get(const unsigned int) const { throw std::runtime_error("Not implemented"); }
            virtual void getFields(std::list<unsigned int>&) const { throw std::runtime_error("Not implemented"); }
            virtual DataProtocol& operator[](const unsigned int) { throw std::runtime_error("Not implemented"); }
            virtual DataProtocol& operator[](const unsigned int) const { throw std::runtime_error("Not implemented"); }
            virtual void fromString(const std::string&) { throw std::runtime_error("Not implemented"); }
            virtual std::string toString() const { throw std::runtime_error("Not implemented"); }
            virtual void fromBinary(const std::string&) { throw std::runtime_error("Not implemented"); }
            virtual std::string toBinary() const { throw std::runtime_error("Not implemented"); }
            virtual BeanProtocol& operator=(const BeanProtocol&) { throw std::runtime_error("Not implemented"); }
            virtual bool operator==(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); }
            virtual bool operator!=(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); }
            virtual bool operator==(const char*) const { throw std::runtime_error("Not implemented"); }
            virtual bool hasKey(unsigned int field) const { throw std::runtime_error("Not implemented"); }
        };

    The other class (named GenericBean) implements it.  This is the only way I've found to make this work, but now I want to turn it into a true interface, remove UTILS_EXPORT (which is a __declspec macro), and finally remove the forced linkage of B with A.
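    For what it's worth, a common pattern for this situation is to keep only the abstract class in dll A and export a factory function from dll B, so clients never link against the implementation.  A rough sketch under that assumption, with hypothetical names (createBean, destroyBean) and MSVC-style exports that are not part of the code above:

        #include <string>

        // dll A (header only): the pure interface, nothing implemented or exported here.
        class BeanProtocol
        {
        public:
            virtual ~BeanProtocol() {}
            virtual std::string toString() const = 0;
            // ... remaining operations as pure virtual functions ...
        };

        // dll B: the concrete class never appears in a client-facing header.
        class GenericBean : public BeanProtocol
        {
        public:
            std::string toString() const { return data_; }
        private:
            std::string data_;
        };

        // dll B: the only exported surface is a pair of factory functions, so
        // clients depend on the interface from dll A but never link GenericBean.
        extern "C" __declspec(dllexport) BeanProtocol* createBean()
        {
            return new GenericBean();
        }

        extern "C" __declspec(dllexport) void destroyBean(BeanProtocol* bean)
        {
            delete bean;
        }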

  • Deployment of broadband network

    - by sthustfo
    Hi all,

    My query is related to broadband network deployment.  I have a DSL modem connection provided by my operator.  The DSL modem has a built-in NAT and DHCP server, so it allocates IP addresses to any client devices (laptops, PCs, mobiles) that connect to it.  However, the DSL modem also gets a public IP address X that is provisioned by the operator.  My questions are:

    1. Is this IP address X provisioned by the operator an address that is directly on the public Internet?
    2. Is it likely (in practice) that my broadband operator puts in one more NAT+DHCP server and hands out IP addresses to all the modems within its broadband network?  In that case, the IP addresses allotted to the modems would not be directly on the public Internet.

    Thanks in advance.

  • Why is this default template parameter not allowed?

    - by Matt Joiner
    I have the following class:

        template <typename Type = void>
        class AlignedMemory {
        public:
            AlignedMemory(size_t alignment, size_t size) : memptr_(0) {
                int iret(posix_memalign((void **)&memptr_, alignment, size));
                if (iret)
                    throw system_error("posix_memalign");
            }

            virtual ~AlignedMemory() { free(memptr_); }

            operator Type *() const { return memptr_; }
            Type *operator->() const { return memptr_; }
            //operator Type &() { return *memptr_; }
            //Type &operator[](size_t index) const;

        private:
            Type *memptr_;
        };

    I attempt to instantiate an automatic variable like this:

        AlignedMemory blah(512, 512);

    This gives the following error:

        src/cpfs/entry.cpp:438: error: missing template arguments before ‘buf’

    What am I doing wrong?  Is void not an allowed default parameter?
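    If the only problem is the missing angle brackets, here is a minimal sketch of the rule using a pared-down stand-in class (Holder is a hypothetical name, not the class above): a default template argument does not remove the need to write the template-id, so the empty <> is still required before C++17.

        #include <cstddef>

        template <typename Type = void>
        struct Holder {
            Holder(std::size_t alignment, std::size_t size) { (void)alignment; (void)size; }
        };

        int main() {
            Holder<>     a(512, 512);   // OK: Type defaults to void
            Holder<char> b(512, 512);   // OK: explicit element type
            // Holder    c(512, 512);   // error before C++17: "missing template arguments"
            return 0;
        }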

  • Turning temporary stringstream to c_str() in single statement

    - by AshleysBrain
    Consider the following function:

        void f(const char* str);

    Suppose I want to generate a string using stringstream and pass it to this function.  If I want to do it in one statement, I might try:

        f((std::ostringstream() << "Value: " << 5).str().c_str()); // error

    This gives an error: 'str()' is not a member of 'basic_ostream'.  OK, so operator<< is returning ostream instead of ostringstream - how about casting it back to an ostringstream?

    1) Is this cast safe?

        f(static_cast<std::ostringstream&>(std::ostringstream() << "Value: " << 5).str().c_str()); // incorrect output

    Now with this, it turns out that for the operator<<("Value: ") call, it's actually calling ostream's operator<<(void*) and printing a hex address.  This is wrong; I want the text.

    2) Why does operator<< on the temporary std::ostringstream() call the ostream operator?  Surely the temporary has a type of 'ostringstream', not 'ostream'?

    I can cast the temporary to force the correct operator call too!

        f(static_cast<std::ostringstream&>(static_cast<std::ostringstream&>(std::ostringstream()) << "Value: " << 5).str().c_str());

    This appears to work and passes "Value: 5" to f().

    3) Am I relying on undefined behavior now?  The casts look unusual.

    I'm aware the best alternative is something like this:

        std::ostringstream ss;
        ss << "Value: " << 5;
        f(ss.str().c_str());

    ...but I'm interested in the behavior of doing it in one line.  Suppose someone wanted to make a (dubious) macro:

        #define make_temporary_cstr(x) (static_cast<std::ostringstream&>(static_cast<std::ostringstream&>(std::ostringstream()) << x).str().c_str())

        // ...
        f(make_temporary_cstr("Value: " << 5));

    Would this function as expected?
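    One common way to get a cast-free one-liner is a tiny wrapper whose member operator<< returns the wrapper itself.  A minimal sketch, assuming a hypothetical helper named MakeString (not part of the question above):

        #include <iostream>
        #include <sstream>
        #include <string>

        // Small helper: a member operator<< can be called on a temporary, and it
        // returns the helper itself, so the accumulated string stays reachable.
        struct MakeString {
            std::ostringstream ss;

            template <typename T>
            MakeString& operator<<(const T& value) {
                ss << value;
                return *this;
            }

            operator std::string() const { return ss.str(); }
        };

        void f(const char* str) { std::cout << str << '\n'; }

        int main() {
            // The std::string temporary keeps the character buffer alive for the
            // duration of the call to f().
            f(std::string(MakeString() << "Value: " << 5).c_str());
            return 0;
        }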

  • heterogeneous comparisons in python3

    - by Matt Anderson
    I'm 99+% still using Python 2.x, but I'm trying to think ahead to the day when I switch.

    I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in Python 3.x – instead of some consistent (but arbitrary) result, we get a TypeError.  I see the logic in that, and even mostly think it's a good thing.  Consistency and refusing to guess is a virtue.  But what if you essentially want the Python 2.x behavior?  What's the best way to go about getting it?

    For fun (more or less) I was recently implementing a skip list, a data structure that keeps its elements sorted.  I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure.  The Python 2.x way of comparing makes this really convenient – you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't.  Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separates from your strs.

    I came up with the following:

        import operator

        def hetero_sort_key(obj):
            cls = type(obj)
            return (cls.__name__ + '_' + cls.__module__, obj)

        def make_hetero_comparitor(fn):
            def comparator(a, b):
                try:
                    return fn(a, b)
                except TypeError:
                    return fn(hetero_sort_key(a), hetero_sort_key(b))
            return comparator

        hetero_lt = make_hetero_comparitor(operator.lt)
        hetero_gt = make_hetero_comparitor(operator.gt)
        hetero_le = make_hetero_comparitor(operator.le)
        hetero_ge = make_hetero_comparitor(operator.ge)

    Is there a better way?  I suspect one could construct a corner case that this would screw up – a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted).  I don't know how likely it is that you'd run into this in practice.

  • Internal class and access to external members.

    - by Knowing me knowing you
    I always thought that an inner (nested) class has access to all data in its enclosing class, but given this code:

        template<class T>
        class Vector
        {
            template<class T>
            friend std::ostream& operator<<(std::ostream& out, const Vector<T>& obj);

        private:
            T** myData_;
            std::size_t myIndex_;
            std::size_t mySize_;

        public:
            Vector() : myData_(nullptr), myIndex_(0), mySize_(0)
            {
            }
            Vector(const Vector<T>& pattern);
            void insert(const T&);
            Vector<T> makeUnion(const Vector<T>&) const;
            Vector<T> makeIntersection(const Vector<T>&) const;

            class Iterator : public std::iterator<std::bidirectional_iterator_tag, T>
            {
            private:
                T** itData_;

            public:
                Iterator() //<<<<<<<<<<<<<------------COMMENT
                {
                    /* HERE I'M TRYING TO USE ANY MEMBER FROM Vector<T>
                       AND I'M GETTING AN ERROR SAYING:
                       ILLEGAL CALL OF NON-STATIC MEMBER FUNCTION */
                }
                Iterator(T** ty)
                {
                    itData_ = ty;
                }
                Iterator operator++() { return ++itData_; }
                T operator*() { return *itData_[0]; }
                bool operator==(const Iterator& obj) { return *itData_ == *obj.itData_; }
                bool operator!=(const Iterator& obj) { return *itData_ != *obj.itData_; }
                bool operator<(const Iterator& obj) { return *itData_ < *obj.itData_; }
            };

            typedef Iterator iterator;

            iterator begin() const
            {
                assert(mySize_ > 0);
                return myData_;
            }
            iterator end() const
            {
                return myData_ + myIndex_;
            }
        };

    See the line marked COMMENT.  So can I, or can't I, use members of the enclosing class from inside the nested class?  Don't worry about the naming – it's not really a Vector, it's a Set.  Thank you.
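    A nested class has access rights to the names of its enclosing class, but it gets no implicit pointer to any particular enclosing object, so calling a non-static member still requires an instance.  A minimal sketch of the usual fix, using simplified, hypothetical names rather than the full class above: hand the inner class a pointer to its owner.

        #include <cstddef>

        template <class T>
        class Container {
        private:
            std::size_t mySize_;

        public:
            Container() : mySize_(0) {}

            std::size_t size() const { return mySize_; }

            class Iterator {
            private:
                const Container<T>* owner_;  // explicit link back to the enclosing object

            public:
                explicit Iterator(const Container<T>* owner) : owner_(owner)
                {
                    // Allowed: access goes through a concrete instance, and private
                    // members are visible because Iterator is nested inside Container.
                    std::size_t n = owner_->mySize_;
                    (void)n;
                }
            };

            Iterator begin() const { return Iterator(this); }
        };

        int main() {
            Container<int> c;
            Container<int>::Iterator it = c.begin();
            (void)it;
            return 0;
        }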

  • Potential g++ template bug?

    - by Evan Teran
    I've encountered some code which I think should compile, but doesn't.  So I'm hoping some of the local standards experts here at SO can help :-).  I basically have some code which resembles this:

        #include <iostream>

        template <class T = int>
        class A {
        public:
            class U {
            };
        public:
            U f() const { return U(); }
        };

        // test either the work around or the code I want...
        #ifndef USE_FIX
        template <class T>
        bool operator==(const typename A<T>::U &x, int y) {
            return true;
        }
        #else
        typedef A<int> AI;
        bool operator==(const AI::U &x, int y) {
            return true;
        }
        #endif

        int main() {
            A<int> a;
            std::cout << (a.f() == 1) << std::endl;
        }

    So, to describe what is going on here.  I have a class template (A) which has an internal class (U) and at least one member function which can return an instance of that internal class (f()).  Then I am attempting to create an operator== function which compares this internal type to some other type (in this case an int, but it doesn't seem to matter).

    When USE_FIX is not defined I get the following error:

        test.cc: In function 'int main()':
        test.cc:27:25: error: no match for 'operator==' in 'a.A<T>::f [with T = int]() == 1'

    Which seems odd, because I am clearly (I think) defining a templated operator== which should cover this, in fact if I just do a little of the work for the compiler (enable USE_FIX), then I no longer get an error.  Unfortunately, the "fix" doesn't work generically, only for a specific instantiation of the template.

    Is this supposed to work as I expected?  Or is this simply not allowed?

    BTW: if it matters, I am using gcc 4.5.2.
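    For context, typename A<T>::U is a non-deduced context, so T can never be deduced for that templated operator==.  A minimal sketch of one common workaround, defining the operator as a friend inside the nested class so it is found by argument-dependent lookup (this restructures the code above; it is not presented as the original author's fix):

        #include <iostream>

        template <class T = int>
        class A {
        public:
            class U {
                // Defined inside the class template, so each A<T>::U gets its own
                // non-template operator==, found via argument-dependent lookup.
                friend bool operator==(const U & /*x*/, int /*y*/) {
                    return true;
                }
            };

            U f() const { return U(); }
        };

        int main() {
            A<int> a;
            std::cout << (a.f() == 1) << std::endl;  // prints 1
            return 0;
        }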
