Search Results

Search found 2498 results on 100 pages for 'unary operator'.

Page 35/100 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • insert ... select with divide operator in select errors?

    - by Mark
    Hi, the following query

        CREATE TABLE IF NOT EXISTS XY (
          x INT NOT NULL,
          y FLOAT NULL,
          PRIMARY KEY (x)
        )
        INSERT INTO XY (x,y) (select 1 as x, (1/7) as y);

    fails with:

        Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO XY (x,y)
        (select 1 as x ,(1/7) as y)' at line 7 (Line 1, column 1)

    Any ideas?

    Read the article

  • Is there a .NET class that represents operator types?

    - by user323774
    I would like to do the following:

        *OperatorType* o = *OperatorType*.GreaterThan;
        int i = 50;
        int increment = -1;
        int l = 0;
        for (i; i o l; i = i + increment)
        {
            //code
        }

    This concept can be kludged in JavaScript using an eval(), but the idea is to have a loop that can
    run forward or backward based on values set at run time. Is this possible? Thanks

    Read the article

  • Why does the assignment operator return a value and not a reference?

    - by Nick Lowman
    I saw the example below explained on this site and thought both answers would be 20 and not the 10
    that is returned. He wrote that both the comma and assignment operators return a value, not a
    reference, and I don't quite understand what that means. I understand it in relation to passing
    variables into functions or methods, i.e. primitive types are passed in by value and objects by
    reference, but I'm not sure how it applies in this case. I also understand about context and the
    value of 'this' (after help from Stack Overflow), but I thought in both cases I would still be
    invoking it as a method, foo.bar(), which would mean foo is the context; yet it seems both result
    in a plain function call bar(). Why is that, and what does it all mean?

        var x = 10;
        var foo = {
          x: 20,
          bar: function () { return this.x; }
        };
        (foo.bar = foo.bar)(); // returns 10
        (foo.bar, foo.bar)();  // returns 10

    Read the article

  • is the + in += on a Map a prefix operator of =?

    - by Steve
    In the book "Programming in Scala" from Martin Odersky there is a simple example in the first chapter: var capital = Map("US" -> "Washington", "France" -> "Paris") capital += ("Japan" -> "Tokyo") The second line can also be written as capital = capital + ("Japan" -> "Tokyo") I am curious about the += notation. In the class Map, I didn't found a += method. I was able to the same behaviour in an own example like class Foo() { def +(value:String) = { println(value) this } } object Main { def main(args: Array[String]) = { var foo = new Foo() foo = foo + "bar" foo += "bar" } } I am questioning myself, why the += notation is possible. It doesn't work if the method in the class Foo is called test for example. This lead me to the prefix notation. Is the + a prefix notation for the assignment sign (=)? Can somebody explain this behaviour?

    Read the article

  • How to skip an empty LIKE operator in a multiple LIKE query?

    - by alex
    I notice my query doesn't behave correctly if one of the LIKE variables is empty:

        SELECT name
        FROM employee
        WHERE name LIKE '%a%' AND color LIKE '%A%'
          AND city LIKE '%b%' AND country LIKE '%B%'
          AND sport LIKE '%c%' AND hobby LIKE '%C%'

    When a and A are not empty it works, but when a, A and c are not empty the c part does not seem to
    be executed. How can I fix this?

    Read the article

  • C# performance of static string[] contains() (slooooow) vs. == operator

    - by Andrew White
    Hiya, just a quick query: I had a piece of code which compared a string against a long list of
    values, e.g.

        if (str == "string1" || str == "string2" || str == "string3" || str == "string4")
            DoSomething();

    In the interest of code clarity and maintainability I changed it to

        public static string[] strValues = { "String1", "String2", "String3", "String4" };
        ...
        if (strValues.Contains(str))
            DoSomething();

    only to find the code execution time went from 2.5 secs to 6.8 secs (executed ca. 200,000 times).
    I certainly understand a slight performance trade-off, but 300%? Is there any way I could define
    the static strings differently to enhance performance? Cheers.

    Read the article

  • Javascript === vs == : Does it matter which "equal" operator I use?

    - by bcasp
    I'm using JSLint to go through some horrific JavaScript at work and it's returning a huge number of suggestions to replace == with === when doing things like comparing 'idSele_UNVEHtype.value.length == 0' inside of an if statement. I'm basically wondering if there is a performance benefit to replacing == with ===. Any performance improvement would probably be welcomed as there are hundreds (if not thousands) of these comparison operators being used throughout the file. I tried searching for relevant information to this question, but trying to search for something like '=== vs ==' doesn't seem to work so well with search engines...

    Read the article

  • Why would you use the ternary operator without assigning a value for the "true" condition?

    - by RickNotFred
    In the Android open-source qemu code I ran across this line of code:

        machine->max_cpus = machine->max_cpus ?: 1; /* Default to UP */

    Is this just a confusing way of saying:

        if (machine->max_cpus) {
            ; // do nothing
        } else {
            machine->max_cpus = 1;
        }

    If so, wouldn't it be clearer as:

        if (machine->max_cpus == 0)
            machine->max_cpus = 1;

    Interestingly, this compiles and works fine with gcc, but doesn't compile on
    http://www.comeaucomputing.com/tryitout/ .
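    For context, the two-operand "?:" form is a GCC extension (sometimes called the Elvis operator):
    "x ?: y" yields x when x is non-zero and y otherwise, and it evaluates x only once, which matters
    when x has side effects. It is not standard C or C++, which is why the Comeau compiler rejects it.
    A minimal sketch contrasting the extension with the portable spelling (the extension line is left
    commented out so the file builds with any conforming compiler):

        #include <iostream>

        int main() {
            int max_cpus = 0;

            // GNU extension (gcc/clang only): keeps max_cpus if non-zero, otherwise uses 1.
            // max_cpus = max_cpus ?: 1;

            // Portable standard C++ equivalent (evaluates max_cpus twice):
            max_cpus = max_cpus ? max_cpus : 1;

            std::cout << max_cpus << '\n';  // prints 1
        }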

    Read the article

  • Is it bad practice to make an iterator that is aware of its own end?

    - by aaronman
    For some background on why I am asking this question, here is an example. In Python, chain chains
    an arbitrary number of ranges together and makes them into one without making copies (here is a
    link in case you don't understand it). I decided I would implement chain in C++ using variadic
    templates. As far as I can tell, the only way to make an iterator for chain that will successfully
    go to the next container is for each iterator to know about the end of its container (I thought of
    a sort of hack where, when != is called against the end, it knows to go to the next container, but
    the first way seemed easier, safer and more versatile). My question is whether there is anything
    inherently wrong with an iterator knowing about its own end. My code is in C++, but this can be
    language agnostic since many languages have iterators.

        #ifndef CHAIN_HPP
        #define CHAIN_HPP
        #include "iterator_range.hpp"

        namespace iter {
            template <typename ... Containers> struct chain_iter;

            template <typename Container>
            struct chain_iter<Container> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end; // never really used but kept it for consistency

            public:
                chain_iter(Container & container, bool is_end=false) :
                    begin(container.begin()), end(container.end()) {
                    if (is_end) begin = container.end();
                }

                chain_iter & operator++() {
                    ++begin;
                    return *this;
                }

                auto operator*() -> decltype(*begin) {
                    return *begin;
                }

                bool operator!=(const chain_iter & rhs) const {
                    return this->begin != rhs.begin;
                }
            };

            template <typename Container, typename ... Containers>
            struct chain_iter<Container, Containers...> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end;
                bool end_reached = false;
                chain_iter<Containers...> next_iter;

            public:
                chain_iter(Container & container, Containers& ... rest, bool is_end=false) :
                    begin(container.begin()), end(container.end()), next_iter(rest..., is_end) {
                    if (is_end) begin = container.end();
                }

                chain_iter & operator++() {
                    if (begin == end) {
                        ++next_iter;
                    } else {
                        ++begin;
                    }
                    return *this;
                }

                auto operator*() -> decltype(*begin) {
                    if (begin == end) {
                        return *next_iter;
                    } else {
                        return *begin;
                    }
                }

                bool operator!=(const chain_iter & rhs) const {
                    if (begin == end) {
                        return this->next_iter != rhs.next_iter;
                    } else {
                        return this->begin != rhs.begin;
                    }
                }
            };

            template <typename ... Containers>
            iterator_range<chain_iter<Containers...>> chain(Containers& ... containers) {
                auto begin = chain_iter<Containers...>(containers...);
                auto end = chain_iter<Containers...>(containers..., true);
                return iterator_range<chain_iter<Containers...>>(begin, end);
            }
        }
        #endif // CHAIN_HPP
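    For comparison, here is a deliberately tiny, self-contained miniature of the same idea: an
    iterator that carries the end of its first range so it can hop to a second range when the first is
    exhausted. The names are illustrative and not taken from the question; it is a sketch of the
    pattern, not a drop-in replacement for the chain header above:

        #include <iostream>
        #include <vector>

        // A minimal "chain" over exactly two int ranges. The iterator stores the end of
        // the first range so operator++ knows when to hop to the second one.
        struct TwoRangeIter {
            const int* cur;
            const int* first_end;
            const int* second_begin;

            int operator*() const { return *cur; }

            TwoRangeIter& operator++() {
                ++cur;
                if (cur == first_end) cur = second_begin;  // hop to the next range
                return *this;
            }

            bool operator!=(const TwoRangeIter& rhs) const { return cur != rhs.cur; }
        };

        int main() {
            std::vector<int> a = {1, 2, 3};   // for brevity, assumed non-empty
            std::vector<int> b = {4, 5};

            TwoRangeIter it  {a.data(), a.data() + a.size(), b.data()};
            TwoRangeIter last{b.data() + b.size(), a.data() + a.size(), b.data()};

            for (; it != last; ++it)
                std::cout << *it << ' ';      // prints: 1 2 3 4 5
        }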

    Read the article

  • Objective-C scanf spaces issue

    - by Rob
    I am learning Objective-C and for the life of me can't figure out why this is happening. When the
    input is read with

        scanf("%c %lf", &operator, &number);

    it messes with this code:

        doQuit = 0;
        [deskCalc setAccumulator: 0];
        while (doQuit == 0) {
            NSLog(@"Please input an operation and then a number:");
            scanf("%c %lf", &operator, &number);
            switch (operator) {
                case '+':
                    [deskCalc add: number];
                    NSLog (@"%lf", [deskCalc accumulator]);
                    break;
                case '-':
                    [deskCalc subtract: number];
                    NSLog (@"%lf", [deskCalc accumulator]);
                    break;
                case '*':
                case 'x':
                    [deskCalc multiply: number];
                    NSLog (@"%lf", [deskCalc accumulator]);
                    break;
                case '/':
                    if (number == 0)
                        NSLog(@"You can't divide by zero.");
                    else
                        [deskCalc divide: number];
                    NSLog (@"%lf", [deskCalc accumulator]);
                    break;
                case 'S':
                    [deskCalc setAccumulator: number];
                    NSLog (@"%lf", [deskCalc accumulator]);
                    break;
                case 'E':
                    doQuit = 1;
                    break;
                default:
                    NSLog(@"You did not enter a valid operator.");
                    break;
            }
        }

    When the user inputs, for example, "E 10" it will exit the loop but it will also print "You did not
    enter a valid operator." When I change the code to

        scanf(" %c %lf", &operator, &number);

    it all of a sudden doesn't print this last line. What is it about the space before %c that fixes
    this?
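    This behaviour comes from the C standard library rather than Objective-C: %lf stops before the
    trailing newline, a bare %c then consumes that leftover newline on the next pass, and a leading
    space in the format string tells scanf to skip pending whitespace first. A minimal sketch using
    plain C I/O, so it behaves the same from C, C++ or Objective-C (the variable names are
    illustrative):

        #include <cstdio>

        int main() {
            char op;
            double number;

            // With "%c %lf", the newline left behind by the previous %lf would be read as
            // the operator on the next iteration, triggering the "invalid operator" branch.
            // The leading space in " %c" makes scanf discard any pending whitespace
            // (including that newline) before reading the character.
            while (std::scanf(" %c %lf", &op, &number) == 2) {
                std::printf("operator=%c number=%f\n", op, number);
                if (op == 'E') break;
            }
            return 0;
        }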

    Read the article

  • Why can't we overload "=" using a friend function?

    - by ashish-sangwan
    Why is it not allowed to overload "=" using a friend function? I have written a small program, but
    it gives an error.

        class comp {
            int real;
            int imaginary;
        public:
            comp() { real = 0; imaginary = 0; }
            void show() { cout << "Real=" << real << " Imaginary=" << imaginary << endl; }
            void set(int i, int j) { real = i; imaginary = j; }
            friend comp operator=(comp &op1, const comp &op2);
        };

        comp operator=(comp &op1, const comp &op2)
        {
            op1.imaginary = op2.imaginary;
            op1.real = op2.real;
            return op1;
        }

        int main()
        {
            comp a, b;
            a.set(10, 20);
            b = a;
            b.show();
            return 0;
        }

    The compilation gives the following errors:

        [root@dogmatix stackoverflow]# g++ prog4.cpp
        prog4.cpp:11: error: 'comp operator=(comp&, const comp&)' must be a nonstatic member function
        prog4.cpp:14: error: 'comp operator=(comp&, const comp&)' must be a nonstatic member function
        prog4.cpp: In function 'int main()':
        prog4.cpp:25: error: ambiguous overload for 'operator=' in 'b = a'
        prog4.cpp:4: note: candidates are: comp& comp::operator=(const comp&)
        prog4.cpp:14: note:                comp operator=(comp&, const comp&)
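    The standard requires operator= (like operator(), operator[] and operator->) to be a non-static
    member function; a free or friend version is simply ill-formed, which is exactly what the first two
    errors say. A sketch of the member form, assuming the same comp class as in the question:

        #include <iostream>

        class comp {
            int real;
            int imaginary;
        public:
            comp() : real(0), imaginary(0) {}
            void show() const {
                std::cout << "Real=" << real << " Imaginary=" << imaginary << '\n';
            }
            void set(int i, int j) { real = i; imaginary = j; }

            // Assignment must be a member; returning comp& allows chaining (a = b = c).
            comp& operator=(const comp& rhs) {
                real = rhs.real;
                imaginary = rhs.imaginary;
                return *this;
            }
        };

        int main() {
            comp a, b;
            a.set(10, 20);
            b = a;        // calls the member operator=
            b.show();     // Real=10 Imaginary=20
            return 0;
        }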

    Read the article

  • C++ overloading comparative operators for a MyString class

    - by Taylor Gang
        bool operator == (const MyString& left, const MyString& right) {
            if (left.value == right.value)
                return true;
            else
                return false;
        }

        bool operator != (const MyString& left, const MyString& right) {
            if (left == right)
                return false;
            else
                return true;
        }

        bool operator < (const MyString& left, const MyString& right) {
            if (strcmp(left.value, right.value) == -1)
                return true;
            else
                return false;
        }

        bool operator > (const MyString& left, const MyString& right) {
            if (strcmp(left.value, right.value) == 1)
                return true;
            else
                return false;
        }

        bool operator <= (const MyString& left, const MyString& right) {
            if (strcmp(left.value, right.value) == -1 || strcmp(left.value, right.value) == 0)
                return true;
            else
                return false;
        }

        bool operator >= (const MyString& left, const MyString& right) {
            if (strcmp(left.value, right.value) == 1 || strcmp(left.value, right.value) == 0)
                return true;
            else
                return false;
        }

    These are the comparison operators I implemented for my MyString class. They fail the test program
    that my professor gave me, and I could use some direction. Thanks in advance for any and all help.
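    For reference, strcmp is only specified to return a value less than, equal to, or greater than
    zero, not exactly -1 or 1, so comparisons against those literals are not portable; the operator==
    above also compares the value pointers rather than the characters they point to. A sketch of the
    sign-based form, using a minimal stand-in for the poster's class (the real MyString is assumed to
    expose a C string member called value, as in the code above):

        #include <cstring>
        #include <iostream>

        // Minimal stand-in, just enough to show the comparisons.
        struct MyString {
            const char* value;
        };

        // Compare contents, not pointers.
        bool operator==(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) == 0;
        }
        bool operator!=(const MyString& left, const MyString& right) {
            return !(left == right);
        }
        // strcmp's result is meaningful only by sign, so test against zero.
        bool operator<(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) < 0;
        }
        bool operator>(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) > 0;
        }
        bool operator<=(const MyString& left, const MyString& right) {
            return !(left > right);
        }
        bool operator>=(const MyString& left, const MyString& right) {
            return !(left < right);
        }

        int main() {
            MyString a{"apple"}, b{"banana"};
            std::cout << std::boolalpha << (a < b) << ' ' << (a == b) << '\n';  // true false
        }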

    Read the article

  • C++ unrestricted union workaround

    - by Chris
        #include <stdio.h>

        struct B {
            int x, y;
        };

        struct A : public B {
            // This whines about "copy assignment operator not allowed in union"
            //A& operator =(const A& a) { printf("A=A should do the exact same thing as A=B\n"); }
            A& operator =(const B& b) { printf("A = B\n"); }
        };

        union U {
            A a;
            B b;
        };

        int main(int argc, const char* argv[])
        {
            U u1, u2;
            u1.a = u2.b;     // You can do this and it calls the operator =
            u1.a = (B)u2.a;  // This works too
            u1.a = u2.a;     // This calls the default assignment operator >:@
        }

    Is there any workaround to be able to do that last line u1.a = u2.a with the exact same syntax, but
    have it call the operator = (don't care if it's =(B&) or =(A&)) instead of just copying data? Or
    are unrestricted unions (not supported even in Visual Studio 2010) the only option?
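    For what it's worth, C++11 added "unrestricted unions": a union member may have non-trivial special
    member functions, at the cost of the union's own copy/assignment being implicitly deleted and the
    programmer managing which member is active. Below is a minimal sketch of that mechanism in general;
    it uses std::string rather than the poster's A/B classes and is not a drop-in answer for Visual
    Studio 2010, which predates the feature:

        #include <iostream>
        #include <new>
        #include <string>

        // std::string has non-trivial special members, so this union is valid only in C++11
        // and later, and we must construct/destroy the active member ourselves.
        union Value {
            int number;
            std::string text;

            Value() : number(0) {}   // start with the trivial member active
            ~Value() {}              // whoever knows the active member destroys it
        };

        int main() {
            Value v;
            v.number = 42;
            std::cout << v.number << '\n';

            new (&v.text) std::string("hello");  // switch the active member with placement new
            std::cout << v.text << '\n';
            v.text.~basic_string();              // destroy it before the union goes away
        }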

    Read the article

  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing
    dimensions and cube loading? Changes in 11gR2 now make dimension and cube loading much more
    flexible. This lets you keep the benefits of OWB's surrogate key handling and slowly changing
    dimension references when loading the fact table while also using degenerate dimensions (see Ralph
    Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load
    slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with
    dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the
    benefit of the cube loading and incorporate the degenerate dimension loading.

    What you need to do is create a dimension in OWB that is purely used for ETL metadata; the
    dimension itself is never deployed (its table is, but it holds no data). It has no surrogate keys
    and a single level with a business attribute (the degenerate dimension data) and a dummy attribute,
    say description, just to pass the OWB validation. When this degenerate dimension is added into a
    cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for
    the foreign key generated to the degenerate dimension table. The degenerate dimension reference
    will then be in the cube operator and used when matching.

    Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute, as it
    is not needed. Define a level name for the dimension member (any name). After the wizard has
    completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is
    only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the
    implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here
    will do a left outer join of the source data with the empty degenerate dimension table).

    Now, go ahead and build your cube: use the regular TIMES dimension, for example, and your
    degenerate dimension DD_ORDERNUMBER, and you can add in SCD dimensions etc. Configure the fact
    table created and set Deployable to false, so the foreign key does not get generated. You can now
    use the cube in a mapping and load data into the fact table via the cube operator; this will look
    after surrogate key lookups and slowly changing dimension references. If you generate the SQL you
    will see the ON clause for matching includes the columns representing the degenerate dimension
    columns.

    Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a
    whole lot simpler using OWB 11gR2. I'm sure there are other use cases where this mix of dimensions
    with surrogate and regular identifiers is useful; fact tables partitioned by date columns are
    another classic example that this will greatly help and make the cube operator much more useful.
    Good to hear any comments.

    Read the article

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch "Property Sets" (which are simple structs containing POD members). I'd like to modify these property sets (eg: add a new member) at run time so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal/no changes. For example, a property set could look like this: struct PropSetA { bool activeFlag; int processingCount; /* snip few other such fields*/ }; But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like: class PropSet propSetA; propSetA("activeFlag",true); //overloading the function call operator propSetA("processingCount",0); And the code dependent on the property sets (possibly in some other library) will use the data like so: bool actvFlag = propSet["activeFlag"]; if(actvFlag == true) { //Do Stuff } The current implementation behind all of this is as follows: class PropValue { public: // Variant like class for holding multiple data-types // overloaded Conversion operator. Eg: operator bool() { return (baseType == BOOLEAN) ? this->ToBoolean() : false; } // And a method to create PropValues various base datatypes static FromBool(bool baseValue); }; class PropSet { public: // overloaded[] operator for adding properties void operator()(std::string propName, bool propVal) { propMap.insert(std::make_pair(propName, PropVal::FromBool(propVal))); } protected: // the property map std::map<std::string, PropValue> propMap; }; This problem at hand is similar to this question on SO and the current approach (described above) is based on this answer. But as noted over at SO this is more of a hack than a proper solution. The fundamental issues that I have with this approach are as follows: Extending this for supporting new types will require significant code change. At the bare minimum overloaded operators need to be extended to support the new type. Supporting complex properties (eg: struct containing struct) is tricky. Supporting a reference mechanism (needed for an optimization of not duplicating identical property sets) is tricky. This also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: Modified problem statement for clarity and added some more code from current implementation.

    Read the article

  • C++11 Tidbits: Decltype (Part 2, trailing return type)

    - by Paolo Carlini
    Following on from the last tidbit showing how the decltype operator essentially queries the type of
    an expression, the second part of this overview discusses how decltype can be syntactically
    combined with auto (itself the subject of the March 2010 tidbit). This combination can be used to
    specify trailing return types, also known informally as "late specified return types". Leaving
    aside the technical jargon, a simple example from section 8.3.5 of the C++11 standard usefully
    introduces this month's topic. Let's consider a template function like:

        template <class T, class U>
        ??? foo(T t, U u) { return t + u; }

    The question is: what should replace the question marks? The problem is that we are dealing with a
    template, thus we don't know at the outset the types of T and U. Even if they were restricted to be
    arithmetic builtin types, non-trivial rules in C++ relate the type of the sum to the types of T and
    U. In the past - in the GNU C++ runtime library too - programmers used to address these situations
    by way of rather ugly tricks involving __typeof__ which now, with decltype, could be rewritten as:

        template <class T, class U>
        decltype((*(T*)0) + (*(U*)0)) foo(T t, U u) { return t + u; }

    Of course the latter is guaranteed to work only for builtin arithmetic types, e.g., '0' must make
    sense. In short: it's a hack. On the other hand, in C++11 you can use auto:

        template <class T, class U>
        auto foo(T t, U u) -> decltype(t + u) { return t + u; }

    This is much better. It's generic and a construct fully supported by the language. Finally, let's
    see a real-life example directly taken from the C++11 runtime library as implemented in GCC:

        template<typename _IteratorL, typename _IteratorR>
        inline auto
        operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y)
        -> decltype(__y.base() - __x.base())
        { return __y.base() - __x.base(); }

    By now it should appear completely straightforward. The availability of trailing return types in
    C++11 allowed fixing a real bug in the C++98 implementation of this operator (and many similar
    ones). In GCC, C++98 mode, this operator is:

        template<typename _IteratorL, typename _IteratorR>
        inline typename reverse_iterator<_IteratorL>::difference_type
        operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y)
        { return __y.base() - __x.base(); }

    This was guaranteed to work well with heterogeneous reverse_iterator types only if difference_type
    was the same for both types.
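    A small self-contained version of the trailing-return-type example from the post, runnable with any
    C++11 compiler (in C++14 and later the -> decltype(...) part can often be dropped in favour of
    plain return type deduction):

        #include <iostream>

        // The return type depends on both template arguments, so it is written after the
        // parameter list, where t and u are in scope.
        template <class T, class U>
        auto foo(T t, U u) -> decltype(t + u) {
            return t + u;
        }

        int main() {
            auto a = foo(3, 4.5);   // int + double      -> double
            auto b = foo(2u, 7L);   // unsigned + long   -> promoted integer type
            std::cout << a << ' ' << b << '\n';   // prints: 7.5 9
        }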

    Read the article

  • extern(al) problem

    - by Knowing me knowing you
    Why can't I compile this code?

        // main
        #include "stdafx.h"
        #include "X.h"
        #include "Y.h"
        //#include "def.h"

        extern X operator*(X, Y); // HERE ARE DECLARED EXTERNAL *(X,Y) AND f(X)
        extern int f(X);

        /* GLOBALS */
        X x = 1;
        Y y = x;
        int i = 2;

        int _tmain(int argc, _TCHAR* argv[])
        {
            i + 10;
            y + 10;
            y + 10 * y;
            //x + (y + i);
            x * x + i;
            f(7);
            //f(y);
            //y + y;
            //106 + y;
            return 0;
        }

        // X
        struct X {
            int i;
            X(int value) : i(value) { }
            X operator+(int value) {
                return X(i + value);
            }
            operator int() { return i; }
        };

        // Y
        struct Y {
            int i;
            Y(X x) : i(x.i) { }
            Y operator+(X x) {
                return Y(i + x.i);
            }
        };

        // def.h
        int f(X x);
        X operator*(X x, Y y);

        // def.cpp
        #include "stdafx.h"
        #include "def.h"
        #include "X.h"
        #include "Y.h"

        int f(X x) {
            return x;
        }

        X operator*(X x, Y y) {
            return x * y;
        }

    I'm getting the error messages:

        Error 2 error LNK2019: unresolved external symbol "int __cdecl f(struct X)"
        Error 3 error LNK2019: unresolved external symbol "struct X __cdecl operator*(struct X,struct Y)"

    Another interesting thing is that if I place the implementations in the def.h file, it compiles
    without errors. But then what about def.cpp? Why am I not getting an error message that the
    function f(X) is already defined? Shouldn't the ODR apply here? A second concern is that if in
    def.cpp I change the return type of f from int to double, IntelliSense underlines this as an error
    but the program still compiles. Why?
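    For reference, LNK2019 means the compiler accepted a declaration but the linker never received a
    matching definition, so the usual suspects are a definition that was never compiled into the
    project or one whose signature does not match the declaration. A minimal single-file sketch of the
    usual declaration/definition split (illustrative only, not a diagnosis of the project above; the
    file-name comments show where each piece would normally live):

        // def.h would hold only declarations:
        struct X {
            int i;
            X(int value) : i(value) {}
            operator int() const { return i; }
        };
        int f(X x);                       // declaration: "f exists somewhere"

        // def.cpp would hold the definition; it must be compiled and linked into the
        // program, otherwise the call below produces exactly an LNK2019 / undefined reference:
        int f(X x) { return x; }

        // main.cpp:
        #include <iostream>
        int main() {
            std::cout << f(X(7)) << '\n'; // prints 7 once the definition is linked in
            return 0;
        }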

    Read the article

  • Undefined Behavior and Sequence Points Reloaded

    - by Nawaz
    Consider this topic a sequel to the following topic: Previous Installment: Undefined Behavior and
    Sequence Points. Let's revisit this funny and convoluted expression (the italicized phrases are
    taken from the above topic *smile*):

        i += ++i;

    We say this invokes undefined behavior. I presume that when we say this, we implicitly assume that
    the type of i is one of the built-in types. So my question is: what if the type of i is a
    user-defined type? Say its type is Index, which is defined later in this post (see below). Would it
    still invoke undefined behavior? If yes, why? Is it not equivalent to writing
    i.operator+=(i.operator++()); or even, syntactically simpler, i.add(i.inc());? Or do they too
    invoke undefined behavior? If no, why not? After all, the object i gets modified twice between
    consecutive sequence points. Please recall the rule of thumb: an expression can modify an object's
    value only once between consecutive sequence points. And if i += ++i is an expression, then it must
    invoke undefined behavior. If so, then its equivalents i.operator+=(i.operator++()); and
    i.add(i.inc()); must also invoke undefined behavior, which seems to be untrue (as far as I
    understand)! Or is i += ++i not an expression to begin with? If so, then what is it, and what is
    the definition of an expression? If it is an expression, and at the same time its behavior is also
    well-defined, then it implies that the number of sequence points associated with an expression
    somehow depends on the types of the operands involved in the expression. Am I correct (even
    partly)? By the way, how about this expression?

        a[++i] = i; // taken from the previous topic, but here the type of i is Index

        class Index {
            int state;
        public:
            Index(int s) : state(s) {}

            Index& operator++() {
                state++;
                return *this;
            }

            Index& operator+=(const Index& index) {
                state += index.state;
                return *this;
            }

            operator int() { return state; }

            Index& add(const Index& index) {
                state += index.state;
                return *this;
            }

            Index& inc() {
                state++;
                return *this;
            }
        };

    Read the article

  • Deployment of broadband network

    - by sthustfo
    Hi all, my query is related to broadband network deployment. I have a DSL modem connection provided
    by my operator. The DSL modem has a built-in NAT and DHCP server, so it allocates IP addresses to
    any client devices (laptops, PCs, mobiles) that connect to it. However, the DSL modem also gets a
    public IP address X that is provisioned by the operator. My questions are: Is this IP address X
    provisioned by the operator an address that sits directly on the public Internet? And is it likely
    (as a practical scenario) that my broadband operator will put in one more NAT+DHCP server and
    provide IP addresses to all the modems within its broadband network? In that case, the IP addresses
    allotted to the modem devices will not be directly on the public Internet. Thanks in advance.

    Read the article

  • SCORM 2004 Sequencing: What Am I doing wrong?

    - by Van
    This quiz is the last SCO in a grouping of 4 SCOs. SCOs 1, 2 and 3 have to be completed before this
    quiz becomes available. The problem is that when 1, 2 and 3 are completed, the menu skips right
    over this quiz and goes to the first page in the next module. The quiz stays grayed out the entire
    time. I think it has to do with the precondition logic or the objectives, but I've tried everything
    I can think of and nothing works.

        <item identifier="quiz1_100" identifierref="res-quiz1" isvisible="true">
          <title>Quiz 1</title>
          <imsss:sequencing>
            <imsss:controlMode choice="true" choiceExit="false" flow="true" forwardOnly="false"
                useCurrentAttemptObjectiveInfo="false" useCurrentAttemptProgressInfo="false" />
            <imsss:sequencingRules>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="any">
                  <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="disabled" />
              </imsss:preConditionRule>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="any">
                  <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="skip" />
              </imsss:preConditionRule>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="all">
                  <imsss:ruleCondition condition="completed" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="skip" />
              </imsss:preConditionRule>
            </imsss:sequencingRules>
            <imsss:objectives>
              <imsss:primaryObjective objectiveID="quiz_primary" satisfiedByMeasure="true">
                <imsss:minNormalizedMeasure>0.8</imsss:minNormalizedMeasure>
                <imsss:mapInfo targetObjectiveID="quiz_complete" writeNormalizedMeasure="true" writeSatisfiedStatus="true" />
              </imsss:primaryObjective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_1000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_1000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_2000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_2000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_3000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_3000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <!--
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_quiz1">
                <imsss:mapInfo targetObjectiveID="quiz_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              -->
              <imsss:objective satisfiedByMeasure="false" objectiveID="course_complete">
                <imsss:mapInfo targetObjectiveID="obj_EJBOWNADV_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
            </imsss:objectives>
            <imsss:deliveryControls tracked="true" completionSetByContent="true" objectiveSetByContent="false" />
          </imsss:sequencing>
        </item>

    Read the article

  • a question in NS c programming

    - by bahar
    Hi, I added a new patch to my NS and I've seen these two errors. Does anyone know what I can do?

        error: specialization of 'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const
        [with _Tp = _AlgorithmTime]' in different namespace from definition of
        'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _AlgorithmTime]'

    The errors are from this code:

        typedef struct _AlgorithmTime {
            // Round.
            int round;
            // Phase.
            int fase;
            // Maximum phase value.
            int last_fase;

        public:
            _AlgorithmTime() {
                round = 0;
                fase = 0;
                last_fase = 0;
            }

            // Constructor.
            _AlgorithmTime(int r, int f, int l) {
                round = r;
                fase = f;
                last_fase = l;
            }

            // Constructor.
            _AlgorithmTime(const _AlgorithmTime & t) {
                round = t.round;
                fase = t.fase;
                last_fase = t.last_fase;
            }

            // Equality operator.
            bool operator== (struct _AlgorithmTime & t) {
                return ((t.fase == fase) && (t.round == round));
            }

            // Less-than operator.
            bool operator< (struct _AlgorithmTime & t) {
                if (round < t.round) return true;
                if (round > t.round) return false;
                if (fase < t.fase) return true;
                return false;
            }

            // Greater-than operator.
            bool operator> (struct _AlgorithmTime & t) {
                if (round > t.round) return true;
                if (round < t.round) return false;
                if (fase > t.fase) return true;
                return false;
            }

            void operator++ () {
                if (fase == last_fase) {
                    round++;
                    fase = 0;
                    return;
                }
                fase++;
            }

            void operator-- () {
                if (fase == 0) {
                    round--;
                    fase = last_fase;
                    return;
                }
                fase--;
            }
        } AlgorithmTime;

        template<>
        bool std::less<AlgorithmTime>::operator()(const AlgorithmTime & t1,
                                                  const AlgorithmTime & t2) const
        {
            if (t1.round < t2.round) return true;
            if (t1.round > t2.round) return false;
            if (t1.fase < t2.fase) return true;
            return false;
        }

    Thanks
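    The error itself is about where the specialization lives: an explicit specialization of
    std::less<T>::operator() (or of std::less<T> as a whole) has to be declared in the same namespace
    as the template, namespace std. Two usual shapes are sketched below with a small stand-in struct
    rather than the patch's actual type; passing your own comparator is often the cleaner route:

        #include <functional>
        #include <set>

        struct AlgTime {
            int round;
            int fase;
        };

        // Option 1: specialize std::less for the program-defined type, inside namespace std.
        namespace std {
            template <>
            struct less<AlgTime> {
                bool operator()(const AlgTime& a, const AlgTime& b) const {
                    if (a.round != b.round) return a.round < b.round;
                    return a.fase < b.fase;
                }
            };
        }

        // Option 2: a named comparator passed explicitly wherever an ordering is needed.
        struct AlgTimeLess {
            bool operator()(const AlgTime& a, const AlgTime& b) const {
                if (a.round != b.round) return a.round < b.round;
                return a.fase < b.fase;
            }
        };

        int main() {
            std::set<AlgTime> s1;               // picks up the std::less specialization
            std::set<AlgTime, AlgTimeLess> s2;  // uses the explicit comparator
            s1.insert(AlgTime{1, 2});
            s2.insert(AlgTime{1, 2});
            return 0;
        }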

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into
    a database table. Generally the user is familiar with his data, the structure of the file, and the
    database table, but is unfamiliar with data integration tools and therefore views this task as
    something that is difficult. What these users really need is a point and click approach that
    minimizes the learning curve for the data integration tool. This is what CSVexpress
    (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I’ve
    been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database
    connection details, describing the structure of the incoming and outgoing data and then connecting
    two pre-programmed operators. There’s no need to learn the intricacies of the data integration tool
    or to write code. Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a
    listing of terminated employees that includes their hiring and termination date, department, job
    description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes
    both date and time information, and that the salary is clearly identified as money by the dollar
    sign and digit grouping. In moving this data to a database table I want to express the dates using
    a format that includes the century since it’s obvious that this listing could include employees who
    left the company in both the 20th and 21st centuries, and I want the salary to be stored as a
    decimal value without the currency symbol and grouping character. Most data integration tools would
    require coding within a transformation operation to effect these changes, but not expressor Studio.
    Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create
    connection artifacts, which describe to expressor where data is stored. For this example, two
    connection artifacts are required: a file connection, which encapsulates the file system location
    of my file; and a database connection, which encapsulates the database connection information. With
    expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar,
    which starts the File Connection wizard. In the first window, I enter the path to the directory
    that contains the input file. Note that the file connection artifact only specifies the file system
    location, not the name of the file. Then I click Next and enter a meaningful name for this
    connection artifact; clicking Finish closes the wizard and saves the artifact.
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • LEMP Stack on Ubuntu Server 13.04 not parsing PHP Switch Statement Properly

    - by schester
    On my Ubuntu 12.04 LTS server with nginx 1.1.19, the following PHP code works properly:

        switch($_SESSION['user']['permissions']) {
            case 9:
                echo "Super Admin Privileges";
                break;
            case 0:
                echo "Operator Privileges";
                break;
            case 1:
                echo "Line Leader Privileges";
                break;
            case 2:
                echo "Supervisor Privileges";
                break;
            case 3:
                echo "Engineer Privileges";
                break;
            case 4:
                echo "Manager Privileges";
                break;
            case 5:
                echo "Administrator Privileges";
                break;
            default:
                echo "Operator Privileges";
        }

    However, I have a backup server running Ubuntu Server 13.04 with nginx 1.4.1 which has the exact
    same copy of the script (synced), but instead of breaking on the break; command, it echoes the
    whole PHP script. The output on the 12.04 box is similar to this:

        You are logged in with Super Admin Privileges

    But on the 13.04 box, the output is like this:

        You are logged in logged in with Super Admin Privileges"; break; case 0: echo "Operator
        Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor
        Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager
        Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator
        Privileges"; } ?>

    I have also tried changing the script from a switch statement to if statements, but I get the same
    results. Any idea what is wrong?

    Read the article
