Search Results

Search found 4993 results on 200 pages for 'conversion operator'.


  • What is the point of the logical operators in C?

    - by reubensammut
    I was just wondering if there is an XOR logical operator in C (something like && for AND, but for XOR). I know I can split an XOR into ANDs, NOTs and ORs, but a simple XOR would be much better. Then it occurred to me that if I use the normal bitwise XOR operator between two conditions, it might just work. And in my tests it did. Consider: int i = 3; int j = 7; int k = 8; Just for the sake of this rather stupid example, if I need k to be either greater than i or greater than j but not both, XOR would be quite handy. if ((k > i) XOR (k > j)) printf("Valid"); else printf("Invalid"); or printf("%s",((k > i) XOR (k > j)) ? "Valid" : "Invalid"); I put in the bitwise XOR ^ and it produced "Invalid". Putting the results of the two comparisons into two integers showed that both integers contained a 1, hence the XOR produced false. I then tried it with the & and | bitwise operators and both gave the expected results. All this makes sense knowing that true conditions have a non-zero value, whilst false conditions have a value of zero. So I was wondering: is there a reason to use the logical && and || when the bitwise operators &, | and ^ work just the same? Thanks Reuben
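
    A minimal sketch of the usual workaround (not from the question itself): if each operand is first normalised to 0 or 1, the bitwise ^ behaves like a logical XOR even when the "true" values are arbitrary non-zero integers.

      #include <stdio.h>

      int main(void)
      {
          int a = 4, b = 2;            /* both logically true, but 4 ^ 2 == 6 (wrongly "true-XOR-true") */
          if ((!!a) ^ (!!b))           /* !! collapses any non-zero value to 1 */
              printf("exactly one condition is true\n");
          else
              printf("both or neither condition is true\n");
          return 0;
      }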


  • I deleted a full set of columns in a save, but have the original larger sheet. Can I get that back?

    - by Ben Henley
    I have an original sheet that has over 39000 lines in it. I knocked it down to the 1800 lines that I want to import into my database. However, dumb#$$ that I am, I selected only the visible cells and killed about 10 columns that I need. Is there a way to compare to the original sheet using a specific column (i.e. SKU) and pull the data from the original to put back into the missing columns, or do I have to just re-edit the whole thing down again? Please help, as this takes a good day or two to minimize. Any and all help is much appreciated. Below is the column list on the edited sheet vs. the original sheet. SKU DESCRIPTION VENDOR PART # RETAIL UNIT CONVERSION RETAIL U/M RETAIL DEPARTMENT VENDOR NAME SELL PACK QUANTITY BREAK SELL PACK FLAG BLISH VENDOR # FINE LINE CLASS ITEM ACTION FLAG PRIMARY UPC STOCK U/M WEIGHT LENGTH WIDTH HEIGHT SHIP-VIA EXCLUSION HAZARDOUS CODE PRICE SUGGESTED RETAIL SKU DESCRIPTION RETAIL UNIT CONVERSION RETAIL U/M RETAIL DEPARTMENT VENDOR NAME SELL PACK QUANTITY BREAK SELL PACK FLAG FINE LINE CLASS HAZARDOUS CODE PRICE SUGGESTED RETAIL RETAIL SENSITIVITY CODE 2ND UPC CODE 3RD UPC CODE 4TH UPC CODE HEADLINE BULLET #1 BULLET #2 BULLET #3 BULLET #4 BULLET #5 BULLET #6 BULLET #7 BULLET #8 BULLET #9 SIZE COLOR CASE QUANTITY PRODUCT LINE Thanks, Ben


  • Is it okay to implement reference counting through composition?

    - by Billy ONeal
    Most common re-usable reference counted objects use private inheritance to implement re-use. I'm not a huge fan of private inheritance, and I'm curious if this is an acceptable way of handling things: class ReferenceCounter { std::size_t * referenceCount; public: ReferenceCounter() : referenceCount(NULL) {}; ReferenceCounter(ReferenceCounter& other) : referenceCount(other.referenceCount) { if (!referenceCount) { referenceCount = new std::size_t(1); other.referenceCount = referenceCount; } else { ++(*referenceCount); } }; ReferenceCounter& operator=(const ReferenceCounter& other) { ReferenceCounter temp(other); swap(temp); return *this; }; void swap(ReferenceCounter& other) { std::swap(referenceCount, other.referenceCount); }; ~ReferenceCounter() { if (referenceCount) { --(*referenceCount); if (!*referenceCount) delete referenceCount; } }; operator bool() const { return referenceCount && (*referenceCount != 0); }; }; class SomeClientClass { HANDLE someHandleThingy; ReferenceCounter objectsStillActive; public: SomeClientClass() { //Construct handle thingy } ~SomeClientClass() { if (objectsStillActive) return; //Release resources }; }; or are there subtle problems with this I'm not seeing?
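
    For comparison, a minimal sketch of the more conventional eager-counting variant (an assumption about intent, not the poster's code): allocating the counter in the default constructor means every owner, including the first, participates in the count, the copy constructor can take a const reference, and the destructor check becomes a simple decrement-and-test.

      #include <algorithm>  // std::swap
      #include <cstddef>    // std::size_t

      class ReferenceCounter {
          std::size_t* referenceCount;
      public:
          ReferenceCounter() : referenceCount(new std::size_t(1)) {}
          ReferenceCounter(const ReferenceCounter& other)
              : referenceCount(other.referenceCount) { ++*referenceCount; }
          ReferenceCounter& operator=(const ReferenceCounter& other) {
              ReferenceCounter temp(other);
              swap(temp);
              return *this;
          }
          void swap(ReferenceCounter& other) {
              std::swap(referenceCount, other.referenceCount);
          }
          ~ReferenceCounter() {
              if (--*referenceCount == 0) delete referenceCount;
          }
          bool lastOwner() const { return *referenceCount == 1; }  // for the client destructor check
      };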


  • Dealing with Time Zones

    - by RavIncredible
    Hi all, I need some guidance with one of my project requirements. I am developing an application which has to deal with various time zones. Scenario: User 1 is from India, so his time zone would be GMT+05:30. User 2 is from the UK, so his time zone would be GMT+01:00. If User 1 sends a message to User 2, I want to show the message sent/received time as per each user's time zone. For example, if User 1 sends a message at 6:30 Indian time, then when User 2 views the message it should show as 2:00 UK time. Here is my question: whenever I save a message, should I convert its timestamp to GMT+00:00, so that all my base timestamps are the same, and then convert back to the user-specific time zone when I display the message? Would this be complex? Is this the best way of doing it? I'd like to get views on both saving and displaying, and also on when I should do the time conversion from an optimization point of view. I need to deal with any/all time zones. I am developing this application with PHP and MySQL and I am aware of the time zone conversion methods that come with both PHP and MySQL. I am just trying to figure out the best way of doing this. I look forward to all valuable suggestions. Note: as of now I am not much worried about daylight saving. Thanks Ravi


  • Java UnknownFormatConversionException

    - by user1672458
    The code below is throwing this error, and I'm not sure why. It's clearly a problem with outputting String.format to the str variable, but I don't know what's wrong with it. Exception in thread "main" java.util.UnknownFormatConversionException: Conversion = 'i' at java.util.Formatter$FormatSpecifier.conversion(Unknown Source) at java.util.Formatter$FormatSpecifier.<init>(Unknown Source) at java.util.Formatter.parse(Unknown Source) at java.util.Formatter.format(Unknown Source) at java.util.Formatter.format(Unknown Source) at java.lang.String.format(Unknown Source) at Donor.toString(Donor.java:41) at Donor.main(Donor.java:65) - import java.util.Scanner; public class Donor { public String name; public int age; public double donation; Donor() { //Initialized to these values for debugging name = "NoName"; age = 0; donation = 0; } Donor(String nameinit, int ageinit, double donationinit) { name = nameinit; age = ageinit; donation = donationinit; } public String toString() { String str = ""; str = String.format("%s-30%i-6$%d-20", name, age, donation); return str; } public static void main(String[] args) { Scanner input = new Scanner(System.in); String nameinit = null; int ageinit = -1; double donationinit = -1; String outp = null; System.out.print("Enter the donor's name: "); nameinit = input.nextLine(); System.out.print("Enter the donor's age: "); ageinit = input.nextInt(); System.out.print("Enter the donation amount: "); donationinit = input.nextDouble(); Donor d = new Donor(nameinit, ageinit, donationinit); outp = d.toString(); System.out.printf("%s30 %s6 %s10", "Name", "Age", "Donation"); System.out.print("\n" + outp); input.close(); } }


  • Changing associativity

    - by Sorush Rabiee
    Hi... The associativity of the stream insertion operator is right-to-left; forgetting this fact sometimes causes runtime or logical errors. For example: 1st - int F() { static int c = 0; /* internal counter */ return ++c; } In the main function: //....here is main() cout<<"1st="<<F()<<",2nd="<<F()<<",3rd="<<F(); and the output is: 1st=3,2nd=2,3rd=1 which is different from what we expect at first look. 2nd - suppose that we have an implementation of a stack data structure like this: // //... a Stack<DataType> class ... // Stack<int> st(10); for(int i=1;i<11;i++) st.push(i); cout<<st.pop()<<endl<<st.pop()<<endl<<st.pop()<<endl<<st.pop()<<endl; The expected output is something like: 10 9 8 7 but we get: 7 8 9 10 There is no internal bug in the << implementation, but it can be very confusing... and finally [:-)] my question: is there any way to change the associativity of an operator by overloading it?
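
    For what it is worth, a small workaround sketch: the surprise in the first example comes from the unspecified evaluation order of the three F() calls within a single full expression (no overload can change how << groups), so splitting the statement pins the calls to the order they are written in.

      #include <iostream>

      std::cout << "1st=" << F();    // F() called first
      std::cout << ",2nd=" << F();   // then this one
      std::cout << ",3rd=" << F();   // and this one last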


  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1: Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands: If both operands have the same type, then no further conversion is needed. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank. Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type. Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type. For the last rule to apply, it would seem you need an unsigned integer type whose rank is less than that of the signed integer type, while the signed integer type cannot hold all of the values of the unsigned integer type. Is there a real-world example of such a case, or is this statement serving as a catch-all to cover all possible permutations?
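
    A sketch of one real-world case (assuming an ILP32 or LLP64 platform where int and long are both 32 bits): unsigned int has lower rank than long, but long cannot represent every unsigned int value, so the final rule converts both operands to unsigned long.

      #include <stdio.h>

      int main(void)
      {
          unsigned int u = 4000000000u;  /* does not fit in a 32-bit long */
          long l = -1;
          /* Both operands convert to unsigned long: -1 wraps to 4294967295,
             so the sum is 3999999999 rather than a negative number.       */
          printf("%lu\n", u + l);        /* prints 3999999999 on such platforms */
          return 0;
      }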


  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the pearson product correlation via map / reduce / finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. For a simple query like mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' }) I get this to work. But my filter criteria is a little bit more complicated: I have one set of preferences which need to have at least one common element and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate this into either query params as supported by mongoid or a javascript function. All my attempts to solve this failed since the code is either ignored or raises an error. I did a couple of tests. A regular find like User.where(:name.in => ['Arno', 'Bernd', 'Claudia']) works and returns #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]> Trying the same with mapreduce User.collection. mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] }) fails with `serialize': keys must be strings or symbols (TypeError) in bson-1.1.5 The intermediate query parameter looks like this :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]} and at least @operator looks a bit weird to me. I'm also uncertain if the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.


  • Jekyll - How to approach asset processing (minification, spriting...)

    - by Gromix
    I recently switched to Jekyll and I find the conversion pipeline works really well. However I'm stuck on which approach to take when the process is many inputs to one output (ex: concatenating CSS files, creating image sprites...) I know several tools that can do it, that can be called either from the command line or in Ruby code directly. For ex: Jammit css sprites Compass sprites My current solution is a few Jekyll plugins that call these tools. However, it has the following problems: 1. SASS files should be processed, then concatenated/minified SASS-CSS is a Converter, and the concatenation is a Generator run on the output. Unfortunately generators are run first, which means the concatenation is always a step behind (I have to run the build twice) 2. Jekyll does not know about the source/output relationship With converters, when I run Jekyll in server mode, if I change a SASS file it automatically runs the conversion to CSS. When dealing with concatenation/spriting, I haven't found a way to do the same. I end up having to run a "normal" Jekyll build (not server auto) to update the concatenated files and sprites. Thanks for any ideas!


  • C++ error: expected initializer before ‘&’ token

    - by Werner
    Hi, the following piece of C++ code compiled two years ago in a suse 10.1 Linux machine. #ifndef DATA_H #define DATA_H #include <iostream> #include <iomanip> inline double sqr(double x) { return x*x; } enum Direction { X,Y,Z }; inline Direction next(const Direction d) { switch(d) { case X: return Y; case Y: return Z; case Z: return X; } } inline ostream& operator<<(ostream& os,const Direction d) { switch(d) { case X: return os << "X"; case Y: return os << "Y"; case Z: return os << "Z"; } } ... ... Now, I am trying to compile it on Ubuntu 9.10 and I get the error: data.h:20: error: expected initializer before ‘&’ token which is referred to the line of: inline ostream& operator<<(ostream& os,const Direction d) the g++ used on this machine is: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.1-4ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) Could you give me some hint about this error? Thanks
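
    A likely fix, sketched below (an assumption based on the error message, not a confirmed diagnosis): newer libstdc++ headers no longer drag ostream into the global namespace, so the unqualified name is unknown at that line; qualifying it with std:: (or adding a using-declaration) should resolve it.

      #include <ostream>

      inline std::ostream& operator<<(std::ostream& os, const Direction d)
      {
          switch (d) {
              case X: return os << "X";
              case Y: return os << "Y";
              case Z: return os << "Z";
          }
          return os;  // keeps the compiler from warning about falling off the end
      }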


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB. It can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that was freed from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.
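
    A sketch of the mallopt tuning mentioned above (glibc-specific, and whether it helps depends on the allocator version in kubuntu 7.10 - treat it as an experiment, not a fix): lowering M_MMAP_THRESHOLD makes glibc satisfy very large requests with mmap(), so freeing them hands the address space straight back to the kernel instead of leaving a hole that a later, larger request cannot reuse.

      #include <malloc.h>

      int main()
      {
          mallopt(M_MMAP_THRESHOLD, 1 * 1024 * 1024);  // mmap anything >= 1 MiB
          mallopt(M_TRIM_THRESHOLD, 1 * 1024 * 1024);  // release freed heap tail eagerly
          // ... start the CORBA request processing here ...
          return 0;
      }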


  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
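
    One portable approach, sketched below (not the original VC98 mechanism): install a small tee streambuf via std::cout.rdbuf(), so every existing "std::cout <<" statement writes to both the console and a file without touching any of the 60 files.

      #include <fstream>
      #include <iostream>
      #include <streambuf>

      class TeeBuf : public std::streambuf {
          std::streambuf* a_;
          std::streambuf* b_;
      public:
          TeeBuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
      protected:
          virtual int overflow(int c) {                       // forward each character to both sinks
              if (c == traits_type::eof()) return traits_type::not_eof(c);
              const char ch = traits_type::to_char_type(c);
              if (a_->sputc(ch) == traits_type::eof()) return traits_type::eof();
              if (b_->sputc(ch) == traits_type::eof()) return traits_type::eof();
              return c;
          }
          virtual int sync() {                                // flush both sinks
              const int r1 = a_->pubsync();
              const int r2 = b_->pubsync();
              return (r1 == 0 && r2 == 0) ? 0 : -1;
          }
      };

      int main() {
          std::ofstream log("cout.log");
          TeeBuf tee(std::cout.rdbuf(), log.rdbuf());
          std::streambuf* old = std::cout.rdbuf(&tee);        // redirect cout
          std::cout << "goes to the console and to cout.log" << std::endl;
          std::cout.rdbuf(old);                               // restore before the file closes
          return 0;
      }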


  • Refactoring exercise with generics

    - by Berryl
    I have a variation on a Quantity (Fowler) class that is designed to facilitate conversion between units. The type is declared as: public class QuantityConvertibleUnits<TFactory> where TFactory : ConvertableUnitFactory, new() { ... } In order to do math operations between dissimilar units, I convert the right hand side of the operation to the equivalent Quantity of whatever unit the left hand side is in, and do the math on the amount (which is a double) before creating a new Quantity. Inside the generic Quantity class, I have the following: protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs) where TQuantity : QuantityConvertibleUnits<TFactory>, new() { var toUnit = lhs.ConvertableUnit; var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit); var newAmount = lhs.Quantity.Amount + equivalentRhs.Quantity.Amount; return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit); } protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs) where TQuantity : QuantityConvertibleUnits<TFactory>, new() { var toUnit = lhs.ConvertableUnit; var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit); var newAmount = lhs.Quantity.Amount - equivalentRhs.Quantity.Amount; return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit); } ... same for multiply and also divide I need to get the typing right for a concrete Quantity, so an example of an add op looks like: public static ImperialLengthQuantity operator +(ImperialLengthQuantity lhs, ImperialLengthQuantity rhs) { return _Add(lhs, rhs); } The question is those verbose methods in the Quantity class. The only change between the code is the math operator (+, -, *, etc.) so it seems that there should be a way to refactor them into a common method, but I am just not seeing it. How can I refactor that code? Cheers, Berryl


  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One Say you have a little class: class Point3D { private: float x,y,z; public: operator+=() ...etc }; Point3D &Point3D::operator+=(Point3D &other) { this->x += other.x; this->y += other.y; this->z += other.z; } A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are its instructions just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster? Case Two Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats: float coordinateData[NUM_POINTS*3]; void add(int i,int j) //yes it's unsafe, no overlap check... example only { for (int x=0;x<3;++x) { coordinateData[i*3+x] += coordinateData[j*3+x]; } } What about use of SSE here? Any better? In conclusion Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?
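
    A minimal sketch of case two (assumptions: SSE1 is available, unaligned loads are acceptable, and the function name is made up): the flat float array lets four components be added per instruction, which is where SSE usually pays off - bulk operations over many points rather than a single wrapped Point3D.

      #include <xmmintrin.h>  // SSE intrinsics

      void addArrays(float* dst, const float* src, int count)
      {
          int i = 0;
          for (; i + 4 <= count; i += 4) {
              __m128 a = _mm_loadu_ps(dst + i);          // 4 floats from dst
              __m128 b = _mm_loadu_ps(src + i);          // 4 floats from src
              _mm_storeu_ps(dst + i, _mm_add_ps(a, b));  // dst[i..i+3] += src[i..i+3]
          }
          for (; i < count; ++i)                         // scalar tail for leftovers
              dst[i] += src[i];
      }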


  • Template class in C++

    - by inna karpasas
    Hi! I am trying to design a template for my university project. I wrote the following code: #ifndef _LinkedList_H_ #define _LinkedList_H_ #include "Link.h" #include <ostream> template <class L>//error one class LinkedList { private: Link<L> *pm_head; Link<L> * pm_tail; int m_numOfElements; Link<L>* FindLink(L * dataToFind); public: LinkedList(); ~LinkedList(); int GetNumOfElements(){return m_numOfElements;} bool Add( L * data); L *FindData(L * data); template <class L> friend ostream & operator<<(ostream& os,const LinkedList<L> listToprint);//error two L* GetDataOnTop(); bool RemoveFromHead(); L* Remove(L * toRemove); This template uses the Link class template: #ifndef _Link_H_ #define _Link_H_ template <class T>//error 3 class Link { private: T* m_data; Link* m_next; Link* m_prev; public: Link(T* data); ~Link(void); bool Link::operator ==(const Link& other)const; /*getters*/ Link* GetNext()const {return m_next;} Link* GetPrev()const {return m_prev;} T* GetData()const {return m_data;} //setters void SetNext(Link* next) {m_next = next;} void SetPrev(Link* prev) {m_prev = prev;} void SetData(T* data) {m_data = data;} }; error one: shadows template parm 'class L'; error two: declaration of 'class L'; error three: shadows template parm 'class T'. I don't understand what the problem is. I could really use your help. Thank you :)
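
    A sketch of the usual fix for errors one and two (using the poster's names): the friend declaration inside LinkedList re-declares "class L", which shadows the class template parameter; giving the friend's own parameter a different name (and qualifying ostream with std::) avoids the clash.

      #include <ostream>

      template <class L>
      class LinkedList {
          // ...
          template <class U>  // different name, so nothing is shadowed
          friend std::ostream& operator<<(std::ostream& os, const LinkedList<U>& listToPrint);
          // ...
      };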


  • Unable to store NSNumber in core data

    - by Kamlesh
    Hi all, I am using Core Data in my application.I have an attribute named PINCode which has property Int64. Now,in my app I take the PIN code from a text field,convert it into NSNumber and try to store as an attribute value for an entity.But I am unable to do so.The exception that it shows on the console is:- Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unacceptable type of value for attribute: property = "PINCode"; desired type = NSNumber; given type = NSCFString; value = 121.' Here is the code:- (Conversion to NSNumber):- NSString *str= txtPINCode.text; NSNumber *num = [[NSNumber alloc]init]; int tempNum = [str intValue]; num = [NSNumber numberWithInt:tempNum]; (Storing in core data entity):- NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSManagedObject *newContact; newContact = [NSEntityDescription insertNewObjectForEntityForName:@"MemberTable" inManagedObjectContext:context]; [newContact setValue:num forKey:@"PINCode"]; The app crashes at this point.I am unable to find the reason of crash.I have also tried to check the conversion by the following code:- NSNumber *testNumber = [[NSNumber alloc]init]; id test = num; BOOL testMe = [test isKindOfClass:(Class)testNumber]; Can anybody help me with this?? Thanks in advance.


  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error: error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>' would lose some const-volatile qualifiers in order to call 'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)' with [ T=SideCounter, BinaryFunction=std::plus<int> ] c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) : while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const' with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) : see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] I looked in the specifications of binder2nd and there it was: it took a const AdaptibleBinaryFunction. So, not a big deal, I thought. I just used boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate, my nemesis is going to take over the world first! I can't let that happen -- he uses Java! So can someone tell me why this design decision was made? It seems like an odd decision. I guess I'll have to make some of the elements of my class mutable for now...
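
    A sketch of the usual way out (names simplified, so treat them as placeholders rather than the poster's ElementAccumulator): binder2nd stores the wrapped functor as a const member, so the functor's operator() must itself be const; state that genuinely has to change can be held through a pointer, which a const call may still write through and which avoids making members mutable.

      #include <functional>

      struct Accumulator : std::binary_function<int, int, void> {
          int* total;                                                  // external state, held by pointer
          explicit Accumulator(int* t) : total(t) {}
          void operator()(const int& value, const int& weight) const   // const is the key
          {
              *total += value * weight;                                // fine: the pointee is not const
          }
      };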


  • Error using to_char // to_timestamp

    - by pepersview
    Hello, I have a database in PostgreSQL and I'm developing an application in PHP using this database. The problem is that when I execute the following query I get a nice result in phpPgAdmin but in my PHP application I get an error. The query: SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('2010-4-12', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '2010-4-12') ORDER BY t.t_name ASC And this is the error in PHP operator does not exist: text = integer (to_timestamp('', 'YYYY-MM-DD'),'FMD') = (courset... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. The purpose to solve this error is to use the ORIGINAL query in php with : $date = "2010"."-".$selected_month."-".$selected_day; SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('$date', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '$date') ORDER BY t.t_name ASC


  • lambda traits inconsistency across C++0x compilers

    - by Sumant
    I observed some inconsistency between two compilers (g++ 4.5, VS2010 RC) in the way they match lambdas with partial specializations of class templates. I was trying to implement something like boost::function_types for lambdas to extract type traits. Check this for more details. In g++ 4.5, the type of the operator() of a lambda appears to be like that of a free standing function (R (*)(...)) whereas in VS2010 RC, it appears to be like that of a member function (R (C::*)(...)). So the question is are compiler writers free to interpret any way they want? If not, which compiler is correct? See the details below. template <typename T> struct function_traits : function_traits<decltype(&T::operator())> { // This generic template is instantiated on both the compilers as expected. }; template <typename R, typename C> struct function_traits<R (C::*)() const> { // inherits from this one on VS2010 RC typedef R result_type; }; template <typename R> struct function_traits<R (*)()> { // // inherits from this one g++ 4.5 typedef R result_type; }; int main(void) { auto lambda = []{}; function_traits<decltype(lambda)>::result_type *r; // void * } This program compiles on both g++ 4.5 and VS2010 but the function_traits that are instantiated are different as noted in the code.
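
    A sketch that sidesteps the disagreement (it assumes the primary function_traits template from the question): the closure type's operator() is specified as a non-static, normally const member function, so a pointer-to-const-member specialization - written variadically so lambdas that take arguments also match - should be the one selected on a conforming compiler.

      template <typename R, typename C, typename... Args>
      struct function_traits<R (C::*)(Args...) const> {
          typedef R result_type;  // covers both []{} and lambdas with parameters
      };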


  • SQL: Order randomly when inserting objects to a table

    - by Ekaterina
    I have an UDF that selects top 6 objects from a table (with a union - code below) and inserts it into another table. (btw SQL 2005) So I paste the UDF below and what the code does is: selects objects for a specific city and add a level to those (from table Europe) union that selection with a selection from the same table for objects that are from the same country and add a level to those From the union, selection is made to get top 6 objects, order by level, so the objects from the same city will be first, and if there aren't any available, then objects from the same country will be returned from the selection. And my problem is, that I want to make a random selection to get random objects from table Europe, but because I insert the result of my selection into a table, I can't use order by newid() or rand() function because they are time-dependent, so I get the following errors: Invalid use of side-effecting or time-dependent operator in 'newid' within a function. Invalid use of side-effecting or time-dependent operator in 'rand' within a function. UDF: ALTER FUNCTION [dbo].[Objects] (@id uniqueidentifier) RETURNS @objects TABLE ( ObjectId uniqueidentifier NOT NULL, InternalId uniqueidentifier NOT NULL ) AS BEGIN declare @city varchar(50) declare @country int select @city = city, @country = country from Europe where internalId = @id insert @objects select @id, internalId from ( select distinct top 6 [level], internalId from ( select top 6 1 as [level], internalId from Europe N4 where N4.city = @city and N4.internalId != @id union select top 6 2 as [level], internalId from Europe N5 where N5.countryId = @country and N5.internalId != @id ) as selection_1 order by [level] ) as selection_2 return END If you have fresh ideas, please share them with me. (Just please, don't suggest to order by newid() or to add a column rand() with seed DateTime (by ms or sthg), because that won't work.)


  • Getting a handle on GIS math, where do I start?

    - by Joshua
    I am in charge of a program that is used to create a set of nodes and paths for consumption by an autonomous ground vehicle. The program keeps track of the locations of all items in its map by indicating the item's position as being x meters north and y meters east of an origin point of 0,0. In the real world, the vehicle knows the location of the origin's lat and long, as it is determined by a dgps system and is accurate down to a couple centimeters. My program is ignorant of any lat long coordinates. It is one of my goals to modify the program to keep track of lat long coords of items in addition to an origin point and items' x,y position in relation to that origin. At first blush, it seems that I am going to modify the program to allow the lat long coords of the origin to be passed in, and after that I desire that the program will automatically calculate the lat long of every item currently in a map. From what I've researched so far, I believe that I will need to figure out the math behind converting to lat long coords from a UTM like projection where I specify the origin points and meridians etc as opposed to whatever is defined already for UTM. I've come to ask of you GIS programmers, am I on the right track? It seems to me like there is so much to wrap ones head around, and I'm not sure if the answer isn't something as simple as, "oh yea theres a conversion from meters to lat long, here" Currently, due to the nature of DGPS, the system really doesn't need to care about locations more than oh, what... 40 km? radius away from the origin. Given this, and the fact that I need to make sure that the error on my coordinates is not greater than .5 meters, do I need anything more complex than a simple lat/long to meters conversion constant? I'm knee deep in materials here. I could use some pointers about what concepts to research. Thanks much!
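
    For scales of a few tens of kilometres, one common first approximation is sketched below (an assumption to experiment with, not a guarantee of the 0.5 m requirement - a proper geodesic or UTM library such as PROJ should be checked against it): metres north map almost linearly to latitude, and metres east to longitude scaled by the cosine of the latitude.

      #include <cmath>

      // Hypothetical helper: local (east, north) offsets in metres -> lat/long,
      // using a flat-earth approximation around the known origin point.
      void localToLatLon(double originLatDeg, double originLonDeg,
                         double eastMetres, double northMetres,
                         double& latDeg, double& lonDeg)
      {
          const double pi = 3.14159265358979323846;
          const double metresPerDegLat = 111320.0;  // rough mean value; varies slightly with latitude
          const double metresPerDegLon = metresPerDegLat * std::cos(originLatDeg * pi / 180.0);
          latDeg = originLatDeg + northMetres / metresPerDegLat;
          lonDeg = originLonDeg + eastMetres / metresPerDegLon;
      }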


  • Converting an AnsiString to a Unicode String

    - by jrodenhi
    I'm converting a D2006 program to D2010. I have a value stored in a single byte per character string in my database and I need to load it into a control that has a LoadFromStream, so my plan was to write the string to a stream and use that with LoadFromStream. But it did not work. In studying the problem, I see an issue that tells me that I don't really understand how conversion from AnsiString to Unicode string works. Here is some code I am puzzling over: oStringStream := TStringStream.Create(sBuffer); sUnicodeStream := oPayGrid.sStream; //explicit conversion to unicode string iSize1 := StringElementSize(oPaygrid.sStream); iSize2 := StringElementSize(sUnicodeStream); oStringStream.WriteString(sUnicodeStream); When I get to the last line, iSize1 does equal 1 and iSize2 does equal 2, so that part is what I understood from my reading. But, on the last line, after I write the string to the stream, and look at the Bytes Property of the string, it shows this (the string starts as '16,159'): (49 {$31}, 54 {$36}, 44 {$2C}, 49 {$31}, 53 {$35}, 57 {$39} ... I was expecting that it might look something like (49 {$31}, 00 {$00}, 54 {$36}, 00 {$00}, 44 {$2C}, 00 {$00}, 49 {$31}, 00 {$00}, 53 {$35}, 00 {$00}, 57 {$39}, 00 {$00} ... I'm not getting the right results out of the LoadFromStream because it is reading from the stream two bytes at a time, but the data it is receiving is not arranged that way. What is it that I should do to give the LoadFromStream a well formed stream of data based on a unicode string? Thank you for your help.


  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1 if program1 exits successfully without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail, though I had a feeling they wouldn't work anyway, and the best substitute is a combination of a semicolon and an if statement. I mean, when it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten: # Configure a package, compile it and install it ./configure && make && sudo make install PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell: # Configure a package, compile it and install it .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } } Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt which supports the && operator) I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.


  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error: PostgresError - ERROR: operator does not exist: text = integer LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, this seems to be throwing up on a call of p.comments from my Haml template, where p represents a given post. The Datamapper models are related as follows: class Post property :id, Serial ... has n, :comments end class Comment property :id, Serial ... belongs_to :post end This works fine on my local and current hosted environment using MySQL, but Postgres is clearly more strict. There must be hundreds of Datamapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and determined how to fix) this. Thanks!


  • Compilation errors calling find_if using a functor

    - by Jim Wong
    We are having a bit of trouble using find_if to search a vector of pairs for an entry in which the first element of the pair matches a particular value. To make this work, we have defined a trivial functor whose operator() takes a pair as input and compares the first entry against a string. Unfortunately, when we actually add a call to find_if using an instance of our functor constructed using a temporary string value, the compiler produces a raft of error messages. Oddly (to me, anyway), if we replace the temporary with a string that we've created on the stack, things seem to work. Here's what the code (including both versions) looks like: typedef std::pair<std::string, std::string> MyPair; typedef std::vector<MyPair> MyVector; struct MyFunctor: std::unary_function <const MyPair&, bool> { explicit MyFunctor(const std::string& val) : m_val(val) {} bool operator() (const MyPair& p) { return p.first == m_val; } const std::string m_val; }; bool f(const char* s) { MyFunctor f(std::string(s)); // ERROR // std::string str(s); // MyFunctor f(str); // OK MyVector vec; MyVector::const_iterator i = std::find_if(vec.begin(), vec.end(), f); return i != vec.end(); } And here's what the most interesting error message looks like: /usr/include/c++/4.2.1/bits/stl_algo.h:260: error: conversion from ‘std::pair, std::allocator , std::basic_string, std::allocator ’ to non-scalar type ‘std::string’ requested Because we have a workaround, we're mostly curious as to why the first form causes problems. I'm sure we're missing something, but we haven't been able to figure out what it is.
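
    A sketch of the likely culprit (the "most vexing parse"): the failing line declares a function named f taking a std::string parameter called s rather than constructing an object, so the f handed to find_if is a function pointer - which matches the conversion error quoted from stl_algo.h. An extra pair of parentheses restores the object definition; the named-string workaround does the same thing.

      MyFunctor f((std::string(s)));  // parentheses force an object definition, not a function declaration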

