Search Results

Search found 3340 results on 134 pages for 'comma operator'.


  • C++ error: expected initializer before ‘&’ token

    - by Werner
    Hi, the following piece of C++ code compiled two years ago on a SuSE 10.1 Linux machine. #ifndef DATA_H #define DATA_H #include <iostream> #include <iomanip> inline double sqr(double x) { return x*x; } enum Direction { X,Y,Z }; inline Direction next(const Direction d) { switch(d) { case X: return Y; case Y: return Z; case Z: return X; } } inline ostream& operator<<(ostream& os,const Direction d) { switch(d) { case X: return os << "X"; case Y: return os << "Y"; case Z: return os << "Z"; } } ... ... Now, I am trying to compile it on Ubuntu 9.10 and I get the error: data.h:20: error: expected initializer before ‘&’ token which refers to the line: inline ostream& operator<<(ostream& os,const Direction d) The g++ used on this machine is: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.1-4ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) Could you give me a hint about this error? Thanks
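
    One minimal sketch of the usual fix, assuming the error simply comes from the unqualified ostream name (the header shown has no using namespace std;, which current libstdc++ headers no longer paper over), is to qualify it with std:: or add using std::ostream; near the top of data.h:

      #include <iostream>

      enum Direction { X, Y, Z };

      // Qualify ostream with std:: so data.h no longer depends on some other
      // header having made the name visible in the global namespace.
      inline std::ostream& operator<<(std::ostream& os, const Direction d) {
          switch (d) {
              case X: return os << "X";
              case Y: return os << "Y";
              case Z: return os << "Z";
          }
          return os; // keeps -Wreturn-type quiet for the unreachable fall-through
      }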

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB. It can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if the request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds, i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.
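
    A rough sketch of the mallopt route mentioned above, assuming (as the question itself suggests) that the symptom can be tamed by changing how glibc hands freed memory back to the kernel; the 1 MB thresholds are illustrative values only, not a recommendation:

      #include <malloc.h>
      #include <cstdlib>

      int main() {
          // Allocations above 1 MB come from mmap() and are unmapped on free(),
          // so a freed 1.2 GB buffer is not held inside the brk()-grown heap.
          mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
          // Return trailing free heap space to the kernel aggressively.
          mallopt(M_TRIM_THRESHOLD, 1024 * 1024);

          void* buf = std::malloc(1200u * 1024 * 1024);
          std::free(buf);
          return 0;
      }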

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
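
    One portable sketch of the tee idea that depends on no library internals: swap a custom streambuf into std::cout via rdbuf() and forward every character to both the console buffer and a file buffer (the class and file names here are illustrative, not from the original code):

      #include <fstream>
      #include <iostream>
      #include <streambuf>

      // Forwards every character to two underlying streambufs.
      class TeeBuf : public std::streambuf {
      public:
          TeeBuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
      protected:
          virtual int_type overflow(int_type c) {
              if (traits_type::eq_int_type(c, traits_type::eof()))
                  return traits_type::not_eof(c);
              char ch = traits_type::to_char_type(c);
              bool ok = !traits_type::eq_int_type(a_->sputc(ch), traits_type::eof())
                     && !traits_type::eq_int_type(b_->sputc(ch), traits_type::eof());
              return ok ? c : traits_type::eof();
          }
          virtual int sync() {
              return (a_->pubsync() == 0 && b_->pubsync() == 0) ? 0 : -1;
          }
      private:
          std::streambuf* a_;
          std::streambuf* b_;
      };

      int main() {
          std::ofstream log("session.log");
          TeeBuf tee(std::cout.rdbuf(), log.rdbuf());
          std::streambuf* old = std::cout.rdbuf(&tee); // the 1600 call sites stay untouched
          std::cout << "goes to the console and to session.log" << std::endl;
          std::cout.rdbuf(old);                        // restore before tee and log go away
      }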

    Read the article

  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One Say you have a little class: class Point3D { private: float x,y,z; public: operator+=() ...etc }; Point3D &Point3D::operator+=(Point3D &other) { this->x += other.x; this->y += other.y; this->z += other.z; } A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are they just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster? Case Two Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats: float coordinateData[NUM_POINTS*3]; void add(int i,int j) //yes it's unsafe, no overlap check... example only { for (int x=0;x<3;++x) { coordinateData[i*3+x] += coordinateData[j*3+x]; } } What about use of SSE here? Any better? In conclusion Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?
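
    A minimal sketch of what the Case Two intrinsics might look like, under the assumption that each point is padded to four floats so a whole point fits one SSE register (the original 3-float packing would need a scalar or masked tail instead):

      #include <xmmintrin.h> // SSE intrinsics

      // Add point j onto point i; coordinateData holds 4 floats per point (x, y, z, pad).
      void add_sse(float* coordinateData, int i, int j) {
          __m128 a = _mm_loadu_ps(coordinateData + i * 4);
          __m128 b = _mm_loadu_ps(coordinateData + j * 4);
          _mm_storeu_ps(coordinateData + i * 4, _mm_add_ps(a, b));
      }

    For a single operator+= on three floats, the loads and stores in and out of the SSE registers typically dominate, so the padded, bulk layout is where SSE tends to pay off.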

    Read the article

  • Weird bug with C++ lambda expressions in VS2010

    - by Andrei Tita
    In a couple of my projects, the following code: class SmallClass { public: int x1, y1; void TestFunc() { auto BadLambda = [&]() { int g = x1 + 1; //ok int h = y1 + 1; //c2296 int l = static_cast<int>(y1); //c2440 }; int y1_copy = y1; //it works if you create a local copy auto GoodLambda = [&]() { int h = y1_copy + 1; //ok int l = this->y1 + 1; //ok }; } }; generates error C2296: '+' : illegal, left operand has type 'double (__cdecl *)(double)' or alternatively error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int' You get the picture. It also happens if catching by value. The error seems to be tied to the member name "y1". It happened in different classes, different projects and with (seemingly) any type for y1; for example, this code: [...] MyClass y1; void TestFunc() { auto BadLambda = [&]()->void { int l = static_cast<int>(y1); //c2440 }; } generates both these errors: error C2440: 'static_cast' : cannot convert from 'MyClass' to 'int' No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int' There is no context in which this conversion is possible It didn't, however, happen in a completely new project. I thought maybe it was related to Lua (the projects where I managed to reproduce this bug both used Lua), but I did not manage to reproduce it in a new project linking Lua. It doesn't seem to be a known bug, and I'm at a loss. Any ideas as to why this happens? (I don't need a workaround; there are a few in the code already). Using Visual Studio 2010 Express version 10.0.40219.1 Sp1Rel.

    Read the article

  • Refactoring exercise with generics

    - by Berryl
    I have a variation on a Quantity (Fowler) class that is designed to facilitate conversion between units. The type is declared as: public class QuantityConvertibleUnits<TFactory> where TFactory : ConvertableUnitFactory, new() { ... } In order to do math operations between dissimilar units, I convert the right hand side of the operation to the equivalent Quantity of whatever unit the left hand side is in, and do the math on the amount (which is a double) before creating a new Quantity. Inside the generic Quantity class, I have the following: protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs) where TQuantity : QuantityConvertibleUnits<TFactory>, new() { var toUnit = lhs.ConvertableUnit; var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit); var newAmount = lhs.Quantity.Amount + equivalentRhs.Quantity.Amount; return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit); } protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs) where TQuantity : QuantityConvertibleUnits<TFactory>, new() { var toUnit = lhs.ConvertableUnit; var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit); var newAmount = lhs.Quantity.Amount - equivalentRhs.Quantity.Amount; return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit); } ... same for multiply and also divide I need to get the typing right for a concrete Quantity, so an example of an add op looks like: public static ImperialLengthQuantity operator +(ImperialLengthQuantity lhs, ImperialLengthQuantity rhs) { return _Add(lhs, rhs); } The question is those verbose methods in the Quantity class. The only change between the code is the math operator (+, -, *, etc.) so it seems that there should be a way to refactor them into a common method, but I am just not seeing it. How can I refactor that code? Cheers, Berryl

    Read the article

  • template class c++

    - by inna karpasas
    Hi! I am trying to design a template for my university project. I wrote the following code: #ifndef _LinkedList_H_ #define _LinkedList_H_ #include "Link.h" #include <ostream> template <class L>//error one class LinkedList { private: Link<L> *pm_head; Link<L> * pm_tail; int m_numOfElements; Link<L>* FindLink(L * dataToFind); public: LinkedList(); ~LinkedList(); int GetNumOfElements(){return m_numOfElements;} bool Add( L * data); L *FindData(L * data); template <class L> friend ostream & operator<<(ostream& os,const LinkedList<L> listToprint);//error two L* GetDataOnTop(); bool RemoveFromHead(); L* Remove(L * toRemove); This template uses the Link class template: #ifndef _Link_H_ #define _Link_H_ template <class T>//error 3 class Link { private: T* m_data; Link* m_next; Link* m_prev; public: Link(T* data); ~Link(void); bool Link::operator ==(const Link& other)const; /*getters*/ Link* GetNext()const {return m_next;} Link* GetPrev()const {return m_prev;} T* GetData()const {return m_data;} //setters void SetNext(Link* next) {m_next = next;} void SetPrev(Link* prev) {m_prev = prev;} void SetData(T* data) {m_data = data;} }; error one: shadows template parm `class L' error two: declaration of `class L' error three: shadows template parm `class T' I don't understand what the problem is. I could really use your help. Thank you :)
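
    A minimal sketch of one likely fix, assuming the "shadows template parm" messages come from the friend declaration re-using L as its own template parameter name inside a class template that already has a parameter called L; giving the friend a different name (and qualifying ostream with std::) clears that complaint:

      #include <ostream>

      template <class L>
      class LinkedList {
          int m_numOfElements;            // illustrative member only
      public:
          LinkedList() : m_numOfElements(0) {}
          // The friend gets its own parameter name (U), so it no longer shadows
          // the enclosing L, and ostream is qualified since it lives in std.
          template <class U>
          friend std::ostream& operator<<(std::ostream& os,
                                          const LinkedList<U>& listToPrint);
      };

      template <class U>
      std::ostream& operator<<(std::ostream& os, const LinkedList<U>& listToPrint) {
          return os << listToPrint.m_numOfElements;
      }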

    Read the article

  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error: error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>' would lose some const-volatile qualifiers in order to call 'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)' with [ T=SideCounter, BinaryFunction=std::plus<int> ] c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) : while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const' with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) : see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] I looked in the specifications of binder2nd and there it was: it took a const AdaptibleBinaryFunction. So, not a big deal, I thought. I just used boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate, my nemesis is going to take over the world first! I can't let that happen -- he uses Java! So can someone tell me why this design decision was made? It seems like an odd decision. I guess I'll have to make some of the elements of my class mutable for now...
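
    A minimal sketch of why the const requirement bites, assuming the functor's operator() is non-const as the error suggests: binder2nd's own operator() is const, so the stored functor is const at the point of the call, and the wrapped call operator must therefore be const too:

      #include <algorithm>
      #include <functional>
      #include <vector>

      // Deriving from binary_function supplies the typedefs binder2nd needs;
      // the const on operator() is the part that avoids the C3848-style error.
      struct SumEquals : std::binary_function<int, int, bool> {
          bool operator()(int element, int target) const {
              return element + 1 == target;
          }
      };

      int main() {
          std::vector<int> v(3, 4);
          long n = std::count_if(v.begin(), v.end(), std::bind2nd(SumEquals(), 5));
          return n == 3 ? 0 : 1;
      }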

    Read the article

  • Error using to_char // to_timestamp

    - by pepersview
    Hello, I have a database in PostgreSQL and I'm developing an application in PHP using this database. The problem is that when I execute the following query I get a nice result in phpPgAdmin, but in my PHP application I get an error. The query: SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('2010-4-12', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '2010-4-12') ORDER BY t.t_name ASC And this is the error in PHP: operator does not exist: text = integer (to_timestamp('', 'YYYY-MM-DD'),'FMD') = (courset... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. Once this error is solved, the goal is to use the original query in PHP with: $date = "2010"."-".$selected_month."-".$selected_day; SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('$date', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '$date') ORDER BY t.t_name ASC

    Read the article

  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1 if program1 exits successfully without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail, though I had a feeling they wouldn't work anyway, and the best substitute is a combination of a semicolon and an if statement. I mean, when it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten: # Configure a package, compile it and install it ./configure && make && sudo make install PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell: # Configure a package, compile it and install it .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } } Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt which supports the && operator) I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.

    Read the article

  • SQL: Order randomly when inserting objects to a table

    - by Ekaterina
    I have an UDF that selects top 6 objects from a table (with a union - code below) and inserts it into another table. (btw SQL 2005) So I paste the UDF below and what the code does is: selects objects for a specific city and add a level to those (from table Europe) union that selection with a selection from the same table for objects that are from the same country and add a level to those From the union, selection is made to get top 6 objects, order by level, so the objects from the same city will be first, and if there aren't any available, then objects from the same country will be returned from the selection. And my problem is, that I want to make a random selection to get random objects from table Europe, but because I insert the result of my selection into a table, I can't use order by newid() or rand() function because they are time-dependent, so I get the following errors: Invalid use of side-effecting or time-dependent operator in 'newid' within a function. Invalid use of side-effecting or time-dependent operator in 'rand' within a function. UDF: ALTER FUNCTION [dbo].[Objects] (@id uniqueidentifier) RETURNS @objects TABLE ( ObjectId uniqueidentifier NOT NULL, InternalId uniqueidentifier NOT NULL ) AS BEGIN declare @city varchar(50) declare @country int select @city = city, @country = country from Europe where internalId = @id insert @objects select @id, internalId from ( select distinct top 6 [level], internalId from ( select top 6 1 as [level], internalId from Europe N4 where N4.city = @city and N4.internalId != @id union select top 6 2 as [level], internalId from Europe N5 where N5.countryId = @country and N5.internalId != @id ) as selection_1 order by [level] ) as selection_2 return END If you have fresh ideas, please share them with me. (Just please, don't suggest to order by newid() or to add a column rand() with seed DateTime (by ms or sthg), because that won't work.)

    Read the article

  • lambda traits inconsistency across C++0x compilers

    - by Sumant
    I observed some inconsistency between two compilers (g++ 4.5, VS2010 RC) in the way they match lambdas with partial specializations of class templates. I was trying to implement something like boost::function_types for lambdas to extract type traits. Check this for more details. In g++ 4.5, the type of the operator() of a lambda appears to be like that of a free standing function (R (*)(...)) whereas in VS2010 RC, it appears to be like that of a member function (R (C::*)(...)). So the question is are compiler writers free to interpret any way they want? If not, which compiler is correct? See the details below. template <typename T> struct function_traits : function_traits<decltype(&T::operator())> { // This generic template is instantiated on both the compilers as expected. }; template <typename R, typename C> struct function_traits<R (C::*)() const> { // inherits from this one on VS2010 RC typedef R result_type; }; template <typename R> struct function_traits<R (*)()> { // // inherits from this one g++ 4.5 typedef R result_type; }; int main(void) { auto lambda = []{}; function_traits<decltype(lambda)>::result_type *r; // void * } This program compiles on both g++ 4.5 and VS2010 but the function_traits that are instantiated are different as noted in the code.

    Read the article

  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error: PostgresError - ERROR: operator does not exist: text = integer LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, this seems to be throwing up on a call of p.comments from my Haml template, where p represents a given post. The Datamapper models are related as follows: class Post property :id, Serial ... has n, :comments end class Comment property :id, Serial ... belongs_to :post end This works fine on my local and current hosted environment using MySQL, but Postgres is clearly more strict. There must be hundreds of Datamapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and determined how to fix) this. Thanks!

    Read the article

  • Compilation errors calling find_if using a functor

    - by Jim Wong
    We are having a bit of trouble using find_if to search a vector of pairs for an entry in which the first element of the pair matches a particular value. To make this work, we have defined a trivial functor whose operator() takes a pair as input and compares the first entry against a string. Unfortunately, when we actually add a call to find_if using an instance of our functor constructed using a temporary string value, the compiler produces a raft of error messages. Oddly (to me, anyway), if we replace the temporary with a string that we've created on the stack, things seem to work. Here's what the code (including both versions) looks like: typedef std::pair<std::string, std::string> MyPair; typedef std::vector<MyPair> MyVector; struct MyFunctor: std::unary_function <const MyPair&, bool> { explicit MyFunctor(const std::string& val) : m_val(val) {} bool operator() (const MyPair& p) { return p.first == m_val; } const std::string m_val; }; bool f(const char* s) { MyFunctor f(std::string(s)); // ERROR // std::string str(s); // MyFunctor f(str); // OK MyVector vec; MyVector::const_iterator i = std::find_if(vec.begin(), vec.end(), f); return i != vec.end(); } And here's what the most interesting error message looks like: /usr/include/c++/4.2.1/bits/stl_algo.h:260: error: conversion from ‘std::pair, std::allocator , std::basic_string, std::allocator ’ to non-scalar type ‘std::string’ requested Because we have a workaround, we're mostly curious as to why the first form causes problems. I'm sure we're missing something, but we haven't been able to figure out what it is.
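
    A minimal, self-contained sketch of the likely culprit (the classic "most vexing parse", which would also explain why the named-string variant works): the line marked ERROR is parsed as the declaration of a function f taking a std::string parameter named s, not as the definition of a functor object, so find_if later receives a function type rather than a MyFunctor. Extra parentheses force an object definition:

      #include <string>

      struct MyFunctor {
          explicit MyFunctor(const std::string& val) : m_val(val) {}
          std::string m_val;
      };

      int main() {
          const char* s = "hello";
          // MyFunctor f(std::string(s));   // declares a function, not an object
          MyFunctor f((std::string(s)));    // extra parentheses make it an object
          return f.m_val.size() == 5 ? 0 : 1;
      }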

    Read the article

  • Generic InBetween Function.

    - by Luiscencio
    I am tired of writing x > min && x < max, so I want to write a simple function, but I am not sure if I am doing it right... actually I am not, because I get an error: bool inBetween<T>(T x, T min, T max) where T:IComparable { return (x > min && x < max); } errors: Operator '>' cannot be applied to operands of type 'T' and 'T' Operator '<' cannot be applied to operands of type 'T' and 'T' Maybe I have a bad understanding of the where part of the function declaration. Note: for those who are going to tell me that I will be writing more code than before... think of readability =) Any help will be appreciated. EDIT deleted because it was resolved =) ANOTHER EDIT So after some headache I came up with this (ummm) thing, following @Jay's idea of extreme readability: public static class test { public static comparision Between<T>(this T a,T b) where T : IComparable { var ttt = new comparision(); ttt.init(a); ttt.result = a.CompareTo(b) > 0; return ttt; } public static bool And<T>(this comparision state, T c) where T : IComparable { return state.a.CompareTo(c) < 0 && state.result; } public class comparision { public IComparable a; public bool result; public void init<T>(T ia) where T : IComparable { a = ia; } } } Now you can compare anything with extreme readability =) What do you think? I am no performance guru so any tweaks are welcome.

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules: Variants that hash unequally should compare as unequal, both in sorting and direct equality Variants that compare as equal for both sorting and direct equality should hash as equal It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.

    Read the article

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic; I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is: a) I want to LEARN it - it's about my ego this time; it's for a fun project where I want to show myself that I'm really good at this stuff. b) I know a tiny little bit about EBNF (although I don't know yet how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me). c) My parser should be able to take this: 5 * (3 + (2 - 9 * (5 / 7)) + 9) for example and give me the right results. d) To be quite frank, this seems to me to be the biggest problem in writing a compiler or even an interpreter. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser... e) Another thought: even simple computers (like my old Sharp 1246S with only about 2kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already calculate the kind of formula I presented as an example. f) A few ideas, a few inspirations would really be enough - I just have no clue how to do operator precedence and the parentheses. I DO, however, know that it involves an AST and that many people use a stack. So, what do you think?
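
    A minimal sketch of how this is commonly done with a recursive-descent parser (written in C++ purely as an illustration, not in the asker's C# or F#): operator precedence falls out of giving each precedence level its own grammar rule, and parentheses are handled by recursing back to the lowest-precedence rule; the example expression from c) evaluates to roughly 37.857143:

      #include <cctype>
      #include <cstdio>

      // Grammar, lowest precedence first:
      //   expr   := term   (('+' | '-') term)*
      //   term   := factor (('*' | '/') factor)*
      //   factor := number | '(' expr ')'
      struct Parser {
          const char* p;
          void skip() { while (*p == ' ') ++p; }
          double factor() {
              skip();
              if (*p == '(') { ++p; double v = expr(); skip(); ++p; return v; } // skip ')'
              double v = 0;
              while (std::isdigit(static_cast<unsigned char>(*p))) v = v * 10 + (*p++ - '0');
              return v;
          }
          double term() {
              double v = factor();
              for (;;) {
                  skip();
                  if (*p == '*')      { ++p; v *= factor(); }
                  else if (*p == '/') { ++p; v /= factor(); }
                  else return v;
              }
          }
          double expr() {
              double v = term();
              for (;;) {
                  skip();
                  if (*p == '+')      { ++p; v += term(); }
                  else if (*p == '-') { ++p; v -= term(); }
                  else return v;
              }
          }
      };

      int main() {
          Parser parser = { "5 * (3 + (2 - 9 * (5 / 7)) + 9)" };
          std::printf("%f\n", parser.expr()); // 37.857143
      }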

    Read the article

  • Mutual class instances in C++

    - by SepiDev
    Hi guys. What is the issue with this code? Here we have two files: classA.h and classB.h classA.h: #ifndef _class_a_h_ #define _class_a_h_ #include "classB.h" class B; //???? class A { public: A() { ptr_b = new B(); //???? } virtual ~A() { if(ptr_b) delete ptr_b; //???? num_a = 0; } int num_a; B* ptr_b; //???? }; #endif //_class_a_h_ classB.h: #ifndef _class_b_h_ #define _class_b_h_ #include "classA.h" class A; //???? class B { public: B() { ptr_a = new A(); //???? num_b = 0; } virtual ~B() { if(ptr_a) delete ptr_a; //???? } int num_b; A* ptr_a; //???? }; #endif //_class_b_h_ when I try to compile it, the compiler (g++) says: classB.h: In constructor ‘B::B()’: classB.h:12: error: invalid use of incomplete type ‘struct A’ classB.h:6: error: forward declaration of ‘struct A’ classB.h: In destructor ‘virtual B::~B()’: classB.h:16: warning: possible problem detected in invocation of delete operator: classB.h:16: warning: invalid use of incomplete type ‘struct A’ classB.h:6: warning: forward declaration of ‘struct A’ classB.h:16: note: neither the destructor nor the class-specific operator delete will be called, even if they are declared when the class is defined.
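
    One common way out, sketched below as a single listing with comments marking where each piece would live: the headers declare against a forward-declared peer, and the constructor and destructor bodies that need the complete type move into the .cpp files. As an aside, with both constructors calling new on the other class, the two objects would keep constructing each other forever even once this compiles, so one side of the cycle has to stop allocating:

      // --- classA.h ---------------------------------------------------------
      class B;                 // forward declaration: enough for a B* member

      class A {
      public:
          A();                 // defined where B is a complete type
          virtual ~A();
          int num_a;
          B* ptr_b;
      };

      // --- classB.h ---------------------------------------------------------
      class B {
      public:
          B();
          virtual ~B();
          int num_b;
          A* ptr_a;
      };

      // --- classA.cpp (would #include both headers) ---------------------------
      A::A() : num_a(0), ptr_b(new B()) {}
      A::~A() { delete ptr_b; }          // B is complete here, so delete is safe

      // --- classB.cpp ---------------------------------------------------------
      B::B() : num_b(0), ptr_a(0) {}     // does NOT new an A, breaking the cycle
      B::~B() { delete ptr_a; }

      int main() { A a; }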

    Read the article

  • Unable to send '+' through AJAX post?

    - by Harish Kurup
    I am using the Ajax POST method to send data, but I am not able to send '+' (the operator) to the server, i.e. if I want to send 1+ or 20k+ it will only send 1 or 20k... it just wipes out the '+'. HTML code goes here.. <form method='post' onsubmit='return false;' action='#'> <input type='input' name='salary' id='salary' /> <input type='submit' onclick='submitVal();' /> </form> and the JavaScript code goes here, function submitVal() { var sal=document.getElementById("salary").value; alert(sal); var request=getHttpRequest(); request.open('post','updateSal.php',false); request.setRequestHeader("Content-Type","application/x-www-form-urlencoded"); request.send("sal="+sal); if(request.readyState == 4) { alert("update"); } } function getHttpRequest() { var request=false; if(window.XMLHttpRequest) { request=new XMLHttpRequest(); } else if(window.ActiveXObject) { try { request=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try { request=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) { request=false; } } } return request; } In the function submitVal() it first alerts the salary value as it is (if 1+ then it alerts 1+), but when it is posted it just posts the value without the '+' operator, which is needed... Is there any problem with the query string? The PHP backend code is working fine...

    Read the article

  • XPath ordered priority attribute search

    - by user94000
    I want to write an XPath that can return some link elements on an HTML DOM. The syntax is wrong, but here is the gist of what I want: //web:link[@text='Login' THEN_TRY @href='login.php' THEN_TRY @index=0] THEN_TRY is a made-up operator, because I can't find what operator(s) to use. If many links exist on the page for the given set of [attribute=name] pairs, the link which matches the most left-most attribute(s) should be returned instead of any others. For example, consider a case where the above example XPath finds 3 links that match any of the given attributes: link A: text='Sign In', href='Login.php', index=0 link B: text='Login', href='Signin.php', index=15 link C: text='Login', href='Login.php', index=22 Link C ranks as the best match because it matches the First and Second attributes. Link B ranks second because it only matches the First attribute. Link A ranks last because it does not match the First attribute; it only matches the Second and Third attributes. The XPath should return the best match, Link C. If more than one link were tied for "best match", the XPath should return the first best link that it found on the page.

    Read the article

  • Accept templated parameter of stl_container_type<string>::iterator

    - by Rodion Ingles
    I have a function where I have a container which holds strings (eg vector<string>, set<string>, list<string>) and, given a start iterator and an end iterator, go through the iterator range processing the strings. Currently the function is declared like this: template< typename ContainerIter> void ProcessStrings(ContainerIter begin, ContainerIter end); Now this will accept any type which conforms to the implicit interface of implementing operator*, prefix operator++ and whatever other calls are in the function body. What I really want to do is have a definition like the one below which explicitly restricts the amount of input (pseudocode warning): template< typename Container<string>::iterator> void ProcessStrings(Container<string>::iterator begin, Container<string>::iterator end); so that I can use it as such: vector<string> str_vec; list<string> str_list; set<SomeOtherClass> so_set; ProcessStrings(str_vec.begin(), str_vec.end()); // OK ProcessStrings(str_list.begin(), str_list.end()); //OK ProcessStrings(so_set.begin(), so_set.end()); // Error Essentially, what I am trying to do is restrict the function specification to make it obvious to a user of the function what it accepts and if the code fails to compile they get a message that they are using the wrong parameter types rather than something in the function body that XXX function could not be found for XXX class.
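
    A minimal sketch of one way to express that constraint, assuming a C++11 toolchain (Boost's enable_if plus iterator_traits works the same way otherwise): the function keeps its single template parameter but only participates in overload resolution when the iterator's value_type is std::string, so the set<SomeOtherClass> call fails with a plain "no matching function" error at the call site:

      #include <iterator>
      #include <string>
      #include <type_traits>
      #include <vector>

      // Enabled only when the iterator's value_type is std::string.
      template <typename Iter>
      typename std::enable_if<
          std::is_same<typename std::iterator_traits<Iter>::value_type,
                       std::string>::value,
          void>::type
      ProcessStrings(Iter begin, Iter end) {
          for (; begin != end; ++begin) {
              // ... process the string *begin ...
          }
      }

      int main() {
          std::vector<std::string> str_vec(3, "abc");
          ProcessStrings(str_vec.begin(), str_vec.end());   // OK
          // std::set<SomeOtherClass> so_set;
          // ProcessStrings(so_set.begin(), so_set.end());  // no matching function
      }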

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (c1x, I haven't checked the earlier ones and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows: Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand". Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1". Section 7.20.3.3 describes void *malloc(size_t sz) but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than one byte in length. My question is simply this: In an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?

    Read the article

  • Groovy as a substitute for Java when using BigDecimal?

    - by geejay
    I have just completed an evaluation of Java, Groovy and Scala. The factors I considered were: readability, precision. The factors I would like to know about: performance, ease of integration. I needed a BigDecimal level of precision. Here are my results: Java void someOp() { BigDecimal del_theta_1 = toDec(6); BigDecimal del_theta_2 = toDec(2); BigDecimal del_theta_m = toDec(0); del_theta_m = abs(del_theta_1.subtract(del_theta_2)) .divide(log(del_theta_1.divide(del_theta_2))); } Groovy void someOp() { def del_theta_1 = 6.0 def del_theta_2 = 2.0 def del_theta_m = 0.0 del_theta_m = Math.abs(del_theta_1 - del_theta_2) / Math.log(del_theta_1 / del_theta_2); } Scala def other(){ var del_theta_1 = toDec(6); var del_theta_2 = toDec(2); var del_theta_m = toDec(0); del_theta_m = ( abs(del_theta_1 - del_theta_2) / log(del_theta_1 / del_theta_2) ) } Note that in Java and Scala I used static imports. Java: Pros: it is Java Cons: no operator overloading (lots of methods), barely readable/codeable Groovy: Pros: default BigDecimal means no visible typing, least surprising BigDecimal support for all operations (division included) Cons: another language to learn Scala: Pros: has operator overloading for BigDecimal Cons: some surprising behaviour with division (fixed with Decimal128), another language to learn

    Read the article

  • How to avoid the following purify detected memory leak in C++?

    - by Abhijeet
    Hi, I am getting the following memory leak. It's probably being caused by std::string. How can I avoid it? PLK: 23 bytes potentially leaked at 0xeb68278 * Suppressed in /vobs/ubtssw_brrm/test/testcases/.purify [line 3] * This memory was allocated from: malloc [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] operator new(unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] operator new(unsigned) [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_S_create(unsigned, unsigned, std::allocator<char> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_M_clone(std::allocator<char> const&, unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::string<char, std::char_traits<char>, std::allocator<char>>(std::string<char, std::char_traits<char>, std::allocator<char>> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] uec_UEDir::getEntryToUpdateAfterInsertion(rcapi_ImsiGsmMap const&, rcapi_ImsiGsmMap&, std::_Rb_tree_iterator<std::pair<std::string<char, std::char_traits<char>, std::allocator<char>> const, UEDirData >>&) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:2278] uec_UEDir::addUpdate(rcapi_ImsiGsmMap const&, LocalUEDirInfo&, rcapi_ImsiGsmMap&, int, unsigned char) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:282] ucx_UEDirHandler::addUpdateUEDir(rcapi_ImsiGsmMap, UEDirUpdateType, acap_PresenceEvent) [/vobs/ubtssw_brrm/ucx/linux-x86/../src/ucx_UEDirHandler.cc:374]

    Read the article

  • User defined literal arguments are not constexpr?

    - by Pubby
    I'm testing out user defined literals. I want to make _fac return the factorial of the number. Having it call a constexpr function works, however it doesn't let me do it with templates as the compiler complains that the arguments are not and cannot be constexpr. I'm confused by this - aren't literals constant expressions? The 5 in 5_fac is always a literal that can be evaluated during compile time, so why can't I use it as such? First method: constexpr int factorial_function(int x) { return (x > 0) ? x * factorial_function(x - 1) : 1; } constexpr int operator "" _fac(unsigned long long x) { return factorial_function(x); // this works } Second method: template <int N> struct factorial { static const unsigned int value = N * factorial<N - 1>::value; }; template <> struct factorial<0> { static const unsigned int value = 1; }; constexpr int operator "" _fac(unsigned long long x) { return factorial_template<x>::value; // doesn't work - x is not a constexpr }
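
    A minimal sketch of the usual workaround, assuming the goal is compile-time evaluation: a function parameter is never a constant expression inside the body (even of a constexpr function, which must also be callable at run time), whereas the raw literal operator template receives the literal's digits as template arguments, so the value really is available at compile time:

      template <unsigned long long N>
      struct factorial {
          static const unsigned long long value = N * factorial<N - 1>::value;
      };
      template <>
      struct factorial<0> {
          static const unsigned long long value = 1;
      };

      // Turn the literal's digit characters into a number at compile time.
      template <unsigned long long Acc>
      constexpr unsigned long long parse_digits() { return Acc; }

      template <unsigned long long Acc, char C, char... Rest>
      constexpr unsigned long long parse_digits() {
          return parse_digits<Acc * 10 + (C - '0'), Rest...>();
      }

      template <char... Cs>
      constexpr unsigned long long operator "" _fac() {
          return factorial<parse_digits<0, Cs...>()>::value;
      }

      static_assert(5_fac == 120, "evaluated entirely at compile time");

      int main() {}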

    Read the article
