Search Results

Search found 3861 results on 155 pages for 'assignment operator'.

Page 136/155

  • Javascript Getting Objects to Fallback to One Another

    - by Ian
Here's an ugly bit of Javascript it would be nice to find a workaround for. Javascript has no classes, and that is a good thing. But it implements fallback between objects in a rather ugly way. The foundational construct should be to have one object that, when a property fails to be found, falls back to another object. So if we want a to fall back to b we would want to do something like: a = {sun:1}; b = {dock:2}; a.__fallback__ = b; then a.dock == 2; But Javascript instead provides a new operator and prototypes. So we do the far less elegant: function A(sun) { this.sun = sun; }; A.prototype.dock = 2; a = new A(1); a.dock == 2; But aside from elegance, this is also strictly less powerful, because it means that anything created with A gets the same fallback object. What I would like to do is liberate Javascript from this artificial limitation and have the ability to give any individual object any other individual object as its fallback. That way I could keep the current behavior when it makes sense, but use object-level inheritance when that makes sense. My initial approach is to create a dummy constructor function: function setFallback(from_obj, to_obj) { from_obj.constructor = function () {}; from_obj.constructor.prototype = to_obj; } a = {sun:1}; b = {dock:2}; setFallback(a, b); But unfortunately: a.dock == undefined; Any ideas why this doesn't work, or any solutions for an implementation of setFallback? (I'm running on V8, via node.js, in case this is platform dependent)
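
    A sketch of one workaround, assuming V8 as in node.js: reassigning constructor does nothing because it is an ordinary property, and the prototype chain an object falls back along is fixed when the object is created. V8 exposes that chain directly through the non-standard __proto__ property, and standard ES5 code can get the same effect at creation time with Object.create:

        // a sketch, not portable: __proto__ is non-standard but supported by V8
        function setFallback(fromObj, toObj) {
            fromObj.__proto__ = toObj;   // rewire the existing object's fallback
        }
        var a = { sun: 1 };
        var b = { dock: 2 };
        setFallback(a, b);
        // a.dock == 2 -- lookups that miss on a now fall back to b

        // portable ES5 alternative: choose the fallback at creation time
        var c = Object.create(b);
        c.sun = 1;
        // c.dock == 2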

    Read the article

  • Linq2Sql: query - subquery optimisation

    - by Budda
I have the following query: IList<InfrStadium> stadiums = (from sector in DbContext.sectors where sector.Type==typeValue select new InfrStadium(sector.TeamId) ).ToList(); and the InfrStadium class constructor: private InfrStadium(int teamId) { IList<Sector> teamSectors = (from sector in DbContext.sectors where sector.TeamId==teamId select sector) .ToList(); ... work with data } The current implementation performs 1+n queries, where n is the number of records fetched by the first query. I want to optimize that. Here is another version I would love to implement using the 'group' operator: IList<InfrStadium> stadiums = (from sector in DbContext.sectors group sector by sector.TeamId into team_sectors select new InfrStadium(team_sectors.Key, team_sectors) ).ToList(); with the appropriate constructor: private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors) { IList<Sector> teamSectors = eSectors.ToList(); ... work with data } But an attempt to run the query causes the following error: Expression of type 'System.Int32' cannot be used for constructor parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is treated as 'System.Int32'. I've tried to change the query a little (replacing IEnumerable with IQueryable): IList<InfrStadium> stadiums = (from sector in DbContext.sectors group sector by sector.TeamId into team_sectors select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable()) ).ToList(); with the appropriate constructor: private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors) { IList<Sector> teamSectors = eSectors.ToList(); ... work with data } In this case I received another but similar error: Expression of type 'System.Int32' cannot be used for parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method 'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector]' Question 2: Actually, the same question: I can't understand at all what is going on here... P.S. I have another idea for optimizing the query (described here: Linq2Sql: query optimisation) but I would love to find a solution with one request to the DB.
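
    A sketch of one way to get this down to a single round trip (entity and property names are taken from the question; the Type filter from the first query is assumed). The group-by version seemingly fails because LINQ to SQL cannot translate a constructor call that consumes the grouping itself, so the trick is to fetch the rows in one query and do the grouping in memory:

        // single DB query, then LINQ to Objects for the grouping
        IList<InfrStadium> stadiums =
            DbContext.sectors
                .Where(s => s.Type == typeValue)
                .AsEnumerable()                 // everything below runs in memory
                .GroupBy(s => s.TeamId)
                .Select(g => new InfrStadium(g.Key, g))
                .ToList();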

    Read the article

  • Solved: Help with this compile error

    - by Scott
I just picked up an old project and I'm not sure what the following error could mean. g++ -o BufferedReader.o -c -g -Wall -std=c++0x -I/usr/include/xmms2 -Ijsoncpp/include/json/ -fopenmp -I/usr/include/ImageMagick -I/usr/include/xmms2 -I/usr/include/libvisual-0.4 -D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL -DQT_CORE_LIB -DQT_GUI_LIB -DQT_SCRIPT_LIB -DQT_SHARED -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include/QtScript BufferedReader.cpp In file included from BufferedReader.cpp:23: /usr/include/string.h:36:42: error: missing binary operator before token "(" In file included from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/cwchar:47, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/bits/postypes.h:42, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/iosfwd:42, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/ios:39, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/istream:40, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/sstream:39, from BufferedReader.cpp:24: At line 24 of BufferedReader.cpp is #include <string.h>. I've tried it with just <string> but get the same thing. Any clue? Here's the snippet of code from string.h /* Tell the caller that we provide correct C++ prototypes. */ #if defined __cplusplus && __GNUC_PREREQ (4, 4) //line 36 # define __CORRECT_ISO_CPP_STRING_H_PROTO #endif Does that mean __GNUC_PREREQ isn't defined? Edit: Changing -Ijsoncpp/include/json/ to -Ijsoncpp/include stopped the errors. I noticed I was including <json/json.h>. I'm about to switch to JsonGlib though, which is the reason I pulled the project up again. So it's all good. :)
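
    The solved edit fits a likely cause: jsoncpp ships a header named features.h, so with -Ijsoncpp/include/json/ on the include path it shadows glibc's <features.h>, which is what defines __GNUC_PREREQ. A sketch of the mechanics (the glibc macro is paraphrased):

        /* glibc's <features.h> defines, roughly: */
        #define __GNUC_PREREQ(maj, min) \
            ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))

        /* if another features.h shadows it, __GNUC_PREREQ is never defined, so
           in "#if defined __cplusplus && __GNUC_PREREQ (4, 4)" the preprocessor
           replaces the unknown identifier with 0 and then chokes on the "("
           that follows -- exactly "missing binary operator before token '('" */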

    Read the article

  • Combining two data sets and plotting in matlab

    - by bautrey
I am doing experiments with different operational amplifier circuits and I need to plot my measured results onto a graph. I have two data sets: freq1 = [.1 .2 .5 .7 1 3 4 6 10 20 35 45 60 75 90 100]; %kHz Vo1 = [1.2 1.6 1.2 2 2 2.4 14.8 20.4 26.4 30.4 53.6 68.8 90 114 140 152]; %mV V1 = 19.6; Acm = Vo1/(1000*V1); And: freq2 = [.1 .5 1 30 60 70 85 100]; %kHz Vo1 = [3.96 3.96 3.96 3.84 3.86 3.88 3.88 3.88]; %V V1 = .96; Ad = Vo1/(2*V1); (I would show my plots but apparently I need more reps for that) I need to plot the equation, CMRR vs freq: CMRR = 20*log10(abs(Ad/Acm)); The sizes of Ad and Acm are different and the frequency points do not match up, but the boundaries of both are the same, 100Hz to 100kHz (x-axis). On the CMRR line, Matlab says that the Ad and Acm matrix dimensions do not agree. The way I think I would solve this is to use freq1 as the x-axis for CMRR and then take approximated points from Ad according to the values in freq1. Or I could do function approximations of Ad and Acm and then apply the divide operator to those. I do not know how I would code up these two ideas. Any other ideas would be helpful, especially simpler ones. Thanks
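
    A sketch of the first idea using interp1, which does the point matching for you. Note also that with vectors the division must be element-wise (./ rather than /), or Matlab treats it as a matrix right-division:

        % interpolate Ad onto the freq1 grid so the two vectors line up
        Ad_on_freq1 = interp1(freq2, Ad, freq1);      % linear by default
        CMRR = 20*log10(abs(Ad_on_freq1 ./ Acm));     % element-wise divide
        semilogx(freq1, CMRR);                        % log frequency axis
        xlabel('Frequency (kHz)'); ylabel('CMRR (dB)');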

    Read the article

  • Casting Type array to Generic array?

    - by George R
The short version of the question - why can't I do this? I'm restricted to .NET 3.5. T[] genericArray; // Obviously T should be float! genericArray = new T[3]{ 1.0f, 2.0f, 0.0f }; // Can't do this either, why the hell not genericArray = new float[3]{ 1.0f, 2.0f, 0.0f }; Longer version - I'm working with the Unity engine here, although that's not important. What is - I'm trying to add conversion between its fixed Vector2 (2 floats) and Vector3 (3 floats) types and my generic Vector<T> class. I can't cast types directly to a generic array. using UnityEngine; public struct Vector<T> { private readonly T[] _axes; #region Constructors public Vector(int axisCount) { this._axes = new T[axisCount]; } public Vector(T x, T y) { this._axes = new T[2] { x, y }; } public Vector(T x, T y, T z) { this._axes = new T[3]{x, y, z}; } public Vector(Vector2 vector2) { // This doesn't work this._axes = new T[2] { vector2.x, vector2.y }; } public Vector(Vector3 vector3) { // Nor does this this._axes = new T[3] { vector3.x, vector3.y, vector3.z }; } #endregion #region Properties public T this[int i] { get { return _axes[i]; } set { _axes[i] = value; } } public T X { get { return _axes[0];} set { _axes[0] = value; } } public T Y { get { return _axes[1]; } set { _axes[1] = value; } } public T Z { get { return _axes[2]; } set { _axes[2] = value; } } public int Length { get { return this._axes.Length; } } #endregion #region Operators public static explicit operator Vector<T>(Vector2 vector2) { Vector<T> vector = new Vector<T>(vector2); return vector; } public static explicit operator Vector<T>(Vector3 vector3) { Vector<T> vector = new Vector<T>(vector3); return vector; } #endregion }
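
    A sketch of the usual answers, assuming the Vector<T> above: new T[3]{ 1.0f, ... } cannot compile because C# generics are checked once for all possible T, not expanded per type like C++ templates, so the compiler has no conversion from float to T. Two common workarounds are a params constructor and the box-then-unbox cast:

        // caller supplies already-typed values, so no conversion is needed
        public Vector(params T[] axes)
        {
            this._axes = axes;
        }

        public Vector(Vector2 vector2)
        {
            // (T)(object) defers the float -> T check to run time; it throws
            // InvalidCastException if T is not actually float
            this._axes = new T[2] { (T)(object)vector2.x, (T)(object)vector2.y };
        }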

    Read the article

  • Coordinating typedefs and structs in std::multiset (C++)

    - by Sarah
    I'm not a professional programmer, so please don't hesitate to state the obvious. My goal is to use a std::multiset container (typedef EventMultiSet) called currentEvents to organize a list of structs, of type Event, and to have members of class Host occasionally add new Event structs to currentEvents. The structs are supposed to be sorted by one of their members, time. I am not sure how much of what I am trying to do is legal; the g++ compiler reports (in "Host.h") "error: 'EventMultiSet' has not been declared." Here's what I'm doing: // Event.h struct Event { public: bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); } double time; int eventID; int hostID; }; // Host.h ... void calcLifeHist( double, EventMultiSet * ); // produces compiler error ... void addEvent( double, int, int, EventMultiSet * ); // produces compiler error // Host.cpp #include "Event.h" ... // main.cpp #include "Event.h" ... typedef std::multiset< Event, std::less< Event > > EventMultiSet; EventMultiSet currentEvents; EventMultiSet * cePtr = &currentEvents; ... Major questions Where should I include the EventMultiSet typedef? Are my EventMultiSet pointers obviously problematic? Is the compare function within my Event struct (in theory) okay? Thank you very much in advance.
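
    A sketch of the usual arrangement: a typedef has to be visible wherever the name is spelled, so putting it in Event.h (which Host.h already includes, directly or indirectly) fixes the error. Nothing about the pointers is problematic, and the compare function is fine: std::less<Event> will pick up the member operator< automatically.

        // Event.h
        #include <set>
        #include <functional>

        struct Event { /* as above */ };

        typedef std::multiset< Event, std::less< Event > > EventMultiSet;

        // Host.h
        #include "Event.h"
        void calcLifeHist( double, EventMultiSet * );   // now declared before use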

    Read the article

  • Understanding C++ pointers (when they point to a pointer)

    - by Stephano
I think I understand references and pointers pretty well. Here is what I (think I) know: int i = 5; //i is a primitive type, the value is 5, I do not know the address. int *ptr; //a pointer to an int. I have no way of knowing the value yet. ptr = &i; //now I have an address for the value of i (called ptr) *ptr = 10; //go get the value stored at ptr and change it to 10 Please feel free to comment or correct these statements. Now I'm trying to make the jump to arrays of pointers. Here is what I do not know: char **char_ptrs = new char *[50]; Node **node_ptrs = new Node *[50]; My understanding is that I have 2 arrays of pointers, one set of pointers to chars and one to Nodes. So if I wanted to set the values, I would do something like this: char_ptrs[0] = new char[20]; node_ptrs[0] = new Node; Now I have a pointer, in the 0 position of my array, in each respective array. Again, feel free to comment here if I'm confused. So, what does the ** operator do? Likewise, what is putting a single * next to the instantiation doing (*[50])? (what is that called exactly, instantiation?)
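
    A sketch to match the statements above: ** is not one operator, it is two levels of "pointer to", and new char *[50] reads as "allocate an array of 50 objects of type char *" -- the * binds to the element type, not to the new expression:

        char **char_ptrs = new char *[50];   // pointer to the first of 50 char*
        char_ptrs[0] = new char[20];         // slot 0 now points at 20 chars
        char_ptrs[0][5] = 'x';               // two indexings = two dereferences
        **char_ptrs = 'y';                   // same cell as char_ptrs[0][0]

        // cleanup mirrors the allocation: inner arrays first, then the outer one
        delete[] char_ptrs[0];
        delete[] char_ptrs;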

    Read the article

  • Is memory allocation in linux non-blocking?

    - by Mark
I am curious to know if allocating memory using the default new operator is a non-blocking operation. e.g. struct Node { int a,b; }; ... Node *foo = new Node(); If multiple threads tried to create a new Node and if one of them was suspended by the OS in the middle of allocation, would it block other threads from making progress? The reason why I ask is because I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput performance of the two algorithms was virtually identical on a 24 core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput performance of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.
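
    For reference, glibc's allocator (ptmalloc2, in malloc/malloc.c of the glibc source tree) is blocking: it takes a per-arena mutex, so a thread preempted while holding the lock stalls every other thread mapped to the same arena, which fits the slowdown described. A sketch of the recycling approach that keeps the allocator off the hot path:

        // per-thread free list: no shared lock is touched on the hot path
        // (__thread is a GCC extension; C++11 later standardized thread_local)
        struct Node { int a, b; Node* next; };

        static __thread Node* freeList = 0;

        Node* allocNode() {
            if (freeList) {
                Node* n = freeList;
                freeList = n->next;
                return n;
            }
            return new Node();        // slow path: global allocator, may block
        }

        void freeNode(Node* n) {
            n->next = freeList;       // push back onto this thread's list
            freeList = n;
        }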

    Read the article

  • LINQ to Entities - Left Outer Join - SQL 2000

    - by user255234
Hi! I'm using LINQ to Entities. I have the following query in my code; it includes a left outer join: var surgeonList = (from item in context.T1_STM_Surgeon.Include("T1_STM_SurgeonTitle") .Include("OTER").Include("OSLP") join reptable in context.OSLP on item.Rep equals reptable.SlpCode into surgRepresentative where item.ID == surgeonId select new { ID = item.ID, First = item.First, Last = item.Last, Rep = (surgRepresentative.FirstOrDefault() != null) ? surgRepresentative.FirstOrDefault().SlpName : "N/A", Reg = item.OTER.descript, PrimClinic = item.T1_STM_ClinicalCenter.Name, Titles = item.T1_STM_SurgeonTitle, Phone = item.Phone, Email = item.Email, Address1 = item.Address1, Address2 = item.Address2, City = item.City, State = item.State, Zip = item.Zip, Comments = item.Comments, Active = item.Active, DateEntered = item.DateEntered }) .ToList(); My DEV server has SQL 2008, so the code works just fine. When I moved this code to the client's production server - they use SQL 2000 - I started getting "Incorrect syntax near '(' ". I've tried changing the ProviderManifestToken to 2000 in my .edmx file, then I started getting "The execution of this query requires the APPLY operator, which is not supported in versions of SQL Server earlier than SQL Server 2005." I tried changing the token to 2005; the "Incorrect syntax near '(' " is back. Can anybody help me to find a workaround for this? Thank you very much in advance!
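
    A sketch of one possible workaround (entity names from the question): it is the FirstOrDefault() inside the projection that typically forces the APPLY operator. A group join flattened with DefaultIfEmpty() expresses the same left outer join with plain JOIN syntax that SQL 2000 understands, and the null handling can move client-side:

        var surgeonList =
            (from item in context.T1_STM_Surgeon
             join reptable in context.OSLP
                 on item.Rep equals reptable.SlpCode into surgReps
             from rep in surgReps.DefaultIfEmpty()      // LEFT OUTER JOIN, no APPLY
             where item.ID == surgeonId
             select new { item, RepName = rep.SlpName })
            .AsEnumerable()                             // finish shaping in memory
            .Select(x => new
            {
                x.item.ID,
                x.item.First,
                x.item.Last,
                Rep = x.RepName ?? "N/A",
                // ... remaining columns as in the original projection
            })
            .ToList();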

    Read the article

  • C++ CRTP (template pattern) question

    - by aaa
The following piece of code does not compile; the problem is that T::rank is inaccessible (I think) or uninitialized in the parent template. Can you tell me exactly what the problem is? Is passing rank explicitly the only way, or is there a way to query the tensor class directly? Thank you #include <boost/utility/enable_if.hpp> template<class T, // size_t N, class enable = void> struct tensor_operator; // template<class T, size_t N> template<class T> struct tensor_operator<T, typename boost::enable_if_c< T::rank == 4>::type > { tensor_operator(T &tensor) : tensor_(tensor) {} T& operator()(int i,int j,int k,int l) { return tensor_.layout.element_at(i, j, k, l); } T &tensor_; }; template<size_t N, typename T = double> // struct tensor : tensor_operator<tensor<N,T>, N> { struct tensor : tensor_operator<tensor<N,T> > { static const size_t rank = N; }; I know the workaround, but I am interested in the mechanics of template instantiation, for self-education.
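
    The mechanics, sketched: at the point the base class list of tensor is instantiated, tensor<N,T> is still an incomplete type, so T::rank cannot be looked up and the enable_if_c condition can never be satisfied -- no specialization matches. Passing the rank as its own template parameter (the commented-out lines) works precisely because N is known without completing the type:

        #include <cstddef>
        #include <boost/utility/enable_if.hpp>

        template<class T, size_t N, class enable = void>
        struct tensor_operator;

        // N is usable in the condition even while T is incomplete
        template<class T, size_t N>
        struct tensor_operator<T, N, typename boost::enable_if_c< N == 4 >::type> {
            // rank-4 interface, exactly as before
        };

        template<size_t N, typename T = double>
        struct tensor : tensor_operator<tensor<N, T>, N> {
            static const size_t rank = N;
        };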

    Read the article

  • C++ Suppress Automatic Initialization and Destruction

    - by Travis G
    How does one suppress the automatic initialization and destruction of a type? While it is wonderful that T buffer[100] automatically initializes all the elements of buffer, and destroys them when they fall out of scope, this is not the behavior I want. #include <iostream> static int created = 0, destroyed = 0; struct S { S() { ++created; } ~S() { ++destroyed; } }; template <typename T, size_t KCount> class Array { private: T m_buffer[KCount]; public: Array() { // some way to suppress the automatic initialization of m_buffer } ~Array() { // some way to suppress the automatic destruction of m_buffer } }; int main() { { Array<S, 100> arr; } std::cout << "Created:\t" << created << std::endl; std::cout << "Destroyed:\t" << destroyed << std::endl; return 0; } The output of this program is: Created: 100 Destroyed: 100 I would like it to be: Created: 0 Destroyed: 0 My only idea is to make m_buffer some trivially constructed and destructed type like char and then rely on operator[] to wrap the pointer math for me, although this seems like a horribly hacked solution. Another solution would be to use malloc and free, but that gives a level of indirection that I do not want.
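
    The char-buffer idea at the end is, in fact, the standard technique (it is essentially what std::vector does internally): keep raw storage and construct elements on demand with placement new. A sketch, with the caveat that alignment is simply assumed here -- pre-C++0x code typically uses a union or compiler-specific attributes to guarantee it:

        #include <new>      // placement new
        #include <cstddef>

        template <typename T, std::size_t KCount>
        class Array {
        private:
            // raw bytes: no T constructors or destructors ever run automatically
            char m_buffer[KCount * sizeof(T)];
        public:
            Array() {}      // nothing constructed
            ~Array() {}     // nothing destroyed; that is now the caller's job
            T& operator[](std::size_t i) {
                return reinterpret_cast<T*>(m_buffer)[i];
            }
            void construct(std::size_t i) { new (&(*this)[i]) T(); }  // on demand
            void destroy(std::size_t i)   { (*this)[i].~T(); }        // explicit dtor
        };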

    Read the article

  • How to use a class's type as the type argument for an inherited collection property in C#

    - by Edelweiss Peimann
I am trying to create a representation of various types of card that inherit from a generic card class and which all contain references to their owning decks. I tried re-declaring them, as suggested here, but it still won't convert to the specific card type. The code I currently have is as such: public class Deck<T> : List<T> where T : Card { void Shuffle() { throw new NotImplementedException("Shuffle not yet implemented."); } } public class Card { public Deck<Card> OwningDeck { get; set; } } public class FooCard : Card { public Deck<FooCard> OwningDeck { get { return (Deck<FooCard>)base.OwningDeck; } set { OwningDeck = value; } } } The compile-time error I am getting: Error 2 Cannot convert type 'Game.Cards.Deck<Game.Cards.Card>' to 'Game.Cards.Deck<Game.Cards.FooCard>' And a warning suggesting I use the new operator to specify that the hiding is intentional. Would doing so be a violation of convention? Is there a better way? My question to stackoverflow is this: Can what I am trying to do be done elegantly in the .NET type system? If so, can some examples be provided?
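
    One way to do this elegantly in the .NET 3.5 type system is sketched below: push the subtype into the base class as a type parameter (the self-referencing constraint sometimes called the curiously recurring pattern). As an aside, the re-declared setter in the question assigns to itself and would recurse forever. The trade-off of the sketch is that there is no longer a single non-generic Card base to hold mixed cards:

        using System;
        using System.Collections.Generic;

        public class Deck<T> : List<T> where T : Card<T>
        {
            public void Shuffle()
            {
                throw new NotImplementedException("Shuffle not yet implemented.");
            }
        }

        public abstract class Card<TSelf> where TSelf : Card<TSelf>
        {
            // typed correctly for every subtype; no hiding, no casts
            public Deck<TSelf> OwningDeck { get; set; }
        }

        public class FooCard : Card<FooCard> { }

        // usage: new FooCard().OwningDeck is a Deck<FooCard>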

    Read the article

  • Template trick to optimize out allocations

    - by anon
I have: struct DoubleVec { std::vector<double> data; }; DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) { DoubleVec ans(lhs.size()); for(int i = 0; i < lhs.size(); ++i) { ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size() } return ans; } DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) { DoubleVec ans = a + b + c + d; } Now, in the above, the "a + b + c + d" will cause the creation of 3 temporary DoubleVec's -- is there a way to optimize this away with some type of template magic, i.e. to optimize it down to something equivalent to: DoubleVec ans(a.size()); for(int i = 0; i < ans.size(); i++) ans[i] = a[i] + b[i] + c[i] + d[i]; You can assume all DoubleVec's have the same # of elements. The high-level idea is to do some type of templated magic on "+" which "delays the computation" until the =, at which point it looks into itself, goes hmm ... I'm just adding these numbers, and synthesizes a[i] + b[i] + c[i] + d[i] ... instead of all the temporaries. Thanks!
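
    Yes -- this is the classic expression-templates technique. A minimal sketch (operator+ is left unconstrained here for brevity; real code would restrict it so it does not match unrelated types): the + nodes just remember their operands, and the single fused loop happens in operator=:

        #include <vector>
        #include <cstddef>

        template <class L, class R>
        struct AddExpr {
            const L& l; const R& r;
            AddExpr(const L& l_, const R& r_) : l(l_), r(r_) {}
            double operator[](std::size_t i) const { return l[i] + r[i]; }
            std::size_t size() const { return l.size(); }
        };

        struct DoubleVec {
            std::vector<double> data;
            explicit DoubleVec(std::size_t n = 0) : data(n) {}
            double operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i) { return data[i]; }
            std::size_t size() const { return data.size(); }
            template <class E>
            DoubleVec& operator=(const E& e) {   // one loop, no temporaries
                data.resize(e.size());
                for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
                return *this;
            }
        };

        template <class L, class R>
        AddExpr<L, R> operator+(const L& l, const R& r) {
            return AddExpr<L, R>(l, r);
        }

        // usage: DoubleVec ans; ans = a + b + c + d;  // expands to one loop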

    Read the article

  • C# Type conversion between two similar Datatable objects

    - by Ali
I have a .NET project with the Sync Framework and two separate DataSets for MS SQL and Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline. Example: if (online) _dataTable = new MSSQLDataSet.Customer; else _dataTable = new CompactSQLDataSet.Customer; Now everywhere in my code I have to check and do a cast based on the current network mode, like this: public void changeCustomerID(int ID) { if (online) ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value; else ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value; } But I don't think this is very efficient, and I believe it can be done in a smarter way, with only one line of code, by dynamically getting the type of _dataTable at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or an operator to convert the _dataTable to its runtime type but still be able to use its design-time properties, which are the same between the two types? Something like: ((aType)_dataTable)[i].CustomerID = value; //or GetRuntimeType(_dataTable)[i].CustomerID = value;
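
    A sketch of the usual fix: the rows of a typed DataSet are generated as partial classes, so both CustomerRow types can be made to implement a small hand-written interface, and the online/offline branch collapses to one cast (the names mirror the question and are assumptions about the generated code):

        public interface ICustomerRow
        {
            int CustomerID { get; set; }
        }

        // in your own source files, alongside the generated ones:
        public partial class MSSQLDataSet
        {
            public partial class CustomerRow : ICustomerRow { }
        }

        public partial class CompactSQLDataSet
        {
            public partial class CustomerRow : ICustomerRow { }
        }

        public void changeCustomerID(int ID)
        {
            // works in both modes; no online/offline check needed
            ((ICustomerRow)_dataTable.Rows[i]).CustomerID = value;
        }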

    Read the article

  • GUID or int entity key with SQL Compact/EF4?

    - by David Veeneman
    This is a follow-up to an earlier question I posted on EF4 entity keys with SQL Compact. SQL Compact doesn't allow server-generated identity keys, so I am left with creating my own keys as objects are added to the ObjectContext. My first choice would be an integer key, and the previous answer linked to a blog post that shows an extension method that uses the Max operator with a selector expression to find the next available key: public static TResult NextId<TSource, TResult>(this ObjectSet<TSource> table, Expression<Func<TSource, TResult>> selector) where TSource : class { TResult lastId = table.Any() ? table.Max(selector) : default(TResult); if (lastId is int) { lastId = (TResult)(object)(((int)(object)lastId) + 1); } return lastId; } Here's my take on the extension method: It will work fine if the ObjectContext that I am working with has an unfiltered entity set. In that case, the ObjectContext will contain all rows from the data table, and I will get an accurate result. But if the entity set is the result of a query filter, the method will return the last entity key in the filtered entity set, which will not necessarily be the last key in the data table. So I think the extension method won't really work. At this point, the obvious solution seems to be to simply use a GUID as the entity key. That way, I only need to call Guid.NewGuid() method to set the ID property before I add a new entity to my ObjectContext. Here is my question: Is there a simple way of getting the last primary key in the data store from EF4 (without having to create a second ObjectContext for that purpose)? Any other reason not to take the easy way out and simply use a GUID? Thanks for your help.
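
    For what it's worth, a sketch of the GUID route, which avoids both the race and the filtered-set problem because no existing key is ever consulted (the entity names are hypothetical):

        var customer = new Customer();          // hypothetical EF4 entity
        customer.Id = Guid.NewGuid();           // no round trip, no max-key query
        context.Customers.AddObject(customer);  // EF4 ObjectSet API
        context.SaveChanges();

    The commonly cited reason not to take the easy way out: random GUIDs insert at random positions in a clustered primary-key index, which fragments it over time. That matters less on SQL Compact than on the full server, but it is worth knowing before committing to it.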

    Read the article

  • Instrumenting a string

    - by George Polevoy
Back in my C++ era I crafted a library which enabled a string representation of a computation's history. Having a math expression like: TScalar Compute(TScalar a, TScalar b, TScalar c) { return ( a + b ) * c; } I could render its string representation: r = Compute(VerbalScalar("a", 1), VerbalScalar("b", 2), VerbalScalar("c", 3)); Assert.AreEqual(9, r.Value); Assert.AreEqual("(a+b)*c==(1+2)*3", r.History ); C++ operator overloading allowed for substitution of a simple type with a complex self-tracking entity with an internal tree representation of everything happening to the objects. Now I would like to have the same possibility for .NET strings, only instead of variable names I would like to see the stack traces of all the places in code which affected a string. And I want it to work with existing code and existing compiled assemblies. Also I want all this to hook into the Visual Studio debugger, so I could set a breakpoint and see everything that happened to a string. Which technology would allow this kind of thing? I know it sounds like a utopia, but I think the Visual Studio code coverage tools actually do the same kind of job when instrumenting assemblies.
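
    System.String is sealed, so a drop-in instrumented string type is out; what the coverage tools do is rewrite the IL itself (through the CLR profiling API, or offline with a library such as Mono.Cecil), and that is the technology this idea would need for existing assemblies. For values you control, though, the C++ trick ports directly -- a sketch (the numeric half of the history is omitted for brevity):

        public struct VerbalScalar
        {
            public readonly double Value;
            public readonly string History;   // symbolic history, e.g. "(a+b)*c"

            public VerbalScalar(string name, double value)
            {
                Value = value;
                History = name;
            }

            private VerbalScalar(double value, string history)
            {
                Value = value;
                History = history;
            }

            public static VerbalScalar operator +(VerbalScalar a, VerbalScalar b)
            {
                return new VerbalScalar(a.Value + b.Value,
                                        "(" + a.History + "+" + b.History + ")");
            }

            public static VerbalScalar operator *(VerbalScalar a, VerbalScalar b)
            {
                return new VerbalScalar(a.Value * b.Value,
                                        a.History + "*" + b.History);
            }
        }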

    Read the article

  • Simple matrix example using C++ template class

    - by skyeagle
I am trying to write a trivial Matrix class, using C++ templates, in an attempt to brush up my C++ and also to explain something to a fellow coder. This is what I have so far: template <class T> class Matrix { public: Matrix(const unsigned int rows, const unsigned int cols); Matrix(const Matrix& m); Matrix& operator=(const Matrix& m); ~Matrix(); unsigned int getNumRows() const; unsigned int getNumCols() const; T getCellValue(unsigned int row, unsigned col) const; void setCellValue(unsigned int row, unsigned col, T value); private: // Note: intentionally NOT using smart pointers here ... T * m_values; }; template<class T> inline T Matrix<T>::getCellValue(unsigned int row, unsigned col) const { } template<class T> inline void Matrix<T>::setCellValue(unsigned int row, unsigned col, T value) { } I'm stuck on the ctor: since I need to allocate a new[] T, it seems like it needs to be a template method -- however, I'm not sure I have come across a templated ctor before. How can I implement the ctor?
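
    A sketch of the missing pieces: no templated constructor is needed, because inside Matrix<T> the parameter T is already in scope; re-introducing template <class T> on individual members would actually shadow it. The new[] in the constructor is ordinary:

        template <class T>
        class Matrix {
        public:
            Matrix(unsigned int rows, unsigned int cols)
                : m_rows(rows), m_cols(cols),
                  m_values(new T[rows * cols]) {}   // plain new[], nothing special
            ~Matrix() { delete[] m_values; }

            unsigned int getNumRows() const { return m_rows; }
            unsigned int getNumCols() const { return m_cols; }

            T getCellValue(unsigned int row, unsigned int col) const {
                return m_values[row * m_cols + col];   // row-major layout
            }
            void setCellValue(unsigned int row, unsigned int col, T value) {
                m_values[row * m_cols + col] = value;
            }

        private:
            unsigned int m_rows, m_cols;
            T* m_values;   // intentionally not a smart pointer, as in the question
            // copy ctor and operator= still need real bodies: they must
            // deep-copy m_values (the rule of three)
        };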

    Read the article

  • How to write a custom predicate for multi_index_containder with composite_key?

    - by Titan
I googled and searched in Boost's manual, but didn't find any examples. Maybe it's a stupid question... anyway. So we have the famous phonebook from the manual: typedef multi_index_container< phonebook_entry, indexed_by< ordered_non_unique< composite_key< phonebook_entry, member<phonebook_entry,std::string,&phonebook_entry::family_name>, member<phonebook_entry,std::string,&phonebook_entry::given_name> >, composite_key_compare< std::less<std::string>, // family names sorted as by default std::greater<std::string> // given names reversed > >, ordered_unique< member<phonebook_entry,std::string,&phonebook_entry::phone_number> > > > phonebook; phonebook pb; ... // look for all Whites std::pair<phonebook::iterator,phonebook::iterator> p= pb.equal_range(boost::make_tuple("White"), my_custom_comp()); What should my_custom_comp look like? I mean, it's clear to me that it takes boost::multi_index::composite_key_result<CompositeKey> as an argument (due to compilation errors :) ), but what is CompositeKey in that particular case? struct my_custom_comp { bool operator()( ?? boost::multi_index::composite_key_result<CompositeKey> ?? ) const { return blah_blah_blah; } }; Thanks in advance.
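
    In this case CompositeKey is the composite_key<phonebook_entry,...> type of the first index, and composite_key_result<CompositeKey> is the lightweight proxy the index hands to its comparator. The easiest valid predicate is sketched below: reuse composite_key_compare with per-field comparators, since a predicate passed to equal_range only has to be compatible with the index ordering, and building it from the same pieces guarantees that:

        // a sketch: a compatible comparator built from the index's own pieces
        typedef boost::multi_index::composite_key_compare<
            std::less<std::string>,      // family names, as in the index
            std::greater<std::string>    // given names reversed, as in the index
        > name_compare;

        std::pair<phonebook::iterator, phonebook::iterator> p =
            pb.equal_range(boost::make_tuple("White"), name_compare());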

    Read the article

  • Modify passed, nested dict/list

    - by Gerenuk
I was thinking of writing a function to normalize some data. A simple approach is: def normalize(l, aggregate=sum, norm_by=operator.truediv): aggregated=aggregate(l) for i in range(len(l)): l[i]=norm_by(l[i], aggregated) l=[1,2,3,4] normalize(l) l -> [0.1, 0.2, 0.3, 0.4] However, for nested lists and dicts, where I want to normalize over an inner index, this doesn't work. I mean I'd like to get: l=[[1,100],[2,100],[3,100],[4,100]] normalize(l, ?? ) l -> [[0.1,100],[0.2,100],[0.3,100],[0.4,100]] Any ideas how I could implement such a normalize function? Maybe it would be crazy cool to write normalize(l[...][0]) -- is it possible to make this work? Or any other ideas? Also, not only lists but also dicts could be nested. Hmm... EDIT: I just found out that numpy offers such a syntax (for its own arrays, however). Anyone know how I would implement the ellipsis trick myself?
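
    A sketch of one way to handle the nested case without ellipsis magic: let the caller pass accessor functions that say how to read and write the value inside each element. (The numpy syntax works because ndarray slicing returns views; emulating that for plain lists would mean a proxy object whose __getitem__ understands Ellipsis, which is considerably more work.)

        import operator

        def normalize(seq, get=lambda item: item, put=None,
                      aggregate=sum, norm_by=operator.truediv):
            total = aggregate(get(item) for item in seq)
            for i, item in enumerate(seq):
                value = norm_by(get(item), total)
                if put is None:
                    seq[i] = value              # flat list: replace the element
                else:
                    put(item, value)            # nested: write inside the element

        l = [1, 2, 3, 4]
        normalize(l)
        # l -> [0.1, 0.2, 0.3, 0.4]

        m = [[1, 100], [2, 100], [3, 100], [4, 100]]
        normalize(m, get=lambda pair: pair[0],
                     put=lambda pair, v: operator.setitem(pair, 0, v))
        # m -> [[0.1, 100], [0.2, 100], [0.3, 100], [0.4, 100]]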

    Read the article

  • Compiling GWT 2.6.1 at Java 7 source level

    - by Neeko
    I've recently updated my GWT project to 2.6.1, and started to make use of Java 7 syntax since 2.6 now supports Java 7. However, when I attempt to compile, I'm receiving compiler errors such as [ERROR] Line 42: '<>' operator is not allowed for source level below 1.7 How do I specify the GWT compiler to target 1.7? I was under the impression that it would do that by default, but I guess not. I've attempted cleaning the project, including deleting the gwt-unitCache directory but to no avail. Here is my Ant compile target. <target name="compile" depends="prepare"> <javac includeantruntime="false" debug="on" debuglevel="lines,vars,source" srcdir="${src.dir}" destdir="${build.dir}"> <classpath refid="project.classpath"/> </javac> </target> <target name="gwt-compile" depends="compile"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <!-- src dir is added to ensure the module.xml file(s) are on the classpath --> <pathelement location="${src.dir}"/> <pathelement location="${build.dir}"/> <path refid="project.classpath"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${gwt.module.name}"/> </java> </target>
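
    A sketch of the usual fix: pin both compilers to Java 7 explicitly -- javac through its source/target attributes, and the GWT compiler through its -sourceLevel flag (it otherwise tries to auto-detect the level):

        <target name="compile" depends="prepare">
            <javac includeantruntime="false" debug="on" debuglevel="lines,vars,source"
                   source="1.7" target="1.7"
                   srcdir="${src.dir}" destdir="${build.dir}">
                <classpath refid="project.classpath"/>
            </javac>
        </target>

        <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
            <!-- classpath as before -->
            <jvmarg value="-Xmx256M"/>
            <arg value="-sourceLevel"/>
            <arg value="1.7"/>
            <arg value="${gwt.module.name}"/>
        </java>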

    Read the article

  • Databinding expression for retrieving value of related collection using LINQ

    - by joshb
    I have a GridView that is bound to a LINQDataSource control that is returning a collection of customers. Within my DataGrid I need to display the home phone number of a customer, if they have one. The phone numbers of a customer are stored in a separate table with a foreign key pointing to the customer table. The following binding expression gets me the first phone number for a customer: <asp:TemplateField HeaderText="LastName" SortExpression="LastName"> <ItemTemplate> <asp:Label ID="PhoneLabel" runat="server" Text='<%# Eval("Phones[0].PhoneNumber") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> I need to figure out how to get the home phone number specifically (filter based on phone type) and handle the scenario where the customer does not have a home phone in the database. Right now it's throwing an out of range exception if the customer does not have any phone numbers. I've tried using the Where operator with a lambda expression to filter the phone type but it doesn't work: <%# Eval("Phones.Where(p => p.PhoneTypeId == 2).PhoneNumber") %> Solutions or links to any good articles on the subject would be much appreciated.
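
    A sketch of the standard workaround: Eval only walks property paths, it cannot execute a lambda, so the query belongs in a code-behind helper that the template calls (the entity type name below is an assumption):

        <asp:Label ID="PhoneLabel" runat="server"
            Text='<%# GetHomePhone(Container.DataItem) %>'></asp:Label>

        // code-behind; PhoneTypeId == 2 taken from the question as "home"
        // (requires using System.Linq)
        protected string GetHomePhone(object dataItem)
        {
            var customer = (Customer)dataItem;   // hypothetical entity type
            var home = customer.Phones
                .FirstOrDefault(p => p.PhoneTypeId == 2);
            return home != null ? home.PhoneNumber : String.Empty;
        }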

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be e.g. of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking if the given symbol name fitted. Then I thought using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but: It's a bit hard to explain this part but I'll try. If a variable is retrieved the first time in a program in my language, of course its position in the symbol table has to be found (using the vector now). If I iterate through the vector every time the line gets executed (think of a loop), it is terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable: SymbolTable[ myVar.Name ] But think of the following: If the variable, still using a vector, is found the first time, I can store its exact integer position in the vector with it. That means: The next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it but does something like SymbolTable.at( myVar.CachedPosition ). Now my (rather hard?) question: Should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
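
    A sketch of a design that combines both ideas, and the property that makes it safe: unlike vector, map never relocates its elements, so inserting new symbols does not invalidate pointers to existing ones. That means a variable reference can cache a direct pointer after the first resolution and skip the lookup forever after. (map's operator[] is an O(log n) tree search -- fast, but not free inside a loop body.)

        #include <map>
        #include <string>

        struct Symbol { /* string / int / function value ... */ };

        std::map<std::string, Symbol> symbolTable;

        // stored in the parsed program wherever a variable is referenced
        struct VarRef {
            std::string name;
            Symbol* cached;                      // 0 until first resolution
            explicit VarRef(const std::string& n) : name(n), cached(0) {}
            Symbol& resolve() {
                if (!cached)
                    cached = &symbolTable[name]; // one O(log n) lookup, ever;
                                                 // map never moves elements
                return *cached;
            }
        };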

    Read the article

  • SFINAE + sizeof = detect if expression compiles

    - by FredOverflow
    I just found out how to check if operator<< is provided for a type. template<class T> T& lvalue_of_type(); template<class T> T rvalue_of_type(); template<class T> struct is_printable { template<class U> static char test(char(*)[sizeof( lvalue_of_type<std::ostream>() << rvalue_of_type<U>() )]); template<class U> static long test(...); enum { value = 1 == sizeof test<T>(0) }; typedef boost::integral_constant<bool, value> type; }; Is this trick well-known, or have I just won the metaprogramming Nobel prize? ;) EDIT: I made the code simpler to understand and easier to adapt with two global function template declarations lvalue_of_type and rvalue_of_type.
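
    The trick is well known -- it is the classic sizeof-based SFINAE expression check that underlies many Boost-era traits, and C++11 later made it far more direct with decltype. A usage sketch:

        #include <iostream>

        struct NotPrintable {};   // no operator<< anywhere

        int main() {
            std::cout << is_printable<int>::value << "\n";           // 1
            std::cout << is_printable<NotPrintable>::value << "\n";  // 0
        }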

    Read the article

  • How do you calculate div and mod of floating point numbers?

    - by boost
    In Perl, the % operator seems to assume integers. For instance: sub foo { my $n1 = shift; my $n2 = shift; print "perl's mod=" . $n1 % $n2, "\n"; my $res = $n1 / $n2; my $t = int($res); print "my div=$t", "\n"; $res = $res - $t; $res = $res * $n2; print "my mod=" . $res . "\n\n"; } foo( 3044.952963, 7.1 ); foo( 3044.952963, -7.1 ); foo( -3044.952963, 7.1 ); foo( -3044.952963, -7.1 ); gives perl's mod=6 my div=428 my mod=6.15296300000033 perl's mod=-1 my div=-428 my mod=6.15296300000033 perl's mod=1 my div=-428 my mod=-6.15296300000033 perl's mod=-6 my div=428 my mod=-6.15296300000033 Now as you can see, I've come up with a "solution" already for calculating div and mod. However, what I don't understand is what effect the sign of each argument should have on the result. Wouldn't the div always be positive, being the number of times n2 fits into n1? How's the arithmetic supposed to work in this situation?
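
    A sketch using the standard library instead of hand-rolling it: POSIX::fmod is the floating-point modulus, with C semantics (the result takes the sign of the left operand). The sign question comes down to truncated versus floored division -- with a negative divisor the quotient really is negative, because n2 fits into n1 a negative number of times. Perl's integer % follows the floored convention, which is why its sign tracks the right operand in the output above:

        use POSIX qw(fmod floor);

        # truncated division (rounds toward zero); mod has the sign of $n1:
        my $div = int(3044.952963 / 7.1);            # 428
        my $mod = fmod(3044.952963, 7.1);            # 6.152963

        # floored division (rounds toward -inf); mod has the sign of $n2 --
        # the convention Perl's integer % follows:
        my $fdiv = floor(-3044.952963 / 7.1);        # -429
        my $fmod = -3044.952963 - $fdiv * 7.1;       # 0.947037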

    Read the article
