Search Results

Search found 10071 results on 403 pages for 'operator module'.


  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:
      - Connect to the database
      - Retrieve data
      - Lock database records
      - Manage transactions
    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design-time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:
      - Author and test business logic in components which automatically integrate with databases
      - Reuse business logic through multiple SQL-based views of data, supporting different application tasks
      - Access and update the views from browser, desktop, mobile, and web service clients
      - Customize application functionality in layers without requiring modification of the delivered application
    The goal of ADF Business Components is to make the business services developer more productive. ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:
      Simplifying Data Access
      - Design a data model for client displays, including only necessary data
      - Include master-detail hierarchies of any complexity as part of the data model
      - Implement end-user Query-by-Example data filtering without code
      - Automatically coordinate data model changes with the business services layer
      - Automatically validate and save any changes to the database
      Enforcing Business Domain Validation and Business Logic
      - Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
      - Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
      - Navigate relationships between business domain objects and enforce constraints related to compound components
      Supporting Sophisticated UIs with Multipage Units of Work
      - Automatically reflect changes made by business service application logic in the user interface
      - Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
      - Simplify multistep web-based business transactions with automatic web-tier state management
      - Handle images, video, sound, and documents without having to use code
      - Synchronize pending data changes across multiple views of data
      - Consistently apply prompts, tooltips, format masks, and error messages in any application
      - Define custom metadata for any business components to support metadata-driven user interface or application functionality
      - Add dynamic attributes at runtime to simplify per-row state management
      Implementing High-Performance Service-Oriented Architecture
      - Support highly functional web service interfaces for business integration without writing code
      - Enforce best-practice interface-based programming style
      - Simplify application security with automatic JAAS integration and audit maintenance
      - "Write once, run anywhere": use the same business service as plain Java class, EJB session bean, or web service
      Streamlining Application Customization
      - Extend component functionality after delivery without modifying source code
      - Globally substitute delivered components with extended ones without modifying the application
    ADF Business Components implements the business service through the following set of cooperating components:
      Entity object
      An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. Entity objects are basically your one-to-one representation of a database table; each table in the database has one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services: they are responsible for updating, inserting, and deleting records. The Attributes tab displays the actual mapping between attributes and columns. The mapping has the following fields:
      - Name: the name of the attribute we expose in our data model.
      - Type: the data type of the attribute in our application.
      - Column: the column to which we want to map the attribute.
      - Column Type: the type of the column in the database.
      View object
      A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the view object actually come from the entity object. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes; in later stages we might use it.
      Application module
      An application module is the controller of your data layer. It is responsible for keeping hold of the transaction, and it exposes the data model to the view layer: you expose the VOs through the application module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible, and the default behavior provided by the base components can be easily overridden or augmented.
      When you create EOs, a foreign key is translated into an association in our model. It defines the type of relation, which side is the master and which the child, and how the visibility of the association looks. A similar concept exists to identify relations between view objects: these are called view links. They are almost identical to associations except that a view link is based upon attributes defined in the view object; it can also be based upon an association.
      Here's a short summary:
      - Entity Objects: representations of tables
      - Associations: relations between EOs; representations of foreign keys
      - View Objects: logical model
      - View Links: relationships between view objects
      - Application Module: interface to your application
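    For orientation, here is a minimal, hypothetical programmatic client in the style of the ADF BC test clients; the module, view object, and attribute names (model.AppModule, AppModuleLocal, EmployeesView, Ename) are placeholders, not something defined above:

      import oracle.jbo.ApplicationModule;
      import oracle.jbo.Row;
      import oracle.jbo.ViewObject;
      import oracle.jbo.client.Configuration;

      public class TestClient {
          public static void main(String[] args) {
              // Names below are placeholders for your own application module definition.
              ApplicationModule am =
                  Configuration.createRootApplicationModule("model.AppModule", "AppModuleLocal");

              ViewObject vo = am.findViewObject("EmployeesView");  // a VO exposed by the AM
              vo.executeQuery();                                    // runs the SQL the VO generates
              while (vo.hasNext()) {
                  Row row = vo.next();
                  System.out.println(row.getAttribute("Ename"));    // attribute mapped on the EO
              }

              Configuration.releaseRootApplicationModule(am, true); // hand the AM back
          }
      }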


  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    Ok.. this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function; CREATE FUNCTION u_TryCastInt ( @Value as VARCHAR(MAX) ) RETURNS Int AS BEGIN DECLARE @Output AS Int BEGIN TRY SET @Output = CONVERT(Int, @Value) END TRY BEGIN CATCH SET @Output = 0 END CATCH RETURN @Output END Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using try statements within a scalar function, which got me thinking: is error handling in a function even possible? The goal here is to make a robust version of the Convert or Cast functions to allow a SELECT statement to carry through despite conversion errors. For example, take the following; CREATE TABLE tblTest ( f1 VARCHAR(50) ) GO INSERT INTO tblTest(f1) VALUES('1') INSERT INTO tblTest(f1) VALUES('2') INSERT INTO tblTest(f1) VALUES('3') INSERT INTO tblTest(f1) VALUES('f') INSERT INTO tblTest(f1) VALUES('5') INSERT INTO tblTest(f1) VALUES('1.1') SELECT CONVERT(int,f1) AS f1_num FROM tblTest DROP TABLE tblTest It never reaches the point of dropping the table because the execution gets hung on trying to convert 'f' to an integer. I want to be able to do something like this; SELECT u_TryCastInt(f1) AS f1_num FROM tblTest f1_num __________ 1 2 3 0 5 0 Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try and expand the conversation to support SQL Server 2000 since Try blocks are not an option in that scenario. Thanks in advance.
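    One possible approach is to validate declaratively instead of catching the error, since TRY/CATCH is simply not allowed inside functions. The sketch below is an assumption-laden illustration, not a drop-in fix: it only accepts non-negative whole numbers up to nine digits (so it silently rejects some large values that would still fit in an INT), and for SQL Server 2000 the parameter type would have to become VARCHAR(8000) because VARCHAR(MAX) does not exist there.

      CREATE FUNCTION u_TryCastInt ( @Value AS VARCHAR(MAX) )
      RETURNS Int
      AS
      BEGIN
          DECLARE @Output AS Int
          -- Only digits 0-9, at least one character, at most nine digits:
          -- anything else (including 'f' and '1.1') falls through to the default.
          IF @Value NOT LIKE '%[^0-9]%' AND LEN(@Value) BETWEEN 1 AND 9
              SET @Output = CONVERT(Int, @Value)
          ELSE
              SET @Output = 0
          RETURN @Output
      END

    With that in place, SELECT dbo.u_TryCastInt(f1) AS f1_num FROM tblTest should return 1, 2, 3, 0, 5, 0 for the sample table above.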


  • Problem with boost::find_format_all, boost::regex_finder and custom regex formatter (bug boost 1.42)

    - by Nikko
    I have code that has been working for almost 4 years (since boost 1.33) and today I went from boost 1.36 to boost 1.42 and now I have a problem. I'm calling a custom formatter on a string to format parts of the string that match a REGEX. For instance, a string like: "abc;def:" will be changed to "abc\2Cdef\3B" if the REGEX contains "([;:])" boost::find_format_all( mystring, boost::regex_finder( REGEX ), custom_formatter() ); The custom formatter looks like this: struct custom_formatter { template< typename T > std::string operator()( const T & s ) const { std::string matchStr = s.match_results().str(1); // perform substitutions return matchStr; } }; This worked fine but with boost 1.42 I now have "non initialized" s.match_results(), which yields boost::exception_detail::clone_implINS0_::error_info_injectorISt11logic_errorEEEE - Attempt to access an uninitialzed boost::match_results<> class. This means that sometimes I am in the functor to format a string but there is no match. Am I doing something wrong? Or is it normal to enter the functor when there is no match and I should check against something? For now my solution is to try{}catch(){} the exception and everything works fine, but somehow that doesn't feel very good. EDIT1: Actually I have a new empty match at the end of each string to parse. EDIT2: one solution inspired by ablaeul template< typename T > std::string operator()( const T & s ) const { if( s.begin() == s.end() ) return std::string(); std::string matchStr = s.match_results().str(1); // perform substitutions return matchStr; } EDIT3: Seems to be a bug in (at least) boost 1.42


  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules:
      - Variants that hash unequally should compare as unequal, both in sorting and direct equality
      - Variants that compare as equal for both sorting and direct equality should hash as equal
    It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
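    Since normalize-to-string is already the working baseline, one way to make the hashing explicit is to let OLE do the normalization and hash the resulting string. This C++ fragment is only a sketch of that idea, with assumptions flagged in the comments: it leans on VariantChangeType's default conversion rules, std::hash requires C++11/TR1, and any variant that refuses to convert to a string would still need the opaque-blob fallback described above.

      #include <windows.h>
      #include <oleauto.h>
      #include <string>
      #include <functional>   // std::hash (C++11/TR1 - an assumption about the toolchain)

      // Normalize a VARIANT to its BSTR form and hash that string.
      // The same normalization must be used by the equality/sorting comparison,
      // otherwise the "equal implies equal hash" rule breaks.
      std::size_t HashVariantAsString(const VARIANT& v)
      {
          VARIANT tmp;
          VariantInit(&tmp);
          std::size_t h = 0;
          if (SUCCEEDED(VariantChangeType(&tmp, const_cast<VARIANT*>(&v), 0, VT_BSTR))
              && tmp.vt == VT_BSTR && tmp.bstrVal != NULL)
          {
              std::wstring s(tmp.bstrVal, SysStringLen(tmp.bstrVal));
              h = std::hash<std::wstring>()(s);
          }
          VariantClear(&tmp);
          return h;   // 0 doubles as the "didn't fit a string" bucket in this sketch
      }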


  • Parsing an arithmetic expression and building a tree from it in Java

    - by ChocolateBear
    Hi, I needed some help with creating custom trees given an arithmetic expression. Say, for example, you input this arithmetic expression: (5+2)*7 The result tree should look like:

            *
           / \
          +   7
         / \
        5   2

    I have some custom classes to represent the different types of nodes, i.e. PlusOp, LeafInt, etc. I don't need to evaluate the expression, just create the tree, so I can perform other functions on it later. Additionally, the negative operator '-' can only have one child, and to represent '5-2', you must input it as 5 + (-2). Some validation on the expression would be required to ensure each type of operator has the correct number of arguments/children, and that each opening bracket is accompanied by a closing bracket. Also, I should probably mention my friend has already written code which converts the input string into a stack of tokens, if that's going to be helpful for this. I'd appreciate any help at all. Thanks :) (I read that you can write a grammar and use antlr/JavaCC, etc. to create the parse tree, but I'm not familiar with these tools or with writing grammars, so if that's your solution, I'd be grateful if you could provide some helpful tutorials/links for them.)
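    Since a token list already exists, a small hand-written recursive-descent parser may be enough, without learning ANTLR or JavaCC first. The sketch below is only an illustration and makes assumptions: it uses a generic Node class instead of the PlusOp/LeafInt hierarchy, takes the tokens as a List<String>, and handles just +, *, unary - (one child, as described) and parentheses; bracket balance falls out of the grammar checks.

      import java.util.List;

      // Minimal sketch: parses +, *, unary -, parentheses and integers into a tree.
      class Node {
          String op;          // "+", "*", "-", or null for a leaf
          int value;          // used when op == null
          Node left, right;   // unary '-' uses only 'left'
          Node(String op, Node l, Node r) { this.op = op; left = l; right = r; }
          Node(int v) { value = v; }
      }

      class Parser {
          private final List<String> tokens;  // e.g. ["(", "5", "+", "2", ")", "*", "7"]
          private int pos = 0;

          Parser(List<String> tokens) { this.tokens = tokens; }

          private String peek() { return pos < tokens.size() ? tokens.get(pos) : null; }
          private String next() { return tokens.get(pos++); }

          // expr := term ('+' term)*
          Node parseExpr() {
              Node n = parseTerm();
              while ("+".equals(peek())) { next(); n = new Node("+", n, parseTerm()); }
              return n;
          }

          // term := factor ('*' factor)*
          private Node parseTerm() {
              Node n = parseFactor();
              while ("*".equals(peek())) { next(); n = new Node("*", n, parseFactor()); }
              return n;
          }

          // factor := number | '(' expr ')' | '-' factor
          private Node parseFactor() {
              String t = next();
              if ("(".equals(t)) {
                  Node inner = parseExpr();
                  if (!")".equals(next())) throw new IllegalArgumentException("missing ')'");
                  return inner;
              }
              if ("-".equals(t)) return new Node("-", parseFactor(), null); // single child
              return new Node(Integer.parseInt(t));
          }
      }

      // Example: new Parser(Arrays.asList("(", "5", "+", "2", ")", "*", "7")).parseExpr()
      // yields *( +(5, 2), 7 ), i.e. the tree drawn above.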


  • Strange - Clicking Update button doesn’t cause a postback due to <!-- tag

    - by AspOnMyNet
    If I define the following template inside DetailsView, then upon clicking an Update or Insert button, the page is posted back to the server: <EditItemTemplate> <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox> <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator> </EditItemTemplate> If I remove the CompareValidator control from the above code by simply deleting it, then the page still gets posted back. But if instead I remove the CompareValidator control by enclosing it within <!-- --> tags, then for some reason clicking an Update or Insert button doesn’t cause a postback...instead nothing happens: <EditItemTemplate> <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox> <!-- <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator> --> </EditItemTemplate> Any idea why the page doesn't get posted back? thanx
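    A likely explanation (an assumption, since only the markup is shown): <!-- --> is an HTML comment, so the runat="server" validator is still compiled and still registers its client-side validation script, while its rendered element ends up hidden inside the comment. The validation JavaScript then fails when it can't find that element, and the button's client-side submit never fires. A server-side comment removes the control from the control tree entirely; a sketch:

      <EditItemTemplate>
        <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox>
        <%-- server-side comment: the validator below is never compiled or rendered
        <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate"
            Type="Date" Operator="DataTypeCheck" Display="Dynamic">*</asp:CompareValidator>
        --%>
      </EditItemTemplate>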


  • Model Binding, a simple, simple question

    - by Paul Hatcherian
    I have a struct which works much like the System.Nullable type: public struct SpecialProperty<T> { public static implicit operator T(SpecialProperty<T> value) { return value.Value; } public static implicit operator SpecialProperty<T>(T value) { return new SpecialProperty<T> { Value = value }; } T internalValue; public T Value { get { return internalValue; } set { internalValue = value; } } public override bool Equals(object other) { return Value.Equals(other); } public override int GetHashCode() { return Value.GetHashCode(); } public override string ToString() { return Value.ToString(); } } I'm trying to use it with ASP.NET MVC binding. Using the default model binder the property will always yield null. I can fix this by adding ".Value" to the end of every form input name, but I just want it to bind to the new type directly using some sort of custom model binder, but all the solutions I've tried seemed needlessly complex. I feel like I should be able to extend the default binder and with a few lines of code redirect the property binding to the entire model using implicit conversion. I don't quite get the binding paradigm of the default binder, but it seems really stuck on this distinction between the model and model properties. What is the simplest method to do this? Thanks!
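    One fairly small option, sketched here against MVC 2's IValueProvider (the binder and registration names are made up for illustration), is a binder that reads the raw value under the property's own name and funnels it back through the struct's implicit conversion:

      // Hypothetical names; assumes System.Web.Mvc 2.
      public class SpecialPropertyBinder<T> : IModelBinder
      {
          public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
          {
              // Look up the posted value under the property's own name (no ".Value" suffix needed).
              var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
              if (result == null)
                  return new SpecialProperty<T>();

              var typed = (T)result.ConvertTo(typeof(T));
              return (SpecialProperty<T>)typed;   // uses the implicit conversion on the struct
          }
      }

      // Registration, e.g. in Global.asax Application_Start, once per closed type you bind:
      // ModelBinders.Binders.Add(typeof(SpecialProperty<int>), new SpecialPropertyBinder<int>());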


  • trouble with algorithm

    - by rebel_UA
    David likes numbers that, written in base "k", have an odd number (a%2!=0) of zeros at the end. Given the base and a position in that sequence, print the corresponding number. I need to optimize this algorithm: class David{ private: int k; public: David(); David(int); int operator[] (int); }; David::David(){ k=10; }; David::David(int k){ this->k=k; } int David::operator[] (int n){ int q; int p; int i=1; for(int r=0;r<n;i++){ q=0; p=i; for(;;){ if(p%k) break; if(p==0) break; ++q; p/=k; } if(q%2){ r++; } } return i-1; }
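    Assuming that reading is right (operator[](n) returns the n-th positive integer whose trailing-zero count in base k is odd), the walk over every integer can be replaced by counting plus a binary search. The sketch below is an independent illustration of that idea, not a rewrite of the class:

      #include <iostream>

      // How many x in [1, n] have an odd number of trailing zeros in base k?
      // Numbers with exactly m trailing zeros: n/k^m - n/k^(m+1); sum over odd m.
      static long long countGood(long long n, long long k) {
          long long total = 0;
          long long pw = k;          // k^m, starting at m = 1
          bool oddPower = true;
          while (pw <= n) {
              long long atLeastM  = n / pw;                            // >= m zeros
              long long atLeastM1 = (pw > n / k) ? 0 : n / (pw * k);   // >= m+1 zeros
              if (oddPower) total += atLeastM - atLeastM1;
              oddPower = !oddPower;
              if (pw > n / k) break;                                   // stop before pw * k could overflow
              pw *= k;
          }
          return total;
      }

      // n-th number (n >= 1) whose trailing-zero count in base k is odd.
      long long nthGood(long long n, long long k) {
          long long hi = k;
          while (countGood(hi, k) < n) hi *= 2;        // find an upper bound
          long long lo = 1;
          while (lo < hi) {                            // smallest value with countGood >= n
              long long mid = lo + (hi - lo) / 2;
              if (countGood(mid, k) >= n) hi = mid; else lo = mid + 1;
          }
          return lo;
      }

      int main() {
          // With k = 10 the sequence starts 10, 20, ..., so the 3rd element is 30,
          // matching what the original operator[] computes.
          std::cout << nthGood(3, 10) << std::endl;
          return 0;
      }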


  • Am I deleting this properly?

    - by atch
    I have this struct: struct A { const char* name_; A* left_; A* right_; A(const char* name):name_(name), left_(nullptr), right_(nullptr){} A(const A&); //A(const A*);//ToDo A& operator=(const A&); ~A() { /*ToDo*/ }; }; /*Just to compile*/ A& A::operator=(const A& pattern) { //check for self-assignment if (this != &pattern) { void* p = new char[sizeof(A)]; } return *this; } A::A(const A& pat) { void* p = new char[sizeof(A)]; A* tmp = new (p) A("tmp"); tmp->~A(); delete tmp;//I WONDER IF HERE I SHOULD USE DIFFERENT delete[]? } int _tmain(int argc, _TCHAR* argv[]) { A a("a"); A b = a; cin.get(); return 0; } Guys I know this is far from ideal and far from finished. But please don't tell me how to do it properly. I'm trying to figure it out myself. The only thing I would like to know is whether I'm deleting my memory in the proper way. Thanks.
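    Sticking strictly to the allocate/destroy/deallocate question: memory that came from new char[] must go back through delete[] on a pointer of the same type, and raw storage that only ever fed placement new can instead be paired with operator new/operator delete directly. A small sketch of both pairings (it assumes the struct A from the question and says nothing about how the copy constructor should be finished):

      #include <new>      // placement new

      void demo_matched_cleanup()
      {
          // Option 1: bytes from new char[] go back with delete[] on the char pointer.
          char* raw = new char[sizeof(A)];
          A* tmp = new (raw) A("tmp");   // construct in the raw buffer
          tmp->~A();                     // destroy explicitly
          delete[] raw;                  // not "delete tmp": type and new/delete form must match

          // Option 2: skip the char array and pair operator new with operator delete.
          void* p = ::operator new(sizeof(A));
          A* tmp2 = new (p) A("tmp");
          tmp2->~A();
          ::operator delete(p);
      }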


  • Adding Functions to an Implementation of Vector

    - by Meursault
    I have this implementation of vector that I've been working on for a few days using examples from a textbook: #include <iostream> #include <string> #include <cassert> #include <algorithm> #include <cstring> // Vector.h using namespace std; template <class T> class Vector { public: typedef T * iterator; Vector(); Vector(unsigned int size); Vector(unsigned int size, const T & initial); Vector(const Vector<T> & v); // copy constructor ~Vector(); unsigned int capacity() const; // return capacity of vector (in elements) unsigned int size() const; // return the number of elements in the vector bool empty() const; iterator begin(); // return an iterator pointing to the first element iterator end(); // return an iterator pointing to one past the last element T & front(); // return a reference to the first element T & back(); // return a reference to the last element void push_back(const T & value); // add a new element void pop_back(); // remove the last element void reserve(unsigned int capacity); // adjust capacity void resize(unsigned int size); // adjust size void erase(unsigned int size); // deletes an element from the vector T & operator[](unsigned int index); // return reference to numbered element Vector<T> & operator=(const Vector<T> &); private: unsigned int my_size; unsigned int my_capacity; T * buffer; }; template<class T>// Vector<T>::Vector() { my_capacity = 0; my_size = 0; buffer = 0; } template<class T> Vector<T>::Vector(const Vector<T> & v) { my_size = v.my_size; my_capacity = v.my_capacity; buffer = new T[my_size]; for (int i = 0; i < my_size; i++) buffer[i] = v.buffer[i]; } template<class T>// Vector<T>::Vector(unsigned int size) { my_capacity = size; my_size = size; buffer = new T[size]; } template<class T>// Vector<T>::Vector(unsigned int size, const T & initial) { my_size = size; //added = size my_capacity = size; buffer = new T [size]; for (int i = 0; i < size; i++) buffer[i] = initial; } template<class T>// Vector<T> & Vector<T>::operator = (const Vector<T> & v) { delete[ ] buffer; my_size = v.my_size; my_capacity = v.my_capacity; buffer = new T [my_size]; for (int i = 0; i < my_size; i++) buffer[i] = v.buffer[i]; return *this; } template<class T>// typename Vector<T>::iterator Vector<T>::begin() { return buffer; } template<class T>// typename Vector<T>::iterator Vector<T>::end() { return buffer + size(); } template<class T>// T& Vector<T>::Vector<T>::front() { return buffer[0]; } template<class T>// T& Vector<T>::Vector<T>::back() { return buffer[size - 1]; } template<class T> void Vector<T>::push_back(const T & v) { if (my_size >= my_capacity) reserve(my_capacity +5); buffer [my_size++] = v; } template<class T>// void Vector<T>::pop_back() { my_size--; } template<class T>// void Vector<T>::reserve(unsigned int capacity) { if(buffer == 0) { my_size = 0; my_capacity = 0; } if (capacity <= my_capacity) return; T * new_buffer = new T [capacity]; assert(new_buffer); copy (buffer, buffer + my_size, new_buffer); my_capacity = capacity; delete[] buffer; buffer = new_buffer; } template<class T>// unsigned int Vector<T>::size()const { return my_size; } template<class T>// void Vector<T>::resize(unsigned int size) { reserve(size); my_size = size; } template<class T>// T& Vector<T>::operator[](unsigned int index) { return buffer[index]; } template<class T>// unsigned int Vector<T>::capacity()const { return my_capacity; } template<class T>// Vector<T>::~Vector() { delete[]buffer; } template<class T> void Vector<T>::erase(unsigned int size) { } int main() { Vector<int> v; 
v.reserve(2); assert(v.capacity() == 2); Vector<string> v1(2); assert(v1.capacity() == 2); assert(v1.size() == 2); assert(v1[0] == ""); assert(v1[1] == ""); v1[0] = "hi"; assert(v1[0] == "hi"); Vector<int> v2(2, 7); assert(v2[1] == 7); Vector<int> v10(v2); assert(v10[1] == 7); Vector<string> v3(2, "hello"); assert(v3.size() == 2); assert(v3.capacity() == 2); assert(v3[0] == "hello"); assert(v3[1] == "hello"); v3.resize(1); assert(v3.size() == 1); assert(v3[0] == "hello"); Vector<string> v4 = v3; assert(v4.size() == 1); assert(v4[0] == v3[0]); v3[0] = "test"; assert(v4[0] != v3[0]); assert(v4[0] == "hello"); v3.pop_back(); assert(v3.size() == 0); Vector<int> v5(7, 9); Vector<int>::iterator it = v5.begin(); while (it != v5.end()) { assert(*it == 9); ++it; } Vector<int> v6; v6.push_back(100); assert(v6.size() == 1); assert(v6[0] == 100); v6.push_back(101); assert(v6.size() == 2); assert(v6[0] == 100); v6.push_back(101); cout << "SUCCESS\n"; } So far it works pretty well, but I want to add a couple of functions to it that I can't find examples for, a SWAP function that would look at two elements of the vector and switch their values and and an ERASE function that would delete a specific value or range of values in the vector. How should I begin implementing the two extra functions?
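    For the two additions, a sketch of one possible shape: an index-based swap plus erase for a single element and for a half-open range. The definitions would also need matching declarations in the class body (the existing erase(unsigned int) declaration can take the single-element role), and as an aside, back() probably wants buffer[my_size - 1]. Shifting the tail left keeps my_size consistent without touching my_capacity:

      // Exchange the elements at positions i and j (no bounds checking in this sketch).
      template<class T>
      void Vector<T>::swap(unsigned int i, unsigned int j)
      {
          T tmp = buffer[i];
          buffer[i] = buffer[j];
          buffer[j] = tmp;
      }

      // Remove the single element at 'index' by shifting everything after it left one slot.
      template<class T>
      void Vector<T>::erase(unsigned int index)
      {
          if (index >= my_size) return;
          for (unsigned int k = index; k + 1 < my_size; ++k)
              buffer[k] = buffer[k + 1];
          my_size--;
      }

      // Remove the half-open range [first, last).
      template<class T>
      void Vector<T>::erase(unsigned int first, unsigned int last)
      {
          if (first >= my_size || first >= last) return;
          if (last > my_size) last = my_size;
          unsigned int gap = last - first;
          for (unsigned int k = first; k + gap < my_size; ++k)
              buffer[k] = buffer[k + gap];
          my_size -= gap;
      }

    A value-based erase can be layered on top by scanning for the value's index first and then calling the single-element version.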


  • LINQ - is SkipWhile broken?

    - by Judah Himango
    I'm a bit surprised to find the results of the following code, where I simply want to remove all 3s from a sequence of ints: var sequence = new [] { 1, 1, 2, 3 }; var result = sequence.SkipWhile(i => i == 3); // Oh noes! Returns { 1, 1, 2, 3 } Why isn't 3 skipped? My next thought was, OK, the Except operator will do the trick: var sequence = new [] { 1, 1, 2, 3 }; var result = sequence.Except(i => i == 3); // Oh noes! Returns { 1, 2 } In summary, Except removes the 3, but also removes non-distinct elements. Grr. SkipWhile doesn't skip the last element, even if it matches the condition. Grr. Can someone explain why SkipWhile doesn't skip the last element? And can anyone suggest what LINQ operator I can use to remove the '3' from the sequence above?
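    For what it's worth: SkipWhile skips elements only while the predicate holds at the front of the sequence, and the very first element (1) already fails i == 3, so nothing at all is skipped; Except is a set operation, which is why the duplicate 1 disappears along with the 3. A small sketch showing Where, which filters every element and keeps duplicates:

      using System;
      using System.Linq;

      class SkipWhileDemo
      {
          static void Main()
          {
              var sequence = new[] { 1, 1, 2, 3 };

              // Skips the *leading* run of matches only; 1 != 3, so nothing is skipped.
              var skipped = sequence.SkipWhile(i => i == 3);      // { 1, 1, 2, 3 }

              // Filters every element and keeps duplicates.
              var withoutThrees = sequence.Where(i => i != 3);    // { 1, 1, 2 }

              Console.WriteLine(string.Join(", ",
                  withoutThrees.Select(i => i.ToString()).ToArray()));   // 1, 1, 2
          }
      }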


  • InvalidCastException in DataGridView

    - by Max Yaffe
    (Using VS 2010 Beta 2 - .Net 4.0 B2 Rel) I have a class, MyTable, derived from BindingList<S>, where S is a struct. S is made up of several other structs, for example: public class MyTable<S>:BindingList<S> where S: struct { ... } public struct MyStruct { public MyReal r1; public MyReal r2; public MyReal R1 {get{...} set{...}} public MyReal R2 {get{...} set{...}} ... } public struct MyReal { private Double d; private void InitFromString(string) {this.d = ...;} public MyReal(Double d) { this.d = d;} public MyReal(string val) { this.d = default(Double); InitFromString(val);} public override string ToString() { return this.d.ToString();} public static explicit operator MyReal(string s) { return new MyReal(s);} public static implicit operator String(MyReal r) { return r.ToString();} ... } OK, I use the MyTable as a binding source for a DataGridView. I can load the data grid easily using InitFromString on individual fields in MyStruct. The problem comes when I try to edit a value in a cell of the DataGridView. Going to the first row, first column, I change the value of the existing number. It gives an exception blizzard, the first line of which says: System.FormatException: Invalid cast from 'System.String' to 'MyReal' I've looked at the casting discussions and reference books but don't see any obvious problems. Any ideas?
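    One plausible cause (an assumption, since the binding code isn't shown): when a cell is committed, DataGridView converts the edited string through the component-model conversion machinery (TypeConverter / Convert.ChangeType), which never sees user-defined explicit cast operators, hence the FormatException. Attaching a TypeConverter to MyReal is one way to teach it the conversion; a sketch with an illustrative converter name:

      using System;
      using System.ComponentModel;
      using System.Globalization;

      // Illustrative converter: lets component-model code (DataGridView, binding, designers)
      // convert between string and MyReal without using the cast operators.
      public class MyRealConverter : TypeConverter
      {
          public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
          {
              return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
          }

          public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
          {
              var s = value as string;
              if (s != null) return new MyReal(s);          // reuse the string constructor
              return base.ConvertFrom(context, culture, value);
          }

          public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
          {
              if (destinationType == typeof(string) && value is MyReal)
                  return value.ToString();
              return base.ConvertTo(context, culture, value, destinationType);
          }
      }

      // Then mark the struct:
      // [TypeConverter(typeof(MyRealConverter))]
      // public struct MyReal { ... }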


  • Possible mem leak?

    - by LCD Fire
    I'm new to the concept so don't be hard on me. Why doesn't this code produce a second destructor call? The names of the classes are self-explanatory. The SString will print a message in ~SString(). It only prints one destructor message. int main(int argc, TCHAR* argv[]) { smart_ptr<SString> smt(new SString("not lost")); new smart_ptr<SString>(new SString("but lost")); return 0; } Is this a memory leak? The impl. for smart_ptr is from here, edited: //copy ctor smart_ptr(const smart_ptr<T>& ptrCopy) { m_AutoPtr = new T(*ptrCopy.get()); } //overloading = operator smart_ptr<T>& operator=(smart_ptr<T>& ptrCopy) { if(m_AutoPtr) delete m_AutoPtr; m_AutoPtr = new T(*ptrCopy.get()); return *this; }


  • Why doesn't Perl file glob() work outside of a loop in scalar context?

    - by Rob
    According to the Perl documentation on file globbing, the <*> operator or glob() function, when used in a scalar context, should iterate through the list of files matching the specified pattern, returning the next file name each time it is called or undef when there are no more files. But, the iterating process only seems to work from within a loop. If it isn't in a loop, then it seems to start over immediately before all values have been read. From the Perl docs: In scalar context, glob iterates through such filename expansions, returning undef when the list is exhausted. http://perldoc.perl.org/functions/glob.html However, in scalar context the operator returns the next value each time it's called, or undef when the list has run out. http://perldoc.perl.org/perlop.html#I/O-Operators Example code: use warnings; use strict; my $filename; # in scalar context, <*> should return the next file name # each time it is called or undef when the list has run out $filename = <*>; print "$filename\n"; $filename = <*>; # doesn't work as documented, starts over and print "$filename\n"; # always returns the same file name $filename = <*>; print "$filename\n"; print "\n"; print "$filename\n" while $filename = <*>; # works in a loop, returns next file # each time it is called In a directory with 3 files...file1.txt, file2.txt, and file3.txt, the above code will output: file1.txt file1.txt file1.txt file1.txt file2.txt file3.txt Note: The actual perl script should be outside the test directory, or you will see the file name of the script in the output as well. Am I doing something wrong here, or is this how it is supposed to work?
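    A way to think about it, plus a sketch of two workarounds: each textual occurrence of <*> or glob() in the source is its own iterator with its own state, so the three separate statements above each begin a brand-new expansion and therefore all return the first name; only the single occurrence inside the while loop is called repeatedly and walks through the list. Reading the whole list at once sidesteps the issue:

      use strict;
      use warnings;

      # Grab every match up front, then take names as needed.
      my @files  = glob('*');          # one expansion
      my $first  = shift @files;       # file1.txt
      my $second = shift @files;       # file2.txt
      print "$first\n$second\n";

      # Or keep a single iterator in one place and call it repeatedly.
      while (defined(my $name = glob('*'))) {
          print "$name\n";
      }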


  • Template with constant expression: error C2975 with VC++2008

    - by Arman
    Hello, I am trying to use elements of meta programming, but hit the wall with the first trial. I would like to have a comparator structure which can be used as follows: intersect_by<ID>(L1.data, L2.data, "By ID: "); intersect_by<IDf>(L1.data, L2.data, "By IDf: "); Where: struct ID{};// Tag used for original IDs struct IDf{};// Tag used for the file position //following Boost.MultiIndex examples template<typename Tag,typename MultiIndexContainer> void intersect_by( const MultiIndexContainer& L1,const MultiIndexContainer& L2,std::string msg, Tag* =0 /* fixes a MSVC++ 6.0 bug with implicit template function parms */ ) { /* obtain a reference to the index tagged by Tag */ const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L1_ID_index= get<Tag>(L1); const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L2_ID_index= get<Tag>(L2); std::set_intersection( L1_ID_index.begin(), L1_ID_index.end(), L2_ID_index.begin(), L2_ID_index.end(), std::inserter(s, s.begin()), strComparator() // Here I get the C2975 error ); } template<int N> struct strComparator; template<> struct strComparator<0>{ bool operator () (const particleID& id1, const particleID& id2) const { return id1.ID < id2.ID; } }; template<> struct strComparator<1>{ bool operator () (const particleID& id1, const particleID& id2) const { return id1.IDf < id2.IDf; } }; What am I missing? kind regards Arman.


  • Rename a file with perl

    - by perlnoob
    I have a file in a different folder I want to rename in perl, I was looking at a solution earlier that showed something like this: for (<backup.rar>) { my $file = $_; my $new = $_ 'backup'. @test .'.rar'; rename $file, $new or die "Error, can not rename $file as $new: $!"; } however backup.rar is in a different folder, I did try putting "C:\backup\backup.rar" in the <> above, however I got the same error. C:\Program Files\WinRAR>perl backup.pl String found where operator expected at backup.pl line 35, near "$_ 'backup'" (Missing operator before 'backup'?) syntax error at backup.pl line 35, near "$_ 'backup'" Execution of backup.pl aborted due to compilation errors. I was using # Get time my @test = POSIX::strftime("%m-%d-%Y--%H-%M-%S\n", localtime); print @test; to get the current time, however I couldn't seem to get it to rename correctly. What can I do to fix this? Please note I am doing this on a windows box.
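    Two separate things appear to be going on: the compile error comes from the missing concatenation operator after $_ (Perl needs a . between the variable and the string), and the odd file names come from interpolating an array whose single element still ends in "\n". A sketch of a version that renames a file in another folder with a timestamp in the new name (the C:/backup path is only an example):

      use strict;
      use warnings;
      use POSIX qw(strftime);

      # strftime returns a plain scalar; leave the newline out so it can be part of a file name.
      my $stamp = strftime("%m-%d-%Y--%H-%M-%S", localtime);

      my $old = 'C:/backup/backup.rar';            # forward slashes are fine on Windows
      my $new = "C:/backup/backup-$stamp.rar";     # note the '.'-free interpolation, no stray "\n"

      rename $old, $new
          or die "Error, can not rename $old as $new: $!";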


  • Function that prints something to std::ostream and returns std::ostream?

    - by dehmann
    I want to write a function that outputs something to a ostream that's passed in, and return the stream, like this: std::ostream& MyPrint(int val, std::ostream* out) { *out << val; return *out; } int main(int argc, char** argv){ std::cout << "Value: " << MyPrint(12, &std::cout) << std::endl; return 0; } It would be convenient to print the value like this and embed the function call in the output operator chain, like I did in main(). It doesn't work, however, and prints this: $ ./a.out 12Value: 0x6013a8 The desired output would be this: Value: 12 How can I fix this? Do I have to define an operator<< instead? UPDATE: Clarified what the desired output would be. UPDATE2: Some people didn't understand why I would print a number like that, using a function instead of printing it directly. This is a simplified example, and in reality the function prints a complex object rather than an int.
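    What happens in the chained version: MyPrint(12, &std::cout) runs first (printing the 12 straight away), and the std::ostream& it returns is then printed by the outer chain through the stream's pointer-like conversion, which is where the 0x6013a8 comes from. Defining an operator<< for a small printable object is one way to keep the call-site style; this sketch wraps an int only because the real complex object isn't shown, and it reuses the MyPrint name purely for illustration:

      #include <iostream>

      // Instead of printing inside a function, return a small object that knows
      // how to print itself, and give it an operator<<.
      struct MyPrint
      {
          int val;                       // stand-in for the complex object
          explicit MyPrint(int v) : val(v) {}
      };

      std::ostream& operator<<(std::ostream& out, const MyPrint& p)
      {
          return out << p.val;           // all the formatting lives here
      }

      int main()
      {
          std::cout << "Value: " << MyPrint(12) << std::endl;   // prints "Value: 12"
          return 0;
      }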


  • Can a function return an object? Objective-C and NSMutableArray

    - by seaworthy
    I have an NSMutableArray. Its members eventually become members of an array instance in a class. I want to put the instantiation of NSMutableArray into a function and to return an array object. If I can do this, I can make some of my code easier to read. Is this possible? Here is what I am trying to figure out. //Definition: function Objects (float a, float b) { NSMutableArray *array = [[NSMutableArray alloc] init]; [array addObject:[NSNumber numberWithFloat:a]]; [array addObject:[NSNumber numberWithFloat:b]]; //[release array]; ???????? return array; } //Declaration: Math *operator = [[Math alloc] init]; [operator findSum:Objects(20.0,30.0)]; My code compiles if I instantiate NSMutableArray right before I send the message to the receiver. I know I can have an array argument along with the method. What I have a problem seeing is how to use a function and to replace the argument with a function call. Any help is appreciated. I am interested in the concept, not in suggestions to replace the findSum method.
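    Yes, a plain C function can return an object pointer. Under manual reference counting (which matches the alloc/release style in the question; with ARC there is nothing extra to do), the usual convention is to autorelease what the function allocates so the caller isn't left responsible for a release it never sees. A sketch, with the function renamed only to keep it distinct; the Math class and findSum: are taken from the question:

      #import <Foundation/Foundation.h>

      // Manual reference counting (pre-ARC): build and return an autoreleased array.
      NSMutableArray *MakeObjects(float a, float b)
      {
          NSMutableArray *array = [[NSMutableArray alloc] init];
          [array addObject:[NSNumber numberWithFloat:a]];
          [array addObject:[NSNumber numberWithFloat:b]];
          return [array autorelease];   // caller may use it; it is released with the pool
      }

      // Usage, assuming the Math class from the question:
      // Math *operator = [[Math alloc] init];
      // [operator findSum:MakeObjects(20.0f, 30.0f)];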


  • What is the point of the logical operators in C?

    - by reubensammut
    I was just wondering if there is an XOR logical operator in C (something like && for AND but for XOR). I know I can split an XOR into ANDs, NOTs and ORs but a simple XOR would be much better. Then it occurred to me that if I use the normal XOR bitwise operator between two conditions, it might just work. And for my tests it did. Consider: int i = 3; int j = 7; int k = 8; Just for the sake of this rather stupid example, if I need k to be either greater than i or greater than j but not both, XOR would be quite handy. if ((k > i) XOR (k > j)) printf("Valid"); else printf("Invalid"); or printf("%s",((k > i) XOR (k > j)) ? "Valid" : "Invalid"); I put the bitwise XOR ^ and it produced "Invalid". Putting the results of the two comparisons in two integers resulted in the 2 integers to contain a 1, hence the XOR produced a false. I've then tried it with the & and | bitwise operators and both gave the expected results. All this makes sense knowing that true conditions have a non zero value, whilst false conditions have zero values. I was wondering, is there a reason to use the logical && and || when the bitwise operators &, | and ^ work just the same? Thanks Reuben
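    The reason it worked here is that comparison operators always yield exactly 0 or 1, so ^, & and | happen to behave on their results; the logical operators earn their keep with general "truthy" values (where any non-zero counts as true) and with short-circuit evaluation (the right side isn't evaluated if the left side already decides the result). A common idiom for a logical XOR is to normalize with ! first; a small sketch:

      #include <stdio.h>

      /* Comparisons always yield 0 or 1, so ^ happens to work on them.
         For arbitrary "truthy" values, normalize with ! first (or compare with !=). */
      #define LOGICAL_XOR(a, b) (!(a) != !(b))

      int main(void)
      {
          int i = 3, j = 7, k = 8;

          /* k > i and k > j are both true here, so exactly-one-of-them is false. */
          printf("%s\n", LOGICAL_XOR(k > i, k > j) ? "Valid" : "Invalid");   /* Invalid */

          /* Why bitwise & surprises with general values: 1 and 2 are both "true",
             yet 1 & 2 == 0. The logical operators (and the ! trick) don't have that problem. */
          printf("%d %d\n", 1 & 2, 1 && 2);    /* prints: 0 1 */
          return 0;
      }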


  • Is it okay to implement reference counting through composition?

    - by Billy ONeal
    Most common re-usable reference counted objects use private inheritance to implement re-use. I'm not a huge fan of private inheritance, and I'm curious if this is an acceptable way of handling things: class ReferenceCounter { std::size_t * referenceCount; public: ReferenceCounter() : referenceCount(NULL) {}; ReferenceCounter(ReferenceCounter& other) : referenceCount(other.referenceCount) { if (!referenceCount) { referenceCount = new std::size_t(1); other.referenceCount = referenceCount; } else { ++(*referenceCount); } }; ReferenceCounter& operator=(const ReferenceCounter& other) { ReferenceCounter temp(other); swap(temp); return *this; }; void swap(ReferenceCounter& other) { std::swap(referenceCount, other.referenceCount); }; ~ReferenceCounter() { if (referenceCount) { --(*referenceCount); if (!*referenceCount) delete referenceCount; } }; operator bool() const { return referenceCount && (*referenceCount != 0); }; }; class SomeClientClass { HANDLE someHandleThingy; ReferenceCounter objectsStillActive; public: SomeClientClass() { //Construct handle thingy } ~SomeClientClass() { if (objectsStillActive) return; //Release resources }; }; or are there subtle problems with this I'm not seeing?


  • Changing associativity

    - by Sorush Rabiee
    Hi... The associativity of the stream insertion operator is rtl; forgetting this fact sometimes causes runtime or logical errors. For example: 1st- int F() { static int c=0; /* internal counter */ return ++c; } in the main function: //....here is main() cout<<"1st="<<F()<<",2nd="<<F()<<",3rd="<<F(); and the output is: 1st=3,2nd=2,3rd=1 which is different from what we expect at first look. 2nd- suppose that we have an implementation of a stack data structure like this: // //... a Stack<DataType> class …… // Stack<int> st(10); for(int i=1;i<11;i++) st.push(i); cout<<st.pop()<<endl<<st.pop()<<endl<<st.pop()<<endl<<st.pop()<<endl; the expected output is something like: 10 9 8 7 but we have: 7 8 9 10 There is no internal bug in the << implementation, but it can be so confusing... and finally[:-)] my question: is there any way to change the associativity of an operator by overloading it?


  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the pearson product correlation via map / reduce / finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. For a simple query like mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' }) I get this to work. But my filter criteria is a little bit more complicated: I have one set of preferences which need to have at least one common element and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate this into either query params as supported by mongoid or a javascript function. All my attempts to solve this failed since the code is either ignored or raises an error. I did a couple of tests. A regular find like User.where(:name.in => ['Arno', 'Bernd', 'Claudia']) works and returns #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]> Trying the same with mapreduce User.collection. mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] }) fails with `serialize': keys must be strings or symbols (TypeError) in bson-1.1.5 The intermediate query parameter looks like this :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]} and at least @operator looks a bit weird to me. I'm also uncertain if the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.
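    One detail that may explain the TypeError (an assumption drawn from the error shown): :name.in produces a Mongoid::Criterion::Complex object as a hash key, and the raw driver's BSON serializer only accepts string or symbol keys, so anything passed straight to User.collection.mapreduce has to be a plain MongoDB-style hash. A sketch reusing the call shape from above; the geo clause is only illustrative:

      query = {
        'name' => { '$in' => ['Arno', 'Bernd', 'Claudia'] }
        # a distance restriction could be expressed the same way, e.g. with a 2d index:
        # 'location' => { '$near' => [lng, lat], '$maxDistance' => max_distance }
      }

      User.collection.mapreduce(mapper, reducer, :finalize => finalizer, :query => query)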


  • C++ error: expected initializer before ‘&’ token

    - by Werner
    Hi, the following piece of C++ code compiled two years ago in a suse 10.1 Linux machine. #ifndef DATA_H #define DATA_H #include <iostream> #include <iomanip> inline double sqr(double x) { return x*x; } enum Direction { X,Y,Z }; inline Direction next(const Direction d) { switch(d) { case X: return Y; case Y: return Z; case Z: return X; } } inline ostream& operator<<(ostream& os,const Direction d) { switch(d) { case X: return os << "X"; case Y: return os << "Y"; case Z: return os << "Z"; } } ... ... Now, I am trying to compile it on Ubuntu 9.10 and I get the error: data.h:20: error: expected initializer before ‘&’ token which is referred to the line of: inline ostream& operator<<(ostream& os,const Direction d) the g++ used on this machine is: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.1-4ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) Could you give me some hint about this error? Thanks
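    A guess at the cause, since only the header is shown: ostream is used unqualified, and while older GCC/libstdc++ header combinations happened to make the name visible, GCC 4.4's <iostream> no longer does, so the compiler doesn't recognize ostream as a type and the declaration falls apart at the &. Qualifying the standard names (or adding a using-declaration after the includes) is the usual fix; a sketch of the operator with the qualification added:

      #include <iostream>

      enum Direction { X, Y, Z };

      // Qualify the standard types (or add 'using std::ostream;' after the includes).
      inline std::ostream& operator<<(std::ostream& os, const Direction d)
      {
          switch (d) {
              case X: return os << "X";
              case Y: return os << "Y";
              case Z: return os << "Z";
          }
          return os;   // keeps -Wreturn-type quiet for out-of-range values
      }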


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a linux system (kubuntu 7.10) that runs a number of CORBA Server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G Bytes. It can be up to about 1.9G Bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size or if the request allocates a smaller size than the previous. The memory appears to be free'd ok - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous. In this case, operator 'new' throws an exception. It's as if the memory that has been free'd from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds. i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.
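    On the mallopt idea, a couple of knobs are worth experimenting with before swapping out new/delete (glibc's operator new normally ends up in malloc anyway). This is only a sketch with illustrative values; whether it helps depends on whether the real culprit is heap fragmentation rather than, say, address-space exhaustion in a 32-bit process:

      #include <malloc.h>
      #include <stdlib.h>

      /* Allocations above M_MMAP_THRESHOLD are served by mmap() and handed back to the
         kernel immediately on free(), so a 1.2-1.9 GB buffer can't get stranded behind
         smaller heap allocations. Values here are illustrative, not tuned. */
      int main(void)
      {
          mallopt(M_MMAP_THRESHOLD, 1024 * 1024);   /* use mmap for anything >= 1 MB */
          mallopt(M_TRIM_THRESHOLD, 1024 * 1024);   /* return freed heap tail more eagerly */

          void *buf = malloc((size_t)1200 * 1024 * 1024);
          /* ... use buf ... */
          free(buf);
          return 0;
      }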


  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
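    One portable way to get the tee behaviour back without touching the 1600-odd call sites is to swap std::cout's buffer for a custom streambuf that forwards to two targets; it relies only on the documented streambuf interface, so it shouldn't break with the next library upgrade. A sketch (file name and flushing policy are illustrative):

      #include <fstream>
      #include <iostream>
      #include <streambuf>

      // A streambuf that copies every character to two other streambufs.
      class teebuf : public std::streambuf
      {
      public:
          teebuf(std::streambuf* a, std::streambuf* b) : sb1(a), sb2(b) {}

      protected:
          virtual int_type overflow(int_type c)
          {
              if (c == traits_type::eof())
                  return traits_type::not_eof(c);
              const char ch = traits_type::to_char_type(c);
              if (sb1->sputc(ch) == traits_type::eof() ||
                  sb2->sputc(ch) == traits_type::eof())
                  return traits_type::eof();
              return c;
          }
          virtual int sync()
          {
              const int r1 = sb1->pubsync();
              const int r2 = sb2->pubsync();
              return (r1 == 0 && r2 == 0) ? 0 : -1;
          }

      private:
          std::streambuf* sb1;
          std::streambuf* sb2;
      };

      int main()
      {
          std::ofstream log("console.log");                 // illustrative file name
          teebuf tee(std::cout.rdbuf(), log.rdbuf());
          std::streambuf* previous = std::cout.rdbuf(&tee); // redirect cout

          std::cout << "Goes to the console and to console.log" << std::endl;

          std::cout.rdbuf(previous);                        // restore before 'log' goes away
          return 0;
      }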

