Search Results

Search found 2644 results on 106 pages for 'diamond operator'.

  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could leave the target or source incomplete. This question applies to both Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory, does it copy the entire directory and then delete it afterwards, or does it copy then delete each file individually? I always realise, after typing something like mv verybigdir dest, that I really should have typed cp -R verybigdir dest && rm -r verybigdir (where the && operator only runs the next command if the first succeeded) -- or is this pointless? What exactly happens when I press Ctrl+C halfway through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (most recently when using svn) and ended up with two directories with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
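
    For readers who want the distinction concrete: below is a minimal C++17 sketch of the copy-then-delete idiom using std::filesystem (the safe_move helper name is hypothetical, not a standard facility). The source is removed only after the recursive copy finishes, so an interrupted run leaves a complete source plus a partial copy, never two half-directories.

        // Hedged sketch: copy first, delete only on success (C++17).
        #include <filesystem>
        #include <iostream>

        namespace fs = std::filesystem;

        bool safe_move(const fs::path& src, const fs::path& dst) {
            try {
                fs::copy(src, dst, fs::copy_options::recursive);  // throws on error
            } catch (const fs::filesystem_error& e) {
                // A cancelled or failed copy leaves the source untouched.
                std::cerr << "copy failed: " << e.what() << '\n';
                return false;
            }
            fs::remove_all(src);  // safe: the copy is known to be complete
            return true;
        }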

    Read the article

  • Can Haproxy deny a request by IP if its stick-table is full?

    - by bantic
    In my haproxy config I'm setting a stick-table of size 5 that stores every incoming IP address (for 1 minute), and it is set as nopurge so new entries won't get stored once the table is full. What I'd like to have happen is that those new requests would get denied, but that isn't happening. The stick-table line is:

        stick-table type ip size 5 expire 1m nopurge store gpc0

    And the full config is:

        global
            maxconn 30000
            ulimit-n 65536
            log 127.0.0.1 local0
            log 127.0.0.1 local1 debug
            stats socket /var/run/haproxy.stat mode 600 level operator

        defaults
            mode http
            timeout connect 5000ms
            timeout client 50000ms
            timeout server 50000ms

        backend fragile_backend
            tcp-request content track-sc2 src
            stick-table type ip size 5 expire 1m nopurge store gpc0
            server fragile_backend1 A.B.C.D:80

        frontend http_proxy
            bind *:80
            mode http
            option forwardfor
            default_backend fragile_backend

    I have confirmed (connecting to haproxy's stats with socat readline /var/run/haproxy.stat) that the stick-table fills up with 5 IP addresses, but then every request after that from a new IP just goes straight through -- it isn't added to the stick-table, nothing is removed from the stick-table, and the request is not denied. What I'd like to do is deny the request if the stick-table is full. Is this possible? I'm using haproxy 1.5.

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TL;DR: Google tends to ignore special characters in my search strings. Is there anything I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location? I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out more. I attempted various search terms, all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically, one which I did try to search) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is I did search that, and the results I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me -- Google often ignores special characters in my search strings, and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google? NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results from certain localized, text-based assumptions. Regardless, I would like Google to search for exactly what I give it. Is there anything I can do to improve my results?

    Read the article

  • Program not working as expected

    - by wilson88
    Can anyone help spot why my program is not returning the expected output? This is related to my previous question. I'm passing a vector by reference, and I want to see what's in the container before I copy the elements to another location. If you uncomment the loadRange calls, you will see the bids being generated by the trader.

        #include <iostream>
        #include <vector>
        #include <string>
        #include <algorithm>
        #include <cstdlib>
        #include <iomanip>

        using namespace std;

        const int NUMSELLER = 1;
        const int NUMBUYER = 1;
        const int NUMBIDS = 20;
        const int MINQUANTITY = 1;
        const int MAXQUANTITY = 30;
        const int MINPRICE = 100;
        const int MAXPRICE = 150;
        int s = 0;
        int trdId;

        // Bid, simple container for values
        struct Bid {
            int bidId, trdId, qty, price;
            char type;
            // for sort and find.
            bool operator<(const Bid &other) const { return price < other.price; }
            bool operator==(int bidId) const { return this->bidId == bidId; }
        };

        // alias to the list, make type consistent
        typedef vector<Bid> BidList;

        // this class generates bids!
        class Trader {
        private:
            int nextBidId;
        public:
            Trader();
            Bid getNextBid();
            Bid getNextBid(char type);
            // generate a number of bids
            void loadRange(BidList &, int size);
            void loadRange(BidList &, char type, int size);
            void setVector();
        };

        Trader::Trader() : nextBidId(1) {}

        #define RAND_RANGE(min, max) ((rand() % (max-min+1)) + min)

        Bid Trader::getNextBid() {
            char type = RAND_RANGE('A','B');
            return getNextBid(type);
        }

        Bid Trader::getNextBid(char type) {
            for (int i = 0; i < NUMSELLER+NUMBUYER; i++) {
                // int trdId = RAND_RANGE(1,9);
                if (s < 10) { trdId = 0; type = 'A'; }
                else        { trdId = 1; type = 'B'; }
                s++;
                int qty = RAND_RANGE(MINQUANTITY, MAXQUANTITY);
                int price = RAND_RANGE(MINPRICE, MAXPRICE);
                Bid bid = {nextBidId++, trdId, qty, price, type};
                return bid;
            }
        }

        //void Trader::loadRange(BidList &list, int size) {
        //    for (int i=0; i<size; i++) { list.push_back(getNextBid()); }
        //}
        //
        //void Trader::loadRange(BidList &list, char type, int size) {
        //    for (int i=0; i<size; i++) { list.push_back(getNextBid(type)); }
        //}

        //---------------------------AUCTIONEER-------------------------------------------
        class Auctioneer {
            vector<Auctioneer> List;
            Trader trader;
            vector<Bid> list;
        public:
            Auctioneer(){};
            void accept_bids(const BidList& bid);
        };

        typedef vector<Auctioneer*> bidlist;

        void Auctioneer::accept_bids(const BidList& bid) {
            BidList list;
            //copy (BidList.begin(),BidList.end(),list);
        }

        //all the happy display commands
        void show(const Bid &bid) {
            cout << "\tBid\t(" << setw(3) << bid.bidId << "\t " << setw(3) << bid.trdId
                 << "\t " << setw(3) << bid.type << "\t " << setw(3) << bid.qty
                 << "\t " << setw(3) << bid.price << ")\t\n ";
        }

        void show(const BidList &list) {
            cout << "\t\tBidID | TradID | Type | Qty | Price \n\n";
            for (BidList::const_iterator itr = list.begin(); itr != list.end(); ++itr) {
                //cout << "\t\t";
                show(*itr);
                cout << endl;
            }
            cout << endl;
        }

        //search now checks for failure
        void show(const char *msg, const BidList &list) {
            cout << msg << endl;
            show(list);
        }

        void searchTest(BidList &list, int bidId) {
            cout << "Searching for Bid " << bidId << endl;
            BidList::const_iterator itr = find(list.begin(), list.end(), bidId);
            if (itr == list.end()) {
                cout << "Bid not found.";
            } else {
                cout << "Bid has been found. Its : ";
                show(*itr);
            }
            cout << endl;
        }

        //comparator function for price: returns true when x belongs before y
        bool compareBidList(Bid one, Bid two) {
            if (one.type == 'A' && two.type == 'B')
                return (one.price < two.price);
            return false;
        }

        void sort(BidList &bidlist) {
            sort(bidlist.begin(), bidlist.end(), compareBidList);
        }

        int main(int argc, char **argv) {
            Trader trader;
            BidList bidlist;
            Auctioneer auctioneer;
            //bidlist list;
            auctioneer.accept_bids(bidlist);
            //trader.loadRange(bidlist, NUMBIDS);
            show("Bids before sort:", bidlist);
            sort(bidlist);
            show("Bids after sort:", bidlist);
            system("pause");
            return 0;
        }
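
    The commented-out copy line in accept_bids above is close, but std::copy needs iterators from an object (not the BidList type name) and a destination with room to grow. A small sketch of one way that method could look, reusing the Bid/BidList/Auctioneer definitions above (an illustration, not the assignment's required solution):

        // Hedged sketch: copy the incoming bids into the member vector.
        #include <algorithm>   // std::copy
        #include <iterator>    // std::back_inserter

        void Auctioneer::accept_bids(const BidList& bids) {
            list.clear();                  // 'list' is the vector<Bid> member
            list.reserve(bids.size());     // avoid reallocations during the copy
            std::copy(bids.begin(), bids.end(), std::back_inserter(list));
        }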

    Read the article

  • C++ STL priority queue insert bad_alloc exception

    - by bsg
    Hi, I am working on a query processor that reads in long lists of document IDs from memory and looks for matching IDs. When it finds one, it creates a DOC struct containing the docid (an int) and the document's rank (a double) and pushes it onto a priority queue. My problem is that when the word(s) searched for have a long list, I get the following exception when I try to push the DOC onto the queue: Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012ee88. When the word has a short list, it works fine. I tried pushing DOCs onto the queue in several places in my code, and they all work until a certain line; after that, I get the above error. I am completely at a loss as to what is wrong, because the longest list read in is less than 1 MB and I free all memory that I allocate. Why should there suddenly be a bad_alloc exception when I try to push a DOC onto a queue that has the capacity to hold it (I used a vector with enough space reserved as the underlying data structure for the priority queue)? I know that questions like this are almost impossible to answer without seeing all the code, but it's too long to post here. I'm posting as much as I can and am anxiously hoping that someone can give me an answer, because I am at my wits' end. The NextGEQ function is too long to include, but it reads a list of compressed blocks of docids block by block. That is, if it sees that the lastdocid in the block (in a separate list) is larger than the docid passed in, it decompresses the block and searches until it finds the right one. If it sees that the block was already decompressed, it just searches. Below, when I call the function the first time, it decompresses a block and finds the docid; the push onto the queue after that works. The second time, it doesn't even need to decompress -- that is, no new memory is allocated -- but after that point, pushing onto the queue gives a bad_alloc error.
        struct DOC {
            long int docid;
            long double rank;
        public:
            DOC() { docid = 0; rank = 0.0; }
            DOC(int num, double ranking) { docid = num; rank = ranking; }
            bool operator>( const DOC & d ) const { return rank > d.rank; }
            bool operator<( const DOC & d ) const { return rank < d.rank; }
        };

        struct listnode {
            int* metapointer;
            int* blockpointer;
            int docposition;
            int frequency;
            int numberdocs;
            int* iquery;
            listnode* nextnode;
        };

        void QUERYMANAGER::SubmitQuery(char *query) {
            vector<DOC> docvec;
            docvec.reserve(20);
            DOC doct;
            //create a priority queue to use as a min-heap to store the documents and rankings;
            //although the priority queue uses the heap as its underlying data structure,
            //I found it easier to use the STL priority queue implementation
            priority_queue<DOC, vector<DOC>,std::greater<DOC>> q(docvec.begin(), docvec.end());
            q.push(doct);
            //do some processing here; startlist is a pointer to a listnode struct that starts the
            //linked list
            cout << "Opening lists:" << endl;
            //point the linked list start pointer to the node returned by the OpenList method
            startlist = &OpenList(value);
            listnode* minpointer;
            q.push(doct);
            //more processing here;
            else {
                //start by finding the first docid in the shortest list
                int i = 0;
                q.push(doct);
                num = NextGEQ(0, *startlist);
                q.push(doct);
                while(num != -1)
                    cout << "finding nextGEQ from shortest list" << endl;
                q.push(doct);
                //this is where the problem starts - every previous q.push(doct) works; the one after
                //NextGEQ(num + 1, *startlist) gives the bad_alloc error
                num = NextGEQ(num + 1, *startlist);
                q.push(doct);
                //if you didn't break out of the loop; i.e., all lists contain a matching docid,
                //calculate the document's rank; if it's one of the top 20, create a struct
                //containing the docid and the rank and add it to the priority queue
                if(!loop) {
                    cout << "found match" << endl;
                    if(num < 0) {
                        cout << "reached end of list" << endl;
                        //reached the end of the shortest list; close the list
                        CloseList(startlist);
                        break;
                    }
                    rank = calculateRanking(table, num);
                    try {
                        //if the heap is not full, create a DOC struct with the docid and
                        //rank and add it to the heap
                        if(q.size() < 20) {
                            doc.docid = num;
                            doc.rank = rank;
                            q.push(doct);
                            q.push(doc);
                        }
                    } catch (exception& e) {
                        cout << e.what() << endl;
                    }
                }
            }

    Thank you very much, bsg.
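
    For readers, the "keep the top 20" part of this pattern can be isolated from the query processor. Here is a self-contained sketch of maintaining the k highest-ranked documents with a min-heap priority_queue, the same structure the question describes (simplified names, not the asker's code):

        // Hedged sketch: top-k selection with a min-heap.
        #include <functional>
        #include <iostream>
        #include <queue>
        #include <vector>

        struct Doc {
            int id;
            double rank;
            bool operator>(const Doc& other) const { return rank > other.rank; }
        };

        int main() {
            const std::size_t k = 3;  // the question uses 20
            // Min-heap: the lowest-ranked of the kept documents sits on top.
            std::priority_queue<Doc, std::vector<Doc>, std::greater<Doc> > q;
            const Doc docs[] = { {1, 0.4}, {2, 0.9}, {3, 0.1}, {4, 0.7}, {5, 0.5} };
            for (std::size_t i = 0; i < sizeof docs / sizeof docs[0]; ++i) {
                if (q.size() < k) {
                    q.push(docs[i]);
                } else if (docs[i].rank > q.top().rank) {
                    q.pop();          // evict the weakest of the current top k
                    q.push(docs[i]);
                }
            }
            while (!q.empty()) {      // prints ids 5, 4, 2 (ascending rank)
                std::cout << q.top().id << ' ' << q.top().rank << '\n';
                q.pop();
            }
            return 0;
        }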

    Read the article

  • How to declare a vector or array of reducer objects in Cilk++?

    - by Jin
    Hi All, I hit a problem while using Cilk++, an extension to C++ for parallel computing. I found that I can't declare a vector of reducer objects:

        typedef cilk::reducer_opadd<int> T_reducer;
        vector<T_reducer> bitmiss_vec;
        for (int i = 0; i < 24; ++i) {
            T_reducer r;
            bitmiss_vec.push_back(r);
        }

    However, when I compile the code with Cilk++, it complains at the push_back() line:

        cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = cilk::reducer_opadd<int>]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:601: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/ext/new_allocator.h:107: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:252: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:256: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In static member function ‘static _BI2 std::__copy_backward<_BoolType, std::random_access_iterator_tag>::__copy_b(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool _BoolType = false]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:465: instantiated from ‘_BI2 std::__copy_backward_aux(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:474: instantiated from ‘static _BI2 std::__copy_backward_normal<<anonymous>, <anonymous> >::__copy_b_n(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool <anonymous> = false, bool <anonymous> = false]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:540: instantiated from ‘_BI2 std::copy_backward(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:253: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:433: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In function ‘void std::_Construct(_T1*, const _T2&) [with _T1 = cilk::reducer_opadd<int>, _T2 = cilk::reducer_opadd<int>]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:87: instantiated from ‘_ForwardIterator std::__uninitialized_copy_aux(_InputIterator, _InputIterator, _ForwardIterator, std::__false_type) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:114: instantiated from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:254: instantiated from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, std::allocator<_Tp>) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*, _Tp = cilk::reducer_opadd<int>]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:275: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_construct.h:81: error: within this context
        make: *** [geneAttack] Error 1
        jinchen@galactica:~/workspace/biometrics/genAttack$ make
        cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack
        geneAttack.cilk: In function ‘int cilk cilk_main(int, char**)’:
        geneAttack.cilk:670: error: expected primary-expression before ‘,’ token
        geneAttack.cilk:670: error: expected primary-expression before ‘}’ token
        geneAttack.cilk:674: error: ‘bitmiss_vec’ was not declared in this scope
        make: *** [geneAttack] Error 1

    The Cilk++ manual says it supports arrays/vectors of reducers, although there are performance issues to consider: "If you create a large number of reducers (for example, an array or vector of reducers) you must be aware that there is an overhead at steal and reduce that is proportional to the number of reducers in the program." Does anyone know what is going on? How should I declare and use a vector of reducers? Thank you.
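
    The log itself names the cause: cilk::reducer_opadd's copy constructor and copy assignment are private, and vector::push_back must copy its argument. One common workaround, sketched here under the assumption that manual cleanup is acceptable (a real program would wrap this in RAII), is to store pointers to heap-allocated reducers so that nothing is ever copied; this fragment reuses the same headers and typedef as the asker's file:

        // Hedged sketch: a vector of reducer pointers instead of reducers.
        typedef cilk::reducer_opadd<int> T_reducer;
        std::vector<T_reducer*> bitmiss_vec;
        for (int i = 0; i < 24; ++i)
            bitmiss_vec.push_back(new T_reducer());  // pointers copy fine
        // ... parallel strands accumulate via (*bitmiss_vec[j]) += 1; ...
        for (std::size_t i = 0; i < bitmiss_vec.size(); ++i)
            delete bitmiss_vec[i];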

    Read the article

  • Higher order function « filter » in C++

    - by Red Hyena
    Hi all. I wanted to write a higher order function filter in C++. The code I have come up with so far is as follows:

        #include <iostream>
        #include <string>
        #include <functional>
        #include <algorithm>
        #include <vector>
        #include <list>
        #include <iterator>

        using namespace std;

        bool isOdd(int const i) {
            return i % 2 != 0;
        }

        template <
            template <class, class> class Container,
            class Predicate,
            class Allocator,
            class A
        >
        Container<A, Allocator> filter(Container<A, Allocator> const & container,
                                       Predicate const & pred) {
            Container<A, Allocator> filtered(container);
            container.erase(remove_if(filtered.begin(), filtered.end(), pred), filtered.end());
            return filtered;
        }

        int main() {
            int const a[] = {23, 12, 78, 21, 97, 64};
            vector<int const> const v(a, a + 6);
            vector<int const> const filtered = filter(v, isOdd);
            copy(filtered.begin(), filtered.end(), ostream_iterator<int const>(cout, " "));
        }

    However, on compiling this code, I get the following error messages, which I am unable to understand and hence get rid of:

        /usr/include/c++/4.3/ext/new_allocator.h: In instantiation of ‘__gnu_cxx::new_allocator<const int>’:
        /usr/include/c++/4.3/bits/allocator.h:84: instantiated from ‘std::allocator<const int>’
        /usr/include/c++/4.3/bits/stl_vector.h:75: instantiated from ‘std::_Vector_base<const int, std::allocator<const int> >’
        /usr/include/c++/4.3/bits/stl_vector.h:176: instantiated from ‘std::vector<const int, std::allocator<const int> >’
        Filter.cpp:29: instantiated from here
        /usr/include/c++/4.3/ext/new_allocator.h:82: error: ‘const _Tp* __gnu_cxx::new_allocator<_Tp>::address(const _Tp&) const [with _Tp = const int]’ cannot be overloaded
        /usr/include/c++/4.3/ext/new_allocator.h:79: error: with ‘_Tp* __gnu_cxx::new_allocator<_Tp>::address(_Tp&) const [with _Tp = const int]’
        Filter.cpp: In function ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’:
        Filter.cpp:30: instantiated from here
        Filter.cpp:23: error: passing ‘const std::vector<const int, std::allocator<const int> >’ as ‘this’ argument of ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’ discards qualifiers
        /usr/include/c++/4.3/bits/stl_algo.h: In function ‘_FIter std::remove_if(_FIter, _FIter, _Predicate) [with _FIter = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _Predicate = bool (*)(int)]’:
        Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
        Filter.cpp:30: instantiated from here
        /usr/include/c++/4.3/bits/stl_algo.h:821: error: assignment of read-only location ‘__result.__gnu_cxx::__normal_iterator<_Iterator, _Container>::operator* [with _Iterator = const int*, _Container = std::vector<const int, std::allocator<const int> >]()’
        /usr/include/c++/4.3/ext/new_allocator.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::deallocate(_Tp*, size_t) [with _Tp = const int]’:
        /usr/include/c++/4.3/bits/stl_vector.h:150: instantiated from ‘void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(_Tp*, size_t) [with _Tp = const int, _Alloc = std::allocator<const int>]’
        /usr/include/c++/4.3/bits/stl_vector.h:136: instantiated from ‘std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = const int, _Alloc = std::allocator<const int>]’
        /usr/include/c++/4.3/bits/stl_vector.h:286: instantiated from ‘std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const _Alloc&) [with _InputIterator = const int*, _Tp = const int, _Alloc = std::allocator<const int>]’
        Filter.cpp:29: instantiated from here
        /usr/include/c++/4.3/ext/new_allocator.h:98: error: invalid conversion from ‘const void*’ to ‘void*’
        /usr/include/c++/4.3/ext/new_allocator.h:98: error: initializing argument 1 of ‘void operator delete(void*)’
        /usr/include/c++/4.3/bits/stl_algobase.h: In function ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false, _II = const int*, _OI = const int*]’:
        /usr/include/c++/4.3/bits/stl_algobase.h:435: instantiated from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false, _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
        /usr/include/c++/4.3/bits/stl_algobase.h:466: instantiated from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
        /usr/include/c++/4.3/bits/vector.tcc:136: instantiated from ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’
        Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
        Filter.cpp:30: instantiated from here
        /usr/include/c++/4.3/bits/stl_algobase.h:396: error: no matching function for call to ‘std::__copy_move<false, true, std::random_access_iterator_tag>::__copy_m(const int*&, const int*&, const int*&)’

    Please tell me what I am doing wrong here, and what the correct way is to achieve the kind of higher-order polymorphism I want. Thanks.
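
    Two of the errors point at fixable issues rather than compiler noise: std::vector<int const> is not a valid instantiation (the element type of a standard container must be non-const), and erase is called on the const parameter container instead of the local copy filtered. A sketch of the function with just those two things changed, keeping the original remove-matching semantics:

        // Hedged sketch: erase from the copy, with mutable element types.
        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <vector>

        bool isOdd(int i) { return i % 2 != 0; }

        template <typename Container, typename Predicate>
        Container filter(const Container& container, Predicate pred) {
            Container filtered(container);
            filtered.erase(std::remove_if(filtered.begin(), filtered.end(), pred),
                           filtered.end());
            return filtered;
        }

        int main() {
            const int a[] = {23, 12, 78, 21, 97, 64};
            const std::vector<int> v(a, a + 6);
            const std::vector<int> result = filter(v, isOdd);  // removes odd values
            std::copy(result.begin(), result.end(),
                      std::ostream_iterator<int>(std::cout, " "));  // 12 78 64
            return 0;
        }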

    Read the article

  • Is this too much code for a header only library?

    - by Billy ONeal
    It seems like I had to inline quite a bit of code here. I'm wondering if it's bad design practice to leave this entirely in a header file like this:

        #pragma once
        #include <string>
        #include <boost/noncopyable.hpp>
        #include <boost/make_shared.hpp>
        #include <boost/iterator/iterator_facade.hpp>
        #include <Windows.h>
        #include "../Exception.hpp"

        namespace WindowsAPI { namespace FileSystem {

        class FileData;
        struct AllResults;
        struct FilesOnly;
        template <typename Filter_T = AllResults> class DirectoryIterator;

        namespace detail {

        class DirectoryIteratorImpl : public boost::noncopyable {
            WIN32_FIND_DATAW currentData;
            HANDLE hFind;
            std::wstring root;
        public:
            inline DirectoryIteratorImpl();
            inline explicit DirectoryIteratorImpl(const std::wstring& pathSpec);
            inline void increment();
            inline bool equal(const DirectoryIteratorImpl& other) const;
            inline const std::wstring& GetPathRoot() const;
            inline const WIN32_FIND_DATAW& GetCurrentFindData() const;
            inline ~DirectoryIteratorImpl();
        };

        } // namespace detail

        class FileData //Serves as a proxy to the WIN32_FIND_DATA structure inside the iterator.
        {
            boost::shared_ptr<detail::DirectoryIteratorImpl> iteratorSource;
        public:
            FileData(const boost::shared_ptr<detail::DirectoryIteratorImpl>& parent) : iteratorSource(parent) {};
            DWORD GetAttributes() const { return iteratorSource->GetCurrentFindData().dwFileAttributes; };
            bool IsDirectory() const { return (GetAttributes() | FILE_ATTRIBUTE_DIRECTORY) != 0; };
            bool IsFile() const { return !IsDirectory(); };
            bool IsArchive() const { return (GetAttributes() | FILE_ATTRIBUTE_ARCHIVE) != 0; };
            bool IsReadOnly() const { return (GetAttributes() | FILE_ATTRIBUTE_READONLY) != 0; };
            unsigned __int64 GetSize() const {
                ULARGE_INTEGER intValue;
                intValue.LowPart = iteratorSource->GetCurrentFindData().nFileSizeLow;
                intValue.HighPart = iteratorSource->GetCurrentFindData().nFileSizeHigh;
                return intValue.QuadPart;
            };
            std::wstring GetFolderPath() const { return iteratorSource->GetPathRoot(); };
            std::wstring GetFileName() const { return iteratorSource->GetCurrentFindData().cFileName; };
            std::wstring GetFullFileName() const { return GetFolderPath() + GetFileName(); };
            std::wstring GetShortFileName() const { return iteratorSource->GetCurrentFindData().cAlternateFileName; };
            FILETIME GetCreationTime() const { return iteratorSource->GetCurrentFindData().ftCreationTime; };
            FILETIME GetLastAccessTime() const { return iteratorSource->GetCurrentFindData().ftLastAccessTime; };
            FILETIME GetLastWriteTime() const { return iteratorSource->GetCurrentFindData().ftLastWriteTime; };
        };

        struct AllResults : public std::unary_function<const FileData&, bool> {
            bool operator()(const FileData&) { return true; };
        };

        struct FilesOnly : public std::unary_function<const FileData&, bool> {
            bool operator()(const FileData& arg) { return arg.IsFile(); };
        };

        template <typename Filter_T>
        class DirectoryIterator : public boost::iterator_facade<DirectoryIterator<Filter_T>, const FileData, std::input_iterator_tag> {
            friend class boost::iterator_core_access;
            boost::shared_ptr<detail::DirectoryIteratorImpl> impl;
            FileData current;
            Filter_T filter;
            void increment() {
                do {
                    impl->increment();
                } while (!filter(current));
            };
            bool equal(const DirectoryIterator& other) const { return impl->equal(*other.impl); };
            const FileData& dereference() const { return current; };
        public:
            DirectoryIterator(Filter_T functor = Filter_T()) :
                impl(boost::make_shared<detail::DirectoryIteratorImpl>()),
                current(impl),
                filter(functor) { };
            explicit DirectoryIterator(const std::wstring& pathSpec, Filter_T functor = Filter_T()) :
                impl(boost::make_shared<detail::DirectoryIteratorImpl>(pathSpec)),
                current(impl),
                filter(functor) { };
        };

        namespace detail {

        DirectoryIteratorImpl::DirectoryIteratorImpl() : hFind(INVALID_HANDLE_VALUE) { }

        DirectoryIteratorImpl::DirectoryIteratorImpl(const std::wstring& pathSpec) {
            std::wstring::const_iterator lastSlash = std::find(pathSpec.rbegin(), pathSpec.rend(), L'\\').base();
            root.assign(pathSpec.begin(), lastSlash);
            hFind = FindFirstFileW(pathSpec.c_str(), &currentData);
            if (hFind == INVALID_HANDLE_VALUE)
                WindowsApiException::ThrowFromLastError();
            while (!wcscmp(currentData.cFileName, L".") || !wcscmp(currentData.cFileName, L"..")) {
                increment();
            }
        }

        void DirectoryIteratorImpl::increment() {
            BOOL success = FindNextFile(hFind, &currentData);
            if (success)
                return;
            DWORD error = GetLastError();
            if (error == ERROR_NO_MORE_FILES) {
                FindClose(hFind);
                hFind = INVALID_HANDLE_VALUE;
            } else {
                WindowsApiException::Throw(error);
            }
        }

        DirectoryIteratorImpl::~DirectoryIteratorImpl() {
            if (hFind != INVALID_HANDLE_VALUE)
                FindClose(hFind);
        }

        bool DirectoryIteratorImpl::equal(const DirectoryIteratorImpl& other) const {
            if (this == &other)
                return true;
            return hFind == other.hFind;
        }

        const std::wstring& DirectoryIteratorImpl::GetPathRoot() const { return root; }

        const WIN32_FIND_DATAW& DirectoryIteratorImpl::GetCurrentFindData() const { return currentData; }

        } // namespace detail

        }} // namespace WindowsAPI::FileSystem
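
    For context on how this header is meant to be consumed, here is a hypothetical usage sketch (assuming the header compiles as posted, with a default-constructed iterator serving as the end sentinel; note in passing that the attribute tests use | where & was presumably intended, so IsDirectory() always returns true as written):

        // Hedged sketch: enumerate a directory with the posted iterator.
        #include <iostream>
        #include "DirectoryIterator.hpp"  // hypothetical name for the header above

        int main() {
            using WindowsAPI::FileSystem::DirectoryIterator;
            DirectoryIterator<> it(L"C:\\temp\\*");  // hypothetical path spec
            DirectoryIterator<> end;                 // exhausted-iterator sentinel
            for (; it != end; ++it)
                std::wcout << it->GetFullFileName() << L"\n";
            return 0;
        }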

    Read the article

  • C++ simple logging class with UTF-8 output [code example]

    - by Andrew
    Hello everyone, I was working on one of my academic projects and for the first time I needed pure C++ without a GUI. After googling for a while, I did not find any simple, easy-to-use implementation for logging, so I created my own. This is a simple implementation with iostreams that logs messages to the screen and to a file simultaneously. I was thinking of using templates, but then I realized that I do not expect any changes and removed that. It is a modified std::wostream with two added modifiers:

    1. TimeStamp - prints a time-stamp
    2. LogMode(LogModes) - switches output: file only, screen only, file+screen

    boost::utf8_codecvt_facet is used for UTF-8 output.

        // ############################################################################
        // # Name:        MyLog.h                                                     #
        // # Purpose:     Logging Class Header                                        #
        // # Author:      Andrew Drach                                                #
        // # Modified by: <somebody>                                                  #
        // # Created:     03/21/10                                                    #
        // # SVN-ID:      $Id$                                                        #
        // # Copyright:   (c) 2010 Andrew Drach                                       #
        // # Licence:     <license>                                                   #
        // ############################################################################

        #ifndef INCLUDED_MYLOG_H
        #define INCLUDED_MYLOG_H

        // headers --------------------------------------------------------------------
        #include <string>
        #include <iostream>
        #include <fstream>
        #include <exception>
        #include <boost/program_options/detail/utf8_codecvt_facet.hpp>

        using namespace std;

        // definitions ----------------------------------------------------------------

        // ----------------------------------------------------------------------------
        // DblBuf class
        // Splits up output stream into two
        // Inspired by http://wordaligned.org/articles/cpp-streambufs
        // ----------------------------------------------------------------------------
        class DblBuf : public wstreambuf {
        private: // private member declarations
            DblBuf();
            wstreambuf *bf1;
            wstreambuf *bf2;

            virtual int_type overflow(int_type ch) {
                int_type eof = traits_type::eof();
                int_type not_eof = !eof;
                if ( traits_type::eq_int_type(ch, eof) )
                    return not_eof;
                else {
                    char_type ch1 = traits_type::to_char_type(ch);
                    int_type r1( bf1on ? bf1->sputc(ch1) : not_eof );
                    int_type r2( bf2on ? bf2->sputc(ch1) : not_eof );
                    return (traits_type::eq_int_type(r1, eof) || traits_type::eq_int_type(r2, eof)) ? eof : ch;
                }
            }

            virtual int sync() {
                int r1( bf1on ? bf1->pubsync() : NULL );
                int r2( bf2on ? bf2->pubsync() : NULL );
                return (r1 == 0 && r2 == 0) ? 0 : -1;
            }

        public: // public member declarations
            explicit DblBuf(wstreambuf *bf1, wstreambuf *bf2) : bf1(bf1), bf2(bf2) {
                if (bf1) bf1on = true; else bf1on = false;
                if (bf2) bf2on = true; else bf2on = false;
            }
            bool bf1on;
            bool bf2on;
        };

        // ----------------------------------------------------------------------------
        // logstream class
        // Wrapper for a standard wostream with access to modified buffer
        // ----------------------------------------------------------------------------
        class logstream : public wostream {
        private: // private member declarations
            logstream();
        public: // public member declarations
            DblBuf *buf;
            explicit logstream(wstreambuf *StrBuf, bool isStd = false) :
                wostream(StrBuf, isStd), buf((DblBuf*)StrBuf) {}
        };

        // ----------------------------------------------------------------------------
        // Logging mode Class
        // ----------------------------------------------------------------------------
        enum LogModes { LogToFile = 1, LogToScreen, LogToBoth };

        class LogMode {
        private: // private member declarations
            LogMode();
            short mode;
        public: // public member declarations
            LogMode(short mode1) : mode(mode1) {}
            logstream& operator()(logstream &stream1) {
                switch (mode) {
                case LogToFile:
                    stream1.buf->bf1on = true;
                    stream1.buf->bf2on = false;
                    break;
                case LogToScreen:
                    stream1.buf->bf1on = false;
                    stream1.buf->bf2on = true;
                    break;
                case LogToBoth:
                    stream1.buf->bf1on = true;
                    stream1.buf->bf2on = true;
                }
                return stream1;
            }
        };

        logstream& operator<<(logstream &out, LogMode mode) { return mode(out); }

        wostream& TimeStamp1(wostream &out1) {
            time_t time1;
            struct tm timeinfo;
            wchar_t timestr[512];
            // Get current time and convert it to a string
            time(&time1);
            localtime_s(&timeinfo, &time1);
            wcsftime(timestr, 512, L"[%Y-%b-%d %H:%M:%S %p] ", &timeinfo);
            return out1 << timestr;
        }

        // ----------------------------------------------------------------------------
        // MyLog class
        // Logs events to both file and screen
        // ----------------------------------------------------------------------------
        class MyLog {
        private: // private member declarations
            MyLog();
            auto_ptr<DblBuf> buf;
            string mErrorMsg1;
            string mErrorMsg2;
            string mErrorMsg3;
            string mErrorMsg4;
        public: // public member declarations
            explicit MyLog(string FileName1, wostream *ScrLog1, locale utf8locale1);
            ~MyLog();
            void NewEvent(wstring str1, bool TimeStamp = true);
            string FileName;
            wostream *ScrLog;
            wofstream File;
            auto_ptr<logstream> Log;
            locale utf8locale;
        };

        // ----------------------------------------------------------------------------
        // MyLog constructor
        // ----------------------------------------------------------------------------
        MyLog::MyLog(string FileName1, wostream *ScrLog1, locale utf8locale1) :
            // ctors
            mErrorMsg1("Failed to open file for application logging! []"),
            mErrorMsg2("Failed to write BOM! []"),
            mErrorMsg3("Failed to write to file! []"),
            mErrorMsg4("Failed to close file! []"),
            FileName(FileName1), ScrLog(ScrLog1), utf8locale(utf8locale1),
            File(FileName1.c_str())
        {
            // Adjust error strings
            mErrorMsg1.insert(mErrorMsg1.length()-1, FileName1);
            mErrorMsg2.insert(mErrorMsg2.length()-1, FileName1);
            mErrorMsg3.insert(mErrorMsg3.length()-1, FileName1);
            mErrorMsg4.insert(mErrorMsg4.length()-1, FileName1);

            // check for file open errors
            if ( !File ) throw ofstream::failure(mErrorMsg1);

            // write UTF-8 BOM
            File << wchar_t(0xEF) << wchar_t(0xBB) << wchar_t(0xBF);

            // switch locale to UTF-8
            File.imbue(utf8locale);

            // check for write errors
            if ( File.bad() ) throw ofstream::failure(mErrorMsg2);

            buf.reset( new DblBuf(File.rdbuf(), ScrLog->rdbuf()) );
            Log.reset( new logstream(&*buf) );
        }

        // ----------------------------------------------------------------------------
        // MyLog destructor
        // ----------------------------------------------------------------------------
        MyLog::~MyLog() {
            *Log << TimeStamp1 << "Log finished." << endl;

            // clean up objects
            Log.reset();
            buf.reset();
            File.close();

            // check for file close errors
            if ( File.bad() ) throw ofstream::failure(mErrorMsg4);
        }
        //---------------------------------------------------------------------------
        #endif // INCLUDED_MYLOG_H

    Tested on MSVC 2008, boost 1.42. I do not know if this is the right place to share it. Hope it helps somebody. Feel free to make it better.

    Read the article

  • Need help with copy constructor for very basic implementation of singly linked lists

    - by Jesus
    Last week, we created a program that manages sets of strings, using classes and vectors. I was able to complete this 100%. This week, we have to replace the vector we used to store strings in our class with simple singly linked lists. The program basically allows users to declare sets of strings that are empty, and sets with only one element. In the main file, there is a vector whose elements are a struct that contains setName and strSet (a class). HERE IS MY PROBLEM: It deals with the copy constructor of the class. When I remove/comment out the copy constructor, I can declare as many empty or single sets as I want, and output their values without a problem. But I know I will obviously need the copy constructor when I implement the rest of the program. When I leave the copy constructor in, I can declare one set, either single or empty, and output its value. But if I declare a 2nd set, and I try to output either of the first two sets, I get a Segmentation Fault. Moreover, if I try to declare more than 2 sets, I get a Segmentation Fault. Any help would be appreciated! Here is my code for a very basic implementation of everything.

    Here is setcalc.cpp (the main file):

        #include <iostream>
        #include <cctype>
        #include <cstring>
        #include <string>
        #include "help.h"
        #include "strset2.h"

        using namespace std;

        // Declares a structure to hold all the sets defined
        struct setsOfStr {
            string nameOfSet;
            strSet stringSet;
        };

        // Checks if the set name inputted is unique
        bool isSetNameUnique( vector<setsOfStr> strSetArr, string setName) {
            for(unsigned int i = 0; i < strSetArr.size(); i++) {
                if( strSetArr[i].nameOfSet == setName ) {
                    return false;
                }
            }
            return true;
        }

        int main(int argc, char *argv[]) {
            char commandChoice;
            // Declares a vector with our declared structure as the type
            vector<setsOfStr> strSetVec;
            string setName;
            string singleEle;

            // Sets a loop that will constantly ask for a command until 'q' is typed
            while (1) {
                // declaring a set to be empty
                if(commandChoice == 'd') {
                    cin >> setName;
                    // Check that the set name inputted is unique
                    if (isSetNameUnique(strSetVec, setName) == true) {
                        strSet emptyStrSet;
                        setsOfStr set1;
                        set1.nameOfSet = setName;
                        set1.stringSet = emptyStrSet;
                        strSetVec.push_back(set1);
                    }
                    else {
                        cerr << "ERROR: Re-declaration of set '" << setName << "'\n";
                    }
                }
                // declaring a set to be a singleton
                else if(commandChoice == 's') {
                    cin >> setName;
                    cin >> singleEle;
                    // Check that the set name inputted is unique
                    if (isSetNameUnique(strSetVec, setName) == true) {
                        strSet singleStrSet(singleEle);
                        setsOfStr set2;
                        set2.nameOfSet = setName;
                        set2.stringSet = singleStrSet;
                        strSetVec.push_back(set2);
                    }
                    else {
                        cerr << "ERROR: Re-declaration of set '" << setName << "'\n";
                    }
                }
                // using the output function
                else if(commandChoice == 'o') {
                    cin >> setName;
                    if(isSetNameUnique(strSetVec, setName) == false) {
                        // loop through until the set name is matched and call output on its strSet
                        for(unsigned int k = 0; k < strSetVec.size(); k++) {
                            if( strSetVec[k].nameOfSet == setName ) {
                                (strSetVec[k].stringSet).output();
                            }
                        }
                    }
                    else {
                        cerr << "ERROR: No such set '" << setName << "'\n";
                    }
                }
                // quitting
                else if(commandChoice == 'q') {
                    break;
                }
                else {
                    cerr << "ERROR: Ignoring bad command: '" << commandChoice << "'\n";
                }
            }
            return 0;
        }

    Here is strSet2.h:

        #ifndef _STRSET_
        #define _STRSET_

        #include <iostream>
        #include <vector>
        #include <string>

        struct node {
            std::string s1;
            node * next;
        };

        class strSet {
        private:
            node * first;
        public:
            strSet();                                   // Create empty set
            strSet(std::string s);                      // Create singleton set
            strSet(const strSet &copy);                 // Copy constructor
            // will implement destructor later
            void output() const;
            strSet& operator = (const strSet& rtSide);  // Assignment
        };  // End of strSet class

        #endif // _STRSET_

    And here is strSet2.cpp (the implementation of the class):

        #include <iostream>
        #include <vector>
        #include <string>
        #include "strset2.h"

        using namespace std;

        strSet::strSet() {
            first = NULL;
        }

        strSet::strSet(string s) {
            node *temp;
            temp = new node;
            temp->s1 = s;
            temp->next = NULL;
            first = temp;
        }

        strSet::strSet(const strSet& copy) {
            cout << "copy-cst\n";
            node *n = copy.first;
            node *prev = NULL;
            while (n) {
                node *newNode = new node;
                newNode->s1 = n->s1;
                newNode->next = NULL;
                if (prev) {
                    prev->next = newNode;
                }
                else {
                    first = newNode;
                }
                prev = newNode;
                n = n->next;
            }
        }

        void strSet::output() const {
            if(first == NULL) {
                cout << "Empty set\n";
            }
            else {
                node *temp;
                temp = first;
                while(1) {
                    cout << temp->s1 << endl;
                    if(temp->next == NULL) break;
                    temp = temp->next;
                }
            }
        }

        strSet& strSet::operator = (const strSet& rtSide) {
            first = rtSide.first;
            return *this;
        }
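
    One detail to check in the copy constructor above: when the source set is empty, the while loop body never runs and first is never initialized, so the copy holds a garbage pointer and a later output() can fault -- consistent with the "works until the second set" symptom, since push_back into the vector copy-constructs the existing elements when it reallocates. The assignment operator also shares nodes instead of copying them. A sketch of both with that addressed (same names as above; offered as a hint, not the assignment's official solution):

        // Hedged sketch: initialize 'first', and deep-copy in operator=.
        strSet::strSet(const strSet& copy) : first(NULL) {  // empty source => empty copy
            node *n = copy.first;
            node *prev = NULL;
            while (n) {
                node *newNode = new node;
                newNode->s1 = n->s1;
                newNode->next = NULL;
                if (prev) prev->next = newNode;
                else      first = newNode;
                prev = newNode;
                n = n->next;
            }
        }

        strSet& strSet::operator=(const strSet& rtSide) {
            if (this != &rtSide) {
                strSet temp(rtSide);   // deep copy via the copy constructor
                node *old = first;     // swap lists; the old nodes should be
                first = temp.first;    // freed by the destructor once written
                temp.first = old;
            }
            return *this;
        }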

    Read the article

  • [C++] A minimalistic smart array (container) class template

    - by legends2k
    I've written an (array) container class template (let's call it smart array) for use on the BREW platform (which doesn't allow many C++ constructs like the STD library, exceptions, etc.; it has very minimal C++ runtime support). While writing this, my friend said that something like this already exists in Boost, called MultiArray; I tried it, but the ARM compiler (RVCT) cries with 100s of errors. I've not seen Boost.MultiArray's source, and I've only just started learning templates; template meta programming interests me a lot, although I am not sure if this is strictly an instance of it. So I want all my fellow C++ aficionados to review it -- point out flaws, potential bugs, suggestions, optimisations, etc.; something like "you've not written your own Big Three, which might lead to...". Possibly any criticism that'll help me improve this class and thereby my C++ skills.

    smart_array.h

        #include <vector>

        using std::vector;

        template <typename T, size_t N>
        class smart_array {
            vector< smart_array<T, N - 1> > vec;
        public:
            explicit smart_array(vector<size_t> &dimensions) {
                assert(N == dimensions.size());
                vector<size_t>::iterator it = ++dimensions.begin();
                vector<size_t> dimensions_remaining(it, dimensions.end());
                smart_array<T, N - 1> temp_smart_array(dimensions_remaining);
                vec.assign(dimensions[0], temp_smart_array);
            }

            explicit smart_array(size_t dimension_1 = 1, ...) {
                static_assert(N > 0, "Error: smart_array expects 1 or more dimension(s)");
                assert(dimension_1 > 1);
                va_list dim_list;
                vector<size_t> dimensions_remaining(N - 1);
                va_start(dim_list, dimension_1);
                for(size_t i = 0; i < N - 1; ++i) {
                    size_t dimension_n = va_arg(dim_list, size_t);
                    assert(dimension_n > 0);
                    dimensions_remaining[i] = dimension_n;
                }
                va_end(dim_list);
                smart_array<T, N - 1> temp_smart_array(dimensions_remaining);
                vec.assign(dimension_1, temp_smart_array);
            }

            smart_array<T, N - 1>& operator[](size_t index) {
                assert(index < vec.size() && index >= 0);
                return vec[index];
            }

            size_t length() const { return vec.size(); }
        };

        template<typename T>
        class smart_array<T, 1> {
            vector<T> vec;
        public:
            explicit smart_array(vector<size_t> &dimension) : vec(dimension[0]) {
                assert(dimension[0] > 0);
            }

            explicit smart_array(size_t dimension_1 = 1) : vec(dimension_1) {
                assert(dimension_1 > 0);
            }

            T& operator[](size_t index) {
                assert(index < vec.size() && index >= 0);
                return vec[index];
            }

            size_t length() { return vec.size(); }
        };

    Sample Usage:

        #include <iostream>
        using std::cout;
        using std::endl;

        int main() {
            // testing 1 dimension
            smart_array<int, 1> x(3);
            x[0] = 0, x[1] = 1, x[2] = 2;
            cout << "x.length(): " << x.length() << endl;

            // testing 2 dimensions
            smart_array<float, 2> y(2, 3);
            y[0][0] = y[0][1] = y[0][2] = 0;
            y[1][0] = y[1][1] = y[1][2] = 1;
            cout << "y.length(): " << y.length() << endl;
            cout << "y[0].length(): " << y[0].length() << endl;

            // testing 3 dimensions
            smart_array<char, 3> z(2, 4, 5);
            cout << "z.length(): " << z.length() << endl;
            cout << "z[0].length(): " << z[0].length() << endl;
            cout << "z[0][0].length(): " << z[0][0].length() << endl;
            z[0][0][4] = 'c';
            cout << z[0][0][4] << endl;

            // testing 4 dimensions
            smart_array<bool, 4> r(2, 3, 4, 5);
            cout << "z.length(): " << r.length() << endl;
            cout << "z[0].length(): " << r[0].length() << endl;
            cout << "z[0][0].length(): " << r[0][0].length() << endl;
            cout << "z[0][0][0].length(): " << r[0][0][0].length() << endl;

            // testing copy constructor
            smart_array<float, 2> copy_y(y);
            cout << "copy_y.length(): " << copy_y.length() << endl;
            cout << "copy_x[0].length(): " << copy_y[0].length() << endl;
            cout << copy_y[0][0] << "\t" << copy_y[1][0] << "\t"
                 << copy_y[0][1] << "\t" << copy_y[1][1] << "\t"
                 << copy_y[0][2] << "\t" << copy_y[1][2] << endl;
            return 0;
        }

    Read the article

  • A minimalistic smart array (container) class template

    - by legends2k
I've written an array container class template (let's call it smart_array) for use on the BREW platform (which doesn't allow many C++ constructs like the standard library, exceptions, etc.; it has very minimal C++ runtime support). While writing it, a friend mentioned that something like this already exists in Boost, called MultiArray; I tried it, but the ARM compiler (RVCT) fails with hundreds of errors. I've not seen Boost.MultiArray's source, and I've only started learning templates lately; template metaprogramming interests me a lot, although I'm not sure whether this strictly qualifies as such. So I want all my fellow C++ aficionados to review it: point out flaws, potential bugs, suggestions, optimizations, etc.; something like "you've not written your own Big Three, which might lead to...". Any criticism that will help me improve this class and thereby my C++ skills is welcome. Edit: I've used std::vector since it's easily understood; later it will be replaced by a custom-written vector class template made to work on the BREW platform. C++0x-related syntax like static_assert will also be removed in the final code.

smart_array.h

#include <vector>
#include <cassert>
#include <cstdarg>

using std::vector;

template <typename T, size_t N>
class smart_array
{
    vector< smart_array<T, N - 1> > vec;

public:
    explicit smart_array(vector<size_t> &dimensions)
    {
        assert(N == dimensions.size());
        // peel off the first dimension and recurse on the rest
        vector<size_t>::iterator it = dimensions.begin() + 1;
        vector<size_t> dimensions_remaining(it, dimensions.end());
        smart_array<T, N - 1> temp_smart_array(dimensions_remaining);
        vec.assign(dimensions[0], temp_smart_array);
    }

    explicit smart_array(size_t dimension_1 = 1, ...)
    {
        static_assert(N > 0, "Error: smart_array expects 1 or more dimension(s)");
        assert(dimension_1 > 0);

        va_list dim_list;
        vector<size_t> dimensions_remaining(N - 1);
        va_start(dim_list, dimension_1);
        for (size_t i = 0; i < N - 1; ++i)
        {
            size_t dimension_n = va_arg(dim_list, size_t);
            assert(dimension_n > 0);
            dimensions_remaining[i] = dimension_n;
        }
        va_end(dim_list);

        smart_array<T, N - 1> temp_smart_array(dimensions_remaining);
        vec.assign(dimension_1, temp_smart_array);
    }

    smart_array<T, N - 1>& operator[](size_t index)
    {
        assert(index < vec.size()); // index is unsigned, so only the upper bound needs checking
        return vec[index];
    }

    size_t length() const
    {
        return vec.size();
    }
};

template <typename T>
class smart_array<T, 1>
{
    vector<T> vec;

public:
    explicit smart_array(vector<size_t> &dimension) : vec(dimension[0])
    {
        assert(dimension[0] > 0);
    }

    explicit smart_array(size_t dimension_1 = 1) : vec(dimension_1)
    {
        assert(dimension_1 > 0);
    }

    T& operator[](size_t index)
    {
        assert(index < vec.size());
        return vec[index];
    }

    size_t length() const
    {
        return vec.size();
    }
};

Sample Usage:

#include "smart_array.h"
#include <iostream>

using std::cout;
using std::endl;

int main()
{
    // testing 1 dimension
    smart_array<int, 1> x(3);
    x[0] = 0, x[1] = 1, x[2] = 2;
    cout << "x.length(): " << x.length() << endl;

    // testing 2 dimensions
    smart_array<float, 2> y(2, 3);
    y[0][0] = y[0][1] = y[0][2] = 0;
    y[1][0] = y[1][1] = y[1][2] = 1;
    cout << "y.length(): " << y.length() << endl;
    cout << "y[0].length(): " << y[0].length() << endl;

    // testing 3 dimensions
    smart_array<char, 3> z(2, 4, 5);
    cout << "z.length(): " << z.length() << endl;
    cout << "z[0].length(): " << z[0].length() << endl;
    cout << "z[0][0].length(): " << z[0][0].length() << endl;
    z[0][0][4] = 'c';
    cout << z[0][0][4] << endl;

    // testing 4 dimensions
    smart_array<bool, 4> r(2, 3, 4, 5);
    cout << "r.length(): " << r.length() << endl;
    cout << "r[0].length(): " << r[0].length() << endl;
    cout << "r[0][0].length(): " << r[0][0].length() << endl;
    cout << "r[0][0][0].length(): " << r[0][0][0].length() << endl;

    // testing copy constructor
    smart_array<float, 2> copy_y(y);
    cout << "copy_y.length(): " << copy_y.length() << endl;
    cout << "copy_y[0].length(): " << copy_y[0].length() << endl;
    cout << copy_y[0][0] << "\t" << copy_y[1][0] << "\t"
         << copy_y[0][1] << "\t" << copy_y[1][1] << "\t"
         << copy_y[0][2] << "\t" << copy_y[1][2] << endl;

    return 0;
}

    Read the article

  • New features of C# 4.0

This article covers the new features of C# 4.0. It is divided into the sections below: Introduction; Dynamic Lookup; Named and Optional Arguments; Features for COM interop; Variance; Relationship with Visual Basic; Resources. Other interesting readings: 22 New Features of Visual Studio 2008 for .NET Professionals, 50 New Features of SQL Server 2008, IIS 7.0 New features. Introduction It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios. C# 4.0 The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include a. objects from dynamic programming languages, such as Python or Ruby b. COM objects accessed through IDispatch c. ordinary .NET types accessed through reflection d. objects with changing structure, such as HTML DOM objects While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups: Dynamic lookup Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime. Named and optional parameters Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position. 
COM specific interop features Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than it is today. On top of that, however, we are adding a number of other small features that further improve the interop experience. Variance It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type-safe “co- and contravariance” and common BCL types are updated to take advantage of that. Dynamic Lookup Dynamic lookup allows you to take a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call. The dynamic type C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion Dynamic operations Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and setting fields and properties d[“one”] = d[“two”]; // getting and setting through indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic. Runtime lookup At runtime a dynamic operation is dispatched according to the nature of its target object d: COM objects If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties. 
Dynamic objects If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax. Plain objects Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler. Example Assume the following code: dynamic d1 = new Foo(); dynamic d2 = new Bar(); string s; d1.M(s, d2, 3, null); Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string 2. a dynamic 3. a literal int 3 4. a literal object null” At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder steps in to finish the overload resolution job based on runtime type information, proceeding as follows: 1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2. 2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics. 3. If the method is found it is invoked; otherwise a runtime exception is thrown. Overload resolution with dynamic arguments Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic: Foo foo = new Foo(); dynamic d = new Bar(); var result = foo.M(d); The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic. The Dynamic Language Runtime An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. 
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain. Open issues There are a few limitations and things that might work differently than you would expect. · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this. · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload. · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to. One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects: dynamic collection = …; var result = collection.Select(e => e + 5); If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0. Named and Optional Arguments Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in .NET APIs, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations. Optional parameters A parameter is declared optional simply by providing a default value for it: public void M(int x, int y = 5, int z = 7); Here y and z are optional parameters and can be omitted in calls: M(1, 2, 3); // ordinary call of M M(1, 2); // omitting z – equivalent to M(1, 2, 7) M(1); // omitting both y and z – equivalent to M(1, 5, 7) Named and optional arguments C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write: M(1, z: 3); // passing z by name or M(x: 1, z: 3); // passing both x and z by name or even M(z: 3, x: 1); // reversing the order of arguments All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors. 
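To make the calling rules above concrete, here is a small self-contained sketch that compiles and runs as-is; the method M and its defaults are taken from the examples above, while the Demo class and the printed output are illustrative only:

using System;

class Demo
{
    // y and z are optional: each has a default value.
    static void M(int x, int y = 5, int z = 7)
    {
        Console.WriteLine("x={0}, y={1}, z={2}", x, y, z);
    }

    static void Main()
    {
        M(1, 2, 3);     // ordinary positional call
        M(1, 2);        // omitting z: prints x=1, y=2, z=7
        M(1);           // omitting y and z: prints x=1, y=5, z=7
        M(1, z: 3);     // named argument skips y: prints x=1, y=5, z=3
        M(z: 3, x: 1);  // order is free with names; arguments still evaluate left to right
    }
}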
Overload resolution Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred. M(string s, int i = 1); M(object o); M(int i, string s = “Hello”); M(int i); M(5); Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int). Features for COM interop Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0. Dynamic import Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say excel.Cells[1, 1].Value = "Hello"; instead of ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello"; and Excel.Range range = excel.Cells[1, 1]; instead of Excel.Range range = (Excel.Range)excel.Cells[1, 1]; Compiling without PIAs Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded. Omitting ref Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters. 
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference. Open issues A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0. Variance An aspect of generics that often comes across as surprising is that the following is illegal: IList<string> strings = new List<string>(); IList<object> objects = strings; The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write: objects[0] = 5; string s = strings[0]; Allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say: IEnumerable<object> objects = strings; There is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work. Covariance In .NET 4.0 the IEnumerable<T> interface will be declared in the following way: public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IEnumerator { bool MoveNext(); T Current { get; } } The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above: var result = strings.Union(objects); // succeeds with an IEnumerable<object> This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type. Contravariance Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>: public interface IComparer<in T> { int Compare(T left, T right); } The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: If a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance. 
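Since, as noted below, the CTP does not ship the re-declared BCL interfaces, the variance rules are easiest to try out on interfaces of your own. A minimal sketch; the interface and class names (ISource, ISink, StringSource, ObjectSink) are invented for illustration:

using System;

// T occurs only in output position, so the interface can be declared covariant.
interface ISource<out T>
{
    T Next();
}

// T occurs only in input position, so the interface can be declared contravariant.
interface ISink<in T>
{
    void Accept(T item);
}

class StringSource : ISource<string>
{
    public string Next() { return "hello"; }
}

class ObjectSink : ISink<object>
{
    public void Accept(object item) { Console.WriteLine(item); }
}

class VarianceDemo
{
    static void Main()
    {
        ISource<string> strings = new StringSource();
        ISource<object> objects = strings;   // covariance: a source of strings is a source of objects

        ISink<object> anySink = new ObjectSink();
        ISink<string> stringSink = anySink;  // contravariance: a sink of objects is a sink of strings

        stringSink.Accept(objects.Next());
    }
}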
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types: public delegate TResult Func<in TArg, out TResult>(TArg arg); Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>. Limitations Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types. COM Example Here is a larger Office automation example that shows many of the new C# features in action. using System; using System.Diagnostics; using System.Linq; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word; class Program { static void Main(string[] args) { var excel = new Excel.Application(); excel.Visible = true; excel.Workbooks.Add(); // optional arguments omitted excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically excel.Cells[1, 2].Value = "Memory Usage"; // accessed var processes = Process.GetProcesses() .OrderByDescending(p => p.WorkingSet) .Take(10); int i = 2; foreach (var p in processes) { excel.Cells[i, 1].Value = p.ProcessName; // no casts excel.Cells[i, 2].Value = p.WorkingSet; // no casts i++; } Excel.Range range = excel.Cells[1, 1]; // no casts Excel.Chart chart = excel.ActiveWorkbook.Charts. Add(After: excel.ActiveSheet); // named and optional arguments chart.ChartWizard( Source: range.CurrentRegion, Title: "Memory Usage in " + Environment.MachineName); //named+optional chart.ChartStyle = 45; chart.CopyPicture(Excel.XlPictureAppearance.xlScreen, Excel.XlCopyPictureFormat.xlBitmap, Excel.XlPictureAppearance.xlScreen); var word = new Word.Application(); word.Visible = true; word.Documents.Add(); // optional arguments word.Selection.Paste(); } } The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges. Relationship with Visual Basic A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic: · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#. · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind. · NoPIA and variance are both being introduced to VB and C# at the same time. VB in turn is adding a number of features that have hitherto been a mainstay of C#. 
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone. Resources All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • value types in the vm

    - by john.rose
Or, enduring values for a changing world. Introduction A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases.  Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both.  We will look at a few possibilities for these new features. An Axiom of Value In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries: A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value. The equals and hashCode operations are strictly component-wise. If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality. A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.  
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and pose some problems for boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value types will be distinguishable as long as their component values are equal. 
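The note uses @ValueSafe throughout without showing its declaration. As a purely assumed sketch, such a marker annotation might be declared as follows; the retention and target list are guesses matching the uses described above, not part of the proposal:

import java.lang.annotation.*;

// Hypothetical declaration: a marker with no elements, applicable to the
// types, fields, parameters, local variables, and methods discussed above.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER,
        ElementType.LOCAL_VARIABLE, ElementType.METHOD})
public @interface ValueSafe { }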
More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: "null-unsafe" qq = (Complex) obj;  // warning: "null-unsafe" qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect value types automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.  
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
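To see what an unboxed array buys, here is a hedged sketch of the hand-written workaround that such a JVM transformation would automate: a packed array of Complex components with manual index arithmetic. The ComplexArray class and its methods are invented for illustration, built on the Complex class defined earlier:

final class ComplexArray {
    // Components stored inline, pairwise: [re0, im0, re1, im1, ...].
    // One object header and one contiguous block, instead of n boxed Complex objects.
    private final double[] packed;

    ComplexArray(int length) { packed = new double[2 * length]; }

    int length() { return packed.length / 2; }

    Complex get(int i) { return Complex.valueOf(packed[2 * i], packed[2 * i + 1]); }

    void set(int i, Complex c) { packed[2 * i] = c.re; packed[2 * i + 1] = c.im; }
}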
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from the fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values. Similar to String, we could build an efficient multi-precision decimal type along these lines: public final class DecimalValue implements ValueType {     protected final long header;     protected final BigInteger digits;     public static DecimalValue valueOf(int value, int scale) {         assert(scale >= 0);         return new DecimalValue(((long)value << 32) + scale, null);     }     public static DecimalValue valueOf(long value, int scale) {         if (value == (int) value)             return valueOf((int)value, scale);         return new DecimalValue(-scale, BigInteger.valueOf(value));     } } Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.) Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic. Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values. More Daydreams A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception. 
    @ValueSafe
    public interface ValueType extends java.io.Serializable,
            Unsynchronizable, Interchangeable { …

    public class Complex implements ValueType {
        // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
        …

It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

    complex pi = complex.valueOf(Math.PI, 0);
    Complex boxPi = pi;  // convert to boxed
    myList.add(boxPi);
    complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:

1. Add the value-safety restrictions to an experimental version of javac.
2. Code some sample applications with value types, including Complex and DecimalValue.
3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing for me for now. Back to work!
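P.S. A rough sketch of the array-packing workaround mentioned above (faking an array of Complex with a double-length array of double). The ComplexArray helper and its accessor names are hypothetical, purely for illustration; this is exactly the kind of boilerplate the unboxed-array support would make unnecessary:

    // Packs (re, im) pairs into one double[], avoiding per-element boxed Complex objects.
    final class ComplexArray {
        private final double[] data;  // slot 2k holds re, slot 2k+1 holds im
        ComplexArray(int length) { data = new double[2 * length]; }
        double re(int i) { return data[2 * i]; }
        double im(int i) { return data[2 * i + 1]; }
        void set(int i, double re, double im) {
            data[2 * i] = re;
            data[2 * i + 1] = im;
        }
    }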

    Read the article

  • CodePlex Daily Summary for Tuesday, November 30, 2010

    CodePlex Daily Summary for Tuesday, November 30, 2010

Popular Releases

Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...

Minecraft GPS: Minecraft GPS 1.1.1: New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-vista kernel machines.

DotSpatial: DotSpatial 11-28-2001: This release introduces some exciting improvements. Support for big raster, both in display and changing the scheme. Faster raster scheme creation for all rasters. Caching of the "sample" values so once obtained the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...

Virtu: Virtu 0.9.0: Source Requirements: .NET Framework 4, Visual Studio 2010 or Visual Studio 2010 Express, Silverlight 4 Tools for Visual Studio 2010, Windows Phone 7 Developer Tools (which includes XNA Game Studio 4). Binaries Requirements: Silverlight 4, .NET Framework 4, XNA Framework 4.

SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is base on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code include a LiveChat demo.

MDownloader: MDownloader-0.15.25.7002: Fixed updater Fixed FileServe Fixed LetItBit

Notepad.NET: Notepad.NET 0.7 Preview 1: Whats New? * Optimized Code Generation: Which means it will run significantly faster. * Preview of Syntax Highlighting: Only VB.NET highlighting is supported, C# and Ruby will come in Preview 2. * Improved Editing Updates (when the line number, etc updates) to be more graceful. * Recent Documents works! * Images can be inserted but they're extremely large. Known Bugs: * The Update Process hangs: This is a bug apparently spawning since 0.5. It will be fixed in Preview 2. Until then, perform a fr...

Cropper: 1.9.4: Mostly fixes for issues with a few feature requests. Fixed Issues 2730 & 3638 & 14467 11044 11447 11448 11449 14665 Implemented Features 6123 11581

PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.

PDF Rider: PDF Rider 0.5: This release does not add any new feature for pdf manipulation, but enables automatic updates checking, so it is reccomended to install it in order to stay updated with next releases. Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions Choose one of the following methods: 1. Download and run the "pdfRider0...

BCLExtensions: BCL Extensions v1.0: The files associated with v1.0 of the BCL Extensions library.

XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.

Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.

Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010

Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...

Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.

Sexy Select: sexy select v0.4: Changes in v0.4 Added method: elements. This returns all the option elements that are currently added to the select list. Added method: selectOption. This method accepts two values, the element to be modified and the selected state. (true/false)

Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).

BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...

NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's New: This release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...

New Projects

ActiveRecordTest: ActiveRecordTest is a sample project that is really a quick guide for start using Castle ActiveRecord within an ASP.NET web application.

BacteriaManage: just test codeplex

DS CMS: Diamond Shop - open source project. 1. ASP.NET MVC 3.0 2. Entity Framework 3. Jquery 4. Linq

General Media Access WebService: This project is focused on building a general purpose media access webservice based on WCF.

JavaEE server for XUNU: This is the web server for the Choupie site.

Learning management system: Learning management system to help teachers in their work.

LogWriterReader using Named pipe: LogWriterReader using Named pipe

NMix: NMix???EntLib,NHibernate,log4net??????????,????????????????,?????????、?????、????、????、?????????。

Nosso Rico Dinheirinho: Financial control system like Microsoft Money, but via web.

Post Template: Post Template (for now) is for craigslist posters looking to make their posts more visually appealing. Abstracting the styling and layout details of HTML and CSS, Post Template eliminates the need to know these languages when posting. Post Template is mostly written in C#.

SharePoint Silverlight Clock: SharePoint Silverlight Clock

Silverlight MVVM wizard using Caliburn Micro: This MVVM style Silverlight 4 wizard shows some Caliburn Micro features, as well as the use of MEF and MVVM style unit testing. The UI and code are based on the code accompanying the "Code Project" article "Creating an Internationalized Wizard in WPF" from dec. 2008.

Spider Framework: A ruler-based spider framework developing with C#

syx Open Source Project: syx Open Source Project

TigerCat: TigerCat will support application development as infrastructure and RAD tools.

TitleNetSolution: This my team Solution.!

Uploadert: Uploadert

Widget Suite for DotNetNuke: This project is intended to hold a suite of useful widgets to make your skinning easier, and raise the level of interactivity with DotNetNuke website visitors.

ZenBridge for Picasa: ZenBridge for Picasa makes it easy for Zenfolio users to upload edited images directly to a chosen Zenfolio gallery. It's developed in C#.NET 4.

    Read the article

  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
    If you didn't notice during JavaOne, the Java Micro Edition took a major step forward in its consolidation with Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus on moving Java ME back in line with its big brother Java SE. We see evidence of this in the JCP itself, which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. But just before that occurred, JSR 360 was proposed and approved for development on October 29. So let's take a look at what changes are now being proposed.

In a way JSR 360 is returning to the original roots of Java ME when it was first introduced. It was indeed a subset of the JDK 4 language, but as Java progressed many of the language changes were not implemented in Java ME. Back then the tradeoff was still functionality versus footprint, and the major market was feature phones. Today the market has changed, and CLDC, while it will still target feature phones, will have its primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring and other M2M devices.

The major changes will come in three areas: language feature changes, library changes, and consolidating the Generic Connection Framework.

Three Java SE versions have been implemented since Java ME was first developed, so the language feature changes can be divided into changes that came in JDK 5 and those in JDK 7, which mostly consist of the Project Coin changes. There were no language changes in JDK 6. The changes from JDK 5 are:

Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example code below, if the interval isn't between 0 and 1,000, an error of "Invalid value?" is thrown.

    private void setInterval(int interval) {
        assert interval > 0 && interval <= 1000 : "Invalid value?";
    }

Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code: stronger type checks at compile time, elimination of casts, and the ability to implement generic algorithms.

Enhanced for Loop - The enhanced for loop allows you to iterate through a collection without having to create an Iterator or without having to calculate beginning and end conditions for a counter variable. The enhanced for loop is the easiest of the new features to immediately incorporate in your code. In this tip you will see how the enhanced for loop replaces more traditional ways of sequentially accessing elements in a collection.

    void processList(Vector<String> list) {
        for (String item : list) {
            ...

Autoboxing/Unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int, and wrapper types, such as Integer.

    Hashtable<Integer, String> data = new Hashtable<>();
    void add(int id, String value) {
        data.put(id, value);
    }

Enumeration - Prior to JDK 5 enumerations were not typesafe, had no namespace, were brittle because they were compile time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type). In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type.

    enum Season { WINTER, SPRING, SUMMER, FALL }
    private Season season;
    void setSeason(Season newSeason) {
        season = newSeason;
    }

Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position.

    void warning(String format, String... parameters) {
        ...
        for (String p : parameters) {
            ...process(p);...
        }
        ...
    }

Static Imports - The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration. Where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification.

    import static data.Constants.RATIO;
    ...
    double r = Math.cos(RATIO * theta);

Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. There are a number of uses for annotations including information for the compiler, compile-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements.

    @Deprecated
    public void clear();

The language changes from JDK 7 are a little more familiar, as they are mostly the changes from Project Coin:

String in switch - Hey, it only took us 18 years, but the String class can now be used in the expression of a switch statement. Fortunately for us it won't take that long for Java ME to adopt it.

    switch (arg) {
        case "-data": ...
        case "-out": ...

Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system, and any number of underscore characters (_) can appear anywhere between digits in a numerical literal.

    byte flags = 0b01001111;
    long mask = 0xfff0_ff08_4fff_0fffL;

Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration.

    catch (IOException | InterruptedException ex) {
        logger.log(ex);
        throw ex;
    }

Type Inference for Generic Instance Creation - Otherwise known as the diamond operator, the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context.

    map = new Hashtable<>();

Try-with-resources statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement.

    try (DataInputStream is = new DataInputStream(...)) {
        return is.readDouble();
    }

Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced a compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.

On the library side there are new features that will be added to satisfy the language requirements above and some to improve the currently available set of APIs. The library changes include:

Collections update - New Collection, List, Set and Map, Iterable and Iterator, as well as implementations including Hashtable and Vector. Most of the work is to support generics.

String - New StringBuilder and CharSequence as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer. Since StringBuilder is unsynchronized there is a performance increase, which has necessitated changes to how the String constructor works.

Comparable interface - The Comparable interface works with Collections, making it easier to reuse.

Try with resources - Closeable and AutoCloseable.

Annotations - While support for annotations is provided, it will only be compile-time support: SuppressWarnings, Deprecated, Override.

NIO - There is a subset of NIO Buffer that has been in use by some of the graphics packages and needs to be pulled in, as well as support for a subset of NIO File I/O.

Platform extensibility via Service Providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and bind in the behavior of an implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE.

    import com.XYZ.ServiceA;
    ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class);

    Resources:
    META-INF/services/com.XYZ.ServiceA:
        ServiceAProvider1
        ServiceAProvider2
        ServiceAProvider3
    META-INF/services/ServiceB:
        ServiceBProvider1
        ServiceBProvider2

The Generic Connection Framework (GCF) was previously specified in a number of different JSRs including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, as well as provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations. Because of size limitations as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.
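For anyone who has not worked with the GCF before, here is a minimal sketch of its style. Connector is the real GCF entry point; the URL and the stream handling are illustrative assumptions only. Note how try-with-resources from the JDK 7 list above fits naturally here:

    import java.io.IOException;
    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    void fetch() throws IOException {
        // One generic Connector factory; the protocol is selected by the URL scheme.
        HttpConnection conn = (HttpConnection) Connector.open("http://example.com/data");
        try (InputStream in = conn.openInputStream()) {  // closed automatically
            int b;
            while ((b = in.read()) != -1) {
                // process the stream (illustrative only)
            }
        } finally {
            conn.close();  // the connection itself is closed explicitly
        }
    }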
JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR, see the JSR's java.net project for details of the email lists and discussion groups.

    Read the article

  • Special characters from MySQL database (e.g. curly apostrophes) are mangling my XML

    - by Toph
    I have a MySQL database of newspaper articles. There's a volume table, an issue table, and an article table. I have a PHP file that generates a property list that is then pulled in and read by an iPhone app. The plist holds each article as a dictionary inside each issue, and each issue as a dictionary inside each volume. The plist doesn't actually hold the whole article -- just a title and URL.

Some article titles contain special characters, like curly apostrophes. Looking at the generated XML plist, whenever it hits a special character, it unpredictably gobbles up a whole bunch of text, leaving the XML mangled and unreadable. (...in Chrome, anyway, and I'm guessing on the iPhone. Firefox actually handles it pretty well, showing a white ? in a black diamond in place of any special characters and not gobbling anything.)

Example well-formed plist snippet:

    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Rows</key>
        <array>
            <dict>
                <key>Title</key>
                <string>Vol. 133 (2003-2004)</string>
                <key>Children</key>
                <array>
                    <dict>
                        <key>Title</key>
                        <string>No. 18 (Apr 2, 2004)</string>
                        <key>Children</key>
                        <array>
                            <dict>
                                <key>Title</key>
                                <string>Basketball concludes historic season</string>
                                <key>URL</key>
                                <string>http://orient.bowdoin.edu/orient/article_iphone.php?date=2004-04-02&amp;section=1&amp;id=1</string>
                            </dict>
                            <!-- ... -->
                        </array>
                    </dict>
                </array>
            </dict>
        </array>
    </dict>
    </plist>

Example of what happens when it hits a curly apostrophe: This is from Chrome. This time it ate 5,998 characters, by MS Word's count, skipping down to midway through the opening string tag of a pizza story's title; if I reload it'll behave differently, eating some other amount. The proper title is: Singer-songwriter Farrell ’05 finds success beyond the bubble

    <dict>
        <key>Title</key>
        <string>Singer-songwriter Farrell ing>Students embrace free pizza, College objects to solicitation</string>
        <key>URL</key>
        <string>http://orient.bowdoin.edu/orient/article_iphone.php?date=2009-09-18&amp;section=1&amp;id=9</string>
    </dict>

In MySQL that title is stored as (in binary):

    53 69 6E 67 |65 72 2D 73 |6F 6E 67 77 |72 69 74 65
    72 20 46 61 |72 72 65 6C |6C 20 C2 92 |30 35 20 66
    69 6E 64 73 |20 73 75 63 |63 65 73 73 |20 62 65 79
    6F 6E 64 20 |74 68 65 20 |62 75 62 62 |6C

Any ideas how I can encode/decode things properly? If not, any idea how I can get around the problem some other way? I don't have a clue what I'm talking about, haha; let me know if there's any way I can help you help me. :) And many thanks!

    Read the article

  • SQL SERVER – Video – Beginning Performance Tuning with SQL Server Execution Plan

    - by pinaldave
    Traveling can be a most interesting or most exhausting experience. However, traveling is always the most enlightening experience one can have. While going on a long journey one has to prepare a lot of things: pack the necessary travel gear, clothes and medicines. However, the most essential part of travel is the journey to the destination. There are many variations one may prefer, but the ultimate goal is to have a delightful experience during the journey. Here is a video which explains how to begin with SQL Server execution plans.

Performance Tuning is a Journey

Performance tuning is just like a long journey. The goal of performance tuning is query execution that is efficient, consumes the least resources, and returns accurate results. Just as maps are the most essential part of a journey, execution plans are essentially maps for SQL Server to reach the resultset. The goal of the execution plan is to find the most efficient path, which translates to the least usage of resources (CPU, memory, IO etc).

Execution Plans are like Maps

When online maps were invented (e.g. Bing, Google, MapQuest etc) it was initially not possible to customize them. They gave a single route to reach the destination. As time evolved, it became possible to give various hints to the maps, for example ‘via public transport’, ‘walking’, ‘fastest route’, ‘shortest route’, ‘avoid highway’. There are places where we manually drag the route and make it appropriate to our needs. The same situation exists with SQL Server execution plans: if we want to tune queries, we need to understand execution plans and execution plan internals. We need to understand the smallest details which relate to the execution plan when our destination is optimal queries.

Understanding Execution Plans

The biggest challenge with maps is figuring out the optimal path. In the same way, the most common challenge with execution plans is where to start from and which precise route to take. Here is a quick list of the frequently asked questions related to execution plans:

Should I read execution plans bottom up or top down?
Are execution plans read left to right or right to left?
What is the relationship between the actual execution plan and the estimated execution plan?
When I mouse over an operator I see CPU and IO but not memory; why?
Sometimes I run the query multiple times and I get a different execution plan; why?
How do I cache the query execution plan and data?
I created an optimal index but the query is not using it. What should I change: the query, the index, or provide hints?
What are the tools available which help to quickly debug performance problems?
Etc…

Honestly the list is quite big, and it is humanly impossible to write everything in words.

SQL Server Performance: Introduction to Query Tuning

My friend Vinod Kumar and I have created a video learning course for beginning performance tuning. We have covered a plethora of subjects in the course. Here is a quick list:

Execution Plan Basics
Essential Indexing Techniques
Query Design for Performance
Performance Tuning Tools
Tips and Tricks
Checklist: Performance Tuning

We believe we have covered a lot in this four hour course and we encourage you to go over the video course if you are interested in Beginning SQL Server Performance Tuning and Query Tuning.
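If you would like to peek at a plan before diving into the videos, here is a minimal way to ask SQL Server for one. SET SHOWPLAN_XML is the standard command; the AdventureWorks table name is only an illustrative assumption:

    -- Return the estimated plan as XML instead of executing the query
    SET SHOWPLAN_XML ON;
    GO
    SELECT TOP 10 * FROM Sales.SalesOrderHeader ORDER BY OrderDate DESC;
    GO
    SET SHOWPLAN_XML OFF;
    GO

In SQL Server Management Studio the same information is available via “Display Estimated Execution Plan” (Ctrl+L) or “Include Actual Execution Plan” (Ctrl+M).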
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Execution Plan

    Read the article

  • Ubuntu 12.04 patched b43 driver compilation error

    - by Zed
    I tried the How do I install this patched b43 driver? guide to install the patched b43 driver on Ubuntu 12.04 with the 3.2.0-31-generic kernel, but I can't get past the compilation phase. Here is what I did:

wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.1/compat-wireless-3.1.1-1.tar.bz2
cd compat-wireless-3.1.1-1/
scripts/driver-select b43
make

make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.29.h:5:0,
from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.h:24,
from <command-line>:0:
include/linux/netdevice.h:1153:5: warning: "IS_ENABLED" is not defined [-Wundef]
include/linux/netdevice.h:1153:15: error: missing binary operator before token "("
include/linux/netdevice.h: In function ‘netdev_uses_dsa_tags’:
include/linux/netdevice.h:1421:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
include/linux/netdevice.h:1422:31: error: ‘struct net_device’ has no member named ‘dsa_ptr’
include/linux/netdevice.h: In function ‘netdev_uses_trailer_tags’:
include/linux/netdevice.h:1431:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
include/linux/netdevice.h:1432:35: error: ‘struct net_device’ has no member named ‘dsa_ptr’
make[3]: *** [/home/marco/compat-wireless-3.1.1-1/compat/main.o] Error 1
make[2]: *** [/home/marco/compat-wireless-3.1.1-1/compat] Error 2
make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
make: *** [modules] Error 2

To fix that error I added #include <linux/kconfig.h> to /usr/src/linux-headers-3.2.0-31-generic/include/linux/netdevice.h, but now I'm getting something else:

make

make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
LD [M] /home/marco/compat-wireless-3.1.1-1/compat/compat.o
CC [M] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o
In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:9:0,
from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8,
from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
/home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h: In function ‘ssb_driver_register’:
/home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: error: ‘THIS_MODULE’ undeclared (first use in this function)
/home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: note: each undeclared identifier is reported only once for each function it appears in
In file included from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8:0,
from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
/home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h: In function ‘bcma_driver_register’:
/home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:170:37: error: ‘THIS_MODULE’ undeclared (first use in this function)
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c: At top level:
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:12:20: error: expected declaration specifiers or ‘...’ before string constant
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:13:16: error: expected declaration specifiers or ‘...’ before string constant
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: data definition has no type or storage class [enabled by default]
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: data definition has no type or storage class [enabled by default]
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: parameter names (without types) in function declaration [enabled by default]
make[3]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o] Error 1
make[2]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma] Error 2
make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
make: *** [modules] Error 2

Any suggestion what to try next?

    Read the article

  • JavaScript: this

    - by bdukes
    JavaScript is a language steeped in juxtaposition. It was made to “look like Java,” yet is dynamic and classless. From this origin, we get the new operator and the this keyword. You are probably used to this referring to the current instance of a class, so what could it mean in a language without classes?

In JavaScript, this refers to the object off of which a function is referenced when it is invoked (unless it is invoked via call or apply). What this means is that this is not bound to your function, and can change depending on how your function is invoked. It also means that this changes when declaring a function inside another function (i.e. each function has its own this), such as when writing a callback. Let's see some of this in action:

    var obj = {
        count: 0,
        increment: function () {
            this.count += 1;
        },
        logAfterTimeout: function () {
            setTimeout(function () {
                console.log(this.count);
            }, 1);
        }
    };
    obj.increment();
    console.log(obj.count); // 1

    var increment = obj.increment;
    window.count = 'global count value: ';
    increment();
    console.log(obj.count); // 1
    console.log(window.count); // global count value: 1

    var newObj = {count:50};
    increment.call(newObj);
    console.log(newObj.count); // 51

    obj.logAfterTimeout(); // global count value: 1

    obj.logAfterTimeout = function () {
        var proxiedFunction = $.proxy(function () {
            console.log(this.count);
        }, this);
        setTimeout(proxiedFunction, 1);
    };
    obj.logAfterTimeout(); // 1

    obj.logAfterTimeout = function () {
        var that = this;
        setTimeout(function () {
            console.log(that.count);
        }, 1);
    };
    obj.logAfterTimeout(); // 1

The last couple of examples here demonstrate some methods for making sure you get the values you expect. The first time logAfterTimeout is redefined, we use jQuery.proxy to create a new function which has its this permanently set to the passed-in value (in this case, the current this). The second time logAfterTimeout is redefined, we save the value of this in a variable (named that in this case, also often named self) and use the new variable in place of this.

Now, all of this is to clarify what’s going on when you use this. However, it’s pretty easy to avoid using this altogether in your code (especially in the way I’ve demonstrated above). Instead of using this.count all over the place, it would have been much easier if I’d made count a variable instead of a property, and then I wouldn’t have to use this to refer to it.

    var obj = (function () {
        var count = 0;
        return {
            increment: function () {
                count += 1;
            },
            logAfterTimeout: function () {
                setTimeout(function () {
                    console.log(count);
                }, 1);
            },
            getCount: function () {
                return count;
            }
        };
    }());

If you’re writing your code in this way, the main place you’ll run into issues with this is when handling DOM events (where this is the element on which the event occurred). In that case, just be careful when using a callback within that event handler, that you’re not expecting this to still refer to the element (and use proxy or that/self if you need to refer to it).

Finally, as demonstrated in the example, you can use call or apply on a function to set its this value. This isn’t often needed, but you may also want to know that you can use apply to pass in an array of arguments to a function (e.g. console.log.apply(console, [1, 2, 3, 4])).
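As a closing aside, on ES5-capable engines Function.prototype.bind gives the same permanent this-binding as the jQuery.proxy approach shown earlier. A sketch, reusing the obj from the examples above:

    obj.logAfterTimeout = function () {
        // bind returns a new function whose `this` is permanently set to the supplied value
        setTimeout(function () {
            console.log(this.count);
        }.bind(this), 1);
    };
    obj.logAfterTimeout(); // 1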

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross references the type names from the user interface with the SDK class, and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous, there is a lot there, but there is a general pattern to the SDK that I will describe here. Also I will illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

Entry to the Platform

Object | Finder | SDK
odiInstance | odiInstance (groovy variable for console) | OdiInstance

Topology Objects

Object | Finder | SDK
Technology | IOdiTechnologyFinder | OdiTechnology
Context | IOdiContextFinder | OdiContext
Logical Schema | IOdiLogicalSchemaFinder | OdiLogicalSchema
Data Server | IOdiDataServerFinder | OdiDataServer
Physical Schema | IOdiPhysicalSchemaFinder | OdiPhysicalSchema
Logical Schema to Physical Mapping | IOdiContextualSchemaMappingFinder | OdiContextualSchemaMapping
Logical Agent | IOdiLogicalAgentFinder | OdiLogicalAgent
Physical Agent | IOdiPhysicalAgentFinder | OdiPhysicalAgent
Logical Agent to Physical Mapping | IOdiContextualAgentMappingFinder | OdiContextualAgentMapping
Master Repository | IOdiMasterRepositoryInfoFinder | OdiMasterRepositoryInfo
Work Repository | IOdiWorkRepositoryInfoFinder | OdiWorkRepositoryInfo

Project Objects

Object | Finder | SDK
Project | IOdiProjectFinder | OdiProject
Folder | IOdiFolderFinder | OdiFolder
Interface | IOdiInterfaceFinder | OdiInterface
Package | IOdiPackageFinder | OdiPackage
Procedure | IOdiUserProcedureFinder | OdiUserProcedure
User Function | IOdiUserFunctionFinder | OdiUserFunction
Variable | IOdiVariableFinder | OdiVariable
Sequence | IOdiSequenceFinder | OdiSequence
KM | IOdiKMFinder | OdiKM

Load Plans and Scenarios

Object | Finder | SDK
Load Plan | IOdiLoadPlanFinder | OdiLoadPlan
Load Plan and Scenario Folder | IOdiScenarioFolderFinder | OdiScenarioFolder

Model Objects

Object | Finder | SDK
Model | IOdiModelFinder | OdiModel
Sub Model | IOdiSubModel | OdiSubModel
DataStore | IOdiDataStoreFinder | OdiDataStore
Column | IOdiColumnFinder | OdiColumn
Key | IOdiKeyFinder | OdiKey
Condition | IOdiConditionFinder | OdiCondition

Operator Objects

Object | Finder | SDK
Session Folder | IOdiSessionFolderFinder | OdiSessionFolder
Session | IOdiSessionFinder | OdiSession
Schedule |  | OdiSchedule

How to Create an Object?

Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

import oracle.odi.domain.project.OdiProject;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
project = new OdiProject("Project For Demo", "PROJECT_DEMO")
odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

How to Update an Object?

This update example uses the methods on the OdiProject object to change the name of the project that was created above; it is then persisted.
import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.finder.IOdiProjectFinder;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
project = prjFinder.findByCode("PROJECT_DEMO");
project.setName("A Demo Project");
odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

How to Delete an Object?

Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
import oracle.odi.domain.runtime.session.OdiSession;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
sessc = sessFinder.findAll();
sessItr = sessc.iterator()
while (sessItr.hasNext()) {
  sess = (OdiSession) sessItr.next()
  odiInstance.getTransactionalEntityManager().remove(sess)
}
tm.commit(txnStatus)

This isn't an all encompassing summary of the SDK, but covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15 character strings, which get converted to Java Date objects in their code (Openfire is written in Java). I did some digging around and, so I don't forget and in case someone else finds this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation.

The Java string representation is the number of milliseconds since 1/1/1970. SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, the portion after the decimal point representing the hours/minutes/seconds/milliseconds as a fractional part of a day. Try this and you will see this is true:

    SELECT CAST(0 AS DATETIME)

and you will see it returns the date 1/1/1900.

The difference in days between SQL Server's 0 date of 1/1/1900 and the Java representation's 0 date of 1/1/1970 is found easily using the following SQL:

    SELECT DATEDIFF(D, '1900-01-01', '1970-01-01')

which returns 25567. There are 25567 days between these dates.

So to convert from the Java string to SQL Server's datetime, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add the 25567 to change this to the number of days since 1/1/1900. To convert to days, you need to divide the number by 1000 ms/s, then by 60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day. Or simply divide by 1000*60*60*24, or 86400000.

So, to summarize, we need to cast this string as a float, divide by 86400000 milliseconds/day, then add 25567 days, and cast the resulting value to a DATETIME. Here is an example:

    DECLARE @tmp as VARCHAR(15)
    SET @tmp = '1268231722123'
    SELECT @tmp as JavaTime, CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime

To convert from SQL datetime back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings; they end up in scientific notation using the CONVERT function or CAST function. But I found a couple ways around that problem. You can convert a date to the number of seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an Int. If you don't need to worry about the milliseconds, simply cast this integer as a string, and then concatenate '000' at the end, essentially multiplying this number by 1000, and making it milliseconds since 1/1/1970. If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, and then pad zeros on the left to make sure this is three digits, and concatenate these three digits to the number of seconds string above. And finally, I discovered that by casting to DECIMAL(15,0) and then to VARCHAR(15), I avoid the scientific notation issue. So here are all my examples, pick the one you like best...
First, here is the simple approach if you don't care about the milliseconds:

    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15)) + '000'
    SELECT @tmp as JavaTime, @dt as SQLTime

If you want to keep the milliseconds:

    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    DECLARE @ms as int
    SET @dt = '2010-03-10 14:35:22.123'
    SET @ms = DATEPART(ms, @dt)
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
            + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)
    SELECT @tmp as JavaTime, @dt as SQLTime

Or, in one fell swoop:

    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime
        , CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
            + RIGHT('000' + CAST(DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime

And finally, a way to simply reverse the math used converting from Java date to SQL date. Note the parentheses - watch out for operator precedence; you want to subtract, then multiply:

    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime
        , CAST(CAST((CAST(@dt as Float) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime

Interestingly, I found that converting to SQL datetime can lose some accuracy: when I converted the time above to Java time and then converted that back to DATETIME, the number of milliseconds is 120, not 123. As I am not interested in the milliseconds, this is ok for me. But you may want to look into using DateTime2 in SQL Server 2008 for more accuracy.
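Along those lines, here is the milliseconds-preserving conversion rewritten against DATETIME2(3) - a sketch assuming SQL Server 2008 or later, where DATETIME2 stores true millisecond precision so the 123 survives the round trip:

    DECLARE @dt2 as DATETIME2(3)
    SET @dt2 = '2010-03-10 14:35:22.123'
    -- Same seconds-plus-padded-milliseconds trick as above, just with a DATETIME2 variable
    SELECT @dt2 as SQLTime
        , CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt2) AS VARCHAR(15))
            + RIGHT('000' + CAST(DATEPART(ms, @dt2) AS VARCHAR(3)), 3) as JavaTime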

    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    While working with any kind of advanced query, grouping is a very important factor. Grouping helps in executing special functions like sum, max, average etc. on certain groups of data inside the result set. Grouping is done with the help of the group clause. Below is an example of the basic group functionality.

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

    var numberGroups =
        from num in numbers
        group num by num % 5 into numGroup
        select new { Remainder = numGroup.Key, Numbers = numGroup };

In the above example we have grouped the values based on the remainder left over when divided by 5. First we group the values based on the remainder when divided by 5 into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that are contained in that group. Below is another example to explain the same.

    string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" };

    var wordGroups =
        from num in words
        group num by num[0] into grp
        select new { FirstLetter = grp.Key, Words = grp };

In the above example we are grouping the values by the first character of the string (num[0]).

Just like the orderby operator, the group by clause also allows us to supply our own logic for the equality comparison (which means, for example, that we can group items while ignoring case by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example.

    public class AnagramEqualityComparer : IEqualityComparer<string>
    {
        public bool Equals(string x, string y) {
            return getCanonicalString(x) == getCanonicalString(y);
        }

        public int GetHashCode(string obj) {
            return getCanonicalString(obj).GetHashCode();
        }

        private string getCanonicalString(string word) {
            char[] wordChars = word.ToCharArray();
            Array.Sort<char>(wordChars);
            return new string(wordChars);
        }
    }

    string[] anagrams = {"from   ", " salt", " earn", "  last   ", " near "};
    var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer());

Vikram

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the most easy to use tool of its kind. But “easy to use” can be a relative term – an expert can find a very complex system easy, but a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on this subject.

Simple Installation
There is one pop-up for one .exe file, and nothing more. You can’t get much simpler than this. It is also in the familiar Windows design, so there should be no surprises.

No 3rd party software dependency
Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.

Microsoft Office Like Ribbon Bar and Menus
As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the tool bars and menus. There should be no learning curve for using this program, or even simply trying to navigate around a new system.

General Development Design Interface
This software has been designed to be simple and straightforward. Projects can be arranged in a simple “tree” design that is totally collapsible and can easily be added to or “trimmed” with a click of a button. It was meant to be logical and easy to follow.

Integrated Contextual Help
This is a fancy way of saying that you can practically yell “help!” if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.

Visual Indicators and Messages
Wouldn’t it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes and highlight them in a bright color, flash a warning message, and even disable functions before you can continue – and possibly lose hours of work.

Property Inputs and Selectors
Every operator will have a list of requirements that need to be filled in. But don’t worry; you won’t have to make stuff up to fill in the boxes. Each one will have a drop-down menu with options to choose from – but not so many as to be confusing.

Connection Wizards
Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu will walk you through connections so quickly you’ll forget what trouble it used to be.

Templates
With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.

Extension Manager
Let’s say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >