Search Results

Search found 772 results on 31 pages for 'ordering'.


  • GRUB2 stuck at rescue console, showing "unknown filesystem" for all partitions

    - by AndiDog
    I installed Ubuntu 12.04 on my external USB drive, where I have a 700GB NTFS partition followed by the new 6GB ext4 partition and a swap partition (all primary). The GRUB MBR is also installed on the external hard disk. Since my BIOS puts the external drive first in the boot order, I removed my internal hard disk before installation in order to avoid ordering problems. Now when I boot from the external drive, GRUB is stuck at the rescue console with the error "unknown filesystem":

        grub rescue> ls
        (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1)

    ls (hd0,<any of them>)/ gives me "unknown filesystem", and so "insmod normal" fails as well. As you can see above, GRUB doesn't seem to be able to read my Linux partition. How can I solve this?

    Additional info: bootinfoscript says (this is with the internal drive in again, but that does not make a difference):

        Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the
        same hard drive for core.img. core.img is at this location and looks for
        (,msdos2)/boot/grub on this drive.

        sdb1: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files:

        sdb2: File system: ext4
              Boot sector type: -
              Boot sector info:
              Operating System: Ubuntu 12.04 LTS
              Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

        sdb3: File system: swap
              Boot sector type: -
              Boot sector info:

    Read the article

  • In BASH, are wildcard expansions guaranteed to be in order?

    - by ArtB
    Is the expansion of a wildcard in BASH guaranteed to be in alphabetical order? I was forced to split a large file into 10MB pieces so that they could be accepted by my Mercurial repository. So I was thinking I could use:

        split -b 10485760 Big.file BigFilePiece.

    and then, in place of:

        cat BigFile | bigFileProcessor

    I could do:

        cat BigFilePiece.* | bigFileProcessor

    However, I could not find anywhere that guaranteed that the expansion of the asterisk (aka wildcard, aka '*') would always be in alphabetical order, so that .aa came before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat-ing the file together?

    Read the article

  • Windows 7 - Move open programs more freely on the taskbar

    - by SalamiArmi
    When I used Windows XP last year, I had a program called Taskix which let me move the tabs on the taskbar into any order I wanted. Now that I have Windows 7, a similar function comes built into the operating system - however, it isn't as flexible as Taskix was in XP. Theoretical situation:

    1. Open a folder named #1
    2. In folder #1, open an instance of a program
    3. Open a folder named #2
    4. In folder #2, open an instance of the same program that was opened in step 2

    After doing this, I will have (in order, on the taskbar) 2 folders open and 2 instances of the program open. My question really is: is there any way to fight this automatic ordering of tabs on the taskbar? A registry hack or something would be best, but if there's a third-party program out there, that would be nice to know about. Thanks!

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines, etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:

    - substring searching can only count exact results
    - the 50% rule applies to all tables separately, and thus MySQL may not return important matches or may too naively discard common words
    - it is quite expensive in terms of query numbers and execution time (not an issue right now, as there's not a lot of data yet in the tables)
    - normalized data is not even searched (I have different tables for categories, languages and file attachments)

    My planned solution: create a single table having columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices for how to approach this? Thanks.

    Read the article

  • Excel: How do I copy hyperlink address from one column of text to another column with different text?

    - by OfficeLackey
    I have a spreadsheet where column A displays names in a certain format. There are 200-odd names and each has a different hyperlink (which links to that person's web page). I want to reformat the name order so it is "Surname, Name" rather than "Name Surname" and retain the hyperlink in the newly formatted column. I have achieved "Surname, Name" easily by splitting the names into two columns (using LEFT and RIGHT formulae) - forename and surname - and then a new column with a formula returns "Surname, Name". However, the hyperlinks are not in that new column, and I need them. I don't want to do this manually, for obvious reasons. I cannot find a way of copying just the hyperlinks from column A without also copying the text from column A. So, effectively, what I need is some sort of macro that takes, for example, the hyperlink from A2 and copies it to H2, with H2 still retaining the updated name ordering. I don't have the knowledge to write this myself, so would appreciate solutions.

    Read the article

  • What is the easiest way to apply database functionality into my daily life?

    - by Daddy Warbox
    Let me try to explain it by listing some of the things I want to do:

    - Submit random thoughts, notes, facts, and to-do tasks of any sort, at any time.
    - Tag each of these submissions freely.
    - Manage these tags centrally.
    - Associate metadata with submissions and tags.
    - Search, filter, and sort submissions. I want lots of power here.
    - Display views of submissions (including within searches) in a hierarchy.
    - Create said hierarchies easily by ordering the relevant tags.

    I'm thinking towards some kind of desktop program that allows me to quickly do all of these things. A web service could work too, but it would need offline capabilities. I don't want to have to pay for this, if that's possible. Also, as I know regex and SQL, I wouldn't mind solutions involving the use of either.

    Read the article

  • What makes you look like a bad developer (ie a hacker) [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to take a look at myself. I wonder what makes one a bad developer (i.e. a hacker). Here are a few things about me:

    - I use the IDE intensively - all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I am not used to (I have no fluency in it; it is new to me), I only have very rough high-level ideas.
    - I don't use the standard modeling diagrams for early detailed planning, but unorthodox diagrams that I invented for when I need to draw the design in detail. I don't use UML or similar; I find them insufficient. I divide the diagrams I draw into 3 types: very high-level diagrams which can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, plus trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises. I am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hacky way, just so that the module works in the general cases and I can play around with it. The module is then refactored intensively as I play with it, so that I can see more corner cases that I couldn't (or wouldn't want to) anticipate before writing the code.
    - I use JUnit for integration-like tests, by using the TestSuite class and ordering the unit test classes in the suite.
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do things with libraries that I haven't used a lot.

    So, your judgment: am I a bad developer? A hacker? Put another way, to make sure this is not considered off-topic:

    - Is it bad practice to make your code too agile during the incubating/prototyping phase of software development?
    - Is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but those are for specific products, not general use.)

    Read the article

  • Do TCP/UDP connections add to the Windows incoming connection limit?

    - by user47899
    Hi all, I've tried to figure out what Microsoft means by "Windows sockets" and it all seems very vague. Basically we have customers that sometimes try to set up workgroups with close to 10 Windows XP computers with drive and printer shares and we're worried that some of the non-Windows Ethernet devices on the network will cause issues with the inbound connection limit outlined here where only 10 inbound connections can be active at one time. For example, there is an Ethernet Caller ID device that broadcasts UDP packets to all computers, and a kitchen display system that likewise broadcasts UDP. They may also have incoming TCP packets for our custom online ordering module. Do these TCP/UDP connections count toward the inbound connection limit? I'm aware that Windows 7 has increased the limit to 20 but we might have future customers that will push that limit. Thanks in advance

    Read the article

  • Bringing my Dell XPS 13 ultrabook back to factory state

    - by TysHTTP
    I have a brand new Dell XPS 13 ultrabook. After I picked it up at the store, I wiped the existing partitions, which also contained the restore/rescue data that you need in order to reinstall the factory version of Windows 8 that comes with this machine. I did this because we have a different MS partner edition of Windows which I prefer to run. After doing all this, I noticed that there was something wrong with the machine. No real damage, but the specs are not completely as they should be. It turned out to be a simple mistake made while ordering. Now, long story short, the shop says that it has no problem with taking the product back, as long as it is in its original state. And this is the problem I'm having: because I formatted the original partitions that contain the rescue option / Windows 8 setup, I don't have a clue whether it's possible to get back to that original state. Does anyone have an idea of how to get this fixed?

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also 3 consumers of UserRepository:

    - an admin interface which may list all users
    - a public front-end which may list all users
    - an account-authenticated API which should only list its own users

    Assume UserRepository is something like this (query() is protected so that subclasses can build on it):

        class UsersRepository extends DatabaseAbstraction {
            protected function query() {
                return $this->database()->select('users.*');
            }

            public function getAll() {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions: how should I handle querying of users filtered by account_id? I can picture three possible roads.

    1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UsersRepository {
            public function __construct(Account $account) {
                $this->account = $account;
            }

            protected function query() {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction {
            public function getAccountUsers(Account $account) {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires duplicating all the UsersRepository methods, and may need another UserQuery layer that implements that querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

        class Account extends Entity {
            public function getUsers() {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but it introduces a dependency of the Account entity on UserRepository, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution?

    Footnote: besides permissions being a Service concern, in my understanding they shouldn't implement SQL queries but should leave that to repositories, since those may not even be SQL-driven.

    Read the article

  • A format for storing personal contacts in a database

    - by Gart
    I'm thinking about the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job Title, Address, etc. However, there are known industry standards for this kind of data, like for example vCard, or hCard, or vCard-RDF/XML, or even the Windows Contacts XML Schema. Utilizing a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use?

    The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns. I have been thinking of using some XML-based format to store the personal contacts. Then it would be possible to utilize XML indexes on this data, if searching and ordering is needed. Is this a good approach? Which contacts format and schema would you recommend for this?

    Edited after first answers:

    Here is why I think the straightforward approach is bad. This is due to the nature of this kind of data - it is not that simple. Personal contacts are not well-structured data; they may be called semi-structured. Each contact may have different data fields, maybe even fields which I cannot anticipate. In my opinion, each piece of this data should be treated as important information, i.e. no piece of data should be discarded just because there was no relevant column in the database.

    If we took it further, assuming that no data may be lost, we could create a big text column named Comment or Description or Other and put everything which cannot be fitted well into table columns there. But then again, the data would lose structure - which might be bad.

    If we wanted structured data then - according to database design principles - the data should be decomposed into entities, and relations should be established between the entities. But this adds complexity: there are just too many entities, and lots of design decisions would have to be made, like "How do we store an Address? A personal Name? A Phone number? How do we encode home phone numbers and mobile phone numbers? How about other contact info?" The relations between entities are complex and multiple, and each relation is a table in the database. Each relation needs to be documented in the design papers. That is a lot of work to do.

    But it is possible to avoid that complexity entirely - just document that the data is stored according to such-and-such standard schema, period. Then anybody reading that document should easily understand what it is all about.

    Finally, this is all about using an industry standard. The standard is, hopefully, designed by some clever people who anticipated and described the structure of personal contact information much better than I ever could. Why should we all reinvent the wheel? It's much easier to use a standard schema. The problem is, there are just too many standards - it's not easy to decide which one to use!

    Read the article

  • jQuery hide/show with select tag

    - by Ozzy
    I'm relatively new to jQuery and I've been asked to create a hide/show function with a select tag. Pretty much, when you click one of the options in the select tag, it should show the div associated with that option. To be honest, I have no idea how to approach this function. I need help urgently; I have already tried many examples online but none seem to work. Find the HTML code below. Thanks.

        <div class="adwizard">
            <select id="selectdrop" name="selectdrop" class="adwizard-bullet">
                <option value="adwizard">AdWizard</option>
                <option value="collateral">Collateral Ordering Tool</option>
                <option value="ebrochure">eBrochures</option>
                <option value="brand">Brand Center</option>
                <option value="funtees">FunTees</option>
            </select>
        </div>
        <div class="panels">
            <div id="adwizard" class="sub-box showhide">
                <img src="../images/bookccl/img-adwizard.gif" width="95" height="24" alt="AdWizard" />
                <p>Let Carnival help you grow your business with our great tools! Lor ipsum dolor sit amet. <a href="https://www.carnivaladwizard.com/home.asp">Learn More</a></p>
            </div>
            <div id="collateral" class="sub-box showhide">
                <p>The Collateral Ordering Tool makes it easy for you to order destination brochures and the sales DVD for that upcoming event. <a href="http://carnival.litorders.com/workplace.asp">Learn More</a></p>
            </div>
            <div id="ebrochure" class="sub-box showhide">
                <img src="../images/bookccl/img-ebrochure.gif" width="164" height="39" alt="Brochures" />
                <p>Show your clients that you're listening to their specific vacation needs by delivering relevant planning info quickly. <a href="http://productiontrade.carnivalbrochures.com/start.aspx">Learn More</a></p>
            </div>
            <div id="brand" class="sub-box showhide">
                <p>Carnival Brand Center is where you'll find information on our strategy, guidlines, templates and artwork. <a href="https://carnival.monigle2.net/user_info.asp?login_type=agent">Learn More</a></p>
            </div>
            <div id="funtees" class="sub-box showhide">
                <img src="../images/bookccl/img-funtees.gif" width="164" height="39" alt="Funtees" />
                <p>Create your very own Fun Design shirts to commemorate that special occasion aboard a Carnival "Fun Ship!" <a href="http://carnival.victorydyo.com/">Learn More</a></p>
            </div>
        </div><!-- ends .panels -->
        <a class="view" href="#">See All Marketing Tools</a>

    Read the article

  • C++/Boost generator module, feedback/critique please

    - by aaa
    hello. I wrote this generator, and I think to submit to boost people. Can you give me some feedback about it it basically allows to collapse multidimensional loops to flat multi-index queue. Loop can be boost lambda expressions. Main reason for doing this is to make parallel loops easier and separate algorithm from controlling structure (my fieldwork is computational chemistry where deep loops are common) 1 #ifndef _GENERATOR_HPP_ 2 #define _GENERATOR_HPP_ 3 4 #include <boost/array.hpp> 5 #include <boost/lambda/lambda.hpp> 6 #include <boost/noncopyable.hpp> 7 8 #include <boost/mpl/bool.hpp> 9 #include <boost/mpl/int.hpp> 10 #include <boost/mpl/for_each.hpp> 11 #include <boost/mpl/range_c.hpp> 12 #include <boost/mpl/vector.hpp> 13 #include <boost/mpl/transform.hpp> 14 #include <boost/mpl/erase.hpp> 15 16 #include <boost/fusion/include/vector.hpp> 17 #include <boost/fusion/include/for_each.hpp> 18 #include <boost/fusion/include/at_c.hpp> 19 #include <boost/fusion/mpl.hpp> 20 #include <boost/fusion/include/as_vector.hpp> 21 22 #include <memory> 23 24 /** 25 for loop generator which can use lambda expressions. 26 27 For example: 28 @code 29 using namespace generator; 30 using namespace boost::lambda; 31 make_for(N, N, range(bind(std::max<int>, _1, _2), N), range(_2, _3+1)); 32 // equivalent to pseudocode 33 // for l=0,N: for k=0,N: for j=max(l,k),N: for i=k,j 34 @endcode 35 36 If range is given as upper bound only, 37 lower bound is assumed to be default constructed 38 Lambda placeholders may only reference first three indices. 39 */ 40 41 namespace generator { 42 namespace detail { 43 44 using boost::lambda::constant_type; 45 using boost::lambda::constant; 46 47 /// lambda expression identity 48 template<class E, class enable = void> 49 struct lambda { 50 typedef E type; 51 }; 52 53 /// transform/construct constant lambda expression from non-lambda 54 template<class E> 55 struct lambda<E, typename boost::disable_if< 56 boost::lambda::is_lambda_functor<E> >::type> 57 { 58 struct constant : boost::lambda::constant_type<E>::type { 59 typedef typename boost::lambda::constant_type<E>::type base_type; 60 constant() : base_type(boost::lambda::constant(E())) {} 61 constant(const E &e) : base_type(boost::lambda::constant(e)) {} 62 }; 63 typedef constant type; 64 }; 65 66 /// range functor 67 template<class L, class U> 68 struct range_ { 69 typedef boost::array<int,4> index_type; 70 range_(U upper) : bounds_(typename lambda<L>::type(), upper) {} 71 range_(L lower, U upper) : bounds_(lower, upper) {} 72 73 template< typename T, size_t N> 74 T lower(const boost::array<T,N> &index) { 75 return bound<0>(index); 76 } 77 78 template< typename T, size_t N> 79 T upper(const boost::array<T,N> &index) { 80 return bound<1>(index); 81 } 82 83 private: 84 template<bool b, typename T> 85 T bound(const boost::array<T,1> &index) { 86 return (boost::fusion::at_c<b>(bounds_))(index[0]); 87 } 88 89 template<bool b, typename T> 90 T bound(const boost::array<T,2> &index) { 91 return (boost::fusion::at_c<b>(bounds_))(index[0], index[1]); 92 } 93 94 template<bool b, typename T, size_t N> 95 T bound(const boost::array<T,N> &index) { 96 using boost::fusion::at_c; 97 return (at_c<b>(bounds_))(index[0], index[1], index[2]); 98 } 99 100 boost::fusion::vector<typename lambda<L>::type, 101 typename lambda<U>::type> bounds_; 102 }; 103 104 template<typename T, size_t N> 105 struct for_base { 106 typedef boost::array<T,N> value_type; 107 virtual ~for_base() {} 108 virtual value_type next() = 0; 109 }; 110 111 /// N-index 
generator 112 template<typename T, size_t N, class R, class I> 113 struct for_ : for_base<T,N> { 114 typedef typename for_base<T,N>::value_type value_type; 115 typedef R range_tuple; 116 for_(const range_tuple &r) : r_(r), state_(true) { 117 boost::fusion::for_each(r_, initialize(index)); 118 } 119 /// @return new generator 120 for_* new_() { return new for_(r_); } 121 /// @return next index value and increment 122 value_type next() { 123 value_type next; 124 using namespace boost::lambda; 125 typename value_type::iterator n = next.begin(); 126 typename value_type::iterator i = index.begin(); 127 boost::mpl::for_each<I>(*(var(n))++ = var(i)[_1]); 128 129 state_ = advance<N>(r_, index); 130 return next; 131 } 132 /// @return false if out of bounds, true otherwise 133 operator bool() { return state_; } 134 135 private: 136 /// initialize indices 137 struct initialize { 138 value_type &index_; 139 mutable size_t i_; 140 initialize(value_type &index) : index_(index), i_(0) {} 141 template<class R_> void operator()(R_& r) const { 142 index_[i_++] = r.lower(index_); 143 } 144 }; 145 146 /// advance index[0:M) 147 template<size_t M> 148 struct advance { 149 /// stop recursion 150 struct stop { 151 stop(R r, value_type &index) {} 152 }; 153 /// advance index 154 /// @param r range tuple 155 /// @param index index array 156 advance(R &r, value_type &index) : index_(index), i_(0) { 157 namespace fusion = boost::fusion; 158 index[M-1] += 1; // increment index 159 fusion::for_each(r, *this); // update indices 160 state_ = index[M-1] >= fusion::at_c<M-1>(r).upper(index); 161 if (state_) { // out of bounds 162 typename boost::mpl::if_c<(M > 1), 163 advance<M-1>, stop>::type(r, index); 164 } 165 } 166 /// apply lower bound of range to index 167 template<typename R_> void operator()(R_& r) const { 168 if (i_ >= M) index_[i_] = r.lower(index_); 169 ++i_; 170 } 171 /// @return false if out of bounds, true otherwise 172 operator bool() { return state_; } 173 private: 174 value_type &index_; ///< index array reference 175 mutable size_t i_; ///< running index 176 bool state_; ///< out of bounds state 177 }; 178 179 value_type index; 180 range_tuple r_; 181 bool state_; 182 }; 183 184 185 /// polymorphic generator template base 186 template<typename T,size_t N> 187 struct For : boost::noncopyable { 188 typedef boost::array<T,N> value_type; 189 /// @return next index value and increment 190 value_type next() { return for_->next(); } 191 /// @return false if out of bounds, true otherwise 192 operator bool() const { return for_; } 193 protected: 194 /// reset smart pointer 195 void reset(for_base<T,N> *f) { for_.reset(f); } 196 std::auto_ptr<for_base<T,N> > for_; 197 }; 198 199 /// range [T,R) type 200 template<typename T, typename R> 201 struct range_type { 202 typedef range_<T,R> type; 203 }; 204 205 /// range identity specialization 206 template<typename T, class L, class U> 207 struct range_type<T, range_<L,U> > { 208 typedef range_<L,U> type; 209 }; 210 211 namespace fusion = boost::fusion; 212 namespace mpl = boost::mpl; 213 214 template<typename T, size_t N, class R1, class R2, class R3, class R4> 215 struct range_tuple { 216 // full range vector 217 typedef typename mpl::vector<R1,R2,R3,R4> v; 218 typedef typename mpl::end<v>::type end; 219 typedef typename mpl::advance_c<typename mpl::begin<v>::type, N>::type pos; 220 // [0:N) range vector 221 typedef typename mpl::erase<v, pos, end>::type t; 222 // transform into proper range fusion::vector 223 typedef typename fusion::result_of::as_vector< 224 typename 
mpl::transform<t,range_type<T, mpl::_1> >::type 225 >::type type; 226 }; 227 228 229 template<typename T, size_t N, 230 class R1, class R2, class R3, class R4, 231 class O> 232 struct for_type { 233 typedef typename range_tuple<T,N,R1,R2,R3,R4>::type range_tuple; 234 typedef for_<T, N, range_tuple, O> type; 235 }; 236 237 } // namespace detail 238 239 240 /// default index order, [0:N) 241 template<size_t N> 242 struct order { 243 typedef boost::mpl::range_c<size_t,0, N> type; 244 }; 245 246 /// N-loop generator, 0 < N <= 5 247 /// @tparam T index type 248 /// @tparam N number of indices/loops 249 /// @tparam R1,... range types 250 /// @tparam O index order 251 template<typename T, size_t N, 252 class R1, class R2 = void, class R3 = void, class R4 = void, 253 class O = typename order<N>::type> 254 struct for_ : detail::for_type<T, N, R1, R2, R3, R4, O>::type { 255 typedef typename detail::for_type<T, N, R1, R2, R3, R4, O>::type base_type; 256 typedef typename base_type::range_tuple range_tuple; 257 for_(const range_tuple &range) : base_type(range) {} 258 }; 259 260 /// loop range [L:U) 261 /// @tparam L lower bound type 262 /// @tparam U upper bound type 263 /// @return range 264 template<class L, class U> 265 detail::range_<L,U> range(L lower, U upper) { 266 return detail::range_<L,U>(lower, upper); 267 } 268 269 /// make 4-loop generator with specified index ordering 270 template<typename T, class R1, class R2, class R3, class R4, class O> 271 for_<T, 4, R1, R2, R3, R4, O> 272 make_for(R1 r1, R2 r2, R3 r3, R4 r4, const O&) { 273 typedef for_<T, 4, R1, R2, R3, R4, O> F; 274 return F(F::range_tuple(r1, r2, r3, r4)); 275 } 276 277 /// polymorphic generator template forward declaration 278 template<typename T,size_t N> 279 struct For; 280 281 /// polymorphic 4-loop generator 282 template<typename T> 283 struct For<T,4> : detail::For<T,4> { 284 /// generator with default index ordering 285 template<class R1, class R2, class R3, class R4> 286 For(R1 r1, R2 r2, R3 r3, R4 r4) { 287 this->reset(make_for<T>(r1, r2, r3, r4).new_()); 288 } 289 /// generator with specified index ordering 290 template<class R1, class R2, class R3, class R4, class O> 291 For(R1 r1, R2 r2, R3 r3, R4 r4, O o) { 292 this->reset(make_for<T>(r1, r2, r3, r4, o).new_()); 293 } 294 }; 295 296 } 297 298 299 #endif /* _GENERATOR_HPP_ */

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

    The Generic Collections: System.Collections.Generic namespace

    The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons, because depending on how you are using the collections it's completely possible that synchronization may not be required, or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

    The Associative Collection Classes

    Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order.

    The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.

    The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

    The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

    The Non-Associative Containers

    The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored, or some other means (such as index), to manipulate the collection.

    The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing in the beginning or middle of the List<T> is very costly, because you must shift all the items up or down as you delete or insert respectively. Adding and removing at the end of a List<T>, though, is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed.

    The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T>, unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense.

    The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, as with Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction.

    The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>.

    Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out (FIFO) container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.

    So that's the basic collections. Let's summarize what we've learned in a quick reference list:

    - Dictionary - Ordered: no. Contiguous storage: yes. Direct access: via key. Lookup: O(1). Manipulation: O(1). Best for high-performance lookups.
    - SortedDictionary - Ordered: yes. Contiguous storage: no. Direct access: via key. Lookup: O(log n). Manipulation: O(log n). Compromise between Dictionary speed and ordering; uses a binary search tree.
    - SortedList - Ordered: yes. Contiguous storage: yes. Direct access: via key. Lookup: O(log n). Manipulation: O(n). Very similar to SortedDictionary, except the tree is implemented in an array, so it has faster lookups on preloaded data but slower loads.
    - List - Ordered: no. Contiguous storage: yes. Direct access: via index. Lookup: O(1) by index, O(n) by value. Manipulation: O(n). Best for smaller lists where direct access is required and no ordering is needed.
    - LinkedList - Ordered: no. Contiguous storage: no. Direct access: no. Lookup: O(n) by value. Manipulation: O(1). Best for lists where inserting/deleting in the middle is common and no direct access is required.
    - HashSet - Ordered: no. Contiguous storage: yes. Direct access: via key. Lookup: O(1). Manipulation: O(1). Unique unordered collection, like a Dictionary except the key and value are the same object.
    - SortedSet - Ordered: yes. Contiguous storage: no. Direct access: via key. Lookup: O(log n). Manipulation: O(log n). Unique ordered collection, like SortedDictionary except the key and value are the same object.
    - Stack - Ordered: no. Contiguous storage: yes. Direct access: top only. Lookup: O(1) at top. Manipulation: O(1)*. Essentially the same as List<T>, except only processed as LIFO.
    - Queue - Ordered: no. Contiguous storage: yes. Direct access: front only. Lookup: O(1) at front. Manipulation: O(1). Essentially the same as List<T>, except only processed as FIFO.

    The Original Collections: System.Collections namespace

    The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

    - ArrayList: a dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
    - Hashtable: associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
    - Queue: first-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
    - SortedList: associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
    - Stack: last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

    In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

    The Concurrent Collections: System.Collections.Concurrent namespace

    The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them:

    - ConcurrentQueue: thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    - ConcurrentStack: thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    - ConcurrentBag: thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
    - ConcurrentDictionary: thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
    - BlockingCollection: wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

    Summary

    The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower, or even harder to maintain, if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.
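    To make the trade-offs concrete, here is a small, self-contained sketch; the order numbers and vendor codes are made-up sample data, not from any real system:

        using System;
        using System.Collections.Generic;

        class CollectionChoiceDemo
        {
            static void Main()
            {
                // Dictionary: amortized O(1) lookups, but enumeration order is undefined.
                var ordersById = new Dictionary<int, string>
                {
                    [1005] = "sprockets", [7] = "gears", [42] = "widgets"
                };
                Console.WriteLine(ordersById[7]); // fast point lookup: "gears"

                // SortedDictionary: O(log n) operations, but iterates in key order.
                var sortedOrders = new SortedDictionary<int, string>(ordersById);
                foreach (var pair in sortedOrders)
                    Console.WriteLine($"{pair.Key}: {pair.Value}"); // 7, 42, 1005

                // HashSet: super-quick membership checks where order is not important.
                var vendorCodes = new HashSet<string> { "ACME", "GLOBEX", "INITECH" };
                Console.WriteLine(vendorCodes.Contains("ACME")); // True
            }
        }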

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database - a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here.

    Lately, I wondered how you would 'mock' the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism - a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it - but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will now present the NDbUnit approach...

    NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit, namely). In short, it helps you flexibly manage the state of a database, in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

    Preparing the test data

    Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. The testing scenario described here requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected.

    First things first: for its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. With this prerequisite in place, the test fixture's setup code could look something like this:

        [TestFixture, TestsOn(typeof(PersonRepository))]
        [Metadata("NDbUnit Quickstart URL",
                  "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
        [Description("Uses the NDbUnit library to provide test data to a local database.")]
        public class PersonRepositoryFixture
        {
            #region Constants

            private const string XmlSchema = @"..\..\TestData\School.xsd";

            #endregion // Constants

            #region Fields

            private SchoolEntities _schoolContext;
            private PersonRepository _personRepository;
            private INDbUnitTest _database;

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
                _database = new SqlDbUnitTest(connectionString);
                _database.ReadXmlSchema(XmlSchema);

                var entityConnectionStringBuilder = new EntityConnectionStringBuilder
                {
                    Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
                    Provider = "System.Data.SqlClient",
                    ProviderConnectionString = connectionString
                };

                _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
                _personRepository = new PersonRepository(this._schoolContext);
            }

            [FixtureTearDown]
            public void FixtureTearDown()
            {
                _database.PerformDbOperation(DbOperationFlag.DeleteAll);
                _schoolContext.Dispose();
            }

            ...

    As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store.

    To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

        private void InsertTestData(params string[] dataFileNames)
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);

            if (dataFileNames == null)
            {
                return;
            }

            try
            {
                foreach (string fileName in dataFileNames)
                {
                    if (!File.Exists(fileName))
                    {
                        throw new FileNotFoundException(Path.GetFullPath(fileName));
                    }
                    _database.ReadXml(fileName);
                    _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
                }
            }
            catch
            {
                _database.PerformDbOperation(DbOperationFlag.DeleteAll);
                throw;
            }
        }

    This lets us easily insert test data from xml files, in any number and in a controlled order (which is important, because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller open source projects.

    Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

        <?xml version="1.0" encoding="utf-8" ?>
        <School xmlns="http://tempuri.org/School.xsd">
          <Person>
            <PersonID>1</PersonID>
            <LastName>Abercrombie</LastName>
            <FirstName>Kim</FirstName>
            <HireDate>1995-03-11T00:00:00</HireDate>
          </Person>
          <Person>
            <PersonID>2</PersonID>
            <LastName>Barzdukas</LastName>
            <FirstName>Gytis</FirstName>
            <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
          </Person>
          <Person>
            ...

    You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

    Executing some basic tests

    Here are some of the possible tests that can be written with the above preparations in place:

        private const string People = @"..\..\TestData\School.People.xml";
        ...

        [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
        public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
        {
            InsertTestData(People);

            List<string> names =
                _personRepository.GetNameList(NameOrdering.List);

            Assert.Count(34, names);
            Assert.AreEqual("Abercrombie, Kim", names.First());
            Assert.AreEqual("Zheng, Roger", names.Last());
        }

        [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
        [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
        public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
        {
            InsertTestData(People);

            List<string> names =
                _personRepository.GetNameList(NameOrdering.Normal);

            Assert.Count(34, names);
            Assert.AreEqual("Alexandra Walker", names.First());
            Assert.AreEqual("Yan Li", names.Last());
        }

        [Test, TestsOn("PersonRepository.AddPerson")]
        public void AddPerson_CalledOnce_IncreasesCountByOne()
        {
            InsertTestData(People);

            int count = _personRepository.Count;
            _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

            Assert.AreEqual(count + 1, _personRepository.Count);
        }

        [Test, TestsOn("PersonRepository.RemovePerson")]
        public void RemovePerson_CalledOnce_DecreasesCountByOne()
        {
            InsertTestData(People);

            int count = _personRepository.Count;
            _personRepository.RemovePerson(new Person { PersonID = 33 });

            Assert.AreEqual(count - 1, _personRepository.Count);
        }

    Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and it was also harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases.

    Ok, and what if things become somewhat more complex?

    Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case. The following test, for example, deals with the case that there is some sort of invalid input from the caller:

        [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
        [Row(null, typeof(ArgumentNullException))]
        [Row("", typeof(ArgumentException))]
        [Row("NotExistingCourse", typeof(ArgumentException))]
        public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
        {
            var exception = Assert.Throws<RepositoryException>(() =>
                                    _personRepository.GetCourseMembers(courseTitle));
            Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
        }

    Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files, namely, where you sometimes have to spend a lot of effort carefully preparing the required test data).

    Another example, which relies on an underlying 1-n relationship, might be this:

        [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
        public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
        {
            InsertTestData(People, Course, Department, StudentGrade);

            List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

            Assert.Count(4, persons);
            Assert.ForAll(
                persons,
                @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
                "Person has none of the expected IDs.");
        }

    If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files with the test data. And also note that you might have to provide additional data which is not even directly relevant to your test, but is required only to fulfill some integrity needs of the underlying database.

    Conclusion

    The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.

    My personal experience is - as already said in the first part - that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work - it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

    The sample solution

    A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here. (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive, via the 'get source' button on the very right.) The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • "Online-order" component for Joomla

    - by mIRU
    Hello World :) Can somebody help me find an "online ordering" component for Joomla? It should offer the following features: the ability to create multi-page forms (like RSForm), file uploads, support for payment gateways (PayPal, MasterCard, WebMoney, YandexMoney, Moneybookers, Google Checkout, and more), saving data in a database, and a friendly control panel where the admin can check statistics, income and expenditure (including offline transactions), and run reports on the data. A good example of what I'm looking for is this component: http://www.nbill.co.uk/ , but it is too expensive and complicated. I'd be happy for some links or advice. Thanks a lot to everyone who is trying to help me! And sorry for my bad English :-)

    Read the article

  • Combine multiple dataset columns into one dataset

    - by Matt
    I have multiple DataSets that I would like to combine into one. There is a common ID field associated with each row. Calling Merge on the DataSet adds additional rows; what I would like is to combine the additional columns into the existing rows. There are too many fields to do this in one query, and that would make it unmanageable. Each individual query can handle its own ordering to ensure the data is placed in the correct row. For example, let's say I have two queries resulting in two DataSets:

        SELECT ID, colA, colB
        SELECT colC, colD

    The resulting dataset would look like:

        ID  colA  colB  colC  colD
        1   a     b     c     d
        2   e     f     g     h

    Any ideas on ways to accomplish this?
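    One possible approach, assuming each query also selects the common ID field: give both DataTables a primary key on ID, and Merge will then match rows on that key and add the new columns instead of appending rows. A minimal sketch:

        using System.Data;

        static DataTable CombineOnId(DataTable left, DataTable right)
        {
            // With primary keys in place, Merge matches existing rows
            // on ID instead of appending new ones.
            left.PrimaryKey = new[] { left.Columns["ID"] };
            right.PrimaryKey = new[] { right.Columns["ID"] };

            // MissingSchemaAction.Add copies colC/colD over as new columns.
            left.Merge(right, false, MissingSchemaAction.Add);
            return left;
        }

    A side benefit: because rows are matched on the key rather than by position, the per-query ordering no longer has to line up for the combination step to be correct.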

    Read the article

  • How can I mark a group of changes/changesets in SVN, Hg, or Git

    - by sylvanaar
    I would like to mark an arbitrary group of commits/changesets with a label:

        Commit 1  *Mark 1
        Commit 2  *Mark 2
        Commit 3
        Commit 4  *Mark 1
        Commit 5  *Mark 2

    The goal is to easily locate all the changes for a specific mark, and to have that grouping persisted in the VCS directly, as opposed to some outside system like a bug tracker. The location and ordering of the marks need to be arbitrary, and marking should work with both committed/uncommitted and pushed/unpushed changes. In SVN, the best way I know is to edit the commit messages and add some special text that can be searched for, e.g. "**Mark 1", or to make a trivial edit, commit it, and use its commit message to list all the included revisions. Is there a better solution for SVN? Are there equivalent or better solutions for Hg or Git?
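    For what it's worth, the commit-message convention described for SVN carries over to Git and Mercurial, since both can filter the log by message content (the mark syntax below is just an example):

        # Git: embed a searchable mark in the message ...
        git commit -m "Fix parser edge case [mark-1]"
        # ... and list every commit carrying it later
        git log --grep="\[mark-1\]"

        # Mercurial: -k/--keyword does a case-insensitive search of messages
        hg log -k "mark-1"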

    Read the article

  • Building a directory tree from a list of file paths

    - by Abignale
    I am looking for a time-efficient method to parse a list of files into a tree. There can be hundreds of millions of file paths. The brute-force solution would be to split each path on occurrence of a directory separator and traverse the tree, adding directory and file entries by doing string comparisons, but this would be exceptionally slow. The input data is usually sorted alphabetically, so the list would be something like:

        C:\Users\Aaron\AppData\Amarok\Afile
        C:\Users\Aaron\AppData\Amarok\Afile2
        C:\Users\Aaron\AppData\Amarok\Afile3
        C:\Users\Aaron\AppData\Blender\alibrary.dll
        C:\Users\Aaron\AppData\Blender\and_so_on.txt

    Given this ordering, my natural reaction is to partition the directory listings into groups somehow before doing the slow string comparisons, but I'm really not sure how. I would appreciate any ideas. Edit: It would be better if this tree could be lazily loaded from the top down, if possible.
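    One way to exploit the sorted input, sketched in C# below (the path separator, case handling, and node layout are assumptions): keep a stack of the nodes along the previously inserted path, find the longest common prefix between the current and the previous path with plain array comparisons, and only do dictionary lookups and inserts below that fork point.

        using System;
        using System.Collections.Generic;

        sealed class Node
        {
            public string Name;
            public readonly Dictionary<string, Node> Children =
                new Dictionary<string, Node>(StringComparer.OrdinalIgnoreCase);
        }

        static Node BuildTree(IEnumerable<string> sortedPaths)
        {
            var root = new Node { Name = string.Empty };
            var stack = new List<Node> { root };  // nodes of the previous path, root first
            string[] prev = new string[0];

            foreach (string path in sortedPaths)
            {
                string[] parts = path.Split('\\');

                // Thanks to the sorting, the common prefix with the previous
                // path is already in the tree - skip straight past it.
                int common = 0;
                while (common < parts.Length - 1 && common < prev.Length - 1 &&
                       string.Equals(parts[common], prev[common],
                                     StringComparison.OrdinalIgnoreCase))
                {
                    common++;
                }

                // Unwind the stack to the fork point ...
                stack.RemoveRange(common + 1, stack.Count - common - 1);
                Node current = stack[common];

                // ... and insert the remaining components from there on down.
                for (int i = common; i < parts.Length; i++)
                {
                    Node child;
                    if (!current.Children.TryGetValue(parts[i], out child))
                    {
                        child = new Node { Name = parts[i] };
                        current.Children.Add(parts[i], child);
                    }
                    current = child;
                    stack.Add(current);
                }
                prev = parts;
            }
            return root;
        }

    For fully sorted input, this touches each path only beyond its shared prefix with the predecessor. Lazy top-down loading would instead argue for keeping children unexpanded and materializing a directory only when it is first visited.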

    Read the article

  • How to order by a known id value in MySQL

    - by markj
    Hi, is it possible to order a SELECT in MySQL so that a known id comes first? Example:

        $sql = "SELECT id, name, date FROM TABLE ORDER BY "id(10)", id DESC

    Not sure if the above makes sense, but what I would like to do is order first by the known id, which is 10 and which I placed in quotes as "id(10)". I know that cannot work as-is in the query; it's just to point out what I am trying to do. If something similar can work, I would appreciate it. Thanks.
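    In MySQL, one way to do this is to order by a boolean expression first (a sketch based on the query from the question; the table name is a stand-in, since TABLE itself is a reserved word):

        -- rows with id = 10 sort first (the comparison yields 1 for them),
        -- everything else follows in descending id order
        SELECT id, name, date
        FROM some_table
        ORDER BY (id = 10) DESC, id DESC;

        -- equivalent alternative:
        -- ORDER BY FIELD(id, 10) DESC, id DESC;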

    Read the article

  • CakePHP: find neighbors, order on 'name' or 'order'

    - by Brelsnok
    Hi, I have a list of items ordered according to the int field order. I'm creating a gallery in CakePHP 1.2 that has prev and next buttons, and those should link to the previous and next item according to their ordering, not according to their id. To get this result I've included the 'order' parameter in the find call, populated with 'Item.order' => 'DESC'. Still, the result is an id-ordered list. My question is: what am I doing wrong? My controller:

        $this->Item->id = 16;
        $neighbours = $this->Item->find('neighbors', array(
            'order'  => array('Item.order' => 'DESC'),
            'fields' => array('id', 'name')
        ));
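    If I remember the CakePHP 1.2 behaviour correctly, the 'neighbors' find type ignores the 'order' option entirely and navigates via the 'field' and 'value' options instead, so pointing those at the order column should give the desired result (a sketch, untested):

        $item = $this->Item->read(array('id', 'name', 'order'), 16);
        $neighbors = $this->Item->find('neighbors', array(
            'field'  => 'Item.order',            // navigate along this column
            'value'  => $item['Item']['order'],  // relative to the current item
            'fields' => array('id', 'name')
        ));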

    Read the article

  • Refreshing UIScrollView / Animation, timing (Objective-C)

    - by Switch
    I have a UIScrollView that displays a list of data. Right now, when the user adds one more item to the list, I extend the content size of the UIScrollView and scroll down smoothly using setContentOffset with animated:YES. When the user removes an item from the list, I want to resize the content size of the UIScrollView and scroll back up one step, also in an animated fashion. How can I get the ordering right? Right now, if I resize the content size before scrolling back up, the scroll isn't animated. I tried scrolling back up before resizing the content size, but that still didn't give a smooth transition. Is there a way to finish the scrolling animation BEFORE resizing the content size? Thanks
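    One option, assuming iOS 4 or later is available: drive the scroll through an explicit animation block and defer the content-size change to the completion handler (the offset and size values below are placeholders):

        CGPoint newOffset = CGPointMake(0.0f, scrollView.contentOffset.y - rowHeight);
        [UIView animateWithDuration:0.3
                         animations:^{
                             // animating contentOffset 'by hand' gives us a completion hook
                             scrollView.contentOffset = newOffset;
                         }
                         completion:^(BOOL finished) {
                             // shrink the content area only after the scroll has finished
                             scrollView.contentSize = newContentSize;
                         }];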

    Read the article

  • Style: Dot notation vs. message notation in Objective-C 2.0

    - by groundhog
    In Objective-C 2.0 we got the "dot" notation for properties. I've seen various back-and-forths about the merits of dot notation vs. message notation. To keep the responses untainted, I'm not going to take a side in the question. What is your opinion on dot notation vs. message notation for property access? Please try to keep it focused on Objective-C – the one bias I'll put forth is that Objective-C is Objective-C, so a preference that it be like Java or JavaScript isn't valid. Valid commentary concerns technical issues (operation ordering, cast precedence, performance, etc.), clarity (structure vs. object nature, both pro and con!), succinctness, and so on. Note: I'm of the school of rigorous quality and readability in code, having worked on huge projects where code convention and quality are paramount (the write-once, read-a-thousand-times paradigm).
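    To make the comparison concrete for readers who don't write Objective-C daily: for declared properties, both notations compile down to the same message sends, so the debate is purely one of notation.

        // property access
        NSString *name  = person.name;     // dot notation
        NSString *name2 = [person name];   // message notation

        // property assignment
        person.name = @"Alice";            // calls -setName: under the hood
        [person setName:@"Alice"];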

    Read the article

  • Django - Form validation error

    - by Igor G.
    Hello, I have a model like this:

        class Entity(models.Model):
            entity_name = models.CharField(max_length=100)
            entity_id = models.CharField(max_length=30, primary_key=True)
            entity_parent = models.CharField(max_length=100, null=True)
            photo_id = models.CharField(max_length=100, null=True)
            username = models.CharField(max_length=100, null=True)
            date_matched_on = models.CharField(max_length=100, null=True)
            status = models.CharField(max_length=30, default="Checked In")

            def __unicode__(self):
                return self.entity_name

            class Meta:
                app_label = 'match'
                ordering = ('entity_name', 'date_matched_on')
                verbose_name_plural = 'Entities'

    I also have a view like this:

        def photo_match(request):
            ''' performs an update in the db when a user chooses a photo '''
            form = EntityForm(request.POST)
            form.save()

    And my EntityForm looks like this:

        class EntityForm(ModelForm):
            class Meta:
                model = Entity

    My template's form posts the following values back to the view:

        {u'username': [u'admin'], u'entity_parent': [u'PERSON'], u'entity_id': [u'152097'],
         u'photo_id': [u'2200734'], u'entity_name': [u'A.J. McLean'],
         u'status': [u'Checked Out'], u'date_matched_on': [u'5/20/2010 10:57 AM']}

    And form.save() throws this error: "Exception in photo_match: The Entity could not be changed because the data didn't validate." I have been trying to figure out why this is happening but cannot pinpoint the exact problem. I can change my Entities in the admin interface just fine. If anybody has a clue about this I would greatly appreciate it! Thanks, Igor
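    As an aside, the standard way to surface what exactly failed is to check is_valid() before saving and inspect form.errors; for an update, the existing instance also has to be bound to the form (a sketch, assuming the entity already exists in the database):

        def photo_match(request):
            ''' performs an update in the db when a user chooses a photo '''
            entity = Entity.objects.get(pk=request.POST['entity_id'])
            form = EntityForm(request.POST, instance=entity)  # bind to the existing row
            if form.is_valid():
                form.save()
            else:
                # form.errors names the offending fields; note that model fields
                # declared with null=True still need blank=True to be optional
                # in a ModelForm
                print form.errors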

    Read the article

  • Linq query with aggregate function OrderBy

    - by Billy Logan
    Hello everyone, I have the following LINQ to Entities query but am unsure where or how to add the orderby clause:

        var results = from d in db.TBLDESIGNER
                      join s in db.TBLDESIGN on d.ID equals s.TBLDESIGNER.ID
                      where s.COMPLETED && d.ACTIVE
                      let value = new { s, d }
                      let key = new { d.ID, d.FIRST_NAME, d.LAST_NAME }
                      group value by key into g
                      orderby g.Key.FIRST_NAME ascending, g.Key.LAST_NAME ascending
                      select new
                      {
                          ID = g.Key.ID,
                          FirstName = g.Key.FIRST_NAME,
                          LastName = g.Key.LAST_NAME,
                          Count = g.Count()
                      };

    This should be sorted by FIRST_NAME ascending and then LAST_NAME ascending. I have tried adding the ordering as shown, but it has had no effect on the result set. Could someone please provide an example of where the orderby would go, assuming the query above? Thanks, Billy
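    One thing worth trying: apply the ordering as the very last operation, on the already projected result, so that nothing downstream can discard it (method syntax shown for brevity):

        var ordered = results
            .OrderBy(r => r.FirstName)
            .ThenBy(r => r.LastName);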

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >