Search Results

Search found 8219 results on 329 pages for 'less'.


  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface: add_blt(rect src, point dst); There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too. EDIT: Thinking about aaronasterling's answer... could this work? Implement customized region-handler code that can maintain metadata for every rectangle it contains; when the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles. When the optimization run starts, create an empty region handled by the above customized code; call this the master region. Iterate through the blt queue, and for every entry: let srcrect be the source rectangle for the blt being examined; get the intersection of srcrect and the master region into a temp region; remove the temp region from the master region, so the master region no longer covers the temp region; promote srcrect to a region (srcrgn) and subtract the temp region from it; offset the temp region and srcrgn by the vector of the current blt (their union will cover the destination area of the current blt); add to the master region all rects in the temp region, retaining the original source metadata (step one of adding the current blt to the master region); add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region). Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata; two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them. Enumerate the master region's sub-rectangles: every rectangle returned is an optimized blt operation destination, and the associated metadata is the blt operation's source.
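
    Sub-dividing and re-merging with metadata is concrete enough to sketch. A minimal C++ illustration of the tagged sub-rectangle and the merge-candidate test from the last two steps (every name here is invented for illustration, and the actual region splitting and offsetting are left out):

    #include <algorithm>
    #include <iostream>

    // Hypothetical types, made up for illustration.
    struct Rect { int x1, y1, x2, y2; };

    // Metadata every sub-rectangle of the master region carries along.
    struct TaggedRect {
        Rect r;
        int  bltIndex;   // which queued blt this block ultimately belongs to
    };

    // Merge test: candidates must share a full horizontal or vertical span,
    // touch along that span, and carry the same metadata.
    bool canMerge(const TaggedRect& a, const TaggedRect& b)
    {
        bool sameCols = (a.r.x1 == b.r.x1 && a.r.x2 == b.r.x2);   // stacked vertically
        bool sameRows = (a.r.y1 == b.r.y1 && a.r.y2 == b.r.y2);   // side by side
        bool adjacent = (sameCols && (a.r.y2 == b.r.y1 || b.r.y2 == a.r.y1)) ||
                        (sameRows && (a.r.x2 == b.r.x1 || b.r.x2 == a.r.x1));
        return adjacent && a.bltIndex == b.bltIndex;
    }

    // Merging two such rects is just their bounding box; metadata is preserved.
    TaggedRect merge(const TaggedRect& a, const TaggedRect& b)
    {
        TaggedRect out;
        out.r.x1 = std::min(a.r.x1, b.r.x1);
        out.r.y1 = std::min(a.r.y1, b.r.y1);
        out.r.x2 = std::max(a.r.x2, b.r.x2);
        out.r.y2 = std::max(a.r.y2, b.r.y2);
        out.bltIndex = a.bltIndex;
        return out;
    }

    int main()
    {
        TaggedRect a = { { 0, 0, 10, 5 }, 3 };
        TaggedRect b = { { 0, 5, 10, 9 }, 3 };
        if (canMerge(a, b)) {
            TaggedRect m = merge(a, b);
            std::cout << "merged into " << m.r.x2 - m.r.x1 << "x" << m.r.y2 - m.r.y1 << "\n";
        }
        return 0;
    }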

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Note: this does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization. How is the cloud different from virtualization? I tried to find out the pricing of Rackspace, Amazon and similar cloud providers, and found that our current 6 dedicated servers come out cheaper than their pricing, so how can one claim the cloud is cheaper? Is it cheaper only in comparison with normal hosting? We reorganized our infrastructure into a virtual environment to reduce our configuration overhead at times of failure, and we did not have to rewrite any piece of code that was already written for the earlier setup, so moving to virtualization does not require any reprogramming. But the cloud is absolutely different and will require entire reprogramming, right? Is it really worth recoding when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads for training staff, new methods of testing and new deployment schemes; does that justify the cloud's promise of "on demand resource usage"? Our current development architecture is simple server-side ASP.NET web services with no local context, and Flex/Silverlight on the client side, which offers a pretty good REST architecture and is highly scalable. How does the cloud differ from a REST model of deployment? On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage of the cloud? Data guarantees: one of our vendors, hosting some other customer's app on a cloud (one of the most used), lost the entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it is your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantees, they give 99% uptime. However, in most business apps uptime is less important than data integrity. In our 10 years of dedicated hosting experience we have had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data, and I feel it is just a big marketing buzz to sell virtualization in a different form. Size of data: currently all providers charge very heavily for large data; if you are hosting only below 100GB the cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would I want to pay so much for the cloud when there is no data guarantee and nothing is said about redundancy? (I wish SO had a spell checker for Internet Explorer; sorry for the wrong spellings in my post.)

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.] // Service CoolLibrary WcfApp.Core WcfApp.Contracts WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts // Clients CustomerX.App : WcfApp.Contracts CustomerY.App : WcfApp.Contracts CustomerZ.App : WcfApp.Contracts (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.] 1. CoolLibrary.git : CoolLibrary 2. WcfApp.Contracts.git : WcfApp.Contracts 3. WcfApp.git : WcfApp.Core, WcfApp.Services 4. CustomerX.App.git : CustomerX.App 5. CustomerY.App.git : CustomerY.App 6. CustomerZ.App.git : CustomerZ.App How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are: 1. Am I using the right number and layout of repositories? Should I use less or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows: CREATE TABLE hosts ( id SERIAL PRIMARY KEY, name VARCHAR(255) ); CREATE TABLE items ( id SERIAL PRIMARY KEY, description TEXT ); CREATE TABLE host_item ( id SERIAL PRIMARY KEY, host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE, item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE ); There are some other fields as well, but those are not relevant. I want to extract 2 different reports: a list of all hosts with the number of items on each, ordered from highest to lowest count, and a list of all items with the number of hosts each appears on, ordered from highest to lowest count. I have used 2 queries for the purpose. Items with host count: SELECT i.id, i.description, COUNT(hi.id) AS count FROM items AS i LEFT JOIN host_item AS hi ON (i.id=hi.item) GROUP BY i.id ORDER BY count DESC LIMIT 10; Hosts with item count: SELECT h.id, h.name, COUNT(hi.id) AS count FROM hosts AS h LEFT JOIN host_item AS hi ON (h.id=hi.host) GROUP BY h.id ORDER BY count DESC LIMIT 10; The problem is that the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and will likely grow significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly without any delay whatsoever (less than 20ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I was trying different indexes, but seeing as the count is computed, it isn't possible to utilize an index for this. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.

  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way but get the feeling it's a bad idea from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with: CreditCard, Customer, Order. I want it so that: customers can have credit cards (0-n); customers have orders (1-n); orders have one customer (1-1); orders have one credit card (1-1); credit cards can have one customer (1-1) (unique ids, so we can ignore uniqueness of the cc number; husband/wife may share cc instances, etc.). Basically the last part is where the issue shows up: sometimes credit cards are declined and the customer wishes to use a different one. This needs to update which their 'current' card is, but it can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables. Possible solutions: either create the circular design, giving references (cc ref to order, customer ref to cc, customer ref to order), or keep just customer ref to cc and customer ref to order, and create a new table that references all three table ids, with a unique constraint on the order so that only one cc may be current for that order at any time. Essentially both model the same design but translate differently; I like the latter option best at this point because it seems less circular and more central (if that even makes sense). My questions are: what, if any, are the pros and cons of each? What are the pitfalls of circular relationships/dependencies? Is this a valid exception to the rule? Is there any reason I should pick the former over the latter? Thanks, and let me know if there is anything you need clarified/explained. --Update/Edit-- I have noticed an error in the requirements I stated; I basically dropped the ball when trying to simplify things for SO. There is another table for Payments, which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (and, if you really want to know, even other forms of payment). I'm stating this here because I think the underlying issue is still the same and this only really adds another layer of complexity.

  • Error: unidentified identifier "exams" in C++, and I don't know why

    - by user320950
    // basic file operations #include <iostream> #include <fstream> using namespace std; void read_file_in_array(int exam[100][3]); double calculate_total(int exam1[], int exam2[], int exam3[]); // function that calcualates grades to see how many 90,80,70,60 //void display_totals(); int main() { int go,go2,go3; go=read_file_in_array(exam); go2=calculate_total(exam1,exam2,exam3); //go3=display_totals(); cout << go,go2,go3; return 0; }/* int display_totals() { int grade_total; grade_total=calculate_total(exam1,exam2,exam3); return 0; } */ double calculate_total(int exam1[],int exam2[],int exam3[]) { int calc_tot,above90=0, above80=0, above70=0, above60=0,i,j; calc_tot=read_file_in_array(exam); for(i=0;i<100;i++) { exam1[i]=exam[100][0]; exam2[i]=exam[100][1]; exam3[i]=exam[100][2]; if(exam1[i] <=90 && exam1[i] >=100) { above90++; cout << above90; } } return exam3[i]; } void read_file_in_array(double exam[100][3]) { ifstream infile; int num, i=0,j=0; infile.open("grades.txt");// file containing numbers in 3 columns if(infile.fail()) // checks to see if file opended { cout << "error" << endl; } while(!infile.eof()) // reads file to end of line { for(i=0;i<100;i++) // array numbers less than 100 { for(j=0;j<3;j++) // while reading get 1st array or element infile >> exam[i][j]; infile >> exam[i][j]; infile >> exam[i][j]; cout << exam[i][j] << endl; } exam[i][j]=exam1[i]; exam[i][j]=exam2[i]; exam[i][j]=exam3[i]; } infile.close(); }
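
    For what it's worth, the "unidentified identifier" usually means exactly what it says: exam, exam1, exam2 and exam3 are used inside main and calculate_total without ever being declared there, and the prototype takes an int array while the definition takes a double array (and returns void, yet is assigned to an int). A hedged sketch of one way the pieces could be declared consistently, assuming grades.txt holds three numbers per line (note the original's <=90 && >=100 test can never be true, so this uses >= 90 && <= 100):

    #include <iostream>
    #include <fstream>
    using namespace std;

    const int ROWS = 100;
    const int COLS = 3;

    // Reads up to ROWS lines of three grades each; returns how many rows were read.
    int read_file_in_array(double exam[ROWS][COLS])
    {
        ifstream infile("grades.txt");
        if (!infile) { cout << "error" << endl; return 0; }
        int rows = 0;
        while (rows < ROWS && infile >> exam[rows][0] >> exam[rows][1] >> exam[rows][2])
            ++rows;
        return rows;
    }

    // Counts how many first-exam scores are 90 or above, as one example total.
    int calculate_total(const double exam[ROWS][COLS], int rows)
    {
        int above90 = 0;
        for (int i = 0; i < rows; ++i)
            if (exam[i][0] >= 90 && exam[i][0] <= 100)
                ++above90;
        return above90;
    }

    int main()
    {
        double exam[ROWS][COLS];              // declared here, then passed around
        int rows = read_file_in_array(exam);
        cout << "scores of 90 or above on exam 1: "
             << calculate_total(exam, rows) << endl;
        return 0;
    }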

  • How to manipulate file paths intelligently in .Net 3.0?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (code is sensitive to a constant @"pending_install\" vs @"pending_install" which I did not like and changed (long story, but there was a good opportunity for constant reuse). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during a .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to sucks less? Thanks!

  • Having trouble doing an Update with a Linq to Sql object

    - by Pure.Krome
    Hi folks, I've got a simple LINQ to SQL object. I grab it from the database, change a field and then save. No rows have been updated. :( When I check the full SQL that is sent over the wire, I notice that it updates the row not via the primary key but on all the fields via the where clause. Is this normal? I would have thought that it would be easy to update the field(s) with the where clause linking on the primary key, instead of where'ing (is that a word :P) on each field. Here's the code... using (MyDatabase db = new MyDatabase()) { var boardPost = (from bp in db.BoardPosts where bp.BoardPostId == boardPostId select bp).SingleOrDefault(); if (boardPost != null && boardPost.BoardPostId > 0) { boardPost.ListId = listId; // This changes the value from 0 to 'x' db.SubmitChanges(); } } and here's some sample SQL... exec sp_executesql N'UPDATE [dbo].[BoardPost] SET [ListId] = @p6 WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',@p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,@p5='2009-09-25 12:32:12.7200000',@p6=72 Now, I know there's a datetime field in this update, and when I checked the DB its value was/is '2009-09-25 12:32:12.720' (fewer zeros than above), so I'm not sure if that is messing up the where clause condition... but still! Should it do a where clause on the PKs, if anything, for speed? Yes/no? UPDATE: After reading nitzmahone's reply, I tried playing around with the optimistic concurrency on some values, and it still didn't work :( So then I tried some new stuff: with the optimistic concurrency happening, it includes a where clause on the field it's trying to update, and when that happens, it doesn't work. So, in the above SQL, the where clause looks like this: WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>) This doesn't sound right! The value in the DB is null before I do the update, but when I add the ListId value to the where clause (or more to the point, when L2S adds it because of the optimistic concurrency), it fails to find/match the row. wtf?

  • small scale web site - global javascript file style/format/pattern - improving maintainability

    - by yaya3
    I frequently create (and inherit) small to medium websites where I have the following sort of code in a single file (normally named global.js or application.js or projectname.js). If functions get big, I normally put them in a seperate file, and call them at the bottom of the file below in the $(document).ready() section. If I have a few functions that are unique to certain pages, I normally have another switch statement for the body class inside the $(document).ready() section. How could I restructure this code to make it more maintainable? Note: I am less interested in the functions innards, more so the structure, and how different types of functions should be dealt with. I've also posted the code here - http://pastie.org/999932 in case it makes it any easier var ProjectNameEnvironment = {}; function someFunctionUniqueToTheHomepageNotWorthMakingConfigurable () { $('.foo').hide(); $('.bar').click(function(){ $('.foo').show(); }); } function functionThatIsWorthMakingConfigurable(config) { var foo = config.foo || 700; var bar = 200; return foo * bar; } function globallyRequiredJqueryPluginTrigger (tooltip_string) { var tooltipTrigger = $(tooltip_string); tooltipTrigger.tooltip({ showURL: false ... }); } function minorUtilityOneLiner (selector) { $(selector).find('li:even').not('li ul li').addClass('even'); } var Lightbox = {}; Lightbox.setup = function(){ $('li#foo a').attr('href','#alpha'); $('li#bar a').attr('href','#beta'); } Lightbox.init = function (config){ if (typeof $.fn.fancybox =='function') { Lightbox.setup(); var fade_in_speed = config.fade_in_speed || 1000; var frame_height = config.frame_height || 1700; $(config.selector).fancybox({ frameHeight : frame_height, callbackOnShow: function() { var content_to_load = config.content_to_load; ... }, callbackOnClose : function(){ $('body').height($('body').height()); } }); } else { if (ProjectNameEnvironment.debug) { alert('the fancybox plugin has not been loaded'); } } } // ---------- order of execution ----------- $(document).ready(function () { urls = urlConfig(); (function globalFunctions() { $('.tooltip-trigger').each(function(){ globallyRequiredJqueryPluginTrigger(this); }); minorUtilityOneLiner('ul.foo') Lightbox.init({ selector : 'a#a-lightbox-trigger-js', ... }); Lightbox.init({ selector : 'a#another-lightbox-trigger-js', ... }); })(); if ( $('body').attr('id') == 'home-page' ) { (function homeFunctions() { someFunctionUniqueToTheHomepageNotWorthMakingConfigurable (); })(); } });

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks etc), here are some of the performance requirements, Very fast turn-around time (by this I mean that any new data (such as a new message on a forum) should be available in the search results very soon (less than a minute)) I need to discard old documents on a fairly regular basis to ensure that the search results are not dated. Last but not least, the search application needs to be responsive. (latency on the order of 100 milliseconds, and should support at least 10 qps) All of the requirements I have currently can be met w/o using Lucene (and that would let me satisfy all 1,2 and 3), but I am anticipating other requirements in the future (like search relevance etc) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions, a. I read that the optimize() method in the IndexWriter class is expensive, and should not be used by applications that do frequent updates, what are the alternatives? b. In order to do incremental updates, I need to keep committing new data, and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? c. I know that Lucene provides a delete method, which lets you delete all documents that match a certain query, in my case, I need to delete all documents which are older than a certain age, now one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field since I think that the one created by lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings? I know these are very open questions, so I am not looking for a detailed answer, I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.

  • Maps with a nested vector

    - by wawiti
    For some reason the compiler won't let me retrieve the vector of integers from the map that I've created, I want to be able to overwrite this vector with a new vector. The error the compiler gives me is ridiculous. Thanks for your help!! The compiler didn't like this part of my code: line_num = miss_words[word_1]; Error: [Wawiti@localhost Lab2]$ g++ -g -Wall *.cpp -o lab2 main.cpp: In function ‘int main(int, char**)’: main.cpp:156:49: error: no match for ‘operator=’ in ‘miss_words.std::map<_Key, _Tp, _Compare, _Alloc>::operator[]<std::basic_string<char>, std::vector<int>, std::less<std::basic_string<char> >, std::allocator<std::pair<const std::basic_string<char>, std::vector<int> > > >((*(const key_type*)(& word_1))) = line_num.std::vector<_Tp, _Alloc>::push_back<int, std::allocator<int> >((*(const value_type*)(& line)))’ main.cpp:156:49: note: candidate is: In file included from /usr/lib/gcc/x86_64-redhat->linux/4.7.2/../../../../include/c++/4.7.2vector:70:0, from header.h:19, from main.cpp:15: /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = int; _Alloc = std::allocator<int>] /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: no known conversion for argument 1 from ‘void’ to ‘const std::vector<int>&’ CODE: map<string, vector<int> > miss_words; // Creates a map for misspelled words string word_1; // String for word; string sentence; // To store each line; vector<int> line_num; // To store line numbers ifstream file; // Opens file to be spell checked file.open(argv[2]); int line = 1; while(getline(file, sentence)) // Reads in file sentence by sentence { sentence=remove_punct(sentence); // Removes punctuation from sentence stringstream pars_sentence; // Creates stringstream pars_sentence << sentence; // Places sentence in a stringstream while(pars_sentence >> word_1) // Picks apart sentence word by word { if(dictionary.find(word_1)==dictionary.end()) { line_num = miss_words[word_1]; //Compiler doesn't like this miss_words[word_1] = line_num.push_back(line); } } line++; // Increments line marker }
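
    The error boils down to the fact that std::vector's push_back returns void, so its result cannot be assigned back into the map (which is what the "no known conversion from 'void'" in the message is saying); the usual pattern is to let operator[] hand back a reference to the stored vector and push_back onto that directly. A stripped-down sketch with the dictionary check and file reading left out:

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    int main()
    {
        std::map<std::string, std::vector<int> > miss_words;   // word -> line numbers

        std::string sentence = "teh quick brown fox";           // stand-in for one file line
        int line = 1;

        std::istringstream pars_sentence(sentence);
        std::string word_1;
        while (pars_sentence >> word_1) {
            // operator[] default-constructs an empty vector the first time a word
            // is seen, so the push_back can be done in place:
            miss_words[word_1].push_back(line);
        }

        for (std::map<std::string, std::vector<int> >::const_iterator it = miss_words.begin();
             it != miss_words.end(); ++it)
            std::cout << it->first << " first seen on line " << it->second.front() << "\n";
        return 0;
    }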

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I've noticed more nuance and more areas I could study in depth. In part, I've gone from thinking of myself, at one point, as a "guru" to thinking of myself as much less, even mediocre or inadequate. Is this normal, or is it a sign of destructive, excessive ambition? Background I started to program when I was still a kid, about 10 or 11 years old. I really enjoy my work and never get bored of it. It's amazing how somebody can be paid for what he really likes to do and would be doing anyway, even for free. When I first started to program, I felt proud of what I was doing; each application I built was a success to me, and after 2-3 years I had the feeling that I was a coding guru. It was a nice feeling. ;-) But the longer I was in the field and the more types of software I started to develop, the more I started to feel that I was completely wrong in thinking I was a guru. I felt that I wasn't even a mediocre developer. Each new field I start to work in gives me this feeling. When I once developed a device driver for a client, I saw how much I needed to learn about device drivers. When I developed a video filter for an application, I saw how much I still needed to learn about DirectShow, color spaces, and all the theory behind that. The worst was when I started to learn algorithms, several years ago. I knew the basic structures and algorithms: sorting, some types of trees, some hash tables, strings, etc., and when I really wanted to learn one group of structures I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience, and I still feel that I'm not a proficient developer when I think about the things that others do in the industry.

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
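
    One modern answer to "mix and match without virtual functions" is policy-based design: each trait (update rule, line search, stopping test) becomes a template parameter that contributes both its state and its member functions, and an optimizer is stamped out per combination at compile time. A rough, hedged sketch with all the numerics stubbed out and every name invented:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    typedef std::vector<double> Vec;

    // --- update policies: each carries its own kind of state ------------------
    struct LmqnUpdate {
        Vec history;                                     // vector-sized state
        Vec direction(const Vec& grad) { return grad; }  // stub
    };
    struct BfgsUpdate {
        std::vector<Vec> approxHessian;                  // matrix-sized state
        Vec direction(const Vec& grad) { return grad; }  // stub
    };

    // --- line-search policies --------------------------------------------------
    struct GradientLineSearch {
        double step(const Vec&) { return 1.0; }          // stub: would use gradients
    };
    struct ValueOnlyLineSearch {
        double step(const Vec&) { return 0.5; }          // stub: function values only
    };

    // --- stopping policies -----------------------------------------------------
    struct FixedIterations {
        int left;
        explicit FixedIterations(int n) : left(n) {}
        bool done() { return --left < 0; }
    };

    // --- the optimizer mixes the three traits at compile time -------------------
    template <class Update, class LineSearch, class Stop>
    class Optimizer : private Update, private LineSearch {
    public:
        explicit Optimizer(Stop stop) : stop_(stop) {}
        void run(Vec x) {
            while (!stop_.done()) {
                Vec grad(x.size(), 0.0);                 // stub gradient
                Vec dir = this->direction(grad);         // supplied by Update
                double a = this->step(dir);              // supplied by LineSearch
                for (std::size_t i = 0; i < x.size(); ++i)
                    x[i] += a * dir[i];
            }
            std::cout << "finished\n";
        }
    private:
        Stop stop_;
    };

    int main()
    {
        Optimizer<BfgsUpdate, ValueOnlyLineSearch, FixedIterations> opt(FixedIterations(10));
        opt.run(Vec(3, 1.0));
        return 0;
    }

    Tag dispatch or enable_if usually only become necessary when a single function body has to branch on a trait; when each trait simply plugs in whole functions plus state, plain template parameters like this tend to be enough.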

  • How do I write object classes effectively when dealing with table joins?

    - by Chris
    I should start by saying I'm not now, nor do I have any delusions I'll ever be, a professional programmer, so most of my skills have been learned from experience, very much as a hobby. I learned PHP as it seemed a good, simple introduction in certain areas, and it allowed me to design simple web applications. When I learned about objects, classes etc., the tutor's basic examples covered the idea that, as a rule of thumb, each database table should have its own class. While that worked well for the photo gallery project we wrote, as it had very simple MySQL queries, it's not working so well now that my projects are getting more complex. If I require data from two separate tables, which requires a table join, I've instead been ignoring the class altogether and handling it on a case-by-case basis, or, even worse, combining some of the data into the class and the rest as a separate entity and doing two queries, which to me seems inefficient. As an example, when viewing content on a forum I wrote, if you view a thread, I retrieve data from the threads table, the posts table and the user table. The data from the user and posts tables is retrieved via a join and not instantiated as an object, whereas the thread data is fetched using my Threads class. So how do I get from my current state of affairs to something a little less 'stupid', for want of a better word? Right now I have a DB class that deals with the connection and escaping values etc., a parent db query class that deals with the common queries and methods, and all of the other classes (Thread, Upload, Session, Photo, and ones that aren't used yet: Post, User, etc.) are children of that. Do I make a big posts class that has the relevant extra attributes that I retrieve from the users (and potentially threads) table? Do I have separate classes that populate each of their relevant attributes with a single query? If so, how do I do that? Because of the way my classes are written, based on what I was taught, my db update-row and insert methods both just take the attributes as an array and update all of them; if I have extra attributes from other db tables in each class, how do I rewrite those methods, as obviously updating automatically like that would result in errors? In short, I think my understanding is limited right now and I'd like some pointers on the fundamentals of how to write more complex classes.

  • Error C2059: syntax error: ']', and I can't figure out why this is coming up in C++

    - by user320950
    void display_totals(); int exam1[100][3];// array that can hold 100 numbers for 1st column int exam2[100][3];// array that can hold 100 numbers for 2nd column int exam3[100][3];// array that can hold 100 numbers for 3rd column int main() { int go,go2,go3; go=read_file_in_array; go2= calculate_total(exam1[],exam2[],exam3[]); go3=display_totals; cout << go,go2,go3; return 0; } void display_totals() { int grade_total; grade_total=calculate_total(exam1[],exam2[],exam3[]); } int calculate_total(int exam1[],int exam2[],int exam3[]) { int calc_tot,above90=0, above80=0, above70=0, above60=0,i,j; calc_tot=read_file_in_array(exam[100][3]); exam1[][]=exam[100][3]; exam2[][]=exam[100][3]; exam3[][]=exam[100][3]; for(i=0;i<100;i++); { if(exam1[i] <=90 && exam1[i] >=100) { above90++; cout << above90; } } return exam1[i],exam2[i],exam3[i]; } int read_file_in_array(int exam[100][3]) { ifstream infile; int num, i=0,j=0; infile.open("grades.txt");// file containing numbers in 3 columns if(infile.fail()) // checks to see if file opended { cout << "error" << endl; } while(!infile.eof()) // reads file to end of line { for(i=0;i<100;i++); // array numbers less than 100 { for(j=0;j<3;j++); // while reading get 1st array or element infile >> exam[i][j]; cout << exam[i][j] << endl; } } infile.close(); return exam[i][j]; }

  • How to manipulate file paths intelligently in .Net 3.5?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (code is sensitive to a constant @"pending_install\" vs @"pending_install" which I did not like and changed (long story, but there was a good opportunity for constant reuse). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\"")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during a .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to suck less?

  • How would you structure your entity model for storing arbitrary key/value data with different data t

    - by Nathan Ridley
    I keep coming across scenarios where it will be useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data type rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure as to whether I should use full third normal form and separate the keys into a separate table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple and store the string keys in the value table(s) and accept the duplication of strings. Old/bad: This solution makes adding additional values a pain in a fluid environment because the table needs to be modified regularly. MyTable ============================ ID Key1 Key2 Key3 int int string date ---------------------------- 1 Value1 Value2 Value3 2 Value4 Value5 Value6 Single Table Solution This solution allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-nulll data. DataValues ============================================================= ID RecordID Key IntValue StringValue DateValue int int string int string date ------------------------------------------------------------- 1 1 Key1 Value1 NULL NULL 2 1 Key2 NULL Value2 NULL 3 1 Key3 NULL NULL Value3 4 2 Key1 Value4 NULL NULL 5 2 Key2 NULL Value5 NULL 6 2 Key3 NULL NULL Value6 Multiple-Table Solution This solution allows for more concise purposing of each table, though the code needs to know the data type in advance as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are less columns that need indexing. IntegerValues =============================== ID RecordID Key Value int int string int ------------------------------- 1 1 Key1 Value1 2 2 Key1 Value4 StringValues =============================== ID RecordID Key Value int int string string ------------------------------- 1 1 Key2 Value2 2 2 Key2 Value5 DateValues =============================== ID RecordID Key Value int int string date ------------------------------- 1 1 Key3 Value3 2 2 Key3 Value6 How do you approach this problem? Which solution is better? Also, should the key column be separated into a separate table and referenced via a foreign key or be should it be kept in the value table and bulk updated if for some reason the key name changes?

  • C - What is the proper format to allow a function to show an error was encountered?

    - by BrainSteel
    I have a question about what a function should do if the arguments to said function don't line up quite right, through no fault of the function call. Since that sentence doesn't make much sense, I'll offer my current issue. To keep it simple, here is the most relevant and basic function I have. float getYValueAt(float x, PHYS_Line line, unsigned short* error) *error = 0; if(x < line.start.x || x > line.end.x){ *error = 1; return -1; } if(line.slope.value != 0){ //line's equation: y - line.start.y = line.slope.value(x - line.start.x) return line.slope.value * (x - line.start.x) + line.start.y; } else if(line.slope.denom == 0){ if(line.start.x == x) return line.start.y; else{ *error = 1; return -1; } } else if(line.slope.num == 0){ return line.start.y; } } The function attempts to find the point on a line, given a certain x value. However, under some circumstances, this may not be possible. For example, on the line x = 3, if 5 is passed as a value, we would have a problem. Another problem arises if the chosen x value is not within the interval the line is on. For this, I included the error pointer. Given this format, a function call could work as follows: void foo(PHYS_Line some_line){ unsigned short error = 0; float y = getYValueAt(5, some_line, &error); if(error) fooey(); else do_something_with_y(y); } My question pertains to the error. Note that the value returned is allowed to be negative. Returning -1 does not ensure that an error has occurred. I know that it is sometimes preferred to use the following method to track an error: float* getYValueAt(float x, PHYS_Line line); and then return NULL if an error occurs, but I believe this requires dynamic memory allocation, which seems even less sightly than the solution I was using. So, what is standard practice for an error occurring?
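
    Since the worry is exactly that -1 can be a legitimate y value, one common alternative to the out-parameter is returning the status and the value together in a small struct, which needs neither a sentinel nor dynamic allocation. A hedged sketch in plain C, with PHYS_Line flattened and the math simplified, so the names are illustrative only:

    #include <stdio.h>

    typedef struct { float start_x, end_x, start_y, slope; } PHYS_Line;

    typedef struct {
        int   ok;      /* nonzero if value is meaningful */
        float value;
    } FloatResult;

    FloatResult getYValueAt(float x, PHYS_Line line)
    {
        FloatResult r;
        if (x < line.start_x || x > line.end_x) {   /* off the segment: error */
            r.ok = 0;
            r.value = 0.0f;
            return r;
        }
        r.ok = 1;
        r.value = line.start_y + line.slope * (x - line.start_x);
        return r;
    }

    int main(void)
    {
        PHYS_Line line = { 0.0f, 10.0f, 1.0f, 2.0f };
        FloatResult y = getYValueAt(5.0f, line);
        if (y.ok)
            printf("y = %f\n", y.value);
        else
            printf("x was outside the line segment\n");
        return 0;
    }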

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? I mostly use C, however this question is language agnostic. Its also subjective, so I'll tag it as such. Many individual projects set their own various coding standards, a guide to adjust your coding style. Many enforce an 80 column limit on code, i.e. don't force a dumb 80 x 25 terminal to wrap your lines in someone else's editor of choice if they are stuck with such a display, don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines. My question is, in this day and age, is that requirement more of a pest than a helper? Does anyone still login via the local console with no framebuffer and actually edit code? If so, how often and why cant you use SSH? I help to manage a few open source projects, I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e. a --help /h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when its easier to read on one line. I can also see the case for adhering to an 80 column limit if you're writing code that will be used on micro controllers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts? Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit", I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80 column limit is still a good idea. I don't want a guide to my own "c-style", I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time. Edit 2 question |= COMMUNITY_WIKI

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer--suggestions of all kinds are welcome.) My container is a std::multiset that holds Event structs: typedef std::multiset< Event, std::less< Event > > EventPQ; with the Event structs sorted by their double time members: struct Event { public: explicit Event(double t) : time(t), eventID(), hostID(), s() {} Event(double t, int eid, int hid, int stype) : time(t), eventID( eid ), hostID( hid ), s(stype) {} bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); } double time; ... }; The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious: double t = 0.0; double nextTimeStep = t + EPID_DELTA_T; EventPQ::iterator eventIter = currentEvents.begin(); while ( t < EPID_SIM_LENGTH ) { // Add some events to currentEvents while ( ( *eventIter ).time < nextTimeStep ) { Event thisEvent = *eventIter; t = thisEvent.time; executeEvent( thisEvent ); eventCtr++; currentEvents.erase( eventIter ); eventIter = currentEvents.begin(); } t = nextTimeStep; nextTimeStep += EPID_DELTA_T; } void Simulation::addEvent( double et, int eid, int hid, int s ) { assert( currentEvents.find( Event(et) ) == currentEvents.end() ); Event thisEvent( et, eid, hid, s ); currentEvents.insert( thisEvent ); }
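
    One debugging step that may help is to strip the event loop down to a minimal, self-contained harness that only touches the multiset through begin(), erase() and insert(); if ordering breaks in something this small, the comparator or the stored times are suspect, and if it never does, the corruption is probably coming from elsewhere in the program. A rough sketch, with Event reduced to just a time and the schedule sizes made up:

    #include <cassert>
    #include <cstdio>
    #include <cstdlib>
    #include <set>

    struct Event {
        explicit Event(double t) : time(t) {}
        bool operator<(const Event& rhs) const { return time < rhs.time; }
        double time;
    };

    typedef std::multiset<Event> EventPQ;

    int main()
    {
        EventPQ currentEvents;
        double t = 0.0;

        // seed some events with arbitrary times
        for (int i = 0; i < 100; ++i)
            currentEvents.insert(Event((double)std::rand() / RAND_MAX));

        for (int step = 0; step < 100000 && !currentEvents.empty(); ++step) {
            Event next = *currentEvents.begin();
            currentEvents.erase(currentEvents.begin());

            assert(next.time >= t);      // fires if events ever come out of order
            t = next.time;

            // schedule a couple of later events, the way executeEvent() would
            currentEvents.insert(Event(t + 0.5 * std::rand() / RAND_MAX));
            currentEvents.insert(Event(t + 1.0));
        }
        std::printf("processed cleanly up to t = %f\n", t);
        return 0;
    }

    If the assert never fires here but does in the real program, attention shifts to executeEvent and anything else that writes to currentEvents or through stray pointers while the loop is running.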

  • How do I make a GUI that behaves like this?

    - by Karl Knechtel
    This is difficult to explain without illustration, so - behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work: I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important), and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are: How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)? How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, BTW?) doesn't seem to do anything. How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via Dock = DockStyle.Top)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible. How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)? Is there a built-in control suitable for hex editing or am I going to have to build something myself? ... And should I be using something else (perhaps something more capable?) I've heard WPF mentioned, but I understand that this involves XML and I really want to build a GUI from XML - I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout.

  • jquery in ajax loaded content

    - by Kim Gysen
    My application is supposed to be a single page application and I have the following code that works fine: home.php: <div id="container"> </div> accordion.php: //Click functions: load content $('#parents').click(function(){ //Load parent in container $('#container').load('http://www.blabla.com/entities/parents/parents.php'); }); parents.php: <div class="entity_wrapper"> Some divs and selectors </div> <script type="text/javascript"> $(document).ready(function(){ //Some jQuery / javascript }); </script> So the content loads fine, while the scripts dynamically loaded execute fine as well. I apply this system repetitively and it continues to work smoothly. I've seen that there are a lot of frameworks available on SPA's (such as backbone.js) but I don't understand why I need them if this works fine. From the backbone.js website: When working on a web application that involves a lot of JavaScript, one of the first things you learn is to stop tying your data to the DOM. It's all too easy to create JavaScript applications that end up as tangled piles of jQuery selectors and callbacks, all trying frantically to keep data in sync between the HTML UI, your JavaScript logic, and the database on your server. For rich client-side applications, a more structured approach is often helpful. Well, I totally don't have the feeling that I'm going through the stuff they mention. Adding the javascript per page works really well for me. They are html containers with clear scope and the javascript is just related to that part. More over, the front end doesn't do that much, most of the logic is managed based on Ajax calls to external PHP scripts. Sometimes the js can be a bit more extended for some functionalities, but all just loads as smooth in less than a second. If you think that this is bad coding, please tell me why I cannot do this and more importantly, what is the alternative I should apply. At the moment, I really don't see a reason on why I would change this approach as it just works too well. I'm kinda stuck on this question because it just worries me sick as it seems to easy to be true. Why would people go through hard times if it would be as easy as this...

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here, both doing exactly the same task: they just set a boolean array/vector to the value true. The program using a vector takes 27 seconds to run, whereas the program using an array 5 times greater in size takes less than 1 s. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient? Program using vectors: #include <iostream> #include <vector> #include <ctime> using namespace std; int main(){ const int size = 2000; time_t start, end; time(&start); vector<bool> v(size); for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - 27 seconds Program using an array: #include <iostream> #include <ctime> using namespace std; int main(){ const int size = 10000; // 5 times more size time_t start, end; time(&start); bool v[size]; for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - < 1 second Platform - Visual Studio 2008 OS - Windows Vista 32 bit SP 1 Processor Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz Memory (RAM) 1.00 GB Thanks Amare
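
    Two things are worth separating before blaming std::vector in general: vector<bool> is a packed, bit-twiddling specialization whose operator[] returns a proxy object, and an unoptimized VC++ debug build adds checked-iterator overhead on top of that, while the raw array lives entirely on the stack. A hedged way to test the first point is to swap in vector<char>, which has none of the proxy machinery, and compare both under the same build settings:

    #include <ctime>
    #include <iostream>
    #include <vector>
    using namespace std;

    template <class Container>
    double fill_repeatedly(Container& v, int size)
    {
        time_t start, end;
        time(&start);
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                v[i] = true;               // same access pattern as the question
        time(&end);
        return difftime(end, start);
    }

    int main()
    {
        const int size = 2000;

        vector<bool> packed(size);         // bit-packed specialization with proxy refs
        vector<char> plain(size);          // ordinary contiguous bytes

        cout << "vector<bool>: " << fill_repeatedly(packed, size) << " s\n";
        cout << "vector<char>: " << fill_repeatedly(plain, size)  << " s\n";
        return 0;
    }

    If vector<char> tracks the array's speed while vector<bool> lags, the difference is the specialization rather than vectors in general; if both lag, the build settings (debug iterators, missing /O2) are the likelier culprit.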

  • Java exercise - display table with 2d array

    - by TheHacker66
    I'm struggling to finish a java exercise, it involves using 2d arrays to dinamically create and display a table based on a command line parameter. Example: java table 5 +-+-+-+-+-+ |1|2|3|4|5| +-+-+-+-+-+ |2|3|4|5|1| +-+-+-+-+-+ |3|4|5|1|2| +-+-+-+-+-+ |4|5|1|2|3| +-+-+-+-+-+ |5|1|2|3|4| +-+-+-+-+-+ What i have done so far: public static void main(String[] args) { int num = Integer.parseInt(args[0]); String[][] table = new String[num*2+1][num]; int[] numbers = new int[num]; int temp = 0; for(int i=0; i<numbers.length; i++) numbers[i] = i+1; // wrong for(int i=0; i<table.length; i++){ for(int j=0; j<num;j++){ if(i%2!=0){ temp=numbers[0]; for(int k=1; k<numbers.length; k++){ numbers[k-1]=numbers[k]; } numbers[numbers.length-1]=temp; for(int l=0; l<numbers.length; l++){ table[i][j] = "|"+numbers[l]; } } else table[i][j] = "+-"; } } for(int i=0; i<table.length; i++){ for(int j=0; j<num; j++) System.out.print(table[i][j]); if(i%2==0) System.out.print("+"); else System.out.print("|"); System.out.println();} } This doesn't work, since it prints 1|2|3|4 in every row, which isn't what i need. I found the issue, and it's because the first for loop changes the array order more times than needed and basically it returns as it was at the beginning. I know that probably there's a way to achieve this by writing more code, but i always tend to nest as much as possible to "optimize" the code while i write it, so that's why i tried solving this exercise by using less variables and loops as possible. Thanks in advance for your help!

  • Jquery help : How to implement a Previous/Next & Play/Pause toggle for this script

    - by rameshelamathi
    (function($){ // Creating the sweetPages jQuery plugin: $.fn.sweetPages = function(opts){ // If no options were passed, create an empty opts object if(!opts) opts = {}; var resultsPerPage = opts.perPage || 3; var swDiv = opts.getSwDiv || "swControls"; // The plugin works best for unordered lists, althugh ols would do just as well: var ul = this; var li = ul.find('li'); li.each(function(){ // Calculating the height of each li element, and storing it with the data method: var el = $(this); el.data('height',el.outerHeight(true)); }); // Calculating the total number of pages: var pagesNumber = Math.ceil(li.length/resultsPerPage); // If the pages are less than two, do nothing: if(pagesNumber<2) return this; // Creating the controls div: //var swControls = $('<div class="swControls">'); var swControls = $('<div class='+swDiv+'>'); for(var i=0;i<pagesNumber;i++) { // Slice a portion of the lis, and wrap it in a swPage div: li.slice(i*resultsPerPage,(i+1)*resultsPerPage).wrapAll('<div class="swPage" />'); // Adding a link to the swControls div: swControls.append('<a href="" class="swShowPage">'+(i+1)+'</a>'); } ul.append(swControls); var maxHeight = 0; var totalWidth = 0; var swPage = ul.find('.swPage'); swPage.each(function(){ // Looping through all the newly created pages: var elem = $(this); var tmpHeight = 0; elem.find('li').each(function(){tmpHeight+=$(this).data('height');}); if(tmpHeight>maxHeight) maxHeight = tmpHeight; totalWidth+=elem.outerWidth(); elem.css('float','left').width(ul.width()); }); swPage.wrapAll('<div class="swSlider" />'); // Setting the height of the ul to the height of the tallest page: //ul.height(maxHeight); var swSlider = ul.find('.swSlider'); swSlider.append('<div class="clear" />').width(totalWidth); var hyperLinks = ul.find('a.swShowPage'); hyperLinks.click(function(e){ // If one of the control links is clicked, slide the swSlider div // (which contains all the pages) and mark it as active: $(this).addClass('active').siblings().removeClass('active'); swSlider.stop().animate({'margin-left':-(parseInt($(this).text())-1)*ul.width()},'slow'); e.preventDefault(); }); // Mark the first link as active the first time this code runs: hyperLinks.eq(0).addClass('active'); // Center the control div: swControls.css({ 'right':'10%', 'margin-left':-swControls.width()/2 }); return this; }})(jQuery);
