Search Results

Search found 8219 results on 329 pages for 'less'.


  • Java exercise - display table with 2d array

    - by TheHacker66
    I'm struggling to finish a Java exercise. It involves using 2D arrays to dynamically create and display a table based on a command-line parameter. Example: java table 5

        +-+-+-+-+-+
        |1|2|3|4|5|
        +-+-+-+-+-+
        |2|3|4|5|1|
        +-+-+-+-+-+
        |3|4|5|1|2|
        +-+-+-+-+-+
        |4|5|1|2|3|
        +-+-+-+-+-+
        |5|1|2|3|4|
        +-+-+-+-+-+

    What I have done so far:

        public static void main(String[] args) {
            int num = Integer.parseInt(args[0]);
            String[][] table = new String[num*2+1][num];
            int[] numbers = new int[num];
            int temp = 0;
            for(int i=0; i<numbers.length; i++)
                numbers[i] = i+1; // wrong
            for(int i=0; i<table.length; i++){
                for(int j=0; j<num;j++){
                    if(i%2!=0){
                        temp=numbers[0];
                        for(int k=1; k<numbers.length; k++){
                            numbers[k-1]=numbers[k];
                        }
                        numbers[numbers.length-1]=temp;
                        for(int l=0; l<numbers.length; l++){
                            table[i][j] = "|"+numbers[l];
                        }
                    }
                    else
                        table[i][j] = "+-";
                }
            }
            for(int i=0; i<table.length; i++){
                for(int j=0; j<num; j++)
                    System.out.print(table[i][j]);
                if(i%2==0)
                    System.out.print("+");
                else
                    System.out.print("|");
                System.out.println();
            }
        }

    This doesn't work: it prints 1|2|3|4 in every row, which isn't what I need. I found the issue, and it's because the first for loop rotates the array more times than needed, so it basically ends up back where it started. I know there's probably a way to achieve this by writing more code, but I always tend to nest as much as possible to "optimize" the code while I write it, which is why I tried to solve this exercise using as few variables and loops as possible. Thanks in advance for your help!
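
    For comparison, here is a minimal sketch (not the exercise's required 2D-array approach) that prints the same table for single-digit sizes by computing each cell with modular arithmetic:

        public class Table {
            public static void main(String[] args) {
                int n = Integer.parseInt(args[0]);
                StringBuilder border = new StringBuilder();
                for (int i = 0; i < n; i++) border.append("+-");
                border.append("+");
                for (int row = 0; row < n; row++) {
                    System.out.println(border);
                    StringBuilder line = new StringBuilder();
                    for (int col = 0; col < n; col++) {
                        // row 0 starts at 1, row 1 starts at 2, and so on, wrapping at n
                        line.append("|").append((row + col) % n + 1);
                    }
                    line.append("|");
                    System.out.println(line);
                }
                System.out.println(border);
            }
        }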

    Read the article

  • How to close all, or only some, tabs in Safari using AppleScript?

    - by Form
    I have made a very simple AppleScript to close all tabs in Safari. The problem is that it works, but not completely: only a couple of tabs are closed. Here's the code:

        tell application "Safari"
            repeat with aWindow in windows
                repeat with aTab in tabs of aWindow
                    if [some condition is encountered] then
                        aTab close
                    end if
                end repeat
            end repeat
        end tell

    I've also tried this script:

        tell application "Safari"
            repeat with i from 0 to the number of items in windows
                set aWindow to item i of windows
                repeat with j from 0 to the number of tabs in aWindow
                    set aTab to item j of tabs of aWindow
                    if [some condition is encountered] then
                        aTab close
                    end if
                end repeat
            end repeat
        end tell

    ... but it does not work either (same behavior). I tried this on my system (MacBook Pro, Jan 2008) as well as on a Mac Pro G5 under Tiger, and the script fails on both, albeit with a much less descriptive error on Tiger. Running the script a few times closes a few tabs each time until none are left, but it always fails with the same error after closing a few tabs. Under Leopard I get an out-of-bounds error. Since I am using fast enumeration (not "repeat from 0 to number of items in windows") I don't see how I can get an out-of-bounds error with this... My goal is to use the Cocoa Scripting Bridge to close tabs in Safari from my Objective-C Cocoa application, but the Scripting Bridge fails in the same manner. The non-deletable tabs show as NULL in the Xcode debugger, while the other tabs are valid objects from which I can get values back (such as their title). In fact I tried the Scripting Bridge first, then told myself why not try this directly in AppleScript, and I was surprised to see the same results. I must have a glaring omission or something in there... (It seems like a bug in Safari's AppleScript support to me... :S) I've used repeat loops and Obj-C 2.0 fast enumeration to iterate through collections before with zero problems, so I really don't see what's wrong here. Can anyone help? Thanks in advance!

    Read the article

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? I mostly use C, however this question is language agnostic. It's also subjective, so I'll tag it as such.

    Many individual projects set their own coding standards, a guide to adjust your coding style. Many enforce an 80 column limit on code, i.e. don't force a dumb 80 x 25 terminal to wrap your lines in someone else's editor of choice if they are stuck with such a display, and don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines. My question is, in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH?

    I help to manage a few open source projects, and I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e. a --help /h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when it's easier to read on one line. I can also see the case for adhering to an 80 column limit if you're writing code that will be used on microcontrollers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts?

    Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80 column limit is still a good idea. I don't want a guide to my own "c-style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time.

    Edit 2: question |= COMMUNITY_WIKI

    Read the article

  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having, which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settings to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card, in a web browser.

    While most of the values in the XML file -- as formatted by my stylesheet -- make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970", which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of placeholder that makes space for values over 099.999. The second underscore, obviously, represents the decimal point.

    My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set to any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed web page that keeps the numeric digits, replaces the second underscore with a decimal point, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000), then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards, but couldn't find one that answered this question.

    EDIT: Sorry for leaving out important info. I fared better on my first try at asking a question! I guess I got complacent. Anyhow... I should have shown you the code that displays the text in the XSL file as is:

        <tr>
          <xsl:for-each select="Settings/Groups/Recording">
            <tr><td class="title_column">Frame Rate</td><td><xsl:value-of select="RecOutLinkSpeed"/></td></tr>
          </xsl:for-each>
        </tr>

    I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml
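
    Since the replacement is positional rather than pattern-based, the character-level logic is simple. Here is a hypothetical illustration in Java of just that mapping; the real fix would live in the stylesheet itself (for example, built from the XPath substring() and translate() functions), and this assumes the raw value always has the shape placeholder-or-digit + two digits + underscore + three digits:

        public class FrameRateLabel {
            static String frameRate(String raw) {
                // a leading '_' is a placeholder for a zero; a leading '1' is kept as-is
                char first = raw.charAt(0) == '_' ? '0' : raw.charAt(0);
                // the remaining underscore is the decimal point
                return first + raw.substring(1).replace('_', '.');
            }

            public static void main(String[] args) {
                System.out.println(frameRate("_23_970")); // 023.970
                System.out.println(frameRate("120_000")); // 120.000
            }
        }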

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here, and both do exactly the same task: they just set every element of a boolean array / vector to the value true. The program using a vector takes 27 seconds to run, whereas the program using an array of 5 times greater size takes less than 1 second. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient?

    Program using vectors:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<bool> v(size);
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - 27 seconds

    Program using an array:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 10000; // 5 times more size
            time_t start, end;
            time(&start);
            bool v[size];
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - < 1 second

    Platform - Visual Studio 2008
    OS - Windows Vista 32 bit SP1
    Processor - Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz
    Memory (RAM) - 1.00 GB

    Thanks
    Amare

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert I do above, I am combining the data from rows which belong together and creating another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility for sharing the collected data between the backend collector and the front end processor (i.e. custom reports could be written in the backend and the results stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Potentially will still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
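
    For the memcache route, the usual pattern is a get-or-compute lookup keyed by report. A hypothetical in-process sketch of that pattern (a ConcurrentHashMap standing in for memcached, all names invented) to illustrate the key-plus-TTL idea:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Supplier;

        // Report results are stored under a key and recomputed only when stale.
        class ReportCache<V> {
            private static final class Entry<T> {
                final T value; final long expiresAt;
                Entry(T value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
            }
            private final Map<String, Entry<V>> cache = new ConcurrentHashMap<>();
            private final long ttlMillis;
            ReportCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

            V get(String key, Supplier<V> compute) {
                long now = System.currentTimeMillis();
                Entry<V> e = cache.get(key);
                if (e == null || e.expiresAt < now) {
                    // stale or missing: run the expensive grouped query and cache the result
                    e = new Entry<>(compute.get(), now + ttlMillis);
                    cache.put(key, e);
                }
                return e.value;
            }
        }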

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics: SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN TASK_REQUESTS AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL' GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms. If I instead introduce a temporary table then the query returns quickly and performs less logical reads: CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD])); INSERT INTO #MyTable(RGN, CD) SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL'; SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN #MyTable AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms. The interesting thing for me is that the TASK_REQUEST table is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times would be occuring? And ideally how to change things so that I don't need to use the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.

    Read the article

  • Dynamic allocating of const member structures

    - by Willy
    I've got a class which uses a plain-old-data struct with const members, and I'm not sure if I'm allocating these structures in the proper way. It looks more or less like this:

        #include <cstdlib>
        #include <iostream>

        using std::cout;
        using std::endl;

        struct some_const_struct {
            const int arg1;
            const int arg2;
        };

        class which_is_using_above_struct {
        public:
            some_const_struct* m_member;
            const some_const_struct* const m_const_member;

        public:
            const some_const_struct& get_member() const { return *m_member; }
            const some_const_struct& get_const_member() const { return *m_const_member; }

            void set_member(const int a, const int b) {
                if(m_member != NULL) {
                    delete m_member;
                    m_member = NULL;
                }
                m_member = new some_const_struct((some_const_struct){a, b});
            }

            explicit which_is_using_above_struct(const int a, const int b)
                : m_const_member(new some_const_struct((const some_const_struct){a, b})) {
                m_member = NULL;
            }

            ~which_is_using_above_struct() {
                if(m_member != NULL) {
                    delete m_member;
                }
                if(m_const_member != NULL) {
                    delete m_const_member;
                }
            }
        };

        int main() {
            which_is_using_above_struct c(1, 2);
            c.set_member(3, 4);
            cout << "m_member.arg1 = " << c.get_member().arg1 << endl;
            cout << "m_member.arg2 = " << c.get_member().arg2 << endl;
            cout << "m_const_member.arg1 = " << c.get_const_member().arg1 << endl;
            cout << "m_const_member.arg2 = " << c.get_const_member().arg2 << endl;
            return 0;
        }

    I'm just not quite sure whether the statement

        m_member = new some_const_struct((some_const_struct){a, b});

    produces an unnecessary use of some_const_struct's copy constructor, ergo allocating that struct twice. What do you think? And is it reasonable to make that struct's members const? (They're not supposed to change during their lifetime at all.)

    Read the article

  • Architectural Design for a Data-Driven Silverlight WP7 app

    - by Rosarch
    I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again: In the UI, set a loading message or loading progress bar in place of where the content is Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request If the content can not be acquired (no network connection, etc), display an error message If the content is acquired, display it in the UI Keep the content in main memory for subsequent queries The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source. I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified: Where to display the content in the UI The UI elements to show while loading, on failure, and on success The URI of the HTTP request How to parse the HTTP response into the data structure that will kept in memory The location of the file in isolated storage, if it exists How to parse the file contents into the data structure that will be kept in memory It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead that I currently have. Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections. My questions are: Has something like this already been implemented? Are my thoughts about the topic above fundamentally wrong in some way? Here is a design I'm thinking of: There are two components, a View and a Model. The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI. The Model a class that is given the URI for the data, a method of how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the model writes the data to the cache. How does that sound?

    Read the article

  • Java Flow Control Problem

    - by Kyle_Solo
    I am programming a simple 2D game engine. I've decided how I'd like the engine to function: it will be composed of objects containing "events" that my main game loop will trigger when appropriate. A little more about the structure: every GameObject has an updateEvent method, and objectList is a list of all the objects that will receive update events. Only objects on this list have their updateEvent method called by the game loop. I'm trying to implement this method in the GameObject class (this specification is what I'd like the method to achieve):

        /**
         * This method removes a GameObject from objectList. The GameObject
         * should immediately stop executing code, that is, absolutely no more
         * code inside update events will be executed for the removed game object.
         * If necessary, control should transfer to the game loop.
         * @param go The GameObject to be removed
         */
        public void remove(GameObject go)

    So if an object tries to remove itself inside of an update event, control should transfer back to the game engine:

        public void updateEvent() {
            //object's update event
            remove(this);
            System.out.println("Should never reach here!");
        }

    Here's what I have so far. It works, but the more I read about using exceptions for flow control the less I like it, so I want to see if there are alternatives.

    Remove method:

        public void remove(GameObject go) {
            //add to removedList
            //flag as removed
            //throw an exception if removing self from inside an updateEvent
        }

    Game loop:

        for(GameObject go : objectList) {
            try {
                if (!go.removed) {
                    go.updateEvent();
                } else {
                    //object is scheduled to be removed, do nothing
                }
            } catch(ObjectRemovedException e) {
                //control has been transferred back to the game loop
                //no need to do anything here
            }
        }
        // now remove the objects that are in removedList from objectList

    Two questions:

    1. Am I correct in assuming that the only way to implement the stop-right-away part of the remove method as described above is by throwing a custom exception and catching it in the game loop? (I know, using exceptions for flow control is like goto, which is bad. I just can't think of another way to do what I want!)
    2. For the removal from the list itself, it is possible for one object to remove one that is farther down the list. Currently I'm checking a removed flag before executing any code, and at the end of each pass removing the objects to avoid concurrent modification. Is there a better, preferably instant/non-polling way to do this?
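
    If the exception-based hand-off stays, one way to keep it cheap is a dedicated unchecked exception with stack-trace capture disabled (a sketch, assuming Java 7+; the catch block in the game loop stays exactly as posted):

        // Throwing this costs little, which removes the usual performance
        // objection to exceptions-as-flow-control.
        class ObjectRemovedException extends RuntimeException {
            ObjectRemovedException() {
                // no message, no cause, suppression off, stack trace off
                super(null, null, false, false);
            }
        }

        // ...and in remove(), after adding go to removedList and flagging it:
        //     if (go == this) throw new ObjectRemovedException();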

    Read the article

  • Java template classes using generator or similar?

    - by Hugh Perkins
    Is there some library or generator that I can use to generate multiple templated Java classes from a single template? Obviously Java does have a generics implementation itself, but since it uses type erasure, there are lots of situations where it is less than adequate. For example, if I want to make a self-growing array like this:

        class EasyArray<T> {
            T[] backingarray;
        }

    (where T is a primitive type), then this isn't possible. This is true for anything which needs an array, for example high-performance templated matrix and vector classes. It should probably be possible to write a code generator which takes a templated class and generates multiple instantiations for different types, e.g. for 'double' and 'float' and 'int' and 'String'. Is there something that already exists that does this?

    Edit: note that using an array of Object is not what I'm looking for, since it's no longer an array of primitives. An array of primitives is very fast, and uses only as much space as sizeof(primitive) * length-of-array. An array of Object is an array of pointers/references that point to Double objects, or similar, which could be scattered all over the place in memory, require garbage collection and allocation, and imply a double indirection for access.

    Edit2: good god, voted down for asking for something that probably doesn't currently exist, but is technically possible and feasible? Does that mean that people looking for ways to improve things have already left the Java community?

    Edit3: Here is code to show the difference in performance between primitive and boxed arrays:

        int N = 10*1000*1000;
        double[] primArray = new double[N];
        for( int i = 0; i < N; i++ ) {
            primArray[i] = 123.0;
        }
        Object[] objArray = new Double[N];
        for( int i = 0; i < N; i++ ) {
            objArray[i] = 123.0;
        }
        tic();
        primArray = new double[N];
        for( int i = 0; i < N; i++ ) {
            primArray[i] = 123.0;
        }
        toc();
        tic();
        objArray = new Double[N];
        for( int i = 0; i < N; i++ ) {
            objArray[i] = 123.0;
        }
        toc();

    Results:

        double[] array: 148 ms
        Double[] array: 4614 ms

    Not even close!
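
    A self-contained version of the Edit3 measurement, with the undefined tic()/toc() helpers replaced by System.nanoTime() (a sketch, assuming Java 7+; numbers vary by JVM, and the boxed run may need a larger -Xmx):

        public class PrimVsBoxed {
            public static void main(String[] args) {
                final int n = 10_000_000;
                long t0 = System.nanoTime();
                double[] prim = new double[n];
                for (int i = 0; i < n; i++) prim[i] = 123.0;   // writes straight into one flat block
                long t1 = System.nanoTime();
                Double[] boxed = new Double[n];
                for (int i = 0; i < n; i++) boxed[i] = 123.0;  // autoboxing allocates an object per element
                long t2 = System.nanoTime();
                System.out.printf("double[]: %d ms%n", (t1 - t0) / 1_000_000);
                System.out.printf("Double[]: %d ms%n", (t2 - t1) / 1_000_000);
            }
        }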

    Read the article

  • Efficiently fetching and storing tweets from a few hundred twitter profiles?

    - by MSpreij
    The site I'm working on needs to fetch the tweets from 150-300 people, store them locally, and then list them on the front page. The profiles sit in groups. The pages will be showing:
    - the last 20 tweets (or 21-40, etc.) by date, group of profiles, single profile, search, or "subject" (which is sort of a different group... I think...)
    - a live, context-aware tag cloud (based on the last 300 tweets of the current search, group of profiles, or single profile shown)
    - various statistics (group stuffs, most active, etc.), which depend on the type of page shown.

    We're expecting a fair bit of traffic. The last, similar site peaked at nearly 40K visits per day, and ran into trouble before I started caching pages as static files and disabling some features (some accidentally...). This was caused mostly by the fact that a page load would also fetch the last x tweets from the 3-6 profiles which had not been updated the longest. With this new site I can fortunately use cron to fetch tweets, so that helps. I'll also be denormalizing the db a little so it needs fewer joins, optimizing it for faster selects instead of size.

    Now, main question: how do I figure out which profiles to check for new tweets in an efficient manner? Some people will be tweeting more often than others, and some will tweet in bursts (this happens a lot). I want to keep the front page of the site as "current" as possible. If it comes to, say, 300 profiles, and I check 5 every minute, some tweets will only appear an hour after the fact. I can check more often (up to 20K) but want to optimize this as much as possible, both to not hit the rate limit and to not run out of resources on the local server (it hit MySQL's connection limit with that other site).

    Question 2: since cron only "runs" once a minute, I figure I have to check multiple profiles each minute - as stated, at least 5, possibly more. To try and spread it out over that minute I could have it sleep a few seconds between batches or even single profiles. But then if it takes longer than 60 seconds altogether, the script will run into itself. Is this a problem? If so, how can I avoid that?

    Question 3: any other tips? Readmes? URLs?
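
    One way to decide which profiles each cron run should poll (a hypothetical sketch, all names invented): keep a priority queue ordered by next-check time and let each profile's polling interval adapt to how often it actually produces new tweets, so bursty accounts get checked sooner and quiet ones less often:

        import java.util.PriorityQueue;

        class Profile {
            final String name;
            long nextCheck;          // epoch millis of the next poll
            long interval = 60_000;  // start at one minute
            Profile(String name, long now) { this.name = name; this.nextCheck = now; }
        }

        class Scheduler {
            private final PriorityQueue<Profile> queue =
                    new PriorityQueue<>((a, b) -> Long.compare(a.nextCheck, b.nextCheck));

            void add(Profile p) { queue.add(p); }

            // Called once per cron run: poll only the profiles that are due, capped by the rate limit.
            void runDue(long now, int maxCalls) {
                for (int i = 0; i < maxCalls && !queue.isEmpty() && queue.peek().nextCheck <= now; i++) {
                    Profile p = queue.poll();
                    boolean foundNew = fetchTweets(p);           // hypothetical fetch-and-store call
                    p.interval = foundNew ? Math.max(60_000, p.interval / 2)
                                          : Math.min(3_600_000, p.interval * 2);
                    p.nextCheck = now + p.interval;
                    queue.add(p);
                }
            }

            private boolean fetchTweets(Profile p) { return false; } // stub for the API call
        }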

    Read the article

  • No more memory available in Mathematica when fitting the parameters of a system of differential equations

    - by user1058051
    I encountered a memory problem in Mathematica, when I tried to process my experimental data. It's a system of two differential equations and I need to find most suitable parameters. Unfortunately I am not a Pro in Mathematica, so the program used a lot of memory, when the parameter epsilon is more than 0.4. When it less than 0.4, the program work properly. The command 'historylength = 0' and attempts to reduce the Accuracy Goal and WorkingPrecision didn`t help. I can't use ' clear Cache ', because there isnt a circle. I'm trying to understand what mistakes I made, and how I may limit the memory usage. I have already bought extra-RAM, now its 4GB, and now I haven't free memory-slots in motherboard Remove["Global`*"]; T=13200; L = 0.085; e = 0.41; v = 0.000557197; q = 0.1618; C0 = 0.0256; R = 0.00075; data = {{L,600,0.141124587},{L,1200,0.254134509},{L,1800,0.342888644}, {L,2400,0.424476295},{L,3600,0.562844542},{L,4800,0.657111356}, {L,6000,0.75137817},{L,7200,0.815876516},{L,8430,0.879823594}, {L,9000,0.900771775},{L,13200,1}}; model[(De_)?NumberQ, (Kf_)?NumberQ, (Y_)?NumberQ] := model[De, Kf, Y] = yeld /.Last[Last[ NDSolve[{ v (Ci^(1,0))[z,t]+(Ci^(0,1))[z,t]== -((3 (1-e) Kf (Ci[z,t]-C0))/ (R e (1-(R Kf (1-R/r[z,t]))/De))), (r^(0,1))[z,t]== (R^2 Kf (Ci[z,t]-C0))/ (q r[z,t]^2 (1-(R Kf (1-R/r[z,t]))/De)), (yeld^(0,1))[z,t]== Y*(v e Ci[z,t])/(L q (1-e)), r[z,0]==R, Ci[z,0]==0, Ci[0,t]==0, yeld[z,0]==0}, {r[z,t],Ci[z,t],yeld},{z,0,L},{t,0,T}]]] fit = FindFit[data, {model[De, Kf, Y][z, t], {Y > 0.97, Y < 1.03, Kf > 10^-6, Kf < 10^-4, De > 10^-13, De < 10^-9}}, {{De,7*10^-13}, {Kf, 10^-5}, {Y, 1}}, {z, t}, Method -> NMinimize] data = {{600,0.141124587},{1200,0.254134509},{1800,0.342888644}, {2400,0.424476295},{3600,0.562844542},{4800,0.657111356}, {6000,0.75137817},{7200,0.815876516},{8430,0.879823594}, {9000,0.900771775},{13200,1}}; YYY = model[ De /. fit[[1]], Kf /. fit[[2]], Y /. fit[[3]]]; Show[Plot[Evaluate[YYY[L,t]],{t,0,T},PlotRange->All], ListPlot[data,PlotStyle->Directive[PointSize[Medium],Red]]] the link on the .nb file http://www.4shared.com/folder/249TSjlz/_online.html

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question because I already found out the answer, but it's still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU.

    The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed yet (possibly adding more items to process), and repeats this until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is why is it slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in:

        Iterations: 923007; Time: 00:02:09.8414388.

    Simply changing Dictionary to SortedDictionary yields:

        Iterations: 499976; Time: 00:00:00.4451514.

    That's 300 times faster, with only half as many iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
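
    Since the post mentions reproducing this in Java, here is a self-contained version of that loop as a sketch (timings will differ by machine); swapping the HashMap for a TreeMap, the analogue of SortedDictionary, changes the behaviour the same way:

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class FirstKeyBench {
            public static void main(String[] args) {
                Map<Integer, Integer> todo = new HashMap<>();   // try new TreeMap<>() as well
                Set<Integer> processed = new HashSet<>();
                todo.put(1, 1);
                int limit = 500_000, iterations = 0;
                long t0 = System.nanoTime();
                while (!todo.isEmpty()) {
                    iterations++;
                    int key = todo.keySet().iterator().next();  // the "First()" equivalent
                    int value = todo.remove(key);
                    if (processed.add(key) && key < limit) {    // add() is false if already processed
                        todo.put(key + 13, value + 1);
                        todo.put(key + 7, value + 1);
                    }
                }
                System.out.printf("Iterations: %d; Time: %d ms%n",
                        iterations, (System.nanoTime() - t0) / 1_000_000);
            }
        }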

    Read the article

  • Sorting a list of numbers with modified cost

    - by David
    First, this was one of the four problems we had to solve in a project last year, and I couldn't find a suitable algorithm, so we handed in a brute force solution.

    Problem: The numbers are in a list that is not sorted and supports only one type of operation. The operation is defined as follows: given a position i and a position j, the operation moves the number at position i to position j without altering the relative order of the other numbers. If i > j, the positions of the numbers between positions j and i - 1 increase by 1; otherwise, if i < j, the positions of the numbers between positions i + 1 and j decrease by 1. This operation requires i steps to find the number to move and j steps to locate the position to which you want to move it, so the number of steps required to move a number from position i to position j is i + j. We need to design an algorithm that, given a list of numbers, determines the optimal (in terms of cost) sequence of moves to rearrange the sequence.

    Attempts: Part of our investigation was around NP-completeness; we made it a decision problem and tried to find a suitable transformation to any of the problems listed in Garey and Johnson's book Computers and Intractability, with no results. There is also no direct reference (from our point of view) to this kind of variation in Donald E. Knuth's The Art of Computer Programming, Vol. 3: Sorting and Searching. We also analyzed algorithms for sorting linked lists, but none of them gives a good idea for finding the optimal sequence of movements.

    Note that the idea is not to find an algorithm that orders the sequence, but one that tells me the optimal sequence of movements, in terms of cost, that organizes the sequence. You can make a copy and sort it to analyze the final position of the elements if you want; in fact we may assume that the list contains the numbers from 1 to n, so we know where we want to put each number, and we are just concerned with minimizing the total cost of the steps. We tested several greedy approaches but all of them failed, divide-and-conquer sorting algorithms can't be used because they swap portions of the list with no cost, and our dynamic programming approaches had to consider too many cases. The brute force recursive algorithm takes all the possible movements from i to j, then again all the possible movements of the rest of the elements, and at the end returns the sequence with the lowest total cost that sorted the list. As you can imagine, the cost of this algorithm is brutal and makes it impracticable for more than 8 elements.

    Our observations:
    - n movements is not necessarily cheaper than n+1 movements (unlike swaps in arrays, which are O(1)).
    - There are basically two ways of moving one element from position i to j: one is to move it directly, and the other is to move other elements around i in a way that it reaches position j.
    - At most you make n-1 movements (the untouched element reaches its position alone).
    - If it is the optimal sequence of movements, then you didn't move the same element twice.
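
    To make the cost model concrete, a small Java sketch (a hypothetical helper, not part of the assignment) that applies a sequence of moves and accumulates the i + j charges; indices here are 0-based, so shift them if the assignment counts positions from 1:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        class MoveCost {
            // Each move {i, j} relocates the element at i to position j and costs i + j.
            static int applyMoves(List<Integer> seq, int[][] moves) {
                int total = 0;
                for (int[] m : moves) {
                    int i = m[0], j = m[1];
                    Integer x = seq.remove(i);   // the elements between i and j shift by one
                    seq.add(j, x);
                    total += i + j;
                }
                return total;
            }

            public static void main(String[] args) {
                List<Integer> seq = new ArrayList<>(Arrays.asList(3, 1, 2));
                int cost = applyMoves(seq, new int[][] {{0, 2}});   // move the 3 to the end
                System.out.println(seq + " cost=" + cost);          // [1, 2, 3] cost=2
            }
        }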

    Read the article

  • Not seeing Sync Block in Object Layout

    - by bob-bedell
    It's my understanding the all .NET object instances begin with an 8 byte 'object header': a synch block (4 byte pointer into a SynchTableEntry table), and a type handle (4 byte pointer into the types method table). I'm not seeing this in VS 2010 RC's (CLR 4.0) debugger memory windows. Here's a simple class that will generate a 16 byte instance, less the object header. class Program { short myInt = 2; // 4 bytes long myLong = 3; // 8 bytes string myString = "aString"; // 4 byte object reference // 16 byte instance static void Main(string[] args) { new Program(); return; } } An SOS object dump tells me that the total object size is 24 bytes. That makes sense. My 16 byte instance plus an 8 byte object header. !DumpObj 0205b660 Name: Offset_Test.Program MethodTable: 000d383c EEClass: 000d13f8 Size: 24(0x18) bytes File: C:\Users\Bob\Desktop\Offset_Test\Offset_Test\bin\Debug\Offset_Test.exe Fields: MT Field Offset Type VT Attr Value Name 632020fc 4000001 10 System.Int16 1 instance 2 myInt 632050d8 4000002 4 System.Int64 1 instance 3 myLong 631fd2b8 4000003 c System.String 0 instance 0205b678 myString Here's the raw memory: 0x0205B660 000d383c 00000003 00000000 0205b678 00000002 ... And here are some annotations: offset 0 000d383c ;TypeHandle (pointer to MethodTable), 4 bytes offset 4 00000003 00000000 ;myLong, 8 bytes offset 12 0205b678 ;myString, 4 byte reference to address of "myString" on GC Heap offset 16 00000002 ;myInt, 4 bytes My object begins a address 0x0205B660. But I can only account for 20 bytes of it, the type handle and the instance fields. There is no sign of a synch block pointer. The object size is reported as 24 bytes, but the debugger is showing that it only occupies 20 bytes of memory. I'm reading Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects, and expected the first 4 bytes of my object to be a zeroed synch block pointer, as shown in Figure 8 of that article. Granted, this is an article about CLR 1.1. I'm just wondering if the difference between what I'm seeing and what this early article reports is a change in either the debugger's display of object layout, or in the way the CLR lays out objects in versions later than 1.1. Anyway, can anyone account for my 4 missing bytes?

    Read the article

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there and I wished to run Rosetta Commons so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live cd creators, my video card or something I don't know. Who Crashed suggested it was my video card and I had regular messages about ntfs.sys and page file issues. At any rate I just installed Fedora and the same thing is happening again. I would like to think with the twenty five years of doing this that I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done in favor of simply uninstalling, reinstalling and formatting and starting from scratch. I have opened up the folder windows debugging tools, quite accidentally and just before I was going to clean sweep again, and I found KD and WinDbg. I had never seen these before and I felt that maybe I should look into this. I am quite familiar with the modern machine that is known as the computer, I know what a Kernel is and am now pretty familiar with at the very least Windows Operating System Services. I wish to begin tracking my own machines errors. I understand that most kernel debugging is done on a second machine but I don't have one. And also I understand the goal of the debugger seems to be less about run of the mill errors and more about development time strategies but I'm sure there is more to this. This is my first go at this and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx

    Read the article

  • Problem with recv() on a TCP connection

    - by michael
    Hi, I am simulating TCP communication on Windows in C. I have a sender and a receiver communicating: the sender sends packets of a specific size to the receiver, and the receiver gets them and sends an ACK back to the sender for each packet it received. If the sender didn't get the ACK for a specific packet (they are numbered in a header inside the packet), it sends the packet again to the receiver. Here is the getPacket function on the receiver side:

        //get the next packet from the socket. set the packetSize to -1
        //if it's the first packet.
        //return: total bytes read
        //return: 0 if socket has shutdown on sender side, -1 error, else number of bytes received
        int getPakcet(char *chunkBuff, int packetSize, SOCKET AcceptSocket){
            int totalChunkLen = 0;
            int bytesRecv = -1;
            bool firstTime = false;
            if (packetSize == -1)
            {
                packetSize = MAX_PACKET_LENGTH;
                firstTime = true;
            }
            int needToGet = packetSize;
            do
            {
                char* recvBuff;
                recvBuff = (char*)calloc(needToGet, sizeof(char));
                if(recvBuff == NULL){
                    fprintf(stderr, "Memory allocation problem\n");
                    return -1;
                }
                bytesRecv = recv(AcceptSocket, recvBuff, needToGet, 0);
                if (bytesRecv == SOCKET_ERROR){
                    fprintf(stderr, "recv() error %ld.\n", WSAGetLastError());
                    totalChunkLen = -1;
                    return -1;
                }
                if (bytesRecv == 0){
                    fprintf(stderr, "recv(): socket has shutdown on sender side");
                    return 0;
                }
                else if(bytesRecv > 0)
                {
                    memcpy(chunkBuff + totalChunkLen, recvBuff, bytesRecv);
                    totalChunkLen += bytesRecv;
                }
                needToGet -= bytesRecv;
            } while ((totalChunkLen < packetSize) && (!firstTime));
            return totalChunkLen;
        }

    I use firstTime because the first time around the receiver doesn't know the normal packet size the sender is going to use, so I use MAX_PACKET_LENGTH to get a packet and then set the normal packet size to the number of bytes I have received.

    My problem is the last packet. Its size is less than the normal packet size, so let's say the last packet's size is 2 and the normal packet size is 4. recv() gets two bytes and continues to the while condition; totalChunkLen < packetSize because 2 < 4, so it iterates the loop again and then gets stuck in recv(), because it's blocking and the sender has nothing left to send. On the sender side I can't close the connection because I didn't get the ACK back, so it's kind of a deadlock: the receiver is stuck waiting for more packets, but the sender has nothing to send. I don't want to use a timeout for recv() or insert a special character into the packet header to mark that it is the last one. What can I do? Thanks.

    Read the article

  • Passing back answers in Prolog

    - by AhmadAssaf
    i have this code than runs perfectly .. returns a true .. when tracing the values are ok .. but its not returning back the answer .. it acts strangely when it ends and always return empty list .. uninstantiated variable .. test :- extend(4,12,[4,3,1,2],[[1,5],[3,4],[6]],_ExtendedBins). %printing basic information about the extend(NumBins,Capacity,RemainingNumbers,BinsSoFar,_ExtendedBins) :- getNumberofBins(BinsSoFar,NumberOfBins), msort(RemainingNumbers,SortedRemaining),nl, format("Current Number of Bins is :~w\n",[NumberOfBins]), format("Allowed Capacity is :~w\n",[Capacity]), format("maximum limit in bin is :~w\n",[NumBins]), format("Trying to fit :~w\n\n",[SortedRemaining]), format("Possible Solutions :\n\n"), fitElements(NumBins,NumberOfBins, Capacity,SortedRemaining,BinsSoFar,[]). %this is were the creation for possibilities will start %will check first if the number of bins allowed is less than then %we create a new list with all the possible combinations %after that we start matching to other bins with capacity constraint fitElements(NumBins,NumberOfBins, Capacity,RemainingNumbers,Bins,ExtendedBins) :- ( NumberOfBins < NumBins -> print('Creating new set: '); print('Sorry, Cannot create New Sets')), createNewList(Capacity,RemainingNumbers,Bins,ExtendedBins). createNewList(Capacity,RemainingNumbers,Bins,ExtendedBins) :- createNewList(Capacity,RemainingNumbers,Bins,[],ExtendedBins), print(ExtendedBins). createNewList(0,Bins,Bins,ExtendedBins,ExtendedBins). createNewList(_,[],_,ExtendedBins,ExtendedBins). createNewList(Capacity,[Element|Rest],Bins,Temp,ExtendedBins) :- conjunct_to_list(Element,ListedElement), append(ListedElement,Temp,NewList), sumlist(NewList,Sum), (Sum =< Capacity, append(ListedElement,ExtendedBins,Result); Capacity = 0), createNewList(Capacity,Rest,Bins,NewList,Result). fit(0,[],ExtendedBins,ExtendedBins). fit(Capacity,[Element|Rest],Bin,ExtendedBins) :- conjunct_to_list(Element,Listed), append(Listed,Bin,NewBin), sumlist(NewBin,Sum), (Sum =< Capacity -> fit(Capacity,Rest,NewBin,ExtendedBins); Capacity = 0, append(NewBin,ExtendedBins,NewExtendedBins), print(NewExtendedBins), fit(0,[],NewBin,ExtendedBins)). %get the number of bins provided getNumberofBins(List,NumberOfBins) :- getNumberofBins(List,0,NumberOfBins). getNumberofBins([],NumberOfBins,NumberOfBins). getNumberofBins([_List|Rest],TempCount,NumberOfBins) :- NewCount is TempCount + 1, %calculate the count getNumberofBins(Rest,NewCount,NumberOfBins). %recursive call %Convert set of terms into a list - used when needed to append conjunct_to_list((A,B), L) :- !, conjunct_to_list(A, L0), conjunct_to_list(B, L1), append(L0, L1, L). conjunct_to_list(A, [A]). Greatly appreciate the help

    Read the article

  • Inlining an array of non-default constructible objects in a C++ class

    - by porgarmingduod
    C++ doesn't allow a class containing an array of items that are not default constructible:

        class Gordian {
        public:
            int member;
            Gordian(int must_have_variable) : member(must_have_variable) {}
        };

        class Knot {
            Gordian* pointer_array[8]; // Sure, this works.
            Gordian inlined_array[8];  // Won't compile. Can't be initialized.
        };

    As even beginner C++ users know, the language guarantees that all members are initialized when constructing a class. And it doesn't trust the user to initialize everything in the constructor - one has to provide valid arguments to the constructors of all members before the body of the constructor even starts. Generally, that's a great idea as far as I'm concerned, but I've come across a situation where it would be a lot easier if I could actually have an array of non-default-constructible objects.

    The obvious solution: have an array of pointers to the objects. This is not optimal in my case, as I am using shared memory. It would force me to do extra allocation from an already contended resource (that is, the shared memory). The entire reason I want to have the array inlined in the object is to reduce the number of allocations. This is a situation where I would be willing to use a hack, even an ugly one, provided it works. One possible hack I am thinking about would be:

        class Knot {
        public:
            struct dummy {
                char padding[sizeof(Gordian)];
            };

            dummy inlined_array[8];

            Gordian* get(int index) {
                return reinterpret_cast<Gordian*>(&inlined_array[index]);
            }

            Knot() {
                for (int x = 0; x != 8; x++) {
                    new (get(x)) Gordian(x*x);
                }
            }
        };

    Sure, it compiles, but I'm not exactly an experienced C++ programmer. That is, I couldn't possibly trust my hacks less. So, the questions:

    1) Does the hack I came up with seem workable? What are the issues? (I'm mainly concerned with C++0x on newer versions of GCC.)
    2) Is there a better way to inline an array of non-default-constructible objects in a class?

    Read the article

  • JavaScript Coding for Finding Shipping Total

    - by user2913279
    I am having a very hard time with this code. I have been working on it for days and cannot seem to figure it out. Please help!! Here are the specifics I need for the code:

    Many companies normally charge a shipping and handling charge for purchases. Create a Web page that allows a user to enter a purchase price into a text box and includes a JavaScript function that calculates shipping and handling. Add functionality to the script that adds a minimum shipping and handling charge of $1.50 for any purchase that is less than or equal to $25.00. For any orders over $25.00, add 10% to the total purchase price for shipping and handling, but do not include the $1.50 minimum shipping and handling charge. The formula for calculating a percentage is price * percent / 100. For example, the formula for calculating 10% of a $50.00 purchase price is 50 * 10 / 100, which results in a shipping and handling charge of $5.00. After you determine the total cost of the order (purchase plus shipping and handling), display it in an alert dialog box.

    Here is the code I have:

        <!DOCTYPE>
        <head>
        <title>Calculate Shipping</title>
        <script type="text/javascript">
        function parseInt() {
            var salesPrice = document.salesForm.Price.value;
            var minCharge = salesPrice + 1.50;
            var shipping = salesPrice * 10/100;
            if (salesPrice <= 25)
                window.alert('Your sales total including shipping is $' + minCharge);
            else
                window.alert('Your sales total including shipping is $' + salesPrice + shipping);
        }
        </script>
        </head>
        <body>
        <form name="salesForm">
        <div >
            <p>Enter Your Purchase Price</p>
            <input type="text" name="Price" /><br /><br />
            <input type="button" name="Calculate" value="Calculate Shipping" onclick="parseInt ()" />
        </div>
        </form>
        </body>
        </html>

    Everything works except for the math in the alert box. It will show an incorrect total...
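
    The intended arithmetic, written with the price kept as a number throughout (in the posted script Price.value is a string, so salesPrice + 1.50 concatenates rather than adds). A Java sketch of the same rule for reference:

        public class Shipping {
            static double totalWithShipping(double price) {
                // flat $1.50 at or under $25, otherwise 10% of the price (no flat charge)
                double handling = (price <= 25.0) ? 1.50 : price * 10.0 / 100.0;
                return price + handling;
            }

            public static void main(String[] args) {
                System.out.println(totalWithShipping(20.0)); // 21.5  (flat $1.50 charge)
                System.out.println(totalWithShipping(50.0)); // 55.0  ($5.00 = 10% of the price)
            }
        }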

    Read the article

  • Java array of arrays [matrix] of an integer partition with fixed term

    - by user335209
    Hello, for my study purposes I need to build an array of arrays filled with the partitions of an integer with a fixed number of terms. That is, given an integer, suppose 10, and a fixed number of terms, suppose 5, I need to populate an array like this:

        10 0 0 0 0
         9 0 0 0 1
         8 0 0 0 2
         7 0 0 0 3
        ............
         9 0 0 1 0
         8 0 0 1 1
        ............
         7 0 1 1 0
         6 0 1 1 1
        ............
        ............
         0 6 1 1 1
        ............
         0 0 0 0 10

    I am pretty new to Java and am getting confused with all the for loops. Right now my code can do the partition of the integer, but unfortunately not with a fixed number of terms:

        public class Partition {
            private static int[] riga;

            private static void printPartition(int[] p, int n) {
                for (int i = 0; i < n; i++)
                    System.out.print(p[i] + " ");
                System.out.println();
            }

            private static void partition(int[] p, int n, int m, int i) {
                if (n == 0)
                    printPartition(p, i);
                else
                    for (int k = m; k > 0; k--) {
                        p[i] = k;
                        partition(p, n - k, n - k, i + 1);
                    }
            }

            public static void main(String[] args) {
                riga = new int[6];
                for (int i = 0; i < riga.length; i++) {
                    riga[i] = 0;
                }
                partition(riga, 6, 1, 0);
            }
        }

    The output I get from it looks like this:

        1 5
        1 4 1
        1 3 2
        1 3 1 1
        1 2 3
        1 2 2 1
        1 2 1 2
        1 2 1 1 1

    What I'm actually trying to understand is how to proceed to get a fixed number of terms, which would be the columns of my array. So I am stuck trying to find a way to make it less dynamic. Any help?
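
    A minimal recursive sketch of the fixed-column version: it fills a fixed number of columns with non-negative integers summing to n, so every row of the target matrix is produced exactly once (the row order will differ from the example):

        public class FixedTermPartitions {
            static void generate(int[] row, int col, int remaining) {
                if (col == row.length - 1) {     // the last column takes whatever is left
                    row[col] = remaining;
                    print(row);
                    return;
                }
                for (int v = remaining; v >= 0; v--) {
                    row[col] = v;
                    generate(row, col + 1, remaining - v);
                }
            }

            static void print(int[] row) {
                StringBuilder sb = new StringBuilder();
                for (int v : row) sb.append(v).append(' ');
                System.out.println(sb.toString().trim());
            }

            public static void main(String[] args) {
                generate(new int[5], 0, 10);     // 5 fixed terms summing to 10
            }
        }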

    Read the article

  • error C2059: syntax error : ']', I can't figure out why this is coming up in C++

    - by user320950
        void display_totals();

        int exam1[100][3]; // array that can hold 100 numbers for 1st column
        int exam2[100][3]; // array that can hold 100 numbers for 2nd column
        int exam3[100][3]; // array that can hold 100 numbers for 3rd column

        int main()
        {
            int go,go2,go3;
            go=read_file_in_array;
            go2= calculate_total(exam1[],exam2[],exam3[]);
            go3=display_totals;
            cout << go,go2,go3;
            return 0;
        }

        void display_totals()
        {
            int grade_total;
            grade_total=calculate_total(exam1[],exam2[],exam3[]);
        }

        int calculate_total(int exam1[],int exam2[],int exam3[])
        {
            int calc_tot,above90=0, above80=0, above70=0, above60=0,i,j;
            calc_tot=read_file_in_array(exam[100][3]);
            exam1[][]=exam[100][3];
            exam2[][]=exam[100][3];
            exam3[][]=exam[100][3];
            for(i=0;i<100;i++);
            {
                if(exam1[i] <=90 && exam1[i] >=100)
                {
                    above90++;
                    cout << above90;
                }
            }
            return exam1[i],exam2[i],exam3[i];
        }

        int read_file_in_array(int exam[100][3])
        {
            ifstream infile;
            int num, i=0,j=0;
            infile.open("grades.txt"); // file containing numbers in 3 columns
            if(infile.fail()) // checks to see if file opended
            {
                cout << "error" << endl;
            }
            while(!infile.eof()) // reads file to end of line
            {
                for(i=0;i<100;i++); // array numbers less than 100
                {
                    for(j=0;j<3;j++); // while reading get 1st array or element
                        infile >> exam[i][j];
                    cout << exam[i][j] << endl;
                }
            }
            infile.close();
            return exam[i][j];
        }

    Read the article

  • Writing OpenGL enabled GUI

    - by Jaen
    I am exploring the possibility of writing a kind of notebook analogue that would reproduce the look and feel of using a traditional notebook, but with the added benefit of customizing the page in ways you can't on paper - ask the program to lay ruled paper here, grid paper there, paste an image, insert a recording from the built-in camera, try to do handwriting recognition on the tablet input, insert some LaTeX for neat formulas, and so on. I'm pretty interested in developing it just to see if writing notes on a computer can come anywhere close to the comfort plain paper + pencil offer (hard to do IMO), and I can always turn it in as a university C++ project, so double gain there.

    Coming from the type of project, there are certain requirements for the user interface:
    - the user will be able to zoom, move and rotate the notebook as he wishes, and I think it's pretty sensible to delegate this to OpenGL, so the prospective GUI needs to work well with OpenGL (preferably being rendered in it)
    - the interface should be navigable with as little keyboard input as the user wishes (incorporating some sort of gestures maybe), up to limiting the keyboard keys to modifiers for the pen movements and taps; this includes tablet and possibly multitouch support
    - the interface should keep out of the way where not needed, come up where needed, and be easily layerable
    - the notebook sheet itself will be a container for objects representing the notebook blurbs, so it would be nice if the GUI were able to overlay frames over the exact parts of the OpenGL-drawn sheet to signify what can be done with a given part (like moving, rotating, deleting, copying, editing etc.) and its extents.

    In terms of interface it's probably going to end up similar to Alias' SketchBook Pro (screenshot: http://1.bp.blogspot.com/_GGxlzvZW-CY/SeKYA_oBdSI/AAAAAAAAErE/J6A0kyXiuqA/s400/Autodesk_Alias_SketchBook_Pro_2.jpg).

    As far as toolkits go I'm considering Qt and nui (http://www.libnui.net/), but I'm not really aware how well they would match the requirements and how well they would handle such an application. As far as I know you can somehow coerce Qt into doing widget drawing with OpenGL, but on the other hand I've heard voices that its slot-signal framework isn't exactly optimal and requires its own preprocessor, and I don't know how hard it would be to do all the custom widgets I would need (say color wheel, ruler, blurb frames, blurb selection, tablet-targeted pop-up menu etc.) within the constraints of Qt. Also, quite a few Qt programs I've had on my machine seemed really sluggish, but that may be attributable to me having an old PC, or to programmers using Qt suboptimally, rather than to the framework itself. As for nui, I know it's also cross-platform, has all the basic things you would require of a GUI toolkit, and - its biggest plus - is OpenGL-enabled from the start, but I don't know how it is with custom widgets and other facets, and it certainly has a smaller user base and less elaborate documentation than Qt.

    The question goes as this: does any of these toolkits fulfill (preferably all of) the requirements, is there a well-fitting toolkit I haven't come across, or should I just roll up my sleeves, get SFML (or maybe Clutter would be more suited to this?) and something like FastDelegates or libsigc++, and program the GUI framework from the ground up myself? I would be very glad if anyone has experience with a similar GUI project and can offer some comments on how well these toolkits hold up, or whether it is worthwhile to pursue my own GUI toolkit in this case.

    Sorry for longwindedness, duh.

    Read the article

  • Calculating all distances between one point and a group of points efficiently in R

    - by dbarbosa
    Hi, first of all, I am new to R (I started yesterday). I have two groups of points, data and centers, the first one of size n and the second of size K (for instance, n = 3823 and K = 10), and for each i in the first set, I need to find the j in the second with the minimum distance. My idea is simple: for each i, let dist[j] be the distance between i and j; I only need to use which.min(dist) to find what I am looking for. Each point is an array of 64 doubles, so

        > dim(data)
        [1] 3823   64
        > dim(centers)
        [1]   10   64

    I have tried with

        for (i in 1:n) {
            for (j in 1:K) {
                d[j] <- sqrt(sum((centers[j,] - data[i,])^2))
            }
            S[i] <- which.min(d)
        }

    which is extremely slow (with n = 200, it takes more than 40s!!). The fastest solution that I wrote is

        distance <- function(point, group) {
            return(dist(t(array(c(point, t(group)), dim=c(ncol(group), 1+nrow(group)))))[1:nrow(group)])
        }

        for (i in 1:n) {
            d <- distance(data[i,], centers)
            which.min(d)
        }

    Even though it does a lot of computation that I don't use (because dist(m) computes the distance between all rows of m), it is way faster than the other one (can anyone explain why?), but it is not fast enough for what I need, because it will not be used only once. And also, the distance code is very ugly. I tried to replace it with

        distance <- function(point, group) {
            return (dist(rbind(point,group))[1:nrow(group)])
        }

    but this seems to be twice as slow. I also tried to use dist for each pair, but it is also slower. I don't know what to do now. It seems like I am doing something very wrong. Any idea on how to do this more efficiently?

    P.S.: I need this to implement k-means by hand (and I need to do it, it is part of an assignment). I believe I will only need Euclidean distance, but I am not yet sure, so I would prefer to have some code in which the distance computation can be replaced easily. stats::kmeans does all the computation in less than one second.

    Read the article
