Search Results

Search found 9062 results on 363 pages for 'big'.


  • Enormous data and PHP errors

    - by salamis
    I am currently using HighCharts/HighStock charts (http://www.highcharts.com/stock/demo/data-grouping) to display the data returned from the server. We retrieve the data from a MySQL database, and it is really big: we store sensor metrics every 1 second. After a while we got the following error:

        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\wamp\www\admin\getTrends.php on line 156, referer: http://localhost/admin/trends.php
        PHP Stack trace:
        PHP 1. {main}()          C:\wamp\www\admin\getTrends.php:0
        PHP 2. getTrendsDataAI() C:\wamp\www\admin\getTrends.php:33
        PHP 3. printResults()    C:\wamp\www\admin\getTrends.php:102
        PHP 4. createData()      C:\wamp\www\admin\getTrends.php:230
        PHP 5. implode()         C:\wamp\www\admin\getTrends.php:156

    What is the best way to return this data as a JSON object to HighStock for display, and how can we overcome the PHP memory limitation? Should we return a chunk of the data each time? How is an enormous amount of data usually presented to users for charts and reports? Another big problem we need to overcome is that the returned JSON object itself is enormous: at this point it is around 20-30 MB, and it will be much larger in the future. Is it OK to return this data to the user and perform everything client side? Any suggestions or thoughts welcome.
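
    One way around both the PHP memory limit and the 20-30 MB payload is to downsample on the server: no chart can show a million one-second points anyway, so aggregating into time buckets in SQL shrinks the result before it is ever serialized. Below is a minimal sketch of the idea in Java/JDBC (the question is PHP, but the query is the part that matters); the sensor_data table and its ts/value columns are hypothetical names, not from the original post:

        import java.sql.*;

        public class TrendDownsampler {
            // Aggregate one-second samples into one-minute averages on the server,
            // so the client receives roughly 60x less JSON.
            public static void main(String[] args) throws SQLException {
                String sql =
                    "SELECT FLOOR(UNIX_TIMESTAMP(ts) / 60) * 60 AS bucket, AVG(value) AS avg_value " +
                    "FROM sensor_data WHERE ts BETWEEN ? AND ? " +
                    "GROUP BY bucket ORDER BY bucket";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/sensors", "user", "pass");
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, "2012-09-12 00:00:00");
                    ps.setString(2, "2012-09-12 23:59:59");
                    try (ResultSet rs = ps.executeQuery()) {
                        StringBuilder json = new StringBuilder("[");
                        while (rs.next()) {
                            if (json.length() > 1) json.append(',');
                            // HighStock expects [timestampMillis, value] pairs
                            json.append('[').append(rs.getLong("bucket") * 1000L)
                                .append(',').append(rs.getDouble("avg_value")).append(']');
                        }
                        System.out.println(json.append(']'));
                    }
                }
            }
        }

    For finer zoom levels, the client can then request a narrower time range at full resolution, so no single response ever has to carry the whole data set.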

    Read the article

  • Do Ruby on Rails programmers refactor?

    - by JoaoHornburg
    I'm a Java programmer who started programming Ruby on Rails one year ago. I like the language, Rails itself and the principles behind them. But something that bothers me is that Ruby programmers don't seem to refactor. I noticed that there is a big lack of refactoring tools for Ruby / Rails. Some IDEs, like Aptana and RubyMine, seem to offer some very basic refactorings, but nothing big compared to Eclipse's Java refactorings. Then there is another fact: most Rails developers (even the pros) prefer lightweight editors, like Vim or TextMate, instead of IDEs. Well, with these tools you get zero refactoring support (only regex find/replace). This leaves me with the impression that Rails programmers don't refactor. It might be just a false impression, of course, but I would like to hear the opinion of people who work professionally with Ruby on Rails. Do you refactor? If you do, how do you do it, and with which tools? If not, why not?

    Read the article

  • Rich editor: need to replace the correct string

    - by JamesM
    On my website I have a login and an article tag for editing. My article markup is HTML5, which is enough, as my editor targets HTML5.

    index.php, line 212:

        <article contentEditable="false"> <!-- changes to "true" when in edit mode -->
            <div>One row of text goes here.</div>
            <div>Row two's content goes here.</div>
        </article>

    I want to be able to get the selected text via JS.

    js.php:

        (function(format){
            if (user == "{$ADMIN}") {   // comparison, not assignment
                selection = getSelection();
                text = selection.toString();
                switch (format) {
                    case "big":       result = text.big();       break;
                    case "small":     result = text.small();     break;
                    case "bold":      result = text.bold();      break;
                    case "italics":   result = text.italics();   break;
                    case "fixed":     result = text.fixed();     break;
                    case "strike":    result = text.strike();    break;
                    case "fontcolor": result = text.fontcolor(); break;
                    case "fontsize":  result = text.fontsize();  break;
                    case "sub":       result = text.sub();       break;
                    case "sup":       result = text.sup();       break;
                    case "link":      result = text.link();      break;
                }
            }
        })

    I can replace the text with the result, but a plain string replace hits every occurrence, or only the first if I limit it; I want to replace only the selected one...

    Read the article

  • iPhone SDK: more accurate collision detection, and setting the frame/bounds of UIImageView

    - by Harry
    Hey, I'm having a big problem with my app at the moment: it's all too inaccurate. I have an image of a balloon (https://dl.dropbox.com/u/2578642/Balloonedit.png) and a dart; if the dart collides with the balloon, the game ends. At the moment I am covering the balloon image with 8 UIImageViews and detecting whether the dart hits any of them. This was supposed to make detection really accurate, but it's not: the dart pretty much passes through the balloon when it is meant to collide. So I have a plan: is there any way to detect when the dart hits the actual image of the balloon, not a UIImageView? Or is there any way to draw a border around the balloon and detect whether the dart hits that? Currently I am using this code to detect the collision:

        if (CGRectIntersectsRect(pinend.frame, balloonbit1.frame)) {
            [maintimer invalidate];
            accelManeger.delegate = nil;
            [ball setImage:img];
            [UIImageView beginAnimations:nil context:NULL];
            [UIImageView setAnimationDuration:0.3];
            ball.transform = CGAffineTransformMakeScale(2, 2);
            [UIImageView commitAnimations];
        }

    In one method there are 40 of these blocks of code, so as you can imagine it is not very accurate or fast to respond. So, like I said: is there a way to draw a border or something around the balloon and detect the collision between the border and the dart? I would imagine it would run a lot smoother, because it would only have to process a few checks. Thanks for any help. This is a big question, so if you can answer it I will buy your app :) Cheers, Harry :/
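
    A common alternative to stacking image views is a two-step test: a cheap bounding-box intersection first, then a per-pixel alpha test on the balloon image at the dart tip. On iOS this means reading the image's pixel data; here is a sketch of the same idea in Java just to show the shape of the test. All names are hypothetical, and converting the dart tip into the image's local coordinates (including any view-to-image scaling) is assumed to happen before the call:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class AlphaHitTest {
            private final BufferedImage sprite;

            public AlphaHitTest(File imageFile) throws IOException {
                sprite = ImageIO.read(imageFile); // e.g. the balloon PNG
            }

            // (px, py) is the dart tip in the balloon image's local coordinates.
            // Returns true only if the pixel there is mostly opaque, so the
            // transparent corners of the bounding box no longer count as hits.
            public boolean hits(int px, int py) {
                if (px < 0 || py < 0 || px >= sprite.getWidth() || py >= sprite.getHeight()) {
                    return false;
                }
                int alpha = sprite.getRGB(px, py) >>> 24; // top byte of ARGB is alpha
                return alpha > 32; // the threshold is an arbitrary assumption
            }
        }

    One rectangle test plus one pixel read per frame replaces the 40 rectangle checks, and it follows the balloon's actual outline.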

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation mentions that Large Objects are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt performance. Are there good benchmarks on the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article

  • How to read a LARGE SQLite file to be copied onto the Android emulator or device from the assets folder?

    - by Peter SHINe
    I guess many people have already read the article "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368 However, it keeps throwing an IOException at

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file. It's as big as 8 MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (for I am using Korean), and added the android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at

        length = myInput.read(buffer)

    I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code with a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but no place gave me a clear answer, or a good solution, good meaning efficient or easy. I will try BufferedInput(Output)Stream, but if the simpler one cannot work, I don't think that will work either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate anyone's considerate answer. Thank you.

    With more detail:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // Transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }
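
    A likely culprit, for what it's worth: on older Android versions, assets that the packaging tool compresses could not be streamed once they exceeded about 1 MB, and reading them fails with an IOException much like this one, regardless of the copy code. The usual workarounds are to split the database into chunks under 1 MB and reassemble them while copying, or to give the asset a file extension that is stored uncompressed. Below is a sketch of the chunked copy, meant to slot into the same helper class as copyDataBase() above; it assumes the 8 MB database was split into DB_NAME + ".001", ".002", ... before packaging (the chunk naming is an assumption):

        // A minimal sketch of the chunked-asset workaround; chunk naming is hypothetical.
        private void copyChunkedDataBase() throws IOException {
            OutputStream out = new FileOutputStream(DB_PATH + DB_NAME);
            byte[] buffer = new byte[4096];
            try {
                for (int part = 1; ; part++) {
                    String chunkName = String.format("%s.%03d", DB_NAME, part);
                    InputStream in;
                    try {
                        in = myContext.getAssets().open(chunkName);
                    } catch (IOException e) {
                        break; // no more chunks: we are done
                    }
                    int length;
                    while ((length = in.read(buffer)) > 0) {
                        out.write(buffer, 0, length); // append this chunk to the db file
                    }
                    in.close();
                }
            } finally {
                out.flush();
                out.close();
            }
        }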

    Read the article

  • Algorithm for finding the best routes for food distribution in game

    - by Tautrimas
    Hello, I'm designing a city-building game and have run into a problem. Imagine Sierra's Caesar III game mechanics: you have many city districts with one market each, and several granaries spread over the distance, connected by a directed weighted graph. The difference: people (here, cars) are units that form traffic jams (hence the graph weights). Note: in the Caesar game series, people harvested food and stockpiled it in several big granaries, whereas many markets (small shops) took food from the granaries and delivered it to the citizens. The task: tell each district where it should be getting its food from, while taking the least time and minimizing congestion on the city's roads.

    (Map example: sample diagram, not reproduced here.) Suppose the yellow districts need 7, 7 and 4 apples respectively, and the bluish granaries have 7 and 11 apples respectively. Suppose edge weights are proportional to their length. Then the solution should be something like the gray numbers indicated on the edges. E.g., the first district gets 4 apples from the 1st granary and 3 apples from the 2nd, while the last district gets 4 apples from only the 2nd granary. Here, the vertical roads are occupied to the max first, and then the remaining workers are sent along the diagonal paths.

    Question: what practical and very fast algorithm should I use? I was looking at some papers describing congestion games (Congestion Games: Optimization in Competition, etc.), but could not get the big picture. Any help is very much appreciated! P.S. I can post very few links and no images because of the new-user restriction.
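
    This maps onto the classic transportation problem: granaries are supplies, districts are demands, and road edges have costs (length/congestion) and capacities. Minimum-cost maximum-flow solves it exactly and is fast at this scale. Below is a compact sketch (successive shortest paths with Bellman-Ford); the graph wiring described in the comments reflects the game model above, and all names are illustrative, not from the original post:

        import java.util.*;

        // Minimal min-cost max-flow. Model: source -> each granary (capacity =
        // apples in stock, cost 0), each district -> sink (capacity = apples
        // needed, cost 0), road edges with capacity = throughput, cost = length.
        public class MinCostFlow {
            private final int n;
            private final List<int[]> edges = new ArrayList<>(); // {to, capacity, cost}
            private final List<List<Integer>> graph;             // node -> edge indices

            public MinCostFlow(int n) {
                this.n = n;
                graph = new ArrayList<>();
                for (int i = 0; i < n; i++) graph.add(new ArrayList<>());
            }

            public void addEdge(int from, int to, int cap, int cost) {
                graph.get(from).add(edges.size()); edges.add(new int[]{to, cap, cost});
                graph.get(to).add(edges.size());   edges.add(new int[]{from, 0, -cost}); // residual
            }

            /** Returns {maxFlow, minCost} from s to t. */
            public int[] run(int s, int t) {
                int flow = 0, cost = 0;
                while (true) {
                    // Bellman-Ford: cheapest augmenting path in the residual graph
                    int[] dist = new int[n], prevEdge = new int[n];
                    Arrays.fill(dist, Integer.MAX_VALUE);
                    Arrays.fill(prevEdge, -1);
                    dist[s] = 0;
                    for (int i = 0; i < n - 1; i++)
                        for (int u = 0; u < n; u++) {
                            if (dist[u] == Integer.MAX_VALUE) continue;
                            for (int id : graph.get(u)) {
                                int[] e = edges.get(id);
                                if (e[1] > 0 && dist[u] + e[2] < dist[e[0]]) {
                                    dist[e[0]] = dist[u] + e[2];
                                    prevEdge[e[0]] = id;
                                }
                            }
                        }
                    if (prevEdge[t] == -1) return new int[]{flow, cost}; // no path left
                    // Find the bottleneck, then push flow along the path (id ^ 1 is
                    // the paired reverse edge, so it yields the edge's origin node).
                    int push = Integer.MAX_VALUE;
                    for (int v = t; v != s; v = edges.get(prevEdge[v] ^ 1)[0])
                        push = Math.min(push, edges.get(prevEdge[v])[1]);
                    for (int v = t; v != s; v = edges.get(prevEdge[v] ^ 1)[0]) {
                        edges.get(prevEdge[v])[1] -= push;
                        edges.get(prevEdge[v] ^ 1)[1] += push;
                        cost += push * edges.get(prevEdge[v])[2];
                    }
                    flow += push;
                }
            }
        }

    For live congestion, edge costs can be re-derived from current traffic and the flow recomputed every few game ticks; with a few hundred nodes this is well within a frame budget.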

    Read the article

  • OutOfMemoryException when reading a 500 MB file with FileStream

    - by Alhambra Eidos
    Hi all, I'm using FileStream to read a big file (500 MB) and I get an OutOfMemoryException. Any solutions for it? My code is:

        using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read))
        {
            byte[] b2 = ReadFully(fs3, 1024);
        }

        public static byte[] ReadFully(Stream stream, int initialLength)
        {
            // If we've been passed an unhelpful initial length, just
            // use 32K.
            if (initialLength < 1)
            {
                initialLength = 32768;
            }
            byte[] buffer = new byte[initialLength];
            int read = 0;
            int chunk;
            while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0)
            {
                read += chunk;
                // If we've reached the end of our buffer, check to see if there's
                // any more information
                if (read == buffer.Length)
                {
                    int nextByte = stream.ReadByte();
                    // End of stream? If so, we're done
                    if (nextByte == -1)
                    {
                        return buffer;
                    }
                    // Nope. Resize the buffer, put in the byte we've just
                    // read, and continue
                    byte[] newBuffer = new byte[buffer.Length * 2];
                    Array.Copy(buffer, newBuffer, buffer.Length);
                    newBuffer[read] = (byte)nextByte;
                    buffer = newBuffer;
                    read++;
                }
            }
            // Buffer is now too big. Shrink it.
            byte[] ret = new byte[read];
            Array.Copy(buffer, ret, read);
            return ret;
        }

    Thanks in advance,
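
    The root problem is holding all 500 MB in a single array: ReadFully doubles its buffer as it grows, so near the end it briefly needs around 1 GB of contiguous memory. If the data can be handled as it streams, a fixed-size buffer keeps memory flat no matter the file size. A sketch of the chunked pattern in Java (the C# FileStream version is line-for-line analogous):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class ChunkedReader {
            // Process a large file in fixed-size chunks instead of loading all of it:
            // memory use stays at one 64 KB buffer regardless of file size.
            public static void main(String[] args) throws IOException {
                byte[] buffer = new byte[64 * 1024];
                long total = 0;
                try (InputStream in = new FileInputStream(args[0])) {
                    int read;
                    while ((read = in.read(buffer)) > 0) {
                        total += read;
                        // handle buffer[0..read) here (parse, hash, copy, ...)
                    }
                }
                System.out.println("Processed " + total + " bytes");
            }
        }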

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
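
    For what it's worth, one way to automate this is a small program that reads the table's metadata and prints one INSERT per row. Below is a sketch in Java/JDBC with naive single-quote escaping, which is adequate for dummy test data; the table name and connection string are placeholders. (The SQL Server tooling ecosystem also has ready-made script-generation options, so treat this as the fallback, not the only route.)

        import java.sql.*;

        public class InsertScriptGenerator {
            // Emits INSERT statements for every row of a table. Strings are quoted
            // naively (single quotes doubled); numbers are emitted as-is.
            public static void main(String[] args) throws SQLException {
                String table = "MyTable";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=TestDb;user=sa;password=secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                    ResultSetMetaData md = rs.getMetaData();
                    int cols = md.getColumnCount();
                    StringBuilder colList = new StringBuilder();
                    for (int i = 1; i <= cols; i++) {
                        if (i > 1) colList.append(", ");
                        colList.append(md.getColumnName(i));
                    }
                    while (rs.next()) {
                        StringBuilder values = new StringBuilder();
                        for (int i = 1; i <= cols; i++) {
                            if (i > 1) values.append(", ");
                            Object v = rs.getObject(i);
                            if (v == null) values.append("NULL");
                            else if (v instanceof Number) values.append(v);
                            else values.append('\'').append(v.toString().replace("'", "''")).append('\'');
                        }
                        System.out.printf("INSERT INTO %s (%s) VALUES (%s);%n", table, colList, values);
                    }
                }
            }
        }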

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

        C:\Documents and Settings\bob> C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost
        mysqldump: Out of memory (Needed 10380928 bytes)
        mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server

    As you can see, I am already using --quick and --skip-opt, so I cannot figure out what is causing the issue. The server log has the following messages:

        100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
        InnoDB: a BLOB with malloc! Total allocated memory
        InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve
        InnoDB: a big column. Table name adobe/tb_form_data

    Any help in this regard is highly appreciated. P.S.: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam

    Read the article

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object? I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is, there is a relatively large amount of data coming through (several hundred K per second constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get. Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option to use overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need. What I would like to do is parse each JSON chunk as it comes back from the server, and then clear out responseText so that I can keep using the same connection. However, responseText is read-only. It cannot be directly emptied by any method I have found. Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go? I am not including code because this is really almost a code-agnostic question. The Javascript routines that spawn the XHRs and handle the returned data are very, very simple.

    Read the article

  • Counting rows in a SQL Server (2005) table?

    - by David.Chu.ca
    I have a simple question about two ways to get a row count from SQL Server (2005). I am using VS 2005. The first option:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    I fetch the list of ids returned by this call into memory, and then get the count from List.Count. The other option:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    This call gets the count directly. The issue is that I had several timeout exceptions with the second method. What I found is that Table1 is too big, with millions of rows. When I used the first option, it seemed OK. I am confused by the fact that COUNT(*) takes more time than fetching all the rows (is that true?). Could the aggregate COUNT(*) cause SQL Server to create a temporary table or cache on the server side, resulting in slow performance when the table is too big? I am not sure what the best way to get the count is.

    Read the article

  • SVN dev cycle: how to handle lots of minor "features" pending approval

    - by Julian Davchev
    Hi, I've read similar questions regarding this but still feel the need to ask. I have a scenario where we have lots of tiny "features" pending approval. I generally see two approaches.

    1. Keep trunk solid and have tons of branches, one for each tiny "feature"; basically, every new thingy is a branch.
       Cons: might become a nightmare to support so many branches, no matter how small each change is (keeping all branches in sync, etc.). The worst con I see is setting up the test system so one can easily examine changes for approval: basically we would need to host all branches, which seems insane.
       Pros: once approved, a branch is seemingly easy to merge back to trunk, and a new release can be tagged and deployed.

    2. Big features get a branch, and small changes go directly into the (relatively stable) trunk.
       Pros: easier to set up the test system, as most changes will be directly visible there; for big features it should be easy to maintain a separate branch on test.
       Cons: I don't really see how a release would go, since I would not be able to release just one part of trunk. That would involve cherry-picking, which is crazy to follow. The other approach is to enforce that after some time (a week or so) all small features must be approved, so they can be deployed before new tasks are assigned. I would just create a release branch, and either all or none of the small features go live. That will be a fun discussion with the head people.

    I guess having lots of small pending changes is just inherently hard to track.

    Read the article

  • How to develop modular web UIs with Django?

    - by nh2
    When doing larger sites in "big business", you most probably work in a team with several developers. Let's say dev A makes a form to insert new user data, B creates a user list, C builds some privilege administration and D does crazy statistics-graph work with image generation, and so on. Each dev begins to develop his own component, creates a view and a template and tests each independently, until every component works. Now, the client wants to have all those components on one big HTML page. How to achieve this? How to assemble different views/templates as a composition so that they remain modular and can be developed and tested independently? It seems inheritance is not the way to go, because all of those UI components are equal and there is no hierarchy. The idea of the assembling template is something like:

        <html>
        <head>
            <!-- include the css for the components and their assembly -->
        </head>
        <body>
            <!-- include user data form here -->
            <!-- some containers, images, and so on -->
            <!-- show user list -->
            <!-- show privilege administration in this part -->
            <!-- and finally, the nice statistic graphs -->
            <!-- perhaps, we want to display some other components here in future -->
        </body>
        </html>

    I have not found an answer on the net yet. Most people come up with one big template which just implements all of the UI functionality, removing all modularity. Each component shall have its own template and view, dealing only with that component and developed by one person each, and they shall then be stuck together just like bricks. I would highly appreciate any hints!

    Read the article

  • Can you have a Dynamic Data Field which consists of a list of fields?

    - by Telos
    This is a purely theoretical question (at least until I start trying to implement it), but here goes. I wrote a web form a long time ago which has a configurable section for gathering information. Basically, for some customers there are no fields; for other customers there are up to 20 fields. I got it working by dynamically creating the fields at just the right time in the page lifecycle, and going through a lot of headaches. Two years later, I need to make some pretty big updates to this web form, and there are some nifty new technologies. I've worked with ASP.NET Dynamic Data just a bit and, well, a half-crazed plan just occurred to me. The Ticket object has a one-to-many relationship to ExtendedField; we'll call that relationship Fields for brevity. Using that, the idea would be to create a FieldTemplate that dynamically generated the list of fields and displayed it. The big questions here would probably be: 1) Can a single field template resolve to multiple web controls without breaking things? 2) Can Dynamic Data handle updating/inserting multiple rows in such a fashion? 3) There was a third question I had a few minutes ago, but coworkers interrupted me and I forgot. So now the third question is: what is the third question? So basically, does this sound like it could work, or am I missing a better/more obvious solution?

    Read the article

  • Data mining - parsing a log file in Java

    - by nuvio
    Hello, I am working on a Java project for university in which I should analyse poker hands. I found some poker hands in a txt log file. They typically look like this:

        PokerStars Zoom Hand #86981279921: Hold'em No Limit ($0.10/$0.25 USD) - 2012/09/30 23:49:51 ET
        Table 'Whirlpool Zoom 40-100 bb' 9-max Seat #1 is the button
        Seat 1: lgwong ($30.99 in chips)
        Seat 2: hastyboots ($28.61 in chips)
        Seat 3: seula i ($25.31 in chips)
        Seat 4: fr_kevin01 ($31.81 in chips)
        Seat 5: limey05 ($27.45 in chips)
        Seat 6: sanlu ($24.65 in chips)
        Seat 7: Masterfrank ($25.35 in chips)
        Seat 8: Refu$e2Lose ($33.23 in chips)
        Seat 9: 1pepepe0114 ($37.62 in chips)
        hastyboots: posts small blind $0.10
        seula i: posts big blind $0.25
        *** HOLE CARDS ***
        fr_kevin01: folds
        limey05: folds
        sanlu: folds
        Masterfrank: folds
        Refu$e2Lose: folds
        1pepepe0114: folds
        lgwong: folds
        hastyboots: folds
        Uncalled bet ($0.15) returned to seula i
        seula i collected $0.20 from pot
        seula i: doesn't show hand
        *** SUMMARY ***
        Total pot $0.20 | Rake $0
        Seat 1: lgwong (button) folded before Flop (didn't bet)
        Seat 2: hastyboots (small blind) folded before Flop
        Seat 3: seula i (big blind) collected ($0.20)
        Seat 4: fr_kevin01 folded before Flop (didn't bet)
        Seat 5: limey05 folded before Flop (didn't bet)
        Seat 6: sanlu folded before Flop (didn't bet)
        Seat 7: Masterfrank folded before Flop (didn't bet)
        Seat 8: Refu$e2Lose folded before Flop (didn't bet)
        Seat 9: 1pepepe0114 folded before Flop (didn't bet)

    My problem is that I am not sure how to parse the log file: the only approach I know is "manually" scanning line by line for a particular character or symbol, but I am afraid it would need exhaustive error handling. So I was wondering whether there are other techniques, or a better way, to parse these poker hands? Many thanks for your help.
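
    Line-oriented logs like this parse well with one anchored regular expression per line type: a line either matches a known shape and yields its fields, or it gets flagged, so error handling stays local rather than exhaustive. Here is a sketch for two of the line types above; the patterns are simplifications (real hands have more action variants, e.g. "raises $X to $Y", which this version would report as unrecognized):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class HandHistoryParser {
            // One anchored pattern per line type: a line is either fully
            // understood or rejected, never half-parsed.
            private static final Pattern SEAT = Pattern.compile(
                "^Seat (\\d+): (.+) \\(\\$([\\d.]+) in chips\\)$");
            private static final Pattern ACTION = Pattern.compile(
                "^(.+?): (folds|checks|calls|bets|raises|posts small blind|posts big blind)( \\$([\\d.]+))?$");

            public static void parseLine(String line) {
                Matcher m = SEAT.matcher(line);
                if (m.matches()) {
                    System.out.printf("seat=%s player=%s stack=%s%n",
                        m.group(1), m.group(2), m.group(3));
                    return;
                }
                m = ACTION.matcher(line);
                if (m.matches()) {
                    System.out.printf("player=%s action=%s amount=%s%n",
                        m.group(1), m.group(2), m.group(4) == null ? "-" : m.group(4));
                    return;
                }
                // Unrecognized line: log it for inspection instead of guessing.
            }

            public static void main(String[] args) {
                parseLine("Seat 2: hastyboots ($28.61 in chips)");
                parseLine("seula i: posts big blind $0.25");
            }
        }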

    Read the article

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: read a huge (one million line) text file (for this example, assume it's a CSV of some sort) and keep a reference to the beginning of each line for faster lookup in the future (read a line starting at X). I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing the following:

        Foo
        Bar
        Baz
        Bla
        Fasel

    and this very simple code:

        using (var sr = new StreamReader(@"C:\Temp\LineTest.txt"))
        {
            string line;
            long pos = sr.BaseStream.Position;
            while ((line = sr.ReadLine()) != null)
            {
                Console.Write("{0:d3} ", pos);
                Console.WriteLine(line);
                pos = sr.BaseStream.Position;
            }
        }

    the output is:

        000 Foo
        025 Bar
        025 Baz
        025 Bla
        025 Fasel

    I can imagine that the stream is trying to be helpful/efficient and probably reads in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: is there any way to get the (byte, char) offset while reading a file line by line, without using a basic Stream and messing with \r, \n, \r\n and string encoding manually? Not a big deal, really; I just don't like building things that might already exist.
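
    The usual way out is to track offsets on the raw byte stream yourself, where no read-ahead buffering can lie about the position. A sketch in Java (the same approach ports to a .NET Stream); it assumes an encoding whose line breaks are single bytes, such as ASCII or UTF-8:

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;

        public class OffsetLineReader {
            // Returns the byte offset at which each line starts. Working on the
            // raw stream means buffering cannot distort the reported position.
            public static List<Long> lineOffsets(File file) throws IOException {
                List<Long> offsets = new ArrayList<>();
                try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
                    long pos = 0;
                    boolean atLineStart = true;
                    int b;
                    while ((b = in.read()) != -1) {
                        if (atLineStart) {
                            offsets.add(pos);
                            atLineStart = false;
                        }
                        pos++;
                        if (b == '\n') atLineStart = true; // covers both \n and \r\n
                    }
                }
                return offsets;
            }

            public static void main(String[] args) throws IOException {
                for (long off : lineOffsets(new File(args[0]))) {
                    System.out.printf("%03d%n", off);
                }
            }
        }

    To read line N later, seek to offsets.get(N) with a RandomAccessFile and decode up to the next '\n'.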

    Read the article

  • Adding XOR function to bigint library

    - by Jason Gooner
    Hi, I'm using this big-integer library for JavaScript: http://www.leemon.com/crypto/BigInt.js and I need to be able to XOR two bigInts together; sadly the library doesn't include such a function. The library is relatively simple, so I don't think it's a huge task, just confusing. I've been trying to hack one together without much luck, and would be very grateful if someone could lend me a hand. This is what I've attempted (it might be wrong), but I'm guessing the structure will be quite similar to some of the other functions in there:

        function xor(x, y) {
            var c, k, i;
            var result = new Array(0);                      // bigInt for the result
            k = x.length > y.length ? x.length : y.length;  // array length of the larger number

            // Make sure result is the correct array size? Maybe:
            result = expand(result, k); // ?

            for (c = 0, i = 0; i < k; i++) {
                // Do some XOR here
            }

            // Return the bigInt XOR result
            return result;
        }

    What confuses me is that I don't really understand how the library stores numbers in the array blocks for a bigInt. I don't think it's simply a case of bigintC[i] = bigintA[i] ^ bigintB[i], and most of the other functions have some masking operation at the end that I don't understand. I would really appreciate any help getting this working. Thanks
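
    For orientation: libraries like this one store the number little-endian in fixed-width "limbs", with each array element holding a set number of bits of the value (the library's bpe constant) and the high bits of each element left zero. Given that layout, XOR really is element-wise, and the trailing mask seen in other functions just keeps each limb inside its bit width. A sketch of the idea in Java, assuming 15-bit limbs (a typical bpe value; check the library's actual constant before porting this back):

        public class LimbXor {
            static final int BPE = 15;              // bits per limb: an assumption
            static final int MASK = (1 << BPE) - 1; // keeps each limb inside its BPE bits

            // Little-endian limbs: limb[0] holds the lowest BPE bits of the value.
            // XOR is purely element-wise; the mask guards against stray high bits.
            public static int[] xor(int[] x, int[] y) {
                int k = Math.max(x.length, y.length);
                int[] result = new int[k];          // missing limbs are treated as zero
                for (int i = 0; i < k; i++) {
                    int xi = i < x.length ? x[i] : 0;
                    int yi = i < y.length ? y[i] : 0;
                    result[i] = (xi ^ yi) & MASK;
                }
                return result;
            }

            public static void main(String[] args) {
                int[] a = {3, 1};        // value: 1*2^15 + 3
                int[] b = {1, 1};        // value: 1*2^15 + 1
                int[] c = xor(a, b);
                System.out.println(c[0] + "," + c[1]); // prints 2,0
            }
        }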

    Read the article

  • Designing a Silverlight dashboard with MEF: is it possible? (with dynamic loading of XAPs)

    - by Tim Robbin
    Hello! I am just trying to wrap my head around MEF. And as I am really going to love it (I guess), I started my first sample project and immediately stumbled into a big problem, and now I am asking myself whether I can use MEF for my scenario at all. The scenario is the following: imagine a dashboard with, let's say, five regions, and above each region two comboboxes. The values in the first combobox represent different possible views (for example, chart control, table control, picture control, ...) and the values of the second combobox represent the different data sources for the currently selected control. As the controls are very big in size, one wants to download them as needed. If the user selects one combobox item, the corresponding control's XAP should be loaded and displayed in that specific region. If the user selects another control in the same combobox, the current control should be removed from the visual tree and the next control should be downloaded and displayed. If the user changes the selection in a different combobox, the corresponding control should again be loaded only in that specific region, perhaps with different data. And to make it a little more interesting: as this is a dashboard, the layout can be changed, for example from five regions to ten regions. I've seen the video "MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata" (http://csharperimage.jeremylikness.com/2010/03/mvvm-with-mef-in-silverlight-video_09.html), but he is using an ItemsControl and working with Visibility, and he only has ONE region. So I think that technique will not work for me... Phew, I hope I could make myself clear! Thanks a lot for any piece of information! Greetings, Tim.

    Read the article

  • COBOL web development/hosting resources

    - by felixm
    Hello, I'm employed at a fairly big company here in Germany and got the job of creating its main website, which will feature:

        - Static content: information and presentations
        - An employee area (around 6000 employees) featuring various things, from calendars and job descriptions to some sort of groups
        - Too many other dynamic things to list here

    I have decided to use COBOL for the job. It may be very underrated, but it is a very powerful language, especially for business apps and, as my co-workers say, web (2.0) development too. I also need to use COBOL because the entire backend and transaction system of the company is programmed in it (some small parts were programmed in Lisp too, I don't know exactly why). I have also received an API that makes it possible to use COBOL with MySQL easily. This is a big project and it will probably take more than 2 months to build. What should I expect when building a huge web app in COBOL? Are there web frameworks available for COBOL? Some sort of MVC? Are there any good resources on practical web development with COBOL? Thanks in advance

    Read the article

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple days, and been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So, this got me to thinking, when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily over complicate your design, and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working, but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road. Designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested on the Reddit site linked below, and I'm going to give memcache a try. So, the basic question, when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for when doing so? A couple of things I've been reading, for those who are interested: http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html http://developer.yahoo.com/performance/rules.html

    Read the article

  • .NET real-time stream processing: huge and fast RAM buffer needed

    - by mack369
    The application I'm developing communicates with a digital audio device, which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit). Basically the application consists of 3 threads:

        1. Main thread: GUI, control, etc.
        2. Bus reader: data is continuously read from the device and saved to a file buffer (there is no logic in this thread)
        3. Data interpreter: reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files

    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow: for 24 parallel recordings of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process would increase the speed considerably. The second problem is that the file buffer gets really heavy for long recordings, and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array where the bus reader saves the data and the data interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
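
    A bounded circular buffer with blocking put/take is the standard shape for this producer/consumer hand-off: the bus reader can never overwrite unread samples, and the interpreter never busy-waits. Here is a minimal sketch in Java (the .NET construction is the same, e.g. with Monitor.Wait/Pulse); a production version would copy in block-sized chunks rather than byte by byte:

        public class ByteRingBuffer {
            // Single-producer / single-consumer circular byte buffer.
            // The bus reader calls put(), the data interpreter calls take();
            // both block instead of losing samples when the other side lags.
            private final byte[] buf;
            private int head = 0, tail = 0, count = 0;

            public ByteRingBuffer(int capacity) {
                buf = new byte[capacity]; // e.g. 1 << 30 for a 1 GB buffer
            }

            public synchronized void put(byte[] src, int off, int len) throws InterruptedException {
                for (int i = 0; i < len; i++) {
                    while (count == buf.length) wait(); // buffer full: block the producer
                    buf[tail] = src[off + i];
                    tail = (tail + 1) % buf.length;
                    count++;
                    notifyAll();
                }
            }

            public synchronized int take(byte[] dst, int off, int len) throws InterruptedException {
                while (count == 0) wait(); // buffer empty: block the consumer
                int n = Math.min(len, count);
                for (int i = 0; i < n; i++) {
                    dst[off + i] = buf[head];
                    head = (head + 1) % buf.length;
                }
                count -= n;
                notifyAll();
                return n;
            }
        }

    One big preallocated array like this also avoids repeated large allocations, which is usually the real enemy when you need a gigabyte-scale buffer in a managed runtime.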

    Read the article

  • Find out which row caused the error

    - by Felipe Fiali
    I have a big fat query that's written dynamically to integrate some data. Basically, what it does is query some tables, join some other ones, process some data, and then insert it into a final table. The problem is that there's too much data, and we can't really trust the sources, because there could be erroneous or inconsistent data. For example, I've spent almost an hour looking for an error while developing against a customer's database, because somewhere in the middle of my big fat query there was an error converting some varchar to datetime. It turned out they had some sales dated '2009-02-29', an out-of-range date. And yes, I know: why was that stored as varchar? Well, the source database has 3 columns for dates, 'Month', 'Day' and 'Year'. I have no idea why it's like that, but still, it is. But how would I treat that, if the source is not trustworthy? I can't swallow the exceptions; the error really needs to come up to another level with the original message, but I want to provide some more info, so that the user could at least try to solve it before calling us. So I thought about displaying to the user the row number, or some ID that would at least give him an idea of which record he'd have to correct. That's also a hard job, because there will be times when the integration runs over 80,000 records. And in an 80,000-record integration, a single dummy error message, 'The conversion of a varchar data type to a datetime data type resulted in an out-of-range datetime value', means nothing at all. So any idea would be appreciated. Oh, I'm using SQL Server 2005 with Service Pack 3.

    Read the article

  • Button outside view... how to make it work

    - by Mike
    I have a UIImageView-based class that creates objects with the following characteristics: a small image square and a UITextView below it. If the user drags the object by the image, he can drag it around. If the user taps on the UITextView, the keyboard has to appear so the user can change the text in it. The objects created by the class are like this: 1) the object creates a 60x60-pixel frame; 2) it puts an image inside that frame; 3) it creates a UITextView and puts it below that 60x60 frame. So, as the class is UIImageView-based and it creates a 60x60-pixel frame with the UITextView located outside that area, the text view is, in theory, outside the tappable area of the object. Obviously I could make the class create a big frame to encompass both the image and the text view, but that frame would be too big, and I need the objects created by this class to be as close together as possible when I add them to another view. I could also create the text views from the same view where I create the objects, but then I would have to manage each object and its corresponding text view separately, and I need them to move together... Any ideas on the simplest way to do this?

    Read the article

  • How to display only one letter in Flex Text Layout Framework ContainerController?

    - by rattkin
    I'm trying to implement a dropped-initials (drop cap) feature in my Flex application. Since the Text Layout Framework does not support floats, the only known solution is to create additional containers that are linked together, displaying the same TextFlow. The width and positioning of these containers have to be set in such a way that it pretends to be a float. I'm using the same solution for dropped initials. Basically, I create three containers: one for the initial letter (the first letter in the TextFlow), another for the text floating around it, and a third to display the text below these two. All these containers share one TextFlow. I have big issues with forcing a controller to display only one letter from the TextFlow, and sizing it accordingly so that it won't take any unnecessary additional space and won't get any more text in it. Using ContainerController.getContentBounds() returns the size of the whole sprite of the letter (with the empty ascent/descent parts), not the height/width of the actually rendered letter. I'm using textFlow.flowComposer.getLineAt(0).getTextLine().getAtomBounds(0), but I think it's still not right. Also, even if I set the container to these dimensions, it sometimes displays additional text, especially for bigger fonts (screenshot omitted). Also, if I set the width to just 1px less than the content bounds, things go crazy: containers are moved around, positioned with big margins, etc. How should I solve this? Is it a bug in TLF / the Player? Can I fix it somehow? Can I detect the size of the letter, or make the ContainerController autosize to fit just one letter?

    Read the article
