Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • jQuery Cycle plugin and image tag from code behind

    - by Geetha
    Hi All, I am using the Cycle plugin to show images. I am binding the HTML for the images from code behind. Problem: the images are not getting displayed. If I hard-code the image tags, it works. Code:

        $(window).load(function() {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                data: "{}",
                url: "Default.aspx/GetImage",
                dataType: "json",
                success: function(data) {
                    alert(data.d);
                    $("#sample").html(data.d);
                    $('#sample').cycle({ fx: 'fade', continuous: true, speed: 7500, timeout: 55000, pause: 1, sync: 1 });
                }
            });
        });

    data.d will give:

        <img src="Images/Frontbanner/s0.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s111.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s112.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s113.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s114.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s115.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s116.jpg" width="664" height="428" border="0" />
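    A possible cause (not confirmed in the post) is that Cycle is initialised before the freshly injected images have finished loading, so it cannot measure them. A minimal sketch of one workaround inside the success callback, assuming jQuery and the same options as above (the done() helper is hypothetical):

        success: function(data) {
            var $sample = $("#sample").html(data.d);
            var $imgs = $sample.find("img"),
                remaining = $imgs.length;
            function done() {
                // start cycling only once every injected image has loaded
                if (--remaining === 0) {
                    $sample.cycle({ fx: 'fade', continuous: true, speed: 7500, timeout: 55000, pause: 1, sync: 1 });
                }
            }
            $imgs.each(function() {
                // images served from cache may already be complete
                if (this.complete) { done(); } else { this.onload = done; }
            });
        }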

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have iteratively adds each next object from S into the Si that results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = the i for which size(f(Si + {x})) - size(f(Si)) is minimal
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). It is also slow. To speed things up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound family of algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively-improving algorithm?
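    For reference, a minimal Python sketch of the greedy heuristic described above, with a fixed number of subsets as a simplification. The helpers build() (the f above), size_of() and similar_enough() are assumptions, not from the post:

        def split_greedy(objects, max_subsets):
            subsets = [[] for _ in range(max_subsets)]
            sizes = [0] * max_subsets                      # cached size(f(Si))
            for x in objects:
                best_i, best_delta = None, None
                for i, subset in enumerate(subsets):
                    # pre-filter: skip subsets whose members are not similar enough to x
                    if subset and not similar_enough(x, subset):
                        continue
                    delta = size_of(build(subset + [x])) - sizes[i]
                    if best_delta is None or delta < best_delta:
                        best_i, best_delta = i, delta
                if best_i is None:                         # nothing passed the filter: fall back
                    best_i = 0
                    best_delta = size_of(build(subsets[0] + [x])) - sizes[0]
                subsets[best_i].append(x)
                sizes[best_i] += best_delta
            return subsets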

    Read the article

  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is at times less important than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.) I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds. So I want to mention that speed and the ability to handle large (up to 150 MB) files are also concerns. Is there a good, fast, free or not-too-expensive text editor, with versions on Windows and Linux (and maybe Mac too), that is Unicode-aware and capable of displaying invisibles like ZWSP? One that has syntax highlighting, can handle large files and is customizable enough that I won't tear my hair out in frustration? Thanks, Roger_S

    Read the article

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain": "Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies." Google then says that the best way to do this is to buy a new domain and set it to point to your current one: "To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources." This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
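    The ".ASPXANONYMOUS" cookie comes from ASP.NET's anonymous identification feature, so one approach (a sketch, assuming the static domain is served by its own IIS site/application with its own web.config) is to switch off the cookie-producing features for that site only:

        <!-- web.config for the static-content site (hypothetical) -->
        <system.web>
          <anonymousIdentification enabled="false" />
          <sessionState mode="Off" />
          <authentication mode="None" />
        </system.web>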

    Read the article

  • Parallel.For Batching

    - by chibacity
    Is there built-in support in the TPL for batching operations? I was recently playing with a routine to carry out character replacement on a character array which required a lookup table, i.e. transliteration:

        for (int i = 0; i < chars.Length; i++)
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        }

    I could see that this could be trivially parallelized, so I jumped in with a first stab which I knew would perform worse, as the tasks were too fine-grained:

        Parallel.For(0, chars.Length, i =>
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        });

    I then reworked the algorithm to use batching so that the work could be chunked onto different threads in less fine-grained batches. This made use of threads as expected and I got some near-linear speed-up. I'm sure that there must be built-in support for batching in the TPL. What is the syntax, and how do I use it?

        const int CharBatch = 100;
        int charLen = chars.Length;
        Parallel.For(0, ((charLen / CharBatch) + 1), i =>
        {
            int batchUpper = ((i + 1) * CharBatch);
            for (int j = i * CharBatch; j < batchUpper && j < charLen; j++)
            {
                char replaceChar;
                if (lookup.TryGetValue(chars[j], out replaceChar))
                {
                    chars[j] = replaceChar;
                }
            }
        });
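    For what it's worth, the TPL does expose range partitioning through System.Collections.Concurrent.Partitioner.Create, which hands each worker a contiguous (fromInclusive, toExclusive) range rather than a single index. A sketch of the same transliteration loop written that way, reusing chars and lookup from the post (not code from the post itself):

        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // Each range is a Tuple<int, int> chosen by the partitioner.
        Parallel.ForEach(Partitioner.Create(0, chars.Length), range =>
        {
            for (int j = range.Item1; j < range.Item2; j++)
            {
                char replaceChar;
                if (lookup.TryGetValue(chars[j], out replaceChar))
                {
                    chars[j] = replaceChar;
                }
            }
        });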

    Read the article

  • JavaScript: changing the source of an image

    - by Pete Herbert Penito
    Hi Everyone! I have some JavaScript which runs a timer that animates something on the website. It all works perfectly, but I want an image to change when the animation runs. It uses jQuery:

        if (options.auto && dir == "next" && !clicked) {
            timeout = setTimeout(function() {
                if (document.images['bullet1'].src == "img/bulletwhite.png") {
                    document.images['bullet1'].src = "img/bullet.png";
                    document.images['bullet2'].src = "img/bulletwhite.png";
                }
                animate("next", false);
            }, diff * options.speed + options.pause);
        }

    options.auto means that it's automatically cycling, dir is the direction of the motion and clicked is whether or not the user clicked it. Is there something wrong with my syntax here? I ran Firebug with it, and it doesn't throw any errors; it simply won't work. Any advice would help! I should mention that the src of bullet1 starts at bulletwhite.png, and I was hoping for it to change to bullet.png and have bullet2 change to bulletwhite.png.
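    One thing worth checking (an observation, not from the post): the DOM src property returns the fully qualified URL (e.g. http://example.com/img/bulletwhite.png), so comparing it against the relative string "img/bulletwhite.png" with == never matches, even though assigning a relative path to it works. A minimal sketch of a comparison that tolerates this:

        var bullet1 = document.images['bullet1'];
        // src resolves to an absolute URL, so test the tail of it (or compare getAttribute('src'))
        if (bullet1.src.indexOf("img/bulletwhite.png") !== -1) {
            bullet1.src = "img/bullet.png";
            document.images['bullet2'].src = "img/bulletwhite.png";
        }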

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple - it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
                <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class, and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused, and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
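    As one illustration of the class-based option being weighed above (a sketch with made-up names, not a recommendation from the post), the shared locals become attributes on self, so inner steps can be pulled into methods without huge parameter lists:

        class Pipeline:
            """Groups the formerly-local variables as attributes so steps can be split into methods."""

            def __init__(self, config):
                self.config = config
                self.results = []

            def run(self):
                for blah in self.load_blahs():
                    self.process_blah(blah)
                self.save_results()

            def load_blahs(self):
                return []          # placeholder for the real data source

            def process_blah(self, blah):
                # tightly coupled computation can read/write self.* instead of passing 10+ arguments
                self.results.append(blah)

            def save_results(self):
                pass

        if __name__ == "__main__":
            Pipeline(config=None).run()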

    Read the article

  • How can I accelerate the generation of an MD5 checksum within VB.NET?

    - by Richard
    I'm working with some very large files residing on P2 (Panasonic) cards. Part of the process we employ is to first generate a checksum of the file we are going to copy, then copy the file, then run a checksum on the file to confirm that it copied OK. The problem is that the files are large (70 GB+) and take a long time to complete. It's an issue since we will eventually be dealing with thousands of these files. I would like to find a faster way to generate the checksum other than using the System.Security.Cryptography.MD5CryptoServiceProvider. I don't care if this means using a specialized hardware card, provided it works and is not too ungodly expensive. I would prefer to have a method of encoding that provides some feedback as to how far along the process has gone, so I can display it like I do now. The application is written in VB.NET. I would prefer to be able to use it as a component, library or reference within my application, but I'm willing to call an outside application if there is enough improvement in the speed of generating the checksum. Needless to say, the checksum must be consistent and correct. :-) Thank you in advance for your time and efforts, Richard
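    The post doesn't include its current code, but as a rough sketch of one common pattern - streaming the file through HashAlgorithm.TransformBlock in large chunks so progress can be reported between chunks (for files this size the disk is usually the bottleneck rather than the MD5 class). The function name and the progress callback are assumptions:

        Imports System.IO
        Imports System.Security.Cryptography

        ' Hypothetical helper: hashes a file in 1 MB chunks and reports fractional progress.
        Public Function ComputeMd5WithProgress(path As String, progress As Action(Of Double)) As String
            Using md5 As MD5 = MD5.Create()
                Using fs As New FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 1 << 20)
                    Dim total As Long = fs.Length
                    Dim done As Long = 0
                    Dim buffer(1048575) As Byte ' 1 MB chunk
                    Dim read As Integer = fs.Read(buffer, 0, buffer.Length)
                    While read > 0
                        md5.TransformBlock(buffer, 0, read, Nothing, 0)
                        done += read
                        progress(done / CDbl(total))
                        read = fs.Read(buffer, 0, buffer.Length)
                    End While
                    md5.TransformFinalBlock(buffer, 0, 0)
                    Return BitConverter.ToString(md5.Hash).Replace("-", "")
                End Using
            End Using
        End Function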

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for deflate or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?
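    Since the site is on IIS7, another route (an alternative to the hand-rolled filter above, not something from the post) is to let IIS do the compression and drop the Response.Filter code entirely; on many shared hosts this can be switched on from web.config, provided the Dynamic Content Compression module is installed and the section isn't locked:

        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>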

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for doing the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where exactly my problem lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be organised in some kind of index that could speed up searching. Any suggestions or code samples are much welcome. Thanks.
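    One way to tame the overlapping-keystroke problem (a sketch of a common approach, not from the post, assuming .NET 4's TPL is available; listBox stands for the WPF ListBox and listCollection is the list from the post) is to cancel the previous filter task whenever a new keystroke arrives, so only the latest query ever reaches the ListBox:

        private CancellationTokenSource _filterCts;

        private void OnKeyUp(string currentWord)
        {
            // cancel any filter still running for an older keystroke
            if (_filterCts != null) _filterCts.Cancel();
            _filterCts = new CancellationTokenSource();
            var token = _filterCts.Token;

            Task.Factory.StartNew(() =>
                listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList(), token)
            .ContinueWith(t =>
            {
                if (!token.IsCancellationRequested)
                    listBox.ItemsSource = t.Result;   // back on the UI thread
            }, token, TaskContinuationOptions.OnlyOnRanToCompletion,
               TaskScheduler.FromCurrentSynchronizationContext());
        }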

    Read the article

  • gl_FragColor and glReadPixels

    - by chun0216
    I am still trying to read pixels from a fragment shader and I have some questions. I know that gl_FragColor returns a vec4, meaning RGBA, 4 channels. After that, I am using glReadPixels to read the FBO and write it into data:

        GLubyte *pixels = new GLubyte[640*480*4];
        glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    This works fine, but it really has a speed issue. Instead of this, I want to read just RGB and ignore the alpha channel. I tried:

        GLubyte *pixels = new GLubyte[640*480*3];
        glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    instead, and this didn't work. I guess it's because gl_FragColor returns 4 channels and maybe I should do something before this? Actually, since my returned image (gl_FragColor) is grayscale, I did something like:

        float gray = 0.5; // or some other value
        gl_FragColor = vec4(gray, gray, gray, 1.0);

    So is there any efficient way to use glReadPixels instead of the first 4-channel method? Any suggestions? By the way, this is OpenGL ES 2.0 code.
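    For context (not stated in the post): OpenGL ES 2.0 only guarantees the GL_RGBA + GL_UNSIGNED_BYTE combination for glReadPixels; one additional format/type pair is implementation-defined and can be queried at runtime, which may be why the GL_RGB read is simply rejected on a given device. A sketch of checking before reading:

        /* Query the one extra glReadPixels format/type this GLES2 implementation supports. */
        GLint readFormat = 0, readType = 0;
        glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
        glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);

        if (readFormat == GL_RGB && readType == GL_UNSIGNED_BYTE) {
            glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
            glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels);   /* pixels: 640*480*3 bytes */
        } else {
            glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* fallback needs 640*480*4 bytes */
        }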

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
        ORDER BY OrderDate, OrderID;

    Is there anything I should be aware of? The table has millions of records. Thanks.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records and 10 columns). To run any kind of query I use the following modification:

        CREATE PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @tsql AS nvarchar(1024)
            DECLARE @i int, @j int

            IF (@PageSize <= 0) OR (@PageSize > 10000) SET @PageSize = 10000 -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS
                (
                    SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                    FROM MyTableOrView
                    WHERE ' + @Sql + '
                )
                SELECT * FROM MyTableOrViewRN
                WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + CAST(@j as varchar)

            EXEC(@tsql)
        END

    If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }
        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch (id)
            {
                case ::SomeClass1::id: return do_SomeFunc1(parameter);
                case ::SomeClass2::id: return do_SomeFunc2(parameter);
                // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed up compilation while still maintaining the superior cache locality of pulling the functions out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but still a good metric; the file in question takes the most time to build):

        Debug: 8 minutes 50 seconds
        Release: 4 minutes 25 seconds
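    For reference (general GCC behaviour, not from the post): variable tracking is part of debug-info generation, which is consistent with it only blowing up the -g debug build, and GCC has switches to disable it per translation unit. A sketch of how that might be tried on this one generated file (the file name is made up):

        # Hypothetical build line: keep -g but skip the expensive tracking passes for this file only.
        g++ -c -g -O0 -fno-var-tracking-assignments -fno-var-tracking dispatch_generated.cpp -o dispatch_generated.o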

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL,  -- 8 b
            b SMALLINT,            -- 2 b
            c SMALLINT,            -- 2 b
            d REAL,                -- 4 b
            e REAL,                -- 4 b
            f REAL,                -- 4 b
            g INTEGER,             -- 4 b
            h REAL,                -- 4 b
            i REAL,                -- 4 b
            j SMALLINT,            -- 2 b
            k INTEGER,             -- 4 b
            l INTEGER,             -- 4 b
            m REAL,                -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below.

    Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.

    Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like

        SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
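    As a sketch of the array variant being weighed above (an illustration only, not from the post; folding the integer columns into a single REAL[] is a simplification):

        -- Hypothetical array-based layout: one REAL[] holds the former columns b..m in order.
        CREATE TABLE t_arr (
            a   BIGSERIAL NOT NULL,
            arr REAL[],
            CONSTRAINT t_arr_pkey PRIMARY KEY (a)
        );

        -- Fetch one logical column at a time, as in the example above:
        SELECT a, arr[5] FROM t_arr;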

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? It's like this: there is a list of parent codes (strings) and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat.

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: say there are 100 parent codes and each has about 250 values. There will not be duplicates within a single code's list. I am doing this in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0. Then compare the rest of the values with this; if a value is in the map, increment the count, otherwise add it to the map. The downside of this is that to get the duplicates back out, another iteration needs to be performed on a very large list. An alternative is to maintain another hashmap for duplicates, like duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. Hope one of you can help me with it.
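    A minimal Java sketch of the counting approach described above (names are illustrative, not from the post): one pass builds the counts, and since there are no duplicates within a list, any value with a count above 1 appears in more than one list. The final loop only walks the map (at most 100 x 250 entries), not the raw lists again:

        import java.util.*;

        // childLists: one List<String> of child values per parent code
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (List<String> values : childLists) {
            for (String v : values) {
                Integer c = counts.get(v);
                counts.put(v, c == null ? 1 : c + 1);
            }
        }

        // report the duplicates and how often they repeat
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) {
                System.out.println(e.getKey() + " repeats " + e.getValue() + " times");
            }
        }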

    Read the article

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10 MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed streams. Here's the question. Where should I put the settings? Most people say I can't put them in my web.config; others say they can go there. I am a bit confused. Are there any tricks or things I should know? What about MIME types? Should I set some parameters somewhere, considering my stream is XML (a dataset)? Thanks to everyone who would like to help. Alberto

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities and resiliences:

    - The problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
    - OLE DB Destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • Getting data from a dynamic schema

    - by coure2011
    I am using mongoose/node.js to get data as JSON from MongoDB. To use mongoose I need to define a schema first, like this:

        var mongoose = require('mongoose');
        var Schema = mongoose.Schema;

        var GPSDataSchema = new Schema({
            createdAt: { type: Date, default: Date.now },
            speed: { type: String, trim: true },
            battery: { type: String, trim: true }
        });

        var GPSData = mongoose.model('GPSData', GPSDataSchema);

        mongoose.connect('mongodb://localhost/gpsdatabase');
        var db = mongoose.connection;
        db.on('open', function() {
            console.log('DB Started');
        });

    Then in code I can get data from the db like:

        GPSData.find({ "createdAt": { $gte: dateStr, $lte: nextDate } }, function(err, data) {
            res.writeHead(200, {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            });
            var body = JSON.stringify(data);
            res.end(body);
        });

    How do I define a schema for complex data like the following? You can see that subSection can nest to any deeper level.

        [
            {
                'title': 'Some Title',
                'subSection': [{
                    'title': 'Inner1',
                    'subSection': [
                        { 'title': 'test', 'url': 'ab/cd' }
                    ]
                }]
            },
            ...
        ]
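    One possible way to model this in Mongoose (a sketch, not from the post) is a self-referencing sub-schema: define the section schema first, then add() the recursive subSection field afterwards so documents can nest arbitrarily deep. If strict validation at every depth isn't needed, declaring subSection as Schema.Types.Mixed is a simpler (but unvalidated) alternative.

        var SectionSchema = new Schema({
            title: { type: String, trim: true },
            url:   { type: String, trim: true }
        });
        // add the recursive field after the schema exists, so it can refer to itself
        SectionSchema.add({ subSection: [SectionSchema] });

        var Section = mongoose.model('Section', SectionSchema);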

    Read the article

  • PHP-driven site needs password change

    - by Drea
    I have inherited a website that needs the password it uses to access the database changed. I can see that there are two tables within the database, but neither of them has username or password info. The previous web guy moved out of the country and can't be reached. I am not up-to-speed enough to figure this out. I have gone through all the files to try and find the answer but can't get it. It's hosted by GoDaddy.com and I have changed the passwords there, but it didn't change this login info. www.executivehomerents.com/cpanel <- this brings up the prompt for the username & password, which I won't give out, but the page only gives you 5 choices and none of them deals with changing the password. They are simply for changing the data in the tables. If you go to http://www.websitedatabases.com/ <- this is the company the PHPMagic program was purchased from - they have no contact number. Here is another page that might help: http://www.executivehomerents.com/cpanel/dbinput/setup.php I don't think I'll get an answer to this but it's worth a try... thanks.

    Read the article

  • Java URL connection: wait for data being sent through the OutputStream

    - by Mateu
    I'm writing a Java class that tests the upload speed of a connection to a server. I want to check how much data can be sent in 5 seconds. I've written a class which creates a URL, creates a connection, and sends data through the OutputStream. There is a loop where I write data to the stream for 5 seconds. However, I'm not able to see when the data has actually been sent (I write data to the output stream, but the data is not sent yet). How can I wait until the data is really sent to the server? Here goes my code (which does not work):

        URL u = new URL(url);
        HttpURLConnection uc = (HttpURLConnection) u.openConnection();
        uc.setDoOutput(true);
        uc.setDoInput(true);
        uc.setUseCaches(false);
        uc.setDefaultUseCaches(false);
        uc.setRequestMethod("POST");
        uc.setRequestProperty("Content-Type", "application/octet-stream");
        uc.connect();
        st.start();

        // Send the request
        OutputStream os = uc.getOutputStream();

        // This while loop is incorrect because it does not wait for the data to be sent
        while (st.getElapsedTime() < miliSeconds) {
            os.write(buffer);
            os.flush();
            st.addSize(buffer.length);
        }
        os.close();

    Thanks
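    One detail worth knowing (general HttpURLConnection behaviour, not from the post): by default the connection buffers the whole request body in memory and only transmits it when the stream is closed and the response is read, so writes to the OutputStream say little about network throughput. A sketch of forcing it to stream as it writes, using chunked streaming mode and reusing st, buffer and miliSeconds from the post:

        URL u = new URL(url);
        HttpURLConnection uc = (HttpURLConnection) u.openConnection();
        uc.setDoOutput(true);
        uc.setRequestMethod("POST");
        uc.setRequestProperty("Content-Type", "application/octet-stream");
        // send each 64 KB chunk on the wire as it is written instead of buffering the whole body
        uc.setChunkedStreamingMode(64 * 1024);

        OutputStream os = uc.getOutputStream();
        while (st.getElapsedTime() < miliSeconds) {
            os.write(buffer);
            os.flush();
            st.addSize(buffer.length);
        }
        os.close();
        // the upload isn't complete until the server has answered
        int status = uc.getResponseCode();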

    Read the article

  • What are CAD apps written in, and how are they organized?

    - by ldigas
    What are the CAD applications of today (Rhino, AutoCAD) written in, and how are they organized internally? I gave AutoCAD and Rhino as examples, although I would love to hear of other examples as well. I'm particularly interested in knowing what their backends are written in (multiple languages?) and how they are organized, and how they handle their frontend (GUI) in real time. Do they use native Windows APIs or libraries of their own? Because I imagine, as good as they may be, the open-source solutions on today's market won't cut it. I may be wrong... As most of you who have used them know, they handle, amongst other things, relatively complex rotational operations in real time (shading does not interest me). I've been doing some experiments with several packages recently, and for some larger models found that there is a considerable difference in speed in, for example, programmed rotation (big full-ship models) amongst some of them (which I won't name). So I'm wondering about their internals... Also, if someone knows of a book on the subject, I'd be interested to hear of it.

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need more advanced solutions? I have also been thinking about cutting the files up into smaller files, so that one file with 100,000,000 lines is transformed into 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever. So I think maybe it is just too big to be used. But I don't know if you can write code that will scan it without opening it. Speed is the most important factor in this, and I need to know if it can be done as fast as I need it, or if I have to store my data in another way, for example in a database like MySQL or something. Thank you in advance to anybody that can give some good feedback.
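    As an illustration of the hash-based idea only (a sketch with made-up file names; at this scale a compiled language and pre-sorted or pre-indexed data would matter far more than the exact code, and holding 100 million numbers in a Python set is memory-hungry): keep a running set of the numbers seen in every file so far, shrinking it with each file.

        # Intersect the numbers that appear in all 10 files (hypothetical file names).
        files = ["file%d.txt" % i for i in range(1, 11)]

        common = None
        for path in files:
            with open(path) as f:
                numbers = set(int(line) for line in f if line.strip())
            common = numbers if common is None else (common & numbers)
            if not common:          # nothing can appear in all files any more
                break

        print(len(common), "numbers appear in all 10 files")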

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, by table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here. Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
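    For what it's worth, the closest built-in view of other connections is the process list, which shows each connection's current statement, state and elapsed time (though not a row-by-row progress count or start timestamp of a bulk insert). A brief sketch of querying it, with 'your_database' as a placeholder:

        -- Lists every open connection, the statement it is running and how long it has run.
        SHOW FULL PROCESSLIST;

        -- Roughly the same data, filterable, on MySQL 5.1+:
        SELECT id, user, host, db, command, time, state, info
        FROM information_schema.PROCESSLIST
        WHERE db = 'your_database' AND command <> 'Sleep';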

    Read the article

  • Concept: Is mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is worthwhile to develop one of our upcoming products on Mongo. Without going too much into detail, I'll try to explain what the app does. The app simply has "entities". These entities are technical items, like cell phones, TVs, laptops, tablet PCs, and so forth. Of course, a cell phone has different attributes than a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want to have something that we want to call a schema: we define that for tablet PCs we need to save the display size, amount of RAM, size of flash devices, processor type, processor speed and so on. For a cell phone we might save display size, GSM, EDGE, 3G, 4G, processor, RAM, touch-screen technology, bla bla bla. I think you got it :) What I want to realize is that each "category" has a schema, and when one of the system's users enters a new product (let's say the new iPhone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with Mongo. But now comes the tough part, for which I could not find a clean solution... An attribute modeled in Mongo looks like:

        { _id: 1234456, name: "Attribute name", type: 0, description: "description" }

    But what do I do if I need this attribute in several languages, like:

        {
            en: { name: "Attribute name", type: 0, description: "description" },
            de: { name: "Name des Attributs", type: 0, description: "Beschreibung" }
        }

    I also need to ensure that the German attribute gets updated as soon as the English one is updated, for instance when type changes from 0 to 1. Any ideas on that?
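    One way to keep the language-independent fields from drifting apart (a sketch of a possible document shape, not from the post) is to store them once and nest only the translatable strings per locale, so changing type touches a single place. A query can then project the de sub-object (falling back to en) while type stays authoritative:

        {
            _id: 1234456,
            type: 0,                       // shared: updated once, applies to every language
            i18n: {
                en: { name: "Attribute name",     description: "description"  },
                de: { name: "Name des Attributs", description: "Beschreibung" }
            }
        }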

    Read the article

  • genStrAsCharArray optimisation benefits

    - by Rich
    Hi, I am looking into the options available to me for optimising the performance of JBoss 5.1.0. One of the options I am looking at is setting genStrAsCharArray to true in <JBOSS_HOME>/server/<PROFILE>/deployers/jbossweb.deployer/web.xml. This affects the generation of .java code from .jsp files. The comment describes this flag as: "Should text strings be generated as char arrays, to improve performance in some cases?" I have a few questions about this:

    1. Is this the generation of Strings in the dynamic parts of the JSP page (i.e. each time the page is called) or the generation of Strings in the static parts (i.e. when the .java is built from the JSP)?
    2. "In some cases" - which cases are these? What are the situations where the performance is worse?
    3. Does this speed up the generation of the .java, the compilation of the .class, or the execution of the .class?
    4. At a more technical level (and the answer to this will probably depend on the answer to part 1), why can the use of char arrays improve performance?

    Thanks in advance
    Rich
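    For reference, the flag is an init-param of the JSP servlet in that web.xml, so turning it on looks something like the following (a sketch; the surrounding servlet definition already exists in the stock file and only the init-param is added):

        <servlet>
            <servlet-name>jsp</servlet-name>
            <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
            <init-param>
                <param-name>genStrAsCharArray</param-name>
                <param-value>true</param-value>
            </init-param>
        </servlet>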

    Read the article
