Search Results

Search found 6670 results on 267 pages for 'speed dial'.

Page 237/267

  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records, and it's updated frequently. When we sync, we need to copy from one database to another. I have attached the second database to the main one and run:

        insert into table select * from sync.table

    This is extremely slow; it takes about 10 minutes. I noticed that the journal file grows step by step. How can I speed this up?

    EDIT 1: I have indexes off, and I have the journal off. Using insert into table select * from sync.table, it still takes 10 minutes.

    EDIT 2: If I run a query like

        select id, invitem, invid, cost from inventory where itemtype = 1 order by invitem limit 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
          'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
          'serverid' INTEGER NOT NULL DEFAULT 0,
          'itemtype' INTEGER NOT NULL DEFAULT 0,
          'invitem' VARCHAR,
          'instock' FLOAT NOT NULL DEFAULT 0,
          'cost' FLOAT NOT NULL DEFAULT 0,
          'invid' VARCHAR,
          'categoryid' INTEGER DEFAULT 0,
          'pdacategoryid' INTEGER DEFAULT 0,
          'notes' VARCHAR,
          'threshold' INTEGER NOT NULL DEFAULT 0,
          'ordered' INTEGER NOT NULL DEFAULT 0,
          'supplier' VARCHAR,
          'markup' FLOAT NOT NULL DEFAULT 0,
          'taxfree' INTEGER NOT NULL DEFAULT 0,
          'dirty' INTEGER NOT NULL DEFAULT 1,
          'username' VARCHAR,
          'version' INTEGER NOT NULL DEFAULT 15
        )

    Indexes are created like:

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering: isn't insert into ... select * from the fastest built-in way to do a massive data copy?

    EDIT 3: SQLite is serverless, so please stop voting for a particular answer, because I'm sure that is not the answer.
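
    A hedged sketch of the usual knobs, assuming the sync database is attached as "sync" (a single INSERT ... SELECT is already one implicit transaction, so the gains come from the pragmas and from not maintaining indexes row by row during the copy):

        PRAGMA synchronous = OFF;          -- don't fsync on every journal write
        PRAGMA journal_mode = MEMORY;      -- keep the rollback journal out of the file system
        DROP INDEX idx_inventory_invitem;  -- rebuild indexes once, afterwards
        INSERT INTO inventory SELECT * FROM sync.inventory;
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);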

    Read the article

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background: I'm creating a large number of objects in a Core Data store, using NSOperations to speed things up. I've followed all the Core Data multithreading rules: I've got a single persistent store coordinator and a managed object context per thread that merges back into the main managed object context on save.

    The Problem: When more than one thread is running at once, I get this exception logged on save of my Core Data store:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried: My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code that just makes non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question: Do I need to worry about these exceptions? If I do need to get rid of them, any ideas on how?
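
    For reference, a minimal sketch of the merge-policy setup described above (Objective-C; the policy name is the standard Core Data constant, but whether it silences the logged exception is exactly what's in question here):

        // on each background thread's context, before saving:
        [context setMergePolicy:NSMergeByPropertyObjectTrumpMergePolicy];
        // on conflict, the in-memory changes win over the persisted version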

    Read the article

  • Making Firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple demo here: http://jsbin.com/idowi/34. The code is extremely simple:

        setTimeout("refreshImage()", 100);
        function refreshImage() {
            var date = new Date();
            var ticks = date.getTime();
            $('#image').attr('src', 'http://127.0.0.1:2723/signature?' + ticks.toString());
            setTimeout("refreshImage()", 100);
        }

    Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page so people can see themselves signing it within the page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of the current state of the signature pad and sends it to the browser. This has no problems in any browser (tested in IE7, IE8, and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript, but that just made things worse. For a bit more information: Firefox seems to be executing the JavaScript at the correct frame rate, as the requests arrive at the server at a constant speed, but the images are refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). I have also tried different image formats, all with the same results; the formats I've tried include bitmaps, PNGs, and GIFs (though GIFs caused a minor flicker problem in Chrome). Could it be possible that Firefox is somehow caching my images, causing a slight lag? I send these headers though:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0
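
    One hedged idea, sketched below: preload each frame into an offscreen Image and only swap the visible src once the frame has fully arrived, chaining the next request off the load event rather than a fixed timer. This keeps Firefox from repainting half-fetched images and stops requests piling up behind a slow fetch.

        function refreshImage() {
            var buffer = new Image();
            buffer.onload = function () {
                document.getElementById('image').src = buffer.src; // swap only when complete
                setTimeout(refreshImage, 100);                     // schedule the next frame
            };
            buffer.src = 'http://127.0.0.1:2723/signature?' + new Date().getTime();
        }
        refreshImage();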

    Read the article

  • How does git fetch the commits associated with a file?

    - by liadan
    I'm writing a simple parser of the .git/* files. I've covered almost everything: objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300MB repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:

        1. fetch the last commit
        2. find the file in it by:
           - fetching the parent tree
           - finding the tree inside it
           - repeating recursively until I get to the file
           - additionally, checking the hash of each subfolder on the way to the file; if one of them
             is the same as in the commit before, I assume the file was not changed (because its
             parent dir didn't change)
        3. store the hash of the file and fetch the parent commit
        4. find the file again and check whether its hash changed
        5. if it did, then the original commit (i.e. the one before the parent) changed the file

    And I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take as long as 3 minutes (for a 300MB pack). Is there any way to speed it up? I've tried to avoid loading such large objects into memory, but right now I don't see any other way. And even then, the initial memory load will take forever :( Greets and thanks for any help!
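
    For comparison, this path-limited history walk is what git itself exposes through rev-list; the command below is the behaviour being reimplemented here:

        # all commits that touched the file, newest first
        git rev-list HEAD -- some/deep/inside/file

    Internally, git speeds up the same walk by comparing tree-entry hashes top-down, much as described above, but it works against memory-mapped pack indexes instead of re-reading objects for every commit.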

    Read the article

  • Optimizing a math computation (multiplication and summation)

    - by wiso
    Suppose you want to compute the sum of the squares of the differences of items: $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2$. Given the input std::vector<double> xs and the output sum2, the simplest code is:

        double sum2 = 0.;
        double prev = xs[0];
        for (vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (prev - (*i)) * (prev - (*i)); // only one '-' with compiler optimization
            prev = (*i);
        }

    I hope the compiler does the optimization mentioned in the comment above. If N is the length of xs, you have N-1 multiplications and 2N-3 sums (where "sums" means + or -). Now suppose you already know this value: $\mathrm{sum} = x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2$. Expanding the binomial square: $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2 = \mathrm{sum} - 2\sum_{i=1}^{N-1} x_i x_{i+1}$, so the code becomes:

        double sum2 = 0.;
        double prev = xs[0];
        for (vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (*i) * prev;
            prev = (*i);
        }
        sum2 = -sum2 * 2. + sum;

    Here I have N multiplications and N-1 additions. In my case N is about 100. Well, compiling with g++ -O2, I got no speedup (I tried calling the inlined function 2M times). Why?
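
    Spelled out, the expansion behind the rewrite (pure algebra, reconstructed from the definitions above):

        \sum_{i=1}^{N-1}(x_i - x_{i+1})^2
          = \sum_{i=1}^{N-1} x_i^2 \;-\; 2\sum_{i=1}^{N-1} x_i x_{i+1} \;+\; \sum_{i=2}^{N} x_i^2
          = \underbrace{x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2}_{\mathrm{sum}} \;-\; 2\sum_{i=1}^{N-1} x_i x_{i+1}

    Note that even with sum precomputed, both loops still do one multiplication per element and a comparable number of additions, so the loop-carried dependency and memory traffic dominate either way - a plausible reason -O2 shows no measurable difference.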

    Read the article

  • Using unions to simplify casts

    - by Steven Lu
    I realize that what I am trying to do isn't safe, but I am just doing some testing and image processing, so my focus here is on speed. Right now this struct gives me the corresponding bytes for a 32-bit pixel value:

        struct Pixel {
            unsigned char b, g, r, a;
        };

    I want to check whether I have a pixel that is under a certain value (e.g. r, g, b <= 0x10). I figured I'd just conditionally test the bitwise AND of the pixel's bits with 0x00E0E0E0 (I could have the wrong endianness here) to find the dark pixels. Rather than using this ugly mess to get the 32-bit unsigned int value:

        (*((uint32_t*)&pixel))

    I figured there should be a way to set things up so I can just use pixel.i, while keeping the ability to reference the green byte as pixel.g. Can I do this? This won't work:

        struct Pixel {
            unsigned char b, g, r, a;
        };
        union Pixel_u {
            Pixel p;
            uint32_t bits;
        };

    because I would need to edit my existing code to say pixel.p.g to get the green color byte. The same happens if I do this:

        union Pixel {
            unsigned char c[4];
            uint32_t bits;
        };

    This would work too, but I'd still need to change everything to index into c, which is a bit ugly - though I can make it work with a macro if I really need to.
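
    A minimal sketch of the usual way out, assuming your compiler accepts anonymous structs inside unions (GCC, Clang and MSVC all do, though it's an extension in C++, and type-punning through a union is formally undefined behaviour there - fitting the "unsafe but fast" framing above):

        #include <cstdint>

        union Pixel {
            struct { unsigned char b, g, r, a; };  // anonymous: members are accessed directly
            std::uint32_t bits;
        };

        // existing code keeps compiling unchanged:
        //   Pixel p;
        //   p.bits = 0x00E0E0E0u;   // whole-pixel access
        //   unsigned char g = p.g;  // per-channel access, no .p. or c[] indirection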

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100MB which is full of arrays (only arrays). I've made a script that includes this file (for processing). First it exhausted the default XAMPP 128MB memory limit; I've raised it to 1024MB, but it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include and an echo (so I'd know when it finishes executing), and it does the same thing (takes forever). I've also tried to run the 100MB file on its own - same thing again. A 10MB file takes forever as well, but a similar 1MB file is read and executed almost instantly, so the problem must be more than just the file size. I was avoiding using C++ for a simple project like this and would rather not: PHP is easier for me, and the task doesn't need the added speed it would get from C++. But if I have no luck solving this problem, I guess I'll have to.

    EDIT: Reasons for not using a database:

        1. Whoever made this didn't use a database, and it will be pretty hard to store it in an organized database unless I can first do something with it - like just reading it, copying parts of it, or getting it into memory.
        2. I don't have experience working with databases, as pretty much all the PHP I've ever done didn't need large amounts of stored data - 50KB at best. If I were planning a big project or huge chunks of data like this one, I definitely would use one, but I didn't make this mess to start with and now I have to undo it.
        3. The logic of storing a small portion of data like 10MB on the hard drive, when every computer now has pretty much enough RAM to fit the whole OS in it, is incomprehensible to me unless someone gives a good explanation. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project: this is the only file that will be accessed at a given time. It isn't even for a website; it's to run a few times and be done with.
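
    One hedged workaround, sketched with hypothetical file names: pay the 100MB compile cost exactly once by converting the array file into a data format, then parse that on later runs - json_decode reads data, whereas include has to compile 100MB of PHP source every time.

        <?php
        // one-time conversion (slow, but only ever runs once)
        $data = include 'huge_arrays.php';
        file_put_contents('huge_arrays.json', json_encode($data));

        // every later run: parse data instead of compiling source
        $data = json_decode(file_get_contents('huge_arrays.json'), true);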

    Read the article

  • Processor, OS: 32-bit, 64-bit

    - by Sandbox
    I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#. I am confused about 32-bit and 64-bit: I've heard about 32-bit OSes and 32-bit processors, and that these determine the maximum memory a program can have. I also wonder how this affects the speed of a program. There are a lot more questions that keep coming to mind. I tried to go through some computer organization and architecture books, but either I am too dumb to understand what is written there, or the writers assume the reader has a CS background. Can someone explain these things to me in plain, simple English, or point me to something that does?

    EDIT: I have read things like "in 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much, much more". I want to know WHY for all such things.

    BOUNTY: The answers below are really good, especially the one by Martin. But I am looking for a thorough explanation, in plain simple English.
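
    The 4GB figure, at least, falls straight out of pointer width - a worked line of arithmetic:

        2^{32} \text{ distinct addresses} \times 1 \text{ byte per address} = 4\,294\,967\,296 \text{ bytes} = 4\ \text{GiB}

    A 32-bit register simply cannot name more byte addresses than that, so a 32-bit process cannot point at more than 4GiB without paging tricks; with 64-bit pointers the same arithmetic gives $2^{64}$ bytes (16 EiB) of nameable space.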

    Read the article

  • PHP / phpDoc - @return an instance of $this class?

    - by searbe
    How do I mark a method as "returns an instance of the current class" in my phpDoc? In the following example, my IDE (NetBeans) will see that setSomething always returns a foo object. But that's not true if I extend the class - it'll return $this, which in the second example is a bar object, not a foo object.

        class foo {
            protected $_value = null;

            /**
             * Set something
             *
             * @param string $value the value
             * @return foo
             */
            public function setSomething($value) {
                $this->_value = $value;
                return $this;
            }
        }

        $foo = new foo();
        $out = $foo->setSomething();

    So far so good - setSomething returns a foo. But in the following example, it returns a bar:

        class bar extends foo {
            public function someOtherMethod() {}
        }

        $bar = new bar();
        $out = $bar->setSomething();
        $out->someOtherMethod(); // <-- Here, NetBeans will think $out is a foo,
                                 //     so it doesn't see this other method in
                                 //     $out's code completion

    It'd be great to solve this, as code completion is a massive speed boost for me. Anyone got a clever trick, or even better, a proper way to document this with phpDoc?
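
    A minimal sketch of the usual convention (an assumption to verify: support depends on the phpDoc/IDE version - "$this", and later "static", are the spellings tools learned to treat as "an instance of the called class", so completion keeps working on bar):

        /**
         * Set something
         *
         * @param  string $value the value
         * @return $this
         */
        public function setSomething($value) {
            $this->_value = $value;
            return $this;
        }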

    Read the article

  • What is the best (Windows) program launcher?

    - by AR
    One of the biggest general productivity boosters I've used is a good program launcher. I was a long-time user of SlickRun, and I've tried a few others. My current favorite is Executor - by far the best I've used. Other options:

        - Executor: my current favorite
        - Vista Start Menu: pretty good, actually, but Executor is similar (binds to Win+Z) and much more flexible
        - Quicksilver: for Macs only, but it seems to be the gold standard against which most other launchers are measured
        - Google Desktop: press Ctrl+Ctrl and it's a quick launcher!
        - AutoHotKey: much, much more than just a launcher - more than I need, really
        - SlickRun: simple and unobtrusive
        - Launchy: seems to be the launcher of choice for many StackOverflow users :)
        - Colibri: "Type Ahead - Information at the tip of your wings". Quite a cool concept.
        - Many, many others. Scott Hanselman outlines some more here.

    I realize that everyone will have their own preferences, but the question is: is there anything that really stands out in terms of speed, features, and especially productivity increase?

    Read the article

  • Is there a way in VS2008 (C#) to see all the possible exception types that can originate from a method?

    - by Matt
    Is there a way in the VS2008 IDE for C# to see all the possible exception types that could originate from a method call, or even from an entire try-catch block? I know that IntelliSense and the Object Browser tell me which exceptions a method can throw, but is there another way than using the Object Browser every time - something more accessible while coding? Furthermore, I don't think IntelliSense or the Object Browser do anything more than read the XML code comments. Shouldn't it be possible to scan a class's source and find all the exception types that can be thrown? (Forget pathing based on method input; just scan the code for exception types.) Am I wrong? Extending this idea, you should be able to hover over the 'try' or 'catch' keywords and get a tooltip with all the types of exceptions that can be thrown. My question boils down to: does a VS2008 add-in like this exist? Does VS2010 do this, perhaps? If not, could you implement it the way I've described, by scanning the class code for thrown exception types, and would people find it useful? Exceptions bubble up, so you'd have to scan every bit of code in every method call, which I guess could be impractical - though I suppose you could build an index the first time and increase your speed that way. (It might be a cool little project....)
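
    To make the XML-comment point concrete, a minimal illustration (hypothetical method): the doc comment below is the only exception information the IDE can surface, and nothing forces it to be complete - the undocumented OverflowException is invisible to tooling that merely reads comments.

        /// <summary>Parses the input.</summary>
        /// <exception cref="ArgumentNullException">input is null.</exception>
        /// <exception cref="FormatException">input is not a valid number.</exception>
        public int ParseValue(string input)
        {
            if (input == null) throw new ArgumentNullException("input");
            return int.Parse(input);  // can also throw OverflowException - not documented above
        }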

    Read the article

  • In sync query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it.

    I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed across other parts).

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time!

    Everything was running synchronously; there were no async calls, and there was only a single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special; they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the changes in the method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache / query-optimization effects, because the queries ran synchronously, in a single thread, and that is far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory- and speed-efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries with various calls, I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64-bit, is to use a mechanism that lets me have a large array spanning multiple smaller memory fragments. I don't want one allocation per element, as that is very memory-inefficient, so the plan is to write a class that overrides the [] operator and selects the appropriate element based on the index. Is there already a decent class out there that does this, or am I better off rolling my own?

    From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2GB. Now, assuming I have 2GB installed, and various other processes and services are hogging about 400MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.
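
    For reference, a minimal sketch of the roll-your-own option (my construction, not a standard class): fixed-size chunks allocated independently, so no single contiguous block is ever requested from the fragmented heap, with operator[] doing the divide/modulo routing described above.

        #include <cstddef>
        #include <vector>

        template <typename T, std::size_t ChunkSize = 65536>
        class ChunkedArray {
            std::vector<std::vector<T>*> chunks_;
            std::size_t size_;
        public:
            ChunkedArray() : size_(0) {}
            ~ChunkedArray() { for (std::size_t i = 0; i < chunks_.size(); ++i) delete chunks_[i]; }
            void push_back(const T& v) {
                if (size_ % ChunkSize == 0) {             // current chunk full: start a new one
                    chunks_.push_back(new std::vector<T>());
                    chunks_.back()->reserve(ChunkSize);   // one modest allocation per chunk
                }
                chunks_.back()->push_back(v);
                ++size_;
            }
            T& operator[](std::size_t i) { return (*chunks_[i / ChunkSize])[i % ChunkSize]; }
            std::size_t size() const { return size_; }
        };

    Worth noting: std::deque<T> already uses a similar paged layout internally, so it's worth trying before writing anything custom.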

    Read the article

  • .NET remoting - Better solution to wait for a service to initialize?

    - by CitizenInsane
    Context: I have a client application (which I cannot modify - I only have the binary) that needs to run external commands from time to time, and those commands depend on a resource which is very long to initialize (about 20s). I therefore decided to initialize this resource once and for all in a "CommandServer.exe" application (a single instance in the system tray) and have my client application call an intermediate "ExecuteCommand.exe" program that uses .NET remoting to perform the operation on the server. "ExecuteCommand.exe" is in charge of starting the server on the first call and then leaving it alive to speed up further commands.

    The service:

        public interface IMyService
        {
            void ExecuteCommand(string[] args);
        }

    The "CommandServer.exe" (using WindowsFormsApplicationBase for single-instance management, plus a user-friendly splash screen during resource initialization):

        private void onStartupFirstInstance(object sender, StartupEventArgs e)
        {
            // Register communication channel
            channel = new TcpServerChannel("CommandServerChannel", 8234);
            ChannelServices.RegisterChannel(channel, false);

            // Register service
            var resource = veryLongToInitialize();
            service = new MyServiceImpl(resource);
            RemotingServices.Marshal(service, "CommandServer");

            // Create icon in system tray
            notifyIcon = new NotifyIcon();
            ...
        }

    The intermediate "ExecuteCommand.exe" (note: the interface method is ExecuteCommand, so that name is used consistently below):

        static void Main(string[] args)
        {
            startCommandServerIfRequired();
            var channel = new TcpClientChannel();
            ChannelServices.RegisterChannel(channel, false);
            var service = (IMyService)Activator.GetObject(typeof(IMyService),
                "tcp://localhost:8234/CommandServer");
            service.ExecuteCommand(args);
        }

    Problem: As the server is very slow to start (about 20s to initialize the required resource), "ExecuteCommand.exe" fails on the service.ExecuteCommand(args) line because the server is not yet available.

    Question: Is there an elegant way to tune the delay before receiving "service not available" when calling service.ExecuteCommand?

    NB1: Currently I'm working around the issue by adding a mutex in the server to signal complete initialization, and having "ExecuteCommand.exe" wait for this mutex before calling service.ExecuteCommand.
    NB2: I have no background with .NET remoting, nor with WCF, its recommended replacement. (I chose .NET remoting because it looked easier to set up for this one-shot issue of running external commands.)
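
    One hedged alternative to the mutex, sketched below (my construction; the exact exception surfaced while the server is still booting may vary - e.g. a SocketException possibly wrapped by remoting - so treat the catch clause as an assumption): simply retry the first call with a deadline.

        static void CallWithRetry(IMyService service, string[] args)
        {
            DateTime deadline = DateTime.Now.AddSeconds(30);   // covers the ~20s startup
            while (true)
            {
                try
                {
                    service.ExecuteCommand(args);
                    return;                                    // server was ready
                }
                catch (System.Net.Sockets.SocketException)     // assumed "still starting" failure
                {
                    if (DateTime.Now > deadline) throw;        // give up eventually
                    System.Threading.Thread.Sleep(500);        // server still initializing
                }
            }
        }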

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is:

        Open connection to DB
        for record in all_records:
            if record contains certain data:
                if record is NOT in table A:                      // 1 SELECT
                    insert record information into table A and B  // 2 INSERTs
        Close connection to DB

    So for each record, that is one SELECT (select field from table where field=XXX) and two INSERTs. This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5M will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details - please let me know! I'm also no SQL expert, so feel free to point out the obvious.

    I thought about:

        - putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-nothing, or whether this affects performance)
        - using INSERT X WHERE NOT EXISTS Y
        - LOAD DATA INFILE (but that would require I create a (possibly) large temp file)
        - dropping indexes so they aren't recalculated (I read this somewhere; hopefully someone can confirm)

        mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
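
    A minimal sketch of the check-and-insert collapsed into one statement (hypothetical column names; assumes a UNIQUE key on the identifying column, which is what lets MySQL do the existence check itself):

        -- duplicate keys are skipped instead of erroring, so the
        -- separate SELECT round trip disappears entirely
        INSERT IGNORE INTO tableA (keycol, field1, field2)
        VALUES ('key-value', 'data1', 'data2');

    INSERT IGNORE is available on MySQL 4.1, and batching many rows per statement (multi-row VALUES lists) inside one transaction compounds the saving.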

    Read the article

  • jQuery trigger custom event synchronously?

    - by Miguel Angelo
    I am using the jQuery trigger method to call an event, but it behaves inconsistently: sometimes it calls the event, sometimes it does not.

        <a href="#" onclick="
            $(this).trigger('custom-event');
            window.location.href = 'url';
            return false;
        ">text</a>

    The custom-event has lots of listeners added to it. It is as if the trigger method were not synchronous, allowing window.location.href to be changed before the events have executed. And when window.location.href is changed, a navigation occurs, interrupting everything. How can I trigger events synchronously? Using jQuery 1.8.1. Thanks!

    EDIT: I have found that the event, when called, has a stack trace like this:

        jQuery.fx.tick (jquery-1.8.1.js:9021)
        tick (jquery-1.8.1.js:8499)
        jQuery.Callbacks.self.fireWith (jquery-1.8.1.js:1082)
        jQuery.Callbacks.fire (jquery-1.8.1.js:974)
        jQuery.speed.opt.complete (jquery-1.8.1.js:8991)
        $.customEvent (myfile.js:28)

    This proves that the jQuery trigger method is asynchronous. Oohhh my... =\
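
    A hedged reading of that stack trace, with a sketch: .trigger() itself runs listeners synchronously, but a listener that starts an animation gets its completion callback fired later from jQuery.fx.tick - and the navigation kills it. Deferring the navigation until queued effects finish would sidestep that (hypothetical selector and element names):

        $('a.trigger-link').on('click', function (e) {
            e.preventDefault();
            $(this).trigger('custom-event');       // listeners run now, synchronously
            $('#animated').promise().done(function () {
                window.location.href = 'url';      // navigate only after queued effects end
            });
        });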

    Read the article

  • Speeding up a group-by-date query on a big table in Postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indexes, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried indexing on the date expression:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
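
    One hedged next step, sketched below. Since ~17M of the 20M rows fall inside 2010, the planner is arguably right to stay sequential here; but for selective ranges, comparing the raw column against date bounds (instead of wrapping it in DATE()) keeps the predicate usable by a plain index:

        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp <  '2011-01-01'
        GROUP BY day;

    If the report runs repeatedly, pre-aggregating into a daily summary table avoids rescanning 17M rows each time.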

    Read the article

  • jQuery transition animation

    - by kk-dev11
    This program randomly selects two employees from the Employees array (a JSON object); winnerPos is already defined. For a better user experience, I wrote these functions to change pictures one by one; the animation stops when the randomly selected person is shown on the screen. The slideThrough function is triggered when the start button is pressed.

        function slideThrough() {
            counter = 0;
            start = true;
            clearInterval(picInterval);
            picInterval = setInterval(function () {
                changePicture();
            }, 500);
        }

        function changePicture() {
            if (start) {
                if (counter > winnerPos) {
                    setWinner();
                    start = false;
                    killInterval();
                } else {
                    var employee = Employees[counter];
                    winnerPic.fadeOut(200, function () {
                        this.src = 'img/' + employee.image;
                        winnerName.html(employee.name);
                        $(this).fadeIn(300);
                    });
                    counter++;
                }
            }
        }

    The problem is that the animation doesn't run smoothly. The first time it works, but not perfectly; the second time the transitions become irregular - the speed varies, and the fadeIn/fadeOut differs from picture to picture. Could anyone help me fine-tune the transition?
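
    A hedged diagnosis with a sketch: each step needs 200ms + 300ms of animation - exactly the 500ms interval - so timer ticks and fade callbacks drift into each other, and repeated starts can stack up. Driving the next step from the fadeIn completion callback removes the interval entirely (assumes the same globals as above):

        function slideThrough() {
            counter = 0;
            changePicture();
        }

        function changePicture() {
            if (counter > winnerPos) {
                setWinner();                            // stop: winner is on screen
                return;
            }
            var employee = Employees[counter++];
            winnerPic.fadeOut(200, function () {
                this.src = 'img/' + employee.image;
                winnerName.html(employee.name);
                $(this).fadeIn(300, changePicture);     // next step starts only when this one ends
            });
        }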

    Read the article

  • PHP: How to know if a user is currently downloading a file?

    - by metrobalderas
    I'm developing a quick rapidshare-like site where users can download files. First I created a quick test, setting headers and using readfile(), but then I found in the comments section that there's a way to limit the speed of the download, which is great. Here's the code:

        $local_file = 'file.zip';
        $download_file = 'name.zip';

        // set the download rate limit (=> 20.5 kb/s)
        $download_rate = 20.5;
        if (file_exists($local_file) && is_file($local_file)) {
            header('Cache-control: private');
            header('Content-Type: application/octet-stream');
            header('Content-Length: ' . filesize($local_file));
            header('Content-Disposition: filename=' . $download_file);
            flush();
            $file = fopen($local_file, "r");
            while (!feof($file)) {
                // send the current file part to the browser
                print fread($file, round($download_rate * 1024));
                // flush the content to the browser
                flush();
                // sleep one second
                sleep(1);
            }
            fclose($file);
        } else {
            die('Error: The file ' . $local_file . ' does not exist!');
        }

    But now my question is: how do I limit the number of simultaneous downloads? How can I check whether there's still a connection with some user's IP? Thanks.
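
    A minimal sketch of one way to cap concurrent downloads per IP (my construction, with hypothetical paths; needs PHP 5.3+ for the closure): hold one lock file per active transfer and release it in a shutdown handler, which also runs when the client aborts mid-stream.

        $ip    = $_SERVER['REMOTE_ADDR'];
        $limit = 2;                                  // max simultaneous downloads per IP
        $dir   = '/tmp/dl_locks/';
        @mkdir($dir, 0700, true);

        if (count(glob($dir . $ip . '_*')) >= $limit) {
            die('Error: too many simultaneous downloads from your IP.');
        }
        $slot = $dir . $ip . '_' . getmypid();
        touch($slot);                                // claim a slot
        register_shutdown_function(function () use ($slot) {
            unlink($slot);                           // free the slot even on aborted transfers
        });
        // ... stream the file as above ...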

    Read the article

  • How does a Java compiler resolve a non-imported name?

    - by gexicide
    Suppose I use a type X in my Java compilation unit, where X comes from package foo.bar and is neither defined in the compilation unit itself nor directly imported. How does a Java compiler resolve X efficiently? There are a few places X could reside:

        1. X might be imported via a star import, a.b.*
        2. X might reside in the same package as the compilation unit
        3. X might be a language type, i.e. reside in java.lang

    The problem I see is especially (2). Since X might be a package-private type, X isn't even required to reside in a compilation unit named X.java. Thus the compiler must look at every entry on the class path, search for classes in package foo.bar, and read every class that is in foo.bar to check whether X is declared in it. That sounds very expensive. Especially when I compile only a single file, the compiler has to read dozens of class files only to find one type X. If I use a lot of star imports, this procedure has to be repeated for a lot of types (although class files won't be read twice, of course).

    So is it advisable to also import types from the same package, to speed up the compilation process? Or is there a faster method for resolving an unimported type X that I was not able to find?
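
    To make the three candidate sources concrete, a small illustration (Helper is a hypothetical class assumed to live somewhere in foo/bar - finding it is exactly the compiler's job here):

        package foo.bar;

        import java.util.*;        // source 1: a star import could supply the name

        public class Demo {
            List<String> items;    // resolved via the star import (java.util.List)
            String name;           // resolved via the implicit java.lang import
            Helper helper;         // resolved via the enclosing package foo.bar
        }

    Sources 1 and 3 name specific packages, so only those directories or jars need checking; it's source 2, the enclosing package, that forces the broader class-path scan described above.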

    Read the article

  • 2 basic but interesting questions about .NET

    - by b-gen-jack-o-neill
    Hi. When I first saw C#, I thought it must be some kind of joke. I was starting out with programming in C, but in C# you could just drag and drop objects and write event code for them. It was so simple.

    Now, I still like C the most, because I am very attracted to basic low-level operations, and C is just the next level up from assembler, with a few basic routines, so I like it very much. Even more so because I write little apps for microcontrollers. But yesterday I wrote a very simple control program for my microcontroller-based LED cube in asm, and I needed some way to simply create animation sequences for the cube. So I remembered C#. I have practically NO C# skills, but I still created a simple GUI program to make animation sequences in about an hour, just with the help of Google and the embedded function descriptions in C#.

    So, to get to the point: is there any reason other than top speed to use any language other than C#? I mean, it is so effective. I know Java is somewhat similar, but I expect C# to be more effective on Windows, since it comes directly from Microsoft. The second question is: what is the advantage of compiling into CIL and then running under the CLR, rather than compiling directly into machine code? I know portability is one, but since C# is mainly for Windows, wouldn't it be more powerful to compile it directly? Thanks.

    Read the article

  • What guarantees are there on the run-time complexity (Big-O) of LINQ methods?

    - by tzaman
    I've recently started using LINQ quite a bit, and I haven't really seen any mention of run-time complexity for any of the LINQ methods. Obviously, there are many factors at play here, so let's restrict the discussion to the plain IEnumerable LINQ-to-Objects provider. Further, let's assume that any Func passed in as a selector / mutator / etc. is a cheap O(1) operation.

    It seems obvious that all the single-pass operations (Select, Where, Count, Take/Skip, Any/All, etc.) will be O(n), since they only need to walk the sequence once; although even this is subject to laziness.

    Things are murkier for the more complex operations. The set-like operators (Union, Distinct, Except, etc.) work using GetHashCode by default (afaik), so it seems reasonable to assume they're using a hash table internally, making these operations O(n) as well, in general. What about the versions that use an IEqualityComparer?

    OrderBy would need a sort, so most likely we're looking at O(n log n). What if it's already sorted? How about if I say OrderBy().ThenBy() and provide the same key to both? I could see GroupBy (and Join) using either sorting or hashing. Which is it?

    Contains would be O(n) on a List, but O(1) on a HashSet - does LINQ check the underlying container to see if it can speed things up?

    And the real question - so far, I've been taking it on faith that the operations are performant. However, can I bank on that? STL containers, for example, clearly specify the complexity of every operation. Are there any similar guarantees on LINQ performance in the .NET library specification?
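
    On the Contains point specifically, a small sketch (based on the documented behaviour of the comparer-less Enumerable.Contains overload, which defers to ICollection<T>.Contains when the source implements it - worth verifying against your framework version):

        var list = Enumerable.Range(0, 1000000).ToList();
        var set  = new HashSet<int>(list);

        bool inList = list.Contains(999999);  // delegates to List<int>.Contains: O(n) scan
        bool inSet  = set.Contains(999999);   // delegates to HashSet<int>.Contains: O(1) lookup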

    Read the article

  • jQuery jPicker colorpicker: How to convert from 8 digit (w Transparency) to standard 6 digit hex?

    - by Scott B
    I've got jPicker installed and running fine; it's a pretty sweet script. However, the value it returns to my input box is 8-digit hex, and I need it to return 6-digit hex. Rather than post-processing the 8 digits into 6, I'd rather just hack into the script and force 6 digits. Alternately, I'd be OK with hooking into the change event of the jPicker to intercept the value it's sending to the input element and doing the conversion there, just before it updates the input with the hex. Here's my code:

        $(function() {
            $('#myThemeColor').jPicker(); /* Bind jPicker to myThemeColor input */
            $("#carousel").jCarouselLite({
                btnNext: ".next",
                btnPrev: ".prev",
                visible: 6,
                speed: 700
            });
        });

    And here's the code I'm working with to intercept the myThemeColor input's change event, but it's not firing at all:

        $('#myThemeColor').change(function() {
            alert($(this).val()); /* does not fire on any action */
            if ($(this).val().length == 8) {
                $(this).val(function(i, v) {
                    return v.substring(0, 6);
                });
            }
        });
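
    One hedged guess at why the handler stays silent, with a sketch: setting an input's value from script does not fire the DOM change event, so jPicker's programmatic updates never reach .change(). Polling the input sidesteps that without patching the plugin or relying on its callback API:

        setInterval(function () {
            var $input = $('#myThemeColor');
            var v = $input.val();
            if (v.length === 8) {
                $input.val(v.substring(0, 6));  // drop the 2-digit alpha suffix
            }
        }, 250);  // cheap check, a few times per second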

    Read the article

  • need help speeding up tag cloud filter for IE

    - by rod
    Hi all, any ideas on how to speed this up in IE? (It performs decently in Firefox, but is almost unusable in IE.) Basically, it's a tag cloud with a filter text box to filter the cloud.

        <html>
        <head>
        <script type="text/javascript" src="jquery-1.3.2.min.js"></script>
        <script type="text/javascript">
        $(document).ready(function(){
            $('#tagFilter').keyup(function(e) {
                if (e.keyCode==8) {
                    $('#cloudDiv > span').show();
                }
                $('#cloudDiv > span').not('span:contains(' + $(this).val() + ')').hide();
            });
        });
        </script>
        </head>
        <body>
        <input type="text" id="tagFilter" />
        <div id="cloudDiv" style="height: 200px; width: 400px; overflow: auto;">
        <script type="text/javascript">
        for (i=0;i<=1300;i++) {
            document.write('<span><a href="#">Test ' + i + '</a>&nbsp;</span>');
        }
        </script>
        </div>
        </body>
        </html>

    thanks, rodchar
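
    A minimal sketch of the usual IE fix (my construction): query the ~1300 spans once, match with plain indexOf instead of re-running the :contains selector on every keystroke, and batch the DOM writes - old IE is mostly killed by the per-element show/hide churn:

        $(document).ready(function () {
            var spans = $('#cloudDiv > span');
            $('#tagFilter').keyup(function () {
                var term = $(this).val();
                var show = [], hide = [];
                spans.each(function () {
                    ($(this).text().indexOf(term) >= 0 ? show : hide).push(this);
                });
                $(show).show();   // two batched updates instead of
                $(hide).hide();   // thousands of individual ones
            });
        });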

    Read the article

  • How to make smooth movements in OpenGL?

    - by thyrgle
    So this is kind of on topic with my other OpenGL question (not my OpenGL ES question, but OpenGL the desktop version). If someone presses a key to move a square, how do you make the square's movement natural and less jumpy, while keeping the same speed it has now? This is my code for the glutKeyboardFunc() function:

        void handleKeypress(unsigned char key, int x, int y) {
            if (key == 'w') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'd') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 's') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'a') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
        }

    I'm sorry if this doesn't quite make sense; I'll try to rephrase it if it doesn't.
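
    A minimal sketch of the standard fix (assumes GLUT; moveSquare is a hypothetical helper standing in for the vertex-update loops above): track which keys are held and apply small steps from a timer, so motion is continuous instead of one jump per autorepeated key event.

        static bool keys[256];

        void keyDown(unsigned char key, int, int) { keys[key] = true;  }
        void keyUp(unsigned char key, int, int)   { keys[key] = false; }

        void update(int) {
            const float step = 0.25f;                 // smaller step, applied more often
            if (keys['w']) moveSquare(0.0f,  step);
            if (keys['s']) moveSquare(0.0f, -step);
            if (keys['d']) moveSquare( step, 0.0f);
            if (keys['a']) moveSquare(-step, 0.0f);
            glutPostRedisplay();
            glutTimerFunc(16, update, 0);             // re-arm: ~60 updates per second
        }

        // in main(): glutKeyboardFunc(keyDown); glutKeyboardUpFunc(keyUp);
        //            glutTimerFunc(16, update, 0);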

    Read the article
