Search Results

Search found 6011 results on 241 pages for 'fast fish'.

Page 197/241 | < Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >

  • HttpURLConnection inside a loop

    - by Carlos Garces
    Hi! I'm trying to connect to a URL that I know will exist at some point, but I don't know when. I don't have access to the server, so I can't change anything on it to send me an event. The current code is this:

      URL url = new URL(urlName);
      for(int j = 0; j < POOLING && disconnected; j++){
          HttpURLConnection connection = (HttpURLConnection) url.openConnection();
          int status = connection.getResponseCode();
          if(status == HttpURLConnection.HTTP_OK || status == HttpURLConnection.HTTP_NOT_MODIFIED){
              //Some work
          }else{
              //wait 3s
              Thread.sleep(3000);
          }
      }

    Java is not my strongest skill and I'm not sure whether this code is good from a performance point of view. Am I opening a new connection every 3 seconds, or is the connection reused? If I call disconnect() I can be sure no connections stay open across loop iterations, but will that hurt performance? Any suggestions? What is the fastest/best way to find out whether a URL exists?
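
    As a rough illustration, here is a minimal standalone sketch of the same polling idea, with a hypothetical MAX_ATTEMPTS limit in place of POOLING. Each url.openConnection() call hands back a fresh HttpURLConnection (any keep-alive reuse of the underlying socket is handled internally by the JDK), and disconnect() only releases that socket, so calling it before sleeping should not hurt a loop that waits 3 seconds between attempts:

      import java.io.IOException;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class UrlPoller {
          private static final int MAX_ATTEMPTS = 20;  // hypothetical limit
          private static final long DELAY_MS = 3000;   // the 3s wait from the question

          /** Polls urlName until it answers 200/304 or the attempt limit is reached. */
          public static boolean waitForUrl(String urlName) throws Exception {
              URL url = new URL(urlName);
              for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
                  HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                  try {
                      int status = connection.getResponseCode();
                      if (status == HttpURLConnection.HTTP_OK
                              || status == HttpURLConnection.HTTP_NOT_MODIFIED) {
                          return true;                  // the URL answers now
                      }
                  } catch (IOException notReachableYet) {
                      // server not up yet (connection refused, unknown host, ...): keep polling
                  } finally {
                      connection.disconnect();          // release the socket before sleeping
                  }
                  Thread.sleep(DELAY_MS);
              }
              return false;
          }
      }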

    Read the article

  • Does Firefox on OS X Lion make use of Full Page Zoom for the touchpad? How to customize behavior?

    - by Steven Lu
    I really like the smooth pinch-zoom in Safari with the touchpad, but two-finger scrolling in Firefox is so much better than the scrolling performance in Safari. So I'd really like to use Firefox, but then I miss out on the two-finger double-tap to zoom to paragraph width and the smooth pinch-gesture zoom. What I'm wondering is whether it is possible to write a Firefox extension to improve the update rate of the full-page zoom that is already triggered by the touchpad pinch gesture. It seems to be programmed to snap to certain zoom levels: 100%, 120%, 150% (these are guesses of mine), but I think it would be great to get more control there and make it work more like the zoom in Safari. The two-finger double-tap on a paragraph or element to zoom to it would be really awesome as well. https://developer.mozilla.org/en/Full_page_zoom This seems to indicate (if "full page zoom" is what I think it is) that an extension can zoom to an arbitrary scale factor; what remains is to find out whether it is possible to hook the touchpad pinch gesture. Update: I have updated the toolkit.zoomManager.zoomValues option in about:config to include more zoom levels: .3,.5,.67,.8,.9,1,1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.1,1.2,1.33,1.5,1.7,2,2.4,3 Notice how I inserted a bunch of entries between 1 and 1.1. But it doesn't switch between them any faster (why would it?), so it's actually less usable than before because it can't respond fast enough. Clearly re-rendering the page at a different zoom level takes time, so for the zoom to be dynamic some kind of screen-capture-and-scale effect must be performed (which is what Safari does). I guess such a thing is probably doable, but I don't think I could pull it off. :-/

    Read the article

  • How to find Part Time development/IT work?

    - by Jonathan
    I've been working in the IT field for 10 years now. Originally trained as an engineer, I started out with C++ and have been a .NET specialist since the beta. Currently seconded to a major city and working freelance in the finance industry, I really feel like I've hit the glass ceiling. I've been contracting for 5 years now; the company politics, the frustration of not being promoted, and poor pay rises for excellent work during a decade of corporate cost cutting took their toll on my morale. Freelancing made all the difference, and I've had a very decorated career with good clients: what any engineering student could dream of when starting out. The problem is, it doesn't particularly make me happy. It's good work, and I enjoy the problem-solving aspects and having something to do each day, but there is always a large overhead of non-technical work and dealing with poor managers. I guess engineering was always a bit of a mistake that I made the best of, and ten years behind a computer hasn't done wonders for my health or eyesight. In a nutshell, I am in the process of retraining as a therapist and would like to open my own clinic. However, never having done this before, with fast-paced IT skills going out of date and the fact that my experience and skills are not transferable, I am a little worried. Any ideas how I can find part-time IT work while I build up my business? (It's incredibly hard to find freelance work that doesn't require long hours and overtime.) Or other ideas to make the transition easier, with a way to back out if it doesn't work financially or if my marketing skills aren't enough? I'd be interested to hear from people who have made a similar transition, successfully or unsuccessfully. Many thanks.

    Read the article

  • Why does output of fltk-config truncate arguments to gcc?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCons build system and the Fast Light Toolkit (FLTK) GUI. The SConstruct code that detects the presence of FLTK is:

      guienv = Environment(CPPFLAGS = '')
      guiconf = Configure(guienv)
      if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h', 'c'):
          print 'Did not find liblo for OSC, exiting!'
          Exit(1)
      if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H', 'c++'):
          print 'Did not find FLTK for the gui, exiting!'
          Exit(1)

    Unfortunately, on my (Gentoo Linux) system, and on many other Linux distributions, this can be quite troublesome because the package manager allows FLTK-1 and FLTK-2 to be installed simultaneously. I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs, which might be better than --ldflags) by adding them like so:

      guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
      guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())

    But this causes the test for liblo to fail! Looking in config.log shows how it failed:

      scons: Configure: Checking for C library lo...
      gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
      gcc: no input files
      scons: Configure: no

    How should this really be done? And to complete the answer, how do I remove the quotes from the result of os.popen('command').read()? EDIT: The real question here is why appending the output of fltk-config causes gcc not to receive the filename argument it is supposed to compile.

    Read the article

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok"). Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7. Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.) So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?

    Read the article

  • Eclipse on Windows doesn't start

    - by sap
    I usually do all my Java development on Linux; with the Fedora package manager, setting up a development environment is easy and fast. Now I have to start using Windows, but I've never used it for Java development and I'm having a few difficulties getting it set up. I downloaded and installed the Java 6 JDK (just the Standard Edition, not EE). Next I downloaded the Eclipse Classic package, which doesn't have an installer; you just unzip it and run it. I had to add the Java bin directory to the PATH variable, which I did. But when I start eclipse.exe I get this: http://img02.imagefra.me/img/img02/1/12/12/f_12c33ivd2m_c79c09f.jpg I already made a new environment variable called CLASSPATH and added the d:/java sdk/lib directory to it, but it's the same thing. Am I missing something? Thanks. UPDATE: I wrote the path to java.exe in the eclipse.ini file (pointing to jvm.dll didn't work) and now it just opens a console window for a few seconds and then closes (with no output). Also, launching it like java -jar plugins/org.eclipse.equinox.launcher_1.0.0.v20070208a.jar makes the VM run for about 1-2 seconds and then return, again with no output. UPDATE 2: I didn't know it was writing a log file; once I found and read it, it said I was loading 32-bit SWT libraries on a 64-bit VM, so I downloaded a 64-bit Eclipse build and it worked. I still had to use the .ini trick to say where the JVM is installed. Thanks a lot for the help.
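
    For reference, this is roughly what the relevant eclipse.ini entries look like when pointing Eclipse at a specific JVM (the path is only an example based on the install location mentioned above; -vm and its path have to sit on two separate lines and must appear before -vmargs):

      -vm
      D:\java sdk\bin\javaw.exe
      -vmargs
      ...existing VM arguments...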

    Read the article

  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is that I have quite a bit of training data. I show it over 100 examples of what a car looks like, then 100 examples of what a person looks like, then over 100 of what a dog looks like, and so on. That is a lot of training data! Currently it takes about one week to train the network, which is killing my progress, as I need to adjust and retrain. I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (16 cores with hyperthreading), so this should be fast, yet my processor usage is at only 5%. Are there any tricks to improve Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but I do not know a great deal about performance programming.
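
    Since the 100+ feature-detector networks are trained independently, one option worth trying (sketched below with plain java.util.concurrent, no Neuroph-specific API) is to train several of them in parallel on a thread pool; if each learning run is single-threaded, that alone would explain a 5% CPU figure on a 16-way machine. trainOneNetwork() is a hypothetical placeholder for whatever Neuroph training call the project already makes:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      public class ParallelTrainer {

          // Hypothetical placeholder: train one feature-detector network on its own data set.
          static void trainOneNetwork(int networkIndex) {
              // e.g. featureNetworks.get(networkIndex).learn(trainingSets.get(networkIndex));
          }

          public static void main(String[] args) throws Exception {
              int networkCount = 100;
              // Size the pool to the machine instead of leaving most cores idle.
              ExecutorService pool =
                      Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

              List<Future<?>> pending = new ArrayList<>();
              for (int i = 0; i < networkCount; i++) {
                  final int index = i;
                  pending.add(pool.submit(() -> trainOneNetwork(index)));
              }
              for (Future<?> f : pending) {
                  f.get();        // wait for completion and surface any training exception
              }
              pool.shutdown();
          }
      }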

    Read the article

  • How to trace the connection pool in a Java Web application - DBMS_APPLICATION_INFO

    - by Cleiton Garcia
    Hello, I need to improve traceability in a web application that usually runs under a fixed database user. The DBAs need fast access to information about the heavy users that are degrading the database. Five years ago I implemented a .NET ORM engine which logs the user and the server using the DBMS_APPLICATION_INFO package, via a wrapper above the connection manager with the following code: DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "',''); Each time a connection is taken from the pool, this call is executed so the information appears in V$SESSION. Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation for it? I found here a solution like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution that is integrated with the ORM: http://stackoverflow.com/questions/53379/using-dbmsapplicationinfo-with-jboss My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly on Tomcat, with plans to migrate to SAP NetWeaver Application Server next year. Thanks.
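
    As a rough, ORM-agnostic sketch of the same idea in Java: wrap the point where the application borrows a connection from the pool and issue the DBMS_APPLICATION_INFO call there over plain JDBC. getPooledConnection() and the user/appServerMachine values are placeholders for whatever the application already has; with Hibernate, the natural home for a call like this would be a custom ConnectionProvider or a decorated DataSource:

      import java.sql.CallableStatement;
      import java.sql.Connection;
      import java.sql.SQLException;

      public class TracedConnections {

          // Placeholder: look up the real pooled DataSource here.
          static Connection getPooledConnection() throws SQLException {
              throw new UnsupportedOperationException("wire this to the real connection pool");
          }

          /** Borrows a connection and tags the Oracle session so it shows up in V$SESSION. */
          public static Connection getTracedConnection(String user, String appServerMachine)
                  throws SQLException {
              Connection conn = getPooledConnection();
              try (CallableStatement cs =
                       conn.prepareCall("{ call DBMS_APPLICATION_INFO.SET_MODULE(?, ?) }")) {
                  cs.setString(1, user + " - " + appServerMachine); // becomes V$SESSION.MODULE
                  cs.setString(2, "");                              // action left empty, as in the .NET version
                  cs.execute();
              }
              return conn;
          }
      }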

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos in a test to find out whether ZTable uses fetch techniques or not. In the future we may migrate our smaller system to PostgreSQL; it currently uses "Table" components (like the BDE, but against an SQL-like server). Those tables use real server-side cursors, a "window" of N records, so lookups are very fast: Locate/Lookup runs on the server and only those N records are refreshed, no matter how many records are in the lookup table. PostgreSQL, as far as I know, uses fetch techniques, so I tested it with a table (id int, name varchar(100)) and 1 million records (and I also tried this with MySQL). The adapter is Zeos. The columns below are the ID to find, seconds to find it, and allocated memory in bytes on the client:

      MySQL
        500000    2.761    113 196 344
        1000000   3.214    225 471 232
        313800    0.437    225 471 232
        328066    0.468    225 471 232
        276374    0.390    225 471 232
        905984    1.264    225 471 232
        260253    0.359    225 471 232

      PGSQL
        500000    3.042    113 188 184
        1000000   3.744    225 463 064
        313800    0.436    225 463 064
        328066    0.452    225 463 064
        276374    0.375    225 463 064
        905984    1.295    225 463 064
        260253    0.359    225 463 064
        142023    0.203    225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB memory usage, and searches are a little slow, depending on where the record we must find sits. I want to ask a few more things: a.) Does PgDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds? b.) Or can the PostgreSQL ODBC driver help with this problem via ADO (as far as I know ADO can use server-side cursors)? c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not (client memory usage included)? d.) If there is no way to avoid the fetch problem with lookups, what can we do? Server-side joins, and custom code for lookup field changes without a real Lookup? Thanks for your help: dd

    Read the article

  • Building a News-feed that comprises posts "created by user's connections" && "on the topics user is following"

    - by aklin81
    I am working on a Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises only those questions that have been posted by his connections and tagged with topics that he is following (his expertise topics). I am unsure which database data model would be the best fit for such an application. The project needs to take future scalability and high-performance requirements into account. I have been looking at Cassandra and MySQL solutions so far. After studying Cassandra I realized that a simple news-feed design that shows all the posts from a user's network would be easy to build: on each post, you execute fast writes to all followers of the posting user. But for my kind of application, where there is the additional filter of 'followed topics' (i.e. the user receives only posts created by his network AND tagged with topics he is following), I could not convince myself of a good schema design in Cassandra. Perhaps I missed something because of my limited understanding of Cassandra, so can you please help me out with suggestions for how this news feed could be implemented in Cassandra? Looking forward to a great project with Cassandra! Edit: A maximum of 5 tags is allowed per question (i.e. at most 5 topics can be tagged on a question).

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally, but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which of these is more efficient (I've flipped the condition):

      row["value"] == DBNull.Value;
      // or
      row["value"] is DBNull;
      // or
      row["value"].GetType() == typeof(DBNull)
      // or... any suggestions?

    This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? e.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists; I don't know the column index of the column (hence the column-name lookup); and I'm asking specifically about checking for DBNull and then assigning (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials):

      row["value"] == DBNull.Value:             00:00:01.5478995
      row["value"] is DBNull:                   00:00:01.6306578
      row["value"].GetType() == typeof(DBNull): 00:00:02.0138757

    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the case of the column name (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):

      row["Value"] == DBNull.Value:             00:00:12.2792374

    The moral of the story seems to be that if you can't look up a column by its index, then make sure the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:

      No Caching:   00:00:03.0996622
      With Caching: 00:00:01.5659920

    So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.

    Read the article

  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering: var filterCollection = from s in listCollection where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0 orderby s.FilterValue select s; I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only ever be 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and I execute the filtering + binding on every KeyUp, there is a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure exactly where my problem lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised into some kind of index that would speed up searching. Any suggestions or code samples are much welcomed. Thanks.

    Read the article

  • Django LFS - custom views

    - by owca
    For all the Lightning Fast Shop users out there: I'm trying to implement my own front-page view that will list all products from the shop (under the '/' address). So I have a template:

      {% extends "lfs/shop/shop_base.html" %}
      {% block content %}
      <div id="najnowsze_produkty">
        <ul>
          {% for obj in objects %}
          <li>{{ obj.name }}</li>
          {% endfor %}
        </ul>
      </div>
      {% endblock %}

    and then I've edited the main shop view:

      from lfs.catalog.models import Category
      from lfs.catalog.models import Product

      def shop_view(request, template_name="lfs/shop/shop.html"):
          products = Product.objects.all()
          shop = lfs_get_object_or_404(Shop, pk=1)
          return render_to_response(template_name, RequestContext(request, {
              "shop": shop,
              "products": products
          }))

    but it just shows nothing. When I run the Product.objects.all() query in the shell I get results. Any ideas what could cause the problem? Maybe I should filter products with 'active' status only? But I'm not sure whether that could influence all objects in any way.

    Read the article

  • Troubleshooting a JavaScript function in IE

    - by CreativeNotice
    So this function works fine in Gecko and WebKit browsers, but not in IE7. I've busted my brain trying to spot the issue. Does anything stick out for you? The basic premise is that you pass in a data object (in this case a response from jQuery's $.getJSON); we check the response code, set the notification's class, append a layer, and show it to the user, then reverse the process after a time limit.

      function userNotice(data){
          // change class based on error code returned
          var myClass = '';
          if(data.code == 200){ myClass='success'; }
          else if(data.code == 400){ myClass='error'; }
          else{ myClass='notice'; }

          // create message html, add to DOM, fade in
          var myNotice = '<div id="notice" class="ajaxMsg '+myClass+'">'+data.msg+'</div>';
          $("body").append(myNotice);
          $("#notice").fadeIn('fast');

          // fade out and remove from DOM after delay
          var t = setTimeout(function(){
              $("#notice").fadeOut('slow',function(){ $(this).remove(); });
          },5000);
      }

    Read the article

  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers. Specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously, with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library performing much the same function as libevent, that is, picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc). They all also seem to employ a multiprocess or multithread strategy once the connection is made, using worker threads to handle the more CPU-intensive tasks while another thread continues to listen for and handle connections (via events). This is the extent of my understanding and of my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or do these worker threads handle their own input and output? I imagine a fixed number of worker threads are spawned, and connections are queued up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in.
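
    To make the hand-off concrete, here is a deliberately simplified sketch of one common arrangement: a single accept thread (standing in for the event loop's "connection ready" notification) pushes connections onto a bounded queue, and a fixed pool of workers drains it. Real servers use non-blocking I/O with epoll/kqueue rather than a blocking accept, but the queue between the event thread and the workers looks much the same. Sketched in Java here rather than C purely for brevity:

      import java.io.IOException;
      import java.net.ServerSocket;
      import java.net.Socket;
      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;

      public class HandoffServer {

          public static void main(String[] args) throws Exception {
              final BlockingQueue<Socket> ready = new ArrayBlockingQueue<>(1024);
              int workers = Runtime.getRuntime().availableProcessors();

              // Fixed worker pool: each worker takes a ready connection and handles it fully.
              for (int i = 0; i < workers; i++) {
                  Thread worker = new Thread(() -> {
                      while (true) {
                          try (Socket s = ready.take()) {
                              handle(s);          // CPU-heavy work stays off the accept thread
                          } catch (Exception e) {
                              // log and keep the worker alive
                          }
                      }
                  });
                  worker.setDaemon(true);
                  worker.start();
              }

              // Single accept thread: the analogue of the event loop noticing a new connection.
              try (ServerSocket server = new ServerSocket(8080)) {
                  while (true) {
                      ready.put(server.accept()); // blocks if every worker is busy and the queue is full
                  }
              }
          }

          static void handle(Socket s) throws IOException {
              // Placeholder: read a request from s and write a response back.
          }
      }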

    Read the article

  • SendKeys problem from a .NET program

    - by user203123
    The code below I copied from MSDN with a bit of modification:

      [DllImport("user32.dll", CharSet = CharSet.Unicode)]
      public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

      [DllImport("User32")]
      public static extern bool SetForegroundWindow(IntPtr hWnd);

      int cnt = 0;

      private void button1_Click(object sender, EventArgs e)
      {
          IntPtr calculatorHandle = FindWindow("Notepad", "Untitled - Notepad");
          if (calculatorHandle == IntPtr.Zero)
          {
              MessageBox.Show("Calculator is not running.");
              return;
          }
          SetForegroundWindow(calculatorHandle);
          SendKeys.SendWait(cnt.ToString());
          SendKeys.SendWait("{ENTER}");
          cnt++;
          SendKeys.Flush();
          System.Threading.Thread.Sleep(1000);
      }

    The problem is that the number sequence in Notepad is not continuous. The first click always results in 0 (as expected), but from the second click onwards the result is unpredictable (though the sequence is still in order, e.g. 3, 4, 5, 10, 14, 15, ...). If I click the button fast enough, I am able to get the results in continuous order (0,1,2,3,4,...), but sometimes it produces the same number more than twice (e.g. 0,1,2,3,3,3,4,5,6,6,6,7,8,9,...).

    Read the article

  • Dynamically created operators

    - by Gero
    I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user must fill in the operation blocks and the result blocks, and the program will solve it. I'm solving it by brute force: I generate all non-repeating 9-digit number combinations using a recursive algorithm, and that part is pretty fast. Up to here all is great! The problem comes when my program evaluates the operations depending on the character in each block. It's extremely slow (it never gets the answer) because of the char comparisons against +, -, *, etc.; I'm doing a CASE. Is there some way, or some programming language, which allows dynamic creation of operators? Then I could define the operator at ROW1COL2 to be a +, and do the same for all the other operations. I leave a screenshot of the app so it's easier to understand how the puzzle works: http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png PS: The algorithm works; I tried it with a trivial puzzle and it solved it in a second.
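
    What most languages offer instead of user-definable operators is a lookup table from the operator character to a function, which turns the per-evaluation CASE into a single table lookup and lets each block "own" its operator. In the original C++ program that would be a std::map (or flat array) of char to function pointers or std::function; the same idea is sketched below in Java purely for brevity, with row1col2 as a hypothetical block name:

      import java.util.HashMap;
      import java.util.Map;
      import java.util.function.IntBinaryOperator;

      public class OperatorTable {

          // One entry per operator character; adding an operator is just another put().
          private static final Map<Character, IntBinaryOperator> OPS = new HashMap<>();
          static {
              OPS.put('+', (a, b) -> a + b);
              OPS.put('-', (a, b) -> a - b);
              OPS.put('*', (a, b) -> a * b);
              OPS.put('/', (a, b) -> a / b);
          }

          static int apply(char op, int left, int right) {
              IntBinaryOperator f = OPS.get(op);
              if (f == null) {
                  throw new IllegalArgumentException("Unknown operator: " + op);
              }
              return f.applyAsInt(left, right);
          }

          public static void main(String[] args) {
              char row1col2 = '+';                        // the operator the user typed into that block
              System.out.println(apply(row1col2, 2, 3));  // prints 5
          }
      }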

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5 GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200 GB). We create a clustered index, best suited to how we retrieve data from the table, on the following columns, which serve as our primary key:

      [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    Because we always have the entire primary key when we query the table, these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

      SELECT [Region], MAX([FileDate]) AS [FileDate]
      FROM HugeTable
      GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional index insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for this query come back quickly enough (200 ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this one, for some reason, doesn't "feel right".

    Read the article

  • Web Workers - Transferable Objects for JSON

    - by kclem06
    HTML5 Web Workers are very slow when using worker.postMessage on a large JSON object. I'm trying to figure out how to transfer a JSON object to a web worker using the 'Transferable Objects' support in Chrome, in order to speed this up. Here is what I'm referring to, and it looks like it should speed things up quite a bit: http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast I'm having trouble finding a good example of this (and I don't believe I want to use an ArrayBuffer). Any help would be appreciated. I'm imagining something like this:

      worker = new Worker('workers.js');
      var large_json = {};
      for(var i = 0; i < 20000; ++i){
          large_json[i] = i;
          large_json["test" + i] = "string";
      };
      // How do I make this call use Transferable Objects?
      // Serializing this currently takes approx. 2 seconds for me.
      worker.webkitPostMessage(large_json);

    Read the article

  • Sybase fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is really very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

      -- Sybase optimizes and uses multi-column index... fast!
      SELECT ID,status,dateTime FROM myTable WHERE status in ('NEW','SENT') ORDER BY ID

      -- Sybase does not use index and does very slow table scan
      SELECT ID,status,dateTime FROM myTable WHERE status in (select status from allowableStatusValues) ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when selecting from another table. Someone please give me a clue, and thank you in advance.

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (ie. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it. Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, ie. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

      Sub TestBatchUpdate()
          CurrentDb.Execute "create table testy (x int, y int)"

          Dim rs As New ADODB.Recordset
          rs.CursorLocation = adUseServer
          rs.Open "testy", CurrentProject.AccessConnection, _
                  adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

          Dim n, v
          n = Array(0, 1)
          v = Array(50, 55)

          Debug.Print "starting loop", Time
          For i = 1 To 10000
              rs.AddNew n, v
          Next i
          Debug.Print "done loop", Time

          rs.UpdateBatch
          Debug.Print "done update", Time

          CurrentDb.Execute "drop table testy"
      End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Python pixel manipulation library

    - by silinter
    So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast. My first thought was pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they lock the surface for the duration of their lifetimes, and the only way to blit them to a surface is either to copy the pixels to a new surface, or to use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem). I don't have much experience with PyOpenGL or Pyglet, but I'm wondering whether there is a faster library for doing pixel manipulation, or a faster method in pygame for doing pixel manipulation. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program. My program will chiefly be dealing in loading images and writing/reading to/from surfaces.

    Read the article

  • How to recalculate all-pairs shortest paths on-line if nodes are getting removed?

    - by Pavel Shved
    The latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph whose nodes are sometimes removed. The problem is to quickly re-calculate the shortest paths between all pairs of nodes after such removals. With a simple modification of the Floyd-Warshall algorithm we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal. The other alternative is to run a graph search on each query. This way each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time. What algorithm can be used to "balance" the query and update time for the all-pairs shortest-paths problem when nodes of the graph are being removed?
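
    For reference, a compact sketch of the Floyd-Warshall variant described above, keeping a next-hop table so that reading off a path costs O(1) per hop; rebuilding dist and next from scratch after every node removal is exactly the O(n³) cost the question is trying to avoid. Here dist is assumed to hold edge weights, Double.POSITIVE_INFINITY for missing edges, and 0 on the diagonal:

      public class AllPairsShortestPaths {

          /** Runs Floyd-Warshall in place on dist and returns next[i][j], the next node
           *  after i on a shortest i->j path, or -1 when no path exists. */
          static int[][] floydWarshall(double[][] dist) {
              int n = dist.length;
              int[][] next = new int[n][n];
              for (int i = 0; i < n; i++) {
                  for (int j = 0; j < n; j++) {
                      next[i][j] = dist[i][j] < Double.POSITIVE_INFINITY ? j : -1;
                  }
              }
              for (int k = 0; k < n; k++) {
                  for (int i = 0; i < n; i++) {
                      for (int j = 0; j < n; j++) {
                          if (dist[i][k] + dist[k][j] < dist[i][j]) {
                              dist[i][j] = dist[i][k] + dist[k][j];
                              next[i][j] = next[i][k];   // first hop of the improved path
                          }
                      }
                  }
              }
              return next;
          }
      }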

    Read the article

  • Solr associations

    - by Tom
    Hi all, Over the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are available out of the box or can be easily configured. There is, however, one feature that we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses:

      <document>
        <name>Apache</name>
        <cat>1</cat>
        ...
      </document>
      <document>
        <name>McDonalds</name>
        <cat>2</cat>
        ...
      </document>

    In addition we have another XML file with all the categories and synonyms:

      <cat id=1>
        <name>software</name>
        <synonym>IT</synonym>
      </cat>
      <cat id=2>
        <name>fast food</name>
        <synonym>restaurant</synonym>
      </cat>

    We want to associate businesses and categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we should be able to update the categories (adding/removing synonyms, ...) without re-indexing all the businesses. Is there anything in Solr that does this kind of association, or do we need to develop something specific? All feedback and suggestions are welcome. Thanks in advance, Tom

    Read the article

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to window message processing on the Pocket PC. What I have is a PocketPC application written in C++. It has only one standard message loop: while (GetMessage (&msg, NULL, 0, 0)) { { TranslateMessage (&msg); DispatchMessage (&msg); } } We also have standard DlgProcs. In the switch of the DlgProc, we call a proprietary 3rd-party API. This API uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice, faster than they should), it seems as though recursion is created. Windows begins processing the first message, gets the API into a locked thread-safe state, and then jumps to process the next (identical UI) message. Since the second message also makes the API call, that call fails because the API is locked. Because of the design of this legacy system, the API will stay locked until the recursion unwinds (which is also triggered by the user, so it could stay locked the entire working day). I am struggling to figure out exactly why this is happening and what I can do about it. Is it because Windows recognizes that the socket communication will take time and preempts it? Is there a way I can force the API call to complete before preemption? Is there a way I can slow down the message processing, or re-queue the message to ensure the first one executes (capturing it and doing a PostMessage back to the window didn't work)? We don't want to lock the UI while the first call completes. Any insight is greatly appreciated! Thanks!!

    Read the article
