Search Results

Search found 6670 results on 267 pages for 'speed dial'.

Page 223/267 | < Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >

  • Netbeans Profile JUnit 4 problem

    - by Krishna K
    I have a unit test that takes 200 seconds to run, and I am trying to use the NetBeans profiler to speed it up. But the profiler doesn't run the unit test: it just creates an instance of the test class and exits, without running the actual test methods or the @Before / @After methods. This is a Maven project with Surefire and JUnit 4. Partial output is below.

        Profiler Agent: Waiting for connection on port 5140, timeout 10 seconds (Protocol version: 9)
        Profiler Agent: Established local connection with the tool
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running com.cris.puzzle.solvers.SudokuSolverTest
        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec

        Results :

        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

        Profiler Agent: Connection with agent closed
        Profiler Agent: Connection with agent closed
        Profiler Agent: Initializing...
        Profiler Agent: Options: >C:/Program Files/NetBeans 6.8/profiler3/lib,5140,10<
        Profiler Agent: Initialized succesfully
        ------------------------------------------------------------------------
        BUILD SUCCESSFUL
        ------------------------------------------------------------------------
        Total time: 14 seconds

    Does anyone know how to make it work? Thank you.

    Read the article

  • Java and tomcat vs ASP.NET and IIS

    - by Mark Cooper
    Until recently I'd considered myself to be a pretty good web programmer (coming up on 10 years of commercial experience on a variety of e-commerce, static and enterprise applications). I'm self-taught and have always used the Microsoft product stack (ASP, ASP.NET). My applications are always functional and relatively bug-free, but they have never been lightning fast. As a frequent web user I always found this to be the norm: how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik, etc.)? In truth, none of them is particularly fast. I always attributed this to "the way things are with web apps."

    Then I came across a product called Jira from Atlassian, and it has stopped me in my tracks. This application is fast, and I mean blindingly fast: too fast to time the switches between pages, with fully live content, lots of images and data, cross-references and so on. I run it on an intranet, with a large application DB, on a very ordinary server (single processor, SATA HDD, 8 GB RAM).

    Am I missing something? Are my programming techniques that bad? I am wondering if this speed gain is down to it being written in Java and running on Tomcat. Does anyone have any benchmarks comparing JSP / ASP or Tomcat / IIS?

    Thanks, Mark

    NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation with them... but I would like to be able to write applications like theirs :)

    Read the article

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value stores (where the value is either strictly a single value or possibly an object) for use with Python, and have found a few promising ones. I have no specific requirements yet because I am in the evaluation phase. I'm looking for what's good, what's bad, which corner cases these things handle well or badly, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. with the various key:value stores and Python.

    I'm looking primarily at:

    memcached - http://www.danga.com/memcached/
        Python clients: http://pypi.python.org/pypi/python-memcached/1.40 and http://www.tummy.com/Community/software/python-memcached/
    CouchDB - http://couchdb.apache.org/
        Python clients: http://code.google.com/p/couchdb-python/
    Tokyo Tyrant - http://1978th.net/tokyotyrant/
        Python clients: http://code.google.com/p/pytyrant/
    Lightcloud - http://opensource.plurk.com/LightCloud/
        Based on Tokyo Tyrant, written in Python
    Redis - http://code.google.com/p/redis/
        Python clients: http://pypi.python.org/pypi/txredis/0.1.1
    MemcacheDB - http://memcachedb.org/

    So I started benchmarking (simply inserting keys and reading them back), using a simple counter to generate numeric keys and the value "A short string of text":

    memcached: CentOS 5.3 / python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 GB memory: 14,000 inserts per second, 16,000 reads per second. No real optimization; nice. memcachedb claims on the order of 17,000 to 23,000 inserts per second and 44,000 to 64,000 reads per second.

    I'm also wondering how the others stack up speed-wise.
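    For reference, here is a minimal sketch of the kind of insert/read benchmark described above, assuming a local memcached on the default port and the python-memcached client; the key count and value are illustrative, not a rigorous benchmark.

        # Rough benchmark sketch: sequential set/get against a local memcached.
        # Assumes memcached is listening on 127.0.0.1:11211 and the
        # python-memcached package is installed.
        import time
        import memcache

        mc = memcache.Client(['127.0.0.1:11211'])
        n = 100000
        value = "A short string of text"

        start = time.time()
        for i in range(n):
            mc.set(str(i), value)
        print("inserts/sec: %d" % (n / (time.time() - start)))

        start = time.time()
        for i in range(n):
            mc.get(str(i))
        print("reads/sec: %d" % (n / (time.time() - start)))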

    Read the article

  • Java (Tomcat): how to configure a cookieless subdomain to serve static content

    - by Webinator
    One of the tips given by both Google and Yahoo! to speed up web page loading is to configure a cookieless subdomain to serve static content. How do you configure a "cookieless subdomain" using Tomcat in standalone mode? (This question is not about how to use Apache to serve static content in a cookieless way, but about how to do it with Tomcat standalone.)

    Note that I don't care about filters supporting If-Modified-Since, nor about filters supporting gzipping: the static content I'm serving is forever cacheable (or its name will change) and it is already compressed data (so gzip would only slow down the transfer).

    Do I need two different Tomcat webapps (one "cookiefull" and one "cookieless")? Do I need two different servlets? (As of now I've got only one dispatcher/controller servlet.)

    Why would a "regular" link to, say, a static image be requested in a cookiefull way when it is on the same domain as the main webapp, and then in a "cookieless" way when it is on a subdomain? I don't understand exactly what is going on: is it the browser that decides whether to append cookies to the request? If so, why would it not append the cookies to a request for static content on a "cookieless" subdomain? Any example of what is going on behind the scenes is most welcome :)

    Read the article

  • WinForms Control.BeginInvoke asynchronous callback

    - by Darran
    I have a number of Janus grid controls that need to be populated on application startup. I'd like to load these grids on different threads to speed up startup time and the time it takes to refresh them. Each grid is on a separate tab, and ideally I'd like to use Control.BeginInvoke on each grid and enable its tab once the grid has finished loading. I know that with delegates you can do an asynchronous callback when using BeginInvoke, so I could enable the tabs in the asynchronous callback; however, this is not possible when using Control.BeginInvoke. Is there a way to do asynchronous callbacks using Control.BeginInvoke, or possibly a better solution? So far I have:

        public delegate void BindDelegate(IMyGrid grid);

        private IAsyncResult InvokeBind(IMyGrid grid)
        {
            return ((Control)grid).BeginInvoke(
                new BindDelegate(DoBind), new object[] { grid });
        }

        private void DoBind(IMyGrid grid)
        {
            grid.Bind(); // Expensive operation
        }

        private void RefreshComplete()
        {
            IAsyncResult grid1Asynch = InvokeBind(grid1);
            IAsyncResult grid2Asynch = InvokeBind(grid2);
            IAsyncResult grid3Asynch = InvokeBind(grid3);
            IAsyncResult grid4Asynch = InvokeBind(grid4);
            IAsyncResult grid5Asynch = InvokeBind(grid5);
            IAsyncResult grid6Asynch = InvokeBind(grid6);
        }

    I could spin off a separate thread and keep checking whether those IAsyncResults have completed, re-enabling the tab containing each grid as it finishes. Is there a better way of doing this?

    Read the article

  • Dreamweaver and GZIP files

    - by Vian Esterhuizen
    Hi, I've recently tried to optimize my site for speed and bandwidth. Amongst many other techniques, I've used GZIP on my .css and .js files. Using PuTTY I compressed the files on my site and then added the following to my .htaccess file so that they get served properly, because all my links are without the ".gz":

        <IfModule mod_rewrite.c>
          RewriteEngine On
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{HTTP_USER_AGENT} !Konqueror
          RewriteCond %{REQUEST_FILENAME}.gz -f
          RewriteRule ^(.*)\.css$ $1.css.gz [QSA,L]
          RewriteRule ^(.*)\.js$ $1.js.gz [QSA,L]
          <FilesMatch \.css\.gz$>
            ForceType text/css
          </FilesMatch>
          <FilesMatch \.js\.gz$>
            ForceType text/javascript
          </FilesMatch>
        </IfModule>
        <IfModule mod_mime.c>
          AddEncoding gzip .gz
        </IfModule>

    My problem is that I can't work on the gzipped files in Dreamweaver. Is there a plugin or extension of some sort that allows Dreamweaver to temporarily uncompress these files so it can read them? Or is there a way I can work on my local copies as regular files and have them automatically compressed server-side when they are uploaded? Or is there a different code editor I should be using that would get around this completely? Or just a different technique for doing this? I hope this question makes sense. Thanks.
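    One way to keep editing the plain files locally and only produce the .gz copies at upload time is a small pre-deploy script. A rough sketch in Python, under the assumption that the compressed copies live next to the originals; the site root path is illustrative:

        # Regenerate the .gz copies of every .css / .js file before uploading,
        # so the editable plain files never need to be touched on the server.
        import gzip
        import os
        import shutil

        SITE_ROOT = "site"  # illustrative local path

        for root, dirs, files in os.walk(SITE_ROOT):
            for name in files:
                if name.endswith((".css", ".js")):
                    src = os.path.join(root, name)
                    with open(src, "rb") as f_in, gzip.open(src + ".gz", "wb") as f_out:
                        shutil.copyfileobj(f_in, f_out)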

    Read the article

  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green-screen app with Python 2.7.4, but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key - Cb_p)**2 + (Cr_key - Cr_p)**2)
            if temp < tola:
                return 0.0
            elif temp < tolb:
                return (temp - tola) / (tolb - tola)
            else:
                return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    int(fgr - mask * key_color[0] + mask * bgr),
                    int(fgg - mask * key_color[1] + mask * bgg),
                    int(fgb - mask * key_color[2] + mask * bgb))

    Am I doing anything hugely inefficient here which makes it run so slowly? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but in this case I can't see a way to replace the loop. The pixels[x, y] assignment seems to take the most time, but not knowing Python very well I am unsure of a more efficient way to do this. Any help would be appreciated.
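    For comparison, here is a hedged sketch of how the loop could be replaced with NumPy array operations: the per-pixel colorclose() call becomes a clipped linear ramp over the whole image. The file names, key colour and tolerance values are illustrative stand-ins for the question's variables.

        import numpy as np
        from PIL import Image

        fg_image = Image.open("foreground.png").convert("RGB")   # illustrative paths
        bg_image = Image.open("background.png").convert("RGB")
        key_color = (0, 255, 0)                                  # illustrative key colour
        cb_key, cr_key, tola, tolb = 120.0, 45.0, 20.0, 50.0     # illustrative values

        fg = np.asarray(fg_image, dtype=float)                   # H x W x 3
        bg = np.asarray(bg_image, dtype=float)
        ycbcr = np.asarray(fg_image.convert("YCbCr"), dtype=float)
        cb, cr = ycbcr[..., 1], ycbcr[..., 2]

        dist = np.sqrt((cb_key - cb) ** 2 + (cr_key - cr) ** 2)
        # 0 inside tola, linear ramp up to tolb, 1 beyond; then the same flip as the loop
        mask = 1.0 - np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)
        out = fg - mask[..., None] * np.asarray(key_color) + mask[..., None] * bg
        result = Image.fromarray(np.uint8(np.clip(out, 0, 255)))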

    Read the article

  • Separation of multipage tiff with compression "CCITT T.6" very slow

    - by Alex
    I need to separate multiframe TIFF files, and I use the following method:

        public static Image[] GetFrames(Image sourceImage)
        {
            Guid objGuid = sourceImage.FrameDimensionsList[0];
            FrameDimension objDimension = new FrameDimension(objGuid);
            int frameCount = sourceImage.GetFrameCount(objDimension);
            Image[] images = new Image[frameCount];
            for (int i = 0; i < frameCount; i++)
            {
                MemoryStream ms = new MemoryStream();
                sourceImage.SelectActiveFrame(objDimension, i);
                sourceImage.Save(ms, ImageFormat.Tiff);
                images[i] = Image.FromStream(ms);
            }
            return images;
        }

    It works fine, but if the source image was encoded using CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU (edit: one core is at 100% during the process). When I afterwards save the images to a single file using standard (LZW) compression, separating that LZW file takes under 1 second. Saving with CCITT compression also takes very long. Is there a way to speed up the process?

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large, Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my own machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB only detects 1 of the 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1 because there is only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors.

    My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems to be about setting up MATLAB with multiple EC2 instances (not about multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • How to fix RapidXML String ownership concerns?

    - by Roddy
    RapidXML is a fast, lightweight C++ XML DOM parser, but it has some quirks. The worst of these to my mind is this (from section 3.2, "Ownership Of Strings", of the manual):

        Nodes and attributes produced by RapidXml do not own their name and value strings. They merely
        hold the pointers to them. This means you have to be careful when setting these values manually,
        by using xml_base::name(const Ch *) or xml_base::value(const Ch *) functions. Care must be taken
        to ensure that lifetime of the string passed is at least as long as lifetime of the node/attribute.
        The easiest way to achieve it is to allocate the string from memory_pool owned by the document.
        Use memory_pool::allocate_string() function for this purpose.

    Now, I understand it's done this way for speed, but it feels like a car crash waiting to happen. The following code looks innocuous, but 'name' and 'value' are out of scope when foo returns, so the document is left undefined:

        void foo()
        {
            char name[] = "Name";
            char value[] = "Value";
            doc.append_node(doc.allocate_node(node_element, name, value));
        }

    The suggestion of using allocate_string() as per the manual works, but it's so easy to forget. Has anyone 'enhanced' RapidXML to avoid this issue?

    Read the article

  • Highlighting a Table Correctly Despite rowspan and colspan attributes - WITHOUT jQuery

    - by ScottSEA
    Thanks to some !#$#@ in another department who wrote some crap code, I am unable to use the jQuery library, despite my fervent desire to do so. Their software has already been released into the world, and I have to be compatible.

    I am trying to highlight a table. Desired behavior:

    - Clicking on a cell in the body highlights the row.
    - Clicking on a cell in the head highlights the column.
    - If a column and row are both highlighted, the intersection is highlighted a different color (super-highlight).
    - Clicking on a previously super-highlighted cell turns off the highlights.

    This behavior is simple enough to implement for a basic table, but when rowspan and colspan attributes start rearing their ugly heads, things get a little wonky: highlighting cell[5], for instance, no longer works reliably.

    My thought, in order to speed up execution of the highlighting itself (done by changing a class name), is to pre-calculate the 'offsets' of all cells (a 'colStart' and 'colEnd', 'rowStart' and 'rowEnd' for each) when the page loads, and store them in an array somehow.

    The question: how would YOU implement this functionality? I am fairly rusty at JavaScript, my programming skills are rather rudimentary, and I would benefit greatly from some guidance. Thanks, Scott.
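    A sketch of the offset pre-computation described above, written in Python for clarity (the page itself would do this in JavaScript): given each row's list of (rowspan, colspan) pairs, it works out which grid rows and columns every cell actually covers, skipping slots already taken by earlier spans.

        def compute_offsets(rows):
            """rows: list of rows, each row a list of (rowspan, colspan) pairs."""
            occupied = set()   # (row, col) slots already covered by earlier spans
            offsets = []       # one dict per cell, in document order
            for r, row in enumerate(rows):
                col = 0
                for rowspan, colspan in row:
                    while (r, col) in occupied:        # skip columns filled from above
                        col += 1
                    for dr in range(rowspan):
                        for dc in range(colspan):
                            occupied.add((r + dr, col + dc))
                    offsets.append({"rowStart": r, "rowEnd": r + rowspan - 1,
                                    "colStart": col, "colEnd": col + colspan - 1})
                    col += colspan
            return offsets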

    Read the article

  • Transition from 2D to 3D later in game development

    - by Axarydax
    Hi, I'd like to work on a game, but to prototype it rapidly I'd like to keep things as simple as possible, so I'd do everything in top-down 2D with GDI+ and WinForms (hey, I like them!), letting me concentrate on the logic and architecture of the game itself. I'm thinking about having the whole game logic (server) in one assembly, where the WinForms app would be a client to that game, and if/when the time is right, I'd write a 3D client.

    I am tempted to use XNA, but I haven't really looked into it, so I don't know whether getting up to speed would take too much time; I really don't want to spend much time doing anything other than the game logic, at least while I have the inspiration. On the other hand, I wouldn't have to abandon everything and move to a new platform when transitioning from 2D to 3D. Another option is just to get over it and learn XNA/Unity/SDL/something at least well enough to build the same 2D version I could in GDI+, so I won't have to worry about switching frameworks any more.

    Let's just say the game is the kind where you watch a dude from behind, run around the game world and interact with objects, so the bird's-eye perspective could be doable for now. Thanks.

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection.

    "Is “double hashing” a password less secure than just hashing it once?" suggests that hashing multiple times may be a good idea, and "How to implement password protection for individual files?" suggests using salt.

    I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How do I achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters.

    - The hashing mechanism must be available in PHP
    - It must be safe
    - It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?)

    Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would that make it safer or unsafer?

    In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt, in order to have a safe and fast password protection mechanism.

    EDIT: The website shouldn't contain anything too sensitive, but I still want it to be secure.

    EDIT2: Thank you all for your replies; I'm using hash("sha256", $salt . ":" . $password . ":" . $id)

    Questions that didn't help:
    - What's the difference between SHA and MD5 in PHP
    - Simple Password Encryption
    - Secure methods of storing keys, passwords for asp.net
    - How would you implement salted passwords in Tomcat 5.5
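    To illustrate the per-user salt idea, here is a small sketch in Python (the question itself is about PHP): a random salt stored alongside an iterated, salted hash. The iteration count and salt length are illustrative, not recommendations.

        import hashlib
        import os

        def hash_password(password, salt=None, rounds=100000):
            if salt is None:
                salt = os.urandom(16)          # random per-user salt
            # PBKDF2 packages up the "hash many times" idea from the question
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, rounds)
            return salt, digest

        salt, digest = hash_password("correct horse battery staple")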

    Read the article

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily without a sophisticated remove operation.

    I have the following problems:

    - the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed)
    - some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios)

    I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
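    As a point of reference, the push/pop/membership combination itself is straightforward to get in O(1) average time by pairing a stack with a map from value to position; here is a minimal sketch in Python (the question targets Java), handling distinct values only.

        class HashStack:
            def __init__(self):
                self._items = []    # the stack itself
                self._index = {}    # value -> position in _items

            def push(self, value):
                if value in self._index:
                    raise ValueError("duplicate value")
                self._index[value] = len(self._items)
                self._items.append(value)

            def pop(self):
                value = self._items.pop()
                del self._index[value]
                return value

            def __contains__(self, value):
                return value in self._index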

    Read the article

  • Uploadify and rails 3 authenticity tokens

    - by Ceilingfish
    Hi chaps, I'm trying to get a file upload progress bar working in a Rails 3 app using Uploadify (http://www.uploadify.com), and I'm stuck at authenticity tokens. My current Uploadify config looks like this:

        <script type="text/javascript" charset="utf-8">
          $(document).ready(function() {
            $("#zip_input").uploadify({
              'uploader': '/flash/uploadify.swf',
              'script': $("#upload").attr('action'),
              'scriptData': {
                'format': 'json',
                'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>')
              },
              'fileDataName': "world[zip]",
              //'scriptAccess': 'always', // Uncomment this if for some reason it doesn't work
              'auto': true,
              'fileDesc': 'Zip files only',
              'fileExt': '*.zip',
              'width': 120,
              'height': 24,
              'cancelImg': '/images/cancel.png',
              // We assume that we can refresh the list by doing a JS get on the current page
              'onComplete': function(event, data) { $.getScript(location.href) },
              'displayData': 'speed'
            });
          });
        </script>

    But I am getting this response from Rails:

        Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44

        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):

        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms)

    This appears to be because I'm not sending the authentication cookie along with the request. Does anyone know how I can get the values I should be sending there, and how I can make Rails read the token from HTTP POST rather than trying to find it in a cookie?

    Read the article

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using it, I'm wondering whether my usage of this function is the right one.

    Here is what people usually do:

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
        }
        // Add elements to the cell
        return cell;

    And here is the way I did it:

        NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // The cell row
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor adding elements has to be done.

    In the end I don't know which is best. The "common" method caps the table's memory usage at roughly the number of cells it displays, whilst my method seems to favour speed, as it keeps all calculated cells, but it can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly-minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue.

    Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one.

    As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing.

    What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • Looking for ideas how to refactor my (complex) algorithm

    - by _simon_
    I am trying to write my own Game of Life, with my own set of rules. The first 'concept' I would like to apply is socialization (which basically means whether a cell wants to be alone or in a group with other cells). The data structure is a 2-dimensional array (for now).

    In order to be able to move a cell to/away from a group of other cells, I need to determine where to move it. The idea is that I evaluate all the cells in the area (the neighbours) and get a vector which tells me where to move the cell. The size of the vector is 0 or 1 (don't move or move) and the angle is an array of directions (up, down, right, left). This is an image with a representation of the forces on a cell, as I imagined it (but the reach could be more than 5):

    Let's for example take this picture:

    Forces from the lower-left neighbour: down (0), up (2), right (2), left (0). Forces from the right neighbour: down (0), up (0), right (0), left (2). Sum: down (0), up (2), right (0), left (0). So the cell should go up.

    I could write an algorithm with a lot of if statements and check all cells in the neighbourhood. Of course this algorithm would be easiest if the 'reach' parameter were set to 1 (first column in picture 1). But what if I change the reach parameter to 10, for example? I would need to write an algorithm for each 'reach' parameter in advance... How can I avoid this (notice that the force grows exponentially: 1, 2, 4, 8, 16, 32, ...)? Can I use a specific design pattern for this problem?

    Also: the most important thing is not speed, but being able to extend the initial logic. Things to take into consideration:

    - reach should be passed as a parameter
    - I would like to be able to change the function which calculates the force (exponential, Fibonacci)
    - a cell can go to a new place only if that place is not populated
    - watch for corners (you can't evaluate right and top neighbours in the top-right corner, for example)
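    Reading the example literally, one reach-independent way to express the rule is: every occupied neighbour within 'reach' contributes a push in each direction, weighted by a function of its distance (doubling as it gets closer, matching the 1, 2, 4, 8, ... progression), and opposite directions cancel when the contributions are summed. A sketch in Python, under the assumption that the grid is a list of lists of booleans with y growing downward; the weighting function and the "push away from neighbours" reading of the example are assumptions:

        def force_on(grid, x, y, reach, weight=lambda d, reach: 2 ** (reach - d)):
            """Sum the directional pushes on the cell at (x, y) from occupied
            neighbours within `reach`. Pushes point away from each neighbour."""
            height, width = len(grid), len(grid[0])
            force = {"up": 0, "down": 0, "left": 0, "right": 0}
            for dy in range(-reach, reach + 1):
                for dx in range(-reach, reach + 1):
                    if dx == 0 and dy == 0:
                        continue
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height and grid[ny][nx]:
                        w = weight(max(abs(dx), abs(dy)), reach)
                        if dy > 0: force["up"] += w      # neighbour below pushes up
                        if dy < 0: force["down"] += w    # neighbour above pushes down
                        if dx > 0: force["left"] += w    # neighbour to the right pushes left
                        if dx < 0: force["right"] += w   # neighbour to the left pushes right
            return force

    Swapping the weight function (for a Fibonacci-based one, say) changes the socialization curve without touching the loop, and the bounds check handles the corner cases instead of needing a separate algorithm per reach value.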

    Read the article

  • Find unique vertices from a 'triangle-soup'

    - by sum1stolemyname
    I am building a CAD-file converter on top of two libraries (Open CASCADE and the DWF Toolkit). However, my question is platform-agnostic.

    Given: I have generated a mesh, as a list of triangular faces, from a model constructed through my application. Each triangle is defined by three vertices, each of which consists of three floats (x, y and z coordinates). Since the triangles form a mesh, most of the vertices are shared by more than one triangle.

    Goal: I need to find the list of unique vertices, and to generate an array of faces consisting of tuples of three indices into this list.

    What I want to do is this:

        // step 1: build a list of unique vertices
        for each triangle
            for each vertex in triangle
                if not vertex in listOfVertices
                    add vertex to listOfVertices

        // step 2: build a list of faces
        for each triangle
            for each vertex in triangle
                get vertex index from listOfVertices
                addToMap(vertex index, triangle)

    While I do have an implementation which does this, step 1 (the generation of the list of unique vertices) is really slow, on the order of O(n²), since each vertex is compared to all vertices already in the list. I thought, "Hey, let's build a hash map of my vertices' components using std::map, that ought to speed things up!", only to find that generating a unique key from three floating-point values is not a trivial task.

    Here the experts of Stack Overflow come into play: I need some kind of hash function which works on 3 floats, or any other function generating a unique value from a 3D vertex position.
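    For what it's worth, the usual fix is to key a hash map on the vertex coordinates quantised to a small tolerance, so nearly-equal floats collapse onto one key. A sketch in Python for brevity (in the C++ project an std::unordered_map or std::map over the same quantised triple would play the same role); the tolerance value is an assumption:

        def index_mesh(triangles, tol=1e-6):
            """triangles: iterable of triangles, each a sequence of three (x, y, z) tuples.
            Returns (vertices, faces), where each face holds indices into vertices."""
            vertices = []      # unique vertex list, in first-seen order
            index_of = {}      # quantised coordinates -> index in vertices
            faces = []
            for tri in triangles:
                face = []
                for v in tri:
                    key = tuple(round(c / tol) for c in v)   # snap to the tolerance grid
                    if key not in index_of:
                        index_of[key] = len(vertices)
                        vertices.append(v)
                    face.append(index_of[key])
                faces.append(tuple(face))
            return vertices, faces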

    Read the article

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS. We are downloading property offers, and there have been some performance issues. The old architecture looked like this:

    1. A property is updated. If the number of affected rows is not 1, the update is not considered successful; otherwise the update query solves our problem.
    2. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them.
    3. After deleting duplicates if needed, if the update was still not successful, an insert happens.

    This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties).

    Our host told me to use REPLACE INTO instead of deleting properties, and to consider all deprecated properties as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find an example anywhere of REPLACE INTO with a WHERE clause (I'd like to replace a DEAD property with the new property, instead of deleting the old property and inserting a new one, to keep things optimized). My query looked like this:

        replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue

    Of course, I've calculated idValue and handled everything, but I got a syntax error. I would like to know whether I'm wrong and there is a WHERE clause for REPLACE INTO. I've found an alternative solution, which is even better than REPLACE INTO (simply using an UPDATE query), because deletes happen behind the curtains if I use REPLACE INTO, but I would like to know whether I'm wrong when I say that REPLACE INTO doesn't have a WHERE clause.

    For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html

    Thank you for your answers in advance, Lajos Árpád

    Read the article

  • Delphi threads deadlock

    - by Lobuno
    Hello! I am having a problem with an occasional deadlock when destroying some threads. I've tried to debug the problem, but the deadlock never seems to occur when debugging in the IDE, perhaps because of the lower speed of events in the IDE.

    The problem: the main thread creates several threads when the application starts. The threads are always alive and synchronize with the main thread. No problems at all. The threads are destroyed when the application ends (mainform.onclose) like this:

        thread1.terminate;
        thread1.waitfor;
        thread1.free;

    and so on. But sometimes one of the threads (which logs some strings to a memo, using Synchronize) will lock the whole application when closing. I suspect that the thread is synchronizing at the moment I call waitfor, and that's when the deadlock happens, but that's just a guess, because the deadlock never happens when debugging (or I've never been able to reproduce it, anyway). Any advice?

    Read the article

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there
        is no user interaction with these resources. You can decrease request latency by serving static
        resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it at your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and configure
        your DNS database with a CNAME record that points the new domain to your existing domain A record.
        Configure your web server to serve static resources from the new domain, and do not allow any
        cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs
        for the static resources.

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie in my browser, even though our application's cookies are set on our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set.

    Apologies if this is a real newb question, I'm new at this! Thanks.

    Read the article

  • GAE Task Queue oddness

    - by b3nw
    I have been testing the task queue with mixed success. Currently I am using the default queue, with default settings, etc.

    I have a test URL set up which inserts about 8 tasks into the queue. In short order, all 8 are completed properly. So far so good. The problem comes up when I reload that URL twice within, say, a minute. Watching the task queue, all the tasks are added properly, but it seems only the first batch executes, yet the "Run in Last Minute" count shows the right number of tasks being run.

    The request logs tell a different story: they show only the first set of 8 running, but all of the task-creation URLs returning successfully. The odd thing is that if I wait about a minute between requests to the task-creation URL, it works fine. Oddly enough, changing the bucket_size or execution speed does not seem to help; only the first batch executes. I have also reduced the number of requests all the way down to 2, and still found that only the first 2 execute. Any tasks added after that show the same behaviour as above.

    Any suggestions? Thanks

    Read the article

  • Is there any special tool for interactive GUI development

    - by niko
    Hi, I am currently preparing exercises about networks and mobile communications for students. I was thinking about creating an interactive user interface which lets the user drag & drop predefined elements and then implements some logic based on element distances etc. An example would be to place two base stations (a predefined element with several properties), set the scale in the interface, and then check the signal interference in the environment (the user interface). The first part might be too abstract, whereas the example might be too specific, but I was wondering whether there already exists a friendly framework or language which enables developers to create interactive interfaces (for teaching/learning purposes) in a short amount of time.

    Usually I write applications for the PC environment in .NET, but in this case it would take too much time to create a specific interface for every exercise. I would appreciate it if anyone could suggest a way to create interactive user interfaces in a short amount of time. Are there any special programming languages or development tools for this kind of application, or are there any useful frameworks for .NET, Java or any other language to speed up the development of user interfaces? Thank you!

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at the current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well, and as a result the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence layer. So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance.

    I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see that as a cure-all. Is it possible to layer a NoSQL DB in between, or something?
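    For what it's worth, the "intermediate layer" usually boils down to some form of read-through caching in front of the hot queries. A toy sketch of the idea, in Python for brevity (in the Java application itself a cache such as Ehcache or memcached would typically sit in the data-access layer); the loader function and TTL are illustrative placeholders:

        import time

        class ReadThroughCache:
            def __init__(self, loader, ttl=300):
                self._loader = loader      # e.g. a function that runs the expensive DB query
                self._ttl = ttl            # seconds before a cached entry goes stale
                self._store = {}

            def get(self, key):
                hit = self._store.get(key)
                if hit is not None and time.time() - hit[1] < self._ttl:
                    return hit[0]          # cache hit: skip the database entirely
                value = self._loader(key)  # cache miss: fall back to the database
                self._store[key] = (value, time.time())
                return value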

    Read the article
