Search Results

Search found 9286 results on 372 pages for 'transfer speed'.

Page 57/372

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no indexed columns and no optimization code yet. I am positive I should index certain columns, since I search on them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL depending on my host provider. Will creating 100,000 records in each table be enough, or will that always be fast in SQLite and I need 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?
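
    The measurement pattern the question is after (bulk-load a table, then time the query you care about before and after adding an index) is the same in any language; below is a rough sketch of it using Java and JDBC rather than C#/Stopwatch, assuming the xerial sqlite-jdbc driver is on the classpath:

        import java.sql.*;

        public class QueryTiming {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection("jdbc:sqlite:test.db")) {
                    c.createStatement().execute(
                        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)");

                    // Bulk-insert 100,000 rows inside one transaction so the setup itself stays fast.
                    c.setAutoCommit(false);
                    try (PreparedStatement ins = c.prepareStatement("INSERT INTO users(name) VALUES (?)")) {
                        for (int i = 0; i < 100_000; i++) {
                            ins.setString(1, "user" + i);
                            ins.addBatch();
                        }
                        ins.executeBatch();
                    }
                    c.commit();
                    c.setAutoCommit(true);

                    // Time the query the application actually runs (the equivalent of C#'s Stopwatch).
                    long start = System.nanoTime();
                    try (PreparedStatement q = c.prepareStatement("SELECT id FROM users WHERE name = ?")) {
                        q.setString(1, "user99999");
                        try (ResultSet rs = q.executeQuery()) { rs.next(); }
                    }
                    System.out.println("lookup took " + (System.nanoTime() - start) / 1_000_000 + " ms");
                }
            }
        }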

    Read the article

  • LinkedList.contains execution speed

    - by Le_Coeur
    Why does the method LinkedList.contains() run faster than this implementation: for (String s : list) if (s.equals(element)) return true; return false; I don't see a great difference between these two implementations (assume the objects being searched for aren't null); both use the same iteration and the same equals operation.
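
    For reference, a small self-contained sketch that makes the comparison concrete (the manual version follows the question's assumption that no elements are null; absolute timings will vary by machine and JIT warm-up):

        import java.util.LinkedList;
        import java.util.List;

        public class ContainsComparison {
            // Hand-written version: same iteration order, same equals() as contains()
            static boolean manualContains(List<String> list, String element) {
                for (String s : list) {
                    if (s.equals(element)) return true;
                }
                return false;
            }

            public static void main(String[] args) {
                List<String> list = new LinkedList<>();
                for (int i = 0; i < 100_000; i++) list.add("item" + i);

                long t1 = System.nanoTime();
                boolean a = list.contains("item99999");
                long t2 = System.nanoTime();
                boolean b = manualContains(list, "item99999");
                long t3 = System.nanoTime();

                System.out.println("contains():       " + (t2 - t1) + " ns -> " + a);
                System.out.println("manualContains(): " + (t3 - t2) + " ns -> " + b);
            }
        }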

    Read the article

  • Sun's JVM instruction speed table

    - by Pindatjuh
    Is there a benchmark available showing how much relative time each instruction costs in a single-threaded, average-case scenario (either with or without the JIT compiler) for Sun's JVM (any version)? If such a benchmark is not already available, how can I get this information? E.g. (relative TIME): iload_1 = 1, iadd = 12, getfield = 40, etc., where getfield is equivalent to 40 iload_1 instructions.
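
    As far as I know, Sun never published such a per-bytecode cost table (costs depend heavily on the JIT), so the usual fallback is to micro-benchmark the code patterns in question, e.g. a local-variable read (iload-style) against a field read (getfield-style). A rough sketch, with the caveat that the JIT may optimize either loop away unless its result is consumed:

        public class FieldVsLocal {
            private int field = 1;

            public static void main(String[] args) {
                FieldVsLocal o = new FieldVsLocal();
                int local = 1;
                long sum = 0;

                // Warm up so the JIT compiles both access patterns before we measure.
                for (int i = 0; i < 1_000_000; i++) sum += o.field + local;

                long t0 = System.nanoTime();
                for (int i = 0; i < 100_000_000; i++) sum += local;   // local-variable access
                long t1 = System.nanoTime();
                for (int i = 0; i < 100_000_000; i++) sum += o.field; // field (getfield) access
                long t2 = System.nanoTime();

                System.out.println("local: " + (t1 - t0) + " ns, field: " + (t2 - t1) + " ns");
                System.out.println("(sum = " + sum + ", printed so the loops are not eliminated)");
            }
        }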

    Read the article

  • How to speed up my websites (backoffices)

    - by jmpena
    Hello, I'm developing some back offices in ASP.NET 2.0. I have put all the images in the cache, gzipped my CSS and JS files, and done everything I can to speed up the load of each option. The performance is good and I have no problems with the clients, but I want even faster loads and I'm looking for recommendations. It's important to mention that these websites are used only on intranets, so for my next projects I'm thinking of using an IFRAME for content; that way (I think) each option will load faster because it doesn't have to reload the entire site. Any help / recommendations? Thanks in advance.

    Read the article

  • Image transfer using the Hessian protocol from a client's folder to a Tomcat server

    - by ?? ?
    My goal is to upload an image (.jpg or .png) from a client's folder to a Tomcat 6 server through the Hessian protocol, do image processing with OpenCV on the server, and then return the image to the client. Question 1: Are the following transfer steps correct? Put a test.jpg image in the client's folder -- convert test.jpg in the client.java (main.java) class to a BufferedImage -- convert the BufferedImage to a Mat or IplImage on the server for use with OpenCV. I have set up a hello-world sample from 'Simple Messaging Example using Hessian', and searched 'Hessian with large binary data' and other websites, but I still don't know how to use it! Question 2: Is there related Java sample code? Thank you very much. By the way, I am using Ubuntu 12 + NetBeans 7.2.
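
    A rough sketch of what the client side of those steps could look like: the image is read from the client's folder, encoded to a byte[] (which Hessian serializes natively), and sent to a remote service that would decode it for OpenCV on the Tomcat side. The ImageService interface, the folder path and the URL are hypothetical placeholders, not an existing API:

        import com.caucho.hessian.client.HessianProxyFactory;
        import javax.imageio.ImageIO;
        import java.awt.image.BufferedImage;
        import java.io.ByteArrayOutputStream;
        import java.io.File;

        public class ImageClient {
            // Hypothetical remote interface; the same interface must be implemented on the Tomcat side.
            public interface ImageService {
                byte[] process(byte[] imageBytes);
            }

            public static void main(String[] args) throws Exception {
                // 1. Read test.jpg from the client's folder into a BufferedImage.
                BufferedImage img = ImageIO.read(new File("client-folder/test.jpg"));

                // 2. Encode it to a byte[] so it can travel over Hessian.
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                ImageIO.write(img, "png", baos);

                // 3. Call the remote service (URL is a placeholder for your Tomcat webapp).
                HessianProxyFactory factory = new HessianProxyFactory();
                ImageService service = (ImageService) factory.create(
                        ImageService.class, "http://localhost:8080/myapp/image");
                byte[] processed = service.process(baos.toByteArray());

                System.out.println("received " + processed.length + " bytes back from the server");
            }
        }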

    Read the article

  • C++ Vector at/[] operator speed

    - by sub
    In order to give functions the option to modify the vector, I can't do: curr = myvec.at( i ); doThis( curr ); doThat( curr ); doStuffWith( curr ); Instead I have to do: doThis( myvec.at( i ) ); doThat( myvec.at( i ) ); doStuffWith( myvec.at( i ) ); (as the answers to my other question pointed out). I'm going to make an awful lot of calls to myvec.at() then. How fast is it compared to the first example, which uses a variable to store the result? Is there a different option for me? Can I somehow use pointers? When things get serious there will be thousands of calls to myvec.at() per second, so every little performance-eater is important.

    Read the article

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slowly on my unfortunately large page: $("#section").find(":visible:input").filter(":first").focus(); Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filter THAT selection down to the first? I want something like :visible:input:first, but that doesn't seem to work.

    Read the article

  • Tags (PHP): improving database speed and user experience

    - by Doodle
    How did Stack Overflow (excuse me if it wasn't the first implementation of this system) decide what its initial tags were? I want to provide users with the best experience on my site and am implementing a tagging system. I don't care about search engines or any of that; I just care about my users. Does anyone have any advice about things that have failed or succeeded when they allowed users to tag things? Does anyone know some good resources on the methodologies of user tagging? Does anyone know some good resources on implementing a tagging system from a programming perspective: database structures, theories, etc.? I'll give my check to whoever I feel points me in the best direction on the subject.

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.

    Read the article

  • Android Speed based on accelerometer values

    - by fraguas
    Hi. I need to obtain the velocity of an Android device based on the accelerometer values. I wrote code that lets me get the accelerometer values, and then I calculate the velocity using the formula v = v0 + at (as a vector calculation). My problem is that my velocity only increases and never decreases. I think the problem is that the device never reports a negative acceleration. Can you help me with this?
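
    A minimal sketch of that v = v0 + at integration on Android, assuming the TYPE_LINEAR_ACCELERATION sensor (whose per-axis values are signed and already have gravity removed; integrating the raw accelerometer, which includes gravity, is one way the computed velocity can only ever grow). The class name is illustrative, and registration with SensorManager is omitted:

        import android.hardware.Sensor;
        import android.hardware.SensorEvent;
        import android.hardware.SensorEventListener;

        public class VelocityTracker implements SensorEventListener {
            private final float[] velocity = new float[3]; // m/s per axis
            private long lastTimestamp = 0;                // nanoseconds

            @Override
            public void onSensorChanged(SensorEvent event) {
                if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
                if (lastTimestamp != 0) {
                    float dt = (event.timestamp - lastTimestamp) * 1e-9f; // ns -> s
                    for (int i = 0; i < 3; i++) {
                        // v = v0 + a*t; event.values[i] is signed, so velocity can also decrease
                        velocity[i] += event.values[i] * dt;
                    }
                }
                lastTimestamp = event.timestamp;
            }

            @Override
            public void onAccuracyChanged(Sensor sensor, int accuracy) { }
        }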

    Read the article

  • Speed up this query joining to a table multiple times

    - by Mongus Pong
    Hi, I have this query that (stripped right down) goes something like this: SELECT [Person_PrimaryContact].[LegalName], [Person_Manager].[LegalName], [Person_Owner].[LegalName], [Person_ProspectOwner].[LegalName], [Person_ProspectBDM].[LegalName], [Person_ProspectFE].[LegalName], [Person_Signatory].[LegalName] FROM [Cache] LEFT JOIN [dbo].[Person] AS [Person_Owner] WITH (NOLOCK) ON [Person_Owner].[PersonID] = [Cache].[ClientOwnerID] LEFT JOIN [dbo].[Person] AS [Person_Manager] WITH (NOLOCK) ON [Person_Manager].[PersonID] = [Cache].[ClientManagerID] LEFT JOIN [dbo].[Person] AS [Person_Signatory] WITH (NOLOCK) ON [Person_Signatory].[PersonID] = [Cache].[ClientSignatoryID] LEFT JOIN [dbo].[Person] AS [Person_PrimaryContact] WITH (NOLOCK) ON [Person_PrimaryContact].[PersonID] = [Cache].[PrimaryContactID] LEFT JOIN [dbo].[Person] AS [Person_ProspectOwner] WITH (NOLOCK) ON [Person_ProspectOwner].[PersonID] = [Cache].[ProspectOwnerID] LEFT JOIN [dbo].[Person] AS [Person_ProspectBDM] WITH (NOLOCK) ON [Person_ProspectBDM].[PersonID] = [Cache].[ProspectBDMID] LEFT JOIN [dbo].[Person] AS [Person_ProspectFE] WITH (NOLOCK) ON [Person_ProspectFE].[PersonID] = [Cache].[ProspectFEID] Person is a huge table, and each join to it takes a pretty significant hit in the execution plan. Is there any way I can adjust this query so that I only join to it once, or at least get SQL Server to scan through it only once?

    Read the article

  • iPhone application development -- passing data to and from the server

    - by SAPNA
    I have to develop an iPhone application. The user logs in through the iPhone and gets data stored in the database. Our database is created in MySQL, and the website is developed in (classic) ASP; the interface is created with the iPhone SDK. The connection between them is what remains. What should I use for transferring data to and from the server: JSON or SOAP? Is XML parsing necessary? Actually I am very new to this field, so I'm a bit confused. We don't have much time left to complete our application, so I'm in urgent need of help. Thank you in advance.

    Read the article

  • How to improve the speed of a loop containing a sqlalchemy query statement as conditional

    - by LtPinback
    This loop checks whether a record is in the SQLite database, builds a list of dictionaries for the records that are missing, and then executes a multi-row insert statement with that list. This works, but it is very slow (at least I think it is slow), as it takes 5 minutes to loop over 3500 queries. I am a complete newbie in Python, SQLite and SQLAlchemy, so I wonder if there is a faster way of doing this. list_dict = [] session = Session() for data in data_list: if session.query(Class_object).filter(Class_object.column_name_01 == data[2]).filter(Class_object.column_name_00 == an_id).count() == 0: list_dict.append({'column_name_00':a_id, 'column_name_01':data[2]}) conn = engine.connect() conn.execute(prices.insert(),list_dict) conn.close() session.close() Edit: I moved session = Session() outside the loop. It did not make a difference.

    Read the article

  • Are there any FTP programs which can automatically send the contents of a folder to a remote server?

    - by Nick G
    Are there any FTP programs which can automatically copy (or rather 'move') the contents of a folder to a remote server? I have of course googled this but only really found one or two ancient products which look really clunky and unmaintained. I was wondering if there's a way to do this from the command line, or any better solution to the underlying problem. In more detail: new files get written to a folder every few hours. These new files need to be FTP'd elsewhere and then deleted. Mirroring or synchronisation systems are probably out of the picture, as we need to delete the source files once they've been successfully transferred. If it's easier, the 'solution' could pull the files off the server (rather than the server pushing them to the client). Both computers will be running Windows.

    Read the article

  • Good way to send a large file over a network in C#?

    - by BFreeman
    I am trying to build an application that can request files from a service running on another machine on the network. These files can be fairly large (500 MB+ at times). I was looking into sending them via TCP, but I'm worried that it may require the entire file to be stored in memory. There will probably only be one client. Copying to a shared directory isn't acceptable either. The only communication required is for the client to say "gimme xyz" and for the server to send it (plus whatever it takes to ensure this happens correctly). Any suggestions?
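
    The question is about C#, but the key point is language-agnostic: stream the file in fixed-size chunks so neither side ever holds the whole 500 MB in memory. A hedged sketch of the server's "gimme xyz" send loop, written here in Java purely for illustration (port, folder and framing are assumptions):

        import java.io.*;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class FileSendServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(9000)) {
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream()));
                             DataOutputStream out = new DataOutputStream(client.getOutputStream())) {

                            // Client says "gimme xyz": one line naming the file it wants.
                            // (A real server would validate this name against path traversal.)
                            String requested = in.readLine();
                            Path file = Path.of("served-files", requested);

                            // Send the length first so the client knows when it has everything.
                            out.writeLong(Files.size(file));

                            // Stream in 64 KB chunks; memory use stays constant regardless of file size.
                            byte[] buffer = new byte[64 * 1024];
                            try (InputStream fileIn = Files.newInputStream(file)) {
                                int read;
                                while ((read = fileIn.read(buffer)) != -1) {
                                    out.write(buffer, 0, read);
                                }
                            }
                            out.flush();
                        }
                    }
                }
            }
        }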

    Read the article

  • Speed of running a test suite in Rails

    - by Milan Novota
    I have 357 tests (534 assertions) for my app (using Shoulda). The whole test suite runs in around 80 seconds. Is this time OK? I'm just curious, since this is one of my first apps where I write tests extensively. There's no fancy stuff in my app. By the way, I tried using an in-memory sqlite3 database, but the results were surprisingly worse (around 83 seconds). Any clues here? I'm using a MacBook with 2GB of RAM and a 2GHz Intel Core Duo processor as my development machine.

    Read the article

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession/Session and with the JDBC batch size. I have removed any generator classes to let MySQL handle the id generation, but it is still performing quite slowly. Each of those tables is represented by a Java class and mapped in Hibernate accordingly. Currently, when it comes time to write the data out, I loop through the objects and do a session.save(Object), or session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the maximum JDBC batch size (50). Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of saving each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship based on two indexes, run_id (int) and line (int). Another direction would be: how do I get Hibernate to do a multi-row insert?
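
    For reference, a sketch of the flush-and-clear batching pattern described above, wrapped in a single transaction (batch size should match hibernate.jdbc.batch_size in the configuration; with MySQL, adding rewriteBatchedStatements=true to the JDBC URL is what turns those batches into true multi-row inserts). InputLine stands in for one of the mapped entity classes mentioned in the question:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import java.util.List;

        public class BatchInserter {
            private static final int BATCH_SIZE = 50; // keep in sync with hibernate.jdbc.batch_size

            // InputLine is one of the mapped entity classes from the question.
            public void insertAll(SessionFactory factory, List<InputLine> lines) {
                Session session = factory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    int count = 0;
                    for (InputLine line : lines) {
                        session.save(line);
                        if (++count % BATCH_SIZE == 0) {
                            // Push the pending inserts to the database and release the
                            // first-level cache so memory use stays flat.
                            session.flush();
                            session.clear();
                        }
                    }
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }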

    Read the article

  • Speed up CSV export from a MySQL database query when using PHP

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying a query to the database and exporting it to CSV, once the datasets get to around the 30,000-record mark (each row has around 40 columns, of which about 20 are all populated with on average 20 characters of data per cell) it can take 5 or so minutes to export to CSV. So, my question is: what is the main cause of the slowness? Is the result set of data from the query so large that it is running into memory issues? Should I therefore allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not doing? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the result set? Any advice is welcome! Thank you for reading!

    Read the article

  • Difference between macros and functions in C in relation to instruction memory and speed

    - by DAHANS
    To my understanding, the difference between a macro and a function is that a macro call is replaced by the instructions in its definition, while a function does the whole push, branch and pop thing. Is this right, or have I misunderstood something? Additionally, if this is right, it would mean that macros take more space but are faster (because of the lack of the push, branch and pop instructions), wouldn't it?

    Read the article

  • Speed of Synchronization vs Normal

    - by Swaranga Sarma
    I have a class which was written for a single thread, with no methods being synchronized: class MyClass implements MyInterface{ //interface implementation methods, not synchronized } But we also needed a synchronized version of the class. So we made a wrapper class that implements the same interface but has a constructor that takes an instance of MyClass. Any call to the methods of the synchronized class is delegated to the instance of MyClass. Here is my synchronized class: class SynchronizedMyClass implements MyInterface{ //the constructor public SynchronizedMyClass(MyInterface i/*this is actually an instance of MyClass*/) //interface implementation methods; all synchronized; all delegated to the MyInterface instance } After all this I ran numerous test runs with both classes. The tests involve reading log files and counting the URLs in each line. The problem is that the synchronized version of the class consistently takes less time for the parsing. I am using only one thread for the tests, so there is no chance of deadlocks, race conditions, etc. Each log file contains more than 5 million lines, which means the methods are called more than 5 million times. Can anyone explain why the synchronized version of the class might be taking less time than the normal one?
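
    For reference, a stripped-down, self-contained version of the wrapper pattern described above, plus the kind of single-threaded timing loop used in the tests (the interface, the URL-counting method and the iteration counts are illustrative, not the original code):

        interface MyInterface {
            int countUrls(String logLine);
        }

        class MyClassImpl implements MyInterface {
            @Override
            public int countUrls(String logLine) {
                int count = 0, idx = 0;
                while ((idx = logLine.indexOf("http", idx)) != -1) { count++; idx += 4; }
                return count;
            }
        }

        class SynchronizedMyClass implements MyInterface {
            private final MyInterface delegate;

            SynchronizedMyClass(MyInterface delegate) { this.delegate = delegate; }

            @Override
            public synchronized int countUrls(String logLine) {
                return delegate.countUrls(logLine); // same work, plus lock acquisition
            }
        }

        public class TimingHarness {
            public static void main(String[] args) {
                MyInterface plain = new MyClassImpl();
                MyInterface synced = new SynchronizedMyClass(new MyClassImpl());
                String line = "GET http://example.com/a http://example.com/b HTTP/1.1";

                long total = 0;
                long t0 = System.nanoTime();
                for (int i = 0; i < 5_000_000; i++) total += plain.countUrls(line);
                long t1 = System.nanoTime();
                for (int i = 0; i < 5_000_000; i++) total += synced.countUrls(line);
                long t2 = System.nanoTime();

                System.out.println("plain: " + (t1 - t0) + " ns, synchronized: " + (t2 - t1) + " ns");
                System.out.println("(total = " + total + ")"); // keep the JIT from discarding the loops
            }
        }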

    Read the article
