Search Results

Search found 21644 results on 866 pages for 'connection speed'.


  • How to speed up loading the splash screen.

    - by AngryHacker
    I am optimizing the startup of a WinForms app. One issue I identified is the loading of the splash screen form, which takes about half a second to a second. I know that multi-threading is a no-no for UI pieces; however, seeing how the splash screen is a fairly autonomous piece of the application, is it possible to somehow mitigate its performance hit by moving it onto some other thread (perhaps in the way Chrome does it), so that the important pieces of the application can actually get going?
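
    A minimal sketch of the usual pattern: give the splash form its own thread with its own message pump, so the main thread can keep initializing. SplashForm is a hypothetical stand-in for the app's splash form, and whether this actually wins back the half second is an assumption to be measured, not a given.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class SplashRunner
        {
            static SplashForm splash;  // SplashForm is a hypothetical Form subclass

            public static void Show()
            {
                var thread = new Thread(() =>
                {
                    splash = new SplashForm();
                    Application.Run(splash); // second message loop, owned by this thread
                });
                thread.SetApartmentState(ApartmentState.STA); // WinForms requires STA
                thread.IsBackground = true;                   // dies with the process
                thread.Start();
            }

            public static void Close()
            {
                // Marshal the close back onto the splash screen's own thread.
                splash?.Invoke(new Action(() => splash.Close()));
            }
        }

    The main form never touches the splash form directly; the only cross-thread call is the marshalled Close.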


  • Cocoa Read NSInputStream from FTP connection

    - by Chuck
    Hi, I (apparently) manage to make an FTP connection, but fail to read anything from it, and with good cause: I don't reach the read until the connection has timed out. Here's my code:

        NSHost *host = [NSHost hostWithAddress:@"127.0.0.1"];
        [NSStream getStreamsToHost:host port:3333 inputStream:&iStream outputStream:&oStream];

        NSMutableDictionary *settings = [NSMutableDictionary dictionaryWithCapacity:1];
        [settings setObject:(NSString *)NSStreamSocketSecurityLevelTLSv1 forKey:(NSString *)kCFStreamSSLLevel];
        [settings setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCFStreamSSLAllowsAnyRoot];

        [iStream retain];
        [iStream setDelegate:self];
        [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        CFReadStreamSetProperty((CFReadStreamRef)iStream, kCFStreamPropertySSLSettings, (CFTypeRef)settings);
        [iStream open];

        [oStream retain];
        [oStream setDelegate:self];
        [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        CFWriteStreamSetProperty((CFWriteStreamRef)oStream, kCFStreamPropertySSLSettings, (CFTypeRef)settings);
        [oStream open];

        NSMutableData *returnMessage = [NSMutableData dataWithLength:300];
        [iStream read:[returnMessage mutableBytes] maxLength:300];
        NSString *readData = [[NSString alloc] initWithBytes:[returnMessage bytes] length:300 encoding:NSUTF8StringEncoding];
        NSRunAlertPanel(@"response", readData, nil, nil, nil);

    I have not sent a request to the FTP server to switch to SSL yet. Any help is greatly appreciated, as I find Xcode quite horrible for debugging (no exception or error message on failed steps whatsoever). Chuck


  • How to speed-up Eclipse startup?

    - by Ivan
    I've installed the Eclipse Modeling Framework (eclipse-modeling-galileo-SR1-incubation-linux-gtk.tar.gz) by extracting the 'features' and 'plugins' folders of the package to '~/.eclipse/org.eclipse.platform_3.5.0_1543616141'. Now the Eclipse splash screen is shown for many minutes (the whole system, including BIOS, Linux, and KDE, takes much less time to start). I don't need the Eclipse Modeling Framework to be loaded at startup; I only need it when I start a modeling project. How do I set up Eclipse to start faster and not load all the plugins at startup time? My system runs Arch Linux 2010.05, OpenJDK 6.0, Eclipse 3.5.2.


  • Trying to speed up a SQLITE UNION QUERY

    - by user142683
    I have the below SQLite code:

        SELECT x.t,
               CASE WHEN S.Status='A' AND M.Nomorebets=0
                    THEN S.PriceText
                    ELSE '-'
               END AS Show_Price
        FROM sb_Market M
        LEFT OUTER JOIN (SELECT 2010 t UNION SELECT 2020 t UNION SELECT 2030 t
                         UNION SELECT 2040 t UNION SELECT 2050 t
                         UNION SELECT 2060 t UNION SELECT 2070 t) AS x
        LEFT OUTER JOIN sb_Selection S
               ON S.MeetingId=M.MeetingId
              AND S.EventId=M.EventId
              AND S.MarketId=M.MarketId
              AND x.t=S.team
        WHERE M.meetingid=8051
          AND M.eventid=3
          AND M.Name='Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that if one selection is missing, a '-' appears. The feed would be something like the following:

        SelectionId  Name    Team  Status  PriceText
        =============================================
        1            Barney  2010  A       10
        2            Jim     2020  A       5
        3            Matt    2030  A       6
        4            John    2040  A       8
        5            Paul    2050  A       15/2
        6            Frank   2060  S       10/11
        7            Tom     2070  A       15

    Is the above SQL the quickest and most efficient approach? Please advise of anything that could help. Messages with updates would be preferable.


  • Improve DrawingVisual rendering speed

    - by Michael Hao
    I created my own FrameworkElement, overriding VisualChildrenCount {get;} and GetVisualChild(int index) to return my own DrawingVisual collection instance, and I have also overridden OnRender. I will add 20-50 DrawingVisuals to this FrameworkElement, and every DrawingVisual will have 2000 line segments. The logical values of these points are between 0 and 60000. When I zoom in to 1:1, the FrameworkElement's Height will be 60000 and the rendering time is 15 minutes! How do I improve the rendering performance?
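
    One change that often helps in this situation (an assumption, since the posted description doesn't include the drawing code) is to batch each visual's 2000 segments into a single frozen StreamGeometry drawn with one DrawGeometry call, instead of thousands of individual segments. A sketch:

        using System.Linq;
        using System.Windows;
        using System.Windows.Media;

        static class ChartRendering
        {
            // Batch one series' points into a single frozen StreamGeometry.
            // Assumes the series has at least two points.
            public static DrawingVisual BuildVisual(Point[] points)
            {
                var geometry = new StreamGeometry();
                using (StreamGeometryContext ctx = geometry.Open())
                {
                    ctx.BeginFigure(points[0], isFilled: false, isClosed: false);
                    ctx.PolyLineTo(points.Skip(1).ToList(), isStroked: true, isSmoothJoin: false);
                }
                geometry.Freeze(); // frozen Freezables skip change tracking during render

                var visual = new DrawingVisual();
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawGeometry(null, new Pen(Brushes.SteelBlue, 1.0), geometry);
                }
                return visual;
            }
        }

    The render thread then composites 20-50 retained geometries rather than tessellating tens of thousands of segment primitives per pass.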


  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config: indexes: no; fsync: false; logging: disabled. Other information: tablespace: RAM; number of columns in one row: 25 (mostly integers); CPU: 4 cores, 2.5 GHz; RAM: 48 GB. I am wondering why a single insert query takes around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
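
    If the client happens to be .NET (the question doesn't say), one commonly suggested alternative to large multi-row INSERT batches is PostgreSQL's COPY protocol, which bypasses per-statement overhead. A hedged sketch using the Npgsql driver; the connection string, table, and column layout are assumptions, not details from the question:

        using Npgsql;

        using var conn = new NpgsqlConnection(
            "Host=localhost;Database=metrics;Username=app;Password=secret");
        conn.Open();

        // Stream 100,000 rows through a single binary COPY operation.
        using (var writer = conn.BeginBinaryImport(
            "COPY samples (sensor_id, reading) FROM STDIN (FORMAT BINARY)"))
        {
            for (int i = 0; i < 100_000; i++)
            {
                writer.StartRow();
                writer.Write(i);     // sensor_id (integer)
                writer.Write(42.0);  // reading (double precision)
            }
            writer.Complete();       // commits the COPY; disposing without this rolls back
        }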


  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS Cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string. I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, and pull that data into a database on the organization's server, and then rebuild the cube. Then that cube would be queried without having to go and connect to the other organization server when viewing the cube. Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info, still unclear on how some of this works.


  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys of the form X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights


  • C# Lambda Expression Speed

    - by Nathan
    I have not used many lambda expressions before, and I ran into a case where I thought I could make slick use of one. I have a custom list of ~19,000 records and I need to find out if a record exists or not in the list, so instead of writing a bunch of loops or using LINQ to go through the list I decided to try this:

        for (int i = MinX; i <= MaxX; ++i)
        {
            tempY = MinY;
            while (tempY <= MaxY)
            {
                bool exists = myList.Exists(item => item.XCoord == i && item.YCoord == tempY);
                ++tempY;
            }
        }

    The only problem is that it takes ~9-11 seconds to execute. Am I doing something wrong, or is this just a case where I shouldn't be using an expression like this? Thanks.
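
    List.Exists is a linear scan, so the loop above does a full pass over ~19,000 records for every grid cell. A sketch of one common fix: build a HashSet of coordinate pairs once, then probe it in constant time. The Record type and property names mirror the question; the sample data and bounds are assumptions.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Record { public int XCoord; public int YCoord; }

        class Program
        {
            static void Main()
            {
                var myList = new List<Record> { new Record { XCoord = 1, YCoord = 2 } };
                int MinX = 0, MaxX = 100, MinY = 0, MaxY = 100;

                // Build the lookup once: O(n).
                var occupied = new HashSet<(int, int)>(
                    myList.Select(item => (item.XCoord, item.YCoord)));

                for (int i = MinX; i <= MaxX; ++i)
                    for (int tempY = MinY; tempY <= MaxY; ++tempY)
                    {
                        bool exists = occupied.Contains((i, tempY)); // O(1) probe
                        if (exists) Console.WriteLine($"({i},{tempY}) exists");
                    }
            }
        }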


  • Speed up compilation with mockito on Android

    - by pbreault
    I am currently developing an Android app in Eclipse using one project for the app and one project for the tests (instrumentation and POJO tests). In the test project, I am importing the Mockito library for standard POJO testing. However, when I import the library, the compilation time skyrockets from 1 second to about 30 seconds in Eclipse. The cause seems to be that the whole library is converted each time. So basically, each time I make a modification that I want to test, I have to wait 30 seconds. The only workarounds that I have found so far would be: (1) disable "Build Automatically"; (2) create a project that includes only POJO tests and put Mockito only there; (3) use another library that compiles faster (e.g. EasyMock). Any other suggestion?


  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
        SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        ) foo

        UNION

        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN (
            SELECT user_id FROM (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                        AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
        -------------------------------
        user_id  | f_name   | l_name
        ---------+----------+----------
        A1       | Adam     | Acorn
        A2       | Beth     | Berry
        A3       | Calv     | Chard

        import_tbl:
        -------------------------------
        user_id  | f_name   | l_name
        ---------+----------+----------
        A1       | Adam     | Acorn
        A2       | Beth     | Butcher   <- last name different

    Expected output:

        user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ---------+---------+---------+----------+---------+---------
        A1       | Adam    | Acorn   | A1       | Adam    | Acorn
        A2       | Beth    | Berry   | A2       | Beth    | Butcher
        A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of conditions where the row would be 'A2 | Beth | Berry | A2 | Beth | Butcher', but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation question from my other question. The succession of these improvements has dropped the query from ~32000 ms down to ~1200 ms now, quite an improvement. I suspect I can optimize by using UNION ALL in the subquery, and of course the usual index optimizations, but I'm looking for the best SQL optimization. FYI, this particular case is for PostgreSQL.


  • best way to switch between secure and unsecure connection without bugging the user

    - by Brian Lang
    The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it to a database, and, if successful, gives a confirmation message. Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to ensure no eavesdropping. After that is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time reservation application). The problem is that by using the Google Maps API I'd be linking to items through an unsecure source, which in turn prompts the user with a nasty warning message. I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year. What I am hoping is to find a workaround: use a secure connection to take in the data, and after it is processed, bring the user to a page that isn't secure and allows me to utilize the Google Maps API. If any of you have a Netflix account you can see exactly what I would like to do: when you sign in, it is a secure page, which then takes you to your account/queue on an unsecure page. Any suggestions? Thanks!


  • Speed up SQL Server Fulltext Index through Text Duplication of Non-Indexed Columns

    - by Alex
    1) I have the text fields FirstName, LastName, and City. They are fulltext indexed. 2) I also have the FK int fields AuthorId and EditorId, which are not fulltext indexed. A search on FirstName = 'abc' AND AuthorId = 1 will first search the entire fulltext index for 'abc', and then narrow the resultset for AuthorId = 1. This is bad because it is a huge waste of resources: the fulltext search will be performed on many records that won't be applicable. Unfortunately, to my knowledge, this can't be turned around (narrow by AuthorId first and then fulltext-search the matching subset) because the FTS process is separate from SQL Server. Now my proposed solution, on which I seek feedback: does it make sense to create another computed column, included in the fulltext index, which identifies the author as text (e.g. AUTHORONE)? That way I could get rid of the AuthorId restriction and instead make it part of my fulltext search (a search for 'abc' would become 'abc' AND 'AUTHORONE', all executed as part of the fulltext search). Is this a good idea or not? Why?


  • Improving the speed of php

    - by cast01
    I'm currently working on a website in PHP, and I'm wondering what the best practices/methods are to reduce the time requests take. I've built the site in a modular way, so a page consists of a number of modules, and each of these needs to request information. For example, I have a cart module that (if a cart is set) will fetch the cart with the id (stored in a session variable) from the database and return its contents. I have another module that lists categories, and this needs to fetch the categories from the database. My system is built with models, and each model might also make a request; for example, a category model will make a request to get the products in that category.


  • ColdFusion speed cost of FileExists

    - by davidosomething
    I want to, on every page: check if a file exists, and include that file if it does. i.e.:

        <cfset variables.includes.header = ExpandPath("_inc_header.cfm")>
        <cfif FileExists(variables.includes.header)>
            <cfinclude template="#variables.includes.header#">
        </cfif>

    Is this a good idea?


  • IPC speed and compare

    - by Lily
    I am trying to implement a real-time application which involves IPC across different modules. The modules are doing some data-intensive processing. I am using a message queue (ActiveMQ) as the backbone for IPC in the prototype, which is easy (considering I am a total IPC newbie), but it's very, very slow. Here is my situation: (1) I have isolated the IPC part so that I can change it to something else in future; (2) I have 3 weeks to implement another, faster version ;-( ; (3) the IPC should be fast, but also comparatively easy to pick up. I have been looking into different IPC approaches: sockets, pipes, shared memory. However, I have no experience in IPC, and there is definitely no way I could fail this demo in 3 weeks... Which IPC will be the safe way to start with? Thanks. Lily
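
    For illustration only, and assuming .NET on both ends (which the question doesn't state): a named pipe is one of the lower-latency options listed above, and a round trip fits in a few lines. Both ends are shown in one process just to keep the sketch self-contained.

        using System;
        using System.IO.Pipes;
        using System.Text;
        using System.Threading.Tasks;

        class PipeDemo
        {
            static async Task Main()
            {
                using var server = new NamedPipeServerStream("demo_pipe");
                using var client = new NamedPipeClientStream(".", "demo_pipe");

                Task serverReady = server.WaitForConnectionAsync();
                await client.ConnectAsync();
                await serverReady;

                // Client sends a small real-time message; server reads it.
                byte[] payload = Encoding.UTF8.GetBytes("tick");
                await client.WriteAsync(payload);

                byte[] buffer = new byte[16];
                int read = await server.ReadAsync(buffer);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read)); // prints "tick"
            }
        }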


  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?


  • Altruistic network connection bandwidth estimation

    - by datenwolf
    Assume two peers, Alice and Bob, connected over an IP network. Alice and Bob are exchanging packets of lossily compressed data which are generated and consumed in real time (think a VoIP or video chat application). The service is designed to cope with as little bandwidth as is available, but relies on low latencies. Alice and Bob would mark their connection with an appropriate QoS profile. Alice and Bob want to use variable-bitrate compression and would like to consume all of the leftover bandwidth available on the connection between them, but would voluntarily reduce the consumed bitrate depending on the state of the network. However, they'd like to retain a stable link, i.e. avoid interruptions in their decoded data stream caused by congestion and the delay until the bandwidth gets adjusted. It is, however, perfectly acceptable for them to lose a few packets. TL;DR: Alice and Bob want to implement a VoIP protocol from scratch, and are curious about bandwidth and congestion control. What papers and resources do you suggest for Alice and Bob to read? Mainly in the area of bandwidth estimation and congestion control.


  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and run SELECT * FROM <table> WITH (INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so it is not an issue if such a process, until it completes, slows users down more than an unprimed working set would.
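
    The "hard way" described above is mechanical enough to script. A hedged sketch in C#, assuming the Microsoft.Data.SqlClient package and the modern sys.* catalog views (the connection string and database name are made up): enumerate every named index, then force a scan of each so its pages land in the buffer pool.

        using System.Collections.Generic;
        using Microsoft.Data.SqlClient;

        using var conn = new SqlConnection(
            "Server=.;Database=AppDb;Integrated Security=true;TrustServerCertificate=true");
        conn.Open();

        // Collect (table, index) pairs first so only one reader is open at a time.
        var targets = new List<(string Table, string Index)>();
        using (var cmd = new SqlCommand(@"
                SELECT t.name, i.name
                FROM sys.indexes i
                JOIN sys.tables t ON t.object_id = i.object_id
                WHERE i.name IS NOT NULL", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                targets.Add((reader.GetString(0), reader.GetString(1)));
        }

        foreach (var (table, index) in targets)
        {
            // COUNT(*) with an index hint touches every page of that index.
            using var scan = new SqlCommand(
                $"SELECT COUNT(*) FROM [{table}] WITH (INDEX([{index}]))", conn);
            scan.ExecuteScalar();
        }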


  • Asp.Net Login Control very slow initial connection to Non-Trusted AD Domain

    - by Eric Brown - Cal
    The ASP.NET Login control is very slow making the initial connection to AD when authenticating against a different domain than the one the web server is a member of. The problem occurs on the IIS server and when using Visual Studio's built-in web server. It takes about 30 seconds the first time the control is used to connect against another domain. There is no trust relationship between the web server's domain and the other domains (we attempted connecting to several different domains). Subsequent connections execute quickly until the connection times out. Using Sysinternals Process Monitor to troubleshoot, there are two OpenQuery operations right before the delay to "C:\WINDOWS\assembly\GAC_MSIL\System.DirectoryServices\2.0.0.0_b03f5f7f11d50a3a\Netapi32.dll" with the result NAME NOT FOUND, and right after the 30-second delay the TCP Send and TCP Receive operations indicate communication begins with the AD server. Things we have tried: impersonating an administrator on the web server in the web.config; granting permissions on the crypto keys to NetworkService and ASPNET; specifying by IP instead of DNS name; multiple variations of specifying the name and LDAP server with domains and OUs; local host entries; looking for blocked ports (SYN_SENT) with netstat -an. Nslookup resolves all the domains and systems involved correctly. TraceRt shows the correct routes. Any ideas or hints are greatly appreciated.


  • Threading cost - minimum execution time when threads would add speed

    - by Lukas
    I am working on a C# application that works with an array. It walks through it (meaning that at any one time only a narrow part of the array is used). I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if that would actually help, because threads cost something, and this cost could easily be more than the parallel gain... So how do I determine whether threading would help?
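
    The honest answer is usually to measure on the target machine. A minimal sketch comparing a sequential pass with Parallel.For; the Math.Sqrt workload is a stand-in assumption, and the real array walk would go in its place:

        using System;
        using System.Diagnostics;
        using System.Threading.Tasks;

        double[] data = new double[10_000_000];

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < data.Length; i++)
            data[i] = Math.Sqrt(i);
        sw.Stop();
        Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        Parallel.For(0, data.Length, i => data[i] = Math.Sqrt(i));
        sw.Stop();
        Console.WriteLine($"parallel:   {sw.ElapsedMilliseconds} ms");

    If the parallel run doesn't win by a clear margin on the dual-core box, the per-task overhead is eating the gain and the code is better left single-threaded.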


  • GPIB connection to external device using MATLAB

    - by hkf
    Is there a way to establish a GPIB connection using MATLAB without the Instrument Control Toolbox? (I don't have it.) Also, is there a way for MATLAB to know what the external device's RS-232 parameter values are (baud rate, stop bits, etc.)? For the RS-232 connection I have the following code:

        % This function is meant to send commands to Potentiostat Model 263A.
        % A run includes turning the cell on, reading current for time t1,
        % turning the cell off, and waiting for time t2.
        % t1 is the duration [secs] for which the Potentiostat must run (cell is on)
        % t2 is the duration [secs] to wait after off before the next run
        % n is the number of runs
        % port is the serial port name such as COM1
        function [s] = Potentiostat_control(t1,t2,n)
        port = input('type port name such as COM1', 's')
        s = serial(port);
        set(s,'BaudRate', 9600, 'DataBits', 8, 'Parity', 'even', 'StopBits', 2, 'Terminator', 'CR/LF');
        fopen(s)
        %fprintf(s,'RS232?')
        disp(['Total runs requested = ' num2str(n)])
        disp('i denotes number of runs executed so far..');
        for i=1:n
            i
            %data1 = query(s, '*IDN?')
            fprintf(s,'%s','CELL 1');   % sends the command 'CELL 1'
            %fprintf(s,'%s','READI');
            pause(t1);
            fprintf(s,'%s','CELL 0');
            %fprintf(s,'%s','CLEAR');
            pause(t2);
        end
        fclose(s)

