Search Results

Search found 6363 results on 255 pages for 'buford speed'.


  • Speeding up a SOAP-powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP web service. But our servers are located in Belgium and the web service we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP-powered, using PHP's built-in SoapClient class. On average a call to the web service takes 0.7 seconds and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Does anyone have ideas or suggestions?

    private function _createClient()
    {
        try {
            $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
            $client = new SoapClient($wsdl, array(
                'soap_version'       => SOAP_1_1,
                'encoding'           => 'utf-8',
                'connection_timeout' => 5,
                'cache_wsdl'         => 1,
                'trace'              => 1,
                'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
            ));
            $header_tags = array(
                'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
            );
            $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
            $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
            $client->__setSoapHeaders($header);
        } catch (SoapFault $e) {
            controller('Error')->error($id.': Webservice connection error '.$e->getCode());
            exit;
        }
        $this->client = $client;
        return $this->client;
    }
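
    For reference, a minimal sketch of SoapClient options that are sometimes used to cut per-request overhead on high-latency links; the endpoint URL is a placeholder, the 'keep_alive' flag needs PHP 5.4+, and whether any of this actually helps would have to be measured against the 0.7 s baseline above.

    <?php
    // Sketch only: reuse one SoapClient instance for all 3-5 calls on a page,
    // cache the parsed WSDL, accept gzip-compressed responses, and keep the
    // TCP connection open so follow-up calls skip the transatlantic handshake.
    $wsdl = 'https://ws.example.com/service?wsdl';   // placeholder endpoint
    $client = new SoapClient($wsdl, array(
        'soap_version'       => SOAP_1_1,
        'encoding'           => 'utf-8',
        'connection_timeout' => 5,
        'cache_wsdl'         => WSDL_CACHE_BOTH,     // disk + in-process WSDL cache
        'compression'        => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
        'keep_alive'         => true,                // PHP >= 5.4
        'features'           => SOAP_SINGLE_ELEMENT_ARRAYS,
    ));

    Note also that leaving 'trace' enabled keeps a copy of every request and response for debugging, so it is usually switched off in production.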

    Read the article

  • USB Drive Not recognized

    - by user36582
    My friend's pen drive, which was working very well just a few days ago, is not being recognized after being used on a virus-affected machine. It does not show up in fdisk -l or lsusb. However, in dmesg I can see the following:

    [ 977.300013] usb 5-2: new full speed USB device using uhci_hcd and address 2
    [ 977.420014] usb 5-2: device descriptor read/64, error -71
    [ 977.644023] usb 5-2: device descriptor read/64, error -71
    [ 977.860013] usb 5-2: new full speed USB device using uhci_hcd and address 3
    [ 977.980013] usb 5-2: device descriptor read/64, error -71
    [ 978.204013] usb 5-2: device descriptor read/64, error -71
    [ 978.420013] usb 5-2: new full speed USB device using uhci_hcd and address 4
    [ 978.828015] usb 5-2: device not accepting address 4, error -71
    [ 978.940015] usb 5-2: new full speed USB device using uhci_hcd and address 5
    [ 979.348013] usb 5-2: device not accepting address 5, error -71
    [ 979.348292] hub 5-0:1.0: unable to enumerate USB device on port 2
    [ 1017.848015] usb 5-2: new full speed USB device using uhci_hcd and address 6
    [ 1017.968012] usb 5-2: device descriptor read/64, error -71
    [ 1018.192017] usb 5-2: device descriptor read/64, error -71
    [ 1018.408014] usb 5-2: new full speed USB device using uhci_hcd and address 7
    [ 1018.528012] usb 5-2: device descriptor read/64, error -71
    [ 1018.752023] usb 5-2: device descriptor read/64, error -71
    [ 1018.968012] usb 5-2: new full speed USB device using uhci_hcd and address 8
    [ 1019.376019] usb 5-2: device not accepting address 8, error -71
    [ 1019.488011] usb 5-2: new full speed USB device using uhci_hcd and address 9
    [ 1019.896016] usb 5-2: device not accepting address 9, error -71
    [ 1019.896308] hub 5-0:1.0: unable to enumerate USB device on port 2
    [ 1049.984016] usb 5-1: new full speed USB device using uhci_hcd and address 10
    [ 1050.104014] usb 5-1: device descriptor read/64, error -71
    [ 1050.328014] usb 5-1: device descriptor read/64, error -71
    [ 1050.544014] usb 5-1: new full speed USB device using uhci_hcd and address 11
    [ 1050.664018] usb 5-1: device descriptor read/64, error -71
    [ 1050.888019] usb 5-1: device descriptor read/64, error -71
    [ 1051.104025] usb 5-1: new full speed USB device using uhci_hcd and address 12
    [ 1051.512014] usb 5-1: device not accepting address 12, error -71
    [ 1051.624101] usb 5-1: new full speed USB device using uhci_hcd and address 13
    [ 1052.032014] usb 5-1: device not accepting address 13, error -71
    [ 1052.032991] hub 5-0:1.0: unable to enumerate USB device on port 1

    What do these errors actually mean, and how can I get this pen drive back to work?

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. In the end we are copying only about 3%-20% of databases which are as large as 500 GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups, etc. (it actually helped find some bad stored procs). Overall I have a working script which is lacking in performance (and at times times out), and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6 GB) it reaches my timeout period of 1 hr. Here is the core code for copying table data. The script is written in PowerShell.

    $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
    Write-LogOutput "Copying $selectedTable : '$query'"
    $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
    $cmd.CommandTimeout = 120;
    $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
    $bulkData.DestinationTableName = $selectedTable;
    $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
    $reader = $cmd.ExecuteReader();
    $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Has anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also, some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will do next is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out.
    My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?

    Read the article

  • How to speed up this code?

    - by kaushik
    I have a very large program, 600+ lines, so I can't post the whole thing here, but a particular code snippet is taking so much time that it is leading to problems. I'm posting that part of the code below; please tell me how to speed up the processing, and please suggest which parts may be the cause and how to improve them, if this small part of the code is understandable.

    using_data={}

    def join_cost(a , b):
        global using_data
        #print a
        #print b
        save_a=[]
        save_b=[]
        print 1
        #for i in range(len(m)):
        #    if str(m[i][0])==str(a):
        save_a=database_index[a]
        #for i in range(len(m)):
        #    if str(m[i][0])==str(b):
        #print 'save_a',save_a
        #print 'save_b',save_b
        print 2
        save_b=database_index[b]
        using_data[save_a[0]]=save_a
        s=str(save_a[1]).replace('phone','text')
        s=str(s)+'.pm'
        p=os.path.join("c:/begpython/wavnk/",s)
        x=open(p , 'r')
        print 3
        for i in range(6):
            x.readline()
        k2='a'
        j=0
        o=[]
        while k2 is not '':
            k2=x.readline()
            k2=k2.rstrip('\n')
            oj=k2.split(' ')
            o=o+[oj]
            #print o[j]
            j=j+1
        #print j
        #print o[2][0]
        temp=long(1232332)
        end_time=save_a[4]
        #print end_time
        k=(j-1)
        for i in range(k):
            diff=float(o[i][0])-float(end_time)
            if diff<0:
                diff=diff*(-1)
            if temp>diff:
                temp=diff
                pm_row=i
        #print pm_row
        #print temp
        #print o[pm_row]
        #pm_row=3
        q=[]
        print 4
        l=str(p).replace('.pm','.mcep')
        z=open(l ,'r')
        for i in range(pm_row):
            z.readline()
        k3=z.readline()
        k3=k3.rstrip('\n')
        q=k3.split(' ')
        #print q
        print 5
        s=str(save_b[1]).replace('phone','text')
        s=str(s)+'.pm'
        p=os.path.join("c:/begpython/wavnk/",s)
        x=open(p , 'r')
        for i in range(6):
            x.readline()
        k2='a'
        j=0
        o=[]
        while k2 is not '':
            k2=x.readline()
            k2=k2.rstrip('\n')
            oj=k2.split(' ')
            o=o+[oj]
            #print o[j]
            j=j+1
        #print j
        #print o[2][0]
        temp=long(1232332)
        strt_time=save_b[3]
        #print strt_time
        k=(j-1)
        for i in range(k):
            diff=float(o[i][0])-float(strt_time)
            if diff<0:
                diff=diff*(-1)
            if temp>diff:
                temp=diff
                pm_row=i
        #print pm_row
        #print temp
        #print o[pm_row]
        #pm_row=3
        w=[]
        l=str(p).replace('.pm','.mcep')
        z=open(l ,'r')
        for i in range(pm_row):
            z.readline()
        k3=z.readline()
        k3=k3.rstrip('\n')
        w=k3.split(' ')
        #print w
        cost=0
        for i in range(12):
            #print q[i]
            #print w[i]
            h=float(q[i])-float(w[i])
            cost=cost+math.pow(h,2)
        j_cost=math.sqrt(cost)
        #print cost
        return j_cost

    def target_cost(a , b):
        a=(b+1)*3
        b=(a+1)*2
        t_cost=(a+b)*5/2
        return t_cost

    r1='shht:ra_77'
    r2='grx_18'
    g=[]
    nodes=[]
    nodes=nodes+[[r1]]
    for i in range(len(y_in_db_format)):
        g=y_in_db_format[i]
        #print g
        #print g[0]
        g.remove(str(g[0]))
        nodes=nodes+[g]
    nodes=nodes+[[r2]]
    print nodes
    print "lenght of nodes",len(nodes)
    lists=[]
    #lists=lists+[r1]
    for i in range(len(nodes)):
        for j in range(len(nodes[i])):
            lists=lists+[nodes[i][j]]
    #lists=lists+[r2]
    print lists
    distance={}
    for i in range(len(lists)):
        if i==0:
            distance[str(lists[i])]=0
        else:
            distance[str(lists[i])]=long(123231223)
    #print distance
    group_dist=[]
    infinity=long(123232323)
    for i in range(len(nodes)):
        distances=[]
        for j in range(len(nodes[i])):
            #distances=[]
            if i==0:
                distances=distances+[[nodes[i][j], 0]]
            else:
                distances=distances+[[nodes[i][j],infinity]]
        group_dist=group_dist+[distances]
    #print distances
    print "group_distances",group_dist
    #print "check",group_dist[0][0][1]
    #costs={}
    #for i in range(len(lists)):
    #    if i==0:
    #        costs[str(lists[i])]=1
    #    else:
    #        costs[str(lists[i])]=get_selfcost(lists[i])
    path=[]
    for i in range(len(nodes)):
        mini=[]
        if i!=(len(nodes)-1):
            #temp=long(123234324)
            #Now calculate the cost between the current node and each of its neighbour
            for k in range(len(nodes[(i+1)])):
                for j in range(len(nodes[i])):
                    current=nodes[i][j]
                    #print "current_node",current
                    j_distance=join_cost( current , nodes[i+1][k])
                    #t_distance=target_cost( current , nodes[i+1][k])
                    t_distance=34
                    #print distance
                    #print "distance between current and neighbours",distance
                    total_distance=(.5*(float(group_dist[i][j][1])+float(j_distance))+.5*(float(t_distance)))
                    #print "total distance between the intial_nodes and current neighbour",total_distance
                    if int(group_dist[i+1][k][1]) > int(total_distance):
                        group_dist[i+1][k][1]=total_distance
                        #print "updated distance",group_dist[i+1][k][1]
                        a=current
                        #print "the neighbour",nodes[i+1][k],"updated the value",a
                        mini=mini+[[str(nodes[i+1][k]),a]]
            print mini

    Read the article

  • ACPI Throttling in Ubuntu

    - by Evan
    I'm looking to throttle my CPU through ACPI. I've read up on it, but I keep receiving permission-denied errors. I have 8 available throttling states. Here are the outcomes of my attempts:

    evan@evan-laptop:/proc/acpi/processor/CPU0$ echo 3 > /proc/acpi/processor/CPU0/throttling
    bash: /proc/acpi/processor/CPU0/throttling: Permission denied
    evan@evan-laptop:/proc/acpi/processor/CPU0$ sudo echo 3 > /proc/acpi/processor/CPU0/throttling
    bash: /proc/acpi/processor/CPU0/throttling: Permission denied

    EDIT: For reference, I am running Ubuntu Karmic on an Intel Core Duo T2500 with ACPI enabled.

    Read the article

  • Which Linux is the most efficient?

    - by quandary
    Simple question: there are a gazillion Linux distributions out there. Which one (distribution plus window manager) makes technically the most efficient use of my aging computer? I have approx. 1 GB RAM, a 1.6 GHz processor, and a 120 GB hard drive. I develop applications (C++/.NET/Mono/ASP/PostgreSQL). Usually I prefer distros with apt-get. Does anybody know which one takes the most care of my limited RAM, and which one is the fastest and slimmest of them all while still having a decent repo?

    Read the article

  • Blu-ray drives: 2x vs 4x vs 6x vs 8x read/write speed

    - by Wesley
    Hi all, I couldn't find a duplicate question, but I was wondering what the differences are between the various read/write speeds for Blu-ray drives. I'm planning on buying one for a build but don't know whether I can cheap out and get a 2x Blu-ray drive or should spend more money on a quality 8x drive. Will I just experience more lag/buffering with Blu-ray discs on a 2x drive and none on a 6x or 8x? Thanks in advance.

    Read the article

  • Our Wi-Fi at work is ridiculously slow; will adding more range extenders improve it?

    - by john
    At work, we have two wireless networks (e.g., Work1 and Work2); Work2 is used downstairs and Work1 is used upstairs. However, both are notoriously slow. The connection is better when we are wired in, but unfortunately, because our building is very old and our company is growing very fast, most employees are not seated near the walls where the Ethernet cables are. I had Cox, our ISP, run a bandwidth utilization test, and it doesn't seem like we are capping out on upstream/downstream, which leads me to believe that it's strictly an issue with the wireless networks (which were implemented before I got there). The wireless access points are both Apple AirPort Extremes. Is there anything I can do to improve the situation for everyone? Speeds are extremely slow, and the connection sometimes drops out.

    Read the article

  • Do different operating systems have different read and write speeds?

    - by Ivan
    If I have two different operating systems, such as Windows 8 and Ubuntu, running on the same hardware, will the two operating systems have different read and write speeds? My guess is that there would be minimal difference between operating systems in read and write speeds to the hard disk, since the major limiting factor is seeking; however, different operating systems may use different file systems in an attempt to reduce seek time on the hard disk. Likewise, I'm sure that modern operating systems will not actually write directly to the hard disk, and instead will just keep the data in memory marked with a dirty bit. Are there any studies that show differences in read and write speeds between OSs? Or would the file system being used by the OS matter more than the OS itself?
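
    Short of a formal study, a rough do-it-yourself comparison is to run the same small timing script under both operating systems. The sketch below is only illustrative (PHP is used simply because both platforms can run it from the command line, and the file name and sizes are arbitrary); without an explicit sync it largely measures the write-back caching described above rather than the raw disk.

    <?php
    // bench.php -- crude sequential-write timing sketch (illustrative only).
    $chunk  = str_repeat('x', 1 << 20);          // 1 MiB buffer
    $target = 'bench.tmp';                       // placeholder file name
    $fh     = fopen($target, 'wb');
    $start  = microtime(true);
    for ($i = 0; $i < 512; $i++) {               // write 512 MiB in total
        fwrite($fh, $chunk);
    }
    fflush($fh);                                 // flush PHP's buffers to the OS
    fclose($fh);
    $elapsed = microtime(true) - $start;
    printf("wrote 512 MiB in %.2f s (%.1f MB/s)\n", $elapsed, 512 / $elapsed);
    unlink($target);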

    Read the article

  • Why does my Internet slow to a crawl unless I reboot my router every few days?

    - by Lord Torgamus
    A few weeks ago, I noticed that my Internet connection had slowed down to a crawl. I waited a few days hoping it would go away on its own, but it didn't get better. So I asked this question about how to make it faster. The problem went away after I updated to the latest firmware, so I didn't follow up too carefully. But every few days since then, my Internet has slowed down again. Unlike before, all I have to do to fix it is open the router administration page and press the "Reboot" button. Nothing else seems to work, though I'm sure there are options I haven't tried. If it makes a difference, my girlfriend and I both transfer large amounts of data fairly routinely for school (videoconferencing, downloading entire recorded lectures). The router is a Cisco/Linksys 160N V3 that's about a year old. Most of the time, it deals with just two standard Windows 7 laptops. The only thing I came across while searching for answers/dupes was this question, which seems similar superficially, but probably doesn't have the same root issue. Anyways, it's not resolved. What could be causing these slowdowns, and how can I get rid of them?

    Read the article

  • Any reference for worldwide CPU statistics?

    - by Áxel Costas Pena
    I am looking for any reference on computing-power statistics across the world. My main interest is in real computing capabilities, so I'd prefer information about real processor power, and it would be even better if it also included other critical hardware statistics, like RAM; if that isn't possible, statistics about brand/model distribution would also be useful. I've Googled for some minutes and found nothing related.

    Read the article

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them in GIF format. I have no control over the external files, but I have to use them (with permission). When I check the site using Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all of those files are on external links; I have already set expiration dates for local files.
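
    One hedged workaround is to mirror the external images through a small local caching script so they are served from your own domain with far-future cache headers. The sketch below is illustrative only: the script name, cache directory, lifetime, and host whitelist are placeholders, it assumes allow_url_fopen is enabled and the cache directory exists and is writable, and it presumes your permission to use the files extends to re-hosting them.

    <?php
    // cacheimg.php?src=<url-encoded external gif> -- illustrative sketch only.
    $allowed = array('http://partner.example.com/');            // hosts you have permission to mirror (placeholder)
    $src     = isset($_GET['src']) ? $_GET['src'] : '';
    $ok      = false;
    foreach ($allowed as $prefix) {
        if (strpos($src, $prefix) === 0) { $ok = true; break; }
    }
    if (!$ok) { header('HTTP/1.1 403 Forbidden'); exit; }

    $cacheDir  = __DIR__ . '/imgcache';                          // local cache directory (placeholder)
    $cacheFile = $cacheDir . '/' . md5($src) . '.gif';
    $maxAge    = 60 * 60 * 24 * 30;                              // keep the local copy for ~30 days

    if (!is_file($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
        $data = @file_get_contents($src);                        // requires allow_url_fopen
        if ($data !== false) {
            file_put_contents($cacheFile, $data);
        }
    }
    if (!is_file($cacheFile)) { header('HTTP/1.1 502 Bad Gateway'); exit; }

    header('Content-Type: image/gif');
    header('Cache-Control: public, max-age=' . $maxAge);         // the far-future caching PageSpeed asks for
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');
    readfile($cacheFile);

    Referenced as <img src="cacheimg.php?src=...">, the GIFs then count as local, cacheable resources; the trade-off is that your own server now serves those bytes.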

    Read the article

  • Cisco ASA 5505 and slow download speeds for Apple devices

    - by James
    For traffic routing through my ASA 5505, downloads for all Apple devices, including an Apple TV, an iPad (1st gen), an iMac, and a MacBook Pro, are very slow. speedof.me shows less than 1 Mbps download (where I should have 20 Mbps+), yet for any Windows-based device the download speeds are in excess of 20 Mbps. The Windows devices, as well as the iMac and MacBook Pro, are connected via Ethernet cable. Why are the Apple devices experiencing such pain? Is it an ASA setting, or something else? Thanks.

    Read the article

  • Compression without Mod_Deflate

    - by pws5068
    Greetings all, After running tests with Google PageSpeed, I believe my site could really benefit from compressing js/html/css/php files. Unfortunately, my host (Host Gator) does not support Mod_Gzip or Mod_Deflate. I was able to enable php compression through the ini file. Is there another way to serve compressed files to browsers that support them, in a manner similar to Mod_Deflate?
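
    One approach that is sometimes possible on shared hosts without mod_deflate is PHP's own output compression, either by turning on zlib.output_compression or by wrapping output in ob_gzhandler. A minimal sketch follows (the auto_prepend_file path is illustrative); the caveat is that this only compresses what PHP itself emits, so static js/css would need to be routed through PHP or pre-compressed separately.

    <?php
    // compress.php -- sketch: gzip PHP output for clients that advertise support,
    // without relying on mod_deflate/mod_gzip.
    // Could be applied site-wide via .htaccess: php_value auto_prepend_file /home/user/compress.php
    if (!ini_get('zlib.output_compression') && extension_loaded('zlib')) {
        // ob_gzhandler inspects the Accept-Encoding header itself and falls back
        // to uncompressed output when the browser does not support gzip/deflate.
        ob_start('ob_gzhandler');
    }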

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places, actually), so long as the content is served up within the boundaries of the Great Firewall. Anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued). So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server. I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use) and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside and the other on the inside, since not all expats use a VPN, and some Chinese speakers also use VPNs. Also, some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that). The CDNs I'm considering are Google PageSpeed, CloudFlare, and Amazon CloudFront, none of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain-issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.
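
    Purely to illustrate the "step out of the way" idea at the application layer (the question itself is about doing this in the CDN or DNS, which is not addressed here), a sketch like the following could pick the asset host per request. It assumes the site can run PHP with the PECL geoip extension installed, and the hostnames are placeholders.

    <?php
    // Sketch: point asset URLs at the in-China origin for Chinese client IPs
    // and at the CDN hostname for everyone else. Hostnames are placeholders.
    function asset_host() {
        $country = false;
        if (function_exists('geoip_country_code_by_name')) {
            // PECL geoip extension; returns a two-letter code such as 'CN', or false.
            $country = geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);
        }
        return ($country === 'CN')
            ? 'http://www.example.cn'      // served directly from the in-China datacenter
            : 'http://cdn.example.com';    // served from the global CDN edge
    }

    echo '<img src="' . asset_host() . '/images/logo.gif">';

    An expat on a VPN would typically present a non-Chinese exit IP and so be routed to the CDN, which matches the behaviour described above; the duplicate-content and canonical-URL concerns about separate hostnames are not solved by this sketch.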

    Read the article

  • Does Gigabit degrade all ports to 100 megabit if there is a 100 megabit device attached?

    - by hjoelr
    Our company is buying some HP Procurve managed gigabit switches to replace some of our core switches. However, we aren't able to upgrade all of our switches from 100Mb to Gigabit switches. I think I know the answer but I'm not exactly sure. If we plug those 100Mb switches (or even a 100Mb device) into those Gigabit switches, will the performance of the entire switch drop to 100Mb or will just that one port work at 100Mb?

    Read the article

  • Firefox being really sluggish on php.net website?

    - by Rory
    Is it just me, or is Firefox (3.5 on Ubuntu 9.10 Karmic) really sluggish when opening the PHP.net website? When I have several tabs open with just the PHP.net website and I tab up and down (with Control-PageUp/Down), it's slow to change tabs. If I do it quickly, Firefox freezes for a few seconds (I know because it goes grey, which is a Compiz feature to show unresponsive windows). The CPU usage also goes up when I'm tabbing to PHP.net pages. UPDATE: This appears to happen for all PHP.net web pages. For other pages, on other sites, Firefox is fine (for me).

    Read the article

  • Slow transfer with memory stick (819 kb/s)

    - by Nrew
    What can I do to optimize the file-transfer rate of a Memory Stick Duo? Transfers were not this slow when the stick was new. Can reformatting give new life to a memory stick? It takes about 20 minutes just to transfer 1 GB of files from the computer to the memory stick. The computer is decent enough: 2.50 GHz processor, 2 GB RAM.

    Read the article

  • Wireless vs. Wired: which is faster?

    - by studiohack
    I have the option of hooking up my machines to the Internet either wirelessly or via Ethernet cable (wired). I'm curious as to which is faster; the approximate wireless signal strength is about 60% on average. My question is: would my Internet be faster if I used Ethernet, resulting in a stronger connection?

    Read the article

  • My internet connection just got really slow - How can I troubleshoot it?

    - by Walden
    A few days ago my connection became really slow. I have DSL, which should be 3 Mbps down and 768 kbps up. I'm lucky if I get 768 kbps down and 200 kbps up. It sucks. I called my ISP, Verizon, and they did some sort of line test and told me the problem was on my end. I rebooted my modem several times, like they told me. I'm not really sure why I even bothered calling them; the guy on the other end was just reading stuff out of a notebook, which was pretty useless. So, I checked my network traffic in Windows Resource Monitor, and there doesn't seem to be anything there hogging the bandwidth. What else could be slowing my connection down: my PC, my router, or something else?

    Read the article

  • /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor disappeared on ubuntu 11.10

    - by Bob
    I have an Ubuntu 11.10 server that has been up for 210 days. I have been frequently doing apt-get upgrade every few weeks, and this time I noticed that my server load average just shot up. The last time this happened between upgrades, it was because the cpu scaling governor was set to ondemand. But this time when I tried to list the contents of /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor the file is missing. There isn't even a cpufreq folder anymore! How do I fix this and ensure there is no cpu scaling going on?

    Read the article
