Search Results

Search found 23762 results on 951 pages for 'network speed'.


  • Fastest Way To Format a Plain Text Using Javascript

    - by Nathan Campos
    I have a huge plain text document, about 700KB, which is very big for plain text, and I need to format it in the cloud, converting it to HTML. The only things I need to replace, so the browser can display them, are bold and italic. In the plain text, bold looks like this: Not bold... **bold text here** not bold here. And italic like this: Not italic... *italic text* not italic. Just like Stack Overflow does for its formatting, but the problem is that I need to make it a lot faster, since the text is so big. One of my ideas was to paginate the document, so the script only needs to format the visible part of the text rather than all of it; after the user changes the page, the script would be called again. But the problem is: how can I write the code for all of this?
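
    The replacement itself usually comes down to two regular-expression passes, bold before italic so the double markers are consumed first. A minimal sketch in Python; the same two substitutions port directly to JavaScript's String.replace:

        import re

        def to_html(text):
            # Convert **bold** first so the single-asterisk pass can't eat the markers.
            text = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", text)
            # Then convert the remaining *italic* spans.
            text = re.sub(r"\*(.+?)\*", r"<i>\1</i>", text)
            return text

        print(to_html("Not bold... **bold text here** and *italic text*"))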

    Read the article

  • Visual Studio 2010 - Is it slow for anyone else?

    - by AngryHacker
    I've read a lot about VS2010 being much more performant than VS2008. When I finally installed it, I found that it is, in fact, much slower (save for the Add References dialog). For instance, Silverlight projects take twice as long to load, the startup of the IDE itself is much slower, etc. Am I missing something here, or is it like this for everyone?

    Read the article

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a SPOJ problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program accepts each number on a new line of stdin and, after receiving the nth number, tells you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas?

        import sys

        first_in = raw_input()
        thing = first_in.split()
        n = int(thing[0])
        k = int(thing[1])
        total = 0
        for line in sys.stdin:
            t = int(line)
            if t % k == 0:
                total += 1
        print total
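
    A common speedup for this kind of problem is to read stdin in one bulk call instead of line by line, which removes most of the per-line interpreter overhead. A minimal sketch, in Python 3 syntax unlike the Python 2 code above:

        import sys

        def main():
            data = sys.stdin.buffer.read().split()  # one bulk read of everything
            k = int(data[1])                        # data[0] is n, not actually needed
            print(sum(1 for tok in data[2:] if int(tok) % k == 0))

        main()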

    Read the article

  • Looking for a fast hash-function.

    - by Julian
    Hello, I'm looking for a special hash function. Say I have a large list of strings: if I order them by their hash values, they should end up ordered quasi-randomly. The most important point is that it must be super fast. I've tried MD5 and SHA-1, and they use too much CPU power. Collisions are not a problem. I'm using JavaScript, so it shouldn't be too complicated to implement.
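
    A non-cryptographic hash is the usual answer when collisions don't matter. Here is a sketch of 32-bit FNV-1a in Python; the same arithmetic ports to JavaScript with Math.imul and an unsigned shift for the masking:

        def fnv1a(s):
            # 32-bit FNV-1a: one XOR and one multiply per byte.
            h = 0x811c9dc5
            for byte in s.encode("utf-8"):
                h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
            return h

        words = ["apple", "banana", "cherry", "date"]
        print(sorted(words, key=fnv1a))  # quasi-random but repeatable order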

    Read the article

  • Firefox and Chrome slow on localhost; known fix doesn't work on Windows 7

    - by Herb Caudill
    Firefox and Chrome are known to be slow on localhost when IPv6 is enabled. In previous versions of Windows, the simplest fix is to comment out this line from the hosts file, as explained in the answer to this question:

        ::1 localhost

    However, as noted in this question, in Windows 7 this line is already commented out:

        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1    localhost
        #    ::1          localhost

    Is there an alternative way to disable the ::1 localhost reference in Windows 7?

    Read the article

  • PHP include(): File size & performance

    - by Tom
    An inexperienced PHP question: I've got a PHP script file that I need to include lots of times in lots of places on different pages. I have the option of either breaking the included file down into several smaller files and including those on an as-needed basis, or just keeping it all together in a single PHP file. I'm wondering if there's any performance impact of using a larger vs. a smaller file for include() in this context. For example, is there any performance difference between a 200KB file and a 20KB file? Thank you.

    Read the article

  • MySQL: Return grouped fields where the group is not empty, efficiently

    - by Ryan Badour
    In one statement I'm trying to group rows of one table by joining to another table, and I only want the grouped rows whose group is not empty. For example, with Items and Categories:

        SELECT Category.id
        FROM Item, Category
        WHERE Category.id = Item.categoryId
        GROUP BY Category.id
        HAVING COUNT(Item.id) > 0

    The above query gives me the results I want, but it is slow, since it has to count all the rows grouped by Category.id. What's a more efficient way? I was trying a GROUP BY with LIMIT to retrieve only one row per group, but my attempts failed horribly. Any idea how I can do this? Thanks
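
    One observation worth sketching: because the join is inner, every category that survives the join already has at least one item, so the HAVING clause is redundant, and a DISTINCT over the foreign key gives the same rows without any counting. A small demo using an in-memory SQLite database (the schema here is a guess that mirrors the question):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE Category (id INTEGER PRIMARY KEY);
            CREATE TABLE Item (id INTEGER PRIMARY KEY, categoryId INTEGER);
            INSERT INTO Category (id) VALUES (1), (2), (3);
            INSERT INTO Item (id, categoryId) VALUES (10, 1), (11, 1), (12, 3);
        """)

        # DISTINCT over the foreign key replaces GROUP BY ... HAVING COUNT > 0;
        # if categoryId is a trustworthy foreign key, the join can go too.
        rows = db.execute(
            "SELECT DISTINCT categoryId FROM Item "
            "JOIN Category ON Category.id = Item.categoryId"
        ).fetchall()
        print(rows)  # [(1,), (3,)] -- only the non-empty categories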

    Read the article

  • When does n++ execute faster than n=n+1 ?

    - by gcc
    Related: http://stackoverflow.com/questions/24853/c-what-is-the-difference-between-i-and-i In the C language, why does n++ execute faster than n = n + 1? (int n = ...; n++;) versus (int n = ...; n = n + 1;) Our instructor asked that question in today's class. (This is not homework.)

    Read the article

  • Speedup C++ code

    - by Werner
    Hi, I am writing a C++ number-crunching application where the bottleneck is a function that squares a double:

        template<class T> inline T sqr(const T& x) { return x * x; }

    and another one that computes a squared distance:

        Base dist2(const Point& p) const {
            return sqr(x - p.x) + sqr(y - p.y) + sqr(z - p.z);
        }

    These operations take 80% of the computation time. I wonder if you can suggest approaches to make this faster, even if there is some loss of accuracy. Thanks

    Read the article

  • [boost::filesystem] performance: is it better to read all files once, or use b::fs functions over an

    - by rubenvb
    I'm conflicted between a "read once, use memory + pointers to files" approach and a "read when necessary" approach. The latter is of course much easier (no additional classes needed to store the whole directory structure), but IMO it is slower? A little clarification: I'm writing a simple build system that reads a project file, checks that all files are present, and runs some compile steps. The file tree is static, so the first option doesn't need to be very dynamic, and the tree would only need to be built once every time the program is run. Thanks

    Read the article

  • UITableViewController executes delegate functions before network request finishes

    - by user1543132
    I'm having trouble trying to populate a UITableView with the results of a network request. It seems that my code is alright as it works perfectly when my network is speedy; however, when it's not, the function - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath still executes, which results in a bad access error. I presume that this is because the array that the aforesaid function attempts to utilize has not been populated. This brings me to my question: Is there anyway that I can have the UITableView delegate methods delayed to avoid this?

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"AlbumsCell";
            //UITableViewCell *basicCell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];
            AlbumsCell *cell = (AlbumsCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (!cell) {
                // Here is where the Thread 1: EXC_BAD_ACCESS (code=2 address=0x8) occurs
                cell = [[[AlbumsCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            Album *album = [_albums objectAtIndex:[indexPath row]];
            [cell setAlbum:album];
            return cell;
        }

    Read the article

  • Fastest way to convert a binary file to SQLite database

    - by chown
    I have some binary files and I'm looking for a way to convert each of them to a SQLite database. I've already tried C#, but the performance was too slow. I'm seeking advice on how, and in which programming language, to best perform this kind of conversion. Though I prefer object-oriented languages (like C#, Java, etc.), I'm open to any programming language that speeds up the conversion. I don't need a GUI front end for the conversion; running the script/program from a console is okay. Thanks in advance
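
    Whatever the language, the usual culprit in slow SQLite loads is committing row by row; batching all inserts into one transaction tends to matter far more than the language choice. A sketch in Python, where the record layout (pairs of little-endian 32-bit ints) is purely an assumption, since the question doesn't show the format:

        import sqlite3
        import struct

        def convert(binary_path, db_path):
            db = sqlite3.connect(db_path)
            db.execute("CREATE TABLE IF NOT EXISTS records (a INTEGER, b INTEGER)")
            with open(binary_path, "rb") as f:
                payload = f.read()
            # One transaction + executemany: the bulk-insert fast path.
            with db:
                db.executemany("INSERT INTO records VALUES (?, ?)",
                               struct.iter_unpack("<ii", payload))
            db.close()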

    Read the article

  • Network license control for a Java application

    - by user1461615
    I have been tasked with providing some form of network license control for a Java application. The app would be stored on a network drive and run from a client machine. The basic idea is that it will work out how many instances are running concurrently and prevent the N+1th user from running the software, where N is the number of concurrent licenses the customer has purchased. Is this possible somehow with a Java application? I implemented a "solution" that relied on multicast UDP communication between the running instances of the application, but this didn't work because on most networks that kind of communication is blocked by security measures. Is there a better way? I don't even mind if it requires JNI/JNA. N.B. The solution does not have to be very sophisticated or highly secure.
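
    One scheme that avoids the network chatter entirely is to keep N lock files on the shared drive and have each instance claim one with an OS-level file lock; the OS releases the lock when a process exits or crashes, so seats aren't leaked. A sketch of the idea, in Python for brevity (a Java app would do the same with java.nio.channels.FileLock); it is POSIX-only as written, and advisory locking over a network share is only as reliable as the file server makes it:

        import fcntl
        import os

        def acquire_license_seat(license_dir, seats):
            for n in range(seats):
                fd = os.open(os.path.join(license_dir, "seat%d.lock" % n),
                             os.O_CREAT | os.O_RDWR)
                try:
                    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    return fd      # keep this descriptor open for the app's lifetime
                except OSError:
                    os.close(fd)   # seat busy, try the next one
            return None            # all seats taken: deny the N+1th user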

    Read the article

  • SQL & PHP - Which is faster mysql_num_rows() or 'select count()'?

    - by Joel
    I'm just wondering which method is the most efficient if I literally just want the number of rows in a table:

        $res = mysql_query("SELECT COUNT(*) AS `number` FROM `table1`");
        $count = mysql_result($res, 0, 'number');

    or

        $res = mysql_query("SELECT `ID` FROM `table1`");
        $count = mysql_num_rows($res);

    Has anyone done any decent testing on this?

    Read the article

  • .net File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder-sync backup tool for myself and ran into quite a roadblock using File.Copy. In tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that using File.Copy was over 3x slower than running xcopy from a command line to copy the same files/folders. My C# version takes over 16 minutes to copy the files, whereas xcopy takes only 5 minutes. I've tried searching for help on this topic, but all I find is people complaining about slow copying of large files over a network. This is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix them. Are there any common or easy ways to replace File.Copy with something speedier?

    Read the article

  • [python] voice communication for python help!

    - by Eric
    Hello! I'm currently trying to write a voice-chat program in Python; all tips and tricks for doing this are welcome. So far I've found PyAudio, a wrapper around PortAudio. I played around with it and got an input stream from my microphone played back through my speakers (raw audio only, of course). But I can't send raw data over the network (due to its size, duh), so I'm looking for a way to encode it. I searched around the net and stumbled over this speex wrapper for Python. It seemed too good to be true and, believe me, it was. You see, in PyAudio you can set the size of the chunks you take from your input audio buffer, and in the sample code on that link it's set to 320. Once encoded, each chunk is ~40 bytes of data, which is fairly acceptable, I guess. And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them, and plays them back (no sending over the network while testing). If I just let my computer idle and run this program, it works great, but as soon as I do something, i.e. start Firefox or something, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer. OK, so why am I taking only 320 bytes of the stream at a time? I could take, say, 1024 bytes, and that would ease the pressure on the buffer. BUT if I give speex 1024 bytes of data to encode/decode, it either crashes and says that's too big for its buffer, OR it encodes/decodes the data but the sound is very noisy and "choppy", as if it only encoded a tiny bit of the 1024-byte chunk and the rest were static noise; the sound is like a helicopter, lol. I did some research, and it seems speex can only convert 320 bytes of data at a time (640 for wide-band). Is that the standard? How can I fix this problem? How should I structure my program to work with speex? I could use a middle buffer that reads all available data from the input buffer, chops it into 320-byte pieces, and encodes/decodes those, but that takes a bit longer and seems like a very bad solution to the problem. Because as far as I know, there is no other encoder for Python that compresses audio into packets small enough to send over the network, or is there? I've been googling for three days now. There is also the PyMedia library; I don't know if converting to MP3/Ogg is a good fit for this kind of software. Thanks in advance for reading this; I hope someone can help me! (:
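
    For what it's worth, the middle-buffer idea can be quite small: drain the device in big reads so PyAudio's buffer never backs up, then slice the bytes into exactly the frame size the codec accepts. A sketch assuming PyAudio, with a placeholder standing in for the speex wrapper's encode call (its exact API isn't shown in the question):

        import pyaudio

        FRAME_BYTES = 320   # the per-call limit the question describes

        def encode_frame(frame):
            # Placeholder for the speex wrapper's encode call.
            return frame

        p = pyaudio.PyAudio()
        stream = p.open(format=pyaudio.paInt16, channels=1, rate=8000,
                        input=True, frames_per_buffer=1024)

        pending = b""
        packets = []
        for _ in range(200):                     # roughly 25 seconds of audio
            pending += stream.read(1024)         # drain the device in big reads
            while len(pending) >= FRAME_BYTES:   # slice exact codec-sized frames
                frame, pending = pending[:FRAME_BYTES], pending[FRAME_BYTES:]
                packets.append(encode_frame(frame))
        stream.close()
        p.terminate()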

    Read the article

  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine-learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions of the object. For example: new_object: 'bob', new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that have similar descriptions, for example past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example correct_objects: ['steve','joe']. The classifier is then given this feedback training of successful matches, and the loop repeats with a new object. Here's the pseudocode:

        Classifier = new_classifier()
        while True:
            new_object, new_object_descriptions = get_new_object_and_descriptions()
            past_similar_objects = Classifier.classify(new_object, new_object_descriptions)
            correct_objects = calc_successful_matches(new_object, past_similar_objects)
            Classifier.train_successful_matches(new_object, correct_objects)

    But there are some stipulations that may limit which classifiers can be used: Millions of objects will be put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.) Again, I prefer speed over accuracy when millions of objects are being classified. What are decent, fast machine-learning algorithms for this purpose?
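
    One family of approaches that fits the speed constraint is nearest-neighbour lookup over an inverted index keyed by description tag: both training and classification touch only the objects that share a tag with the query, never the whole corpus. A minimal sketch:

        from collections import defaultdict

        index = defaultdict(set)   # tag -> names of objects carrying that tag
        objects = {}

        def train(name, descriptions):
            objects[name] = set(descriptions)
            for tag in descriptions:
                index[tag].add(name)

        def classify(descriptions, top_k=3):
            votes = defaultdict(int)
            for tag in descriptions:            # only objects sharing a tag
                for candidate in index[tag]:    # are ever visited
                    votes[candidate] += 1
            return sorted(votes, key=votes.get, reverse=True)[:top_k]

        train("frank", ["tall", "old", "funny"])
        train("steve", ["tall", "old"])
        train("joe", ["young"])
        print(classify(["tall", "old", "funny"]))  # ['frank', 'steve']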

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s, and the file download rate at least 2000/s, with an average file size of 15KB. The network bandwidth is 1 Gbps. My approach so far has been: create 600 socket threads, each with 60 sockets, and use WSAEventSelect to wait for data to read. As soon as a file download completes, add the memory address of the downloaded file to a pipeline (a simple vector) and fire another request. When the total downloaded across all socket threads exceeds 50MB, write all the downloaded files to disk and free the memory. So far this approach has not been very successful: the rate I can hit doesn't shoot beyond 2,900 connections/s, and the downloaded data rate is even less. Can somebody suggest an alternative approach that could give me better stats? I am working on a Windows Server 2008 machine with 8GB of memory. Also, do we need to hack the kernel so we can use more threads and memory? Currently I can create at most 1,500 threads, and memory usage doesn't go beyond 2GB (which technically should be much more, as this is a 64-bit machine). And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks, guys!
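
    Though the question rules out IOCP for schedule reasons, the event-loop shape is worth sketching, because it replaces threads-waiting-on-sockets with a single dispatcher. Here it is in Python with asyncio (whose Windows proactor loop is IOCP-based under the hood) and aiohttp, assuming a urls.txt file with one URL per line; batching, retries, and disk writes are left out:

        import asyncio
        import aiohttp

        CONCURRENCY = 2000   # sockets in flight at once

        async def fetch(session, sem, url):
            async with sem:
                try:
                    async with session.get(url) as resp:
                        return url, await resp.read()
                except aiohttp.ClientError:
                    return url, None

        async def main():
            sem = asyncio.Semaphore(CONCURRENCY)
            with open("urls.txt") as f:
                urls = [line.strip() for line in f if line.strip()]
            async with aiohttp.ClientSession() as session:
                results = await asyncio.gather(*(fetch(session, sem, u) for u in urls))
            print(sum(1 for _, body in results if body), "pages downloaded")

        asyncio.run(main())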

    Read the article

  • Is it theoretically possible to emulate a human brain on a computer?

    - by JoelK
    Our brain consists of billions of neurons which basically work with all the incoming data from our senses, handle our consciousness, emotions, and creativity, as well as our hormone system, etc. So I'm completely new to this topic, but doesn't each neuron have a fixed function? E.g.: if a signal of strength x enters, and the last signal was x ms ago, redirect it. From what I've learned in biology about our nervous system, which includes our brain because both consist of simple neurons, it seems to me that our brain is one big, complicated computer. Maybe so complicated that things such as intelligence and cognition become possible? As the most complicated things about a neuron are pretty much the chemical aspects of generating an electrical signal, keeping itself alive, and eventually segmenting itself, it should be pretty easy to emulate one on a computer, no? You wouldn't have to worry about keeping your virtual neuron alive. And if you can emulate a single neuron on a computer, which shouldn't be too hard, could you theoretically emulate more than 1000 billion of them, recreating intelligence, cognition, and maybe even creativity? In my question I'm leaving out the following aspects: the speed of our current (super)computers, and actually writing a program for emulating the neurons. I don't know much about this topic; please tell me if I got anything wrong :) (My secret goal: make a copy of my brain, store it on some 10-million-TB HDD, and have someone start it up in the future.)
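
    For a feel of how small such a "fixed function" can be, here is a toy leaky integrate-and-fire neuron, the standard textbook simplification (real neurons are far richer than this):

        def simulate(inputs, threshold=1.0, leak=0.9):
            # Membrane potential decays each step, accumulates input,
            # and fires (then resets) once it crosses the threshold.
            potential = 0.0
            spikes = []
            for t, strength in enumerate(inputs):
                potential = potential * leak + strength
                if potential >= threshold:
                    spikes.append(t)
                    potential = 0.0
            return spikes

        print(simulate([0.3, 0.4, 0.5, 0.6, 0.9]))  # fires at t=2 and t=4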

    Read the article

  • How to change the setting for a network device reported by ethtool, specifically Speed, on a VM?

    - by Ramadheer Singh
    This is related to these two questions, although they don't answer my question. The machines are RHEL 6. 1. ethtool not showing all the properties 2. changing network speed to 1000Mb/s

    Output on the VM:

        [root@foo ~]# ethtool eth0
        Settings for eth0:
                Current message level: 0x00000007 (7)
                Link detected: yes

    Output on real hardware (interested in Speed):

        # ethtool eth0
        Settings for eth0:
                Supported ports: [ TP ]
                Supported link modes:   10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                                        1000baseT/Full
                Supports auto-negotiation: Yes
                Advertised link modes:  10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                                        1000baseT/Full
                Advertised auto-negotiation: Yes
                Speed: 1000Mb/s
                Duplex: Full
                Port: Twisted Pair
                PHYAD: 1
                Transceiver: internal
                Auto-negotiation: on
                Supports Wake-on: d
                Wake-on: d
                Link detected: yes

    If there's any way I can set this on the VM, please suggest.

    Read the article

  • Sporadic name resolution failure happening on web service call

    - by ansleygal
    One of our WCF service applications calls a separate third-party web service to submit information. We get the following error every so often, but not all the time:

        System.Net.WebException: The remote name could not be resolved: 'ws.examplesite.net'
           at System.Net.HttpWebRequest.GetRequestStream()
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    The weird thing is that after the error happens, we can hit "Submit" again a second later and it will go through just fine. We have checked and double-checked with our network guys; they have confirmed that DNS is correct and have done multiple nslookups in a row to confirm. This is happening in all environments (dev, test, and prod). We use the third party's test and prod URLs, and it happens when we point to both. Does anyone have any other troubleshooting techniques for this, or any reason this would happen? Much thanks, ~Ansley

    Read the article

  • raw h.264 packet capture and playing in VLC

    - by MAC
    Hi, I am capturing packets off the network from a video-conference HDX. The video is sent over RTP and is encoded as H.264. I am trying to capture these packets and generate a video file. I wrote the raw H.264 data from the packets to disk and am trying to play it in VLC, but VLC just shows a green box. Am I being too naive in my approach to writing the data, or am I wrong in assuming that VLC should play this file? Does anyone have any experience with such things?
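
    One likely culprit: RTP (RFC 6184) carries bare NAL units, while a playable .h264 file needs an Annex-B start code before each one, and fragmented packets (FU-A, which carry most of the actual video) additionally need reassembly. A minimal sketch of the start-code step, handling only single-NAL packets and assuming a fixed 12-byte RTP header (no CSRCs or extensions):

        START_CODE = b"\x00\x00\x00\x01"

        def rtp_payload_to_annexb(packet):
            payload = packet[12:]          # skip the fixed 12-byte RTP header
            nal_type = payload[0] & 0x1F   # low 5 bits of the NAL unit header
            if 1 <= nal_type <= 23:        # a complete NAL unit in one packet
                return START_CODE + payload
            return b""                     # FU-A / STAP-A need reassembly (omitted)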

    Read the article
