Search Results

Search found 23762 results on 951 pages for 'network speed'.

  • Map ftp servers as network drives in Ubuntu Linux

    - by Carl
    Hi everybody! I'm a new Linux user who just switched over from Windows. I've got a couple of FTP servers I connect to regularly over SFTP. Is there a way to, as we'd say in Windows, map them as network drives in Linux, so I can just copy files into a drive or folder and they get transferred to the server automatically? That would be pretty cool. Does anybody know whether this is possible, and how to do it? I can't seem to find anything in the documentation. I'm running Ubuntu 9.04. Thanks!
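
    A minimal sketch of one way to get this behaviour (an assumption on my part, not from the question: it relies on the servers accepting SSH/SFTP logins, and the host name and paths are placeholders):

        # install the FUSE-based SFTP filesystem
        sudo apt-get install sshfs

        # create a mount point and mount the remote directory over SFTP
        mkdir -p ~/server1
        sshfs user@server1.example.com:/home/user ~/server1

        # the remote tree now behaves like a local folder; unmount when finished
        fusermount -u ~/server1

    Nautilus's "Places > Connect to Server" dialog offers a similar, GUI-based way to keep an SFTP location bookmarked.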

  • Virtual Network Printer

    - by user113720
    I'm pretty new to Microsoft servers, so don't blame me if the question isn't that smart (I'm a Unix guy). I need to set up a virtual printer on a Windows Server 2008 R2 machine. The requirements are: the printer must print to a file (any format, e.g. txt or pdf); it must run on the server; it must accept plain text sent to a specific IP:port; and the connection between the printing device and the server is a local network. I've tried installing a virtual printer, but I cannot find where to specify the socket from which it should receive the data to print. Thank you so much.

  • Planning office network [closed]

    - by gakhov
    I'm planning to set up my office network from scratch and would like some professional opinions or tips. The office is connected to the Internet over a 100 Mb/s cable connection. The devices I would like to connect are a VoIP phone (RJ-11), a TV (WiFi/LAN), 3 laptops (WiFi), a few smartphones (WiFi), an iPad (WiFi), a Kindle (WiFi) and, probably, a media server (WiFi/LAN). As you can see, most of the load will be on WiFi (although the TV supports WiFi, it's probably better to connect it over LAN?). So I need help choosing the best combination of routers (or maybe just one?) that gives stable connections for all these devices while keeping the total number of routers/adapters to a minimum. Any thoughts? Thank you!

  • Users get kicked out of a network drive (DFS)

    - by user71563
    Hi, in early January 2011 we switched completely to Windows Server 2008 R2 and Windows 7. On our domain controller we set up a DFS share that is presented to users as drive Z:. The DFS was set up the same way back when we ran Windows Server 2003 R2 and Windows XP, and at that time it always worked without problems. Since moving to Windows 7 we sometimes see that when a user opens the Z: drive, Explorer jumps back to Computer before the user can do anything. After two or three attempts Explorer stays in the network drive and the user can work. This happens irregularly and we cannot pin down exactly why. Nothing obvious is logged in the event log at the time. Does anyone know this problem or has had similar experiences? I am grateful for any help. Greetings, sY!v3Rs

  • Logical and Physical network topologies

    - by t.thielemans
    I'm trying to understand the difference between logical and physical topologies, but it's a bit confusing to me. Cisco presents the following as logical topologies, yet from my understanding they should be physical topologies. This is what I understand so far:

    Physical
    - PtP: a desktop directly connected to another desktop
    - Multiaccess: several desktops connected to a shared medium with access to each other (Cisco's ring image - how should I picture this in a live network?)
    - Ring: several desktops directly connected to each other, forming a loop?

    Logical
    - PtP: two desktops (virtually) connected to each other with intermediary devices in between
    - Multiaccess: (don't have a clue)
    - Ring: (don't have a clue)

    Could anyone help me out and explain the difference in a bit more detail? I can't find any useful topics online. I am using the Cisco Network Fundamentals book.

  • Create network shares via command line with specific permissions

    - by Derek
    This is sort of a two-pronged question. I am developing an application that will need to be able to create network shares in Windows Server 2003 via the command line. So, firstly, how do I create shares in Windows via the command line? I tried researching it, and all I was able to find is that I should be using net, but other than that, there isn't much documentation. Also, in this share there will be a few directories with the names of users on the domain, and I would like for the directories to not be readable or writable by anyone else. For example, say I have two directories: jsmith and jdoe. I would like the user jsmith to write and read from the directory jsmith, but not the directory called jdoe, and vice versa.
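
    A rough sketch of how this might look on Server 2003 (the share and folder names are made-up examples; cacls is the stock ACL tool on 2003, icacls on newer systems):

        rem create the folder and publish it as a share
        mkdir D:\UserDirs
        net share UserDirs=D:\UserDirs /REMARK:"Per-user directories"

        rem lock each subdirectory down to its owner (plus administrators);
        rem cacls asks for confirmation before replacing the ACL
        mkdir D:\UserDirs\jsmith D:\UserDirs\jdoe
        cacls D:\UserDirs\jsmith /G jsmith:F Administrators:F
        cacls D:\UserDirs\jdoe /G jdoe:F Administrators:F

    Note that net share only controls share-level permissions (Server 2003 defaults to Everyone: Read, so those still need widening in the share's Share Permissions tab); the per-directory restrictions come from the NTFS ACLs that cacls edits.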

  • Network DFS Shares Jumping Back To Root

    - by Taz
    We map several network drives to DFS locations via a logon script. Recently we've had a number of users complain of a very unusual behaviour when navigating these shares. They will be going through folders and will get 'rubber-banded' back to the root of the share. This will happen for a few minutes and then go back to behaving normally. The users are on Windows 7 and the fileshare is on Windows Server 2K8R2. Any idea what could be causing this annoying behaviour?

  • Windows PCs' intermittent network faults

    - by Kristiaan
    Hello everyone, I'm running into some issues with our client PCs (Windows XP SP3 systems). This morning we ran into problems with PCs failing intermittently to connect to internal and external systems. It manifests as a problem connecting to any service: email, web, back-office database systems, etc. After a random amount of time, a few minutes or so, the problem disappears and the PC carries on as normal; some systems, however, have not been able to reach certain services since the problem first appeared. I'm hoping for some suggestions or network diagnostic advice to help me locate the cause of this problem. All the clients are Windows XP, connecting to a domain controller running Windows Server 2003 Standard, which also acts as our DNS server. We also have Websense 7.0.1 installed on it to filter traffic.

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem seeing G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show for G: with no difference; TweakUI says G: should be visible, and I've logged off and on between toggles to make sure the settings take effect. I've also looked at the registry key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing where to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
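
    For reference, a quick way to inspect the value the question refers to (a sketch; NoDrives is a bitmask in which bit 6, i.e. decimal 64, hides G:):

        rem show the NoDrives value for the current user, if it exists
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives

        rem the same policy can also be set machine-wide
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives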

  • Router for Infrastructure Network

    - by amfortas
    We have an HPC operation that has grown over the years to several racks of gear at three sites, hooked up via gigabit fiber and Catalyst 2960s (we control the links and switches). Thus far all machines have been on a flat RFC 1918 10/8 network, but we are looking to segment the network in order to streamline matters for iSCSI and generally keep infrastructure equipment away from our end users. We have now reached the point where we need to consider introducing VLANs for specific subnets, and are wondering whether it would be worthwhile in the longer run to acquire a small router to keep track of all this and cut down on the complexity of netmasks and routes on the host machines, etc. Has anyone here had a similar experience? Suggestions as to suitable equipment would be welcome.

  • Temporarily block other users from network printer

    - by TecBrat
    I found where someone else asked this question here, but they did not get a working answer. We have a printer that is shared; it has its own network card, so we all have equal access to it (none of our computers owns it). One of our users needs to print on specialty paper, and we need to be sure nobody else prints while that paper is in the printer. Our current method is "Hey, don't print anything right now!" Obviously this method is not preferred, because it does not enforce itself. :-) I think all our PCs are running Win7 Home. The printer in question is an HP LaserJet 2200. Is there a way we can make this happen?

  • IPSec 2 hosts (preshared key) - network shares very slow

    - by LxFlip
    I'm testing an IPSec configuration between two hosts using preshared-key authentication, a very simple setup. (I want to start with a simple preshared-key config and then step up to certificates or Kerberos.) The problem: the connection works, but accessing network file shares is very slow the first time. On the same host where I'm testing the shares I have an IIS site running, and its performance seems perfectly normal and fast. Does anybody know why the SMB shares are so slow? Are there any IPSec policy options that should be tweaked? Thanks

  • C# serialPort speed

    - by MarekK
    Hi, I am developing a monitoring tool for a protocol based on serial communication (baud rate 187.5 kb/s). I use the System.IO.Ports.SerialPort class. The protocol has 4 kinds of frames, sized 1 byte, 3 bytes, 6 bytes, and 10-255 bytes. I can parse them, but I receive them too late to respond. At the beginning I receive the first packet after e.g. 96 ms (too late), and it contains about 1000 bytes, which means 20-50 frames (too much, too late). Later it is more stable, 3-10 bytes per read, but still too late because each read contains 1-2 frames. One frame per read would be OK, but two is too late. Can you point me to a more reliable way of dealing with this? I know it is possible.

    Revision 1: I tried the straightforward way:

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            if (!serialPort1.IsOpen) return;
            this.BeginInvoke(new EventHandler(this.DataReceived));
        }

    and a BackgroundWorker, and new Thread(Read), and ... always the same: too late, too slow. Do I have to go back to the WinAPI and import some kernel32.dll functions?

    Revision 2: This is the part of the code used in the threaded approach:

        int c = serialPort1.BytesToRead;
        byte[] b = new byte[c];
        serialPort1.Read(b, 0, c);

    I guess it is some problem with the stream used inside the SerialPort class, or a synchronization problem.

    Revision 3: I do not use both at once! I just tried different approaches.

    Regards, MarekK

  • Speed boost to adjacency matrix

    - by samoz
    I currently have an algorithm that operates on an adjacency matrix of size n by m. In my algorithm, I need to zero out entire rows or columns at a time. My implementation is currently O(m) or O(n) depending on if it's a column or row. Is there any way to zero out a column or row in O(1) time?
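
    One common trick for this (a sketch of the general technique, not something from the question): rather than physically clearing cells, keep a per-row and per-column "cleared at" counter and treat any cell whose last write is older than either counter as zero. Clearing a row or column then costs O(1), at the price of an extra comparison on every read. A C++ sketch:

        #include <cstdint>
        #include <vector>

        // Adjacency matrix with O(1) row/column zeroing via generation counters.
        struct LazyMatrix {
            int n, m;
            std::vector<int> value;               // stored cell values
            std::vector<uint64_t> stamp;          // "time" each cell was last written
            std::vector<uint64_t> rowClr, colClr; // last time each row/column was cleared
            uint64_t clock = 1;

            LazyMatrix(int n, int m)
                : n(n), m(m), value(n * m, 0), stamp(n * m, 0), rowClr(n, 0), colClr(m, 0) {}

            void set(int r, int c, int v) { value[r * m + c] = v; stamp[r * m + c] = ++clock; }

            int get(int r, int c) const {
                // a cell counts as zero if it was written before its row or column was last cleared
                uint64_t s = stamp[r * m + c];
                return (s > rowClr[r] && s > colClr[c]) ? value[r * m + c] : 0;
            }

            void clearRow(int r) { rowClr[r] = ++clock; }  // O(1)
            void clearCol(int c) { colClr[c] = ++clock; }  // O(1)
        };

    The trade-off is a constant amount of extra work and memory on every read and write, so it only pays off when row/column clears are frequent relative to lookups.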

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
          <div class="second">
            <span class="third">Hello World</span>
          </div>
        </div>

    Which would be the fastest way to change the color of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there were a tag in the middle with a unique id of "foo" - which of the CSS rules above would be the fastest then? I know why these methods are used, etc.; I'm just trying to better grasp the rendering technique of the browsers, and I have no idea how to write a test that times it.

    UPDATE: Nice answer, Gumbo. From the looks of it, on a regular site it would be quicker to use the full definition of a tag, since it finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd end up with a pretty large CSS file, though.

  • std::vector iterator or index access speed question

    - by Simone Margaritelli
    Just a stupid question. I have a std::vector<SomeClass *> v; in my code, and I need to access its elements very often in the program, looping over them forward and backward. Which is the faster access type of these two?

    Iterator access:

        std::vector<SomeClass *> v;
        std::vector<SomeClass *>::iterator i;
        std::vector<SomeClass *>::reverse_iterator j;

        // i loops forward, j loops backward
        for( i = v.begin(), j = v.rbegin(); i != v.end() && j != v.rend(); i++, j++ ){
            // some operations on v items
        }

    Subscript access (by index):

        std::vector<SomeClass *> v;
        unsigned int i, j, size = v.size();

        // i loops forward, j loops backward
        for( i = 0, j = size - 1; i < size && j >= 0; i++, j-- ){
            // some operations on v items
        }

    And does const_iterator offer a faster way to access vector elements in case I do not have to modify them? Thank you in advance.

  • Understanding memory and cpu speed

    - by tipu
    Firstly, I am working on a Windows XP 64 machine with 4 GB RAM and a 2.29 GHz quad-core CPU. I am indexing 220,000 lines of text that are more or less the same length, divided into 15 equally sized files. File 1/15 takes 1 minute to index, but as the script indexes more files it seems to take much longer, with file 15/15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script would be hanging the CPU. I have the script here.

  • php string versus boolean speed test

    - by ae
    I'm looking at trying to optimise a particular function in a PHP app, and I foolishly assumed that a boolean lookup in an 'if' statement would be quicker than a string compare. But to check it I put together a short test (see below). To my surprise, the string lookup was quicker. Is there anything wrong with my test (I'm wired on too much coffee, so I'm suspicious of my own code)? If not, I would be interested in any comments people have about string versus boolean lookups in PHP. The result for the first test (boolean lookup) was 0.168; the result for the second test (string lookup) was 0.005.

        <?php
        $how_many = 1000000;
        $counter1 = 0;
        $counter2 = 0;
        $abc = array('boolean_lookup'=>TRUE, 'string_lookup'=>'something_else');

        $start = microtime();
        for($i = 0; $i < $how_many; $i++)
        {
            if($abc['boolean_lookup'])
            {
                $counter1++;
            }
        }
        echo ($start - microtime());

        echo '<hr>';

        $start = microtime();
        for($i = 0; $i < $how_many; $i++)
        {
            if($abc['string_lookup'] == 'something_else')
            {
                $counter2++;
            }
        }
        echo ($start - microtime());

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow, the Python wiki, and the article "General Rules for Optimization". Cython is what grasps my interest the most: instead of writing C code yourself, you can choose to have other data types in your Python code itself. Here is a silly test I tried:

        #!/usr/bin/python
        # test.pyx
        def test(value):
            for i in xrange(value):
                i**2
                if(i==1000000):
                    print i

        test(10000001)

        $ time python test.pyx

        real    0m16.774s
        user    0m16.745s
        sys     0m0.024s

        $ time cython test.pyx

        real    0m0.513s
        user    0m0.196s
        sys     0m0.052s

    Now, honestly, I'm dumbfounded. The code I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, then why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?
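
    For context (not part of the question): the cython command only translates a .pyx file into C, it does not execute it, so timing it measures compilation rather than the loop. A typical workflow on Linux looks roughly like this (a sketch; paths and flags vary per system):

        cython test.pyx                 # emits test.c, nothing is run yet
        gcc -shared -fPIC $(python-config --includes) test.c -o test.so
        python -c "import test"         # importing the compiled module runs the timed loop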

  • Optimizing for speed - 4 dimensional array lookup in C

    - by Tiago
    I have a fitness function that scores the values in an int array based on data that lies in a 4D array. The profiler says this function uses 80% of the CPU time (it needs to be called several million times). I can't seem to optimize it further (if that's even possible). Here is the function:

        unsigned int lookup_array[26][26][26][26]; /* lookup_array is a global variable */

        unsigned int get_i_score(unsigned int *input)
        {
            register unsigned int i, score = 0;
            /* len (the length of input) is defined elsewhere */
            for(i = len - 3; i--; )
                score += lookup_array[input[i]][input[i + 1]][input[i + 2]][input[i + 3]];
            return(score);
        }

    I've tried flattening the array to a single dimension, but there was no improvement in performance. This is running on an IA-32 CPU. Any CPU-specific optimizations are also helpful. Thanks

  • Comparisons of web programming languages (on speed, etc.)

    - by Dave
    I'm looking for a site / report / something that compares "identical" programs (programs that do the same thing) written in different web programming languages and then compares their speeds. I agree that there are many, many criteria this information could be sliced and diced by, but has anyone done a real comparison of this? I am interested in web-oriented languages only, i.e. PHP, Perl, C, C++, Java, ASP, ASP.NET, etc.

  • speed string search in PHP

    - by Marc
    Hi! I have a 1.2 GB file that contains a single-line string. I need to search the entire file to find the position of another string (currently I have a list of strings to search for). The way I'm doing it now is to open the big file and move a pointer through it in 4 KB blocks, then move the pointer back a few positions in the file and read 4 KB more. My problem is that the longer the string to search for, the longer it takes to find it. Can you give me some ideas to optimize the script and get better search times? This is my implementation:

        function busca($inici){
            $limit = 4096;
            $big_one = fopen('big_one.txt','r');
            $options = fopen('options.txt','r');

            while(!feof($options)){
                $search = trim(fgets($options));
                $retro = strlen($search); //maybe setting this position absolute? (like 12 or 15)
                $punter = 0;
                while(!feof($big_one)){
                    $ara = fgets($big_one,$limit);
                    $pos = strpos($ara,$search);
                    $ok_pos = $pos + $punter;
                    if($pos !== false){
                        echo "$pos - $punter - $search : $ok_pos <br>";
                        break;
                    }
                    $punter += $limit - $retro;
                    fseek($big_one,$punter);
                }
                fseek($big_one,0);
            }
        }

    Thanks in advance!

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method, because although it uses more memory, it takes a third of the time and would reduce the number of concurrent requests.

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture> where the QString is the name (such as "crosshairs") and the QPicture is the resolution independent drawing. I then draw components of the overlay as they are needed at a position determined during runtime. Example: I have 10 pictures in my QHash composing every possible element in a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them but 2 of those positions have changed. Now to my question: If I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to counteract the overhead caused by string comparisons; or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys as the XML parser and overlay composer are completely separate classes; but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?
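
    A minimal sketch of the integer-key variant being considered (the enum names are made up for illustration; QHash hashes a QString key character by character and compares strings on collision, whereas an int key needs neither):

        #include <QHash>
        #include <QPicture>

        // enumerate the overlay components instead of naming them with strings
        enum OverlayElement { Crosshairs, Altimeter, Compass /* ... */ };

        QHash<int, QPicture> overlay;
        // overlay.insert(Crosshairs, crosshairPicture);   // filled by the XML parser

        // lookup by integer key: no string hashing or comparison involved
        // QPicture pic = overlay.value(Crosshairs);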

  • iPhone threading: speed up app startup

    - by BahaiResearch.com
    I have an app that must get data from the SQLite database in order to display the first element to the user. I have created a domain object which wraps the DB access and is a thread-safe singleton. Is the following strategy optimal to ensure the fastest load, given the iPhone's file access and memory management capabilities in threaded apps? 1) In the AppDelegate's FinishedLaunching event, the very first thing I do is create the domain singleton on a new thread. This causes the domain object to go to SQLite and get the data it needs without locking the UI thread. 2) I then call the standard Window methods to add the view and MakeKeyAndVisible, etc. Is there an earlier stage in the AppDelegate where I should fire off the thread that creates the domain object and accesses SQLite?
