Search Results

Search found 5436 results on 218 pages for 'transfer rate'.

  • Transfer Win8 user settings between profiles [closed]

    - by GlennFerrieLive
    Possible Duplicate: How do I sync grouped Windows Store apps between devices? Is there a way for me to copy/save/transfer my "start menu" configuration (the grouping and ordering of the elements on the Start screen) between user profiles? Is it in the registry? I am open to manual or "coded" suggestions. UPDATE: I'd like to VETO this closing. I am aware of the "roaming" profile behavior. I want to COPY my configuration BETWEEN profiles on the same machine: DIFFERENT profile, DIFFERENT person. I like the way my Start screen is set up, and I want to set my wife up with the same layout.
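
    One hedged starting point: on Windows 8 the Start screen grouping and ordering is reported to live not in the registry but in a per-user file, appsFolder.itemdata-ms. Copying it between profiles (with both users logged off, and after backing up the target's copy) may carry the layout across; the user names below are placeholders:

      :: run from an elevated command prompt while both users are logged off
      copy "C:\Users\glenn\AppData\Local\Microsoft\Windows\appsFolder.itemdata-ms" ^
           "C:\Users\wife\AppData\Local\Microsoft\Windows\appsFolder.itemdata-ms"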

    Read the article

  • Monitor resolution messed up somehow

    - by Kelp
    I purchased the Westinghouse 22" LCD LCM-22w3 a few years ago, and now it's acting up on me. I just booted into Windows 7 (without changing any settings), and the default resolution is 1600x1024; it now lets me select a refresh rate of up to 85 Hz, which it never allowed before. I usually have my resolution set to 1680x1050 with a refresh rate of 60 Hz. Now that resolution does not even appear in the list. Does anyone have any idea what the problem could be and how to fix it? Edit: I am not sure if this will help, but when I go to change the screen resolution, the monitor shows up as "Generic Non-PnP Monitor". It used to be "Generic PnP Monitor". I tried to disable the Generic Non-PnP Monitor, but when I restart, it uses that monitor again. Edit 2: I created a custom .inf file using PowerStrip, but that does not work either. The monitor settings are being stubborn.

    Read the article

  • `rsync` NEVER uses its 'famous' delta-transfer!

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size never changes, while some of the (4 MiB) chunks inside it are constantly changing as the download proceeds. At 90% downloaded I do the initial rsync to save time later:

      $ rsync -Ph DVD.iso /some/target/
      sending incremental file list
      DVD.iso
            2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
      sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
      total size is 2.60G  speedup is 1.00

    Then, when the file is fully downloaded, I rsync again:

      total size is 2.60G  speedup is 1.00

    Speedup=1 says the delta-transfer algorithm was not used, although 90% of the file has not changed. Why?!
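
    A likely explanation, and a flag worth trying: when source and destination are both local paths, rsync assumes rereading from disk is cheaper than the rolling-checksum math and implies --whole-file, which disables the delta algorithm entirely. Forcing it back on:

      rsync --no-whole-file -Ph DVD.iso /some/target/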

    Read the article

  • Transfer Raid Drives to External Enclosure

    - by dubbeat
    I have 2 RAID disks (a grand total of 360 GB) in my laptop. I'm fast running out of space and want to install new drives. I've a pretty good idea how to do this. My question is: what can I do with the drives that I remove? I've got lots of media files on these drives that I'd like to keep, and maybe transfer back onto my laptop once I have the new drives installed. Bearing in mind that I know next to nothing about hardware, how do you suggest I go about reusing the removed drives somehow? Thanks,

    Read the article

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by Sam
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503, or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98Mbps, and a new HTTP request arrives, we would want to reject this (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine, it doesn't have to respond with an HTTP 503). Any suggestions?
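
    None of the stock modules track global throughput, so one hedged workaround is a tiny polling loop: sample the NIC byte counter once a second and insert/remove an iptables REJECT for new connections as the rate crosses the threshold. A sketch (eth0, port 80 and the 98 Mbps figure are assumptions):

      #!/bin/bash
      # block new HTTP connections while eth0 is sending faster than ~98 Mbps
      dev=eth0
      limit=$((98 * 1000 * 1000 / 8))                  # bytes per second
      prev=$(cat /sys/class/net/$dev/statistics/tx_bytes)
      while sleep 1; do
          now=$(cat /sys/class/net/$dev/statistics/tx_bytes)
          rate=$((now - prev)); prev=$now
          if [ "$rate" -gt "$limit" ]; then
              # -C tests whether the rule already exists, so we only add it once
              iptables -C INPUT -p tcp --dport 80 --syn -j REJECT 2>/dev/null ||
                  iptables -I INPUT -p tcp --dport 80 --syn -j REJECT
          else
              iptables -D INPUT -p tcp --dport 80 --syn -j REJECT 2>/dev/null
          fi
      done

    Rejecting the SYN refuses the TCP connection outright, which the question allows in place of an HTTP 503.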

    Read the article

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer database from Postgresql 8.4 to 9.1

    - by Stpn
    I upgraded a server running a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database... Here is the error message from the Rails app:

      could not connect to server: No such file or directory
      Is the server running locally and accepting connections on
      Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command. EDIT: Turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files and configs are for 8.4. How can I transfer them? Just straight-copy the old pg_hba.conf?
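
    On Ubuntu the postgresql-common wrappers can migrate the cluster in place rather than copying files by hand; a sketch (back up /var/lib/postgresql first, and note that pg_ctl is deliberately not on the PATH on Debian/Ubuntu, the pg_ctlcluster wrapper is used instead):

      sudo apt-get install postgresql-8.4    # reinstall the old binaries if the upgrade removed them
      sudo pg_dropcluster --stop 9.1 main    # drop the empty 9.1 cluster that 12.04 created
      sudo pg_upgradecluster 8.4 main        # dump the 8.4 cluster and reload it into a new 9.1 one

    pg_upgradecluster also carries pg_hba.conf and postgresql.conf settings over, which beats straight-copying an 8.4 config onto 9.1.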

    Read the article

  • FC SAN network high-error rate simulation

    - by Wieslaw Herr
    Is there a way to simulate a malfunctioning device or a faulty cable in an FC SAN network? edit: I know shutting down a port on a switch is an option; I'd like to simulate high error rates, though. In an Ethernet network it would be a simple case of adding a transparent bridge that discards a given percentage of the packets, but I have absolutely no idea how to tackle that in a Fibre Channel environment...

    Read the article

  • Frame rate upsampling codec/player/software?

    - by djechelon
    Hello, I recently noticed that when I play both HD and SD videos on my HDMI TV at 1080p 60 Hz from my computer, the motion is not as fluid as I would expect. As far as I know, this could be because the 24 fps video needs to be upsampled by the codec to match the 60 Hz output of the monitor, and as far as I know the upsampling is done by simply repeating each frame for a certain number of refreshes. I usually play MKV videos with VLC. Do you know if there is a player or codec that performs the upsampling by interpolation, like some 100 Hz TVs do? I recently saw an LG LED TV play a 24 fps 720p video at 100 Hz with incredible motion fluidity, and I simply wonder why my computer can't do the same! I have an NVIDIA card. Does PureVideo help? I'm a noob with these things. Thank you.

    Read the article

  • How can I send super large files directly to another computer over the Internet for free?

    - by Cruise
    I regularly need to transfer very large files (30 GB) to my friend: financial statistics. I don't have any problem with bandwidth; it is very broad here. I did some research in the area, so:

    1. I would not use FTP, as it is very tricky to get working behind a NAT.
    2. I would not use Skype/MSN/ICQ, as they are not designed for file transfer and underperform on huge files.
    3. I would not use file-sharing services, as I'd need to pay for big files (30 GB is a problem here) and I don't like holding any piece of my data on a third-party server.

    So I need some smart tool that will do what I need: sending files directly browser-to-browser, not browser-server-browser. Is that so complex? Is there some web application on the Internet that can do this?
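
    If both ends can run a command-line tool instead of a browser, the most direct route is a raw socket between the two machines, e.g. netcat; note it faces the same NAT problem as FTP unless the receiving side can open one port. A sketch (port 9000 is arbitrary, and flag spelling differs slightly between netcat variants; this is traditional netcat):

      nc -l -p 9000 > statistics.dat                  # on the receiving machine
      nc receiver.example.com 9000 < statistics.dat   # on the sending machine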

    Read the article

  • Apple Mail inbox multiplying at an alarming rate

    - by mechko
    All of a sudden this morning, Apple Mail started downloading emails from my Gmail account even though I knew there were no new emails. I looked at the inbox and discovered that there were four copies of each recent email. I cancelled the sync, and Mail promptly started to sync twice as many emails. After a few attempts I had approximately 32 times my inbox preparing to sync, so I closed Mail and left it that way. Does anyone know what happened, why, and, most importantly, how to fix it?

    Read the article

  • Rate of UDP packet loss over WLAN

    - by Martin
    While testing something with TFTP I noticed lots of timeouts (and slow speeds as a result) when I used my WLAN, and no problems when using a network cable. A quick test program sending/receiving UDP revealed that about 3-5% of packets are lost. While it's obvious that WLAN has to be less reliable than wired LAN, I have no idea what loss rates are considered 'normal', and when there is a need to further investigate the network infrastructure. Are there 'typical' packet loss rates for WLAN (and other network technologies, e.g. PowerLAN, WAN, ...)? Thanks
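
    To put numbers on it, iperf measures UDP loss directly and prints a loss percentage in its report (the flags below are iperf2's):

      iperf -s -u                          # on the wired reference host
      iperf -c <server> -u -b 10M -t 30    # on the WLAN client: 10 Mbit/s of UDP for 30 s

    Running the same test over the cable gives a baseline to compare the WLAN figure against.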

    Read the article

  • Transferring SMS messages from iPhone to computer?

    - by green7o7o
    I've got a problem; maybe it has been discussed before. My iPhone has started rejecting new SMS messages, but all the existing SMS are so important that I don't want to delete them. So I simply need to back up my iPhone SMS onto a computer, or print them, or transfer them to another iPhone. I'm looking for a way to transfer SMS from an iPhone to a computer and keep them safe. Hope to get answers as soon as possible.

    Read the article

  • Host data transfer limit calculations and network protocol headers

    - by UpTheCreek
    OK, this might be a really stupid question, but... I'm building a web app that utilises websockets. There's fairly rapid messaging going on, so I've been taking a look at the network traffic with Wireshark to see if there's any way of reducing the amount of data we send over the wire, and hence costs. A typical message has an approx 150-byte data payload, and according to Wireshark the lower-layer headers take up about:

      Ethernet: 14 bytes
      IP: 20 bytes
      TCP: 20 bytes

    My question is: are these network headers included in data transfer calculations? What about TCP ACK messages (another 54 bytes according to Wireshark)? This may seem petty, but because we have so much messaging going on, and because the payload is a similar size to these headers, it's significant.
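
    Whether headers count depends on where the provider meters; most meter at the IP layer or below, so assume at least the IP and TCP headers are billed and ask the host whether Ethernet framing is too. The arithmetic shows why it matters at this payload size; a quick back-of-envelope check, assuming one ACK per data packet:

      payload=150; hdrs=54; ack=54                # 54 = Ethernet 14 + IP 20 + TCP 20
      total=$((payload + hdrs + ack))             # 258 bytes on the wire per message
      echo "overhead: $(( (hdrs + ack) * 100 / total ))%"   # prints: overhead: 41%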

    Read the article

  • Transfer disk image to larger/smaller disk

    - by forthrin
    I need to switch the hard drive in a 2006 iMac to a new SSD. I don't have the original installation CDs. I know I can order CDs from Apple, but this costs money. Someone told me it's possible to rip an image of the old drive and transfer it to the new drive. If so, does the size of the new drive have to be exactly the same as the old one? If not, my questions are: Is it possible to "stretch" the image from a 120 GB disk to a 256 GB disk (numbers are examples)? If so, what is the command line for this? Likewise, is it possible to "shrink" an image from a larger disk (e.g. 256 GB) to a smaller one (e.g. 120 GB), provided that the space actually used on the disk does not exceed 120 GB? How do you do this on the command line?
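
    Growing is the easier direction: a raw copy brings the filesystem over as-is, and the volume can then be expanded in place. A hedged sketch; the device names are assumptions, so verify both with diskutil list before running dd, since swapping if= and of= is destructive:

      sudo dd if=/dev/rdisk0 of=/dev/rdisk1 bs=1m    # raw copy, old disk -> new disk
      sudo diskutil resizeVolume disk1s2 R           # grow the cloned volume to fill the disk

    Shrinking works in the opposite order: first shrink the source volume below the target size (also with diskutil resizeVolume), then copy only that much.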

    Read the article

  • How do you limit the bandwidth for a file copy?

    - by wizard
    I've got an old Windows 2000 box in a remote location with a T1 connection and a VPN to my location. I normally use SMB mounts to transfer files, but now it's time to decommission the server and copy its backups to my location. I have about 40 gigabytes (compressed) to copy. I'm prepared for it to take a long time, but I have a few caveats:

    1. I need to limit the bandwidth so that Terminal Services connections to the site are not affected.
    2. I want to be able to resume a partial transfer.
    3. There are a few small files and several large files (10-20 gigabytes).

    I'm familiar with rsync on *nix platforms, but I've had bad luck with it on Windows, and I don't know that it will really resume partially transferred files. What do you use?
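
    On the Windows side the closest built-in fit is probably robocopy (standard from Vista onward; for Windows 2000 it ships in the Resource Kit): /IPG inserts a gap in milliseconds between 64 KB blocks to throttle bandwidth, /Z makes copies restartable, and /E recurses. Paths below are placeholders:

      robocopy \\oldserver\backups D:\backups /E /Z /IPG:200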

    Read the article

  • Transferring files over ssh on Linux while maintaining permissions

    - by jbolt
    I need to transfer files across ssh to another server. The file structures are identical on both sides. I have used scp -r, but that does not retain the original file/dir permissions. rsync does the job of keeping the permissions intact, but does not delete files on the destination side when I want to overwrite them with changes. I know rsync will write the changes when the source files are newer, but I need it to just copy everything regardless of date (i.e. replace the destination directory with the one I am moving) without having to shell into the destination first and manually delete the dir. I heard tar can do this, but I cannot seem to get it to work without errors. The syntax is:

      tar -cf - /directory/directory | ssh host.name "tar -xf - -C /destination_directory"

    Any help would be appreciated.
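
    For the stated goal, rsync alone may already cover it; a sketch: -a keeps permissions and ownership, --delete removes destination files that no longer exist at the source, and -I (--ignore-times) forces everything to be recopied regardless of dates:

      rsync -aI --delete /directory/ host.name:/destination_directory/

    The trailing slashes matter: they make rsync sync the contents of /directory into /destination_directory instead of nesting one inside the other.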

    Read the article

  • How to find out when to increase bit rate? (TCP streaming solution)

    - by Kabumbus
    We have a stream of "frames"; each frame has a timestamp, and a bit-rate property which is effectively its size. We generate frames with our app and stream them one by one to our TCP server socket. The server posts replies, so after each sent frame we read from the socket and learn which timestamp is currently on the server. If that timestamp is lower than the previous frame's, we lower the bit rate by 20%. This scheme seems to work, giving me one-way VBR (lowering only), but I wonder how to implement the increase. I mean, we could always try to increase by 5% each frame until some desired limit, but each time there's a delay we lose the real-time property of our stream... Generally the scheme exists to find out how much of the network path is currently used by other apps, and how loaded the server is, so that we stream just the right amount of data for everyone to receive it in real time. So what shall I do to add an increase to my scheme? Having a current bit rate of A, I thought we could add +7% for 3 frames and then one -20%, and if all 3 of the +7% frames arrived in time we could add 14% to A and repeat the cycle; hopefully it would not be really noticeable if the 2nd frame reached us with a delay... This is probably too localised, but it is a requirement for me to use TCP.
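
    What is described here is essentially AIMD (additive increase, multiplicative decrease), the same shape as TCP's own congestion control, and the stable combination is exactly that: probe upward in small steps, cut hard on the first sign of delay. A toy sketch of the loop, where send_frame is a hypothetical helper that sends one frame and prints the timestamp the server reports back:

      rate=1000                            # starting bit rate, kbit/s
      last=0
      while :; do
          acked=$(send_frame "$rate")      # hypothetical helper, see above
          if [ "$acked" -lt "$last" ]; then
              rate=$(( rate * 80 / 100 ))  # falling behind: multiplicative decrease, -20%
          else
              rate=$(( rate + rate / 20 )) # keeping up: additive probe, +5%
          fi
          last=$acked
      done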

    Read the article

  • CURL & web.py: transfer closed with outstanding read data remaining

    - by Richard J
    Hi folks, I have written a web.py POST handler, thus:

      import web

      urls = ('/my', 'Test')

      class Test:
          def POST(self):
              return "Here is your content"

      app = web.application(urls, globals())
      if __name__ == "__main__":
          app.run()

    When I interact with it using curl from the command line, I get different responses depending on whether I post any data or not. Posting no data to the server gives me back the "Here is your content" string:

      curl -i -X POST http://localhost:8080/my
      HTTP/1.1 200 OK
      Transfer-Encoding: chunked
      Date: Thu, 06 Jan 2011 16:42:41 GMT
      Server: CherryPy/3.1.2 WSGI Server

      Here is your content

    Posting example.zip to the server results in this error:

      curl -i -X POST --data-binary "@example.zip" http://localhost:8080/my
      HTTP/1.1 100
      Content-Length: 0
      Content-Type: text/plain

      HTTP/1.1 200 OK
      Transfer-Encoding: chunked
      Date: Thu, 06 Jan 2011 16:43:47 GMT
      Server: CherryPy/3.1.2 WSGI Server

      curl: (18) transfer closed with outstanding read data remaining

    I've scoured the web.py documentation (what there is of it) and can't find any hints as to what might be going on here. Possibly something to do with 100 Continue? I tried writing a Python client, which might help clarify:

      h1 = httplib.HTTPConnection('localhost:8080')
      h1.request("POST", "http://localhost:8080/my", body, headers)
      print h1.getresponse()

    Here body is the contents of example.zip and headers is an empty dictionary. This request eventually timed out without printing anything, which I think exonerates curl from being the issue, so I believe something is going on in web.py which isn't quite right (or at least not sufficiently clear). Any web.py experts got some tips? Cheers, Richard
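
    One plausible culprit (an assumption, not something the web.py docs confirm): the handler returns before the request body has been read, so the server closes the connection while curl still has data in flight, which matches error 18. Draining the body first is a cheap thing to try; web.data() reads the raw request body:

      class Test:
          def POST(self):
              _ = web.data()   # read and discard the body before replying (assumption:
                               # the unread body trips 'transfer closed ... remaining')
              return "Here is your content"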

    Read the article

  • WPD on XP, Vista, and 7 (need to transfer photo and video files)

    - by Bradley Dean
    I need to transfer files (still photos and videos) from any portable device a user may connect (still camera, video camera, mobile phone, etc.). I don't need to worry about plain storage devices, as these get drive letters, and I only care about existing files: I don't care about live video, video preview, taking new pictures, etc. I originally tried WIA, which works great except that it cannot transfer video files. So then I tried WPD, following along with dimeby8's tutorial: http://blogs.msdn.com/b/dimeby8/archive/2006/09/27/774259.aspx I haven't gotten the transfer working yet (I'm converting it over to C#), but I can at least see the device and enumerate the files in Win7. In XP I get nothing. It's pointed out in this thread that WPD won't enumerate devices on XP (see Lisa O [MSFT]'s post): http://social.msdn.microsoft.com/Forums/en/windowssdk/thread/56459945-b757-45df-8c9f-4ebdbbb18a2c So WIA is out because it won't do video, and WPD is out because it won't do XP. Has anyone gotten this to work? Am I missing something simple here? Thanks.

    Read the article

  • Can I force my video card to work at a certain refresh rate?

    - by EFanZh
    I have a GeForce GT 640 video card, but by default the screen flickers badly. If I manually change the refresh rate to 59 Hz in the NVIDIA Control Panel, everything is OK; I don't know why. The problem is that when the system restarts, the refresh rate goes back to 60 Hz and the screen flickers again, so I have to change the refresh rate every time. Can I keep the refresh rate at 59 Hz? Or, better still, is there a way to stop the screen flickering at 60 Hz?

    Read the article

  • Creating an ssh tunnel to transfer files?

    - by Vincent
    For me, networks are a very "opaque" thing, and even after reading a lot of tutorials about SSH, I do not understand how to create a basic tunnel to transfer my files. The configuration is the following:

      My Computer --[Internet]--> Bridge Machine --[Local Network]--> Final Machine

    Currently I do the following:

    1. Connect to the bridge machine: ssh -X [email protected]
    2. Connect to the final machine: ssh -X username@finalmachine
    3. Copy the address of the files I need (for example .../mydirectory)
    4. Disconnect from the final machine: exit
    5. Copy the files to the bridge: scp -r username@finalmachine:/.../mydirectory .
    6. Disconnect from the bridge: exit
    7. Copy the files to my machine: scp -r [email protected]:/.../mydirectory .

    This is quite complicated. My question is basic: how do I simplify this using an SSH tunnel? (Please explain what each command line you write really does, so I can avoid using it as a magical thing. Also, if port numbers are used, tell me whether I can pick a completely random number or have to choose a specific one.)
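
    The standard trick is a local port forward through the bridge; a sketch, where 2222 is an arbitrary free local port above 1024 (any such number works):

      # 1. open the tunnel: connections to port 2222 on your machine are carried
      #    through mybridge and delivered to port 22 (ssh) on finalmachine
      ssh -L 2222:finalmachine:22 [email protected]

      # 2. in a second terminal: scp talks to localhost:2222, which is really
      #    finalmachine's ssh port, so the copy is a single step from your side
      scp -P 2222 -r username@localhost:/.../mydirectory .

    The tunnel has to stay open while the copy runs; step 1's shell can simply be left sitting at the bridge's prompt.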

    Read the article

  • How to allow users to transfer files to other users on Linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1 PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

      > give username-to-give-to filename-to-give

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

      > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?

    Read the article

  • Limiting bandwidth and transfer rates per user

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port and want to give each user his/her fair share of internet access. The first objective is easy: transfer rates (speed) per user. From what I've looked at, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300 Mbit or 650 Mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT count toward the quota. However, I still need to limit the external traffic, and if a user goes over, cut off access (or throttle their traffic to a very low speed, say 10 Mbit). Let's say the user has a 3 TB external traffic limit. The IF part is: if the hostname they are exchanging the traffic with does NOT match .ovh. or .kimsufi. (the company owns multiple TLDs), count it toward the quota. Once said quota exceeds 3 TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be manually resettable, on a monthly basis. Thanks ahead of time!
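
    For the first objective, a hedged sketch of the usual pairing: mark one user's outbound packets with the iptables owner match, then shape that mark with an HTB class (eth0, uid 1001 and the 300 Mbit figure are assumptions):

      iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 11
      tc qdisc add dev eth0 root handle 1: htb
      tc class add dev eth0 parent 1: classid 1:11 htb rate 300mbit ceil 300mbit
      tc filter add dev eth0 parent 1: protocol ip handle 11 fw flowid 1:11

    The owner match only sees locally generated traffic, which fits users running on the box itself. The quota side cannot key on hostnames in iptables; one approximation is to whitelist the provider's netblocks and total the remaining per-mark byte counters (iptables -L -v -x) from a monthly-reset cron job.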

    Read the article

  • Transfer iptables rules to another server in (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall (CSF) plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now I'd like to push those IP blocks to the border router, to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur within an hour, the IP is blocked permanently and has to be unblocked "by hand". So I need a near-realtime solution. What I'm looking for is a better way than firing cron jobs on both the cPanel boxes and the border router to: dump the rules to a file, transfer the file to the border router (via scp/sftp), and load the rules from the file on the border router. I'm aware that I will need some scripts to parse and modify the rules, as the cPanel boxes have one ethernet interface plus some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux. EDIT, as per @pjmorse's comment: the plugin consists of a bunch of Perl and config files. The part I'm interested in is a process (lfd) which scans logfiles and installs iptables rules (and sends an alert email). Fact is, it upgrades quite often (once or twice a week) and is itself 7,000 lines of Perl, so I'm not comfortable tampering with it.
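
    Assuming key-based ssh from the cPanel boxes to the router, the dump/transfer/load steps can collapse into one pipe that runs from cron (or from anything able to watch lfd's log) without tampering with lfd itself; rewrite-rules.sh is a hypothetical filter that remaps the cPanel interface names and aliases to the router's:

      iptables-save -t filter | ./rewrite-rules.sh |
          ssh root@border-router 'iptables-restore --noflush'

    --noflush makes iptables-restore add the pushed rules without wiping the router's own ruleset.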

    Read the article
