Search Results

Search found 48797 results on 1952 pages for 'read write'.

Page 397/1952 | < Previous Page | 393 394 395 396 397 398 399 400 401 402 403 404  | Next Page >

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like:

        tar c . | pv | nc blah blah blah

    And it works great; the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and run 64-bit Ubuntu 9.04 Server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
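
    A rate limit on the pipe should keep the source from ever outrunning the destination. Since pv is already in the pipeline, its -L (rate-limit) flag is the natural place for it; a minimal sketch, where the 30m figure is a guess to be tuned to the receiver's sustained write speed, and the host and port are placeholders:

        # throttle the stream to ~30 MB/s so buffered data never piles up in RAM
        tar c . | pv -L 30m | nc dest.example.com 9000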

    Read the article

  • Setting max_allowed_packet for MySQL on Solaris 10

    - by Drakonen
    I want to set the max_allowed_packet setting for MySQL (5.1.31), which is running on Solaris 10. Unfortunately MySQL does not seem to read the my.cfg. I tried placing it in /etc/my.cfg, /opt/mysql/mysql/data/my.cfg and in /opt/mysql/mysql/support-files/my.cfg. With each of these locations, max_allowed_packet does not get set, as I can check with:

        select @@max_allowed_packet;

    When I start mysqld as follows, it does pick up the setting:

        # su mysql
        $ mysqld --defaults-file=/etc/my.cfg

    These are the contents of my.cfg:

        [mysqld]
        max_allowed_packet = 50M

    How can I make MySQL read the config when I start it with the SMF tools?
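
    One thing worth checking: by default mysqld searches for files named my.cnf, not my.cfg, so a .cfg file is only ever read when passed explicitly via --defaults-file. A quick way to list exactly which paths the server searches, assuming a stock build:

        $ mysqld --verbose --help | grep -A 1 "Default options"

    The output begins with "Default options are read from the following files in the given order:" followed by build-specific paths (commonly /etc/my.cnf and the data directory's my.cnf); renaming the file to one of those should make the SMF-started instance pick it up without touching the service manifest.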

    Read the article

  • Why is the `remove rows` button not always present in MySQL Workbench?

    - by Shawn
    I'm using MySQL Workbench to remove rows from a table in my database. Most of the time, I will simply write a select statement, then select the rows I want to erase and use the remove rows button (circled in the screenshot below). But sometimes (quite often, actually), the remove rows button does not appear. Instead, I get something like the screenshot below: the remove rows button is not there, and the remove rows option is greyed out in the context menu, so basically I can't remove rows... The only way I've found of solving this issue is to run the select query many times until the button appears (it usually does after 3 or 4 tries). Does anyone know why this is happening?

    UPDATE: Today I've been running a select query dozens of times and the button never appears. It seems my incomprehensible workaround no longer works... Help!

    By the way: using a delete statement does work, though I would rather not have to write one for each row I want to remove, as this happens quite frequently during development...
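
    For the workaround, a plain DELETE sidesteps the grid entirely; a sketch with the table, key column, and values as placeholders:

        DELETE FROM my_table WHERE id IN (3, 7, 42);

    As a hedged guess at the root cause: Workbench only makes a result grid editable when it can map each grid row back to exactly one table row, which generally requires the table's primary key to appear in the SELECT list; a query that omits the key (or joins several tables) tends to come back read-only, which would match the greyed-out button.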

    Read the article

  • Pendrive not working on USB port - USB2 Enhanced Host Controller - 27CC

    - by user1664417
    I have a situation here that is kind of weird. The situation is as follows: if I enable the 'Intel(R) 82801G (ICH7 Family) USB2 Enhanced Host Controller - 27CC', then my PC is not able to read any USB stick/pendrive, but the USB ports work fine with keyboard and mouse. If I disable the device driver, then every kind of pendrive can be read. Here is what I have done so far:

    1) Tried the same pendrive on another PC; it works like a charm.
    2) Tried other pendrives on the problem PC; same problem, but they work fine on other PCs.
    3) Tried all the ports, including the ports the mouse and keyboard are connected to.
    4) Updated the driver version.
    5) Restored the default USB settings in the BIOS.

    Please help solve this issue if anyone has experienced it before. Many thanks.

    Read the article

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do whatever he wants: store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full within a week. There can be plenty of files, but I've dealt with that with paths like /tmp/[a-e]*/* and so on, and lived happily for a while; but as the developers try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting harder to understand which files a glob hits. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce a naming policy in that environment; I think it would take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the directories at night. So, in circumstances like these, is it a good idea to write a script to handle these temporary files? I prefer sticking with standard utilities whenever possible, but here logrotate seems less and less manageable. And perhaps someone has heard of a logrotate alternative that works well in such an environment? I don't need log emailing or other advanced features, so theoretically some well-commented find | xargs would do; a sketch along those lines follows below. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
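
    A minimal sketch of the find-based sweep, with every path, size, and age threshold an assumption to adjust; note that compressing a file a process still holds open orphans its later writes, which is why the live-file case truncates in place instead:

        #!/bin/bash
        # sweep developer scratch space; -type f skips sockets and directories,
        # which also avoids the logrotate-on-a-socket segfault
        BASE=/tmp
        # compress anything sizable not touched for a day
        find "$BASE" -type f -size +1M -mtime +1 ! -name '*.gz' -exec gzip {} \;
        # drop compressed leftovers after a week
        find "$BASE" -type f -name '*.gz' -mtime +7 -delete
        # cap files still growing: truncate keeps open file descriptors valid
        find "$BASE" -type f -size +100M -exec truncate -s 0 {} \;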

    Read the article

  • Task Manager always crashes within 1 or 2 seconds. Solutions?

    - by tallship
    This is the error report:

        Problem signature:
        Problem Event Name:       APPCRASH
        Application Name:         taskmgr.exe
        Application Version:      6.1.7600.16385
        Application Timestamp:    4a5bc3ee
        Fault Module Name:        hostv32.dll
        Fault Module Version:     0.0.0.0
        Fault Module Timestamp:   4c5c027d
        Exception Code:           c0000005
        Exception Offset:         0000000000068b73
        OS Version:               6.1.7600.2.0.0.256.48
        Locale ID:                1033
        Additional Information 1: bf4f
        Additional Information 2: bf4f79e8ecbde38b818b2c0e2771a379
        Additional Information 3: d246
        Additional Information 4: d2464c78aa97e6b203cd0fca121f9a58

        Read our privacy statement online:
        http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline:
        C:\Windows\system32\en-US\erofflps.txt

    Whenever I open Task Manager, within a few seconds it crashes, saying it has stopped working with the above report. I took the fault module (hostv32.dll) and scanned it with Avast, but it found no threat. Any reason/solution for this problem? Thanks
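
    hostv32.dll is not a standard Windows component, so identifying what loads it into taskmgr.exe seems like the first step. A hedged starting point from an elevated command prompt (tasklist's /m switch lists the processes that currently have a given DLL loaded; sfc is just a sanity check on system files):

        tasklist /m hostv32.dll
        sfc /scannow

    If tasklist ties the DLL to a third-party tool, updating or uninstalling that tool would be the likely fix.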

    Read the article

  • How to remove leading whitespace from file and folder names?

    - by timoto
    How to remove leading whitespace from file and folder names? (I'm running OS X 10.6 Snow Leopard.) As provided below by @Lri, I was able to remove trailing whitespace using this:

        #!/bin/bash
        IFS=$'\n'
        for d in {1..9}; do
          find ~/Desktop -name '* ' -depth $d | while read f; do
            mv "$f" "$(sed 's/ *$//' <<< "$f")"
          done
        done

    Now I'm trying to remove leading whitespace with this:

        #!/bin/bash
        IFS=$'\n'
        for d in {1..9}; do
          find ~/Desktop -name '* ' -depth $d | while read f; do
            mv "$f" "$(sed 's/^ *//;s/ *$//' <<< "$f")"
          done
        done

    but it doesn't work. What am I doing wrong?
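
    Two things look off in the second script, as a guess: the -name pattern '* ' still matches names with a trailing space rather than a leading one, and sed's ^ anchor is applied to the whole path, which begins with /Users/..., so it can never see the leading space of the basename. A sketch of a corrected version under those assumptions:

        #!/bin/bash
        IFS=$'\n'
        for d in {1..9}; do
          # ' *' matches names that begin with a space
          find ~/Desktop -name ' *' -depth $d | while read f; do
            dir=$(dirname "$f")
            base=$(basename "$f")
            # strip leading (and trailing) spaces from the basename only
            mv "$f" "$dir/$(sed 's/^ *//;s/ *$//' <<< "$base")"
          done
        done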

    Read the article

  • Would an array of SSD drives be able to successfully substitute for the system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs and reaching a peak of around 2 GB/s read/write. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 write performance varying from 2.78 to 6.55 GB per second, but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4 GB/s. And this other video shows that stacking up more than 3 SSDs doesn't realistically offer a substantial performance gain. This, and the fact that in all benchmarks the drives do quite poorly when dealing with small files (5k file read/write averaging from 10 MB/s to around 30-40 MB/s) as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming using this configuration with a Windows OS that has the memory pagefile resident on that stack of SSDs, thus making it very fast to work with.
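
    Since the use case is a pagefile, the figure that matters is random 4K performance rather than sequential throughput, and that is where SSDs fall furthest behind RAM. A hedged way to measure it on a candidate array, assuming fio is available (there is a Windows build taking the same options; the file path is a placeholder):

        # simulate pagefile-style access: random 4K reads, no cache
        fio --name=pagefile-sim --filename=/mnt/array/fio.tmp --size=1G \
            --rw=randread --bs=4k --direct=1 --runtime=30 --time_based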

    Read the article

  • How to set Laptop Brightness in Gnome and /proc on Thinkpad W510?

    - by hakre
    I can already set the brightness of my laptop screen via /proc; I can read and change the value. Now I've set the value to 33, and then I went into GNOME power management and enabled the option to reduce the backlight brightness when on battery. That works; the screen gets darker. If I now read the current setting from /proc, it still says 33. So I assume there is another node in /proc used to control the brightness. The node I have used so far is:

        /proc/acpi/video/VID/LCD0/brightness

    I'm using the nouveau driver. With the nvidia driver the brightness cannot be controlled.
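
    On kernels of that era there is usually a second interface under /sys/class/backlight, and GNOME power management typically writes there rather than to the ACPI node in /proc. A hedged way to check (the acpi_video0 name is a guess; ThinkPads often expose more than one entry):

        ls /sys/class/backlight/
        cat /sys/class/backlight/acpi_video0/actual_brightness
        echo 33 | sudo tee /sys/class/backlight/acpi_video0/brightness

    If the value under /sys changes when GNOME dims the panel while the /proc node stays at 33, that would explain the discrepancy.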

    Read the article

  • How to disable color dithering for low-bit-depth screen settings?

    - by gogowitsch
    I am using Terminal Services and TeamViewer a lot to access other computers, partly over slow networks. The problem described below is not affected by which of the two remote access services I am using. When accessing Windows 7 Professional machines, a great deal of text is hard to read because the background is dithered. Even for exactly the same colors, Windows 2003 does not seem to dither at all, but chooses the closest available color instead. I strongly prefer the latter, as I don't care about the exact colors; I just want to be able to read easily. I am not sure whether this is operating-system related. The programs on the remote systems do not allow me to change the color choices for the various backgrounds to anything sane. Is there a way to disable this color dithering through some setting on the target operating system that will do the trick for both Terminal Services and TeamViewer?
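
    On the Terminal Services side, dithering usually goes away when the session runs at a higher color depth, so that is worth ruling out first. A hedged sketch of the relevant line in a saved .rdp connection file (the same setting lives in the mstsc GUI under Display > Colors, and a server-side Group Policy, "Limit maximum color depth", can cap it from the other end):

        session bpp:i:32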

    Read the article

  • Xterm is not completely erasing field lines

    - by user26367
    We have an SSH tunnel to a remote Unix box from Windows clients using Cygwin. It launches a terminal program from the Unix box locally on the Windows box for data input. The xterm window is launched as follows:

        xterm -fn 10x20 -bg DodgerBlue4 -fg white -cr white -ls -geometry 90x30 -e program

    When a screen goes from read-only mode to edit mode, the edit fields show underscores. When going back to read-only mode, a single-pixel artifact is left behind for each field:

        *readonly*
        User:

        *edit*
        User: ___________

        *after edit exit*
        User: .    <- this dot is left behind

    Any idea what we need to change to fix this?

    Read the article

  • Inverting colors of a PDF

    - by legr3c
    I need to invert all the colors of a PDF document (background, text, graphics, and images). I want the inversion to persist in the file, so the inverted viewing options that some viewers offer won't help. Rasterizing the document and using image manipulation software is also not an option. I read somewhere that this can be done with the Enfocus PitStop plugin for Acrobat; however, I didn't see a corresponding command anywhere. Am I missing something? Then I read that the ARTS PDF Crackerjack plugin for Acrobat offers negative printing, so I tried that too. The option is there, but it simply doesn't work. I have been searching for a very long time for a way to do this. It seems like a common enough task, but I just can't find out how to do it. Are there maybe any virtual printer drivers or something of the sort that support negative printing? Can anyone help?
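
    One non-Acrobat avenue that may be worth trying is Ghostscript: PostScript transfer functions can invert each color channel while the pdfwrite device re-emits the document as a PDF. A hedged sketch, with the caveat that transfer functions are applied at rendering time, so a viewer or printer that ignores them would still show the original colors:

        gs -o inverted.pdf -sDEVICE=pdfwrite \
           -c "{1 exch sub}{1 exch sub}{1 exch sub}{1 exch sub} setcolortransfer" \
           -f input.pdf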

    Read the article

  • High shmmax value in Redhat 6.3

    - by xpapad
    We are using Redhat 6.3 with 30GB of RAM to host our Postgres server. The (default) shmmax value is 68,719,476,736 (64GB). In some forums I have read that having an shmmax value larger than the RAM causes extensive paging, but the Redhat forums warn against changing a kernel parameter that is already configured to a value larger than the minimum requirements for an environment. On Server Fault I've read that this probably has no impact. So is there any impact of having an shmmax value larger than RAM on a DB server, or does the kernel understand this and handle it appropriately? Thanks
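
    For context, shmmax is only an upper bound on the size of a single System V shared-memory segment; it reserves nothing by itself, which is why an over-RAM value is normally harmless. Checking and, if desired, pinning it looks like this, with the 16GB figure purely illustrative:

        # current value
        cat /proc/sys/kernel/shmmax
        # set at runtime
        sysctl -w kernel.shmmax=17179869184
        # persist across reboots
        echo "kernel.shmmax = 17179869184" >> /etc/sysctl.conf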

    Read the article

  • How to take a MySQL replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I could not proceed further because of the following questions:

    1. Is scheduling a mysqldump backup on the masters safe for replication?
    2. Is connecting to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers) safe?

    Any inputs are welcome.
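
    On the first question, a common pattern is to take the dump from a slave rather than a master, so any locking the dump needs never stalls production writes. A hedged cron-able sketch using stock mysqldump flags (--single-transaction gives a consistent snapshot without locking InnoDB tables; --master-data=2 records the binlog position as a comment for point-in-time recovery):

        #!/bin/bash
        # nightly dump taken on the slave; credentials assumed to live in ~/.my.cnf
        mysqldump --single-transaction --master-data=2 --all-databases \
            | gzip > /var/backups/mysql-$(date +%F).sql.gz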

    Read the article

  • What does the reconstruction process of mdadm do exactly on raid10?

    - by Azrael
    I've got a system with 4 disks set up as RAID 10. All disks are usable, and mdadm shows all of them as UUUU. Due to a recent system crash, the array was marked as "not clean" and a reconstruction process was started. On a closer look, the kernel log shows problems on one disk:

        sd 0:0:0:0: [sda] Unhandled sense code
        sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
        Descriptor sense data with sense descriptors (in hex):
        72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 24 cd 78 d4
        sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        sd 0:0:0:0: [sda] CDB: Read(10): 28 00 24 cd 75 1e 00 04 00 00

    In my research about the reconstruction process I only found information concerning RAID 5, nothing for RAID 10. Can I replace this problematic disk during the reconstruction process, or will I kill the raid by doing so?
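
    A hedged set of commands for keeping an eye on the resync and, if the disk does have to go, failing it out cleanly (device names are placeholders):

        # watch resync progress
        cat /proc/mdstat
        mdadm --detail /dev/md0
        # once the resync finishes (or the kernel kicks sda out on its own),
        # mark the member failed, remove it, then add the replacement
        mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
        mdadm /dev/md0 --add /dev/sde1   # new disk, partitioned to match

    Pulling a member mid-resync is the risky part: while the array is resynchronising, the mirror halves are not guaranteed to be in sync yet, so letting the resync finish first, if the disk holds up, is the safer order.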

    Read the article

  • Does disabling BIOS shadowing increase free RAM space?

    - by user32569
    Hi, I know that these days this is a very stupid question, but it's for study purposes. I read that when a PC starts, the CPU is set to read from an address just under 4GB, where the BIOS is mapped in by the memory controller. My question is: in the old days, did disabling BIOS shadowing actually free some RAM for you? I mean, even when the BIOS was not shadowed into RAM directly, the addresses used for BIOS MMIO access were still wasted. And when you can't address it, it's like there is no extra space gained.

    Read the article

  • Iptables REDIRECT + openvpn problem

    - by Emilio
    I want to redirect connections to port 22 to my OpenVPN bound port, 60001. OpenVPN is running on the server on 60001:

        server:~$ sudo netstat -apn | grep openvpn
        udp   0   0 67.xx.xx.137:60001   0.0.0.0:*   4301/openvpn

    I redirect port 22 to 60001 on the server:

        server:~$ sudo iptables -F -t nat
        server:~$ sudo iptables -A PREROUTING -t nat -p udp --dport 22 -j REDIRECT --to-ports 60001

    I start the OpenVPN client (openvpn.conf is correct; it works with the remote port 22 replaced with remote port 60001):

        client:~$ ./openvpn openvpn.conf
        Tue Apr 27 00:42:50 2010 OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [EPOLL] built on Mar 23 2010
        Tue Apr 27 00:42:50 2010 UDPv4 link local (bound): [undef]:1194
        Tue Apr 27 00:42:50 2010 UDPv4 link remote: 67.xx.xx.137:22
        Tue Apr 27 00:42:52 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        Tue Apr 27 00:42:55 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        ...

    It doesn't connect. iptables shows requests from the client to the server but no answers. What's wrong with it?
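
    A hedged debugging step: watch both ports on the server's external interface while the client retries, to see whether the REDIRECT rule rewrites anything at all (the interface name is a guess):

        sudo tcpdump -ni eth0 'udp port 22 or udp port 60001'

    One detail possibly worth checking: OpenVPN here is bound to the specific address 67.xx.xx.137, while REDIRECT rewrites the destination to the primary address of the interface the packet arrives on; if those two differ, the redirected datagram lands on an address nothing listens on, and the kernel replies with port-unreachable, which the client would report as ECONNREFUSED.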

    Read the article

  • Songbird too damn slow, at least on my Mac

    - by Cawas
    I've read in a few places that Songbird is no good with more than a few thousand library items, because it starts getting quite slow. Well, in my case (a clean install) I've imported 17k items (which I know is not that much), and instead of just becoming too slow it frequently goes to "not responding" for several minutes before coming back to its senses, and that happens for whichever random operation, such as deleting one item from the library. I've also read in a few more places that there is very little hope of fixing this issue, but I wonder: is there any way to tweak it and make it work as fast as expected? Am I missing something, or is this just a completely and utterly useless piece of software for libraries with more than 10 thousand things in them?

    Read the article

  • CD-R suddenly unreadable?

    - by TheD
    I have a CD-R which has worked fine for a good while. All of a sudden, Windows can no longer read its data. It's not scratched, it's clean, and Windows can detect the disc and the space used on it. But whenever I attempt to access the stored data, either:

    - Explorer crashes, or
    - after 5-10 minutes of trying to read the disc, it opens up and just shows a desktop.ini file in Explorer.

    This happens on multiple machines. Any ideas? Is there a way to recover the data? For example, some sector-by-sector recovery software, if any exist for CDs?
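
    Sector-by-sector tools do exist for optical media; GNU ddrescue is the usual choice, since it retries bad regions and keeps a map file so later runs resume where they left off. A sketch from a Linux live environment (device name and paths are assumptions):

        # -b 2048: CD-ROM sector size; the map file lets reruns skip good areas
        ddrescue -b 2048 -r 3 /dev/cdrom cdr-image.iso cdr-image.map
        # then mount the image read-only and copy the files out
        sudo mount -o loop,ro cdr-image.iso /mnt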

    Read the article

  • Using mongodump with an auth enabled mongodb server

    - by bb-generation
    I'm trying to do a daily backup of my MongoDB server (auth enabled) using the mongodump tool. mongodump provides two parameters for the credentials:

        -u [ --username ] arg    username
        -p [ --password ] arg    password

    Unfortunately it provides no parameter to read the password from stdin. Therefore, every time I run this command, everyone on the server can read the password (e.g. by using ps aux). The only workaround I have found is stopping the database and accessing the database files directly using the --dbpath parameter. Is there any other solution which allows me to back up the MongoDB database without stopping the server and without "publishing" my password? I am using Debian squeeze 6.0.5 amd64 with mongodb 1.4.4-3.

    Read the article

  • How to view multiple log files as one file in unix/linux

    - by user42679
    Hi, I was wondering if there is a convenient way in Linux/Unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1, app.log.2, etc.) as one big file using normal Unix tools (vi, less, etc.). When EOF is reached, the tool would automatically move to the beginning of the next file. During my work I have to analyze UAT/prod logs to investigate and solve problems, and having to traverse many log files disturbs my work and causes delays. Any ideas?
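
    The simplest trick is to concatenate the files into one stream in chronological order and page that; a sketch, assuming the usual logrotate numbering where higher suffixes are older:

        # oldest first, current log last
        cat app.log.{9..1} app.log | less
        # if some rotations are gzipped, zcat -f handles both forms
        zcat -f app.log.{9..1} app.log | less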

    Read the article

  • proftpd initial directory for each user

    - by Dels
    After successfully setting up a proftpd server, I want to set the initial directory for each user. I have 2 users: webadmin, which can access all folders, and upload, which can only access the upload folder.

        # Added config
        DefaultRoot ~
        RequireValidShell off
        AuthUserFile /etc/proftpd/passwd

        # VALID LOGINS
        <Limit LOGIN>
          AllowUser webadmin, upload
          DenyALL
        </Limit>

        <Directory /home/webadmin>
          <Limit ALL>
            DenyAll
          </Limit>
          <Limit DIRS READ WRITE>
            AllowUser webadmin
          </Limit>
        </Directory>

        <Directory /home/webadmin/upload>
          <Limit ALL>
            DenyAll
          </Limit>
          <Limit DIRS READ WRITE>
            AllowUser upload
          </Limit>
        </Directory>

    Everything works, but I need to tell my FTP client the initial directory for each user (otherwise it fails to retrieve the directory listing), which I think should be set automatically for each user (no need to type an initial directory into the FTP client).
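
    proftpd has a directive for exactly this: DefaultChdir changes into a directory right after login, and it takes an optional group expression to scope it. A hedged sketch (it assumes the upload account is, or can be put, in a group named uploaders; per-user <IfUser> blocks would achieve the same):

        # start the upload account inside the upload folder after login
        DefaultChdir /home/webadmin/upload uploaders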

    Read the article

  • Easily manage vsftpd virtual users?

    - by Phil
    I have a vsftpd server configured with many virtual users: logins are stored in a Berkeley DB file, and one configuration file exists for each user to define his permissions (read-only or read-write, home directory, etc.). For that I use the user_config_dir parameter (set in vsftpd.conf). I am wondering if it would be possible to manage these virtual users from a simple GUI (such as a web interface). I have found some tools, but they are limited to generic vsftpd configuration, not virtual user management. Otherwise, PAM-MySQL seems to be a good way to manage users efficiently, but only the username/password and logs can be stored in the database, not the permissions. Finally, I've found this thread, but the solution is a bit awkward... Is there any way to easily manage the vsftpd users?
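
    Short of a full GUI, the Berkeley DB side scripts easily, which makes a thin web front end mostly a matter of regenerating a text file and re-running db_load. A sketch of the standard rebuild step (paths are the common Debian-style ones, adjust to taste):

        #!/bin/bash
        # logins.txt holds alternating lines: username, then password
        db_load -T -t hash -f /etc/vsftpd/logins.txt /etc/vsftpd/login.db
        chmod 600 /etc/vsftpd/login.db
        # per-user permissions still live in one file each under user_config_dir,
        # e.g. /etc/vsftpd_user_conf/alice containing: write_enable=NO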

    Read the article

  • ideal memory configuration 4 bank, ddr3, AM3+ FX - 1 vs 2 vs 4 dimms?

    - by TardisGuy
    OK, so I've been looking around, trying to learn and understand the way that RAM works. One answer I got said "the addressing is best for 2 sticks, and when you use 4, it slows down." Another answer said something like: there's bank/channel interleaving that makes the memory read like one stick. I also read something about the memory density being a factor. I dug further and found out that there's a higher rated speed limit on my board for 2 sticks vs 4, so now I'm trying to put a picture in my head of how and why, and... pfft. Can anyone explain, or recommend a resource that would answer these questions?

    Read the article

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; the openldap client had already been installed on this computer. The version of openldap-client (2.4.16) was older than the new openldap-server (2.4.21), so the client was updated too. openldap-client is used by Postfix on this server, and after all the updates Postfix cannot start any more. The error from postfix stop|start is:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the current version of openldap-client, the system writes:

        ===> Deinstalling for net/openldap24-client

    O.K., but when I run "make install", the system writes:

        ===> Installing for openldap-sasl-client-2.4.23
        ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
        ===> Generating temporary packing list
        ===> Checking if net/openldap24-client already installed
        ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
          You may wish to ``make deinstall'' and install this port again by ``make reinstall'' to upgrade it properly.
          If you really wish to overwrite the old port of net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER" in your environment or the "make install" command line.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and Postfix keeps reporting the error:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
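
    The underlying problem is that the postfix binary was linked against the old library version, so upgrading the client alone can never satisfy it; Postfix itself has to be rebuilt against libldap-2.4.so.7. A hedged sequence for the ports tree of that era (port paths are the usual ones):

        # clear the stale package registration, then reinstall the client cleanly
        pkg_delete -f openldap-client-2.4.21
        cd /usr/ports/net/openldap24-client && make reinstall clean
        # rebuild postfix so it links against the new libldap
        cd /usr/ports/mail/postfix && make deinstall reinstall clean

    With portmaster installed, "portmaster -r openldap24-client" would rebuild every port that depends on the library in one go.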

    Read the article
