Search Results

Search found 18029 results on 722 pages for 'stripe size'.


  • Should I partition a 1TB Hard Disk whose primary use is media storage?

    - by Senthil
    I am going to get a 1 TB hard disk. I will be storing 1080p or 720p movies, high-bitrate music, and pictures on it. I use my PC 90% of the time only to play/listen/view those. I am running out of space on my current hard disk, so I am getting another one. My specs are a 2.7 GHz dual core, a 512 MB GeForce 9400 GT, 2 GB DDR2 RAM, and all the proper Matroska codecs/players. I guess that is enough to play 1080p movies without a glitch, given an ideal hard disk. I've read about proper partitioning giving performance improvements, etc. I don't want my hard disk to be the bottleneck. Can someone tell me whether I should partition my 1 TB hard disk into many drives? If I should, what is the ideal size of each partition? Smooth playing of movies is very important to me. Once I start filling up the disk there is no turning back, so I want to get it right before I start. Thanks.
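
    For sequential media playback, a single large partition is generally fine; partitioning mainly helps organization, not throughput. For what it's worth, a minimal single-partition layout on Linux might look like the sketch below (assuming the new disk shows up as /dev/sdb, which is an example; on Windows the equivalent is simply one big NTFS volume in Disk Management):

        # a minimal sketch: one GPT partition spanning the whole disk
        parted /dev/sdb --script mklabel gpt
        parted /dev/sdb --script mkpart primary ext4 1MiB 100%
        mkfs.ext4 -L media /dev/sdb1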

    Read the article

  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the taskbar using a workaround (using another icon), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the taskbar. I have to find some other new, unused icon to pin, pin it, then modify it manually. The other problem this causes is that Windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use Media Player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin Windows Media Player, and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file. I can't believe how ridiculous this is. Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it, and re-pin it to the taskbar? The reason I want to copy it is because I start a .bat file (in particular git bash) and I manually set properties on the window like Quick Edit, an increased screen buffer, and its position and size. I don't want to have to do this for every single icon I want to pin, since they will be identical aside from the shortcut URL.
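
    One workaround worth trying (a sketch, not from the original question): Explorer pins shortcuts to .exe targets without fuss, so pointing each shortcut's Target at the interpreter rather than at the .bat file itself often sidesteps the dummy-icon dance entirely. The path below is an example:

        Target: C:\Windows\System32\cmd.exe /c "C:\scripts\git-bash.bat"

    Each copy then keeps its own window properties, and unpinning one should not affect Windows Media Player or any other icon.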

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older file server to evaluate its use for our needs. Our current setup is as follows:

    - Dual core (2.4 GHz) with 4 GB RAM
    - 3x SATA controllers with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:
            NAME         STATE    READ WRITE CKSUM
            afstank      ONLINE      0     0     0
              raidz1     ONLINE      0     0     0
                c11t0d0  ONLINE      0     0     0
                c11t1d0  ONLINE      0     0     0
                c11t2d0  ONLINE      0     0     0
                c11t3d0  ONLINE      0     0     0
              raidz1     ONLINE      0     0     0
                c13t0d0  ONLINE      0     0     0
                c13t1d0  ONLINE      0     0     0
                c13t2d0  ONLINE      0     0     0
                c13t3d0  ONLINE      0     0     0
            cache
              c14t3d0    ONLINE      0     0     0

    where c14t3d0 is the SSD (of course). We run I/O tests with bonnie++ 1.03d; the size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly:

    1. Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD were impairing the performance, it should be the same in each bonnie++ run.
    2. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and that random seeks on this data perform better, while random seeks on other data perform similarly to before.
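
    One way to see whether the L2ARC is actually warm during a given bonnie++ run (a sketch, not from the original post) is to watch the l2_* counters that OpenSolaris exposes through kstat:

        # L2ARC hit/miss/size counters; compare before and after a bonnie++ pass
        kstat -p zfs:0:arcstats | grep l2_

    Widely varying l2_hits between runs would support the theory that seek performance depends on how much of the sample happens to sit on the SSD at that moment.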

    Read the article

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).

    I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?

    Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 KB is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

    - http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
    - http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
    - http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
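
    To keep such settings across reboots, one approach (a sketch, assuming the cciss device naming shown above) is a boot-time loop, e.g. in /etc/rc.local:

        # reapply queue tuning to every Smart Array logical drive
        for q in /sys/block/cciss*/queue; do
            echo noop > "$q/scheduler"
            echo 512  > "$q/nr_requests"
            echo 2048 > "$q/read_ahead_kb"
        done

    A udev rule keyed on the block device would be the tidier long-term option, but the loop shows the idea.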

    Read the article

  • Need help getting the Perl module DBD::mysql installed for Bugzilla on Red Hat

    - by Alos Diallo
    Hi everyone. I am having some issues getting Bugzilla set up: I have the software on the server and am trying to get the prerequisites in place. I am using Red Hat 4.1.2-42. I have all of the required Perl modules save one: DBD::mysql. When I try:

        sudo perl install-module.pl DBD::mysql

    I get the following response (this is only an excerpt):

        rm -f blib/arch/auto/DBD/mysql/mysql.so
        LD_RUN_PATH="/usr/lib64/mysql:/usr/lib64:/lib64" /usr/bin/perl myld gcc -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic dbdimp.o mysql.o -o blib/arch/auto/DBD/mysql/mysql.so \
            -L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto
        /usr/bin/ld: skipping incompatible /usr/lib/libssl.so when searching for -lssl
        /usr/bin/ld: skipping incompatible /usr/lib/libssl.a when searching for -lssl
        /usr/bin/ld: cannot find -lssl
        collect2: ld returned 1 exit status
        make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1
        /usr/bin/make -- NOT OK
        Running make test
        Can't test without successful make
        Running make install
        make had returned bad status, install seems impossible

    I then tried the following:

        CFLAGS="-I/usr/lib64/mysql:/usr/lib64:/lib64" perl install-module.pl DBD::mysql

    I get the same result. I have also tried to install it using CPAN, with the same result. Right now I have DBD::mysql v3.0007 but need v4.00. Also, when I try to install OpenSSL it says I have the latest version. Does anyone know what I have to do to get this to work? Any help would be greatly appreciated. Thank you.
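
    The linker output shows ld skipping the 32-bit /usr/lib/libssl.* and never finding a 64-bit one, which usually means the OpenSSL development package is missing (the runtime openssl package being "the latest" is not enough). A sketch of the usual fix on Red Hat systems:

        # install the headers and 64-bit link libraries, then retry the build
        sudo yum install openssl-devel
        sudo perl install-module.pl DBD::mysql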

    Read the article

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mail servers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see that it consumes a lot of CPU. The process seems to be stuck: after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the Postfix service, the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and whether people have suggestions on how to prevent it. The problem shows up on all our mail servers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix installed from the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
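
    When it happens again, capturing what qmgr is doing before restarting Postfix would narrow this down considerably; a sketch (qshape ships with Postfix, though on Debian it may live in an examples directory):

        # what syscalls is the stuck qmgr making? (Ctrl-C to stop)
        strace -tt -p "$(pgrep qmgr)" 2>&1 | head -n 50
        # queue depth and per-domain distribution
        postqueue -p | tail -n 1
        qshape deferred | head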

    Read the article

  • Uploading a file larger than 1 MB in the Django admin gives a 400 Bad Request response

    - by ayaz
    I have a small Django (1.2.x) project deployed on Apache (2.x) via mod_wsgi (2.x). In the admin, if I upload a file < 1MB, I can get it through; however, for a file, say, 1.2MB in size, I get a 400 response from the server with "Error 400" in the body only. I am wondering why this is happening. As far as I can see, there is no LimitRequestBody set in Apache configuration. I have tried uploading with several browsers including: Firefox, Chrome, and Safari. In the log file for Apache, there is apparently no entry for requests that gave the 400 error response. This is strange. I should point out that the scenario where this is happening is thus: The project in question is deployed on two identical Apache servers (completely identical setup) that are behind a load balancer. On my development setup, of course, the problem does not surface. Any help with this will be very much appreciated.
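
    Since nothing reaches the Apache logs, testing each backend directly would show whether the load balancer is enforcing a request-body limit; a sketch (the host name and URL are examples):

        # POST a 1.2 MB body straight at one backend, bypassing the balancer
        dd if=/dev/zero of=/tmp/upload.bin bs=1k count=1200
        curl -v -F "file=@/tmp/upload.bin" http://backend1.example.com/admin/upload/
        # and check for body-size limits in Apache or mod_security configs
        grep -Rni 'LimitRequestBody\|SecRequestBodyLimit' /etc/httpd /etc/apache2 2>/dev/null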

    Read the article

  • How to save photos from an iPhone to a computer

    - by goodm
    I am using an iPhone. When we have fun we take a lot of nice pictures, but then comes the question: how do you transfer photos from the iPhone to a computer? Let me show you, step by step:

    Step 1: Download the Tansee iPhone Transfer Photo free trial version and install it. You also need iTunes 7.3 or above installed. You can download it at: http://www.softseeking.com/prodail.aspx?proid=74

    Step 2: Connect the iPhone to your computer.

    Step 3: Launch Tansee iPhone Transfer Photo and all the photos on your iPhone will be displayed automatically.

    Step 4: Select the photos to be transferred to your computer; the selected files will be marked with a red border. You can select photos by clicking on each one, or just drag a rectangle to select a group of photos. You can also select all photos by clicking the right mouse button or via the "File" menu. Note: you can only select the first 6 photos if you haven't purchased a license.

    Step 5: Click the "Copy" button to select the output path and start transferring photos to your computer.

    iPhone camera photos & camera videos: click "Camera Roll" and follow the steps above to copy your iPhone camera photos and camera videos to the PC.

    Options:

    1. Backup file format: select the backup photo file format; Tansee iPhone Transfer Photo currently supports BMP and JPG.
    2. Backup path: select the directory for storing the backup photos. You can choose the directory for each photo during backup by checking "Ask Every Time", or store all files in a specified directory by checking "Save Here" and selecting the directory in the edit box.
    3. Backup resolution: select the photo size to back up.

    Read the article

  • Set Postfix to send email but not receive it

    - by CodeShining
    I'm using Google Apps to handle personal email addresses for my domain, and I set up the DNS as Google suggests. All works fine. Now, since I need an SMTP server to send emails from my e-commerce site, I installed Postfix on the server. It works fine when I send emails to any address except ones at my own domain. So let's say my domain is example.com and I set up Postfix with example.com: if I try to reset a password using [email protected], Postfix doesn't send the message and instead reports in mail.log:

        Sep 20 01:09:52 ip-10-54-26-162 postfix/pickup[6809]: B09A3415D8: uid=33 from=<www-data>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/cleanup[6854]: B09A3415D8: message-id=<20120920010952.B09A3415D8@ip-10-54-26-162.eu-west-1.compute.internal>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/qmgr[30978]: B09A3415D8: from=<[email protected]>, size=4234, nrcpt=1 (queue active)
        Sep 20 01:09:52 ip-10-54-26-162 postfix/local[6856]: B09A3415D8: to=<[email protected]>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=5.1.1, status=bounced (unknown user: "myaccount")

    Of course it cannot find a local user "myaccount", since that account is on Google Apps. How can I tell Postfix to send the email out instead of searching for a local user?
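
    The log shows Postfix delivering locally (relay=local) because it considers example.com one of its own destinations. The usual fix is to take the domain out of mydestination so mail for it follows the MX records to Google Apps; a sketch:

        postconf mydestination                 # confirm example.com is listed
        postconf -e 'mydestination = localhost'
        postfix reload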

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure.

    I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents() on a file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL SOURCE as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems that we are unable to read/import these files with ANY method other than root over SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.

    Some things I've tried after searching Google and StackExchange:

    - chmod 0777 on both the directory and the files,
    - chown root and apache (the only two users I can think of that PHP would use),
    - importing SQL files with a total size under both upload_max_filesize and post_max_size,
    - PHP open_basedir commented out, or = "/var/www" (my sites are Apache VirtualHosts within that directory, and all the SQL files are deep within it),
    - PHP safe mode is OFF (it was never ON).

    At the moment I have worked around the issue for the smaller files by using the file upload method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
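
    One suspect not on the list above (an assumption, not something confirmed in the post): RHEL-flavoured images usually ship with SELinux enforcing, which blocks httpd/PHP from reading files outside the labelled web content regardless of mode bits, and would also not show up in ordinary permission checks. A quick test, with an example path:

        getenforce                          # "Enforcing"?
        ls -Z /var/www/sql-dumps            # inspect the file context (path is an example)
        chcon -R -t httpd_sys_content_t /var/www/sql-dumps   # label the tree readable by httpd
        setenforce 0                        # temporary test only, not for production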

    Read the article

  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. TestDisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
             Partition               Start        End    Size in sectors
        >P MS Data                  435868     456606       20739 [NO NAME]
         P MS Data                19232600   19235479        2880 [NO NAME]
         D MS Data                41945087   83890143    41945057
         D MS Data                57151486  168579069   111427584
         D MS Data                67637246  141037565    73400320
         D MS Data               151523326  193466365    41943040
         D MS Data               170617328  170618223         896
         D MS Data               170631168  170634047        2880
         D MS Data               171338232  171344405        6174 [Boot]
         D MS Data               172008235  172231918      223684 [NO NAME]
         P MS Data               193466368  214437887    20971520
         D MS Data               217321375  225321678     8000304 [root]
         D MS Data               224923646  308809725    83886080 [media]
         D MS Data               308809728  420237311   111427584
         D MS Data               418910206  481824765    62914560 [vmimages]

    My partition table had 3 primary partitions:

    1. WinXP Home
    2. /boot
    3. LVM

    Inside LVM I had 9 or 10 LVM partitions; one of them was my home (encrypted with LUKS). TestDisk can't recover my partition table or any other partition. The partitions marked [P] don't contain any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract e.g. the [root] LVM partition from the above TestDisk report? I am afraid that my disk may also be corrupted.
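
    TestDisk lists start and size in 512-byte sectors, so each candidate can be carved out of the image with dd. For the [root] row above, a sketch:

        # start sector 217321375, length 8000304 sectors (values from the TestDisk row)
        dd if=laptop.img of=root-part.img bs=512 skip=217321375 count=8000304
        file root-part.img    # should identify LUKS/LVM/filesystem signatures

    The same pattern applies to any other row; only skip and count change.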

    Read the article

  • Plesk Postfix local delivery error in maillog: how to troubleshoot

    - by RCNeil
    I'm using the PHP mail() function with Postfix on CentOS 6 and Plesk 10.4, and my email is not getting delivered to a particular address. My personal Gmail and Yahoo addresses receive email from my server fine and produce no errors. After a wonderful suggestion on here, I checked my mail logs, and this is the error I see:

        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: from=<[email protected]>, size=645, nrcpt=1 (queue active)
        Apr 10 10:26:29 ######### postfix-local[8331]: postfix-local: [email protected], [email protected], dirname=/var/qmail/mailnames
        Apr 10 10:26:29 ######### postfix-local[8331]: cannot chdir to mailname dir name: No such file or directory
        Apr 10 10:26:29 ######### postfix-local[8331]: Unknown user: [email protected]
        Apr 10 10:26:29 ######### postfix/pipe[8330]: 19EA21827: to=<[email protected]>, relay=plesk_virtual, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered via plesk_virtual service)
        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: removed

    [email protected] is the name I've declared in php.ini:

        sendmail_from = "[email protected]"
        sendmail_path = "/usr/sbin/sendmail -t -f [email protected]"

    and the recipient is supposed to be [email protected]. Is this an error on my side or on the recipient's? Can I address this on my server? Many thanks, SF.
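
    The telling part of the log is postfix-local attempting a delivery under /var/qmail/mailnames, i.e. the box believes the recipient's domain is hosted locally. Worth checking (a sketch, not from the original post):

        postconf mydestination virtual_mailbox_domains virtual_alias_domains
        ls /var/qmail/mailnames/    # is the recipient's domain wrongly present here?

    If the recipient's domain is listed as hosted in Plesk, Postfix will never hand the mail to the outside world.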

    Read the article

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows.

    For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user C, who isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups, by adding the passwd and group stanzas in /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
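
    Two quick checks worth running (a sketch; the user name is an example): confirm what Samba actually loaded for the share, and that the group resolves to a SID at all:

        testparm -s 2>/dev/null | grep -A 10 '\[ShareX\]'
        wbinfo --name-to-sid 'LLDOMAIN\LLGrpManager'
        id userC    # does the "unauthorized" account resolve its group memberships?

    Note that browsable = no only hides the share from listings; it is valid users that must deny the connection. The world-writable permissions on /data also mean anything that does connect can read everything.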

    Read the article

  • Finding custom printer-specific options on *nix

    - by enbuyukfener
    Hey all. I have a printer set up using CUPS (FujiXerox Document Center 1100), named DC1100. There are capabilities of the printer that are not shown as options of the printer as listed by:

        lpoptions -l -d DC1100

    The output is below:

        PrintoutMode/Printout Mode: Draft *Normal High Photo
        InputSlot/Media Source: Upper Lower MultiPurpose LargeCapacity Manual *Standard
        PageSize/Page Size: *Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
        PageRegion/PageRegion: Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
        STP_Brightness/Brightness: 0.00 0.02 0.04 [snip] 2.00
        STP_Contrast/Contrast: 0.00 0.05 0.10 0.15 [snip] 4.00
        STP_ColorCorrection/Color Correction: Accurate Bright Density [snip] Uncorrected
        STP_DitherAlgorithm/Dither Algorithm: Adaptive EvenTone Fast [snip] VeryFast
        STP_EnableDensity/Density Enable: *Disabled Enabled
        STP_Density/Density Value: 0.1 0.2 0.3 0.4 [snip] 8.0
        STP_EnableGamma/Composite Gamma Enable: *Disabled Enabled
        STP_Gamma/Composite Gamma Value: 0.10 0.15 0.20 0.25 [snip] 4.00
        STP_LinearContrast/Linear Contrast Adjustment: *False True
        STP_Duplex/Double-Sided Printing: DuplexNoTumble DuplexTumble *None
        Resolution/Rendering Resolution: *FromPrintoutMode 150x150dpi 300x300dpi 600x600dpi
        OutputType/Output Type: *FromPrintoutMode BlackAndWhite Grayscale
        STP_ImageType/Image Type: *FromPrintoutMode Photo Graphics LineArt None Text TextGraphics
        STP_Resolution/Resolution: *FromPrintoutMode 150dpi 300dpi 600dpi

    I am particularly looking for options for:

    - "secure print" (possibly by setting a mode and a username)
    - stapling
    - hole punching

    Perhaps I need a vendor-specific driver/PPD file? If so, any pointers, as I have no idea where to look for one. I haven't been able to find one on the official site or on sites such as http://www.openprinting.org
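
    Device-specific finishing features like stapling and secure print normally come from the vendor PPD rather than the generic Gutenprint driver those STP_* options point to. A sketch of swapping one in once a PPD is obtained (the file path is an example):

        lpinfo -m | grep -i xerox              # drivers/PPDs CUPS already knows about
        lpadmin -p DC1100 -P /path/to/FX_DC1100.ppd
        lpoptions -l -d DC1100                 # re-check which options are now exposed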

    Read the article

  • Unable to PPTP through NAT on Cisco 881

    - by MasterRoot24
    I'm trying to connect to a PPTP server that sits behind a Cisco 881 NAT router. The server is running Ubuntu Server 12.04 and runs Poptop pptpd as the PPTP daemon listening for connections. As discussed in my other question, I'm trying to set up a Cisco 881 router to replace my old Linksys WAG320N. This same server and WAN connection worked fine with the WAG320N with no special configuration other than allowing 1723 in through the firewall. On the Cisco 881, I'm using the newer "ip nat enable", or NAT NVI, to set up static routes in through the firewall for the services running behind the router. My reason is that I can't run another copy of my live DNS domains internally with local IP addresses. For the purposes of this question, though, I have rebuilt the router with ip nat inside/outside style NAT, and the issue is still apparent. HTTP/SMTP/IMAP etc. all work fine from both the WAN and LAN interfaces of the router. I'm only having issues with SIP (see other question) and PPTP.

    My issue is that GRE doesn't appear to be passing through NAT correctly: one end of the connection is not receiving GRE traffic when it should be, so the server hangs up the connection. Here's an example of /var/log/syslog with debug enabled in /etc/pptpd.conf:

        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: MGR: Launching /usr/sbin/pptpctrl to handle client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: local address = 192.168.1.50
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: remote address = 192.168.1.51
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pppd options file = /etc/ppp/pptpd-options
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection started
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 1)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a START CTRL CONN RPLY packet
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 156 bytes to the client.
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 7)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Set parameters to 100000000 maxbps, 64 window size
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a OUT CALL RPLY packet
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pty_fd = 6
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: tty_fd = 7
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 32 bytes to the client.
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): local address = 192.168.1.50
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): remote address = 192.168.1.51
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: pppd 2.4.5 started by root, uid 0
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Using interface ppp0
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Connect: ppp0 <--> /dev/pts/3
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: GRE: Bad checksum from pppd.
        Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 15)
        Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Got a SET LINK INFO packet with standard ACCMs
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: LCP: timeout sending Config-Requests
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Connection terminated.
        Dec 11 21:07:00 <HOSTNAME> avahi-daemon[1042]: Withdrawing workstation service for ppp0.
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Modem hangup
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Exit.
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Reaping child PPP[22627]
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection finished
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Exiting now
        Dec 11 21:07:00 <HOSTNAME> pptpd[5803]: MGR: Reaped child 22626

    As far as Cisco are concerned, all I need is:

        ip nat source static tcp <SERVER LAN IP> 1723 interface FastEthernet4 1723

    but of course this doesn't seem to be helping the GRE traffic through as it should. Trying the connection to the LAN IP of the server from the same LAN as the server (behind the router), the PPTP connection works fine, so I'm confident that the server's config is OK. Furthermore, all I needed on my WAG320N was to open 1723 in the firewall. Here's my current router config:

        !
        ! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx
        version 15.2
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname xxx
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 4 xxxx
        !
        aaa new-model
        !
        aaa authentication login local_auth local
        !
        aaa session-id common
        !
        memory-size iomem 10
        !
        crypto pki trustpoint TP-self-signed-xxx
         enrollment selfsigned
         subject-name cn=IOS-Self-Signed-Certificate-xxx
         revocation-check none
         rsakeypair TP-self-signed-xxx
        !
        crypto pki certificate chain TP-self-signed-xxx
         certificate self-signed 01
          xxx
          quit
        ip gratuitous-arps
        ip auth-proxy max-login-attempts 5
        ip admission max-login-attempts 5
        !
        ip domain list dmz.xxx.local
        ip domain list xxx.local
        ip domain name dmz.xxx.local
        ip name-server 192.168.1.x
        ip cef
        login block-for 3 attempts 3 within 3
        no ipv6 cef
        !
        multilink bundle-name authenticated
        license udi pid CISCO881-SEC-K9 sn xxx
        !
        username admin privilege 15 secret 4 xxx
        username joe secret 4 xxx
        !
        ip ssh time-out 60
        !
        interface FastEthernet0
         no ip address
        !
        interface FastEthernet1
         no ip address
        !
        interface FastEthernet2
         no ip address
        !
        interface FastEthernet3
         switchport access vlan 2
         no ip address
        !
        interface FastEthernet4
         ip address dhcp
         ip nat enable
         duplex auto
         speed auto
        !
        interface Vlan1
         ip address 192.168.1.x 255.255.255.0
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
        !
        interface Vlan2
         ip address 192.168.0.x 255.255.255.0
        !
        ip forward-protocol nd
        ip http server
        ip http access-class 1
        ip http authentication local
        ip http secure-server
        !
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        !
        access-list 1 permit 192.168.0.0 0.0.0.255
        access-list 2 permit 192.168.1.0 0.0.0.255
        !
        control-plane
        !
        banner motd Authorized Access only
        !
        line con 0
         exec-timeout 15 0
         login authentication local_auth
        line aux 0
         exec-timeout 15 0
         login authentication local_auth
        line vty 0 4
         access-class 2 in
         login authentication local_auth
         length 0
         transport input all
        !
        end

    UPDATE 16/12/2012: The only progress I have been able to make on this issue is that I'm now confident the GRE tunnels (which are required for the PPTP connection to complete) are being blocked. When attempting a connection, I can see in "show ip nat nvi translations" that both a TCP translation on 1723 and a GRE translation are set up. I appear to be able to see GRE-related packets on the LAN that the server is on, so I am led to believe that the server is sending(?) GRE packets; however, running Wireshark on a client PC when attempting a connection shows absolutely no GRE packets. While there are no configuration directives in my config posted above (that I can pinpoint) which would specifically block them, it would appear that GRE packets are not being allowed in or out of the router's firewall, even though a NAT translation entry is set up to the server's LAN address. Would anyone be able to help me verify that GRE packets are not being blocked by the router's firewall, so that this can be ruled out as a possible issue?
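
    One way to localize the failure (a sketch, not from the original post) is to watch for GRE on the server itself while a client dials in from the WAN side:

        # on the Ubuntu server: does GRE (IP protocol 47) ever arrive?
        sudo tcpdump -ni eth0 ip proto 47

    If the translation shows in "show ip nat nvi translations" but tcpdump stays silent, the router is building the mapping yet dropping the GRE flow itself.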

    Read the article

  • How to debug when xinetd says: got signal 17 (child exited)

    - by Faizan Shaik
    I am trying to start the vsftpd and sshd services using xinetd. My config files are as follows.

    /etc/xinetd.conf:

        defaults
        {
            instances       = 60
            log_type        = FILE /var/log/xinetdlog
            log_on_success  = HOST PID
            log_on_failure  = HOST
            cps             = 25 30
            only_from       = localhost
        }
        includedir /etc/xinetd.d

    /etc/xinetd.d/ftp:

        service ftp
        {
            disable         = no
            server          = /usr/sbin/vsftpd
            server_args     = -l
            user            = root
            socket_type     = stream
            protocol        = tcp
            wait            = no
            instances       = 4
            flags           = REUSE
            nice            = 10
            log_on_success  += DURATION HOST USERID
            only_from       = 127.0.0.1 10.0.0.0/24
        }

    /etc/xinetd.d/ssh:

        service ssh
        {
            disable         = no
            log_on_failure  += USERID
            server          = /usr/sbin/sshd
            user            = root
            socket_type     = stream
            protocol        = tcp
            wait            = no
            instances       = 20
            flags           = REUSE
            only_from       = 127.0.0.1 10.0.0.0/24
        }

    Even though I've included the only_from attribute, both the vsftpd and ssh servers refuse connections from localhost, while vsftpd and ssh work fine individually when I check with "service vsftpd start" and "service ssh start". When I debugged with "xinetd -d" in a terminal, I got this output:

        13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ftp
        13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ssh
        13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} mask_max = 8, services_started = 2
        13/10/20@00:06:08: NOTICE: 3592 {main} xinetd Version 2.3.14 started with libwrap loadavg options compiled in.
        13/10/20@00:06:08: NOTICE: 3592 {main} Started working: 2 available services
        13/10/20@00:06:08: DEBUG: 3592 {main_loop} active_services = 2
        13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1
        13/10/20@00:06:16: DEBUG: 3592 {server_start} Starting service ftp
        13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2
        13/10/20@00:06:16: DEBUG: 3607 {exec_server} duping 9
        13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2
        13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1
        13/10/20@00:06:16: DEBUG: 3592 {check_pipe} Got signal 17 (Child exited)
        13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = 3607
        13/10/20@00:06:16: DEBUG: 3592 {server_end} ftp server 3607 exited
        13/10/20@00:06:16: DEBUG: 3592 {svc_postmortem} Checking log size of ftp service
        13/10/20@00:06:16: INFO: 3592 {conn_free} freeing connection
        13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = -1

    Both services get started, but neither of them works. After banging on this for 3-4 hours I still don't have any clue about this error. Any help would be appreciated. Thanks!
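
    One detail that stands out (an observation, not confirmed in the post): sshd exits immediately unless it is told it was started from a super-server, and vsftpd must not try to open its own listening socket when xinetd already owns it. A sketch of the relevant changes:

        # /etc/xinetd.d/ssh -- run sshd in inetd mode
        server      = /usr/sbin/sshd
        server_args = -i

    and in /etc/vsftpd.conf set listen=NO so vsftpd accepts the socket xinetd hands over instead of exiting the way the debug log shows.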

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high-quality (and large) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), then changing the extension to .mpg (a more widely supported format than .vob). This gets me a high-quality rip in about 40 minutes. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 min. to 0 min. long. That is not a problem in itself (the video will still play all the way through), but if you pause the video, it starts all the way over when unpaused (and fast forward and rewind don't work either). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the Internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this? PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and deals more with .wmv files.
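
    Since the streams themselves play fine, a remux that regenerates timestamps may be all that is needed; a sketch with a reasonably recent ffmpeg (no re-encode, so it is fast and lossless):

        # rewrite container timestamps without touching the audio/video streams
        ffmpeg -fflags +genpts -i title.mpg -c copy title-fixed.mpg

    Pointing ffmpeg at the original concatenated VOB data rather than the renamed .mpg should work the same way.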

    Read the article

  • Can I still restore partition table?

    - by Johannes Lund
    Once, I was going to resize the partitions on my Mac HD from Boot Camp. I changed my mind and was going to quit, but apparently I hit a button which made every single Mac partition disappear, and Windows 7 refused to restart or be reinstalled. The 1 TB HD consists of 3 partitions, I believe. Since I can't see their actual sizes (except Bootcamp), this is how I recall them:

    1. Macintosh HD, about 500 GB (somewhere around 700 GB according to Disk Utility, but 500 according to Finder, and 500 GB was all I could access)
    2. Lion recovery disk
    3. BOOTCAMP, 293.36 GB

    To fix this I connected my Mac via target disk mode to a PC and ran TestDisk. However, since I don't have 10 reputation I can't post the image showing the TestDisk results, so I post a link instead, hoping that is OK. The two Mac partitions' sizes are completely wrong, and BOOTCAMP isn't showing. I also tested using Disk Utility from the Snow Leopard DVD; there, there is one 293.36 GB Mac OS Extended partition. Before I had the FireWire cable for target disk mode, I tried reinstalling Windows. Without success, I tried again, formatting BOOTCAMP. Was that a bad thing to do? Could it have overwritten data from Macintosh HD? Unfortunately I have no backup. I could bring it to some kind of computer repair firm, though.

    Read the article

  • CentOS inode usage

    - by MSTF
    We are using a CentOS & cPanel server, but we have an important problem with inode usage. "df -i" shows the / filesystem using 6 million inodes(!), yet when I count the files in /, there are only a few thousand.

        df -i
        Filesystem      Inodes    IUsed     IFree IUse% Mounted on
        /dev/sda4      6578176  6567525     10651  100% /
        tmpfs          8238094        1   8238093    1% /dev/shm
        /dev/sdi1     61054976      169  61054807    1% /backup
        /dev/sda1        51296       38     51258    1% /boot
        /dev/sda2            0        0         0     - /boot/efi
        /dev/sdc1      7290880     1252   7289628    1% /database
        /dev/sdb2      4096000    53258   4042742    2% /home
        /dev/sdd1      7290880     3500   7287380    1% /home2
        /dev/sde1      7290880    68909   7221971    1% /home3
        /dev/sdg1      7290880    68812   7222068    1% /home5
        /dev/sdh1      7290880   695076   6595804   10% /home6
        /dev/sdf1      7290880    58658   7232222    1% /tmp

        df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda4        99G   30G   65G  32% /
        tmpfs            32G     0   32G   0% /dev/shm
        /dev/sdi1       917G  270G  601G  32% /backup
        /dev/sda1       788M   80M  669M  11% /boot
        /dev/sda2       400M  296K  400M   1% /boot/efi
        /dev/sdc1       110G  1.5G  103G   2% /database
        /dev/sdb2        62G  1.1G   58G   2% /home
        /dev/sdd1       110G   79G   26G  76% /home2
        /dev/sde1       110G  3.9G  101G   4% /home3
        /dev/sdg1       110G   51G   54G  49% /home5
        /dev/sdh1       110G   64G   41G  62% /home6
        /dev/sdf1       110G  611M  104G   1% /tmp

    The sda disk holds just the operating system and cPanel. There are no accounts, databases, or tmp files on sda. Why is sda using so many inodes? Note: all disks are 120 GB SSDs. Thanks.
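
    A way to find where the inodes went (a sketch): count entries per top-level directory, staying on the / filesystem only:

        # -xdev keeps the walk on the root filesystem; counts entries per top-level dir
        find / -xdev | cut -d/ -f2 | sort | uniq -c | sort -rn | head
        # usual suspects on cPanel boxes: mail queues, PHP session files, cache dirs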

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load MySQL with the my.cnf below, it fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; I'm not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password   = your_password
        port        = 3306
        socket      = /var/run/mysqld/mysqld.sock

        # Here follow entries for some specific programs

        # The MySQL server
        [mysqld]
        port                    = 3306
        socket                  = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer              = 16K
        max_allowed_packet      = 1M
        table_cache             = 4
        sort_buffer_size        = 64K
        read_buffer_size        = 256K
        read_rnd_buffer_size    = 256K
        net_buffer_length       = 2K
        thread_stack            = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer       = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer       = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?
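
    A likely culprit (an assumption worth verifying): MySQL 5.5 removed several old options this file still carries, such as skip-bdb and skip-locking, and mysqld aborts when it hits an unknown option. This surfaces the first offender without digging through logs:

        # parses the config and prints "unknown option ..." on stderr if one exists
        mysqld --help --verbose > /dev/null
        # and check the error log, if one was written
        tail -n 50 /var/lib/mysql/*.err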

    Read the article

  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 Micro server running one WordPress site with W3TC. I've started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

        [apc]
        apc.enabled=1
        apc.shm_segments=1
        ;32M per WordPress install
        apc.shm_size=164M
        ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
        apc.max_file_size=2M
        ;Relative to the number of cached files
        apc.num_files_hint=1000
        ;Relative to the size of WordPress
        apc.user_entries_hint=4096
        ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
        apc.ttl=7200
        apc.user_ttl=7200
        apc.gc_ttl=3600
        ;Auto update cache files on change in WP-ADMIN or W3TC
        apc.stat=1
        ;This MUST be 0, WP can have errors otherwise!
        apc.include_once_override=0
        ;Only set to 1 while debugging
        apc.enable_cli=0
        ;Allow 2 seconds after a file is created before it is cached to prevent users from seeing half-written/weird pages
        apc.file_update_protection=2
        ;Ignore files
        apc.filters
        apc.slam_defense=0
        apc.write_lock=1
        apc.cache_by_default=1
        apc.use_request_time=1
        apc.mmap_file_mask=/var/tmp/apc.XXXXXX
        apc.stat_ctime=0
        apc.canonicalize=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix=upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.lazy_classes=0
        apc.lazy_functions=0
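
    To quantify the fragmentation rather than eyeball the screenshot (a sketch; the source path varies by distribution, and note that apc.enable_cli=0 above means CLI checks won't see the web cache):

        # APC ships an apc.php status page; expose it temporarily to see the
        # shared-memory block map and the fragmentation percentage
        cp /usr/share/php/apc.php /var/www/html/apc-status.php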

    Read the article

  • Samba joined to AD: cannot see users in the security tab on a client

    - by Jonathan
    I've got Samba joined via Kerberos and winbindd to our AD network, and user authentication and everything else work great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not being found. Here is my smb.conf; I would much appreciate any help with this.

        # GLOBAL PARAMETERS
        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
            workgroup = [hidden]
            realm = [hidden]
            preferred master = no
            server string = xerxes web/file server
            security = ADS
            encrypt passwords = yes
            log level = 3
            log file = /var/log/samba/%m
            max log size = 50
            printcap name = cups
            printing = cups
            winbind enum users = Yes
            winbind enum groups = Yes
            winbind use default domain = Yes
            winbind nested groups = Yes
            winbind separator = +
            winbind refresh tickets = yes
            idmap uid = 1600-20000
            idmap gid = 1600-20000
            template primary group = "Domain Users"
            template shell = /bin/bash
            kerberos method = system keytab
            nt acl support = yes

        [homes]
            comment = Home Directories
            valid users = %S
            read only = No
            browseable = No
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [test]
            comment = Test
            path = /mnt/test
            writeable = yes
            valid users = %s
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [printers]
            comment = All Printers
            path = /var/spool/cups
            browseable = no
            printable = yes
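
    The Windows security tab does name-to-SID lookups against the server, so it is worth confirming those work through winbind even where getent does (a sketch; the names are examples):

        wbinfo -u | head                   # do domain users enumerate?
        wbinfo -n 'YOURDOM+someuser'       # name -> SID through the configured separator
        getent group 'YOURDOM+somegroup'   # group resolution via nsswitch

    If the name lookup fails while enumeration works, the idmap or separator configuration is the usual place to look.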

    Read the article

  • mysqld increases the load on the CPU; it drops after flush-tables

    - by mirage
    Please advise on this issue. The normal CPU load is 20-30% (us + sy). After restoring the database files from the slave server (same version), a periodic problem began: mysqld starts to load the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. But running "mysqladmin flush-tables" returns things to normal for a few hours.

    Dedicated Linux server running MySQL: 2x E5506, 24 GB RAM, database size 50 GB.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 33G (Tables: 1474)
        [--] Data in InnoDB tables: 1G (Tables: 4)
        [--] Data in MEMORY tables: 120K (Tables: 3)
        [--] Reads / Writes: 91% / 9%
        [--] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    The server handles 4000-5500 requests per second. Relevant my.cnf settings:

        key_buffer              = 1536M
        max_allowed_packet      = 2M
        table_cache             = 4096
        sort_buffer_size        = 409584
        read_buffer_size        = 128K
        read_rnd_buffer_size    = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size       = 500
        query_cache_size        = 100M
        thread_concurrency      = 24
        max_connections         = 700
        tmp_table_size          = 4096M
        join_buffer_size        = 4M
        max_heap_table_size     = 4096M
        query_cache_limit       = 1M
        low_priority_updates    = 1
        concurrent_insert       = 2
        wait_timeout            = 30
        server-id               = 1
        log_bin                 = /var/log/mysql/mysql-bin.log
        expire_logs_days        = 10
        max_binlog_size         = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size  = 4M
        innodb_flush_log_at_trx_commit = 2

    How do I solve this problem?
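
    Not from the original post, but a way to capture state at the moment the CPU climbs, since flush-tables "fixing" it points at table-cache or temporary-table behaviour:

        # relative extended status every 10 s; watch Opened_tables and tmp counters grow
        mysqladmin -uroot -p -r -i 10 extended-status | grep -i 'opened_tables\|tmp_'
        # snapshot what the threads are doing during a spike
        mysql -uroot -p -e 'SHOW FULL PROCESSLIST' > /tmp/processlist.$(date +%s)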

    Read the article

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data

    Here is a picture, finally. The large strip in the center of the top band is the largest chunk; in the others, the grey areas are the various clusters belonging to it. On the right, the big long grey line is $LogFile (not paging), at 63 MB. Paging (500 MB) is the dark cyan chunk next to the yellow MFT-reserved area in the inner rings. The disk was defragmented so these could be seen more easily. Not all clusters of this type of file are tagged, but you get the idea. The disk uses 4 KB clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile (also interesting when they keep telling us it doesn't take much disk space).

    Wikipedia suggests that on previous NT systems USN journaling would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting this stuff all over the disk in clusters tagged $jrnl$, even if it is not actual USN journaling? Is it possible on a Windows 7 system to completely disable the journaling, and what would be the ramifications of that?

    On a Windows XP NTFS system, I do not recall seeing this quantity of disk clusters carrying $jrnl$ names, so I do not recall this being necessary in this quantity for the NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine, if that information helps track down which parts of the system create and use it.

    Change Journals states: "Change journals are also needed to recover file system indexing." Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?
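
    Windows does expose this (a sketch; run from an elevated prompt, and note that deleting the journal forces indexing/backup tools that rely on it to rescan the volume from scratch):

        REM show the current journal's size and allocation delta
        fsutil usn queryjournal C:
        REM delete the change journal on C:
        fsutil usn deletejournal /D C:

    Services such as the search indexer may simply recreate it afterwards, so this is more an experiment than a permanent switch.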

    Read the article

  • SMTP server closes connection unexpectedly

    - by janin
    I'm writing a Python program to send emails, and when trying to send to yopmail, hotmail, and some other hosts, the connection gets closed by the server without a message. I tried connecting directly with netcat and the same thing happens. Here's what the exchange looks like:

        $ nc smtp.yopmail.com 25
        220 mx.yopmail.com ESMTP ***
        ehlo mx.myhost.com
        250 SIZE 2048000
        mail FROM:<[email protected]>
        250 OK
        rcpt TO:<[email protected]>

    The connection is closed abruptly at this point. On other hosts, like my ISP's, everything goes fine. I've checked the blacklists, but my IP is not listed. Any idea what's going on?

    Edit: My IP is not listed in any blacklist. I own myhost.com, but I don't have an SPF record. I'll add one and update this post when the record has propagated.

    Edit 2: With the SPF record added, the email is now accepted, and Hotmail adds an "Authentication-Results: hotmail.com; sender-id=pass" header to the email. However, it gets classified as spam, but I guess that's another matter. Thanks for your help.
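
    For completeness, the checks the receiving hosts appear to run can be reproduced with dig (a sketch; 203.0.113.10 is a placeholder for the server's real address):

        dig +short TXT myhost.com        # is the SPF record visible yet?
        dig +short -x 203.0.113.10       # does reverse DNS match the EHLO name?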

    Read the article
