Search Results

Search found 19425 results on 777 pages for 'output clause'.

Page 589/777

  • Can't write to samba share

    - by Tiddo
    I am trying to set up a Samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local file server with 3 hard disks mounted at /mnt/share/disk<nr>. Two of these use the ext4 filesystem, the third one is NTFS. This file server runs Fedora 18 32-bit. The root folders of these hard disks are owned by superman:superman, and testparm outputs the following: [global] workgroup = WORKGROUP netbios name = FILE_SERVER server string = Samba Server Version %v interfaces = lo, eth0, 192.168.123.191/8 log file = /var/log/samba/log.%m max log size = 50 unix extensions = No load printers = No idmap config * : backend = tdb hosts allow = 192.168.123. cups options = raw wide links = Yes [share] comment = Home Directories path = /home/share/ write list = superman, @users force user = superman read only = No create mask = 0777 directory mask = 0777 inherit permissions = Yes guest ok = Yes I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares? EDIT: The replies I got so far are mostly concerned with the smb.conf. I have, however, tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.
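
    Not part of the original question, but a short diagnostic sketch (assuming Fedora's stock targeted SELinux policy and the paths shown above) that helps rule out SELinux and filesystem permissions as the culprit:

        # is SELinux enforcing, and which samba booleans are currently set?
        getenforce
        getsebool -a | grep samba
        # permissive shortcut for testing: let samba write to any label (assumption: targeted policy)
        setsebool -P samba_export_all_rw on
        # compare the path exported in smb.conf with the disks' labels and unix permissions
        ls -ldZ /home/share /mnt/share/disk*
        # watch the per-client samba log while a write is attempted
        tail -f /var/log/samba/log.*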

    Read the article

  • Translating debian network configuration to gentoo

    - by thpetrus
    I just got rid of Debian on my VPS (OpenVZ) and installed Gentoo on it; however, it is a plain Gentoo image without further configuration, i.e. no working network. I'm not familiar with Debian and couldn't figure out how to get the network set up. These are the Debian network files /etc/network/interfaces: auto venet0 iface venet0 inet manual up ifconfig venet0 up up ifconfig venet0 127.0.0.2 up route add default dev venet0 down route del default dev venet0 down ifconfig venet0 down iface venet0 inet6 manual up ifconfig venet0 add ipv6addr/128 down ifconfig venet0 del ipv6addr/128 up route -A inet6 add default dev venet0 down route -A inet6 del default dev venet0 auto venet0:0 iface venet0:0 inet static address external_ip netmask 255.255.255.255 auto venet0:1 iface venet0:1 inet static address internal_ip netmask 255.255.255.255 Please note that external_ip, internal_ip and ipv6addr are placeholders. I copied /etc/resolv.conf, know the gateway_ip, and also have another output of ifconfig, if necessary. This is what I came up with, /etc/conf.d/net: config_venet0="127.0.0.2 netmask 255.255.255.255 brd 0.0.0.0" config_venet0:0="external_ip netmask 255.255.255.255 brd 0.0.0.0" route_venet0:0="default via gateway_ip" config_venet0:1="internal_ip netmask 255.255.255.255 brd 0.0.0.0" The broadcast IP is taken from the Debian ifconfig output - however, it doesn't work. A symbolic link net.venet0:0 -> net.lo was created in /etc/init.d/ and I added net.venet0:0 to the boot runlevel.
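
    A hedged sketch of what an equivalent netifrc-style /etc/conf.d/net might look like; the plural routes_ variable and the idea of listing the alias addresses inside config_venet0 (instead of separate venet0:0 / venet0:1 stanzas) are assumptions about Gentoo's netifrc, and external_ip / internal_ip / gateway_ip remain placeholders:

        # sketch only -- contents of /etc/conf.d/net (netifrc style):
        #   config_venet0="127.0.0.2/32 external_ip/32 internal_ip/32"
        #   routes_venet0="default via gateway_ip"
        # then hook the interface into OpenRC:
        ln -sf /etc/init.d/net.lo /etc/init.d/net.venet0
        rc-update add net.venet0 default
        /etc/init.d/net.venet0 restart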

    Read the article

  • Ruby net:LDAP returns "code = 53 message = Unwilling to perform" error

    - by Yong
    Hi, I am getting this error "code = 53, message = Unwilling to perform" while I am traversing the eDirectory treebase = "ou=Users,o=MTC". My ruby script can read about 126 entries from eDirectory and then it stops and prints out this error. I do not have any clue of why this is happening. I am using the ruby net:LDAP library version 0.0.4. The following is an excerpt of the code. require 'rubygems' require 'net/ldap' ldap = Net::LDAP.new :host => "10.121.121.112", :port => 389, :auth => {:method => :simple, :username => "cn=abc,ou=Users,o=MTC", :password => "123" } filter = Net::LDAP::Filter.eq( "mail", "*mtc.ca.gov" ) treebase = "ou=Users,o=MTC" attrs = ["mail", "uid", "cn", "ou", "fullname"] i = 0 ldap.search( :base => treebase, :attributes => attrs, :filter => filter ) do |entry| puts "DN: #{entry.dn}" i += 1 entry.each do |attribute, values| puts " #{attribute}:" values.each do |value| puts " --->#{value}" end end end puts "Total #{i} entries found." p ldap.get_operation_result Here is the output and the error at the end. Thank you very much for your help. DN: cn=uvogle,ou=Users,o=MTC mail: --->[email protected] fullname: --->Ursula Vogler ou: --->Legislation and Public Affairs dn: --->cn=uvogle,ou=Users,o=MTC cn: --->uvogle Total 126 entries found. OpenStruct code=53, message="Unwilling to perform"
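
    The cut-off after 126 entries looks more like a server-side size or administrative limit than a bug in the script; a quick way to test that theory from a shell (assuming OpenLDAP's client tools are available) is to repeat the search with and without a paged-results control:

        # plain search -- stops as soon as the server's size limit is reached
        ldapsearch -x -H ldap://10.121.121.112 -D "cn=abc,ou=Users,o=MTC" -w 123 \
            -b "ou=Users,o=MTC" "(mail=*mtc.ca.gov)" mail uid cn ou fullname
        # same search, but asking the server to hand back results 100 at a time
        ldapsearch -x -H ldap://10.121.121.112 -D "cn=abc,ou=Users,o=MTC" -w 123 \
            -b "ou=Users,o=MTC" -E pr=100/noprompt "(mail=*mtc.ca.gov)" mail uid cn ou fullname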

    Read the article

  • .tex file remains in use by process when batch file is triggered by .Rnw Sweave processing.

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post processed by pdflatex.exe. When I create the file as normal everything works great (except maybe my file named russfnc2.Rnw seems to result in russfnc.pdf even though pdflatex.exe on the console window correctly says that the output is being writen to russfnc2.pdf). The big problem is when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine, when I use my Eclipse profile to trigger Sweave... that is unless I have that batch file at the end of the .Rnw. When it is located there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process and I have to reboot in order to delete it (and, of course, the pdf is not made). If I manually trigger the batch file after pdflatex.exe has done its things, everything works fine. How can I make this work correctly using the tools I'm familiar with vis., R and Dos-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.

    Read the article

  • Centos/Postfix able to send mail but not receive it

    - by Dan Hastings
    I have set up postfix and used the mail command to test, and an email was successfully sent and delivered. The email arrived in my Yahoo inbox, BUT the sender also received an email in the Maildir directory saying "I'm sorry to have to inform you that your message could not be delivered to one or more recipients", even though the message was delivered. I tried replying from Yahoo to the email but it never arrived. I added 1 MX record to GoDaddy last week: Priority 0, Host @, Points to mail.domain.com, TTL 1 Hour. Postfix main.cf has the following added to it myhostname = mail.domain.com mydomain = domain.com myorigin = $mydomain inet_interfaces = all mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mynetworks = 192.168.0.0/24, 127.0.0.0/8 relay_domains = home_mailbox = Maildir/ I checked /var/log/maillog and found the following errors occurring postfix/anvil[18714]: statistics: max connection rate 1/60s for (smtp:unknown) at Jun 3 09:30:15 postfix/anvil[18714]: statistics: max connection count 1 for (smtp:unknown) at Jun 3 09:30:15 postfix/anvil[18714]: statistics: max cache size 1 at Jun 3 09:30:15 postfix/smtpd[18772]: connect from unknown[unknown] postfix/smtpd[18772]: lost connection after CONNECT from unknown[unknown] postfix/smtpd[18772]: disconnect from unknown[unknown] output of postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = domain.com myhostname = mail.domain.com mynetworks = 168.100.189.0/28, 127.0.0.0/8 myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550
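
    Since outbound mail works but nothing ever arrives, the usual suspects are the MX record, the firewall, and whether smtpd is reachable from outside at all; a few hedged checks (the hostnames are the same placeholders used above):

        # does the MX record resolve the way you expect?
        dig +short MX domain.com
        dig +short A mail.domain.com
        # is postfix actually listening on port 25 on a public interface?
        netstat -tlnp | grep ':25'
        # is port 25 allowed through the firewall?
        iptables -L -n | grep 25
        # and, from a machine *outside* the network, do you get the SMTP banner?
        telnet mail.domain.com 25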

    Read the article

  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS and my /dev/sda HDD was replaced several days ago. I used these commands to replace it: # go to superuser sudo bash # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded" # remove broken disk from RAID mdadm /dev/md0 --fail /dev/sda1 mdadm /dev/md0 --remove /dev/sda1 # see partitions fdisk -l # shutdown computer shutdown now # physically replace old disk by new # start system again # see partitions fdisk -l # copy partitions from sdb to sda sfdisk -d /dev/sdb | sfdisk /dev/sda # recreate id for sda sfdisk --change-id /dev/sda 1 fd # add sda1 to RAID mdadm /dev/md0 --add /dev/sda1 # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded, recovering" # to see status you can use cat /proc/mdstat After the rebuild completed, "fdisk -l" says that I do not have a valid partition table on /dev/md0. So 1) "update-grub" finds only the /sda and /sdb Linux installs, not /md0 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0" I cannot boot my system except from /sdb1 and /sda1, and then only in DEGRADED mode... This is my partial fdisk -l output: Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000667ca Device Boot Start End Blocks Id System /dev/sdb1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sdb2 940910985 976768064 17928540 5 Extended /dev/sdb5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/md0: 481.7 GB, 481746288640 bytes 2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Can anybody resolve this issue? It is giving me a big headache.
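
    For what it's worth, grub-pc normally wants to go into the MBR of each physical member of the array rather than into /dev/md0 itself; a minimal sketch of that, assuming the layout shown above:

        # put the boot loader on both raw disks so either one can boot the (possibly degraded) array
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub
        # confirm the array has finished resyncing and is clean
        cat /proc/mdstat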

    Read the article

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8 drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400mb/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5mb/sec while the write speed stays at more or less the same 400mb/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes. If I turn the controller's writeback cache off, I don't see a 50:50 split but I do see a marked improvement, somewhere around 100mb/s reads and 300mb/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150mb/sec reads and 150mb/sec writes; i.e. a reduction in total throughput but certainly more suitable for my workload. Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache but must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
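
    One way to check whether the ~5mb/sec read figure is an artifact of dd's purely sequential streams is to repeat the test with a mixed-workload tool such as fio; the job below is only a sketch (the target directory and job parameters are assumptions, not a tuned benchmark):

        # 50/50 random read/write mix against a file on the array, bypassing the page cache
        fio --name=mixed --directory=/mnt/array --size=4G --direct=1 --ioengine=libaio \
            --rw=randrw --rwmixread=50 --bs=4k --iodepth=32 --runtime=60 --group_reporting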

    Read the article

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right stackexchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like: $ git push #... error: cannot run hooks/post-receive: No such file or directory Hook Exists The post-receive hook/symlink exists and is executable: -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive lrwxrwxrwx 1 git git 45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive It's Executable By GitLab The gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output): sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive OK GitLab Can Find It GitLab is looking for hooks in the correct location: $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml hooks_path: /home/git/.gitolite/hooks/ and $ bundle exec rake gitlab:app:status RAILS_ENV=production # ... /home/git/.gitolite/hooks/common/post-receive exists? ............YES Environment The env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK: $ env -i redis-cli redis> I've run out of debugging ideas on this one. Does anybody have any suggestions?
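
    "No such file or directory" from a script that plainly exists is very often the interpreter named on its first line rather than the hook file itself; a hedged way to check that, and to reproduce the failure as the user the repositories belong to:

        # which interpreter does the hook ask for, and can the git user find it?
        head -1 /home/git/.gitolite/hooks/common/post-receive
        sudo su - git -c 'which ruby'
        # feed the hook the same kind of input git would, as the git user this time
        echo "0000000 1111111 refs/heads/master" | \
            sudo su - git -c /home/git/.gitolite/hooks/common/post-receive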

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is 02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection I'm pretty sure I have the correct drivers enabled in the kernel Device Drivers ---> Network device support ---> Wireless LAN (IEEE 802.11) ---> <*> Intel Wireless Wifi [*] Enable LED support in iwlagn and iwl3945 drivers [*] Enable Spectrum Measurement in iwlagn driver [*] Enable full debugging output in iwlagn and iwl3945 drivers <*> Intel Wireless WiFi Next Gen AGN (iwlagn) [*] Intel Wireless WiFi 4965AGN [*] Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series I tried with the other intel drivers enabled as well (iwl3945) and no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks Mala
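
    Behaviour like "associates, then goes deaf after a few seconds" on iwlagn is often a firmware or power-save problem rather than a missing kernel option; some quick things to look at under the new kernel (the module parameter name varies between kernel versions, so treat it as an assumption):

        # did the 5000-series microcode load, and is the new kernel complaining about it?
        dmesg | grep -i iwl
        ls /lib/firmware | grep -i iwlwifi-5000
        # as a test, reload the driver with 802.11n disabled and turn off power saving
        rmmod iwlagn && modprobe iwlagn 11n_disable=1
        iwconfig wlan0 power off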

    Read the article

  • mplayer (mplayerhq.hu) repeats ending audio frames

    - by kamikatze
    mplayer (from mplayerhq.hu) on windows repeats the last few audio frames upon exit. When the video ends, before you can see Exiting... (End of file) in the command prompt, you will hear the last 1/2 second or so of the audio track again. This behavior is the same for multiple containers/codecs/soundcards Vista or Windows 7. Is there a workaround for this? My playback specs: MPlayer Sherpya-MT-SVN-r31027-4.2.5 (C) 2000-2010 MPlayer Team 150 audio & 343 video codecs Playing splash_final.wmv. ASF file format detected. [asfheader] Audio stream found, -aid 1 [asfheader] Video stream found, -vid 2 VIDEO: [WMV3] 1280x720 24bpp 1000.000 fps 6291.5 kbps (768.0 kbyte/s) ========================================================================== Opening video decoder: [dmo] DMO video codecs DMO dll supports VO Optimizations 0 1 DMO dll might use previous sample when requested Decoder supports the following formats: YV12 YUY2 UYVY YVYU RGB8 [..] Decoder is capable of YUV output (flags 0x1b) Movie-Aspect is undefined - no prescaling applied. VO: [directx] 1280x720 = 1280x720 Planar YV12 Selected video codec: [wmv9dmo] vfm: dmo (Windows Media Video 9 DMO) ========================================================================== ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders AUDIO: 44100 Hz, 2 ch, s16le, 329.8 kbit/23.37% (ratio: 41221-176400) Selected audio codec: [ffwmav2] afm: ffmpeg (DivX audio v2 (FFmpeg)) ========================================================================== AO: [dsound] 44100Hz 2ch s16le (2 bytes per sample) Starting playback...
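
    A workaround that sometimes helps with trailing-audio glitches in the Windows builds is simply to switch the audio output driver; which drivers are compiled in depends on the build, so treat these as things to try rather than a known fix:

        # list the audio output drivers available in this build
        mplayer -ao help
        # try plain waveOut instead of DirectSound
        mplayer -ao win32 splash_final.wmv
        # or pin DirectSound to a specific device
        mplayer -ao dsound:device=0 splash_final.wmv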

    Read the article

  • Windows cannot open directory with too long name created by Linux

    - by Tim
    Hello! My laptop has two OSes: Windows 7 and Ubuntu 10.10. A partition of Windows 7 of format NTFS is mounted in Ubuntu. In Ubuntu, I created a directory under somehow deep path and with a long name for itself, specifically, the name for that directory is "a set of size-measurable subsets ie sigma algebra". Now in Windows, I cannot open the directory, which I guess is because of the name is too long, nor can I rename it. I was wondering if there is some way to access that directory under Windows? Better without changing the directory if possible, but will have to if necessary. Thanks and regards! Update: This is the output using "DIR /X" in cmd.exe, which does not shorten the directory name: F:\science\math\Foundations of mathematics\set theory\whether element of a set i s also a set\when element is set\when element sets are subsets of a universal se t\closed under some set operations\sigma algebra of sets>DIR /X Volume in drive F is Data Volume Serial Number is 0492-DD90 Directory of F:\science\math\Foundations of mathematics\set theory\whether elem ent of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets 03/14/2011 10:43 AM <DIR> . 03/14/2011 10:43 AM <DIR> .. 03/08/2011 10:09 AM <DIR> a set of size-measurable sub sets ie sigma algebra 02/12/2011 04:08 AM <DIR> example 02/17/2011 12:30 PM <DIR> general 03/13/2011 02:28 PM <DIR> mapping from sigma algebra t o R or C i.e. measure 02/12/2011 04:10 AM <DIR> msbl mapping from general ms bl space to Borel msbl R or C 02/12/2011 04:10 AM 4,928 new file~ 03/14/2011 10:42 AM <DIR> temp 03/02/2011 10:58 AM <DIR> with Cartesian product of se ts 1 File(s) 4,928 bytes 9 Dir(s) 39,509,340,160 bytes free

    Read the article

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine, having the following configuration. We use PHP's mail() function to send rudimentary password reset emails, but there is a problem. As you will see, mydomain and myhostname is correctly set, afaik. alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = localhost inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost mydomain = ***.ch myhostname = test.***.ch newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550 Now this is the stuff that is in the /var/log/maillog of Postfix upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on: Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache> Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch> Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active) Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command)) Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch> Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active) Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox) Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed So the receiving server rejects the sender (line 4 of log output). We have tested it with one other recipient and it worked, so this problem might be completely unrelated to our settings, but related to the recipient. Still, with this question, I want to make sure we're not making an obvious misconfiguration on our side.
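
    If the receiving host is rejecting the envelope sender apache@test.***.ch because it is not a routable address, one hedged option is to rewrite outgoing envelope addresses with Postfix's generic maps; the replacement address below is only an example:

        # map the web server's internal sender to a real, deliverable mailbox
        sudo postconf -e 'smtp_generic_maps = hash:/etc/postfix/generic'
        echo 'apache@test.***.ch  webmaster@***.ch' | sudo tee /etc/postfix/generic
        sudo postmap /etc/postfix/generic
        sudo postfix reload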

    Read the article

  • How the CPU communicates with HW

    - by b-gen-jack-o-neill
    Good day. I am new here, but I could not find an answer to my question using Google, so I hope I do not violate any rules. So, basically, all I want to ask is how the CPU communicates with other HW, such as printers, the graphics card, sound card, LAN card, etc. I know that for basic system I/O you can use BIOS interrupts. INT 10h, I believe, is for display output. But what I would like to know is what actually happens when you execute the instruction int 10h. From the description of the int instruction, it should jump to a routine whose address is stored in the interrupt table. But how does this routine get into RAM? Does the BIOS save those routines to RAM? And what does that routine actually do? I mean, the CPU can only access RAM, right? So how can it access some other HW? Is there some special instruction for it? Or is the CPU somehow connected to the BIOS, and then the BIOS actually does the work? And the last thing: do OSes like Windows or GNU/Linux even use BIOS interrupts, or can the OS access HW directly? Thanks.
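
    Not an answer in itself, but on a Linux machine you can actually look at the two mechanisms the question is circling around - port-mapped I/O and memory-mapped I/O - because the kernel exports the ranges each device has claimed:

        # legacy port I/O ranges (what the IN/OUT instructions talk to)
        cat /proc/ioports
        # memory-mapped I/O ranges (device registers mapped into the address space)
        cat /proc/iomem
        # per-device view, including the I/O ports and memory windows each card uses
        lspci -v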

    Read the article

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What are some good ways to add color to my terminal environment? What tricks do you do? What pitfalls have you encountered? Unfortunately, support for color is wildly variable depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation: I tend to set 'TERM=xterm-color', which is supported on most hosts (but not all). I work on a number of different hosts, different OS versions, etc. I'm trying to keep things simple and generic, if possible. Many OSes set things like 'dircolors' by default, and I don't want to modify this everywhere. So I try to stick with the defaults and instead tweak my Terminal's color configuration. Use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". I've managed to find some settings which are widely supported, and which don't print gobbledygook characters in older environments (even FreeBSD4!), for the most part. From my .bash_profile ### Color support # The Terminal application typically does 'export TERM=term=color' # Some terminal types will print Black, White & underlined with these settings. OS=`uname -s` case "$OS" in "SunOS" ) # Solaris9 ls doesn't allow color, so use special characters instead. LS_OPTS='-F' ;; "Linux" ) # GNU tools supports colors! See dircolors to customize colors export LS_OPTS='--color=auto' # Color support using 'less -R' alias less='less --RAW-CONTROL-CHARS' alias ls='ls ${LS_OPTS}' export GREP_OPTIONS="--color=auto" ;; "Darwin"|"FreeBSD") # Most FreeBSD & Apple Darwin supports colors # LS_OPTS="-G" export CLICOLOR=true alias less='less --RAW-CONTROL-CHARS' export GREP_OPTIONS="--color=auto" ;; esac
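
    Since the question mentions colouring the Bash prompt but the excerpt stops short of it, here is one widely portable PS1 sketch using plain ANSI escapes (nothing terminal-specific is assumed):

        # green user@host, blue working directory, everything else in the default colour
        export PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '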

    Read the article

  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it shows the version and everything correctly, but even the plain info command, ffmpeg -i Alice_In_Wonderland.mp4, gives a message like FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46) Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) - 49.92 (599/12) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4': Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16 Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc At least one output file must be specified Please tell me what the problem is.
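
    The last line of that output is the actual complaint: ffmpeg -i file only probes the input, so it always ends with "At least one output file must be specified". A minimal conversion command for this particular build might look like the following (the codec choices are an assumption, but both libx264 and libmp3lame appear in the configure flags above):

        # probe only, which is what was run above
        ffmpeg -i Alice_In_Wonderland.mp4
        # actually transcode by naming an output file
        ffmpeg -i Alice_In_Wonderland.mp4 -vcodec libx264 -b 800k -acodec libmp3lame -ab 128k out.mp4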

    Read the article

  • DRBD Replication failure

    - by user62513
    A couple of weeks ago I set up a 2-node CRM system with one of the managed resources being MySQL over DRBD. Today, for maintenance reasons, I restarted both nodes but now they can't connect to each other anymore. DRBD fell out of sync and I followed this guide to get it back connected, but it's only able to run successfully on one node. But this strange thing happens: If I crm node standby both nodes and I try: crm node online node0 before crm node online node1, all the CRM resources start successfully but the DRBD partitions are still running in StandAlone state. crm node online node1 before crm node online node0, the DRBD resource fails to start, thus causing mysql not to start. If I standby both nodes and call crm node online node0, then it times out and prints this error: Running crm node online node0 produces this output after timing out Error setting standby=off (section=nodes, set=<null>): Remote node did not respond Error performing operation: Remote node did not respond Is there anything I'm doing wrong here? An alternative would be to just do MySQL replication, but I'm not sure how to promote a slave to master when the master database is not available.
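
    Two nodes both stuck in StandAlone after an unclean restart usually means DRBD detected split brain; the standard manual recovery for DRBD 8.x is sketched below (the resource name r0 is an assumption - substitute the real one, and be sure which node's changes you are willing to throw away):

        # on the node whose data will be discarded
        drbdadm secondary r0
        drbdadm -- --discard-my-data connect r0
        # on the surviving node, if it is also StandAlone
        drbdadm connect r0
        # watch the resync
        cat /proc/drbd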

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error: Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server. I converted the crt file to pem using these commands from this tutorial: openssl x509 -in input.crt -out input.der -outform DER openssl x509 -in input.der -inform DER -out output.pem -outform PEM During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded). However, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? UPDATE If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (which is the topmost certificate) before adding it to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, then the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS versions: go here, click on the "retail ssl" tab, then click on "secure site" and "CA Bundle for Apache Server", and copy and paste these intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
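
    For the certificate chain box itself, the usual recipe is to concatenate the intermediate (and optionally root) certificates in PEM form and verify the result locally before uploading; the file names below are placeholders:

        # build the chain: intermediate(s) first, root last, all PEM encoded
        cat intermediate.pem root.pem > chain.pem
        # sanity-check that the chain actually validates your certificate
        openssl verify -CAfile chain.pem output.pem
        # after uploading, confirm the load balancer is serving the full chain
        openssl s_client -connect www.example.com:443 -showcerts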

    Read the article

  • Why does redis report limit of 1024 files even after update to limits.conf?

    - by esilver
    I see this error at the top of my redis.log file: Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit. I have followed these steps to the letter (and rebooted): Moreover, I see this when I run ulimit: ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n 65535 Is this error specious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info: ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis root 1027 0.0 0.0 66328 2112 ? Ss 20:30 0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf ubuntu 1107 19.2 48.8 7629152 7531552 ? Sl 20:30 2:21 /usr/local/bin/redis-server *:6379 The server is therefore running as ubuntu. Here are my limits.conf file without comments: ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d' ubuntu soft nofile 65535 ubuntu hard nofile 65535 root soft nofile 65535 root hard nofile 65535 And here is the output of sysctl fs.file-max: ubuntu@ip-XX-XXX-XXX-XXX:~$ sysctl -a| grep fs.file-max sysctl: permission denied on key 'fs.protected_hardlinks' sysctl: permission denied on key 'fs.protected_symlinks' fs.file-max = 1528687 sysctl: permission denied on key 'kernel.cad_pid' sysctl: permission denied on key 'kernel.usermodehelper.bset' sysctl: permission denied on key 'kernel.usermodehelper.inheritable' sysctl: permission denied on key 'net.ipv4.tcp_fastopen_key' as sudo ubuntu@ip-10-102-154-226:~$ sudo sysctl -a| grep fs.file-max fs.file-max = 1528687 Also, I see this error at the top of the redis.log file, not sure if it's related. It makes sense that the ubuntu user isn't allowed to change max open files, but given the high ulimits I have tried to set he shouldn't need to: [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors. [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
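
    Because the daemon is launched from an init script via sudo, pam_limits (and therefore limits.conf) may never be applied to that process at all; two hedged checks - inspect the limits the running process actually inherited, and raise the limit in the script that starts redis:

        # what open-files limit did the running redis process really get?
        cat /proc/$(pidof redis-server)/limits | grep 'open files'
        # in the init/upstart script, raise the limit immediately before launching redis
        ulimit -n 65535
        sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf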

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are: Gentoo 2.6.36-r5 Windows (XP/Vista/7) I work on my Windows, I use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain to /mnt/mydomain.com. Authentication is done via keys, so lazy me don't have to type in my password every now and then. Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access this /mnt/mydomain.com via a samba share. So I shared /mnt in samba, all mounts except mydomain.com is listed on my Windows Explorer. My theories are: sshfs does not set the mountpoint uid/gid to something that samba expects samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set. All above is wrong, and I don't know. Here are configs and output from console, need anything else just let me know. Also no errors or warnings that I take notice of being relevant to this issue, but I might be wrong. gentoo ~ # ls -lah /mnt total 20K drwxr-xr-x 9 root root 4.0K Mar 26 16:15 . drwxr-xr-x 18 root root 4.0K Mar 26 2011 .. -rw-r--r-- 1 root root 0 Feb 1 16:12 .keep drwxr-xr-x 1 root root 0 Mar 18 12:09 buffer drwxr-s--x 1 68591 68591 4.0K Feb 16 15:43 mydomain.com drwx------ 2 root root 4.0K Feb 1 16:12 cdrom drwx------ 2 root root 4.0K Feb 1 16:12 floppy drwxr-xr-x 1 root root 0 Sep 1 2009 services drwxr-xr-x 1 root root 0 Feb 10 15:08 www /etc/samba/smb.conf [mnt] comment = Mount points writable = yes writeable = yes browseable = yes browsable = yes path = /mnt /etc/fstab sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0 For an easier read: [email protected] /home/to/pub/dir/ /mnt/mydomain.com/ options: comment=sshfs noauto users exec uid=0 gid=0 allow_other reconnect follow_symlinks transform_symlinks idmap=none SSHOPT=HostBasedAuthentication Help!
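
    A hedged sketch of an sshfs invocation that tends to re-export better over Samba: let other local users (the smbd processes) traverse the mount, and map remote ownership onto a known local account; the user name is only an example, and allow_other normally requires user_allow_other in /etc/fuse.conf:

        # mount as a normal user but make the mount visible to smbd, with predictable ownership
        sshfs -o allow_other,reconnect,uid=$(id -u youruser),gid=$(id -g youruser) \
            user@mydomain.com:/home/to/pub/dir /mnt/mydomain.com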

    Read the article

  • cURL - Unknown SSL protocol error - OS X 10.9

    - by saq7
    I am trying to use cURL and get the following error on every https request I make. The error is always the same. HTTP requests work flawlessly. The verbose output is quite useless. Saquibs-MacBook-Pro:~ skothawala$ curl https://google.com -vv * Adding handle: conn: 0x7fe09b803a00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7fe09b803a00) send_pipe: 1, recv_pipe: 0 * About to connect() to google.com port 443 (#0) * Trying 74.125.226.129... * Connected to google.com (74.125.226.129) port 443 (#0) * Unknown SSL protocol error in connection to google.com:-9805 * Closing connection 0 curl: (35) Unknown SSL protocol error in connection to google.com:-9805 I have looked through other answers on this forum and other places on the internet, but haven't found an answer. Most people's issues involve particular servers and the configuration of SSL on these servers. Mine however is problematic anytime HTTPS is used (with any website). Can someone please suggest what I should be looking into to solve the problem? Can it be that something is not properly configured? What should I be looking for?
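
    The curl shipped with 10.9 is built against Apple's Secure Transport rather than OpenSSL, so this error can come from the handshake itself or from something intercepting port 443; a couple of hedged checks:

        # which SSL backend is this curl actually using?
        curl --version
        # does forcing a specific protocol version change the error?
        curl -v --tlsv1 https://google.com
        # does a raw TLS connection (bypassing curl) succeed?
        openssl s_client -connect google.com:443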

    Read the article

  • Winamp playing sound but no video

    - by Greg Sansom
    I am having problems playing video in Winamp (the movie I am trying to play is an AVI - not sure if other formats work). I have installed the K-Lite Codec Pack, and the video does work in Winamp Classic. I can also play the video in Winamp on another machine (although I can't remember the exact configuration details of that machine - and I don't think they're relevant). There are a few symptoms: The content of the Video view is either empty, transparent, or displays rendering from other programs. Opening the Visualization view shows the following error: MILKDROP ERROR DirectX initialization failed (GetDeviceCaps). This means that no valid 3D-accelerated display adapter could be found on your computer. If you know this is not the case, it is possible that your graphics subsystem is unstable; please try rebooting your computer and then try to run the plugin again. Otherwise, please install a 3D-accelerated display adapter. Trying to open streams via the SHOUTCast TV plugin shows Error opening video output, and the video does not open. Opening the file with WMC causes the following error (although the movie still plays): Error creating DX9 allocation presenter CreateDevice failed D3DERR_NOTAVAILABLE There are no warnings displayed in Device Manager, although the display adapter is the standard Windows one. Running DxDiag shows no problems (codec for Video listed as XviD 1.1.2 Final). GSpot reports that codecs are installed. System specs: - Windows Server 2008 r2 Standard 64-bit, with latest updates; - .NET 3.5.1 installed; - Winamp v5.6.01 (latest version); - DirectX 11 (Latest version); - K-Lite Codec Pack 7.0.0 (Full); - Machine is HP DC7600 - full specs here. Please comment if there is any more information which will help to diagnose the problem.

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine I get something like this: # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 / drwxr-xr-x test ccase 0 Nov 16 09:28 /l/home2/test/ -rw------- test ccase 4737 Jan 06 17:54 /l/home2/test/.bash_history -rw-rw-r-- test ccase 104 Nov 11 2004 /l/home2/test/.bashrc However, when I use it to list files backed up on any Windows client I can't get the user and group information. They both always appear as 'root'. Like this: # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 / drwx------ root root 0 Feb 20 14:26 /C/temp/ -rwx------ root root 41 Feb 20 14:26 /C/temp/asdf.txt drwx------ root root 0 May 25 2004 /C/temp/CTRMNGR/ Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB and a config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All log and lib directories are split, so one set goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own. start mongodb stop mongodb start mongoconf stop mongoconf But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs. Jan 6 12:44:12 mongo4 init: event_finished: Finished started event Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try, or how to get output on why it is 'terminated with status 1'? Thanx
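
    As a sketch of the instance mechanism: instead of two near-identical jobs, a single parameterised job whose instance stanza names a variable lets both servers run side by side; the job name, paths and variable below are assumptions:

        # sketch only -- /etc/init/mongo.conf, one job started twice with different parameters:
        #   description "mongod, one instance per config file"
        #   instance $CONF
        #   respawn
        #   exec /usr/bin/mongod --config /etc/mongodb/$CONF.conf
        # then start two independent instances of the same job:
        sudo start mongo CONF=mongodb
        sudo start mongo CONF=mongoconf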

    Read the article

  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using openldap's olc to modify a schema without shutting down the server. To test some things out, I made the following schema: objectIdentifier tests orgUlyssisOID:4 objectIdentifier testAttribute tests:1 objectIdentifier testObjectClass tests:2 attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' ) attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE ) objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST (attr1 $ attr2 ) ) And added it to a new schema called test. (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail) dn : cn={9}test,cn=schema,cn=config changetype: modify delete: olcObjectClasses olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) ) Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this: ldap_create ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP localhost:389 ldap_new_socket: 4 ldap_prepare_socket: 4 ldap_connect_to_host: Trying 127.0.0.1:389 ldap_pvt_connect: fd: 4 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({i) ber: ber_flush2: 38 bytes to sd 4 ldap_result ld 0x7f2a8ccf3430 msgid 1 wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout) wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1 ** ld 0x7f2a8ccf3430 Connections: * host: localhost port: 389 (default) refcnt: 2 status: Connected last used: Mon Sep 10 11:29:57 2012 ** ld 0x7f2a8ccf3430 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f2a8ccf3430 request count 1 (abandoned 0) ** ld 0x7f2a8ccf3430 Response Queue: Empty ld 0x7f2a8ccf3430 response count 0 ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL ldap_int_select read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 12 contents: read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f2a8ccf3430 0 new referrals read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1 request done: ld 0x7f2a8ccf3430 msgid 1 res_errno: 0, res_error: <>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 4 ldap_free_connection: actually freed So no real indication of an error. Where am I doing it wrong? Bonus question: If I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
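
    Two things worth trying, hedged because cn=config behaviour differs between OpenLDAP versions: the LDIF above has a space before the colon in "dn :", which ldapmodify may silently reject, and individual values of ordered attributes such as olcObjectClasses can usually be deleted by their positional {n} index instead of by repeating the full definition:

        # delete.ldif -- remove the first objectClass value from the {9}test schema entry:
        #   dn: cn={9}test,cn=schema,cn=config
        #   changetype: modify
        #   delete: olcObjectClasses
        #   olcObjectClasses: {0}
        ldapmodify -x -W -D cn=admin,cn=config -f delete.ldif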

    Read the article
