Search Results

Search found 7645 results on 306 pages for 'persian dev'.


  • Remote installing an msi on citrix servers using WMI

    - by capn
    OK, I'm a C# programmer who is trying to streamline the deployment of a custom Windows Forms app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin-type things (or vbs, or WMI, or terminal servers, or Citrix; even WiX and MSI are not things I usually deal with), but so far I have put together some vbs and have an end goal in mind. The msi does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a citrix machine.

    End Goal: Deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer Engine allows) to the cluster of Citrix machines my users have access to.

    What am I missing in my script to get the MSI to execute on the remote machines?

    Layout:
    - Machine A is my dev machine, and has the vbs script and the msi file (XP SP3)
    - Machines C1 - C6 are the Citrix Servers that need the application installed on them via the msi (Server 2003 R2 SP2)
    - There is also optionally a shared network resource that all the machines can access.

    Script:

        'Set WMI Constants
        Const wbemImpersonationLevelImpersonate = 3
        Const wbemAuthenticationLevelPktPrivacy = 6

        'Set whether this is installing to the debug Citrix Servers
        Const isDebug = true

        'Set MSI location
        'Network location yields error 1619 (This installation package could not be opened.)
        msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi"
        'Directory on machine A yields error 3 (file not found)
        'msiLocation = "C:\Temp\Deploy\Setup.msi"
        'Mapped network drive (on both machines) yield error 3 (file not found)
        'msiLocation = "O:\Citrix Deployment\Setup.msi"

        'Set login information
        strDomain = "MyDomain"
        Wscript.StdOut.Write "user name:"
        strUser = Wscript.StdIn.ReadLine
        Set objPassword = CreateObject("ScriptPW.Password")
        Wscript.StdOut.Write "password:"
        strPassword = objPassword.GetPassword()

        'Names of Citrix Servers
        Dim citrixServerArray
        If isDebug Then
            citrixServerArray = array("C4")
        Else
            'citrixServerArray = array("C1","C2","C3","C5","C6")
        End If

        'Loop through each Citrix Server
        For Each citrixServer in citrixServerArray
            'Login to remote computer
            Set objLocator = CreateObject("WbemScripting.SWbemLocator")
            Set objWMIService = objLocator.ConnectServer(citrixServer, _
                "root\cimv2", _
                strUser, _
                strPassword, _
                "MS_409", _
                "ntlmdomain:" + strDomain)

            'Set Remote Impersonation level
            objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate
            objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy

            'Reference to a process on the machine
            Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process")

            'Change user to install for terminal services
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /install", Null, Null, intProcessID)
            WScript.Echo errReturn

            'Install MSI here
            'Reference to a product on the machine
            Set objSoftware = objWMIService.Get("Win32_Product")

            'All users set in option parameter, I'm led to believe that the third parameter is actually ignored
            'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript
            errReturn = objSoftware.Install(msiLocation, "ALLUSERS=2 REBOOT=ReallySuppress", True)
            Wscript.Echo errReturn

            'Change user back to execute
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /execute", Null, Null, intProcessID)
            WScript.Echo errReturn
        Next

    I also tried using this to install. It doesn't return an error code, but it doesn't install the msi either, and it makes me wonder if the change user /install command is even really working:

        errReturn = objProcess.Create _
            ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet")
        Wscript.Echo errReturn

    Read the article

  • sudo ENV_KEEP not always preserving

    - by mafro
    When I run sudo -s, my environment is preserved. However, when running a simple sudo <command> it appears not to be preserved.

    The contents of my sudoers file:

        mafro@ip-10-xx-xx-250:~ > sudo cat /etc/sudoers.d/mafro
        Defaults env_reset
        Defaults env_keep += "HOME"
        mafro ALL=(ALL) NOPASSWD:ALL

    Using sudo -s, the ll alias is available:

        mafro@ip-10-xx-xx-250:~ > sudo -s
        root@ip-10-xx-xx-250:~ > ll
        total 8K
        drwxrwxr-x  2 mafro dev 4.0K Jun  9 23:59 bin
        drwxr-xr-x 20 mafro dev 4.0K Jun  9 23:59 dotfiles

    Using straight sudo, it is not:

        mafro@ip-10-xx-xx-250:~ > sudo ll
        sudo: ll: command not found

    What is happening here?
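    For reference, a minimal sketch of one common workaround, assuming bash/zsh and that ll is an alias defined in your rc files rather than a binary: aliases live in the shell that defines them, not in the environment, so env_keep cannot carry them across; sudo -s only works because it starts a new shell that re-reads the rc files.

        # A trailing space in the sudo alias makes the shell attempt alias
        # expansion on the word that follows it, so "sudo ll" becomes "sudo ls -alF".
        alias ll='ls -alF'
        alias sudo='sudo '
        sudo ll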

    Read the article

  • Permanent Routes Centos Questions

    - by user65053
    So with a little help I figured out how to set up these routes, and I can set them in rc.local:

        route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1
        route add -net 208.82.236.0 netmask 255.255.255.0 dev eth0 metric 10

    My question: since the first route is on ppp0, the route is dropped as soon as I disconnect the modem. How do I maintain the route, or make it permanent, so that the next time the modem connects it will follow the route? Currently, after ppp0 disconnects the route is dropped:

        netstat -r
        Kernel IP routing table
        Destination      Gateway    Genmask          Flags  MSS Window  irtt Iface
        laxapx03.o1.com  *          255.255.255.255  UH       0 0          0 ppp0
        208.82.236.0     *          255.255.255.0    U        0 0          0 eth0
        10.0.1.0         *          255.255.255.0    U        0 0          0 eth0
        169.254.0.0      *          255.255.0.0      U        0 0          0 eth0
        default          10.0.1.1   0.0.0.0          UG       0 0          0 eth0
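    A minimal sketch of how this is usually made persistent on CentOS/RHEL; the file names follow the distro's network-scripts convention, and the ip-up hook is what re-adds the ppp0 route every time the modem connects (run as root):

        # Route re-applied on every ifup of eth0:
        echo '208.82.236.0/24 dev eth0 metric 10' > /etc/sysconfig/network-scripts/route-eth0

        # Route re-added each time pppd brings the link up:
        printf '#!/bin/sh\nroute add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1\n' > /etc/ppp/ip-up.local
        chmod +x /etc/ppp/ip-up.local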

    Read the article

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out if my NFSv3 deployment is performing SAFE asynchronous writes. I suspect that it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in write calls, and to look for 'commit' calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:

        dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512

    However, smaller block sizes appear to be performed strictly synchronously:

        dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360

    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be performed asynchronously?
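    A small sketch of things worth inspecting alongside the capture, assuming a Linux client (the mount point follows the paths above); the negotiated wsize and the sync/async mount options both influence how write-back is batched into WRITE and COMMIT calls:

        nfsstat -m              # negotiated rsize/wsize and effective mount options per NFS mount
        mount | grep /proj/re3  # confirm 'sync' is not among the options actually in effect
        nfsstat -c              # client-side op counters; compare WRITE vs COMMIT growth during each dd run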

    Read the article

  • How can I login to Ubuntu using a USB serial port?

    - by marc
    How can I enable remote terminal login into Ubuntu 9.10 using a USB serial port? The device /dev/ttyUSB0 is created, and I want to allow logins over it using HyperTerminal. I found some resources, but they are related to real hardware RS-232 ports; I can't find any information about USB converters. So far I have established a connection between that USB serial port and my laptop: I can send text to the port (cp sometext.txt /dev/ttyUSB0) and read it in HyperTerminal. What do I need to do to enable logins on this port?
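    A minimal sketch of spawning a getty on the USB port, assuming upstart (the init system on 9.10); the job name and baud rate are illustrative:

        # Create /etc/init/ttyUSB0.conf and start the job:
        printf '%s\n' \
          'start on runlevel [2345]' \
          'stop on runlevel [!2345]' \
          'respawn' \
          'exec /sbin/getty -L 115200 ttyUSB0 vt102' \
          | sudo tee /etc/init/ttyUSB0.conf
        sudo initctl start ttyUSB0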

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks, and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller.

    uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state, usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230mbit/s. If I run top and hit 1 to see CPU core usage, all 4 cores sit at around 99% iowait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25MB/s. I have each of the workers bound to its own core.

    Initially I figured it was just the disks being bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting a good speed: if I wget over the local network it easily hits 60MB/sec. Right now I just tried putting a file in /dev/shm, then symlinked a file from the public dir to it and used wget over the local network, and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740MB and then slows down HEAVILY, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer, so it seems there's a network limit; yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
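    A short sketch of standard tools for narrowing down where the stall is; device names are illustrative and none of this is specific to the Areca controller:

        iostat -x 2                           # per-device %util, await and queue depth (sysstat package)
        vmstat 2                              # blocked processes (b) and iowait (wa) over time
        cat /sys/block/sda/queue/scheduler    # which elevator the array is using
        blockdev --getra /dev/sda             # current readahead, in 512-byte sectors
        blockdev --setra 4096 /dev/sda        # illustrative: larger readahead for many concurrent streaming readers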

    Read the article

  • Easily recreate a server's "state" [closed]

    - by Brandon Wamboldt
    I want the ability to set up new servers for dev/testing/prod very easily. The reasons for being able to set up a new dev VM are obvious, but for prod my concern is adding a new production server/migrating to a new server. I assume a traditional backup solution won't work, as the hardware may be different, so the binaries/config might be different. I want to get experience with puppet anyway, so I was thinking about creating a manifest that would set up my users, install Postgres, Nginx, PHP-FPM, etc., and configure them the way I specify. Then I could install puppet on a new server, copy down my manifest and apply it locally. This would make keeping my server configs in sync easier too. Is there a better approach I'm not aware of, and does my approach have any pitfalls?

    Read the article

  • KVM Guest installed from console. But how to get to the guest's console?

    - by badbishop
    I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using the command line only (both hypervisor and guest). It goes without errors, and without a tangible result. I'd like to know how to do a text-only installation. So, here's what I've done:

        # virt-install \
        --name=FE --ram=756 --vcpus=1 \
        --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \
        --nographics --os-type=linux \
        --extra-args='console=tty0' -v \
        --cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso

        Starting install...
        Creating domain...            |    0 B     00:00
        Connected to domain FE
        Escape character is ^]
        ÿ

    Now what? As I understand after googling for a couple of days, I should see the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, kindly suggesting that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this:

        Domain installation still in progress. You can reconnect to the console
        to complete the installation process.
        [root@v ~]# virsh console FE
        Connected to domain FE
        Escape character is ^]

    This shows that the VM is running:

        # virsh list
         Id Name                 State
        ----------------------------------
          8 FE                   running

    Qemu log:

        LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
        char device redirected to /dev/pts/1

    Output of /etc/libvirt/qemu/FE.xml:

        # cat /etc/libvirt/qemu/FE.xml
        <domain type='kvm'>
          <name>FE</name>
          <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid>
          <memory>774144</memory>
          <currentMemory>774144</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/libexec/qemu-kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw' cache='none'/>
              <source file='/var/lib/libvirt/images/FE.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:0a:65:8d'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target port='0'/>
            </console>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    I'm obviously missing something that many others don't, but what is it? Thanx in advance!
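    For reference, a minimal sketch of the invocation form that usually does give a text install on the serial console. Two assumptions worth flagging: --extra-args is only passed to the installer kernel when booting from an install tree (--location), not from --cdrom, and the guest's serial console is ttyS0 rather than tty0; the Live Desktop image boots a graphical live session rather than a text-mode installer, so an install tree or DVD image is needed. The mirror URL is illustrative:

        virt-install \
          --name=FE --ram=756 --vcpus=1 \
          --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \
          --nographics --os-type=linux \
          --location=http://mirror.example.com/fedora/releases/14/Fedora/x86_64/os/ \
          --extra-args='console=ttyS0,115200n8'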

    Read the article

  • Recover data from an ''unpartitioned'' hard drive

    - by Rafael S. Calsaverini
    I'm trying to recover data from a hdd for a friend from work. He was using it in an old Win98 PC (so I guess it was a FAT16 filesystem). When he installed the drive in a new PC, his Windows XP couldn't recognize the filesystem and gave an error message saying that the drive is unformatted. I tried to mount the hdd under Linux, but no partitions appear to be associated with the drive (I have only /dev/sdb associated with that drive, and no /dev/sdb1 or sdb2 etc.). I've found many articles on the web on how to recover partitions (with tools like dd and ddrescue), but how do I proceed when I have no partitions and the system says my drive is unpartitioned? Is it possible to create a new partition without losing the data?
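    A minimal sketch of the usual approach; package names vary by distro and the image path is illustrative. The idea is to image the failing disk first and then look for the lost partition table on the copy:

        ddrescue /dev/sdb disk.img disk.log   # from the gddrescue package: copy what is still readable
        testdisk disk.img                     # scan the image for the lost FAT16 partition; it can rewrite the table
        photorec disk.img                     # last resort: carve files out even without any partition table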

    Read the article

  • Storage (EBS) attached to my EC2 instance reporting files on two devices at once

    - by Philip Isaacs
    I have an EC2 instance with two attached EBS drives. One drive, /dev/sda1, is mounted on /. The second, /dev/sda2, is mounted on /var2.

    So here's what's strange: whenever I add any files to /var2, it also reports that the / device is filling up, as if they were the same device. It's so strange. If I save a 10 MB file to a directory on /var2, both / and /var2 use up 10 MB of space. Is this bizarre?
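    A quick sketch of how to confirm whether /var2 is really a separately mounted filesystem; if the mount never happened, writes to /var2 simply land on the root volume, which would explain exactly this behaviour:

        df -h / /var2          # the two lines should show different devices and sizes
        mount | grep var2      # should list a block device (e.g. /dev/xvdf) mounted on /var2
        grep var2 /etc/fstab   # make sure the mount is re-created at boot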

    Read the article

  • using svnadmin in a php script

    - by fabjoa
    Howdie!

    Scenario: Allow developers to submit new application packages to a market server. Developers run a bash script which contains a cURL call to the market server (localhost/market/submit/$app-name). The submit script on the server creates a new folder in the existing svn server with the name of the submitted app. The script on the dev side waits for HTTP to issue a success message and then does an svn checkout on the dev's local machine.

    Problem: The submit script on the market server fails to create the new svn directory through this code:

        echo `svnadmin mkdir -m 'added new package $package' http://localhost/market/packages/$package`;

    This echoes nothing, and when I go to http://localhost/market/packages, the folder has not been added and the revision number has not been incremented. I've tried, from a terminal on the market server, chown root:www-data /usr/bin/svnadmin, but still no luck. Has somebody come across a similar problem? Any solutions? Thanks!

    Profile: Linux/Ubuntu, Apache, Subversion
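    For comparison, a minimal sketch of the command that performs this job: svnadmin has no mkdir subcommand and only operates on local repository paths, so committing a new directory over http is the svn client's job (and the web server user needs commit rights on that path). The credentials here are illustrative, and $package is assumed to be set by the calling script:

        svn mkdir -m "added new package $package" \
            --username marketbot --password secret --non-interactive \
            "http://localhost/market/packages/$package"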

    Read the article

  • SNMP - Value of CPU processor load not reflecting reality

    - by Ovesh
    Trying to plot CPU load on my server, with the following hardware: ProLiant DL360p Gen8 (same behavior on ProLiant DL360 G7). The machine is running VMware ESXi 5.1.

    To create a CPU spike I run dd if=/dev/zero of=/dev/null, and I know the CPU is overloaded, because I can see a correlating spike in the graphs displayed on vCenter. However, running this snmpwalk:

        snmpwalk -v 1 -c ******** 192.168.MY_IP 1.3.6.1.2.1.25.3.3.1.2

    shows the following results:

        iso.3.6.1.2.1.25.3.3.1.2.1 = INTEGER: 3
        iso.3.6.1.2.1.25.3.3.1.2.2 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.3 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.4 = INTEGER: 3

    Am I not looking into the right MIB? Should I be multiplying these by a constant? By the way, using HP Agentless Monitoring I was able to get some CPU stats, but not what I'm looking for; at least nothing I could find wading through these MIBs.

    Read the article

  • Distributing entropy to virtual machines.

    - by Louis
    Dear All, I'm interested in generating secret keys for SSL on virtual machines using true randomness. By true randomness I mean the same level of entropy that can be generated by UNIX's /dev/random and the entropy gathering daemon (EGD). Is there a "general knowledge" recipe to route entropy from the physical layer to the virtual machines via the hypervisor, regardless of the hypervisor/guest OS combination? Example: suppose one "hypervises" with VMware vSphere and instantiates a Windows guest OS. Can this hypervisor collect entropy from its peripherals (like /dev/random would) and distribute it to these guest Windows OS? When considering the big vendors (VMware, Hyper-V, Citrix, etc.), do they have entropy pools that gather entropy that can easily be pushed to their respective virtual machines? Louis
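    For the KVM/QEMU case specifically (the question also covers VMware and Hyper-V, which need their own vendor mechanisms), a minimal sketch of feeding host entropy to a guest with virtio-rng; the guest then sees a /dev/hwrng device it can mix into its own pool. The flags assume a reasonably recent QEMU, and the image name is illustrative:

        qemu-system-x86_64 -m 1024 -drive file=guest.img,format=raw \
          -object rng-random,id=rng0,filename=/dev/random \
          -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000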

    Read the article

  • solaris + dladm + what is unknown state and how to bring it to up?

    - by yael
    I installed Solaris 10 on my Netra machine. From dladm show-dev I can see which interfaces are down or up. All interfaces are connected to the Cisco switch, and all LEDs are lit on all LAN cards, but I do not understand why all interfaces except e1000g0 are in the unknown state. Please advise how to bring the unknown interfaces up.

        # dladm show-dev
        e1000g0         link: up        speed: 1000  Mbps       duplex: full
        e1000g1         link: unknown   speed: 0     Mbps       duplex: unknown
        e1000g2         link: unknown   speed: 0     Mbps       duplex: unknown
        e1000g3         link: unknown   speed: 0     Mbps       duplex: unknown
        nxge0           link: unknown   speed: 0     Mbps       duplex: unknown
        nxge1           link: unknown   speed: 0     Mbps       duplex: unknown
        nxge2           link: unknown   speed: 0     Mbps       duplex: unknown
        nxge3           link: unknown   speed: 0     Mbps       duplex: unknown
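    One thing worth trying, sketched below: on Solaris 10, dladm tends to report unknown for interfaces that have never been plumbed, since the driver instance isn't attached until something opens it. The interface name is illustrative:

        ifconfig e1000g1 plumb up     # attach the driver instance without assigning an address
        dladm show-dev e1000g1        # link state should now be reported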

    Read the article

  • renaming hard drives (sdc to sdb) on the fly

    - by w00t
        ata2: link is slow to respond, please be patient (ready=0)
        kernel: [2761026.198796] ata2: soft resetting link
        kernel: [2761031.226669] ata2.00: disabled
        kernel: [2761031.226720] ata2: EH complete
        kernel: [2761031.226753] sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK,SUGGEST_OK

    After receiving the error above, I couldn't access /dev/sdb anymore. Not wanting to restart the server, I rescanned for the device using echo "- - -" > /sys/class/scsi_host/host1/scan, and it re-added the drive as /dev/sdc. From what I have found, I need to use echo "scsi add-single-device 0 0 3 0" > /proc/scsi/scsi, "3" being the SCSI ID which corresponds to sdb. Everything is fine up to the point where I execute the command and get -bash: echo: write error: Invalid argument. All the solutions point to using this method, but I am unable to. Is any other method available?

    Debian 5.0.8 - 2.6.26-1-686
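    A sketch of the sysfs route, which avoids the legacy /proc/scsi/scsi interface entirely (run as root; host and device names are illustrative). If the underlying problem recurs, mounting by UUID or /dev/disk/by-id sidesteps the naming issue altogether:

        echo 1 > /sys/block/sdc/device/delete           # drop the re-detected device
        echo "- - -" > /sys/class/scsi_host/host1/scan  # rescan; with no stale sdb target left, the disk usually comes back as sdb
        ls -l /dev/disk/by-id/                          # stable names, independent of sdb/sdc ordering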

    Read the article

  • Touch Screen Ubuntu 10.04LTS

    - by WalterJ89
    I'm trying to get a touch screen working with Ubuntu 10.04 LTS (64-bit):

    - It is a serial touchscreen, connected at /dev/ttyS0. I know that works, because I get garbage in the terminal when I enable it.
    - Before, the screen used a 3M driver (I believe) in XP.

    My knowledge of Linux is passive, so I generally pick up something when I need it. To get this working I came across a lot of tutorials (many of them a bit outdated), but I'm still at a loss to get this to work. I'm not sure where to put Linux drivers (/usr/ or /dev/?); most tutorials kind of skip over that part. I have tried editing /etc/X11/xorg.conf, unsuccessfully. I'm not sure what the syntax for that is supposed to be. Thank You

    Read the article

  • UPS - Two computers - How to get them to both shutdown when battery is low?

    - by hamlin11
    Short Version: How do I get 2 computers to shut down when a UPS battery gets low?

    Long Version: I have an APC UPS, the RS 1500. It has a USB cord that goes into my main dev computer. My dev computer will shut down when the battery gets low. However, in addition, I have now hooked up a database server to the same UPS. How can I have that database server also know that it needs to shut down when the battery gets low?
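    A minimal sketch of the usual approach with apcupsd, assuming both machines can run it: the dev box owns the USB link and publishes UPS status over the network, and the database server polls it and shuts itself down on low battery. The hostname and thresholds are illustrative:

        # On the dev box (/etc/apcupsd/apcupsd.conf): UPSTYPE usb, NETSERVER on, NISPORT 3551.
        # On the database server, point apcupsd at the dev box instead of a local cable:
        {
          echo 'UPSCABLE ether'
          echo 'UPSTYPE net'
          echo 'DEVICE devbox.example.local:3551'
          echo 'BATTERYLEVEL 10'   # shut down at 10% battery
          echo 'MINUTES 5'         # or when 5 minutes of runtime remain
        } > /etc/apcupsd/apcupsd.conf
        service apcupsd restart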

    Read the article

  • Unmount Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low, and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed. Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command.

    Read the article

  • Cygwin/Git Bizarre Terminal Issue

    - by emptyset
    Alright, this is weird. First off, this is mintty running on up-to-date cygwin, with git pulled from cygwin's setup.exe. I am running zsh.

        $ git clone https://<user>@<domain>/<repository>/ ~/src/project/dev
        Initialized empty Git repository in /cygdrive/c/src/project/dev/.git/
        Password: <actual password in plain text appears>
        # Nothing happens...
        ^C
        $ <password text that I just typed>
        zsh: command not found: <same password text>

    What is going on here? Is this a terminal problem, a shell problem, a git problem, or a cygwin problem?

    Update: Yes, I'm running the Cygwin git version, not the Windows version:

        $ which git
        /usr/bin/git
        $ git --version
        git version 1.7.1
        $ /cygdrive/c/Program\ Files\ \(x86\)/Git/bin/git.exe --version
        git version 1.7.0.2.msysgit.0

    Read the article

  • Install problems with XSendFile on Ubuntu

    - by Dan
    I installed the Apache dev headers:

        sudo apt-get install apache2-prefork-dev

    Downloaded and compiled the module as outlined here: http://tn123.ath.cx/mod_xsendfile/

    Added the following line to /etc/apache2/mods-available/xsendfile.load:

        LoadModule xsendfile_module /usr/lib/apache2/modules/mod_xsendfile.so

    Added this to my VirtualHost:

        <VirtualHost *:80>
            XSendFile on
            XSendFilePath /path/to/protected/files/

    Enabled the module by doing:

        sudo a2enmod xsendfile

    Then I restarted Apache. This code still just provides me with an empty file of 0 bytes:

        file_path = '/path/to/protected/files/some_file.zip'
        file_name = 'some_file.zip'
        response = HttpResponse('', mimetype='application/zip')
        response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
        response['X-Sendfile'] = smart_str(file_path)
        return response

    And there is nothing in the Apache error log that pertains to XSendFile. What am I doing wrong?
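    A short sketch of checks that usually narrow this down, assuming a Debian/Ubuntu Apache layout (paths are illustrative). An empty download typically means the X-Sendfile header was never acted on at all, i.e. the module is not active in the context serving the request:

        apache2ctl -M | grep -i xsendfile                                    # is xsendfile_module actually loaded?
        sudo -u www-data head -c1 /path/to/protected/files/some_file.zip    # can the Apache user read the file?
        tail -f /var/log/apache2/error.log                                   # watch while requesting the download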

    Read the article

  • How to load tun module in linux?

    - by rabipelais
    I cannot manage to load the tun module on my archlinux box. I'm trying to connect with openvpn, but the log says:

        nm-openvpn[6662]: Note: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)

    lsmod | grep tun returns nothing. If I run sudo modprobe tun, it returns failure, but no error message, and lsmod still shows no tun. The module seems to exist, as there is a tun.ko.gz in /lib/modules/....... I really don't know what else to try. Thanks in advance
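    A small sketch of the usual first check on Arch, where a kernel package upgrade without a reboot leaves /lib/modules out of step with the running kernel, so modprobe has nothing that matches:

        uname -r                      # kernel currently running
        ls /lib/modules/              # module trees actually installed; do they match the above?
        modinfo tun                   # can the module tooling even resolve tun for this kernel?
        sudo modprobe -v tun && lsmod | grep tun
        ls -l /dev/net/tun            # the device node appears once the module is loaded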

    Read the article

  • fedora liveUSB fails, drops to debug shell

    - by evan
    Trying to install Fedora 15 via a live USB made with unetbootin. I get to the unetbootin boot menu, select Fedora-15-x86_64-Live-Desktop.is, I get to this screen, and then it drops into a debug shell with the message sh: can't access tty: job control turned off. The last message in dmesg is:

        dracut Warning: No root device "live:/dev/disk/by-label/Fedora-15-Beta-x86_64-Live-Desktop.is" found.

    This seems to be the same problem detailed here. I tried nk1eto's solution, but there is no by-label directory in /dev/disk. There's by-id, by-path and by-uuid.
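    A sketch of the usual fix: dracut is looking the root filesystem up by volume label, and unetbootin doesn't set one that matches, so either point the boot entry at the stick's real UUID or write the stick with a tool that sets the label itself. Device name and UUID below are illustrative:

        blkid /dev/sdb1      # note the stick's actual LABEL= / UUID=
        # In syslinux.cfg on the stick, change the kernel argument
        #   root=live:/dev/disk/by-label/Fedora-15-Beta-x86_64-Live-Desktop.is
        # to something like
        #   root=live:UUID=4E1A-3D2F
        # Or skip unetbootin and use the Fedora tool, which labels the stick for you:
        livecd-iso-to-disk Fedora-15-x86_64-Live-Desktop.iso /dev/sdb1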

    Read the article

  • Postfix TLS issue

    - by HTF
    I'm trying to enable TLS on Postfix, but the daemon is crashing:

        Sep 16 16:00:38 core postfix/master[1689]: warning: process /usr/libexec/postfix/smtpd pid 1694 killed by signal 11
        Sep 16 16:00:38 core postfix/master[1689]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    CentOS 6.3 x86_64

        # postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        local_recipient_maps =
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_cache.db
        smtp_use_tls = yes
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 1s
        smtpd_hard_error_limit = 20
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_pipelining, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_destination reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_soft_error_limit = 10
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550
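    A short sketch of checks that rule out the simple causes before digging into the crash itself (paths follow the config above); they verify the key and certificate parse cleanly and show which TLS/SASL libraries smtpd actually links against:

        postfix check                                               # permissions / config sanity check
        openssl x509 -in /etc/postfix/ssl/smtpd.crt -noout -text    # does the certificate parse cleanly?
        openssl rsa  -in /etc/postfix/ssl/smtpd.key -check -noout   # does the key parse cleanly?
        ldd /usr/libexec/postfix/smtpd | grep -Ei 'ssl|sasl'        # libraries smtpd is linked against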

    Read the article

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">
            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>

    Read the article

  • Decreasing Root Disk Size of an "EBS Boot" AMI on EC2

    - by darkAsPitch
    So I have followed Eric's wonderful article here: http://alestic.com/2009/12/ec2-ebs-boot-resize

    This is basically the command that helped me increase the default size of the AMI:

        ec2-run-instances ami-ID -n 1 --key keypair.pem --block-device-mapping "/dev/sda1=:250"

    Running Ubuntu 11.10, I didn't even have to resize the disk afterwards; it was immediately a 250GB drive.

    How do I go about decreasing the default size of the AMI? I tried the obvious:

        ec2-run-instances ami-ID -n 1 --key keypair.pem --block-device-mapping "/dev/sda1=:100"

    But I was told:

        Client.InvalidBlockDeviceMapping: Volume of size 100GB is smaller than snapshot ####### <250

    Read the article
