Search Results

Search found 16331 results on 654 pages for 'option'.

  • Jenkins: Use it with SSL / https

    - by Tim
    I have a Fedora server running Jenkins, which I installed via yum. Everything is okay: I can access it at http://ci.mydomain.com. But now I want to access it at https://ci.mydomain.com, so that the login with username and password is encrypted. How can I do this? Best Regards, Tim

    Update: my /etc/sysconfig/jenkins file is below. Jenkins starts, but I cannot reach it in a web browser at https://ci.mydomain.com or http://ci.mydomain.com:443.

        ## Path: Development/Jenkins
        ## Description: Configuration for the Jenkins continuous build server
        ## Type: string
        ## Default: "/var/lib/jenkins"
        ## ServiceRestart: jenkins
        #
        # Directory where Jenkins store its configuration and working
        # files (checkouts, build reports, artifacts, ...).
        #
        JENKINS_HOME="/var/lib/jenkins"

        ## Type: string
        ## Default: ""
        ## ServiceRestart: jenkins
        #
        # Java executable to run Jenkins
        # When left empty, we'll try to find the suitable Java.
        #
        JENKINS_JAVA_CMD=""

        ## Type: string
        ## Default: "jenkins"
        ## ServiceRestart: jenkins
        #
        # Unix user account that runs the Jenkins daemon
        # Be careful when you change this, as you need to update
        # permissions of $JENKINS_HOME and /var/log/jenkins.
        #
        JENKINS_USER="jenkins"

        ## Type: string
        ## Default: "-Djava.awt.headless=true"
        ## ServiceRestart: jenkins
        #
        # Options to pass to java when running Jenkins.
        #
        JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true"

        ## Type: integer(0:65535)
        ## Default: 8080
        ## ServiceRestart: jenkins
        #
        # Port Jenkins is listening on.
        #
        JENKINS_PORT="8080"

        ## Type: integer(1:9)
        ## Default: 5
        ## ServiceRestart: jenkins
        #
        # Debug level for logs -- the higher the value, the more verbose.
        # 5 is INFO.
        #
        JENKINS_DEBUG_LEVEL="5"

        ## Type: yesno
        ## Default: no
        ## ServiceRestart: jenkins
        #
        # Whether to enable access logging or not.
        #
        JENKINS_ENABLE_ACCESS_LOG="no"

        ## Type: integer
        ## Default: 100
        ## ServiceRestart: jenkins
        #
        # Maximum number of HTTP worker threads.
        #
        JENKINS_HANDLER_MAX="100"

        ## Type: integer
        ## Default: 20
        ## ServiceRestart: jenkins
        #
        # Maximum number of idle HTTP worker threads.
        #
        JENKINS_HANDLER_IDLE="20"

        ## Type: string
        ## Default: ""
        ## ServiceRestart: jenkins
        #
        # Pass arbitrary arguments to Jenkins.
        # Full option list: java -jar jenkins.war --help
        #
        JENKINS_ARGS="--httpsPort=443 --httpsKeyStore=/root/.keystore --httpsKeyStorePassword=MYPASSWORD"
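
    A plausible first check (a sketch, not a verified fix): a Jenkins daemon running as the jenkins user can read neither a keystore under /root nor bind to privileged port 443, so --httpsPort=443 can fail silently. Moving the keystore somewhere the jenkins user can read, listening on an unprivileged port, and redirecting 443 to it is one common workaround; the paths and port below are assumptions.

        # generate a keystore readable by the jenkins user (path is an example)
        keytool -genkey -alias jenkins -keyalg RSA -keystore /var/lib/jenkins/.keystore
        chown jenkins /var/lib/jenkins/.keystore

        # in /etc/sysconfig/jenkins: listen on an unprivileged HTTPS port
        JENKINS_ARGS="--httpsPort=8443 --httpsKeyStore=/var/lib/jenkins/.keystore --httpsKeyStorePassword=MYPASSWORD"

        # redirect 443 to 8443 so https://ci.mydomain.com works without running as root
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443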

  • How to set shmall, shmmax, shmni, etc ... in general and for postgresql

    - by jpic
    I followed the PostgreSQL documentation and arrived at, for example, this configuration:

        >>> cat /proc/meminfo
        MemTotal: 16345480 kB
        MemFree: 1770128 kB
        Buffers: 382184 kB
        Cached: 10432632 kB
        SwapCached: 0 kB
        Active: 9228324 kB
        Inactive: 4621264 kB
        Active(anon): 7019996 kB
        Inactive(anon): 548528 kB
        Active(file): 2208328 kB
        Inactive(file): 4072736 kB
        Unevictable: 0 kB
        Mlocked: 0 kB
        SwapTotal: 0 kB
        SwapFree: 0 kB
        Dirty: 3432 kB
        Writeback: 0 kB
        AnonPages: 3034588 kB
        Mapped: 4243720 kB
        Shmem: 4533752 kB
        Slab: 481728 kB
        SReclaimable: 440712 kB
        SUnreclaim: 41016 kB
        KernelStack: 1776 kB
        PageTables: 39208 kB
        NFS_Unstable: 0 kB
        Bounce: 0 kB
        WritebackTmp: 0 kB
        CommitLimit: 8172740 kB
        Committed_AS: 14935216 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed: 399340 kB
        VmallocChunk: 34359334908 kB
        HardwareCorrupted: 0 kB
        AnonHugePages: 456704 kB
        HugePages_Total: 0
        HugePages_Free: 0
        HugePages_Rsvd: 0
        HugePages_Surp: 0
        Hugepagesize: 2048 kB
        DirectMap4k: 12288 kB
        DirectMap2M: 16680960 kB

        >>> ipcs -l
        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1
        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767
        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                 # (change requires restart)
        shared_buffers = 4GB                 # min 128kB
        work_mem = 4MB                       # min 64kB
        wal_sync_method = open_sync          # the default is the first option
        checkpoint_segments = 16             # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9   # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which cases would it be appropriate? We did notice nice performance improvements with this configuration; how would you improve it? How should the kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
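
    For reference, a common rule of thumb (a sketch, not a tuning recommendation for this box): kernel.shmmax is a per-segment limit in bytes, while kernel.shmall is a system-wide limit in pages, so the two are tied together by the page size.

        # page size is usually 4096 bytes
        getconf PAGE_SIZE

        # allow a single shared memory segment of up to 8 GB (bytes)
        sysctl -w kernel.shmmax=8589934592

        # shmall is counted in pages: 8589934592 / 4096 = 2097152
        sysctl -w kernel.shmall=2097152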

  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors from the Volume Shadow Copy service on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mount point in the C:\WINDOWS\Temp\ folder, which I believe is used by VSS to mount a writeable image file. To summarize:

    - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
    - The Virtual Server log reports errors during the Post Backup phase
    - VSS reports errors backing up a mount point of unknown origin
    - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks. Here is the writer state:

        Writer name: 'Microsoft Virtual Server 2005 Writer'
        Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
        Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
        State: [11] Failed
        Last error: Retryable error

    From the Virtual Server log:

        Virtual Server - Vss Writer - Event ID 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log:

        VSS - None - Event ID 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log:

        Ntfs - Disk - Event ID 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs, and none to turn off online backups. Has anyone seen this kind of thing before, or can you point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace which application is creating that mount point.
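
    A hedged diagnostic step: the standard Server 2003 tools can show whether the orphaned mount point belongs to a leftover shadow copy, which would point at VSS rather than a third-party agent. These are stock commands; interpreting the output against the GUID above is the assumption.

        rem list all VSS writers and their current state
        vssadmin list writers

        rem list existing shadow copies and shadow storage associations
        vssadmin list shadows
        vssadmin list shadowstorage

        rem enumerate volume mount points; "mountvol <path> /D" removes one
        mountvol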

  • Access All VLANS over XenServer Interface

    - by Garrett
    For my current setup, I have a physical NIC on a XenServer machine that receives traffic tagged with various VLAN IDs. I have a virtual machine running Vyatta that needs access to both tagged and untagged traffic in order to route it. Here's the problem:

    1) If I bind the NIC in XenCenter to the VM (with no VLAN ID associated with it), the VM cannot see any tagged traffic. I have verified this using tcpdump. However, the tagged traffic is flowing into the XenServer machine perfectly fine.
    2) I have more than 7 VLANs, so adding each VLAN as an interface within XenCenter isn't an option.
    3) Even though tcpdump shows no tagged traffic arriving on the VM's NIC, I have tried adding VLAN interfaces within Vyatta. This also doesn't work.

    I have tried both Linux bridge and openvswitch setups, and neither seems to work. I am running XenServer 6.0.3 free and Vyatta VC6.3. Please help! I've run out of ideas. I've googled for hours and can't seem to find anything.
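
    One hedged note: whether tags ever reach the guest depends on the XenServer network backend stripping them at the bridge, which matches the tcpdump result above. If a configuration is found that passes tags through, the Vyatta side would look roughly like this (interface name and addresses are assumptions):

        configure
        set interfaces ethernet eth1 vif 10 address 10.0.10.1/24
        set interfaces ethernet eth1 vif 20 address 10.0.20.1/24
        commit
        save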

  • svchost consuming more than 50% CPU all the time in Windows 7

    - by claws
    Hello, I'm using Windows 7 Ultimate. The svchost instance containing the DCOM Server Process Launcher, Plug and Play, and Power services is consuming more than 50% of the CPU most of the time. I found this blog post: http://blog.hansmelis.be/2007/06/17/windows-vista-long-delay-when-switching-songs-in-media-player/

        That process is associated with two services: DCOM Server Process Launcher and Plug and Play. For the Vulcans among us, all logic stops there for a second. What do those two services have to do with WMP? The answer is provided by Vista's new audio engine. The new engine supports several audio "enhancements". But for the enhancements to work, the engine needs to determine if your hardware is up to the task. And when does it check that? Each time a sound output device is accessed. That's pretty nice if you can do a hot swap of sound hardware, but I don't see me doing that anytime soon. Anyways, it does provide us with the link to the correct service because checking hardware is done by the "Plug and Play" service. One might think that deactivating each enhancement would solve the problem, but that's wishful thinking. The configuration of the enhancements is located in the properties of the sound hardware. When opening the tab, I found out that no enhancements were active. Hmmm... so why does it check the hardware? Well, it does that in case you actually enable an enhancement. To completely stop the hardware checking, you have to tick the box labelled Disable all enhancements. As soon as you do that, Vista finally understands you don't want to use them.

    But that's for Vista. Is it the same with Windows 7? I couldn't find any "Disable all enhancements" option in Control Panel > Sound (mmsys.cpl). Where can I find this option in Windows 7? How do I solve this?
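
    A hedged way to confirm which of those services is actually burning the CPU: temporarily split a suspect service into its own svchost process and watch it in Task Manager. These are standard Windows commands; picking PlugPlay as the suspect is an assumption based on the blog post above.

        rem show which services live inside each svchost process
        tasklist /svc /fi "imagename eq svchost.exe"

        rem run Plug and Play in its own svchost so its CPU use is visible
        rem (the space after "type=" is required)
        sc config PlugPlay type= own

        rem revert after testing
        sc config PlugPlay type= share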

  • VMware ESX, storage over 2TB

    - by Phliplip
    Hi. First off, I'm a web developer, and my server experience lies in setting up FreeBSD web servers. I'm working on a project for a photographer, and I'm hired to develop a new online photo ordering system, where users can of course view their photos :) They have a massive need for storage, so we have bought an HP G6 and 8x 1TB SATA HDDs. Our plan is to install VMware ESX 4.0 and run multiple virtual machines: FreeBSD 8 for the web server and some Windows servers. Already done that. Then we want to mount one big storage volume on the BSD VM and share it through Samba to the Windows servers. The RAID is set up with one array of 2x 1TB to hold the VMs, and the rest as three 2x 1TB arrays for the photo data, giving 2.73TB for photo data (the arrays are RAID 1+0). Now, if we add a datastore in ESX and add the 3 LUNs, we can get a datastore of 2.74TB. But I don't see how I can attach this datastore directly to the VM, and only the BSD VM needs access to it. The only way seems to be to create a virtual disk with a maximum size of 2TB (at 8MB block size), because the datastore holding the virtual disk has a maximum file size of 2TB, and then add it as a hard disk to the BSD VM. In the 'Add Hard Disk' pane for the VM, I see an option for Raw Device Mapping, which I think gives access to the datastore or the RAID directly. The only problem is that it's greyed out! Can I access the data storage directly from the BSD VM, without creating and adding a virtual disk?
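
    A hedged pointer: the RDM option is typically only offered for SAN LUNs, which may be why it is greyed out for local RAID volumes. A commonly cited workaround (a sketch; the device name and datastore path below are assumptions) is to create the RDM mapping file by hand from the ESX console and then attach it as an existing disk:

        # find the local device name
        ls /vmfs/devices/disks/

        # create a virtual-compatibility RDM pointing at the raw LUN
        vmkfstools -r /vmfs/devices/disks/naa.600508b1001c... \
            /vmfs/volumes/datastore1/freebsd/freebsd_rdm.vmdk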

  • How to get rid of auto-generated sequence number in network's device name in Windows?

    - by Piotr Dobrogost
    Every time one plugs the same USB wireless adapter into a new USB port, Windows creates a new network device with an auto-generated sequence number, which looks like this: Wireless-N USB Network Adapter #2, Wireless-N USB Network Adapter #3, ... The device name is displayed as part of the network's information in Control Panel | Network Connections. How can I get rid of this sequence number? I found out that the device name displayed in the network's information is kept in the FriendlyName REG_SZ value under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_[device specific string]\[usb port specific string]. However, when I try to modify this value I get the error: Cannot edit FriendlyName: Error writing the value's new contents. I tried to delete the extra keys under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_13B1&PID_0029 but got: Cannot delete KEY NAME: Error while deleting key. Trying to solve this problem, I followed this answer, but when changing the owner with the "Replace owner on subcontainers and objects" option checked I got the error: Registry Editor could not set owner on the currently selected key, or some of its subkeys. To find out which subkey is the source of the problem, I tried changing the owner of each subkey. After successfully changing the owner of the Properties subkey, I saw it has subkeys which were previously hidden, and trying to change the owner of those subkeys fails in the same way. Any idea how to delete these keys?
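
    A hedged approach: the Properties subkeys under Enum are owned by SYSTEM and refuse edits even for Administrators, so running Regedit as SYSTEM (for instance with Sysinternals PsExec) is a common way in; whether it is sufficient for these particular keys is the assumption.

        rem launch an interactive Regedit running as LocalSystem
        psexec -s -i regedit.exe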

  • AirPort Express Discoverability

    - by andybjackson
    I bought an AirPort Express to enable music in a different part of a friend's house using the AirTunes feature. Unfortunately, neither iTunes nor the AirPort Utility reliably discovers the device. If I use the "Configure Other..." function within the AirPort Utility and enter the AirPort Express' IP address and password, I can reliably get access to a daughter window to configure it. This seems to nudge the underlying AirPort Utility into "finding" and displaying the AirPort Express, which it doesn't do on its own even after clicking the "Rescan" button. iTunes then also seems to cotton on to this discovery and presents the AirPort Express as an AirTunes option at the bottom right of iTunes. Things then work as we'd like them to. If I close down the AirPort Utility, iTunes loses the AirPort Express AirTunes speaker, often giving "An unknown error (-15006) occurred while connecting to the remote speaker". Of course, starting the AirPort Utility, forcing it to recognise the AirPort Express and then starting iTunes isn't the ease of use I was after. Background info: iTunes is running on Windows XP. The AirPort Express is running in wireless client mode (i.e. it connects to an unsecured wireless network in the house with nothing connected to its ethernet port). The network router is a Swisscom Motorola 3347NWG (with firmware 7.8.5r1). I have already tried:

    - Disabling the Windows XP firewall
    - Updating the AirPort Express firmware, the AirPort Utility and the router firmware
    - Ensuring Wireless privacy and similar potentially problematic router settings are off

    Solutions, or even just ideas of other things to try, would be gratefully received.

  • Windows Server 2008 Stops Responding (Hyper-V Role Enabled)

    - by blackf0rk
    The machine is a brand-new Dell Precision M6500, Core i5, 8GB RAM, running Windows Server 2008 R2 (fully patched) with the Hyper-V role enabled. Virtualization options in the BIOS are ON, SpeedStep is OFF, and I couldn't find a C1E option in the BIOS to turn off. (I also got the impression that SpeedStep is C1E, but the Intel product site lists them as separate "features" - shrug.) The server stops responding without any apparent reason. I've tried testing in multiple scenarios, all of which result in a crash at seemingly random times:

    - with the server sitting idle, no apps running
    - with the server sitting idle and a virtual machine running
    - while using a BurnInTest application

    There's no blue screen and it doesn't restart; the screen just sits there. The keyboard backlight still responds and comes on with input, but nothing on the screen changes. There are no errors in the event log. I have to hold down the power button to turn the machine off. Memory tests on boot-up report no errors. I have a second identical system, and the same thing happens there too. I've dual-booted this system into Windows 7 Professional x64 with no problems. Further testing has shown that the issue is definitely related to Windows Server 2008 R2 and Hyper-V, as the crashing does not appear to happen when the services are not running. I've installed all the hotfixes relating to this issue that I could find (975530, 979444, 979491, 976427), but the system is still crashing without response.

  • Why can't I reconnect to my Philips SHB9000 bluetooth headset?

    - by Thomas Eyde
    This is so annoying. When I connected my Philips SHB9000 Bluetooth headset to my Windows 7 (64-bit) machine for the very first time, it worked well. I had to manually change the default playback device, but otherwise it worked. But when I wake the computer from standby, it's essentially random whether I can reconnect or not. My last resort is to remove the Bluetooth device and re-pair it, but now even that doesn't work. The sad thing is, this used to work better on the Windows 7 beta. Windows Update has a new driver which fails to install, and searching for this driver yields nothing. I thought all vendors had an official site for their drivers? Well, Philips seems to have none. If there is no answer to this problem, my advice is to NOT buy this headset. It's good looking and the sound is nice, but what good is that if we can't use the bloody device?

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up Clonezilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot Clonezilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu configurations that I have used so far:

        LABEL Clonezilla Live
          MENU LABEL Clonezilla Live
          KERNEL utilities/clonezilla/vmlinuz
          APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following APPEND lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them results in the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
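
    A hedged suggestion: the Debian Live fetch= mechanism is most commonly documented with http:// rather than tftp://, so serving filesystem.squashfs from a small web server on the PXE host is worth a try. The IP is from the question; the web root path and the presence of an HTTP server are assumptions.

        LABEL Clonezilla Live
          MENU LABEL Clonezilla Live
          KERNEL utilities/clonezilla/vmlinuz
          APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/clonezilla/filesystem.squashfs

    Here filesystem.squashfs would be copied from the Clonezilla Live ISO's /live directory into the web server's /clonezilla/ directory.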

  • Install GRUB bootloader without installing Linux

    - by Kavitesh Singh
    I have Windows 7 installed on the system and I want to create a separate WinPE bootable partition which the system can fall back to when things go wrong. Now, Win7 does give this option: I could edit the BCD store to make changes to the Win7 boot menu, or use EasyBCD. I don't want to use those options, as I need to customize hiding/unhiding of partitions at boot time, etc. I searched and found that GRUB might be the tool I am looking for, and I want to use the GRUB loader without any version of Linux installed on the system. Can someone guide me on how to install GRUB into the hard disk's MBR and configure the boot menu? Everything I found on the internet was commands that look for GRUB on the hard disk via an existing Linux installation and then try to repair it; in my case there is no Linux at all. I have an Ubuntu 9.10 bootable CD and an OpenSUSE 11.2 live CD and installation disc. Can I use them to install GRUB on my system?
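
    A minimal sketch of doing this from the Ubuntu 9.10 live CD (the partition /dev/sda3 is an assumption, and grub-install may first need the grub-pc package installed in the live session): GRUB goes into the MBR while its files land on a partition it can read at boot.

        # boot the Ubuntu live CD, open a terminal, then mount the partition
        # that will hold GRUB's files
        sudo mount /dev/sda3 /mnt

        # install GRUB to the MBR of the first disk, keeping its files on /mnt
        sudo grub-install --root-directory=/mnt /dev/sda

        # menu entries (chainloading Windows 7, the WinPE partition, etc.)
        # then go in /mnt/boot/grub/grub.cfg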

  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines, and we used the "Name and Approve" option to name the machines as we added them. If a machine was pre-existing, it would just use the existing computer name from AD. Then, in our unattend.xml file, we had ComputerName=%MACHINENAME%. This picked up the name we gave the machine during approval and set the computer name accordingly. We are now implementing MDT to manage our images and drivers, but upon testing we noticed it would assign random computer names. I went into the unattend.xml for the deploy task sequence and added that value under Specialize > amd64_Microsoft-Windows-Shell-Setup_neutral: ComputerName=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name? Some additional info - the error message during the imaging process is:

        Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup].

    setuperr.log:

        2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
        2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b
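
    One hedged pointer: %MACHINENAME% is a WDS-side variable, while MDT's task sequence variable for the machine name is OSDComputerName, which MDT substitutes into the unattend file itself. A sketch of the Specialize component (attributes shown are the standard ones; whether this alone reconciles the WDS pre-staged name with MDT is the assumption):

        <component name="Microsoft-Windows-Shell-Setup"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral"
                   versionScope="nonSxS">
          <ComputerName>%OSDComputerName%</ComputerName>
        </component>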

  • CentOS Postfix send email problem

    - by Catalin
    I have a big problem with Postfix. I can receive mail in Webmin and Outlook, but I can't send (only locally, user to user). Dovecot is working just fine. Sendmail is disabled. Please help me.

        postfix -n
        postfix: invalid option -- n
        postfix: fatal: usage: postfix [-c config_dir] [-Dv] command

        [root@xprivatecams usr]# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        milter_default_action = accept
        milter_protocol = 2
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = xprivatecams.com
        mynetworks = 94.177.41.0/24, 127.0.0.0/8
        newaliases_path = /usr/bin/newaliases.postfix
        non_smtpd_milters = inet:localhost:20207
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_use_tls = yes
        smtpd_milters = inet:localhost:20207
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550

    From the mail log:

        Jan 18 00:46:17 xprivatecams postfix/postfix-script: starting the Postfix mail system
        Jan 18 00:46:17 xprivatecams postfix/master[15545]: daemon started -- version 2.3.3, configuration /etc/postfix
        Jan 18 00:48:00 xprivatecams postfix/pickup[15546]: EDE7EA8001B: uid=0 from=<[email protected]>
        Jan 18 00:48:00 xprivatecams postfix/cleanup[15817]: EDE7EA8001B: message-id=<[email protected]>
        Jan 18 00:48:00 xprivatecams opendkim[2776]: EDE7EA8001B: DKIM-Signature header added
        Jan 18 00:48:01 xprivatecams postfix/qmgr[15547]: EDE7EA8001B: from=<[email protected]>, size=615, nrcpt=1 (queue active)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: connect to mail.flabell.com[72.47.224.75]: Connection timed out (port 25)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: EDE7EA8001B: to=<[email protected]>, relay=none, delay=30, delays=0.08/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.flabell.com[72.47.224.75]: Connection timed out)

        telnet 94.177.41.70 25
        Trying 94.177.41.70...
        Connected to xprivatecams.com (94.177.41.70).
        Escape character is '^]'.
        220 xprivatecams.com ESMTP Postfix
        ehlo me
        250-xprivatecams.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
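
    A hedged reading of the log: "connect to mail.flabell.com[...]: Connection timed out (port 25)" usually means outbound port 25 is blocked by the hosting provider, not that Postfix is misconfigured. Two quick checks, and a fallback through a relay (the relay host name is an assumption):

        # does any outbound port-25 connection work from this host?
        telnet mail.flabell.com 25
        telnet gmail-smtp-in.l.google.com 25

        # if not, route outgoing mail through the provider's relay instead
        postconf -e 'relayhost = [smtp.your-isp.example]:587'
        postfix reload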

  • Cannot delete old NFS directory: Device or resource busy

    - by Jakobud
    On server1, we had an NFS share from server2 mounted at /nfs/server2/share. Recently, we took server2 down to install a new OS on it, and now we can't get NFS set up the way it was. When I do ls -l /nfs, I get:

        drwxr-xr-x 2 root root 0 2010-03-15 09:59 server2

    Notice how the directory size is 0 instead of 4096 like usual? Anyway, I go into /nfs/server2 expecting to see a share directory, but it's empty, so I cannot mount my share at /nfs/server2/share. When I try to create the /nfs/server2/share directory, I get:

        mkdir: cannot create directory `share': No such file or directory

    I think this is because the system doesn't really believe the /nfs/server2 directory exists. Even the -p option of mkdir doesn't work. Next I tried to remove /nfs/server2 so I could just recreate it, but rm -r /nfs/server2 gives:

        rm: cannot remove directory `/nfs/server2': Device or resource busy

    So now I'm at a loss. I need to mount this NFS share at exactly the same place on server1 (/nfs/server2/share), because other software on server1 depends on it. But if I can't create that share directory and I can't remove its parent, what do I do? Also, just for testing, I mounted the share at /nfs/testing/share and it mounted just fine. But like I said, I need to mount it back in the same location.
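
    A hedged sequence for clearing what looks like a stale mount left over from the old server2 (the symptoms - size 0, "Device or resource busy" - fit a mount whose server vanished):

        # see whether the kernel still thinks something is mounted there
        mount | grep /nfs/server2
        grep server2 /proc/mounts

        # lazy-unmount detaches the stale mount even though the server is gone
        umount -l /nfs/server2/share
        umount -l /nfs/server2

        # the directory should now behave normally again
        mkdir -p /nfs/server2/share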

  • How do I use a Zyxel P660 router as just modem so that I can connect a WRT54GL router in cascade?

    - by Kenji Kina
    I have a Zyxel P660HW-T1 v2 router (which has a DSL port) and a WRT54GL router (which does not), and the exact same situation as in this thread. (Update: the connection between the two devices is the important part; I have been able to set the Zyxel router to act as a bridge by itself quite nicely, and I have accessed my internet connection directly from a PC using PPPoE without any problems. The issues arise when I try to connect the WRT54GL between the Zyxel "modem" and my PCs.) I've been trying to use my Zyxel P660 as a modem only:

    - Set the P660 to bridge mode.
    - Changed the WRT54GL's IP address to 192.168.2.1 to avoid a conflict on the network.
    - Configured the PPPoE settings as required on the WRT54GL.

    The thing is that when I connect the Zyxel modem/router to the WRT54GL's internet port, the port light doesn't turn on. I can confirm that this port has been working, so I'm not really sure what's going on between the devices. I checked several settings, such as IPs, and tried disabling DHCP on the Zyxel/Linksys and the firewall on both, and still nothing. I also tried connecting the Zyxel directly to a computer in bridge mode and dialed successfully. I have even posted a question here before, thinking that what I asked there was the only thing I needed to get things done. Unfortunately it wasn't, and the person who solved his issue didn't give enough details in his post (and is unlikely to give more, since he was an anonymous user). For one, I don't know how to do this part: "connected to the Zyxel through telnet and forced LAN port 1 to be at 100mb as well". I can't find the option that does this on the Zyxel router, either through telnet or the web admin. Can anyone help me solve this?

  • Redmine subversion won't ignore certificate error even if told

    - by Pekka
    I have set up a copy of Redmine through the Bitnami Redmine Stack and am having trouble accessing a remote SVN repository over https. The trouble seems to be related to the fact that I don't have a signed certificate, and the certificate provided doesn't match the host name (I am accessing the same server through a number of host names). I am new to Ruby, Mongrel, Rails and Redmine. Following the advice in this forum thread, I changed the path Redmine uses to invoke the svn client in \apps\redmine\lib\redmine\scm\adapters\subversion_adapter.rb from

        SVN_BIN = "svn"

    to

        SVN_BIN = "svn --trust-server-cert --non-interactive --config-dir c:/user/temp"

    I was hoping that the --trust-server-cert option would fix the certificate problem. However, I am still getting the following error message in mongrel.log:

        svn: OPTIONS of 'https://server.xyz:8443/svn/reponame': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://server.xyz:8443)

    Does anybody know what to do about this? Additional info:

    - I restarted the Mongrel service after each change
    - I am sure the configuration change has taken effect, because Subversion has created a full configuration directory in c:\user\temp
    - I can access the remote repository using the command-line svn client with no problem
    - The remote repository runs on a Windows box with VisualSVN
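
    A hedged alternative: --trust-server-cert only overrides the "unknown CA" failure, not a hostname mismatch, which matches the error above. Accepting the certificate permanently into the config dir Redmine now uses sidesteps both failures; run this once, interactively, as the same user Redmine runs as (URL and config dir are from the question):

        svn ls https://server.xyz:8443/svn/reponame --config-dir c:/user/temp
        # answer 'p' (accept permanently) at the certificate prompt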

  • Ubuntu software stack to mimic Active Directory auth

    - by WickedGrey
    I'm going to have an Ubuntu 11.10 box in a customer's data center running a custom webapp. The customer will not have ssh access to the box, but will need authentication and authorization to access the webapp. The customer needs to have the option of either pointing the webapp at something that we've installed locally on the machine, or to use an Active Directory server that they have. I plan on using a standard "users belong to groups; groups have sets of permissions; the webapp requires certain permissions to respond" auth setup. What software stack can I install locally that will allow an easy switch to and from an Active Directory server, while keeping the configuration as simple as possible (both for me and the end customer)? I would like to use as much off-the-shelf software for this as possible; I do not want to be in the business of keeping user passwords secure. I could see handling the user/group/permission relationships myself if there is not a good out-of-the-box solution (but that seems highly unlikely). I will accept answers in the form of links to "here is what you need" pages, but not "here is what Kerberos does" unless that page also tells me if it's required for my use case (essentially, I know that AD can speak Kerberos, but I can't tell if I need it to, or if I can just use LDAP, or...).

  • Setting up SQL Server 2005 to use all available memory in 32-bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along these lines, but they sometimes contradict each other or don't show how to properly verify that everything is actually working - hopefully this one can be comprehensive. I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8GB of memory installed and is almost entirely used as a database server - there are some services running on it, but the OS plus services can run within 1GB of RAM. What I've done (please tell me if I'm doing something wrong):

    - /3GB in boot.ini, to increase the amount of user-space memory available (info)
    - /PAE in boot.ini (Windows claimed to be doing PAE even without this switch, somehow)
    - Enabled AWE in SQL Server
    - Enabled the Lock Pages in Memory option for the SYSTEM and Local Service users (info); SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info)
    - Set min/max server memory to 1024MB/5112MB

    After doing all the above, we definitely saw a level of improvement, but I'd now like to verify my settings and make sure I'm making full use of the memory available. (There appeared to be a slowdown when max was 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following counters in PerfMon:

        Process(sqlservr): Working Set : 76386304
        SQL Server (Memory Manager): Total Server Memory : 3538944

    (I saw a document noting that the latter isn't the full memory used by SQL Server, so I'm not sure whether to trust it.) So, my questions:

    - Should my max be around 7GB? If not, what should it be?
    - Why is Total Server Memory at 3.5GB when SQL Server has been allocated 5GB?
    - What is the proper metric for the amount of memory allocated to SQL Server? The working set seems a bit large...
    - Am I possibly missing any steps in the setup?
    - Any recommended resources on starting to tune the caching system?

    Thanks
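
    For the verification step, a hedged sketch: with AWE enabled, the buffer pool does not show up in the process working set, so the "Total Server Memory (KB)" counter is the more meaningful figure. The settings and the counter can be checked from the command line (local trusted connection assumed):

        sqlcmd -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -E -Q "EXEC sp_configure 'awe enabled'; EXEC sp_configure 'max server memory (MB)';"
        sqlcmd -E -Q "SELECT cntr_value/1024 AS total_server_memory_mb FROM sys.dm_os_performance_counters WHERE counter_name = 'Total Server Memory (KB)'"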

  • Migrating a virtual domain controller for DR exercise

    - by Dips
    Hello gurus, I have a question. I have a virtual domain controller that I have to migrate to another virtual server in a different location. It is for test purposes - to test out a DR scenario - and the test will be deemed successful if the users that authenticate against the production DC can do so against the backup DC. I don't know much about this and thus don't know why it was assigned to me, so any assistance will be greatly appreciated. What I had in mind was:

    1) Taking a snapshot of the production server and restoring it on the other server. But I was told that this is not the suggested way of doing it, without being told why. Is that right? If a snapshot is to be taken, what is the best way to do it? Any ideas on where I can find documentation for this?
    2) Building the test DC from the ground up, matching it to the specs of the production DC, and then performing the DR test. Is this a better option? What would be needed to perform such an activity? Where can I find documentation on that?

    I apologise for the length of this query. As I said, I am quite a novice and hope to get a better resolution. Any assistance will be greatly appreciated. Regards,

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo.

    Background: I'm currently trying to transition from chef-solo use to Chef Server while using the cookbooks, data bags and other Chef info from our remote git repo. I've pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc.

    Details: When trying to load our .json data bags by doing "knife data bag add from file FOLDER FILE", it looks like it worked, but when I do "knife data bag list" it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if anything. This is the error I get:

        knife data bag from file local_settings test.json -e nano
        ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, as a tested, working JSON file. I've also sometimes gotten an error saying "could not open data bag 'local_settings'". I would like to keep the data bag path within the appropriate git repo folder, to be able to keep track of changes in a more centralized location (our git repo, as opposed to the Chef server). Any solutions, advice or pointers in the right direction are appreciated.
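
    A hedged sequence that usually works (paths taken from the error message above): create the bag on the server first, then load the item using a path relative to the directory knife is run from, since "from file" resolves the file itself rather than relying on data_bag_path.

        # from the repo root (the directory containing data_bags/)
        knife data bag create local_settings
        knife data bag from file local_settings data_bags/local_settings/test.json

        # confirm
        knife data bag list
        knife data bag show local_settings test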

  • unable to boot into safe mode even after fixing registry

    - by Anirudh Goel
    I have a Windows XP SP3 system which is infected by the Sality worm. The usual symptoms were there - Task Manager and regedit disabled - and I found I was unable to boot the system into safe mode: Sality removes the SAFEBOOT keys from the registry hive. So I downloaded the .reg file from http://support.kaspersky.com/faq/?qid=208279889 and successfully applied it to my system. But when I hit F8 during boot and select the safe mode option, the machine still restarts after loading mup.sys. I don't know what more to do to get into safe mode. The virus is still there in a dormant state; I can verify that because Task Manager and regedit were no longer disabled after I restarted in normal mode, and I could browse any site without the browser process being killed. I also ran SalityKiller from the same link above and it healed all infected exe files. This is related to another question I have asked here, but I don't see how a common solution could solve both of those problems. Any help, folks? Thanks

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, initially sized at 25GB (too small!). After installing the OS and some applications, I ran out of disk space, so I shut down the guest and used this command to resize the VHD to 80GB:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
        UUID:                 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24954 MBytes
        Type:                 normal (base)
        Storage format:       VHD
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd

    So far so good. However, when I started the guest up again I got the dreaded:

        Fatal: No bootable medium found! system halted

    If I boot into GParted, it shows a single 80GB drive as "unallocated", and the option to scan for and attempt to repair a filesystem finds nothing. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
        UUID:                 baf0c2c4-362f-4f6c-846a-37bb1ffc027b
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24798 MBytes
        Type:                 normal (base)
        Storage format:       VDI
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi

    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.
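
    A hedged recovery idea: since the data is still on disk (24GB used) but GParted sees no partitions, the damage may be limited to the partition table, which TestDisk (included on the GParted live CD) can often rediscover and rewrite. Work on a copy, never the only remaining image; the file names below are examples.

        # make a scratch copy to experiment on
        VBoxManage clonehd "Windows 8 RC.vhd" "recovery-work.vdi" --format VDI

        # boot a live CD in a VM with recovery-work.vdi attached, then:
        testdisk /dev/sda
        #   -> Analyse -> Quick Search (then Deeper Search if needed)
        #   -> select the found NTFS partition -> Write, and reboot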

  • File is not compatible with the version of Windows you're running

    - by vaccano
    I have a really old installer (for a legacy app) that we are trying to get running on 64-bit Windows 7. Previously it has only been installed on 32-bit Windows XP. I get the following error when I try to run it:

        The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

    Contacting the software publisher is not an option (the software is ancient). Is there a way to get this to work - some sort of compatibility mode? The only thing I have heard of that will work is a virtual XP machine on the Windows 7 box. The problem is that this software is part of a whole software set; I would have to put all of the pieces on the virtual XP machine or none at all. Before I go down that road, I would like to know whether there really is no way to get it all running on the Windows 7 OS itself.

  • PowerPoint '10 avoid animation completion on click & advance slide or start new one

    - by ScottS
    Scenario:

    - I have PowerPoint 2010.
    - On the Transitions tab, the "Advance Slide On Mouse Click" check box is checked.
    - I have a long, slow, timed, non-repeating animation running in the background of the slide.
    - I click to advance the slide before the animation is finished, but instead of advancing the slide, the animation jumps to its completed state, forcing a second click to actually advance the slide.

    Additionally, if I have other animations on the slide that are initiated by a click, the long animation also advances to its finished state before the new animation starts.

    Desired behavior: on click, I want the slide to advance, or the next on-click animation to start, whether the long animation is done or not, and without the long animation first "completing" itself. In the case of another animation, I simply want the long animation to continue while the new animation also plays.

    Ultimate question: is there a way to either:

    1. Set an option somewhere so that the animation does not complete on click, and simply keeps animating when a new animation starts or the slide advances (as the case may be)?
    2. Create a VBA script that will produce the desired behavior for the long animation?
