Search Results

Search found 5444 results on 218 pages for 'svn verify'.


  • SQL Server Backup problem when browsing to the directory

    - by Richard West
    I want to allow a group (e.g. 'BackupManagers') that can only perform backup and restore operations on certain databases. When creating the BackupManagers user account I checked db_backupoperator. When the user logs in to create a backup and selects Tasks - Backup, clicks Add in the destination block, and then clicks the "..." button to browse, they get an error message similar to the following:

      TITLE: Locate Database Files - MYSERVER\SQL2005
      E:\MSSQL\Backup
      Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists. If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.

    I have confirmed that the user has permissions to the folder. I have even created a share to this folder and had them access it through Explorer; they are able to create and delete files within the folder. I have found that if they type in the path to the file instead of using the "..." button to browse the directory tree, they can create a backup file fine. Why is the browse button not working as expected? Thanks!
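
    For reference, a hedged sketch of the kind of grant described above, run through sqlcmd; the database name and the domain/group names are placeholders and assume a Windows group login already exists on the instance:

      sqlcmd -S MYSERVER\SQL2005 -E -Q "USE SomeDatabase; CREATE USER [MYDOMAIN\BackupManagers] FOR LOGIN [MYDOMAIN\BackupManagers]; EXEC sp_addrolemember N'db_backupoperator', N'MYDOMAIN\BackupManagers';"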

    Read the article

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

      #!/bin/bash
      echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server which executes the same script, I get the following error:

      nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
      nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
      Error with certificate at depth: 0
      issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
      subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
      err 20: unable to get local issuer certificate
      Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
      . . . message not sent.

    I must be missing some configuration, any ideas?
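
    Two hedged checks that may help narrow this down: "err 20" usually means no usable CA bundle was found, so reproducing the failure with a stripped-down environment (similar to what the CI daemon sees) and verifying the chain directly against a known CA file can show whether this is an environment problem or a certificate problem. The CA bundle path is an assumption; adjust it to your system:

      # run the script with an empty environment, roughly as the CI daemon would
      env -i /bin/bash ./build-worked

      # verify Gmail's certificate chain against an explicit CA bundle
      openssl s_client -starttls smtp -connect smtp.gmail.com:587 -CAfile /etc/ssl/certs/ca-certificates.crt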

    Read the article

  • Installing/enabling PHP PECL Intl extension on CentOS 5

    - by Marijn Huizendveld
    Original question: I'm having trouble installing the PHP PECL Intl extension on my CentOS 5 machine. After installing both icu and libicu with the following commands:

      $ yum install icu
      $ yum install libicu

    I tried to install the Intl extension like so:

      $ /usr/bin/pecl install intl

    I chose to search the default location for the ICU libraries and header files. It ends up failing like this:

      checking whether to enable internationalization support... yes, shared
      checking for icu-config... no
      checking for location of ICU headers and libraries... not found
      configure: error: Unable to detect ICU prefix or no failed. Please verify ICU install prefix and make sure icu-config works.
      ERROR: `/tmp/pear/temp/intl/configure --with-icu-dir=DEFAULT' failed

    Update: After successfully installing the development version of icu as suggested by RusAlex (thanks RusAlex), like so:

      $ yum install libicu-devel

    I ran into a new problem, which I had also encountered locally. The following command:

      $ /usr/bin/pecl install intl

    now produces this error:

      /private/tmp/pear/temp/intl/collator/collator_class.c:92: error: duplicate 'static'
      /private/tmp/pear/temp/intl/collator/collator_class.c:96: error: duplicate 'static'
      /private/tmp/pear/temp/intl/collator/collator_class.c:101: error: duplicate 'static'
      /private/tmp/pear/temp/intl/collator/collator_class.c:107: error: duplicate 'static'
      make: *** [collator/collator_class.lo] Error 1
      ERROR: `make' failed

    It appears to have something to do with PHP 5.3 already bundling Intl. But how can I enable this extension? If I look in my PHP info I cannot find any reference to it...
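
    As a hedged check before fighting the PECL build (the commands are standard, but whether intl is compiled in depends on how this particular PHP was built), you can ask PHP itself whether the extension is already present:

      # list loaded extensions and look for intl
      php -m | grep -i intl

      # show any ICU-related build information
      php -i | grep -i icu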

    Read the article

  • Need help configuring the 'default' site file on apache2

    - by turk182
    Hi all! I'm trying to use Xen on Ubuntu 8.04 (Hardy Heron), because it's a project assigned to me at my new job. I have already installed Xen and I'm running the virtual machines. According to the guide I was given, I have to configure the file 'default' in the apache2 directory, like this:

      vi /etc/apache2/sites-available/default

    Inside this file I have to write the following:

      NameVirtualHost *
      <VirtualHost *>
          ServerName www.ejemplo.com
          ServerAlias ejemplo.com
          DocumentRoot /var/www/
          ProxyRequests Off
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          ProxyPass /balancer-manager !
          ProxyPass / balancer://mycluster/ stickysession=BALANCEID nofailover=On
          ProxyPassReverse / http://http1.ejemplo.com/
          ProxyPassReverse / http://http2.ejemplo.com/
          <Proxy balancer://mycluster>
              BalancerMember http://10.10.2.101:8080 loadfactor=1
              BalancerMember http://10.10.2.102:8080 loadfactor=2
              ProxySet lbmethod=byrequests
          </Proxy>
          <Location /balancer-manager>
              SetHandler balancer-manager
              Order deny,allow
              Allow from all
          </Location>
      </VirtualHost>

    In the BalancerMember section I'm using the IPs of the virtual machines: virtual machine 1 has IP 10.10.2.101 and virtual machine 2 has IP 10.10.2.102. Then I have to install apache2 on each virtual machine and restart apache2. The question is: what do I have to do to verify that all of this works? Supposedly I open a browser, go to www.ejemplo.com, and it should show something. That's the reason I'm asking for help; I don't know what to do, I've been looking on the web and I can't find anything related to this... I'll appreciate your help. Thanks!
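
    A hedged way to test a setup like the one above (apache2ctl and curl are assumed to be installed; 192.0.2.10 stands in for the proxy host's real IP, since ejemplo.com is only an example domain and has to be resolved locally for testing):

      # check the syntax of the virtual host configuration
      sudo apache2ctl configtest

      # point the test name at the proxy host for this machine only
      echo "192.0.2.10 www.ejemplo.com ejemplo.com" | sudo tee -a /etc/hosts

      # request the site and the balancer-manager page through the proxy
      curl -I http://www.ejemplo.com/
      curl http://www.ejemplo.com/balancer-manager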

    Read the article

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

      diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

      #DiskShadow script file
      set verbose on
      #delete shadows all
      set context persistent
      writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
      set metadata C:\Backup_Scripts\shadowmetadata.cab
      begin backup
      add volume C: alias SH1
      create
      expose %SH1% P:
      exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
      end backup
      delete shadows exposed P:
      exit
      #End of script

    And this is exchangeserverbackupscript1.cmd:

      robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group" but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?

    Read the article

  • .NET Framework 1.1 on IIS 7

    - by Zack Peterson
    I have inherited a .NET Framework 1.1 web site that I must host with IIS 7 on Windows Server 2008. I'm having some trouble.

    1. Installation. I installed .NET Framework 1.1 following these instructions. The installation automatically created a new application pool, "ASP.NET 1.1". I use that.

    2. Trouble. When I launch the web site I see web.config runtime errors: "The tag contains an invalid value for the 'culture' attribute." I fix that one and then see: "Child nodes are not allowed." I don't want to keep playing this whack-a-mole game. Something must be wrong.

    3. Am I sure this is .NET 1.1? I examine the automatically created application pool and see that it's 1.1 in both Advanced Settings and Basic Settings. This doesn't seem right: while 1.1 is set, it's not an option in the Advanced drop-down selectors. And why in the Basic box is it just "v1.1" and not ".NET Framework v1.1.4322"? That would be more consistent.

    4. I cannot create other .NET 1.1 app pools. I cannot select .NET Framework 1.1 for other application pools; it's not an option in the drop-down selectors. What's up with that?

    What now? Why isn't v1.1 an option for all app pools? How can I verify my application is in fact using .NET Framework 1.1? Why might I get these runtime errors?
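
    For the "how can I verify" part, a hedged check from an elevated command prompt (the pool name is taken from the question; on IIS 7 appcmd lives under %windir%\system32\inetsrv):

      rem show the .NET runtime version configured for the pool
      %windir%\system32\inetsrv\appcmd list apppool "ASP.NET 1.1" /text:managedRuntimeVersion

      rem list which applications are assigned to that pool
      %windir%\system32\inetsrv\appcmd list app /apppool.name:"ASP.NET 1.1"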

    Read the article

  • Adding add-ins to Excel - strange messages

    - by Jacob
    I am using Excel 2010 and 2013. I would like to add an Excel add-in from the page http://xlloop.sourceforge.net/ . There is a file named xlloop-0.3.2 with the extension Microsoft Excel XLL Add-In. I added this file from the menu File - Options - Add-Ins - in the Manage combo box I chose Excel Add-Ins - Go... - Browse, and I chose my file. I see the following message: "C:\...\xlloop-0.3.2.xll" is not a valid add-in. So I made a second attempt: I went to the menu File - Open and chose my file. I see the message: The file you are trying to open, "xlloop-0.3.2.xll", is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now? After I clicked Yes I see a lot of strange characters (something like Chinese :)). My last attempt was to double-click the file. I see: The file format and extension of "xlloop-0.3.2.xll" don't match. The file could be corrupted or unsafe. Unless you trust its source, don't open it. Do you want to open it anyway? After clicking Yes I see something like in the second attempt. I am really confused, because some of my friends have the same version of Excel and they don't get these messages. Do you have any idea where the problem is in my Excel? I really need this add-in to work with Java. I will be very grateful for your help! Thanks in advance!

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it, it complains that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me though that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no go:

      error: Failed dependencies:
        SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
        alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
        libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
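
    A rough sketch of the build-from-svn route mentioned above; the checkout URL and configure flags are the commonly used ones and are assumptions here, and x264 (with its headers) has to be installed before --enable-libx264 will succeed:

      # fetch and build ffmpeg from svn with x264 support
      svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
      cd ffmpeg
      ./configure --enable-gpl --enable-libx264
      make && make install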

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN Client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

      Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
      Copyright (c) Microsoft Corporation. All rights reserved.

      Loading Dump File [D:\Temp\020810-23431-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available
      Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Windows 7 Kernel Version 7600 MP (8 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS
      Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
      Machine Name:
      Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
      Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
      System Uptime: 0 days 7:52:06.120
      Loading Kernel Symbols
      Loading User Symbols
      Loading unloaded module list

      Bugcheck Analysis
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {0, 2, 0, fffff800028d50b6}
      Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for vfilter.sys
      *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
      Probably caused by : vfilter.sys ( vfilter+29a6 )
      Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Super User appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered if anyone here has seen this problem and possibly found a solution? I've decided to run the VPN client under Windows 7 compatibility mode for the moment (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected: I've noticed the crashes when browsing with Google Chrome and Internet Explorer, usually when I open a new tab. If it carries on I think I'll be forced to shell out the 120 EUR for the NCP client instead.

    Read the article

  • Setting "Run WWW service in IIS 5.0 isolation mode" does not persist in IIS 6

    - by Saul Dolgin
    Our IIS server was recently patched with the latest Microsoft security updates, and since then I am unable to enable the "Run WWW service in IIS 5.0 isolation mode" setting. This setting was enabled prior to patching and somehow changed during the updates. I have tried both the IIS Manager console and the adsutil.vbs approach to change it. Either way, after resetting IIS for the change to take effect, when I go to verify that the isolation mode setting is enabled (true), I find that it reverts back to being disabled (false). Now... the patches have already been rolled back, however the setting still does not persist when I enable it. While I research the patches that were applied to see if there is a known issue (or perhaps a change in this setting's behavior), I was hoping someone else might have come across the same problem. Any help towards a workaround would be greatly appreciated!

      >cscript adsutil.vbs set W3SVC/IIs5IsolationModeEnabled TRUE
      IIs5IsolationModeEnabled : (BOOLEAN) True

      >iisreset
      Attempting stop...
      Internet services successfully stopped
      Attempting start...
      Internet services successfully restarted

      >cscript adsutil.vbs get W3SVC/IIs5IsolationModeEnabled
      IIs5IsolationModeEnabled : (BOOLEAN) False

    Read the article

  • Default Browser hangs (IE, Chrome)

    - by Craig Hinrichs
    IE was my default browser about three months ago when I started experiencing this issue. Intermittent hangs would occur when I opened a new main page or new tab to a site I knew was up. What I mean by a hang: the browser would open, say "Waiting for site ", and do nothing more. If I closed the window and reopened it, it would immediately connect. Over time I would have to close and reopen the window to get to the page. This would happen on any page, including Google. I finally got sick of it and started using Chrome, and I will never go back. I recently upgraded my antivirus and now I am experiencing the same issue with Chrome. I use AVG for my antivirus. Empirically it seems that if I don't make Chrome my default browser, I don't experience the issue; I tested this theory for over two hours yesterday. Possible causes I have found but not yet confirmed:

    1. MTU settings are not correct.
    2. I am infected but my antivirus has not caught it (unlikely but possible).
    3. ??

    I would like to think this is related to my antivirus, but I am unsure how to verify that, and I don't like the idea of killing my antivirus if #2 is a possibility. I am looking for tips on how I can troubleshoot possible issues. I am on Windows XP SP3. Thanks in advance.

    Read the article

  • Best photo management software?

    - by Niels Basjes
    Hi, what I would like is a single piece of software (or a smart combination of tools) that allows me to manage my photos in a better way than what I've found so far.

    1. Tags: Primarily I need a way of tagging the images, so I can manually tag photos the same way we tag questions here at SO/SF/SU. I want this software to place a lot of the tags automagically (obvious things like date and resolution).

    2. Face recognition: What I would really like is a feature that recognizes faces in images and places tags with the name of the person. So far I've only heard of one online photo system that can do that (Picasa) and not yet of any offline tool.

    3. Version database: I must have some way of having a central GIT/SVN/... that contains all images. I had a hard drive corruption a few years ago and it took me a long time to figure out which images had been damaged. I always want to be able to go back to what the camera produced.

    4. Website: I want to be able to generate a website (a few 'tag'-specific websites) based on the actual content.

    5. Easy bulk uploading: Many photo tools have a one-on-one uploading option. I prefer simply 'throwing' my images on a file server under Linux (Samba) and letting the system automagically integrate, tag, recognize, etc. all images.

    OK, I know this is a bit much. Perhaps you guys have some suggestions about existing tools that make this possible, or even a complete system that does this. EDIT: To clarify on the OS: I prefer Linux for any 'server' task and Windows XP for any 'desktop' task. Thanks for all your input. Niels Basjes

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWRT or DD-WRT. I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which they would recommend and why. To give a few reference points, I'm interested in:

    - reliability – network stability, both on cable and wireless and on the USB drive
    - performance – network speed (very important), and also USB drive speed
    - configurability – the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN server directly
    - ease of use – the ease of installation and configuration of the router
    - support/docs – how much info there is if you stumble upon a problem and have to find some documentation, or whether there's any free support (but that's a longshot)

    Of course I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued, with the "Last Error" reported in Queue Viewer as:

      451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.

    I've found that some people have resolved this by creating new receive connectors which are scoped specifically to apply to connections from the remote hub/s, but I have had no luck doing this. Specifically, I created new receive connectors on both servers with the following settings:

    - Remote IP = IP/s of the remote server
    - Authentication = "Transport Layer Security (TLS)" and "Exchange Server authentication"
    - Permission Groups = "Exchange servers" and "Legacy Exchange Servers"

    This made no difference; I see the same error message. What am I missing?

    Update: We noticed that the Application log had this error message from MSExchangeTransportService:

      Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key.

    It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message.

    Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07, and I get the same error.
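
    For reference, a hedged Exchange Management Shell sketch of the scoped receive connector described above; the connector name, bindings and remote IP range are placeholders and would need to match the actual hub transport servers in each site:

      New-ReceiveConnector -Name "From exch07" -Server exch10 -Usage Custom -Bindings "0.0.0.0:25" -RemoteIPRanges 10.0.1.7 -AuthMechanism Tls,ExchangeServer -PermissionGroups ExchangeServers,ExchangeLegacyServers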

    Read the article

  • CD-ROM does not appear on desktop, Mac OS X 10.5.7

    - by Cheeso
    When I pop a CD-ROM into the drive of my MacBook Pro, it spins up, I hear it, but no icon appears on the desktop. (I think it's 10.5.7; I'm actually not sure how to verify this on a Mac, but I think I saw 10.5.7 flash by somewhere.) In the Finder preferences, under "Show these items on the Desktop", hard disks, external disks, and CDs, DVDs and iPods are all checked. I do see the internal HD on the desktop. In Disk Utility I can see the CD/DVD hardware; it says "MATSHITA DVD-R UJ-857E...". From Disk Utility I can eject the drive. But in Finder there is never a CD/DVD listed under "Devices". When I insert a disc, nothing happens; I cannot see it. I also cannot boot from bootable CD-ROMs by holding C down. Suggestions? I am not very experienced with Macs; I have used Windows for years.

    EDIT: Two updates. I saw this article on support.apple.com and modified hostconfig appropriately. It did not have the AUTODISKMOUNT entry, so I added one and rebooted. Same behavior: it does not see the CD-ROM in Finder and does not mount it on the desktop. I then put an old manufactured CD-ROM into the drive, and voila! It showed up on the desktop. The CD that does not appear is a GNOME Partition Editor live CD, which I guess is based on Debian. That CD boots on other (non-Mac) PCs. I want to use it to adjust the Boot Camp partition. Suggestions?
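
    A couple of hedged Terminal checks (these commands ship with Mac OS X) for the version question and for whether the system sees the disc at all:

      # show the exact OS X version
      sw_vers -productVersion

      # report the optical drive's status and any media it detects
      drutil status

      # list all disks and volumes the system currently knows about
      diskutil list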

    Read the article

  • Millions of files in PHP's tmp folder - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. Neither did ls 'sess_0*' | xargs rm. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

      [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
      [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

      $ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/md0              457G  126G  308G  29% /
      tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
      udev                   10M  664K  9.4M   7% /dev
      tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

      kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to think of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

      $ sudo ls sess_a* | xargs rm -f
      bash: /usr/bin/sudo: Argument list too long

      $ find . -exec rm {} \;
      rm: cannot remove directory '.'
      find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and FastCGI.
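
    As a hedged alternative to the attempts above, find can match and unlink the session files itself, which avoids both the shell's argument-list limit and the per-file rm fork that ran out of memory; run it from inside the tmp directory (assumes GNU findutils, which Lenny ships):

      # delete only regular files named like PHP sessions, without spawning rm per file
      find . -maxdepth 1 -type f -name 'sess_*' -delete

      # on a find without -delete, batch the removals instead
      find . -maxdepth 1 -type f -name 'sess_*' -exec rm -f {} +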

    Read the article

  • Active Directory Child Domain Replication Problems

    - by MikeR
    Hi, I've recently inherited an Active Directory (all DCs Windows 2003) which has been configured with several child domains that are used as test environments for our CRM software. Two of these child domains have been used for testing with dates in the future (2015), throwing them well outside of the Kerberos tolerance for time, and they're flooding my event logs with replication errors such as the following:

      Description: The attempt to establish a replication link for the following writable directory partition failed.
      Directory partition: CN=Schema,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
      Source domain controller: CN=NTDS Settings,CN=TESTDC001,CN=Servers,CN=SiteName,CN=Sites,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
      Source domain controller address: 38e95b2a-35af-4174-84ba-9ab039528cce._msdcs.ad.xxxxxxx.com
      Intersite transport (if any):
      This domain controller will be unable to replicate with the source domain controller until this problem is corrected.
      User Action: Verify if the source domain controller is accessible or network connectivity is available.
      Additional Data - Error value: 5 Access is denied.

    I'd also like to upgrade to Windows 2008 at some point, but I wouldn't want to attempt any schema updates while I'm not 100% confident in the replication. I'm guessing my only real solution will be to get rid of these child domains. The child domains are operating as stand-alone domains; the DCs are up and running and authenticating test users fine. I'm guessing the best solution would be to delete the domains (although I'd happily be told otherwise). The clock forwarding appears to have been happening for several years, so I'm assuming I can't just put the clock right (I'm guessing the window for this would be 180 days, the same as the tombstone lifetime). With the replication errors, would I be able to dcpromo the child domain's DC, select it as the last domain controller in the domain, and have the child domain deleted? Or would I be better off treating the domain as an orphaned domain and using Microsoft's instructions to clean up as such? Any advice would be much appreciated.

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install from prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

      # Andy's super awesome VM kickstart file
      install
      url --url=http://mirrors.kernel.org/centos/6/os/x86_64
      lang en_US.UTF-8
      keyboard us
      text
      %include /tmp/network.ks
      rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
      firewall --service=ssh
      authconfig --enableshadow --passalgo=sha512
      selinux --disabled
      timezone America/Los_Angeles
      bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
      # The following is the partition information you requested
      # Note that any partitions you deleted are not expressed
      # here so unless you clear all partitions first, this is
      # not guaranteed to work
      zerombr
      clearpart --all --drives=vda --initlabel
      part /boot --fstype=ext4 --size=500
      part pv.253002 --grow --size=1
      volgroup vg1 --pesize=4096 pv.253002
      logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
      logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
      repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
      repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
      repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
      repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

      %packages
      @core
      @server-policy
      puppet
      facter
      %end

      %pre --erroronfail
      #!/bin/bash
      for x in `cat /proc/cmdline`; do
        case $x in
          SERVERNAME*)
            eval $x
            echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
            ;;
        esac;
      done
      %end

      %post
      puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
      %end

      reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

      virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify that the contents of /tmp/network.ks are:

      network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?

    Read the article

  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All the following occurs on the client. When I disable SSL by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd, and there is a long timeout before it asks for any password:

      Failed to issue the StartTLS instruction: Connect error
      Connection to LDAP server failed for the 1 try!
      ..... long delay .....
      New SMB password:
      Retype new SMB password:
      Failed to issue the StartTLS instruction: Connect error
      Connection to LDAP server failed for the 1 try!
      smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
      Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf and /etc/pam_ldap.conf, it started working. The relevant TLS sections in all those files are:

      ssl start_tls
      tls_checkpeer no
      tls_cacertfile /path/to/ca-root.pem
      TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continued giving the error. I tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package):

      verify="optional"
      cafile="/path/to/ca-root.pem"

    But as I see, smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities. My question is: how can I set which SSL CA certificate is used by the smbpasswd program?

    Read the article

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive, using DriveImage XML, that is attached to my computer via a USB adapter. I'm running Win7 Ultimate 64-bit. DriveImage XML is returning:

      Could not initialize Windows Volume Shadow Service (VSS).
      ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start.
      ERROR TIMEOUT
      Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime web page turned this up:

      Please verify in Settings - Control Panel - Administrative Tools - Services that the following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy. Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be started and stopped without issue. Using regsvr32 to register "oleaut32.dll" succeeds, but "oleaut.dll" returns:

      The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
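
    Two hedged checks that can help localize a VSS failure like this, from an elevated command prompt (both tools ship with Windows 7):

      rem list the registered VSS writers and their current state
      vssadmin list writers

      rem list the installed shadow copy providers
      vssadmin list providers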

    Read the article

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit in with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go:

    - Uninstalled TortoiseSVN and clicked to restart the computer. My machine had been up for around 28 days.
    - On reboot my mouse and keyboard failed to work anymore; I couldn't log in.
    - Tried every USB port I have on my Dell 390 and the ports on my Dell 19" monitors; nothing worked. They had power, but Windows would not respond when I manipulated the keyboard/mouse.
    - Rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS.
    - Keyboard and mouse work fine on other computers when using USB.
    - Found adapters to convert the keyboard and mouse from USB to PS/2 ports; that works fine. I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard.

    It appears to be a Windows 7 x64 issue. Other things I have tried:

    - Multiple other mice and keyboards, an iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected.
    - Uninstalled and reinstalled all USB drivers. The drivers uninstall and reinstall fine and report no errors in Control Panel.
    - In Power Management I disallowed Windows from turning off USB ports to save power.
    - Installed the latest nVidia drivers for my graphics card; no change.

    Anyplace else I can look/try? Thanks!

    Read the article

  • Is wiper.sh working?

    - by Aleksander Blomskøld
    I'm setting up a server running Ubuntu Precise, and I'm trying to verify if SSD TRIM is working. fstrim is failing:

      ~ sudo fstrim -v /
      fstrim: /: FITRIM ioctl failed: Operation not supported

    So I tried wiper.sh from hdparm:

      wiper-3.5 sudo ./wiper.sh --verbose --commit /dev/sda1
      wiper.sh: Linux SATA SSD TRIM utility, version 3.5, by Mark Lord.
      rootdev=/dev/sda1
      fsmode2: fsmode=read-write
      /: fstype=ext4
      freesize = 169502088 KB, reserved = 1695020 KB
      Preparing for online TRIM of free space on /dev/sda1 (ext4 mounted read-write at /).
      This operation could silently destroy your data. Are you sure (y/N)? y
      Creating temporary file (167807068 KB)..
      Syncing disks..
      Beginning TRIM operations..
      get_trimlist=/sbin/hdparm --fibmap WIPER_TMPFILE.11503
      /dev/sda:
      trimming 3211263 sectors from 64 ranges succeeded
      trimming 3571713 sectors from 64 ranges succeeded
      trimming 3915776 sectors from 64 ranges succeeded
      (...)
      trimming 3657913 sectors from 60 ranges succeeded
      Removing temporary file..
      Syncing disks..
      Done.

    It seems to be working, but I'm wondering if it really is. Are there any cases where wiper.sh should work when fstrim doesn't? Is there any way I can check whether the TRIMming actually succeeded (other than trusting the wiper.sh log)?
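
    One hedged check for the hardware side of this (hdparm is already present for wiper.sh; the device name is taken from the question): whether the SSD itself advertises TRIM support, which is a prerequisite for either tool to do anything useful:

      # look for "Data Set Management TRIM supported" in the drive's identify data
      sudo hdparm -I /dev/sda | grep -i trim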

    Read the article

  • AMD switchable graphics are not working OR I don't know how to make them work

    - by Deus Deceit
    I don't know if this is the right place for this question, but I'll give it a shot. I have a Dell Inspiron 17R 5721. It's supposed to be using switchable graphics: it has an Intel HD 4000 and a Radeon HD 8730M, and I'm using Windows 8. My problem is this: I installed the drivers that Dell gives me, but I don't see the AMD graphics card anywhere (I do see it in Device Manager, but not anywhere I can select it to play a game using AMD). I installed the latest drivers from AMD and it's the same thing; I can't run a game with the AMD graphics card. I change the application preferences in the Catalyst Control Center, but even after that, games don't give me the option to select the AMD card; they list only the Intel HD 4000. Can someone tell me what I have to do to make this work?

    After looking around and messing with stuff, I think... I THINK that you don't really get the option to select a graphics card. Switchable graphics is all about switching automatically depending on the application's needs. When I uninstalled AMD's drivers (or actually screwed them up, lol), games played much worse. When I re-installed them, games went back to being good looking. So even if a game sees only the Intel HD 4000 graphics card, Windows or AMD's drivers will switch to the AMD Radeon graphics card automatically. I hope someone can verify this, because I seriously don't think you get to play Skyrim with high (or even ultra) graphics settings on the Intel HD graphics card.

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have this directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking on the web I found this: http://support.microsoft.com/kb/256614, which says that if the directory is "customized" it's treated as a system folder and thus "read only". I don't think that is the scenario in my case, but anyway it's not helping; their recommendation is more or less:

      attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas? More info: the directory is served by an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).

    Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.

    Edit 3: I have left the company where this situation happened, so I cannot fully close this question. But things get even worse: I tested the "attrib -r" answer I posted and it did not work for me, and now the developer says that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing details. If anyone has the same problem and one of the answers helps them, please comment.

    Read the article

  • Cisco Router - Add a missing MIB file

    - by Jonathan Rioux
    I have a Cisco 881W, and I would like to set up NBAR in my NetFlow Analyzer, but it says that my router is missing the MIB that allows NFA to poll the router via SNMP for NBAR information. The FAQ page of the NetFlow Analyzer website addresses my error:

      Q. I am able to issue the command "ip nbar protocol-discovery" on the router and see the results. But NFA says my router does not support NBAR. Why?
      A. Earlier versions of IOS support NBAR discovery only on the router. So you can very well execute the command "ip nbar protocol-discovery" on the router and see the results. But NBAR Protocol Discovery MIB (CISCO-NBAR-PROTOCOL-DISCOVERY-MIB) support came only in later releases. This is needed for collecting data via SNMP. Please verify whether your router's IOS supports CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.

    The missing MIB is CISCO-NBAR-PROTOCOL-DISCOVERY-MIB. I found it here:

      ftp://ftp.cisco.com/pub/mibs/v2/CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.my

    But how can I add this MIB to the router? The IOS of my router is c880data-universalk9-mz.151-3.T1.bin.
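
    MIB files are not loaded onto the router itself: the router either implements the MIB's objects in its IOS release or it does not, and the .my file is only imported into the management station. A hedged way to test for support from a Linux host with net-snmp installed (the community string and address are placeholders, and 1.3.6.1.4.1.9.9.244 is assumed here to be the CISCO-NBAR-PROTOCOL-DISCOVERY-MIB root):

      # walk the NBAR protocol discovery subtree; an empty result suggests this IOS image does not implement the MIB
      snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.4.1.9.9.244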

    Read the article
