Search Results

Search found 5053 results on 203 pages for 'guide'.

Page 156/203

  • attach / detach mssql 2008 sql server manager [SOLVED]

    - by Tillebeck
    An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was no longer visible in SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated." Rolling back to the last backup is not the option I want to use. Can anyone give feedback on whether it seems logical and reasonable to assume that a database detached from SQL Server 2008 through SQL Server Manager cannot be reattached? It was done by right-clicking the database and choosing Detach.

    Update: based on the comments below, here is the server setup. There are two dedicated servers:
    srv1: web server with Remote Desktop and SQL Server Manager
    srv2: SQL server that can be accessed through the SQL Server Manager on the web server

    Update 2: after a restart of the server, the DBA could suddenly attach the database, and I guess that after the restart it was a simple task. So all of your answers were right! It seems I can only mark one answer as correct, so I marked the first one, but all of them are correct. Without this thread we might have had to suffer while watching our database being restored from a backup :-) Thanks a lot. BR, Anders

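    For reference, reattaching a detached database normally just means pointing SQL Server back at the existing .mdf/.ldf files, for example via sqlcmd. This is only a sketch; the server, database and file names below are placeholders, not taken from the question:

        sqlcmd -S srv2 -E -Q "CREATE DATABASE MyDb ON (FILENAME = N'D:\Data\MyDb.mdf'), (FILENAME = N'D:\Data\MyDb_log.ldf') FOR ATTACH;"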

  • what's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a *volume* that is as big as or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a partition labeled "Hibernation Partition" which is a little bigger than the memory in my system. So it looks like iRST was already set up... BUT, it's a partition, not a volume. Here's what the manual says to do (from http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf):

        diskpart
        list disk
        select disk x                        (where x is the disk to use; there's only one disk in this laptop)
        create partition primary size=X000   (where X000 is the size to create)
        detail disk                          (which lists details for the disk; this is where I get hung up)
        select volume Z                      (where Z is the *partition* you created previously)
        set id=84 override
        exit

    The manual says the 'detail disk' command will list the volume number, but it doesn't: 'detail disk' only lists two volumes, Recovery and OS. If I do 'list partition', I see the 8 GB partition labeled "Hibernation Partition", so I can't continue with the 'select volume', 'set id=84 override' and 'exit' steps.

    The reason I went looking for the manual is that when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) hibernation mode and takes a while to come out of it; iRST is supposed to resume from deep sleep very quickly.

    So, what's the difference between a Volume and a Partition? Should I delete the Hibernation Partition and create a Hibernation Volume? Anyone have any ideas? (If it matters, this is on a Dell XPS 13 with BIOS A08.) Thanks! J

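    If the hibernation area only ever shows up under 'list partition', one hedged workaround is to select it as a partition rather than a volume and set the type there. This assumes an MBR disk (on GPT, set id takes a GUID instead), and the partition number below is a guess; worth re-checking against the Intel document before running it:

        diskpart
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> select partition 4    (the 8 GB "Hibernation Partition"; the number is a guess)
        DISKPART> set id=84 override
        DISKPART> exit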

  • copSSH and cygwin - Can't use windows style paths

    - by DrFredEdison
    I set up copSSH on one of my Windows servers, and within the copSSH bash shell I can't seem to use Windows-style paths to remove and copy files. If I do try, I get the following:

        $ /bin/cp -r C:/Domains/_temp/collage_push/* C:/Domains/collage/
        cygwin warning:
          MS-DOS style path detected: C:/Domains/_temp/collage_push/
          Preferred POSIX equivalent is: /cygdrive/c/Domains/_temp/collage_push/
          CYGWIN environment variable option "nodosfilewarning" turns off this warning.
          Consult the user's guide for more details about POSIX paths:
            http://cygwin.com/cygwin-ug-net/using.html#using-pathnames

    I have created a Windows environment variable CYGWIN set to nodosfilewarning. It has no effect. I added export CYGWIN=nodosfilewarning to my .bashrc, and doing echo $CYGWIN in my SSH session confirms it is indeed getting set; yet again, it has no effect. Finally, I noted that when not doing my own export, CYGWIN contains "nontsec binmode" (no quotes), so I tried export CYGWIN="nodosfilewarning nontsec binmode" in my .bashrc, and still no dice. Older versions of copSSH didn't have this issue. How can I actually override this error? I have a lot of scripts that already use Windows-style paths, and I'd rather not change them if possible.

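    Two hedged notes: the CYGWIN variable is read when the Cygwin DLL initializes, so for an SSH session it generally has to be present in the environment of the copSSH/sshd service (system-wide variable plus a service restart) rather than set in .bashrc; and as a workaround the scripts can keep their Windows paths if they are passed through cygpath, for example:

        $ cp -r "$(cygpath -u 'C:/Domains/_temp/collage_push')"/* "$(cygpath -u 'C:/Domains/collage')"/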

  • Ubuntu server PPTPD with OS X clients Problems

    - by Nakedsteve
    I'm trying to get a PPTP server running on an Ubuntu server, but I've run into some issues with it. I followed this guide on how to set up pptpd on my server and everything went smoothly, but when I try to connect with my Mac, it gives me an error (shown as a screenshot in the original post, along with my configuration). Does anyone have any idea as to what I'm doing wrong here?

    Update: here's what the pptpd.log has to say about it:

        steve@debian:~$ sudo tail /var/log/pptpd.log
        sudo: unable to resolve host debian
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Manager process started
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Maximum of 11 connections available
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Couldn't create host socket
        Sep 3 21:46:43 debian pptpd[2485]: createHostSocket: Address already in use
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection started
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Starting call (launching pppd, opening GRE)
        Sep 3 21:46:56 debian pptpd[2486]: GRE: read(fd=6,buffer=204d0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Reaping child PPP[2487]
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection finished

    My pptpd options are:

        asyncmap 0
        noauth
        crtscts
        lock
        hide-password
        modem
        debug
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        nopix

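    The "createHostSocket: Address already in use" line suggests something is already bound to the PPTP control port (TCP 1723), often a stale pptpd instance left over from testing. A hedged way to check, assuming a stock Debian/Ubuntu pptpd package:

        sudo netstat -tlnp | grep ':1723'      # which process currently owns TCP 1723
        ps aux | grep [p]ptpd                  # is more than one manager process running?
        sudo service pptpd restart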

  • Nokia 5800 v50, getting "Invalid server name" with ad-hoc wlan with Windows 7

    - by Krunal
    After upgrading to the v50 firmware on my Nokia 5800 XpressMusic phone, I am no longer able to connect to the ad-hoc WLAN network on my laptop (Windows 7). I have a dial-up internet connection and created a wireless ad-hoc network to access the internet on my phone, but I am getting "invalid server name" on the phone, and my wireless network access type in Windows 7 shows "no internet access". I am able to connect to an infrastructure WLAN network with the 5800 in my office, so the phone's WLAN works and I am able to access the internet on it. The problem is the combination of "dial-up with ICS enabled" and "ad-hoc WLAN network" in Windows 7. Can anyone suggest a solution, or provide a guide/tutorial for connecting the 5800 to an ad-hoc network with dial-up internet in Windows 7?

    System:
    Windows 7
    Dial-up internet connection with ICS enabled
    Ad-hoc wireless network with WEP
    Nokia 5800 XpressMusic phone

    Note: I have read other posts and tried their solutions, but they didn't work for me. The first solution is to bridge the networks, but how can I bridge the dial-up and WLAN networks? The second solution is to manually configure the IP and DNS addresses on the phone, but what should I use as IP and DNS when my dial-up and ad-hoc networks have automatic addresses?


  • hdparm - how to secure erase SATA SSD over USB

    - by cc0
    I have been following this guide on how to secure erase an SSD (trying to improve the performance of mine, which currently only writes at about 30 MB/s sequential). However, I'm using a USB-to-SATA docking device to avoid having the drive frozen. Apparently with this solution the SATA device is recognized as a SCSI drive, which is giving me trouble. When I use the "hdparm -I /dev/sda" command, I get the error:

        HDIO_DRIVE_CMD (identify) failed: Invalid Exchange

    After a lot of googling on the issue I can't seem to find anyone who has actually solved this problem. However, I have not tried to just go ahead and run the secure erase, so I'm not sure whether it would still work. I would love any and all input I can get on this, especially on whether a secure erase will still work with the drive being recognized as a SCSI drive. The drive itself is a Samsung 256 GB SSD (PM800); I'm sure you can understand my reluctance to go through this procedure without feeling reasonably safe that I won't mess it up beyond repair.

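    For reference, the usual ATA secure-erase sequence looks roughly like the sketch below (the device name and password are placeholders, and the commands wipe the drive). Whether it works here depends entirely on the USB-SATA bridge passing ATA commands through; if even IDENTIFY fails with "Invalid Exchange", the erase commands will most likely fail the same way, and a native SATA port is the safer route:

        sudo hdparm -I /dev/sdX                                     # check security state; must report "not frozen"
        sudo hdparm --user-master u --security-set-pass p /dev/sdX  # set a temporary user password
        sudo hdparm --user-master u --security-erase p /dev/sdX     # issue the secure erase, then re-check with -I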

  • Simple P2V help from Linux to Windows

    - by Ke
    Hi, I have two OSes installed on different drives in my PC: one Linux (CentOS 5.4) and one Windows 7. It's getting tiresome to constantly have to stop and restart the PC when I want to use the other OS. I would very much like to use Windows 7 as my host OS and access my Linux OS from within Windows. However, I'm having trouble deciphering exactly how to do this (many of the articles seem confusing and a bit overkill). From what I have seen, it's possible to use VMware Converter to convert the physical Linux installation to a virtual machine image so that I can use it in Windows. As I'm having problems understanding how this is done, I would really appreciate a step-by-step guide (for a newbie), or any simple tutorials that you can point me at.

    Some questions beforehand:
    1) My Linux installation is around 80 GB; do I need to take this into consideration? The Linux drive is around 180 GB in total. All my other drives are NTFS and not writeable in Linux (as I use them in Windows and NTFS support is dodgy in Linux), so it's probably not possible to move the image over to my NTFS drives.
    2) Can I just zip the Linux files up somehow and transfer them to Windows to create the P2V image?
    3) Is it possible to do the P2V conversion while I am logged into Windows? I can see the Linux drive in Disk Management, but Windows doesn't read Linux file systems, so I'm confused as to how to access the Linux drive if this is possible.
    4) Or will I need to do the whole P2V conversion inside Linux?

    Cheers, any help is much appreciated. Ke (a confused P2V newbie)

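    One hedged alternative to VMware Converter, assuming a live CD and enough space somewhere Linux can write to (a network share or an external drive), is to image the CentOS disk raw and then convert it to a VMDK that VMware Player/Workstation on Windows can boot. Device and file names below are placeholders:

        # booted from a live CD, with the CentOS disk on /dev/sdb and a target mounted at /mnt/target
        sudo dd if=/dev/sdb of=/mnt/target/centos54.raw bs=4M conv=sync,noerror
        qemu-img convert -f raw -O vmdk /mnt/target/centos54.raw /mnt/target/centos54.vmdk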

  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a Fibre Channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration. The HBA is detected by the system.

    That said, it's not clear how I get from this point to a point where I have a ZFS volume being exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4), but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that also imply that COMSTAR won't work? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is; I've not plugged anything in yet). Do I have to have my FC client plugged in and working before I can configure the target? Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.

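    For context, COMSTAR presents Fibre Channel targets through the qlt target-mode driver, so if qlt really won't bind to the QLA2340, the initiator-only qla2300 driver from QLogic gives COMSTAR nothing to present a LUN through. If a supported card were in place, the ZFS-to-FC-target plumbing would look roughly like this sketch (pool name and size are placeholders):

        zfs create -V 100G tank/fcvol
        svcadm enable stmf
        sbdadm create-lu /dev/zvol/rdsk/tank/fcvol
        stmfadm list-lu -v                 # note the GUID assigned to the new LU
        stmfadm add-view 600144f0...       # expose the LU (GUID deliberately truncated here)
        stmfadm list-target -v             # the FC port should show up as an online target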

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly, but at the moment I really need to make stuff work on short notice.

    I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers running the free edition of VMware. There is some sort of VPN defined in the switch: in order for the VMware management interface to be accessible, a checkbox for VPN has to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE host on the same physical server, using the same manual IP settings (IP/netmask/gateway), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both Proxmox VE and the future VMs? I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.

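    A hedged guess: a per-port "VPN id" configured on a switch is usually an 802.1Q VLAN tag rather than a VPN in the tunnelling sense. If that is the case here, an /etc/network/interfaces stanza along these lines (VLAN id 30 and the addresses are placeholders; the Debian vlan package must be installed) would put both the Proxmox management interface and any guests bridged to vmbr0 on that VLAN:

        auto vmbr0
        iface vmbr0 inet static
                address 172.16.0.10
                netmask 255.255.255.0
                gateway 172.16.0.1
                bridge_ports eth0.30
                bridge_stp off
                bridge_fd 0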

  • I flashed my DS4700 with a 7 series firmware, now my DS4300 cannot read the disks I moved to that lo

    - by Daniel Hoeving
    In preparation for adding a number of 1 TB SATA disks to our DS4700, I flashed the controller firmware from a 6-series release (which only supports logical drives up to 2 TB) to a 7-series release (which supports logical drives larger than 2 TB). Attached to this DS4700 was an EXP710 expansion drawer that we had planned to migrate out to our co-location site to alleviate the storage issues we were having there. Unfortunately these two projects were planned in isolation from one another, so I was unaware at the time of the issue this would cause.

    Prior to migrating the drawer, I was reading the "IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide" and discovered this:

        Controller firmware 6.xx or earlier has a different metadata (DACstore) data structure than controller firmware 7.xx.xx.xx. Metadata consists of the array and logical drive configuration data. These two metadata data structures are not interchangeable. When powered up and in Optimal state, the storage subsystem with controller firmware level 7.xx.xx.xx can convert the metadata from the drives configured in storage subsystems with controller firmware level 6.xx or earlier to the controller firmware level 7.xx.xx.xx metadata data structure. However, the storage subsystem with controller firmware level 6.xx or earlier cannot read the metadata from the drives configured in storage subsystems with controller firmware level 7.xx.xx.xx or later.

    I had assumed that deleting the logical drives and array information on the EXP710 prior to migrating it to the DS4300 (6.60.22 firmware) would satisfy the above; unfortunately I was wrong. So my questions are: a) is it possible to restore the DACstore information to its factory settings, b) what tool(s) would I use to accomplish this, and c) is this a lost cause? Daniel.


  • Postfix not sending email after upgrading to Ubuntu 12.04

    - by Luke
    After upgrading a server from Ubuntu 10.04 to 12.04, Postfix is no longer sending email through sendgrid.com. I followed this guide about 6 months ago and everything had been working perfectly until the upgrade. Now it doesn't seem to be authenticating with SendGrid. This is the error I get in my syslog when I try to send an email:

        May 22 10:19:55 server postfix/smtp[3844]: 983B11C5DA: to=<to address>, relay=smtp.sendgrid.net[174.36.32.204]:587, delay=0.05, delays=0.01/0/0.04/0, dsn=5.0.0, status=bounced (host smtp.sendgrid.net[174.36.32.204] said: 550 Cannot receive from specified address <sendgrid username>: Unauthenticated senders not allowed (in reply to MAIL FROM command))

    This is from postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = no
        config_directory = /etc/postfix
        header_size_limit = 4096000
        inet_interfaces = loopback-only
        mailbox_size_limit = 0
        mydestination = localhost, mylinode.members.linode.com
        myhostname = hostname
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.sendgrid.net]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl/sendgrid
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Any help would be greatly appreciated. I would be happy to post any other logs or other relevant information.

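    A bounce saying "Unauthenticated senders not allowed" means Postfix never offered AUTH on the connection. On a 10.04 to 12.04 upgrade a common culprit is the SASL client modules going missing, so a hedged first round of checks (package and file names per the config above):

        sudo apt-get install libsasl2-modules          # provides the LOGIN/PLAIN client mechanisms
        sudo postmap /etc/postfix/sasl/sendgrid        # rebuild the hash map next to the plaintext file
        postconf smtp_sasl_auth_enable smtp_sasl_password_maps
        sudo service postfix restart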

  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mail from the PHP application I'm using and from the Linux command line (using telnet, PHP, etc.). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried using Thunderbird, but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this.

    Edit: here are the messages in mail.log:

        Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
        Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
        Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
        Jan 9 22:43:38 mail authdaemond: password matches successfully
        Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password
        Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password
        Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory

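    The log shows authentication actually succeeding; the failure is the final line, where the IMAP server can't chdir into a Maildir under the home directory it was handed (/var/spool/mail/virtual, with maildir=<null>). A hedged check is to create the expected Maildir with Courier's maildirmake and give it to the mail owner; the exact path below is a guess and must match whatever home/maildir the users table points at:

        sudo maildirmake /var/spool/mail/virtual/Maildir          # path guessed from the homedir above
        sudo chown -R 5000:5000 /var/spool/mail/virtual/Maildir   # uid/gid 5000 per the authdaemond output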

  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all three of my browsers (Firefox/Chrome/IE, OS = Windows 7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC. The browsers won't load the main image on the page correctly (even though the source code looks good), but if I point the browser at the exact location of that image, it displays fine. So obviously I can get the HTML index (which locates the resource) and I can get to the resource. So why the heck isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad in all three browsers at once. I've browsed to a bunch of other sites (including sites very heavy on JavaScript, with HTML much more complex than the one in question here) and am seeing nothing funny.

    The only thing wonky I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't know how Magnifier would be related to this even if I hadn't restored it. Any ideas? I'm stumped!

    Edit: the original post links screenshots of what the broken page looks like in Chrome and of the image loading correctly by itself.


  • Compile php 5.3 ldap extension

    - by toups
    So I'm trying to follow the very un-descriptive guide at my web host for compiling a new PHP extension:

        Compiling PHP 5.3 extensions
        You can also compile and load your own extensions. Here's how:
        1. Download and unpack the extension (from PECL, for instance).
        2. If the extension is already compiled (most binary PHP loaders will be, for instance), skip to step 6.
        3. /usr/local/php53/bin/phpize
        4. ./configure --with-php-config=/usr/local/php53/bin/php-config
        5. make
        6. Copy the module to your .php/5.3/ directory.
        7. Assuming your user is called "username" and your module is named "mymodule.so", add the following to your .php/5.3/phprc:
           extension = /home/username/.php/5.3/mymodule.so

    I downloaded the OpenLDAP stable release, uploaded the unpacked tarball via FTP to my server, and did steps 3, 4 and 5. Now step 6 says "copy the module...". My question is: where is the module for me to copy? Sorry if it's obvious and I'm not seeing it; first time compiling a PHP extension :O

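    Two hedged notes. For a PECL-style extension, the build drops the shared object under modules/ inside the source tree, so the missing copy step is roughly the sketch below (directory and module names are placeholders). Also worth noting: the OpenLDAP tarball is the LDAP server and libraries, not a PHP extension, so phpize/make on that tree won't produce a PHP module; PHP's own ldap extension is built from ext/ldap in the PHP source, against the OpenLDAP headers.

        cd myextension-1.2.3                 # unpacked PECL-style source, hypothetical name
        /usr/local/php53/bin/phpize
        ./configure --with-php-config=/usr/local/php53/bin/php-config
        make
        ls modules/                          # the built mymodule.so ends up here
        cp modules/mymodule.so ~/.php/5.3/
        echo "extension = /home/username/.php/5.3/mymodule.so" >> ~/.php/5.3/phprc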

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc.; we have the Windows firewall disabled already). The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict SEPM and the Desktop Recovery management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers - 16 GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers - 16 GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and we have one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum


  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked due to cross-site scripting security policies in JavaScript. As a Java application, GeoServer runs on Apache on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS 7 to fix this problem. I'm following this guide, but it's still not working. Here are my URL Rewrite rule settings:

        Matches URL: (.*)
        Condition: {HTTP_URL} matching /geoserver
        Action: rewrite to http://localhost:8080/{R:1}, appending query string

    When I request http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913, however, all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.

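    One hedged possibility: rewriting to an absolute URL on another host and port (http://localhost:8080/...) only works when Application Request Routing (ARR) is installed and its proxy mode is enabled; plain URL Rewrite rules on their own typically return a 404 for such targets. If ARR is installed, proxy mode can be switched on with appcmd (the path shown is the usual inetsrv location):

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost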

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/

    When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000
        auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass
        auth [success=1 default=ignore] pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?

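    A hedged observation: with use_fully_qualified_names = True, sssd only knows the account as nwalke@domain.company.com, so a bare "ssh nwalke@host" produces exactly this "Invalid user" pattern. Two ways to test, assuming that is the cause (domain name taken from the config above):

        ssh -l nwalke@domain.company.com c-u14-dev1     # log in with the fully qualified name
        # or drop the requirement and restart sssd:
        sudo sed -i 's/^use_fully_qualified_names.*/use_fully_qualified_names = False/' /etc/sssd/sssd.conf
        sudo service sssd restart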

  • samba4 dc "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "the network location cannot be reached."

    I have access to the domain controller (DC). I'm using the DC as the domain name server (DNS) already, the name is correctly resolving, and it is correctly forwarding outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for samba4 DC plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back on to check this post for a day or three, but I might, because this mystery is killing me.

    I followed the samba4-as-a-DC guide here and supplemented the gaps with this. I have tested Kerberos and NTP, and set my DC as the clock to sync to on my Windows client; it appears to be a very small fraction of a second off, so that shouldn't be it. Also, the firewall and SELinux are both off for testing. I have also tried disabling IPv6 and cleared the registry of IPv6 records (allegedly the default samba4 as a DC runs as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6; fair warning, I heard this on the internet, so it is probably a lie). I have tried a few other things that I have forgotten because I have been doing this for a day and a half now.

    Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free. I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).

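    "The network location cannot be reached" during a domain join very often comes down to the client not finding the DC's DNS SRV records. A hedged set of checks, with samdom.example.com standing in for the real domain name: on the DC (or any box pointed at the DC's DNS),

        host -t SRV _ldap._tcp.samdom.example.com.
        host -t SRV _kerberos._udp.samdom.example.com.
        host -t A dc1.samdom.example.com.

    and on the Windows 7 client, confirm that its only DNS server is the DC and that nslookup -type=SRV _ldap._tcp.samdom.example.com returns the same record.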

  • Should I persist images on EBS or S3 ??

    - by enes
    Hi, I am migrating my Java, Tomcat and MySQL server to AWS EC2. I have already attached an EBS volume for storing the MySQL data. In my web application people may upload images, so I need to persist them. There are two alternatives in my mind:

    1. Save uploaded images to the EBS volume.
    2. Use the S3 service.

    The following are my notes; please be skeptical about them, as my expertise is not in servers but in software development.

    EBS plus: S3 storage is more expensive (0.15 $/GB vs 0.10 $/GB).
    S3 plus: serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3, my server will not be responsible for serving statics.
    S3 plus: serving statics from EBS may incur I/O cost; probably it will be minor.
    EBS plus: people say EBS is faster.
    S3 plus: people say S3 is safer for persistence.
    EBS plus: no need to learn an API; it is straightforward to save the images to the EBS volume.

    Namely, I cannot decide; I will be happy if you can guide me. Thanks.


  • Wordpress Directory Permission to allow uploads, plugin folders, etc

    - by user1015958
    I have a pre-made WordPress site which was developed on my local machine, and I uploaded it to a VPS running Debian 6 with nginx, MySQL and PHP. I followed this guide:

    1) Create an unprivileged user, say 'karl' or whatever, and make them belong to the www-data group, so that if I were to log in as karl and create a web root in /home/karl/www/, all the files will be owned by karl:www-data.
    2) Set up nginx as the user www-data in nginx.conf.
    3) Set up PHP-FPM to run as www-data.
    4) Place your files in /home/karl/www/[domain name maybe]/public_html/, uploading as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the folder wp-content, but I still get the error:

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so what permissions should I set? And what should I set to allow my users to upload, write posts and edit articles? Sorry for my English, by the way.

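    A hedged sketch of the usual fix: with PHP-FPM running as www-data, wp-content (and everything under it) needs to be group-owned by www-data and group-writable. The paths below assume the layout from the guide above, with example.com as a placeholder:

        cd /home/karl/www/example.com/public_html
        sudo chown -R karl:www-data wp-content
        sudo find wp-content -type d -exec chmod 775 {} \;
        sudo find wp-content -type f -exec chmod 664 {} \;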

  • trying to allow domain admins access in apache

    - by sharif
    I am trying to authenticate domain admins through Apache and it is not working. The error I get is as follows:

        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(1432): [client 172.16.0.85] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(915): [client 172.16.0.85] Using HTTP/[email protected] as server principal for password verification
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(655): [client 172.16.0.85] Trying to get TGT for user [email protected]
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(569): [client 172.16.0.85] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(994): [client 172.16.0.85] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic
        [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(561): [client 172.16.0.85] ldap authorize: Creating LDAP req structure
        [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(573): [client 172.16.0.85] auth_ldap authorise: User DN not found, LDAP: ldap_simple_bind_s() failed

    Below is what I have in my httpd config:

        Alias /compass "/data/intranet/html/compass"
        <Directory "/data/intranet/html/compass">
            AuthType Kerberos
            AuthName KerberosLogin
            KrbServiceName HTTP/intranet.xxx.com
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms xxx.COM
            Krb5KeyTab /etc/httpd/conf/intranet.keytab
        #    require valid-user
        #    Options Indexes MultiViews FollowSymLinks
        #    AllowOverride All
        #    Order allow,deny
        #    Allow from all
        #    SetOutputFilter DEFLATE
            # taken from http://blogs.freebsdish.org/tmclaugh/2010/07/15/mod_auth_kerb-ad-and-ldap-authorization/
            # download extra module and install
            # Strip the kerberos realm from the principle.
            # MapUsernameRule (.*)@(.*) "$1"
            AuthLDAPURL "ldap://echo.uk.xxx.com akhutan.usa.xxx.com/dc=xxx,dc=com?sAMAccountName"
            AuthLDAPBindDN cn=Administrator,ou=Users,dc=xxx,dc=com
            AuthLDAPBindPassword ***
            Require ldap-group cn=Domain Admins,ou=Users,dc=xxx,dc=com
        </Directory>

    I have followed this guide. I have downloaded and installed the tarball. When I try to uncomment MapUsernameRule, I get a failure when restarting Apache:

        Reloading httpd: not reloading due to configuration syntax error

    I am using CentOS 5 64-bit. I have added the following line, but I still get the syntax error:

        LoadModule mod_map_user modules/mod_map_user.so

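    "User DN not found, LDAP: ldap_simple_bind_s() failed" means the module could not even bind with the AuthLDAPBindDN, so the group check never runs. A couple of hedged things to verify from the shell: whether that bind DN and password actually work, and whether those containers are really OUs (in a default AD, the built-in accounts and groups live under cn=Users, not ou=Users). The username below is a placeholder:

        ldapsearch -x -H ldap://echo.uk.xxx.com \
            -D "cn=Administrator,ou=Users,dc=xxx,dc=com" -W \
            -b "dc=xxx,dc=com" "(sAMAccountName=someuser)" dn memberOf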

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots Adobe expects you to be OK with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order that Adobe requires, extracted my CD to an administrative install point, and ran the patches against it:

        msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to cab it), but it is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

        cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    Per the guide I was using, this did produce the cab file I wanted; however, the resulting MSI fails to install with an error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.


  • How to install subversion on 1&1 server with windows?

    - by Miles M.
    I would like to start using Unfuddle for my project on a 1&1 server. I have never used Subversion or source control before, so I read a lot of documentation about it, but each time I get lost at the very beginning. I've downloaded the latest version of Subversion, but every tutorial describes a different way to proceed.

    First, I see on a lot of tutorials that you have to enter command lines. Is that ONLY for Linux? Like here: http://chwalisz.org/2007/08/05/subversion-on-11-shared-hosting/

    I also found something completely different on some websites; I think (correct me if I'm wrong) these are the Windows tutorials, deeply different from the Linux one. For example:
    http://www.codinghorror.com/blog/2008/04/setting-up-subversion-on-windows.html
    http://geekswithblogs.net/emanish/archive/2006/06/14/81905.aspx
    http://better-scm.shlomifish.org/subversion/Svn-Win32-Inst-Guide.html

    And I don't understand:
    Do I still have to put the Subversion files on the server?
    Do I have to install Apache? Where - on my computer or on my server? I'm working with WampServer, so I think I already have Apache installed, right?
    When they say a tutorial is for Windows, do they mean it is for Windows servers or for your own OS? Because my servers are on Linux.
    How could I install Subversion on a 1&1 Linux server from my Windows 7 machine?

    Thanks. That's a lot of questions, but it's all really messy in my mind; I can't find anything clear.

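    One hedged clarification: with Unfuddle the Subversion repository itself is hosted by Unfuddle, so nothing has to be compiled or installed on the 1&1 server to use it; all that's needed is an svn client wherever the working copy should live (TortoiseSVN on the Windows 7 machine, or the command-line client on the server). A minimal checkout would look like this, with the account and project names as placeholders:

        svn checkout https://youraccount.unfuddle.com/svn/youraccount_yourproject/ ~/myproject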

  • vmware vcenter 5.1 installation with FQDN error

    - by CSG
    I'm trying to install vCenter 5.1 on a dedicated Windows 2012 server (with standalone SQL Express). During the installation of the Single Sign On module I get a warning:

        The fully qualified domain name cannot be resolved with nslookup. If you continue the installation, some features might not work correctly. For detailed requirements see the installation and setup guide.

    The only indication I have found is about reverse DNS zone resolution, and that works! I've verified that DNS works properly with nslookup:

        C:\Users\admin>nslookup srv6.mydomain.local
        Server: srv2.mydomain.local
        Address: 172.25.4.22
        Nome: srv6.mydomain.local
        Address: 172.25.1.26

        C:\Users\admin>nslookup 172.25.1.26
        Server: srv2.mydomain.local
        Address: 172.25.4.22
        Nome: srv6.mydomain.local
        Address: 172.25.1.26

    (All the IPs are right: the vCenter host is srv6 and the DC+DNS is srv2, on different VLANs.) I've tried to force the resolution of the IP by changing the [..]\drivers\etc\hosts file, I've disabled IPv6 support, I've tried all combinations of domain prefixes (explicit, by DHCP, undefined), and I've disabled all antivirus/firewall software (Kaspersky Endpoint 10). Is this a bug in vCenter 5.1.0-1065152? Do you have any suggestions for me?


  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started to make an annoying whining noise. Knowing that this is likely a fan issue, I opened the case and proceeded to determine which fan was causing it. I got some compressed air and tried cleaning out the dust around it (and the rest of the computer while I was at it). This hasn't fixed the issue. Now, if it were just any fan, I would probably just replace it; they're relatively cheap, after all. However, this is a special fan.

    Aside: for what it's worth, I feel bad that the graphics card blocks part of the fan, but it is the only slot the graphics card fits in, so I had no choice.

    After pulling out my motherboard user guide, it looks like this is a fan placed directly on top of the chipset. To be perfectly honest, I have no clue what the purpose of the chipset is, but it sounds important. After some quick research, I see that it is responsible for providing the bridge between my CPU, RAM and graphics, among other things. A quick search at Newegg tells me that chipset fans can be purchased at pretty reasonable prices (under 20 dollars). Is it practical to replace this fan? It is an old computer as computers go, and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign.

    Hardware specs:
    Motherboard: Asus A8N-E
    Chipset: NVIDIA nForce4 Ultra

