Search Results

Search found 13180 results on 528 pages for 'non interactive'.

  • Kerberos issues after new server of same name joined to domain

    - by MentalBlock
    Environment: Windows Server 2012, two domain controllers, one domain. A server called Sharepoint1 (running SharePoint 2013 using NTLM) was joined to the domain. It was then rebuilt from scratch (OS and SharePoint), set up for Kerberos, and joined to the domain again under the same name. Two SPNs were added, HTTP/sharepoint1 and HTTP/sharepoint1.somedomain.net, for the account SPFarm. Active Directory shows a single, non-duplicate computer account with a create date from the first server and a modify date from the second server's creation. A separate server, also on the domain, has the server added to All Servers in Server Manager; that server logs a local error exactly like this one from Technet (Kerberos error 4 - KRB_AP_ERR_MODIFIED).
    Question: Can someone help me understand whether the problem is:
    1) The computer account is still the old account, causing a Kerberos ticket mismatch (granted, some housekeeping in AD might have prevented this);
    2) (In my limited understanding of Kerberos and SPNs) the SPFarm account used for the SPNs is somehow mismatched with the HTTP calls made by the remote server management tools services in Windows Server 2012; or
    3) Something completely different?
    I am leaning towards the first, since I tested the same SPNs on another server and it didn't cause the same issue. If this is the case, can it be easily and safely repaired? Is there a proper way to either reset the account or, better yet, delete and re-add it? Although it sounds simple enough with some PowerShell or clicking around in AD Users and Computers, I am uncertain what impact this might have on an existing server, particularly one running SharePoint. What is the safest and simplest way to proceed? Thanks!
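
    If the stale machine-account password is the cause, here is a sketch of the usual checks and the non-destructive fix, run on Sharepoint1 itself (account and SPN names taken from the question; the DC name is a placeholder). Resetting the secure channel avoids deleting the account, which matters on a SharePoint box:

      rem confirm where the HTTP SPNs actually live (duplicate registrations
      rem are the classic cause of KRB_AP_ERR_MODIFIED)
      setspn -L SPFarm
      setspn -Q HTTP/sharepoint1.somedomain.net

      rem flush cached Kerberos tickets after any change
      klist purge

      # from an elevated PowerShell prompt: re-sync this machine's AD account
      # password over the secure channel, no unjoin/rejoin needed
      Reset-ComputerMachinePassword -Server SomeDC.somedomain.net -Credential (Get-Credential)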

  • AT&T Filtering FTP traffic?

    - by xpda
    Using AT&T DSL, I cannot FTP upload or download a few files out of a large set of 1500. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 MB file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via FTP. It will download in a browser, and it will FTP download just fine with a different name; it just will not download via FTP under the original name. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" The problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T have some kind of FTP file filtering? Is there anything else that could cause this behavior?
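
    One way to test whether a middlebox is inspecting the FTP command/data channel is to repeat the transfer over FTPS, which encrypts the channel so the filename is invisible in transit. A sketch using curl (server name, path, and credentials are placeholders), assuming the remote servers support explicit TLS:

      # plain FTP upload of the problem file (the one that fails over AT&T DSL)
      curl -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" ftp://user:[email protected]/upload/

      # same upload over explicit FTPS; if this succeeds where plain FTP fails,
      # something on the path is filtering based on cleartext FTP traffic
      curl --ssl-reqd -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" ftp://user:[email protected]/upload/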

  • Unix sort 10x slower with keys specified

    - by KenFar
    My data: It's a 71 MB file with 1.5 million rows. It has 6 fields, four of which are strings of avg. 15 characters, two are integers. Three of the fields are sometimes empty. All six fields combine to form a unique key - and that's what I need to sort on.
    Sort statement:
      sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv
    The problem: If I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) then merge the results, etc. But I'm hoping for something simpler since these results are so bad as-is. Any suggestions?
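
    One thing worth ruling out before any hardware changes is locale-aware collation: with GNU sort, per-key comparisons go through the current locale's collation rules, which is far slower than plain byte comparison. Forcing the C locale often closes most of the gap; a sketch against the file names from the question (the -S buffer size is an arbitrary example):

      # byte-wise collation instead of locale collation; with key splitting
      # this is often several times faster
      LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv

      # a larger in-memory buffer also cuts down on temp-file merge passes
      LC_ALL=C sort -S 512M -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv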

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]).
    My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I specify mydestination as blank, then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com.
    /etc/postfix/main.cf:
      myhostname = domain1.com
      mydomain = domain1.com
      mydestinations = $myhostname, localhost.$mydomain, localhost
      virtual_mailbox_domains = domain1.com
      virtual_mailbox_maps = hash:/etc/postfix/vmailbox
      virtual_mailbox_base = /home/vmail/domains
      virtual_alias_domains = domain2.com
      virtual_alias_maps = hash:/etc/postfix/virtual
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      virtual_transport = dovecot
    /etc/postfix/virtual:
      domain1.com  right-hand-content-does-not-matter
      firstname.lastname  user1
      [more aliases..]
      domain2.com  right-hand-content-does-not-matter
      @domain2.com  @domain1.com
    /etc/postfix/vmailbox:
      [email protected]  user1/Maildir
      [email protected]  user2/Maildir
    /etc/aliases:
      root: :include:/etc/postfix/aliases/root
      webmaster: :include:/etc/postfix/aliases/webmaster
      [etc..]
    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
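
    The standard pattern, if I read the Postfix docs right, is to keep domain1.com out of mydestination entirely and re-express the local-style aliases as virtual aliases, so recipient validation works for the whole domain. (Also worth checking: the posted file spells the parameter "mydestinations", which Postfix would treat as an unknown, unused parameter.) A sketch under those assumptions, reusing the map files from the question:

      # /etc/postfix/main.cf -- domain1.com is served only as a virtual
      # mailbox domain; mydestination holds just the machine itself
      mydestination = localhost
      virtual_mailbox_domains = domain1.com
      virtual_alias_domains = domain2.com
      virtual_alias_maps = hash:/etc/postfix/virtual

      # /etc/postfix/virtual -- aliases re-expressed as virtual aliases;
      # unknown recipients in domain1.com now get rejected at RCPT TO time
      root@domain1.com       root@localhost
      webmaster@domain1.com  webmaster@localhost
      firstname.lastname@domain1.com  user1@domain1.com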

  • My Samsung R480 laptop freezes for short and inconsistent periods of time.

    - by anonymous
    I have a Samsung R480 laptop running Windows 7. It's great, I love it to death, but every so often it'll start having, for lack of a better term, "hiccups". It will freeze completely, emit a loud BZZZZZZT from the speakers (think of when a video freezes and the sound gets stuck), then return to working order, all in a span of 0.5~2 seconds. It has happened while I was playing Mass Effect, while I was watching both HD and non-HD videos, and while I was surfing the internet (I've noticed it while watching YouTube videos and playing Entanglement, but also while using Facebook, minus the buzzing sound). My initial hypothesis was that my GPU or CPU was overheating, as it would shut down while playing Mass Effect, but when I turned off my WiFi card hardware using [fn + F9], the problem was resolved. As I currently see it, it could be a problem with my CPU, my WiFi card, or my sound card. It could also be a software-related issue, possibly an ill-behaved process. It could be Chrome-related - a memory leak from Chrome, maybe? Does anyone know how to resolve this?

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this - I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script, with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks:
    1) add a non-privileged user to the target system for running the daemon without unnecessary root privileges
    2) package the binary files themselves up from the final installation location on a separate build machine (probably under /opt/package for sanity's sake). No source is available.
    3) add a firewall hole in order for the end-users to be able to communicate with the "cluster" nodes
    4) add a cron task which can start the daemon on @reboot
    I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Anyone have any good resources they can share for achieving this goal? Thanks!
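
    For binary-only repackaging, the usual trick is a spec file with no %build stage that just copies a pre-made tarball of the installed tree into place, with %pre creating the service user and %post handling the firewall and cron pieces. A rough sketch; the package name, user, daemon path, and port are all placeholders, not the real product:

      # package.spec -- binary repack, nothing to compile
      Name:           package
      Version:        1.0
      Release:        1%{?dist}
      Summary:        Repackaged vendor binaries
      License:        Proprietary
      Source0:        package-bin.tar.gz
      # skip the strip pass that could mangle vendor binaries
      %define __strip /bin/true

      %description
      Vendor binaries captured from a reference install under /opt/package.

      %prep
      %setup -q -c

      %install
      mkdir -p %{buildroot}/opt/package
      cp -a . %{buildroot}/opt/package/

      %pre
      getent passwd pkguser >/dev/null || useradd -r -d /opt/package -s /sbin/nologin pkguser

      %post
      # hypothetical port; CentOS 5 manages the firewall with plain iptables
      iptables -I INPUT -p tcp --dport 12345 -j ACCEPT
      (crontab -l -u pkguser 2>/dev/null; echo "@reboot /opt/package/bin/daemon") | crontab -u pkguser -

      %files
      %defattr(-,pkguser,pkguser,-)
      /opt/package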

  • How to change the password on a RAR archive without modifying the archived files' attributes (modified/created)?

    - by Larry78
    How do I change the password of a .RAR archive without changing the date/time attributes of the files in the archive? Unfortunately you can't directly change the password of the archive with WinRAR; you have to extract the files and then make a new archive with the new password, so the created/modified attributes of the files in the archive get changed. I know you can manually change the attributes of a file with available utilities - but there are hundreds of files in the archive, each with unique attributes, so it would take a very long time to "fix" each file before re-archiving it. I'm using WinRAR 3.51, the last free version, on Windows XP Pro SP3.
    Update: I don't care if the output is a .RAR file or a ZIP file. IZArc 4.1 will convert the RAR to a ZIP, and it keeps the dates. The problem is that it compresses the files - there isn't a "store" option, and setting the default to store in the main configuration doesn't affect conversions. The RAR contains uncompressed files. None of the other archiving programs will even do a conversion; a couple claim to, or try to, but the errors returned indicate a very lousy application. So far I've tried PeaZip, 7-Zip, FilZip, TugZip, SimplyZipSE, QuickZip, and WinShrink (from downloads.cnet.com). WinRAR gives the error "skipping encrypted archive" when I try the conversion. It asks for the password first, and I know it's right, as I opened the archive and I can read/view all the files in it. It works on non-encrypted files.
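
    If the command-line rar.exe that ships with WinRAR is available, its -ts switches may automate the round trip: modification times are stored in every archive and restored on extraction, and -m0 re-packs without compression. Creation times can only be restored if the original archive was made with -tsc, so this is a partial fix at best. A sketch with made-up archive names, assuming the 3.x rar.exe supports -ts as documented:

      rem extract, restoring stored file times (mtime always; ctime only
      rem if the original archive saved it)
      rar x -tsa -tsc old.rar unpacked\

      rem re-pack with a new password (-p prompts), stored uncompressed (-m0),
      rem saving all three timestamp kinds this time around
      rar a -p -m0 -tsm -tsc -tsa new.rar unpacked\*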

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation: Chromium v.7 + Ubuntu 10.04 + Sun Java 6 + Webcam + Facebook + Flash 10 = non-functional. All of those items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:
    1) Installing the Sun-provided (I refuse to say Oracle, sob) Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.
    2) Somehow enabling Facebook as a permitted site to access my webcam using Flash settings. I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7.
    Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated! Cheers!

  • Is there a remote desktop or VNC app for the iPad that properly handles Bluetooth keyboard shortcuts?

    - by Steve Bison
    I've tried 4 or 5 remote desktop apps, the most notable being Jump Desktop and Splashtop Streamer. Most of these remote desktop apps have some sort of on-screen keyboard for typing on the iPad, including special keys like shift, control, and alt. The special keys act like "sticky keys", meaning they stay depressed until another key is pressed, to make it easier to do key combinations. Even non-standard keyboard combinations like shift+enter work, in this sticky sense. When using a Bluetooth keyboard with the remote desktop apps, both Jump and Splashtop Streamer recognize the shift+letter combination for doing capital letters. However, generically pressing shift, ctrl, or alt does not depress the sticky on-screen shift buttons or do anything at all. Only a few combinations are recognized (again, like shift+letter and ctrl+C); most combinations do not work (shift+enter, alt+tab). Even having the keyboard shortcuts work like sticky keys (press shift then enter, not both at once) would be much better than the limited functionality they have now. Is there an app, jailbreak app, or workaround that lets me use a Bluetooth keyboard properly with remote desktop on the iPad?

  • Simulate a DFS share for a user not on domain with a folder in path

    - by user223655
    I have a consultant whose computer is not on the domain and who needs to access various network resources. While adding his computer to the domain is a difficult bureaucratic process (and would disallow much of his development software from even running, given the domain restrictions), we can give him credentials to access network resources. As such, he accesses various network resources via NET USE etc. without using DFS. There is one piece of software which requires him to have the same hardcoded path as other domain users, but that path is a DFS path which he can't map (i.e., the software checks the path at runtime and will only run if it matches the registered path; it rejects a conventional machine path where it expects a DFS one). I was wondering if there's some method to simulate the DFS path without actually using DFS. E.g., the path the software needs to see is "\\ABC\DFS\software\app.exe", whereas the non-DFS path is "\\DEF\Software\app.exe". While I could make his hosts file point DEF to ABC, I'm not sure if I can somehow make it point there with the DFS "folder" as well. Are there any methods for this, short of making changes to AD to allow him to use DFS or adding him to the domain (both of which are politically/technically challenging, sadly)? Thanks guys
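
    One sketch of a name-level workaround (hostnames and the IP are placeholders): point the name ABC at the real file server in the hosts file, then expose a share named DFS on that server whose folder layout mirrors the DFS namespace, so \\ABC\DFS\software\app.exe resolves with no DFS involved. Whether this satisfies the application's path check depends on how it validates, and the server may need DisableStrictNameChecking to answer SMB requests under the alias:

      # C:\Windows\System32\drivers\etc\hosts on the consultant's machine
      # (10.0.0.5 stands in for the real address of server DEF)
      10.0.0.5    ABC

      rem on server DEF: share a folder tree that mirrors the namespace, so
      rem \\ABC\DFS\software\app.exe maps onto D:\dfsmirror\software\app.exe
      net share DFS=D:\dfsmirror /grant:consultant,READ

      rem allow the server to respond to the alias name over SMB
      reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1 /f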

  • Windows 7 32bit resolution is limited for HDTV monitor

    - by Nick
    I have a small Magnavox HDTV that I am using to test a Frankenstein PC build. The goal is to eventually connect it to my old rear-projection HDTV, which supports 1080i via component input. The goal is also not to buy any more stuff; otherwise I will just buy a smart TV and be done. I have an ATI Radeon HD 3450 with a YPbPr component-out adapter. The monitor supports 1080p, but over analog component out it should only go up to 1080i. I have had this working with another setup. On this particular setup, I have Windows 7 32-bit with the latest 12.8 Catalyst drivers installed. The Windows splash screen starts in 480p, then switches to 480i when the login prompt is shown. When I try to change the resolution, 720x480 is the maximum value of the slider. I have also tried "list all modes", and that also maxes out at 720x480. There are two options for this monitor in the devices section, Generic PnP monitor and Generic non-PnP monitor; neither setting fixes this. Any ideas on how to get 1080i?

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres and transfer data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster introduces lag. What are best practices for mixing non-AWS and AWS systems? E.g.: How to move the hoster's data to AWS as log file storage, to run Urchin analysis and/or port the log file data into a BigTable for exhaustive analysis there? And after working with the data: how to bring it back to the hoster and use it with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.
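
    For the log-shipping piece specifically, a minimal sketch with the AWS CLI (bucket and paths are made up). Inbound transfer to S3 is free, so the main costs are storage and the trip back out; shipping only compressed, rotated logs keeps both small:

      # ship the already-rotated, compressed logs; only new/changed files move
      aws s3 sync /var/log/apache2/ s3://example-log-bucket/logs/ --exclude "*" --include "*.gz"

      # after the EC2-side analysis, pull just the results back to the hoster
      aws s3 sync s3://example-log-bucket/results/ /srv/analysis/results/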

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice on setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked out. The computers/servers on these networks are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation:
    1) I currently have no solution in place; I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks, and I have seen mismatch worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that.
    2) How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network?
    3) There is no human access. No matter how large the mismatch, the servers must attempt to correct themselves. Discrete steps are very bad: no matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
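
    A sketch of how classic ntpd can be bent toward this environment (server names are placeholders; chrony is the other candidate worth evaluating): the 1000-second "panic" exit can be disabled, stepping can be turned off entirely in favour of slewing, and orphan mode lets two or three internal servers agree on a time source while the internet is dark:

      # /etc/ntp.conf on each of 2-3 internal time servers
      tinker panic 0          # never exit, however large the offset
      tinker step 0           # never step; always slew (can take a long time)
      tos orphan 9            # elect a local source at stratum 9 when cut off
      server 0.pool.ntp.org iburst
      server 1.pool.ntp.org iburst
      peer timesrv1.closed.lan     # the other internal time servers
      peer timesrv2.closed.lan     # (hypothetical names)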

  • Windows 7 not booting up and stuck at startup repair

    - by mikimr
    I've been having issues with Windows 7 Home Premium on a Lenovo laptop. At first, it would not start up normally at all. I started it in Safe Mode, where I disabled all non-MS services, and tried again to no avail. It then goes into Startup Repair, which failed several times. I tried copying the original registry settings back; still the same. I resorted to booting with an Ubuntu DVD, where I ran boot-repair, which is supposed to correct the Windows boot. No luck. I then used the Win7 DVD to start up, where I had the option to install or repair. I chose repair, got into the command prompt, and ran chkdsk /i /r, which found 3 unreadable segments; it went through the 2nd step without issues, and the 3rd step completed with some errors (I can't recall the exact errors). When I restarted the machine, it went straight to Startup Repair, indicating "Attempting repairs... Repairing disk errors. This might take over an hour to complete." It's been like this for nearly 15 hours. When I try to cancel or close the Startup Repair window, I get the message "The current repair operation cannot be cancelled." Should I let it run or force-shut the machine? If I force-shut it, how can I resolve this problem? Thanks.

  • Apache Virtualhost entry with Windows hostname

    - by gshauger
    I have a Windows Domain Controller and we use it for DNS on our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc to the hostname Endymion, it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc to the hostname projectx, it takes me to the same Ubuntu box as the hostname Endymion... this is what I would expect. I now have two hostnames for the same box. I then create a VirtualHost entry in httpd.conf that looks like the following:
      <VirtualHost *:80>
          DocumentRoot /var/www/projectx
          ServerName projectx
          ServerAlias projectx
      </VirtualHost>
    Sure enough, when I go to a browser and type in http://projectx/, it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/, and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)
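
    This is expected name-based virtual hosting behaviour: once any VirtualHost exists, a request that matches no ServerName falls through to the first VirtualHost defined, here the projectx one. A sketch of the usual fix, declaring a default vhost for Endymion first (exact directives depend on the Apache version; 2.2 needs NameVirtualHost, 2.4 dropped it):

      NameVirtualHost *:80

      # the first-defined vhost is the default for unmatched hostnames
      <VirtualHost *:80>
          DocumentRoot /var/www
          ServerName endymion
      </VirtualHost>

      <VirtualHost *:80>
          DocumentRoot /var/www/projectx
          ServerName projectx
      </VirtualHost>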

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Win 2003 SP2 servers and several Win XP SP2 & SP3 clients, all in the same LAN. The firewall is disabled everywhere. No recent Windows updates or configuration changes. This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is: "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I can, however, open the same shares if I use the FQDN or IP address: \\server1.domain.local\c$ or \\172.0.0.1\c$. Other shares do not have this issue and I can open them without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.
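
    The second half of that error text is the giveaway: Windows refuses a new SMB session to the same server name under different credentials. A sketch of the usual diagnosis from an affected client (server name as in the question):

      rem list existing SMB connections; look for a lingering session to
      rem \\server1 made under another account (mapped drive, printer, script)
      net use

      rem drop every cached connection, then retry the admin share
      net use * /delete
      net use \\server1\c$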

  • How to avoid en.voyages-sncf.com redirecting to uk.voyages-sncf.com?

    - by Mark Smith
    OK, so en.voyages-sncf.com is French Railways' English-language website, with full functionality for train booking in France - it sells iDTGV, offers seating options, etc. uk.voyages-sncf.com is their UK subsidiary, with reduced functionality: no seat options, no iDTGV, etc. Previously, I was able to select 'Other countries (EUR)' at the top right and go from the uk version to the en version, or just type in the direct URL en.voyages-sncf.com and go there. Now they seem to have implemented an automatic redirect, so whenever I enter en.voyages-sncf.com on my UK-based PC, or indeed try to select 'Other countries (EUR)', it automatically bumps me to uk.voyages-sncf.com, which I don't want. I can't get onto en.voyages-sncf at all. So, short of a heavyweight solution like a non-UK proxy server or downloading the Tor browser, is there any simple solution? Like telling my browser to go to en.voyages-sncf, go directly to en.voyages-sncf and no other site, do not pass go, do not collect £200, do not go anywhere else, ignore all redirects and do what you're told by ME, not by those Machiavellian so-and-sos?

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TLDR: Google tends to ignore special characters in my search strings. Is there anything that I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location?
    I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out more. I attempted various search terms, all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically, one which I did try) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is, I did search that, and the results that I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me: Google often ignores special characters in my search strings, and the results are often useless. As a developer, I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google?
    NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results from certain localized, text-based assumptions. Regardless, I would like Google to search for what I give it. Is there anything that I can do to improve my results?

  • Sound doesn't work anymore after replacing RAM

    - by thejh
    Hello, today I replaced one old RAM module with two newer, bigger ones, but now the sound doesn't seem to work anymore. I already ran alsaconf and it didn't help. Output of lspci for the audio device:
      00:07.0 Audio device: nVidia Corporation MCP67 High Definition Audio (rev a1)
          Subsystem: Giga-byte Technology Device a002
          Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0 (500ns min, 1250ns max)
          Interrupt: pin A routed to IRQ 21
          Region 0: Memory at f5100000 (32-bit, non-prefetchable) [size=16K]
          Capabilities: [44] Power Management version 2
              Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
              Status: D0 PME-Enable- DSel=0 DScale=0 PME-
          Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
              Address: 0000000000000000  Data: 0000
              Masking: 00000000  Pending: 00000000
          Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
          Kernel driver in use: HDA Intel
          Kernel modules: snd-hda-intel
    The audio device is onboard and has six configurable outputs, two or so of which are also capable of being an input (if I remember correctly), but I don't know how to control it under Linux. Does somebody know how/whether replacing the RAM could be related to my problem, and/or how to fix it?
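
    New RAM by itself is an odd culprit; more often the swap reset the CMOS/BIOS defaults or the HDA codec just needs re-probing. A sketch of the usual first checks, assuming ALSA with the snd-hda-intel driver shown in the lspci output:

      # check whether the card is simply muted after a BIOS-defaults reset
      amixer sget Master
      alsamixer          # unmute Master/PCM/Front with 'm', raise the volume

      # reload the HDA driver so the codec is re-probed
      # (stop anything using sound first)
      sudo modprobe -r snd-hda-intel
      sudo modprobe snd-hda-intel

    It is also worth checking that the onboard audio wasn't disabled in the BIOS if the board lost its settings during the RAM swap.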

  • Windows 7 not booting after failed SRT (SSD caching) install

    - by david
    This is a fairly new computer, only about a month old: i7 2700K, Z68 motherboard, with a 1.5 TB WD Black HD and a 128 GB Crucial M4 SSD. I followed the instructions for setting up SSD caching: the SATA controller was set to RAID, I installed the Intel software, enabled acceleration, and it said everything went fine. But when I went to reboot, I received the lovely "Reboot and Select proper Boot device" error message. I checked the BIOS, and it was booting from the correct HD (I tried the only other option anyway just in case; it was the ~50-odd GB of unformatted space left on the SSD). After that I entered the RAID utility (Ctrl-I at boot), removed the acceleration, and deleted the RAID array (because it was being used as a cache, this was non-destructive). Still no boot. So I reinstalled Win7 directly on the SSD, booted, and checked the HDD to make sure it hadn't been wiped. It hadn't; all the files were still there, including all the Windows stuff. I backed up my data to an external drive just in case, but I'd really like to get this install booting again. I trawled the web a bit, and have tried entering recovery mode and using bootrec.exe and bootsect.exe to fix it, but to be honest I'm not sure what I'm doing with those. My question is basically: how do I make my hard drive bootable again?
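
    For reference, a sketch of the usual repair sequence from the Win7 install media's recovery command prompt. This assumes the boot files (not the Windows folder) are what the cache removal clobbered, and that the SATA controller is still set to RAID, since that is the mode the original install booted with:

      rem rewrite the MBR boot code, then the partition boot sector
      bootrec /fixmbr
      bootrec /fixboot
      rem confirm the old install is detected, then rebuild the BCD store
      bootrec /scanos
      bootrec /rebuildbcd
      rem if the boot sector still points at the wrong loader
      rem (drive letter may differ inside the recovery environment):
      bootsect /nt60 C: /mbr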

  • pam_ldap.so before pam_unix.so? Is it ever possible?

    - by user1075993
    We have a couple of servers with PAM+LDAP. The configuration is standard (see http://arthurdejong.org/nss-pam-ldapd/setup or http://wiki.debian.org/LDAP/PAM). For example, /etc/pam.d/common-auth contains:
      auth sufficient pam_unix.so nullok_secure
      auth requisite pam_succeed_if.so uid >= 1000 quiet
      auth sufficient pam_ldap.so use_first_pass
      auth required pam_deny.so
    And, of course, it works for both LDAP and local users. But every login goes first to pam_unix.so, fails, and only then tries pam_ldap.so successfully. As a result, we get the well-known failure message for every single LDAP user login:
      pam_unix(<some_service>:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<some_host> user=<some_user>
    I have up to 60000 such log messages per day, and I want to change the configuration so that PAM tries LDAP authentication first, and only if it fails tries pam_unix.so (I think it could improve the I/O performance of the server). But if I change common-auth to the following:
      auth sufficient pam_ldap.so use_first_pass
      auth sufficient pam_unix.so nullok_secure
      auth required pam_deny.so
    then I simply can't log in anymore with a local (non-LDAP) user (e.g., via ssh). Does somebody know the right configuration? Why do Debian and nss-pam-ldapd have pam_unix.so first by default? Is there really no way to change it? Thank you in advance.
    P.S. I don't want to disable logs; I want to put LDAP authentication in first place.
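
    One pattern that can work here, sketched under the assumption that all local accounts have uid < 1000: keep a skip rule in front so system users go straight to pam_unix and everyone else bypasses it entirely, which also eliminates the log spam. The [success=1 default=ignore] action jumps over the next module when the condition holds; test on a non-production box first, since common-auth is included by many services:

      # /etc/pam.d/common-auth -- local accounts (uid < 1000) use pam_unix;
      # everyone else skips it and authenticates against LDAP directly
      auth    [success=1 default=ignore]  pam_succeed_if.so uid >= 1000 quiet
      auth    sufficient                  pam_unix.so nullok_secure
      auth    sufficient                  pam_ldap.so
      auth    required                    pam_deny.so

    Note that pam_ldap loses use_first_pass here, since no earlier module has prompted when it runs first.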

  • Nginx Config - I can't access WordPress admin area

    - by WebDevDude
    I am a complete noob when it comes to Nginx, but I'm trying to make the switch over for my WordPress site. Everything works, even the permalinks, but I can't access my WordPress admin directory (I get a 403 error). I have my WordPress install in a subfolder, so that complicates things a bit for me. Here is my Nginx config file:
      server {
          server_name mydomain.com;
          access_log /srv/www/mydomain.com/logs/access.log;
          error_log /srv/www/mydomain.com/logs/error.log;
          root /srv/www/mydomain.com/public_html;

          location / {
              index index.php;
              # This is cool because no php is touched for static content.
              # include the "?$args" part so non-default permalinks don't break when using query strings
              try_files $uri $uri/ /index.php?$args;
          }

          location /myWordpressDir {
              try_files $uri $uri/ /myWordpressDir/index.php?$args;
          }

          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /srv/www/mydomain.com/public_html$fastcgi_script_name;
              fastcgi_split_path_info ^(/myWordpressDir)(/.*)$;
          }

          location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
              access_log off;
              log_not_found off;
              expires max;
          }
      }
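
    A likely culprit, worth checking first: the index directive lives inside location /, so it does not apply inside location /myWordpressDir. A request for /myWordpressDir/wp-admin/ then matches $uri/ with no index file defined, and with autoindex off nginx answers 403. A sketch of the fix, hoisting index to server scope so every location inherits it:

      server {
          # ... settings as above, unchanged ...
          root  /srv/www/mydomain.com/public_html;
          index index.php;            # now inherited by every location below

          location / {
              try_files $uri $uri/ /index.php?$args;
          }

          location /myWordpressDir {
              try_files $uri $uri/ /myWordpressDir/index.php?$args;
          }
          # ... php and static-file locations unchanged ...
      }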

  • If I define `mydomain`, Postfix does not expand mail aliases

    - by Norky
    I have Postfix v2.6.6 running on CentOS 6.3, hostname priest.ocsl.local (a private, internal domain), with a number of aliases:
      supportpeople: [email protected], [email protected], [email protected]
      requests: "|/opt/rt4/bin/rt-mailgate --queue 'general' --action correspond --url http://localhost/", supportpeople
      help: "|/opt/rt4/bin/rt-mailgate --queue 'help' --action correspond --url http://localhost/", supportpeople
    If I leave Postfix with its default configuration, then the aliases are resolved correctly, as I expect: incoming mail to, say, [email protected] will be piped through the rt-mailgate command and also be delivered (via the mail server for ocsl.co.uk, a publicly resolvable domain) to [email protected], user2, etc.
    The problem comes when I define mydomain = ocsl.co.uk in /etc/postfix/main.cf (with the intention that outgoing mail come from, for example, [email protected]). When I do this, Postfix continues to run the piped command correctly; however, it no longer expands the nested aliases as I expect: instead of trying to deliver to [email protected], user2, etc., it tries to send to [email protected], which does not exist on the upstream mail server and generates NDRs.
    postconf -n for the non-working configuration follows (the working configuration differs only by the "mydomain" line):
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 2
      html_directory = no
      inet_interfaces = all
      inet_protocols = all
      mail_owner = postfix
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      mydestination = $myhostname, localhost.$mydomain, localhost
      mydomain = ocsl.co.uk
      newaliases_path = /usr/bin/newaliases.postfix
      queue_directory = /var/spool/postfix
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      sendmail_path = /usr/sbin/sendmail.postfix
      setgid_group = postdrop
      unknown_local_recipient_reject_code = 550
    We previously had things working as we expected/wanted on an older system running Sendmail.
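
    Hard to be certain with the addresses redacted, but the usual way to get outgoing mail stamped as user@ocsl.co.uk without changing what Postfix considers local is to set myorigin rather than mydomain: myorigin controls how unqualified sender and recipient names (such as alias expansions) are completed, while mydomain feeds into locality decisions. A sketch of that minimal change against the posted postconf -n, to be tested before trusting:

      # /etc/postfix/main.cf -- leave mydomain at its default so alias
      # expansion keeps behaving as before; qualify bare names with the
      # public domain instead
      myorigin = ocsl.co.uk
      mydestination = $myhostname, localhost.$mydomain, localhost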

  • Fixing Windows install by connecting its hard drive via USB to a different laptop

    - by Jason
    I tried to upgrade a laptop to SP3, which broke it. I later found out SP3 doesn't work on that 2002 laptop. I can't uninstall SP3, or fix SP2, because the hard drive is now not detected during setup (I've read that's the problem you get). I put the hard drive in a USB drive case and plugged it into my other laptop, and I can read (& write to) the disk okay. (The hard drive won't fit in my other laptop, so I'm using USB.) I need to get that disk back to SP2, or fix whatever files got screwed up causing the disk to not be recognized. I don't want to do a re-install as there are 80GB of files on it I need, and they won't fit on the HD of my other laptop, and also because I no longer have some of the install CDs for software on it. What do I need to do to fix that drive from my other laptop? (I don't want my working laptop (XP SP3) to get screwed with by putting an SP2 disk in the CD drive, or the non-o/s data on the other hard drive screwed with.)
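
    One repair that can be done entirely from the working laptop, offered as a sketch: the best-known SP3 breakage on older (especially AMD-based) machines is the intelppm.sys driver blue-screening at boot, and it can be disabled by editing the broken install's registry offline. Assuming the USB drive is mounted as E:; reg here touches only the loaded hive, never the host laptop's own registry:

      rem load the broken install's SYSTEM hive under a temporary key
      reg load HKLM\BrokenXP E:\WINDOWS\system32\config\system

      rem Start=4 disables the driver; ControlSet001 is usually the set XP
      rem boots from (check the Select key under the loaded hive if unsure)
      reg add HKLM\BrokenXP\ControlSet001\Services\Intelppm /v Start /t REG_DWORD /d 4 /f

      rem write the changes back and detach the hive
      reg unload HKLM\BrokenXP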
