Search Results

Search found 17955 results on 719 pages for 'sub domain'.


  • Did a change of domain delete data in Team Foundation Server?

    - by glumesc
    Dear All, Maybe my google-fu is failing me, but I cannot seem to find any information on the following: My Windows user account was recently moved, accidentally, to another domain in my company's Active Directory. While in the other domain, I was unable to access my data stored in TFS 2008 (e.g. shelvesets, pending changes, workspaces, etc.). I assume this was because the data was associated with my ORIGINALDOMAIN\userId account, rather than my NEWDOMAIN\userId account. My account has now been moved back to ORIGINALDOMAIN, however I still cannot see any of my data in TFS. In fact, it appears that all of my data (all my shelvesets!) has been deleted. It is almost as if TFS saw that my userId had disappeared from ORIGINALDOMAIN, assumed that I had been "deleted", and thus deleted all my data. Has anybody else encountered this? Is there hope for my data or am I royally stuffed? Thanks in advance, Steve

    Read the article

  • What care should be taken when serving static content (JS, CSS, media) from a different domain?

    - by Aahan Krish
    Let me try to explain by example. Say the website is hosted at example.com (NOT www.example.com). In order to serve static content cookie-free, I've chosen to use a different domain, example-static.com. Now, let's say my static content is currently served like this:

        http://example.com/js/script.js
        http://example.com/css/style.css
        http://example.com/media/image.jpg

    Now I create a CNAME record aliasing example-static.com to my main domain, i.e. example.com, so that the static content is served as:

        http://example-static.com/js/script.js
        http://example-static.com/css/style.css
        http://example-static.com/media/image.jpg

    Is that all I have to do? Will all browsers execute JavaScript files and load web fonts without any security concerns? OR should I be using some .htaccess rules to modify header information and the like? PS: It would be great if you can provide what rules should be added, if need be.
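    A quick way to sanity-check such a setup from the command line (a sketch; example.com and example-static.com are the hypothetical hosts from the question, and the font path is illustrative) is to compare response headers from both domains. The static domain should return no Set-Cookie header, and web fonts served cross-origin need a CORS header in some browsers:

        # Compare headers: the static host should send no Set-Cookie
        curl -I http://example.com/js/script.js
        curl -I http://example-static.com/js/script.js

        # Firefox blocks cross-origin web fonts without CORS, so a font
        # response should carry an Access-Control-Allow-Origin header
        curl -I http://example-static.com/media/font.woff

    Plain scripts, stylesheets and images load fine cross-origin; fonts (and XHR) are the usual cases that need the extra header.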

    Read the article

  • Why should I not develop an open-source runtime UI auto-generation tool for domain objects?

    - by Marco Bettiolo
    For my projects I'm using a rather complete UI auto-generation tool I wrote that builds Windows Forms and ASP.NET interfaces from database entities. Now I've built a working prototype of a UI auto-generation tool that works from domain objects instead. It is in an early stage of development: using reflection, it generates a user interface for creating and updating domain objects. I searched a bit and didn't find other open-source projects with the same goal. Why? Is this type of tool not useful? Is the idea fundamentally flawed? Thanks.

    Read the article

  • In Grails, how can I create a domain model that links two instances of another model?

    - by gerges
    Hey all, I'm currently trying to create a Friendship domain object to link two User objects (with a bit of additional data: createDate, confirmedStatus). My domain model looks as follows:

        class Friendship {
            User userOne
            User userTwo
            Boolean confirmed
            Date createDate
            Date lastModifiedDate

            static belongsTo = [userOne:User, userTwo:User]

            static constraints = {
                userOne()
                userTwo()
                confirmed()
                createDate()
                lastModifiedDate()
            }
        }

    I've also added the following entries to the User class:

        static hasMany = [friendships:Friendship]
        static mappedBy = [friendships:'userOne', friendships:'userTwo']

    When I do this, the result is a new friendship created (and viewable through the controller) with both users listed in their respective places. When I view the details of userOne, I see the friendship listed. When I view the details of userTwo, no friendship is listed. This is not the behavior I expected. What am I doing incorrectly? Why can't I see the friendship listed under both users?

    Read the article

  • Can I access an iframe of the same domain in a separate window?

    - by jozecuervo
    How can I detect the presence of, and then call a function on, a frame that is already loaded in one tab (my iframed Facebook app) from a page being loaded in a new tab (from an ad-served link)? It seems most examples focus on parent/child iframe communication. In this case, a link will be served from Google Ad Manager, which only allows _top or _blank to be targeted. I want to pass an id through the ad click into the new page/tab on my domain, and then make a JS call over to the frame my app is in to switch its state. Both frames are on my domain but not in the same document or window. Is this possible?

    Read the article

  • Application servers (Java): should adding RAM to a server depend on each domain's -Xmx value?

    - by ring bearer
    We have the Glassfish application server running on Linux servers. Each Glassfish installation hosts 3 domains. Each domain has a JVM configuration such as -Xms 1GB and -Xmx 2GB. That means that if all three domains are running at max memory, the server should be able to allocate a total of 6 GB to the JVMs. With that math, each of our servers has 8 GB RAM (a 2 GB buffer). First of all: is this a good approach? I did not think so, because when we analyzed memory utilization on this server over the past few months, it only went up to 1 GB. Now there are requests to add an additional domain to these servers. Does that mean adding another 2 GB of RAM just to be safe, or, based on the trend, continuing with whatever memory the server has?
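    One way to ground that decision in data rather than in -Xmx arithmetic (a sketch; the pid placeholder would come from jps on the actual server) is to watch what each JVM really commits and uses:

        # Overall memory picture on the host
        free -m

        # List the running JVMs to find each domain's process id
        jps -l

        # Sample heap utilization of one JVM every 5 seconds
        # (replace <pid> with a process id reported by jps)
        jstat -gcutil <pid> 5000

    If the domains never approach their -Xmx ceilings, sizing physical RAM closer to observed peak usage plus OS overhead, rather than the sum of all -Xmx values, is a defensible middle ground.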

    Read the article

  • How to make sure no scripts except those under my own domain can include the DB connection file?

    - by Jack
    I would like to ensure that any scripts trying to include my database connection file are located under my own domain. I don't want a hacker to include the database connection file in their malicious script and gain access to my database that way. My connection file's name is pretty easy to guess: it's called "connect.php". So without renaming it and taking the security-through-obscurity route, how can I protect it by making sure all connection requests are made by scripts residing under my own domain name? How can this be checked using PHP?
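    One point worth verifying while thinking about this: PHP's include() pulls files in through the filesystem, so a script on another server cannot include your connect.php source over HTTP at all; an HTTP request to it would only return the script's output (nothing, for a file that merely opens a connection). The residual risks are scripts on the same server and the allow_url_include setting. A quick check of the relevant settings (a sketch; output varies per installation):

        # allow_url_include is Off by default and should stay off;
        # with it off, include('http://...') of remote code is refused
        php -i | grep -E 'allow_url_(fopen|include)'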

    Read the article

  • Why does setting document.domain require me to set it in all popups and iframes too?

    - by Chris
    I'm using a long-polling iframe solution for a chat script. Unfortunately, this requires me to set document.domain='yourdomain.com' in both the iframe and the main document, because the iframe is a subdomain call. The huge problem is: now all my other scripts that use popups and iframes are broken. They now require me to set document.domain in them too. That does fix it, but it is not an ideal solution at all. Is there another way around this problem?

    Read the article

  • Domain entities in the (ASP.NET) session, or better some kind of DTOs?

    - by Robert
    Currently we put domain objects into our ASP.NET sessions. Now we are considering moving from InProc sessions to a state server. This requires that all objects inside the session be serializable. Instead of annotating all objects with the [Serializable] attribute, we thought about creating custom session objects (DTO session objects?), which only contain the information we need:

        CONS: Entities must be reloaded, which requires additional DB round-trips
        PROS: Session state is smaller
              Session information is more specific (could be a CON)
              No unneeded annotation of domain entities

    What do you think? Should we use some kind of DTOs to store inside the session, or should we stick with good old entities?

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday’s blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and which areas of NuoDB are important to learn. As we have already installed NuoDB, we will now quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to a screen with a big Start QuickStart button. Click on the button and the next screen will show very important information about Domain and Database Settings. It is our habit not to read what is written on the screen and to keep clicking Continue without reading, and while we are familiar with most wizards, we can often miss the very important message on the screen. Please note the following Domain Settings and Database Settings before clicking on Create Database:

        Domain Settings
            User: quickstart
            Password: quickstart
        Database Settings
            User: dba
            Password: goalie
            Database: test
            Schema: HOCKEY

    Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data, and on success it will show a confirmation screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show a login screen. Enter “domain” for the username and “bird” for the password; alternatively you can enter “quickstart” twice, for both username and password, and that works too. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:

        Create a view of the entire domain
        Add and remove databases
        Start and stop NuoDB Transaction Engines and Storage Managers
        Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the “1 host” link you will be able to see various processes, CPU usage and other information. In the Processes section you can see that there are two different types of processes: the first (with the floppy drive icon) represents a running Storage Manager process and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today’s tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboard and its various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So: Ubuntu 12.04, Likewise latest from the BeyondTrust website. It joins the domain fine, gets proper information from lw-get-status, can use lw-find-user-by-name to retrieve/locate users, and can use lw-enum-users to get all users. Attempting to log in with an AD user via SSH generates the following errors in the auth.log file:

        Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so
        Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname
        Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth]

    Attempting to log in via LightDM itself generates similar errors in the auth.log file:

        Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so
        Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name"
        Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
        Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so

    Attempting to log in via a console on the system itself generates slightly different errors:

        Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so
        Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
        Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info
        Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info

    I am baffled. The errors are obviously correct: the file /lib/security/pam_winbind.so does not exist. If it's a dependency, surely it should be part of the package? I've installed and reinstalled, I've used the downloaded package from the BeyondTrust website, I've used the repository; nothing seems to work, and every method of installing this application generates the same errors for me.

    UPDATE: Hrmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (Likewise) and generates failures when installing if pbis-open is installed first. I uninstalled winbind and reinstalled pbis-open: same issue as above; the file pam_winbind.so does not exist in that location.

        Setting up pbis-open-legacy (7.0.1.918) ...
        Installing Packages was successful
        This computer is joined to DOMAIN.LOCAL
        New libraries and configurations have been installed for PAM and NSS.

    Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind.
    Does anyone have a working likewise-open installation, and does its /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password files reference pam_winbind.so? I'm unsure if those entries are just legacy or set up by Likewise. UPDATE 2: A complete reinstall of the OS fixed it and it worked seamlessly, like it was meant to, and those 2 PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.
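    For anyone hitting the same errors before resorting to a reinstall, a quick way to answer the question above on any machine (a sketch using the standard paths from the question) is to search the PAM and NSS configuration for stale winbind references and compare them against the modules actually on disk:

        # Find lingering references to pam_winbind in PAM and NSS config
        grep -rn winbind /etc/pam.d/ /etc/nsswitch.conf

        # List the PAM modules actually installed
        ls /lib/security/ | grep -E 'winbind|lsass'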

    Read the article

  • Cannot cd to parent directory with cd dirname

    - by Sharjeel Sayed
    I have made a bash command which generates a one-liner for restarting all Weblogic (8, 9, 10) instances on a server:

        /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort | sed 's/^/cd /' | sed 's/$/ ; cd .. ; \/recycle_script_directory_path\/recycle/ ;' | tr '\n' ' '

    To restart a Weblogic instance, the recycle script (/recycle_script_directory_path/recycle) needs to be initiated from within the domain directory, as it pulls some application information from .ini files in the domain directory. The following part of the script generates a line to cd to the parent directory of the app, i.e. the domain directory:

        sed 's/$/ ; cd .. ; \/recycle_script_directory\/recycle/ ;' | tr '\n' ' '

    I am sure there is a better way to cd to the parent directory, like cd dirname, but every time I run the following cd command, it throws a "Variable syntax" error:

        cd $(dirname '/domain_directory_path/app_name')

    How do I incorporate the cd to the directory name in a better way? Also, are there any enhancements for my bash command? Some info on my script:

    1) The following part lists out the running Weblogic instances along with their full paths:

        /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort

    2) The grep domain part is required since all domain names have domain as the suffix.
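    One detail that stands out (an inference from the error text, not something the question confirms): "Variable syntax" is the message csh/tcsh prints when it encounters bash's $(...) substitution, so the cd command may be running under a C shell rather than bash. A sketch of the portable alternatives:

        # bash/sh: $(...) works; quote it so paths with spaces survive
        cd "$(dirname /domain_directory_path/app_name)"

        # Backtick substitution works in bash and csh/tcsh alike
        cd `dirname /domain_directory_path/app_name`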

    Read the article

  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local? Environment: existing org with an AD forest; a new single Server 2012 machine running all Remote Desktop Services roles for the session host. Used the new 2012 wizard to set up a "QuickSessionCollection" with these roles:

        RD Session Host
        RD Connection Broker
        RD Gateway
        RD Web Access
        RD Licensing

    Everything works with a self-signed cert, but we want to avoid self-signed certs. The users are potentially on non-domain machines, so sticking a private root cert on their machines isn't an option; every part of the solution needs to use the public cert. Added the public remote.domain.com cert to all roles using the Server Manager GUI:

        RD Connection Broker - Enable Single Sign On
        RD Connection Broker - Publishing
        RD Web Access
        RD Gateway

    So now everything works beautifully except the last step: the user logs into https://remote.domain.com and clicks an app icon, which in the background downloads a .rdp file signed by remote.domain.com. The .rdp file is set to use the RD Gateway, which is remote.domain.com, but it says the app is hosted on the internal host.internaldomain.local, which doesn't match the RDP-tcp TLS cert of remote.domain.com, and pops a warning. It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or .config to tell RDWeb/RemoteApp to use remote.domain.com for all published apps, so the TLS cert for RDP matches what the session host is using? NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it. NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change; it tells RemoteApp what to use for the session host server name. How can I set that in 2012?

    Read the article

  • Apache to read from /home/user/public_html on CentOS 5.7

    - by C.S.Putra
    This is my first experience using CentOS 5.7 / Linux as my web server OS, and I have just finished installing Apache. I then created a new account using WHM. The account is now created and the domain name can be accessed. I have put the web files under /home/user/public_html/, but when I access the domain assigned to that user (which I set when creating the new account in WHM), it doesn't read those files. In /usr/local/apache/conf/httpd.conf:

        <VirtualHost 175.103.48.66:80>
            ServerName domain.com
            ServerAlias www.domain.com
            DocumentRoot /home/user/public_html
            ServerAdmin [email protected]
            User veevou # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup group1 group1
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup group1 group1
            </IfModule>
            CustomLog /usr/local/apache/domlogs/domain.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/domain.com combined
            ScriptAlias /cgi-bin/ /home/user/public_html/cgi-bin/
        </VirtualHost>

    Instead of reading from /home/user/public_html/, Apache reads the /var/www/html/ folder. How do I set up Apache so that when users access www.domain.com, they are served the files under /home/user/public_html/? Please advise. Thanks
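    A useful first check here (a sketch; the binary path follows the cPanel layout referenced in the question) is to dump Apache's parsed virtual-host table. If requests for the domain fall through to the default vhost, the /var/www/html DocumentRoot would explain exactly this behavior:

        # Show which ServerName/ServerAlias map to which vhost, and which
        # vhost is the default for 175.103.48.66:80
        /usr/local/apache/bin/httpd -S

        # Confirm which config file this httpd binary actually reads
        /usr/local/apache/bin/httpd -V | grep SERVER_CONFIG_FILE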

    Read the article

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/ When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000
        auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass
        auth [success=1 default=ignore] pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
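    One thing the log hints at: with use_fully_qualified_names = True, sssd only resolves accounts in the [email protected] form, and "Invalid user nwalke" says sshd never resolved the bare name at all. A quick test from a shell on the box (a sketch; the names are taken from the question):

        # Expected to fail while use_fully_qualified_names = True
        id nwalke

        # Should resolve via sssd; try logging in with this exact form
        id [email protected]
        getent passwd [email protected]

        # If resolution is stale, flush sssd's cache and retry
        sudo sss_cache -E && sudo service sssd restart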

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines? I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of “sql.domain.com” so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one match it. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert from it, and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the SQL machine! One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant SSH job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with Vagrant and Puppet SSL certificates for development or testing clones of production machines?
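    One pattern that avoids both autosigning and shipping production certs (a sketch; it assumes a Puppet 3.x-era master where the puppet cert subcommand exists, and the ssldir paths below vary by distribution and version) is to pre-generate certificates on the master only for the vagrant names, so no host can acquire an identity just by asking:

        # On the puppet master: pre-create a cert for the vagrant clone only
        puppet cert generate sql.vagrant.domain.com

        # Ship only that clone's cert and key to the Vagrant project
        cp /var/lib/puppet/ssl/certs/sql.vagrant.domain.com.pem vagrant-ssl/
        cp /var/lib/puppet/ssl/private_keys/sql.vagrant.domain.com.pem vagrant-ssl/

    With autosign.conf left empty, a rooted unimportant.domain.com can rename itself all it likes; the master will simply refuse to sign its new CSR.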

    Read the article

  • Only one domain not resolving via Windows DNS server at multiple locations, but resolving at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery issues to a specific domain. After looking closer, I realized that DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values replacing the actual ones.) Even if I run this nslookup from the DC.DOMAIN.com server itself, I get the same result. However, all other requests are working as they should. I tried it on servers at completely separate organizations on different networks (Windows 2003 AD servers). The weird thing is that some of these were having the same exact issue, yet using public DNS servers works. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked. One weird event I noticed in the DNS Server event logs that might be related is an event ID 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section of the event, I can see ns2.webhostingstar.com mentioned, which happens to be the nameserver for the domain in question. Several discussion threads and a Microsoft KB article have pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
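    A useful isolation step (a sketch; foodmix.net and its nameserver come from the question) is to query a public resolver and the domain's authoritative server directly from the affected machine, separating "the zone's servers misbehave" from "our server's recursion is broken":

        # Ask a public resolver
        nslookup foodmix.net 8.8.8.8

        # Ask the domain's authoritative nameserver directly
        nslookup foodmix.net ns2.webhostingstar.com

    For what it's worth, 192.33.4.12 in the 5504 event is c.root-servers.net, which points at the failure occurring while the server recurses down from the roots.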

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 x64 Enterprise. SQL: SQL Server 2008 R2 Enterprise x64. I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the .mdf files are stored. The T-SQL syntax is correct and the file system is NTFS. The error message I get is:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 5119, Level 16, State 1, Line 1
        Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files.

    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable:

        1. Add the SQL Server service (domain) account to the local Administrators group and restart the SQL service.
        2. Grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\.

    I have tried to change the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER, to no avail. I have no issue creating a new database. Why can I not create a snapshot by granting permission only on the DATA folder? Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2 but cannot find anything that would suggest it shouldn't work in a 2000 domain. I removed the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to reinstall the server without joining the domain and without the hotfix, and see if the issue persists.

    Read the article

  • Running WordPress and Ghost on Apache with mod_proxy

    - by Jack Perry
    I currently have three WordPress sites hosted on Apache with virtual host files to direct the right domain to the right DocumentRoot. Ghost (node.js) just came out and I've wanted to tinker with it and just play around on one of my spare domains. I'm not really interested in moving over to nginx, so I'm trying to get Ghost working on Apache via mod_proxy. I've managed to get Ghost working on my spare domain, but I think there's a problem with my virtual host files, as all of my other domains start pointing to Ghost as well. Here are two virtual host files, one for my main WordPress site that works fine, and the second for Ghost. Domains removed and replaced with DOMAIN and DOMAIN2.

    DOMAIN:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName DOMAIN.com
            ServerAlias www.DOMAIN.com
            DocumentRoot /var/www/DOMAIN.com/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/DOMAIN.com/public_html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    DOMAIN2:

        <VirtualHost IP:80>
            ServerAdmin EMAIL
            ServerName DOMAIN2.com
            ServerAlias www.DOMAIN2.com
            ProxyPreserveHost on
            ProxyPass / http://IP:2368/
        </VirtualHost>

    I get the feeling I'm not working with virtual hosts or mod_proxy right, and Google-fu has let me down after many suggested attempts. Any ideas? Thanks!
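    A plausible culprit (an assumption from reading the two files, not a confirmed diagnosis): mixing <VirtualHost *:80> with <VirtualHost IP:80> puts the Ghost vhost in its own address-based group, which Apache matches ahead of the name-based *:80 group, so every request arriving on that IP can land on Ghost regardless of hostname. The parsed vhost table makes this visible:

        # Show how Apache grouped the vhosts and which one is the default
        apache2ctl -S

        # mod_proxy and mod_proxy_http must both be enabled for ProxyPass
        sudo a2enmod proxy proxy_http && sudo service apache2 reload

    If apache2ctl -S shows the IP:80 entry in a separate group, changing it to <VirtualHost *:80> like the others usually restores name-based matching for all the domains.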

    Read the article

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think; I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. I type in my domain to join it, but I get the following error:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "ad.win.chigs.me":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me
        The following domain controllers were identified by the query:
        ip-0af92ac4.ad.win.chigs.me
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    It seems that I can talk to Active Directory, but when I'm trying to contact the domain controller, it hands out a private IP to connect to, at least as far as I can make out. Here are some nslookup results:

        > win.chigs.me
        Server:  ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Non-authoritative answer:
        Name:    ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:  ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Name:    ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!
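    A check worth running from the Windows 7 VM (a sketch; the names and addresses come from the question's output) is to confirm what the join process sees and whether the advertised DC address is reachable at all. An EC2 instance registers its internal 10.x address in DNS, and that address is only reachable from outside EC2 over a VPN or tunnel:

        # The SRV record the domain-join process queries
        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.win.chigs.me

        # The DC host the SRV record points at
        nslookup ip-0af92ac4.ad.win.chigs.me

        # Is the LDAP port reachable? (10.249.42.196 per the question)
        telnet 10.249.42.196 389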

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine 2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:

        <?php
        $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']),'https') === FALSE ? 'http' : 'https';
        $host = $_SERVER['HTTP_HOST'];
        include($protocol . '://' . $host . '/header.html');
        ?>
        <p> Main text...</p>
        <?php include($protocol . '://' . $host . '/footer.html'); ?>

    Where header.html looks like:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>

    and footer.html looks like:

        </body>
        </html>

    This makes Apache time out:

        Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
        Main text...
        Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12

    Any clue what can be wrong with the Apache or PHP configuration? Thanks
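    The timeouts suggest the new server cannot open an HTTP connection back to its own public hostname (often a firewall or hairpin-NAT quirk after a migration) rather than anything PHP-specific. Two tests from a shell on the server make that visible (a sketch; www.domain.com stands in for the real domain):

        # If this hangs too, the problem is network/firewall, not PHP
        curl -v http://www.domain.com/header.html

        # Compare against loopback with the same Host header
        curl -v -H 'Host: www.domain.com' http://127.0.0.1/header.html

    If the loopback request succeeds, switching the includes to filesystem paths (e.g. $_SERVER['DOCUMENT_ROOT'] . '/header.html') sidesteps the HTTP round-trip entirely and is faster anyway.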

    Read the article

  • Courier IMAP always disconnects since update

    - by Raffael Luthiger
    Since one of our customers updated their server, Courier does not handle IMAP connections properly any more. POP3 works without any problems. When I test IMAP with telnet, it is always like this:

        $ telnet domain.com 143
        Trying 188.40.46.214...
        Connected to domain.com.
        Escape character is '^]'.
        * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
        01 LOGIN [email protected] test
        Connection closed by foreign host.

    I enabled debugging in authdaemond, but the output does not really help much:

        Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login
        Apr 12 23:10:04 servername authdaemond: authmysql: trying this module
        Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]'
        Apr 12 23:10:04 servername authdaemond: password matches successfully
        Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n
        Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n

    Right after the "Authenticated" line the output stops. There is no other message, and I could not find any related message in any other log file I checked. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here?
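    Since authentication succeeds and the connection dies immediately afterwards, the next thing Courier does is open the user's maildir, which makes the maildir itself worth checking by hand (a sketch; the path and the 5000/5000 uid/gid come from the authdaemond output above):

        # Maildir must exist, be owned by the virtual mail user (5000:5000),
        # and contain the cur/new/tmp subdirectories
        ls -ld /var/vmail/domain.com/test
        ls -ld /var/vmail/domain.com/test/cur /var/vmail/domain.com/test/new /var/vmail/domain.com/test/tmp

        # Watch the mail log while repeating the telnet LOGIN
        tail -f /var/log/mail.log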

    Read the article

  • How can I make rewrite rules relative to the .htaccess file?

    - by Kendall Hopkins
    Currently I have an .htaccess file like this:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        RewriteCond %{REQUEST_URI} ^/(always|rewrite|these|dirs)/ [NC]
        RewriteRule ^(.*)$ router.php [L,QSA]

    It works great when the site files are in the document root of the webserver (i.e. domain.com/abc.php -> /abc.php). But in our current setup (which isn't changeable), this isn't ensured. We can sometimes have an arbitrary folder in between the document root and the folder of the .htaccess file (i.e. domain.com/something/abc.php -> /something/abc.php). The only problem with that is that the second RewriteCond no longer works. Is there any way to dynamically match the accessed path against a path relative to the .htaccess file? For example:

        If I have a site where domain.com/rewrite/ is the directory of the .htaccess file:
            NOT FORCED TO REWRITE -> domain.com/rewrite/index.php
            FORCED TO REWRITE     -> domain.com/rewrite/rewrite/index.php

        If I have a site where domain.com/ is the directory of the .htaccess file:
            NOT FORCED TO REWRITE -> domain.com/index.php
            FORCED TO REWRITE     -> domain.com/rewrite/index.php

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which allows you to bind STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem now is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
            Protocol 2
            ControlMaster auto
            ControlPath ~/.ssh/cm_socket/%r@%h:%p
            ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
            HostName proxy-host.my-domain.tld
            ForwardAgent yes

        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            ProxyCommand ssh -W %h:%p proxy-host
            ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) machine, and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
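    One variable worth eliminating (an assumption from reading the config, not a confirmed fix): the Host * stanza applies ControlMaster auto to the inner proxy ssh as well, and multiplexing a connection whose stdio is serving as the tunnel can corrupt the byte stream. Notably, the reported length 1397966893 is 0x5353482D, the ASCII bytes "SSH-", i.e. a stray version banner being misread as a packet length. Multiplexing can be disabled for the proxy hop only:

        # One-off test from the command line, bypassing the config's mux
        ssh -o ProxyCommand='ssh -S none -W %h:%p proxy-host' target-server.my-domain.tld

        # Or permanently, in ~/.ssh/config for the target host:
        #   ProxyCommand ssh -S none -W %h:%p proxy-host

    -S none tells the inner ssh to use no control socket at all, so its stdout carries nothing but the forwarded TCP stream.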

    Read the article
