Search Results

Search found 15712 results on 629 pages for 'location href'.


  • Problem adding second domain controller to SBS 2008

    - by Quango
    I have an SBS 2008 server in one location and want to add a backup domain controller at a different site. The two sites are linked by a VPN. The new server is running Server 2008 R2, fully patched. At present it is a member server and its DNS points at the SBS DNS. When I run DCPROMO to promote the server, the wizard runs fine up to the point where it is 'configuring Active Directory Domain Services' and 'examining forest', then fails with:

        "The operation failed because: The wizard could not read operational attributes from the remote Active Directory Domain Controller SERVER.DOMAIN.LOCAL using LDAP. 'The specified server cannot perform the requested operation.' This error can occur if you have not been granted necessary permissions to read data in the directory. For more information, please see article 936241 in the Microsoft Knowledge Base (http://go.microsoft.com/fwlink/?LinkId=88420)."

    I was logged on as domain administrator. Interestingly, the link is invalid and the KB article does not exist!

    Settings:

        Configure this server as an additional Active Directory domain controller for the domain "[domain]".
        Site: [site]
        Additional Options:
            Read-only domain controller: No
            Global catalog: Yes
            DNS Server: Yes
        Update DNS Delegation: No
        Source domain controller: any writable domain controller
        Database folder: C:\Windows\NTDS
        Log file folder: C:\Windows\NTDS
        SYSVOL folder: C:\Windows\SYSVOL
        The DNS Server service will be configured on this computer. This computer will be configured to use this DNS server as its preferred DNS server.
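    A few read-only checks from the new server can narrow down whether DC location and LDAP are healthy before re-running DCPROMO - a diagnostic sketch only, not a confirmed fix (all standard Windows tools; substitute the real domain name, and note the Telnet client is an optional feature on 2008 R2):

        rem Can the new server locate the SBS domain controller at all?
        nltest /dsgetdc:DOMAIN.LOCAL

        rem Are the AD SRV records resolvable through the DNS server the NIC points at?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMAIN.LOCAL

        rem Does plain LDAP (port 389) reach the SBS box across the VPN?
        telnet SERVER.DOMAIN.LOCAL 389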


  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

        1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of using the embedded database)
        1 x dedicated SQL 2008 R2 server for Symantec Desktop Recovery 2011 (instead of using the embedded database)
        1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app)
        1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we already have the Windows firewall disabled).

    That is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

        a) If we had these users distributed between two sites, with an AD DC/GC in each site, how would I restrict SEPM and the desktop management solution to only check for users in their respective site?
        b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
        c) What hardware would you recommend as a server spec for the SQL servers - 16GB RAM, dual Xeon?
        d) What hardware would you recommend as a server spec for the management servers - 16GB RAM each, with dual Xeon and SAS disks?
        e) Also, how do you, or would you, recommend protecting these four servers (2 x SQL and 2 x management servers)?
        f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections - many thanks!

    Rihatum


  • Cannot connect to SQL Server Express from SQL Server Standard

    - by Jackson Sunuwar
    ...like my title says, I cannot connect to my SQL Server Express instance from SQL Server Standard. I have tried disabling the firewall and checked that SQL Browser is started, but for some reason I cannot connect to my database, called server_name\SQLEXPRESS. I have a virtual machine with a full MS SQL Server 2008 R2 running on it, and several other VMs running SQL Express. They run fine and I can connect to them using SQL Express, but when I try to access them from SQL Server I get this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    Digging deeper into the error, I found this:

        Error Number: -1
        Severity: 20
        State: 0

    and finally this:

        Program Location:
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
        at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
        at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

    The firewall is turned off on the VM running MS SQL Server. I also turned off the firewall on one of the VMs running SQL Express, but I still get the error. Can someone please help? Thank you.
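    Error 26 specifically means the named instance could not be resolved, which the SQL Browser service answers for over UDP 1434, and which also requires the instance to have TCP/IP enabled in SQL Server Configuration Manager. A hedged diagnostic sketch, run from the SQL Server Standard VM (all real Windows/SQL tools; server_name is the placeholder from the question):

        rem Is the Browser service actually running on the Express VM?
        sc \\server_name query SQLBrowser

        rem Can the instance be reached by name with Windows authentication?
        sqlcmd -S server_name\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"

        rem If UDP 1434 is filtered somewhere on the path, allow it explicitly on the Express VM:
        netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434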


  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access. I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior:

        http://mysite.com/robots.txt
        Both old and new servers resolve this to the robots file.

        http://mysite.com/robots.txt/
        The old Bluehost setup resolves this to my CodeIgniter 404 error page. The Rackspace config resolves to:

            Not Found
            The requested URL /robots.txt/ was not found on this server.

    This instance leads me to believe that there could be a problem with my mod_rewrite rules, or lack thereof. The first setup produces the error correctly through PHP, while in the second the server seems to handle the error itself.

    The next instance of this problem is even more troubling:

        http://mysite.com/search/term/9 x 1-1%2F2 white/

    The new site results in:

        Bad Request
        Your browser sent a request that this server could not understand.

    The old site results in the actual page being loaded and the search term being unencoded.

    I have to assume this has something to do with the fact that when I moved to the new server, I went from a root-level .htaccess file to httpd.conf plus the virtual host files default and default-ssl. Here they are.

    Default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Default-ssl file:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            SSLEngine on

            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

            # (stock Debian comment blocks and commented-out examples follow here for the
            # certificate chain, CA paths, CRLs, client authentication, SSLRequire access
            # control and SSLOptions; none of those directives are active)

            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # (stock Debian comments on the ssl-unclean-shutdown and
            # ssl-accurate-shutdown approaches precede these directives)
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    httpd.conf file: just a lot of stuff from the HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

        <IfModule mod_rewrite.c>
            # index.php remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!
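    Comparing the two configs side by side suggests a hedged sketch, not a confirmed fix: the old .htaccess stripped trailing slashes and handed the request path to CodeIgniter as path info (/index.php/$1), while the new vhost rewrites everything to a bare index.php and never strips the trailing slash; separately, Apache rejects encoded slashes such as %2F in request paths unless told otherwise. Something closer to the old behavior might look like this (AllowEncodedSlashes is a real Apache directive, but it belongs at server or vhost level, not inside <Directory>):

        # vhost level: accept %2F in request paths instead of refusing them
        AllowEncodedSlashes On

        # inside <Directory /var/www>: strip trailing slashes, then pass path info on
        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)/$ /$1 [r=301,L]

        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)$ /index.php/$1 [L]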


  • Weird connectivity issue with USB Wi-Fi stick

    - by Carlos Nunez
    Hi, all! I'm not sure if this is the appropriate place to throw this question out, but I'll give it a shot.

    I'm setting up two PCs, and I've been having massive trouble getting a USB wireless dongle working. I have two Sony VAIOs (Windows XP SP2) that I found second-hand, and since they will be in a location too far away to connect by Ethernet (no, can't do patch panels here :p), I need to connect them wirelessly. The easiest and cheapest way to do that at the moment is to use two USB wireless sticks that I've had for a while but never used. One of the computers is using an SMC-manufactured card, whereas the other is using a Belkin F5D7050.

    The box with the SMC card can see and authenticate with my router just fine, and has no problem obtaining a DHCP lease. The box with the Belkin, on the other hand, isn't so lucky. While it can see my router and associate with it, it will not obtain a DHCP-issued address. Worse, when I assign a static IP address to the NIC, it can ping the entire network and access the internet (meaning it can authenticate with the router), but no computer can ping it unless the Belkin box has pinged that computer first. Confused? Well, so am I.

    Has anyone had this issue before? Is this just a sign of a bad card? (For the moment, I have it connected by Ethernet, as I haven't moved it yet. However, this will be a problem when I set it up in its new home later.)

    Thanks!
    -Carlos Nunez


  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all three of my browsers (FF/Chrome/IE; OS = Win7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC.

    The browser won't load the main image on the page correctly (even though the source code looks good), but if I point the browser at the exact location of that image, it displays fine. So I can get the HTML index (which locates the resource), and I can get the resource itself. Why isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad in all three browsers at once. I've browsed to a bunch of other sites (including sites very heavy on JS, with HTML much more complex than the one in question here) and am seeing nothing funny.

    The only thing wonky I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't see how Magnifier would be related to this even if I hadn't restored it.

    Any ideas? I'm stumped!

    EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.


  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this Server Fault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus was that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes?

    We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008).

    The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like how the SAN can perform asynchronous replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture.

    I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant - if a node goes down, you have lost the only copy of the data sitting on that hardware. (One thing is certain: all hardware fails. When? is the only question.) I have used a XioTech SAN as well - well worth the money, by the way - but I think it is overkill for the size of office I am targeting, and the cost of getting hardware redundancy in the XioTech puts it a little out of reach of the budget I am working with.

    Thank you,
    Keith


  • Static file serving only works if root is a subfolder under public

    - by lulalala
    I am trying to serve static cache files using nginx. There are index.html files under the rails_root/public/cache directory. I tried the following configuration first, which doesn't work:

        root <%= current_path %>/public;
        # $uri always contains one slash (the first slash but not the last)
        try_files /cache$uri/index.html /cache$uri.html @rails;

    This gives the error:

        [error] 4056#0: *13503414 directory index of "(...)current/public/" is forbidden, request: "GET / HTTP/1.1"

    I then tried:

        root <%= current_path %>/public/cache;
        # $uri always contains one slash (the first slash but not the last)
        try_files $uri/index.html $uri.html @rails;

    And to my surprise this works. Why is it that I can do the latter but not the former, since they point to the same location?

    The permissions of the folders are:

        775 public
        755 cache
        644 index.html

    The thing is that my favicon, sitting under public/, is served correctly:

        # asset server
        server {
            listen 80;
            server_name assets.<%= server_name %>;
            expires max;
            add_header Cache-Control public;
            charset utf-8;
            root <%= current_path %>/public;
        }
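    One hedged observation, untested against this exact setup: with root at public/, a request for / makes the first try_files argument expand to /cache//index.html, with a double slash, whereas with root at public/cache the same request expands to the clean //index.html relative to cache/. A sketch that keeps root at public/ but handles the bare root request explicitly, avoiding the double slash:

        root <%= current_path %>/public;

        location = / {
            try_files /cache/index.html @rails;
        }

        location / {
            try_files /cache$uri/index.html /cache$uri.html $uri @rails;
        }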


  • ServerName not working in Apache2 and Ubuntu

    - by CreativeNotice
    Setting up a dev LAMP server, and I wish to allow dynamic subdomains, e.g. ted.servername.com, bob.servername.com. Here's my sites-available file:

        <VirtualHost *:80>
            # Admin Email, Server Name, Aliases
            ServerAdmin [email protected]
            ServerName happyslice.net
            ServerAlias *.happyslice.net

            # Index file and Document Root
            DirectoryIndex index.html
            DocumentRoot /home/sysadmin/public_html/happyslice.net/public

            # Custom Log file locations
            LogLevel warn
            ErrorLog /home/sysadmin/public_html/happyslice.net/log/error.log
            CustomLog /home/sysadmin/public_html/happyslice.net/log/access.log combined
        </VirtualHost>

    And here's the output from sudo apache2ctl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
          default server happyslice.net (/etc/apache2/sites-enabled/000-default:1)
          port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/000-default:1)
          port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/happyslice.net:5)
        Syntax OK

    The server hostname is srv.happyslice.net. As you can see from apache2ctl, when I use happyslice.net I get the default virtual host, and when I use a subdomain I get the happyslice.net host. So the latter is working how I want, but the main URL is not.

    I've tried all kinds of variations here, but it appears that ServerName just isn't being tied to the correct location. Thoughts? I'm stumped. FYI, I'm running Apache 2.1 and Ubuntu 10.04 LTS.
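    A hedged reading of that apache2ctl -S output, worth ruling out first: both 000-default and the happyslice.net vhost answer to the name happyslice.net, and Apache gives the request to the first matching vhost - 000-default, because sites-enabled is read in lexical order. On Ubuntu the stock site can simply be switched off:

        sudo a2dissite 000-default
        sudo /etc/init.d/apache2 reload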


  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out, er, I mean streamline the decision-making process. I'm currently using Google Maps's "My Maps" feature to store pins for properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being, I can look at the map and quickly choose which property to view next.

    However, the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info; it's a hassle to get all maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want.

    I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default. You can see an individual house's location, but not all the results of a search.

    So is there a web site or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.


  • Problems with image/file upload in MediaWiki on Windows 2008 Server R2, using wrong temp directory

    - by Lasse V. Karlsen
    I have installed MediaWiki 1.15.2 under IIS as per the MediaWiki installation instructions for Windows 2008 Server. I have configured PHP to use a specific temp directory:

        upload_tmp_dir="C:\php\uploadtemp"

    I have specified that MediaWiki is allowed to upload:

        $wgEnableUploads = true;

    But when I try to upload an image, I get this error message in my browser:

        Internal error
        Could not find file "C:\Windows\Temp\php1AEA.tmp".

    Retrying simply gives me a new filename, in the same location. The directory does not have any php* files in it, but since they're "temporary", they might be gone in a flash before Windows Explorer is able to show them, so that might be a red herring.

    I've googled for this, and the most promising lead I found was on this page: Image upload problem - Is this bug fixed? But since the text says "a bugfix was posted on the bug-report page" while providing no link to the bug page this relates to (PHP or MediaWiki), nor to the actual bug report, I haven't conclusively found the bug report in question, so that didn't help me much.

    Lots of pages indicate that this is a permission issue, so I tried setting permissions on C:\Windows\Temp to Modify for Everyone - still no dice. I tried changing the two system environment variables TEMP and TMP to point to C:\Temp instead, but MediaWiki still complains about not finding the file in C:\Windows\Temp.

    Note that I don't care much where the files are stored temporarily, so C:\Windows\Temp is fine by me; I do, however, care about them actually being uploaded correctly. Does anyone know of a fix, or have any leads I can follow?

    The server is running Windows 2008 Server R2 with all patches installed, and PHP 5.3.2 using IIS FastCGI.
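    One hedged thing to try - an assumption, not a verified fix for this exact bug: MediaWiki derives some of its temp paths from its own $wgTmpDirectory setting rather than from PHP's upload_tmp_dir, so pointing both at the same folder in LocalSettings.php rules out a mismatch between the two:

        // LocalSettings.php - $wgTmpDirectory is a real MediaWiki setting;
        // the path is simply the one already configured for PHP uploads above
        $wgTmpDirectory = "C:\\php\\uploadtemp";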


  • IIS 7 - 403 Access Denied error on wwwroot trying to redirect to /owa

    - by cparker4486
    I'm trying to set up a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I was unsuccessful doing this with IIS's HTTP Redirect, so I looked at other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it:

        <script runat="server">
        private void Page_Load(object sender, System.EventArgs e)
        {
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location", "https://mail.mydomain.com/owa");
        }
        </script>

    Instead of a redirect I get:

        403 - Forbidden: Access is denied.
        You do not have permission to view this directory or page using the credentials that you supplied.

    I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access; no change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are, but maybe knowing this will help you.) Thanks!
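    A couple of hedged observations rather than a definitive answer: a 403 on the plain-HTTP request can be IIS itself refusing the connection before index.aspx ever runs - on Exchange servers the Default Web Site often has "Require SSL" enabled, which produces exactly this error (substatus 403.4) for http:// requests, so that SSL setting is worth checking first. If SSL settings check out, IIS 7 can also do the redirect declaratively via its HTTP Redirection module; a sketch of a web.config for the site root (httpRedirect is the module's real configuration section, but the HTTP Redirection role service must be installed):

        <configuration>
          <system.webServer>
            <httpRedirect enabled="true"
                          destination="https://mail.mydomain.com/owa"
                          httpResponseStatus="Permanent"
                          exactDestination="true" />
          </system.webServer>
        </configuration>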


  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or using epoxy. Does anyone have experience with both methods who can offer some advice? Copper is easy, but fiber seems to be a whole different animal.

    EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. It's my understanding that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer, but it doesn't address the question directly.

    Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work because of our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors and we only use LC. So, once again: does anyone have experience with terminating using both epoxy-type connectors and the other type (similar to the Corning Unicam)?


  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, and therefore I come here for help.

    I'm running two virtual servers on the same machine, on a local connection between the two. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. They are able to ping each other: both "ping TSDATA1" and "ping 10.0.0.1" work, 10.0.0.1 being the IP address of TSDATA1. The IP address of TSDATA2 is 10.0.0.2.

    I'm trying to join the domain with TSDATA2, but I get this error when trying:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1

        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)

        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
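    A hedged checking order, using only standard Windows tools: the error means TSDATA2 asked 10.0.0.1 for the domain's SRV records and was told the name does not exist, so the usual suspects are TSDATA2 pointing at the wrong DNS server, or the DC never having registered its records in its own DNS zone:

        rem on TSDATA2 - is 10.0.0.1 really the only DNS server on the NIC?
        ipconfig /all
        nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1

        rem on TSDATA1 - re-register the DC's DNS records and verify DNS health
        ipconfig /registerdns
        net stop netlogon && net start netlogon
        dcdiag /test:dns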


  • nginx not serving admin static files?

    - by toto_tico
    First, I want to clarify that this error is only for the admin static files - my problem is specific to the static files that correspond to the Django admin. The rest of the static files work perfectly.

    Basically, my problem is that for some reason I cannot access those admin static files through the nginx server. Everything works perfectly with Django's micro server, and collectstatic is doing its job: it puts the files in the expected place in the static folder. The URLs are correct, but I cannot access the admin static files even directly, while the others I can. So, for example, I am able to access this URL (copying it into the browser):

        myserver.com:8080/static/css/base/base.css

    but I am not able to access this other URL (copying it into the browser):

        myserver.com:8080/static/admin/css/admin.css

    I also tried copying the admin/ directory structure into other_admin_directory_name/. Then I can access:

        myserver.com:8080/static/other_admin_directory_name/css/admin.css

    So that works. I checked permissions, and everything is fine. I tried changing ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work. This is a mystery in itself that I am still exploring, with no luck so far.

    Finally, and this seems to be an important clue: I tried copying the admin/ directory structure into admin_and_then_any_suffix/. Then I cannot access:

        myserver.com:8080/static/admin_and_then_any_suffix/css/admin.css

    So if the name of the directory starts with "admin" (for example "administration" or "admin2"), it doesn't work.

    * added thanks to sarnold's observation
    ** the problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

        location /static/admin {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }
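    That location block also explains the prefix behavior - a hedged reading, since only this one snippet of the config is shown: nginx matches prefix locations by string prefix, so anything beginning with /static/admin, including /static/administration and /static/admin2, is aliased into Django's contrib/admin/media directory instead of the collected static tree. Since collectstatic already copies the admin files into the static root, a sketch that drops the special case entirely (the alias path is an example for this layout):

        location /static/ {
            # STATIC_ROOT, where collectstatic put everything, admin files included
            alias /home/vl3/mysite/static/;
        }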


  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with TechTool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle.

    Whenever I try to change the Network Port Configurations by creating a new one or renaming an existing one, I get this message in the console, in sets of two as shown:

        Error - PortScanner - setDevice, device == nil!
        Error - PortScanner - setDevice, device == nil!

    When I try to invoke the Network Diagnostics app, it immediately crashes.

    My first thought was to reinstall Tiger with the Archive and Install method so I wouldn't have to reinstall all my applications, but I had lost my Tiger installer disk. My next thought was to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install, I would be happy to save that money; this is not my main machine and I am loath to put more money into it. How can I recover my network functionality?

    UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again and got the same error. My guess is that some corrupt or missing file in my User folder is preventing migration.

    I have a backup created with SuperDuper that is a bit out of date but will start up the machine (with functional networking). I would love to just copy over the file(s) that got messed up, but I don't even know where to look. What is the likely location of the system files that would cause the aforementioned symptoms?
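    One hedged place to start - an educated guess, not a known fix for this exact failure: OS X keeps its network interface and port configuration state under /Library/Preferences/SystemConfiguration, and moving that folder aside forces the system to rebuild it at boot. From the working emergency partition, with the damaged volume mounted (the volume name is a placeholder):

        # back up, then move the network configuration store aside so it is rebuilt
        sudo mv "/Volumes/Macintosh HD/Library/Preferences/SystemConfiguration" \
                "/Volumes/Macintosh HD/Library/Preferences/SystemConfiguration.bak"

    then boot from the repaired volume and recreate the network locations in System Preferences.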


  • Setting up 802.1X wireless connection on OSX

    - by hizki
    I am an OSX user; I have Snow Leopard 10.6.5 and an updated AirPort (version 5.5.2). I am trying to connect to my university's wireless network, which uses 802.1X security that I am having trouble configuring. Here there are instructions for connecting with Windows XP, Windows 7 and Linux. Can someone please tell me what I should do to set up this network on my Mac?

    I have had previous success in setting up this network, but I have no idea what I did that made it work. Since I updated my AirPort (to version 5.5.2) it has worked only seldom, and very slowly; before the update, even when it worked, it never remembered my password.

    Update: I have already tried creating a new "location", removing all the 802.1X user profiles and all the remembered networks, and making sure that in the TCP/IP tab 'Configure IPv4' is set to "Using DHCP". I also moved /Library/Preferences/SystemConfiguration/preferences.plist to my desktop in an attempt to force the system to create a new set of settings. Still I can't get the connection to work.


  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem for most shops is backups: their size, and the window in which you have to back up the data.

    What we are working with:

        VMware vSphere 4.1 cluster
        PS4000XV EqualLogic storage array (1.6TB volume dedicated to backup-to-disk)
        Physical backup server with a single LTO4 drive
        Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
        Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first and, once completely backed up to the array, are then replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are at currently: in the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time, and as soon as a job finishes backing up to disk it starts that same job to tape - but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restoration I would first need to stage the data in the B2D folder before I could restore the VM.

    I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.


  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first to be turned into the second:

        /widget              => /widget/index.php
        /widget/             => /widget/index.php
        /widget?act=list     => /widget/index.php?act=list
        /widget/?act=list    => /widget/index.php?act=list
        /widget/list         => /widget/index.php?act=list
        /widget/v2?act=list  => /widget/v2.php?act=list
        /widget/v2/?act=list => /widget/v2.php?act=list
        /widget/v2/list      => /widget/v2.php?act=list

    v2 could also be v45; basically "v\d+". act ("list" in this case) can have many values, and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying a version will go to index.php, which can then decide which specific version file to include.

    What I am afraid of is loops - this should sit in location /widget { ... }, right? (As for putting the version of the API in the URL: I'm not trying to be RESTful, and the target audience is small.)

    Resources on how to do this entirely in index.php using "routers" are also welcome :-/
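    A hedged sketch of one way to write it (untested against this exact layout, but the rewrite semantics are standard nginx: unless the replacement ends with ?, the original query string is appended automatically, which covers the ?act=list cases):

        location /widget {
            # versioned forms first, so /widget/v2/list is not eaten by the generic rule
            rewrite ^/widget/(v\d+)/(\w+)/?$ /widget/$1.php?act=$2 last;
            rewrite ^/widget/(v\d+)/?$       /widget/$1.php        last;

            # unversioned forms
            rewrite ^/widget/(\w+)/?$        /widget/index.php?act=$1 last;
            rewrite ^/widget/?$              /widget/index.php        last;
        }

    Loops should not occur here: after the internal redirect the new URIs contain a dot (index.php, v2.php), which \w+ does not match, so no rewrite fires a second time.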


  • How do photoshop slices and layer comps interact?

    - by Steve314
    I'm interested in using Photoshop (I have CS2) for some user interface design. I was hoping to use slices and layer comps to mark out particular elements, and use JavaScript scripting to export multiple graphics files and text descriptions (mainly positions and sizes of slices) to be used by my program.

    My problem is that I've never used Photoshop for web design or otherwise used slices, and I'm not confident that I understand how they interact with layer comps. This is what I believe (and hope) is correct:

        - Manual slices aren't affected by layer comps in any way: they aren't saved as part of a layer comp, and the same manual slices will be active irrespective of which layer comp is selected.
        - Layer-based slices aren't directly affected by layer comps, but they are indirectly affected in that the layer comp saves details of layer position and style. Thus selecting a layer comp may move a layer and change its style, affecting the location and size of its layer-based slice, or may effectively disable the slice by hiding the layer.
        - Automatic slices aren't directly affected by layer comps, but are indirectly affected by the changes to the layer-based slices.

    So layer-based slices (which are my main interest) may move, may change size (to accommodate a style such as a drop shadow), and may be effectively disabled by the layer being hidden. Other details (and all details of manual slices) will remain constant irrespective of which layer comp is active. Is that correct?


  • Win7 Domain User Profile- Desktop Icon management best practices request

    - by Doltknuckle
    Here's the situation: we have a large (5,000+ user) organization that is currently using folder redirection to manage Windows desktop icons. The folder is redirected to a network share where we can centrally manage the different sites and so forth. We only redirect the C:\Users\%username%\Desktop folder. When a user tries to use a computer while the network is unavailable, they are unable to use any shortcuts in the Public folder.

    Does anyone have any suggestions on how to go about managing desktop icons? We still want a central location to manage these items, but need a way to keep the system working when the network is unavailable.

    As a point of clarification, the network rarely goes down. We do have instances where a few computers lose their network connection - usually something is simply unplugged - and since we have multiple sites, the line from a branch to the central office has gone down a few times. This is more an attempt to maintain a positive end-user experience when disconnected from the network.


  • HP Officejet 4500 G510n-z Not Showing up in Remote Desktop (Terminal Services)

    - by Greg_the_Ant
    I installed this printer on a Windows XP machine, first using the wireless option and later using USB. In both cases, when I connect to my other computer (also Windows XP) via Terminal Services and check Printers in the Local Resources tab, it does not show up in the remote session. I used to have a Samsung connected to my local computer over USB, and that worked fine over Terminal Services.

    Things I have tried so far:

        - I read this page and installed the software fix on both computers: (Printers that use ports that do not begin with...)
        - I installed the minimum HP software package on the remote computer; that didn't help either.
        - I also tried running the Add New Printer wizard on the remote computer: I selected "Local printer attached to this computer" and did not check the "Automatically..." option. On the next page of the wizard I can select an option for "Use the following port", where I see options for TS001 through TS009; I'm assuming those are coming from the local machine. I tried selecting each one, then checking "Have Disk" and pointing it to C:\3be8dc611b11322e8ddf8a67\i386\msxpsdrv.inf [1], but for every single TS00x port it says "The specified location does not contain information about your hardware."

    Any help would be greatly appreciated; I'm pretty stuck at this point.

    [1] C:\3be8dc611b11322e8ddf8a67 is the folder I extracted the HP driver software to after I downloaded it.


  • Correcting owner/permissions on damaged directory tree in linux

    - by mcs130
    I inadvertently made a recursive backup copy of a directory and forgot the -a (--preserve) switch when doing so. This damaged my backup directory (which contains data we need to access). The directory and all of its child folders and files comprise an installation of an application, including Postgres DB and Solr files. The original copy was used for a failed re-config attempt. Now I need to use the backup copy to start over, only the ownership throughout the backup copy is now root and it is no longer usable (processes won't run due to the ownership problems I created when I forgot the -a on the cp -r).

    I've now installed a clean copy of the application into a third location (which has the correct owners/permissions) and need to copy the owners/permissions from this good directory tree onto the damaged one. What is the best way (if it is even possible) to do this? I've googled and seen suggestions ranging from Perl scripting to setfacl/getfacl, but I'm unfortunately still confused. Apologies if this seems a dumb question. Thanks.
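    Since the two trees have identical structure, GNU coreutils can mirror ownership and mode file-by-file using the clean install as the reference - a hedged sketch (the paths are placeholders; GNU find substitutes {} even inside an argument, so this relies on GNU find plus GNU chown/chmod, and is worth testing on a scratch copy first):

        cd /path/to/clean/install
        find . -exec chown --reference='{}' '/path/to/damaged/tree/{}' \; \
               -exec chmod --reference='{}' '/path/to/damaged/tree/{}' \;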


  • nginx reverse ssl proxy with multiple subdomains

    - by BrianM
    I'm trying to find a high-level configuration example for my current situation. We have a wildcard SSL certificate for multiple subdomains, which live on several internal IIS servers:

        site1.example.com (X.X.X.194) -> IISServer01:8081
        site2.example.com (X.X.X.194) -> IISServer01:8082
        site3.example.com (X.X.X.194) -> IISServer02:8083

    I am looking to handle the incoming SSL traffic through one server entry and then pass the specific domain on to the internal IIS application. It seems I have two options:

        1. Code a location section for each subdomain (seems messy from the examples I have found).
        2. Forward the unencrypted traffic back to the same nginx server, configured with different server entries for each subdomain hostname (at least this appears to be an option).

    My ultimate goal is to consolidate much of our SSL traffic to go through nginx so we can use HAProxy to load-balance servers. Will approach #2 work within nginx if I set up the proxy_set_header entries properly? I envision something along these lines in my final config file (using approach #2):

        server {
            listen Y.Y.Y.174:443; # internally routed IP address
            server_name *.example.com;
            proxy_pass http://Y.Y.Y.174:8081;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site1.example.com;
            # -- normal config entries --
            proxy_pass http://IISServer01:8081;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site2.example.com;
            # -- normal config entries --
            proxy_pass http://IISServer01:8082;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site3.example.com;
            # -- normal config entries --
            proxy_pass http://IISServer02:8083;
        }

    This seems like a way, but I'm not sure it's the best way. Am I missing a simpler approach?
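    For comparison, a hedged sketch of a single-hop layout (names and certificate paths are placeholders): because the certificate is a wildcard, one server block per subdomain can all listen on 443 and share the same certificate, avoiding the loop back through port 8081 entirely - nginx picks the block by the requested host name, decrypts once, and proxies straight to the right IIS backend:

        server {
            listen Y.Y.Y.174:443 ssl;
            server_name site1.example.com;

            ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_pass http://IISServer01:8081;
            }
        }

        # repeat for site2 -> IISServer01:8082 and site3 -> IISServer02:8083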


  • Python coding with VLC player (quite a basic query I expect)

    - by Todd
    I'm fairly new to the whole coding realm, so my knowledge is fairly limited, and I can't seem to find any basic tutorials on how to use scripts with VLC player. More specifically, the reason I'm asking here is that I stumbled across a post on this site about playing random clips from random videos in VLC player automatically. This is the forum post: Playback random section from multiple videos changing every 5 minutes.

    My situation is similar to that lovely gentleman's, though he clearly knows a lot more about coding than I do. In short, I'd like to copy this code into a file of some sort and apply it to VLC player myself. Only I'm not sure what file type I'd have to save it as (I have Python, by the way, and I tried saving it as a .py file, but I didn't know if that was correct or where to go from there). Additionally, I'm not sure how to get VLC to "read" the script, so to speak: is there a specific location the file needs to be in, and do I run the script from another program or through VLC?

    I'll reiterate that I'm relatively new to this, so if anybody would be so kind as to post a quick list of steps on how to save/place the file and use it with VLC player, I really would appreciate it!

    P.S. I'm not computer illiterate; I'm fine with most programs, and I'd understand if you just said things like "C:\Program Files (x86)\VideoLAN\VLC\plugins" or "in VLC, select Tools > Plugins and extensions". I just wouldn't catch on to anything about adding a line of code that does something without being told exactly what to write!

    Many thanks in advance! :) Todd
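    For orientation: scripts like the one in that post usually drive VLC from the outside through the python-vlc bindings (pip install python-vlc) rather than living in VLC's plugins folder - VLC's own extension scripts are written in Lua, not Python. A hedged minimal sketch of the approach, saved as any .py file and run with Python (the video path is a placeholder):

        import random
        import time
        import vlc  # python-vlc bindings around libvlc; VLC itself must be installed

        player = vlc.MediaPlayer("C:/videos/example.mp4")  # placeholder path
        player.play()
        time.sleep(1)                      # give VLC a moment to open the media

        length_ms = player.get_length()    # total length in milliseconds
        player.set_time(random.randrange(max(length_ms - 5000, 1)))  # jump somewhere random
        time.sleep(5)                      # let the clip play for five seconds
        player.stop()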

