Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • Handling SEO for infinite pages that require slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links on the site point to these pages, and when a user clicks one, the page is generated while they wait. Since I cannot pre-create them all, I am trying to figure out the best SEO approach for handling these pages. Options:
    1. Create really simple pages for the web spiders, so that only real users fetch the data and trigger page generation. I'm a little 'afraid' Google will see this as low-quality content, which might also look duplicated.
    2. Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt. The problem here is I don't want users to have to deal with a different URL when they want to share the page or make sense of it. I thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them. Again, not sure Google will like me for that.
    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site would seem slower than it should from a spider's perspective (if it only crawled the already-generated pages, it would think the site is much faster).
    Which approach would you suggest?
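    For reference, a minimal robots.txt for option 2 might look like this (a sketch only, assuming /non-generated/ as the directory name from the example above):

        User-agent: *
        Disallow: /non-generated/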


  • Set up a Linux box for secure local hosting, A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:
    - CentOS 5.5 x86_64
    - httpd: Apache/2.2.3
    - mysql: 5.0.77 (to be upgraded)
    - php: 5.1 (to be upgraded)

    The requirements:
    - SECURITY!!
    - Secure file transfer
    - Secure client access (SSL certs and CA)
    - Secure data storage
    - Virtualhosts/multiple subdomains
    - Local email would be nice, but not critical

    The steps:

    1. Download the latest CentOS DVD iso (torrent worked great for me).

    2. Install CentOS. While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    3. Basic config: set up users, networking/IP address, etc. Run yum update/upgrade.

    4. Upgrade PHP/MySQL. To upgrade PHP and MySQL to the latest versions, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.    # list all packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php    # list all installed php packages we want to remove
        yum shell             # open an interactive yum shell
        remove php-common php-mysql php-cli                 # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common    # add the packages you want
        transaction solve     # important!! checks for dependencies
        transaction run       # important!! does the actual installation of packages
        [control+d]           # exit yum shell
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql         # see installed mysql packages
        yum shell
        remove mysql mysql-server    # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve            # important!! checks for dependencies
        transaction run              # important!! does the actual installation of packages
        [control+d]                  # exit yum shell
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    5. Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf (vi /etc/rssh.conf) to grant SFTP access to rssh users, uncommenting or adding:

        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    6. Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up    # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1    # copy the default script and match the name to our virtual interface
        vi ifcfg-eth1:1               # modify the eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts.

        service network restart
        Shutting down interface eth0:      [ OK ]
        Shutting down interface eth1:      [ OK ]
        Shutting down loopback interface:  [ OK ]
        Bringing up loopback interface:    [ OK ]
        Bringing up interface eth0:        [ OK ]
        Bringing up interface eth1:        [ OK ]

        ping 192.168.1.3
        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    7. Virtualhosts. In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that points to it. I will use the above virtual interface for this site (herein called dev.site.local). Add the following to the end of httpd.conf (vi /etc/httpd/conf/httpd.conf):

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    8. DNS. I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    9. SSL. To get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443    # make sure this line is in httpd.conf
        <VirtualHost 192.168.1.3:443>      # change port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, so the page itself was being thrown at the browser as the cert, making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt    # this and the above command set up the database that will keep track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
        # creates cacert.pem, which gets distributed to and imported into the browser(s)

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
        # generates the certificate request, and key.pem, which I renamed dev.key.pem

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem
        # create and sign the certificate
        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/dev/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443
        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Put the CA cert.pem in a web-accessible place and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
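    For what it's worth, a quick way to sanity-check the handshake from the command line (assuming the dev.site.local vhost above) is openssl's built-in client:

        openssl s_client -connect dev.site.local:443
        # a successful handshake prints the server certificate chain; if the server is
        # speaking plain HTTP on port 443 (the ssl_error_rx_record_too_long case),
        # the handshake fails instead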


  • Wanna be a Rock Star?

    - by lydia.smyers
    We just launched the Oracle PartnerNetwork Specialized Partner Premier site, a new site specifically targeted to customers, partners and Oracle employees. The site is all about "You", our partners, who have achieved one or more of the over 50 specializations available today. Specialized. Recognized. Preferred. Yes, that's you - the chosen vendors. And just for you we're going to produce videos starring "You" and your stories, which will be available everywhere Oracle customers, partners and employees are. Sundance may be over, but we're just getting started. This is just one of the new benefits - want more? Industry analyst firm IDC offers kudos and says Oracle's premier "approach to putting top partners front and center with customers and prospects" has "raised the bar." We'll be advertising this site all over Oracle's major websites to give our specialized partners their deserved recognition. Bring on the buttered popcorn! Lights! Camera! What are you waiting for? http://www.oracle.com/specialized. Need to get specialized? Check out how: http://www.oracle.com/partners/en/opn-program/specialize/index.html


  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: Patch Set 3 (PS3, 11.1.1.4.0). PS3 supports additional platforms and applications, and adds several new features to the products. Highlights include:
    - Content Server (repository for UCM, URM & I/PM): new security capabilities, file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32- & 64-bit) support, and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options, and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g and Oracle Access Manager (OAM) 10g; export search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: support for UCM 11g Managed Attachments (support for 10g released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): desktop support for Microsoft Office 2010, Adobe Reader X and Microsoft SharePoint 2010.
    Customer Webcast: we'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.
    More information:
    - Downloads are now available on Oracle Technology Network (OTN); the release will be available via eDelivery soon.
    - Read the updated ECM documentation for 11.1.1.4.0.
    - Review the ECM 11.1.1.4.0 Upgrade & Patch Guides.
    - See the Release Notes.


  • Firefox or Chrome - how to force a specific encoding for a page

    - by Mike
    Hi, I am accessing an intranet site built by amateurs, constructed to be "best viewed by IE" (arghhh!). The site is in Portuguese. All accented letters are jammed and do not appear as they should. As I create sites myself, I know that a reliable way to build a site in Portuguese and other Latin-alphabet languages is to declare "charset=iso-8859-1" in the page's HTML encoding, which helps ensure cross-browser and cross-platform compatibility. But I have no way to change this, because I am just a visitor on this site, and I don't know which encoding they are using. What I'm asking is: is there a way I can force my browser (Chrome or Firefox) to re-decode the page using the correct charset? I need this to work on Ubuntu.
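    For context, the declaration I'd normally put in a page I control is the standard meta tag - shown here only as a sketch of what the intranet site is presumably missing or getting wrong:

        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        </head>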


  • PublishingWeb.ExcludeFromNavigation Missing

    - by Michael Van Cleave
    So recently I have had to make the transition from the SharePoint 2007 codebase to the SharePoint 2010 codebase. Needless to say, there hasn't been much difficulty in the changeover. However, in a set of code that I was playing around with and transitioning, I did find one change that might cause some pain to others out there who have been programming against the PublishingWeb object in the Microsoft.SharePoint.Publishing namespace. The snippet that worked just fine in 2007 looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.ExcludeFromNavigation(true, itemID);
            publishingWeb.Update();
        }

    The 2010 update to the code looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.Navigation.ExcludeFromNavigation(true, itemID);  // had to reference the Navigation object
            publishingWeb.Update();
        }

    The purpose of the code is to keep a page from showing up in the global or current navigation when it is added to the Pages library. You can see that the update to the 2010 codebase actually makes more "object" sense: it specifies that you are affecting the Navigation of the publishing web, rather than the publishing web itself. I know that this isn't a difficult problem to fix, but I thought it would be something quick to help out the general public. Michael


  • Using Google Analytics to track Usernames

    - by DrStalker
    We have a SharePoint installation (MOSS, IIS 7.0, Windows Authentication, Windows 2008), and Google Analytics has been installed to track site usage. The site is an intranet site, and all users are authenticated before gaining access. Is there any way in Google Analytics to track user information so we can see details of the login names of who is accessing content?
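    One pattern commonly used with the classic ga.js async tracker is a custom variable. This is only a sketch: it assumes the page can render the authenticated login name into the snippet server-side ('UA-XXXXX-X' and the username are placeholders), and note that Google's terms of service restrict sending personally identifiable information:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXX-X']);
          // slot 1, visitor scope (final argument 1): record the login name
          _gaq.push(['_setCustomVar', 1, 'Username', 'MYDOMAIN\\jsmith', 1]);
          _gaq.push(['_trackPageview']);
        </script>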


  • IE9 appears to be ignoring RewriteRule in .htaccess file

    - by mouli
    I have a site that uses SEF URLs and .htaccess RewriteRules to serve up the pages. This has worked fine for several years, until the arrival of IE9. Now it appears that the links are not being rewritten, and the site is dead in the water. I have tried different compatibility modes, to no avail, and I've played with the rewrite rules over and over, tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the .htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw, and the rewrite rule that's not working looks like this:

        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]

    The link fails and the server returns a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated, as I am stumped.


  • Can PHP be run in Apache via mod_php and mod_fcgi side by side?

    - by Mario Parris
    I have an existing installation of Apache (2.2.10, Windows x86) using mod_php and PHP 5.2.6. Can I run another site in a virtual host using FastCGI and a different version of PHP, while still running the main site under mod_php? I've made an attempt, but when I add my FCGI settings to the virtual host container, Apache is unable to restart.

    httpd.conf mod_php settings:

        LoadModule php5_module "C:\PHP\php-5.2.17-Win32-VC6-x86\php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:\PHP\php-5.2.17-Win32-VC6-x86"

    httpd-vhosts.conf fastcgi settings:

        <VirtualHost *:80>
            DocumentRoot "C:/Inetpub/wwwroot/site-b/source/public"
            ServerName local.siteb.com
            ServerAlias local.siteb.com
            SetEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php.ini"
            FcgidInitialEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86"
            FcgidWrapper "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php-cgi.exe" .php
            AddHandler fcgid-script .php
        </VirtualHost>

        <Directory "C:/Inetpub/wwwroot/site-b/source/public">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
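    One thing worth double-checking in a setup like this: mod_fcgid itself has to be loaded at the top level of httpd.conf before any Fcgid* directive is parsed. A sketch only, since the exact filename and path depend on how mod_fcgid was installed:

        # assumes mod_fcgid.so has been copied into Apache's modules directory
        LoadModule fcgid_module modules/mod_fcgid.so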


  • Googlebot DNS error HostPapa

    - by Gravy
    Received a message from Google Webmaster Tools: "Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 40.0%. You can see more details about these errors in Webmaster Tools." The message ended with a "Recommended action" section. I contacted HostPapa and they deny that there is any issue with the site or server! Support in terms of what I can actually do to resolve this issue is non-existent. The site is currently online, and I don't know much about DNS, so any advice about what I can do to resolve this problem would be much appreciated.
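    For anyone wanting to see roughly what Googlebot sees, the domain's DNS can be queried directly from a Linux or Mac shell with dig (example.com stands in for the real domain here):

        dig example.com NS          # which nameservers answer for the domain
        dig example.com A +trace    # follow the delegation chain from the root servers

    Intermittent failures or timeouts in the +trace output would corroborate the error rate Webmaster Tools is reporting.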


  • SharePoint domain authentication

    - by JL
    I have a local domain controller set up for MYDOMAIN.com, and on a separate local server I have a MOSS site running. The DNS is all working fine, but when I try to connect to the MOSS site using domain credentials, I can't use the syntax MYDOMAIN\MyAccount; it is expecting the user@domain (UPN) form instead. What can I do to fix this issue, so I have normal domain login capabilities like every other SharePoint site out there?


  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through the roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocking was occurring, and query results were being returned quickly. I couldn't make any sense of it, and set up the following Perfmon counters:
    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/Sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/Sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time, Get Requests/sec and Requests/Sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS, under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (I added a Current Connections counter):
    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/Sec for ASP.NET: 5
    - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels.

    So, my questions are:
    1. Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
    2. What other counters should I be looking at if this happens again?
    3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
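    As an aside, the same counters can be sampled from the command line with the built-in typeperf tool, which is handy for logging during an incident ("MySite" is a placeholder for the site's actual Perfmon instance name):

        typeperf "\Web Service(MySite)\Current Anonymous Users" ^
                 "\Web Service(MySite)\Get Requests/sec" ^
                 "\ASP.NET Applications(__Total__)\Requests/Sec" ^
                 -si 5 -o counters.csv -f CSV

    This samples every 5 seconds and writes a CSV that can be lined up against the IIS logs afterwards.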


  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI DSS compliance was raised, they said:

        "You won't need PCI certification, because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user, because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app."

    Even though I very much trust and respect the knowledge of our web developers, what they are saying raises some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows:
    1. Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of that they could be implementing?
    2. Most importantly, is it true that our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI DSS certified, and the only way around getting certified is a hosted payment page (i.e. PayPal).


  • Command-line HTTP crawler for Windows?

    - by Pekka
    Would somebody have a recommendation for a web site crawler that can be invoked and configured from the command line? It would need to run in a Windows environment. Saving the data, following stylesheet links etc. is not an issue; I only need the crawler to start with a page, parse it, and follow all the links on the same domain so that, in the end, all pages on the site have been requested once.

    Background: I'm setting up a web site that gets frequently uploaded to from an office location. Combining data from various sources, it has several levels of caching. I don't want the first user to visit the site after a fresh upload to have to wait until the page has been generated and saved in the cache.
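    One candidate that fits this description, assuming a Windows build of GNU wget is acceptable, is wget's spider mode, which requests pages without saving them (example.com stands in for the real site):

        wget --spider --recursive --level=inf --no-verbose http://example.com/

    Here --spider requests each page but discards the body (enough to warm a cache), --recursive follows the links it finds and stays on the starting host by default, and --level=inf lifts the default recursion depth of 5.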


  • Can Microsoft Security Essentials Remove the JS/Alescruf.C Virus from a Windows PC?

    - by jmort253
    A colleague just visited a site which had been hacked to serve the JS/Alescruf.C trojan, which infects HTML via embedded JavaScript; Windows PCs are infected simply by visiting the site. If my colleague has Microsoft Security Essentials, which detected and removed the trojan, is there anything else he needs to do to make sure the virus is no longer active? Security Essentials detected the trojan within seconds of visiting the site, and he's running a full system scan to see if it still detects anything.


  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of: HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project, we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computations, if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either from Amazon, from third-party sites, or even a good book, as long as it's up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have a limited understanding of the IT issues; I have zero Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage. PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.


  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm.

    Now, when using Firefox, Safari or Chrome, everything is fine. When the user tries to open the site, he gets the login box, enters simply "username" and "password" (let's pretend those are an actual login and password :P), and he gets into the site.

    When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (that's "ad-domain"). I've spent days trying to figure out what's going on. Note that if I manually enter "ad-domain\username", I get accepted into the site without problems. So my guess is that IE sends the wrong domain if one is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you.

    PS: I'm more of a programmer than a sysadmin, so configuring servers isn't my strong side... :P

    UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username"/"password" login in Chrome/Firefox and when doing an "ad-domain\username"/"password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell which method it's trying to use.

    UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler. While doing so, I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is almost twice as long as the one sent by FF. I tried disabling Windows Integrated authentication and leaving only Digest enabled - that fixed the problem (meaning IE used the right realm, just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc). Any ideas?


  • Why are my files in /var/lock and where did they just go?!

    - by Nicky Hajal
    I am hosting a website on Debian 5.0 and Apache2. Today one of my websites was down; Apache said it couldn't find the directory. I located the files: the whole site, once in /var/www/site, was now in /var/lock/site. All the files were present. I was confused, but figured I'd just move it back:

        mv /var/lock/site /var/www

    All looked fine... except that only the directories moved, and the files appear to be lost! I am working on restoring from backups, but I would really love to know what happened and where my files went (the backups are a few days old). Thanks for your help!


  • editing automated SharePoint emails

    - by Richard Collins
    SharePoint sends out an automated email when a site collection has reached its quota warning level. The email says:

        "You are receiving this e-mail message because you are an administrator of the following SharePoint Web site, which has exceeded the warning level for storage: https://mysite.xxx.ac.uk/personal/xxx/. To see how much storage is being taken up by this site, go to the View site collection usage summary: https://mysite.xxx.ac.uk/personal/xxx/_layouts/Usage.aspx."

    The problem is that the usage summary link needs to point somewhere else instead. How can I edit the body of the email to change the link? Thanks.


  • IIS SSL is taking all IPs although it is told not to

    - by Martin Sall
    I have a testing system where an IIS Express SSL website on Windows 7 has to live together with the Cerberus FTP server's SSL website (Cerberus FTP has a built-in web server for HTTP uploads). I have set up Windows to use two IPs from my router:
    - 192.168.1.128 (for the IIS SSL web site, using a self-generated SSL certificate for now)
    - 192.168.1.129 (for the Cerberus FTP built-in SSL web site)

    In IIS I have set the web site binding to use only the IP 192.168.1.128. But still, when I launch Cerberus, it says it cannot bind 192.168.1.129:443. I tested in Firefox: indeed, when I go to 192.168.1.129 (or even localhost), I do not get the expected "Unable to connect" page, but "The connection was reset" instead. IIS is still occupying those IPs, although it is not serving the website on them. When I stop the IIS website, the Cerberus FTP website launches without problems. But then I cannot launch the IIS web site; it tells me "The process cannot access the file because it is being used by another process". Why is the IIS SSL web site still occupying all IPs, even though it has been told not to?


  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

        <div class="footer">
        <!-- snip -->
        <!-- Creative Commons License -->
        <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
        <!-- /Creative Commons License -->
        </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top ten most relevant keywords (including the site name) are:
    - [site name]
    - license
    - [keyword]
    - commons
    - creative
    - [keyword]
    - alike
    - [keyword]
    - attribution
    - [keyword]

    I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?
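    For comparison, the image-only variant mentioned at the top would reduce the footer to something like this sketch, keeping rel="license" so the license stays machine-readable while dropping the indexable blurb text:

        <div class="footer">
            <!-- image-only Creative Commons notice -->
            <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img
                alt="Creative Commons License" style="border-width:0"
                src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a>
        </div>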


  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site, so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY 1 ENTRY, MINE. Upon investigating, it appears I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…


  • Tomcat and IIS 7 on different IPs and different ports

    - by n00b
    I have Tomcat and IIS 7 installed together on a Windows 2008 server. The machine has two IPs (134.133.1.1 and 134.133.2.2). I want Tomcat to handle 134.133.1.1 on port 80, and IIS to handle both 134.133.2.2 on port 80 and 134.133.1.1 on port 443, but I can't seem to get the last two working together (I can get one or the other by itself on IIS, along with the first IP address on Tomcat). I have configured Tomcat to successfully listen to IP 134.133.1.1 on port 80 with this connector:

        <Connector port="80" protocol="HTTP/1.1" address="134.133.1.1"
                   connectionTimeout="20000" redirectPort="8443" />

    I also have a site configured in IIS bound to IP 134.133.1.1, port 443 (SSL). When I start IIS after Tomcat, I can reach both 134.133.1.1:80 (Tomcat) and 134.133.1.1:443 (IIS) successfully, as desired. The problem comes when I want to introduce a new site via IIS at the second IP address. In IIS I have set up a new site at IP 134.133.2.2, port 80, but I cannot start the site. The event log shows this error:

        Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. The data field contains the error number.

    I think this is because IIS 7 tries to listen to port 80 on all IPs, and it can't, because Tomcat is taking port 80 for 134.133.1.1. From reading, the resolution is to specify the IP addresses you want IIS to bind to on port 80. The problem is, when I add 134.133.2.2 to the iplisten list, I then get a 404 when I try navigating to 134.133.1.1:443. I assume this is because IIS is no longer listening to ANY port on 134.133.1.1. How do I resolve this so that IIS will serve both sites?

    EDIT: Per request, my IIS binding for site A is 134.133.2.2 on port 80 (http) and 134.133.2.2 on port 443. For site B in IIS, the binding is 134.133.1.1 on port 443 (https). Note the IPs in this example are just for example purposes, but consistent with my setup.
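    In case it helps anyone reading along: the listen list referred to above is managed with netsh, and the symptoms described suggest both addresses may need to be in it, since once the list is non-empty, http.sys binds only to the addresses listed (a sketch using the example IPs from the question):

        netsh http show iplisten
        netsh http add iplisten ipaddress=134.133.2.2
        netsh http add iplisten ipaddress=134.133.1.1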


  • What sites track your open source contributions?

    - by Daniel
    I know there's at least one site where you can register the open source projects you are involved with, indicate your committer name, and it will display the lines contributed and the languages, along a timeline. Unfortunately, I don't recall which site that was. Anyway, can anyone point out a site that does this?

