Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: Frequent maintenance vs doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or if we want to finish it up and move on to other tasks. Race car maintenance vs installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • Python install issue on Mac OS X

    - by Michael Waterfall
    I have been using the standard python that comes with OS X Lion (2.7.2), but I wanted to build a UCS-4 version to handle 4-byte unicode characters better. I had already installed pip and packages like pytz, virtualenv and virtualenvwrapper, etc., and these are installed in /Library/Python/2.7/site-packages. My $PATH is /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin. To build a new version of python on the machine (outside of any project-specific virtual environments, that will come later), I followed the instructions in this article and managed to build it in /usr/local/bin. The problem is that when I launched a new bash window, I got the following virtualenvwrapper error:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ImportError: No module named virtualenvwrapper.hook_loader
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader, check that
        virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
        and that PATH is set properly.
    The instructions said to move /usr/local/bin to the top of the /etc/paths file, and since then I've noticed some strange issues. I installed pip into /usr/local/bin, and I assumed that since I'm working in /usr/local/bin and the newly installed python's site-packages is now located in /usr/local/lib/python2.7/site-packages, pip freeze should be empty, as nothing is installed there yet. However, pip freeze still reports things installed in the old (OS X) site-packages folder. Here's some info after the build:
        $ which python
        /usr/local/bin/python
        $ which pip
        /usr/local/bin/pip
        $ echo $PATH
        /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
    When I uninstall a python package with pip, it removes it from the old site-packages folder as expected. When I install it again, instead of installing it in /usr/local/lib/python2.7/site-packages, it installs it in /Library/Python/2.7/site-packages (verified by attempting to install it again and receiving Requirement already satisfied (use --upgrade to upgrade): pytz in /Library/Python/2.7/site-packages). How is it getting that path for the old site-packages folder? Why won't it install it in the correct location for the python install it's using? I'm getting several other issues since promoting /usr/local/bin, but I think if I understand this I'll be able to get somewhere. Can anyone see what's happening? If you need any more info I'll be happy to provide it.

    Read the article
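
    The symptoms above usually mean that the pip script, although found first on $PATH, still runs under the old system interpreter. A small diagnostic sketch (the /usr/local/bin paths are taken from the question; nothing else is assumed about the machine) that shows which interpreter and site-packages directory each tool actually resolves to:

        # check_python_env.py -- diagnostic sketch, not a fix
        from __future__ import print_function
        import sys
        from distutils.sysconfig import get_python_lib

        print("interpreter   :", sys.executable)    # expected: /usr/local/bin/python
        print("prefix        :", sys.prefix)
        print("site-packages :", get_python_lib())  # where installs for THIS python land

        # pip is only a small wrapper script; its shebang line reveals which
        # python it really runs under, regardless of where the script lives.
        with open("/usr/local/bin/pip") as script:
            print("pip shebang   :", script.readline().strip())

    If the shebang still names the system interpreter, reinstalling pip (and virtualenvwrapper) under the newly built python would account for the behaviour described.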

  • Organizing Your Department with CMMS Software

    Computerized maintenance management (CMMS) software can help to organize any maintenance department no matter what the skill level of the employees is. One does not have to be a genius on the comput... [Author: Ashley Combs - Computers and Internet - April 01, 2010]

    Read the article

  • Apache not using the right SSL certificate [on hold]

    - by user2420318
    In my apache2 setup I have one VirtualHost for my main site and another for a static content site (downloads, css, etc.). I have SSL certificates for both, and the static content site is under a subdomain of the main site. I have four VirtualHosts configured in total, since each site needs an SSL VirtualHost as well. When I had only one SSL site everything was OK, but now with the second, the first site uses the second site's certificate, even though it is told specifically to use its own in its VirtualHost section. I honestly have no idea why Apache would do this. Any ideas? I have a feeling there is some default/global setting that is set for some odd reason. I am using different IPs for the VirtualHosts.

    Read the article
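
    For reference, a minimal sketch of two IP-based SSL VirtualHosts, each naming its own certificate (hostnames, addresses, and paths are placeholders, not taken from the question). Apache serves the first certificate defined for a given address:port when no other vhost matches, so each site needs its own <VirtualHost ip:443> block with its own SSLCertificateFile/SSLCertificateKeyFile:

        <VirtualHost 192.0.2.10:443>
            ServerName www.example.com
            DocumentRoot /var/www/main
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
        </VirtualHost>

        <VirtualHost 192.0.2.11:443>
            ServerName static.example.com
            DocumentRoot /var/www/static
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/static.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/static.example.com.key
        </VirtualHost>

    With distinct IPs, as in the question, a stray <VirtualHost *:443> or a ServerName that does not match the requested host is the usual reason the "wrong" certificate gets presented.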

  • Where is the best location to keep shared-developer website files in the linux hierarchy?

    - by Tchalvak
    I just started hosting files for a website on my server, and I'm not sure where an appropriate place to keep them is. At the moment, I have them in /var/www/name.of.virtualhost.site/www/. That's obviously not secure, because anything outside the final public /www/ folder is also reachable, since the /var/www/ contents are already being served up. For example, /var/www/name.of.virtualhost.site/docs/site_policies.txt is accessible via something like defaultsite.com/name.of.virtualhost.site/docs/site_policies.txt. So where is a good place to store the files that make up a website? (When it's a site that only I'm developing, I can obviously just stick them in /home/my_username/sites/name.of.virtualhost.site/, but that doesn't work well when I want other developers to be working on the site's files as well.) I'm running a LAMP stack, not that I expect it to matter.

    Read the article
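
    One common layout, sketched here with made-up paths, keeps the whole site tree outside any home directory, exposes only a public/ subfolder as the DocumentRoot, and makes the tree writable by a shared developer group:

        /srv/www/name.of.virtualhost.site/
            public/    # DocumentRoot points here; only this folder is web-served
            docs/      # site_policies.txt and similar files stay unreachable from the web

        # Apache vhost fragment
        <VirtualHost *:80>
            ServerName name.of.virtualhost.site
            DocumentRoot /srv/www/name.of.virtualhost.site/public
        </VirtualHost>

        # shared access for several developers (the group name is an example)
        # chgrp -R webdev /srv/www/name.of.virtualhost.site
        # chmod -R 2775   /srv/www/name.of.virtualhost.site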

  • IBus client for GNU Emacs: Installed, but how do I start it?

    - by fred.bear
    Having recently moved to Linux/Ubuntu, I'm looking for a good editor, and GNU Emacs seems to fit the bill. One thing I want from a text editor is the ability to handle Unicode Input Method Editors in a "normal way", across the board. For Ubuntu, the "normal way" is via IBus. However, emacs does not support IBus "off the shelf". I found a launchpad project: IBus client for GNU Emacs: ibus-el. I've installed ibus-el and set it up as per the Customize section of this emacswiki IBusMode page. I included the suggested "toggle" keybinding: ;; Use s-SPC to toggle input status It seems to have installed okay, but I have no idea how to invoke IBus and switch IMEs. s-SPC doesn't fire up the IBus language panel... I'm stuck :( ...so close, yet so far.... Here are the startup *messages* Loading 00debian-vars... No /etc/mailname. Reverting to default... Loading 00debian-vars...done Loading /etc/emacs/site-start.d/50autoconf.el (source)...done Loading /etc/emacs/site-start.d/50dictionaries-common.el (source)... Loading debian-ispell... Loading /var/cache/dictionaries-common/emacsen-ispell-default.el (source)...done Loading debian-ispell...done Loading /var/cache/dictionaries-common/emacsen-ispell-dicts.el (source)...done Loading /etc/emacs/site-start.d/50dictionaries-common.el (source)...done Loading /etc/emacs/site-start.d/50festival.el (source)...done Loading /etc/emacs/site-start.d/50gtk-doc-tools.el (source)...done Loading /etc/emacs/site-start.d/50ibus-el.el (source)...done IBus: Xlib.protocol.request.QueryExtension IBus: Agent successfully started for display ":0.0"

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain vanilla HTML website. I have been making regular modifications to it on an almost daily basis. The site is hosted by hostmonster and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a "robot/spider" calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on google (e.g. by searching for "full reserve banking") the cache is dated only the 5th June. I always thought that a visit from the google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check out if I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what aw stats was telling me:

    Read the article

  • CMMS Download Can Make Life Easier

    Are you looking for a way to organize your maintenance department? Does your maintenance department need to run more efficiently? Would you like to be able to schedule the downtime for your equipment... [Author: Ashley Combs - Computers and Internet - April 11, 2010]

    Read the article

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and I don't have "x.php" on my site anymore. As is typical with major public vulnerabilities, it seems script kiddies are running tools that hit my site looking for "x.php" across the entire structure of the site - constantly, 24/7. This is wasted bandwidth, traffic and load that I don't really need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks! EDIT: I should add that as part of hardening my site, I've started using ZBBlock: This php security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It then will send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, then they will be served up a permanently recurring 503 OVERLOAD message with a 24 hour timeout. But ZBBlock doesn't do quite what I want; it does help with other spam/script/hack blocking.

    Read the article
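
    One way to get the time-based ban described above without writing a custom 404 page is to refuse the request in .htaccess and let fail2ban (assuming it is installed) ban the source address from the access log; the filter and jail names below are made up for the sketch:

        # .htaccess -- refuse any request for x.php anywhere under the site
        # (assumes mod_rewrite is enabled)
        RewriteEngine On
        RewriteRule (^|/)x\.php$ - [F,L]

        # /etc/fail2ban/filter.d/apache-xphp.conf  (hypothetical filter)
        [Definition]
        failregex   = ^<HOST> .* "(GET|POST) [^"]*x\.php
        ignoreregex =

        # /etc/fail2ban/jail.local  (hypothetical jail: one hit, 24-hour ban)
        [apache-xphp]
        enabled  = true
        port     = http,https
        filter   = apache-xphp
        logpath  = /var/log/apache2/access.log
        maxretry = 1
        bantime  = 86400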

  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.

    Company Overview: A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.

    Business Challenges: The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.

    Solution Deployed: Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information, such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.

    Business Results:
    • Saved more than US$693,000 in storage costs and shortened the material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
    • Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
    • Gained comprehensive document management, format transformation, security, and auditing capabilities
    • Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
    • Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users' technical problems
    • Improved security by using role-based access controls to restrict access to confidential documents and information
    • Supported the efficient classification of corporate knowledge by using Oracle's metadata functions to collect, tag, and archive documents, images, Web content, and digital assets

    "We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets. This has laid a solid foundation for Guizhou Power Grid Company to improve management practices." - Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company

    Additional Information: Guizhou Power Grid Company Customer Snapshot | Oracle WebCenter Content

    Read the article

  • Oracle Tutor - Is Anyone Reading Your Documentation?

    - by mary.keane
    If you are responsible for documenting your business practices, wouldn't it be nice to know if anyone is using the documentation? If the employees find it useful? You might want to consider surveying the users of the documentation on a regular basis. There are a number of free survey tools online (search for "free survey tools"), and you can have a survey ready in a matter of minutes. It's as simple as gathering a list of questions and a list of email addresses. For the questions, here are some suggestions. How often do you access the policy and procedure site? How useful is the site? How easy is it to navigate the site? How often are your questions answered on the site? What suggestions do you have to make the site more useful? You may want to consider just asking a few questions each month so that employees can complete the survey in less than 5 minutes (you'll get more responses). Make sure you have several comment boxes in the survey so that the employees can give suggestions. As the users of your documentation, the employees may have some terrific ideas that will enhance the usability of your policy and procedure site. It would be great to hear your suggestions for how to survey the users of your documentation. Mary R. Keane Senior Development Manager, Oracle BPM and Tutor

    Read the article

  • .html extension or no for SEO purposes

    - by Scott Schluer
    I know this question has been asked before on Stack Overflow, but what I have not been able to find in the posts I've read are concrete references as to WHY one is better than the other (something I can take to my boss). So I'm working on an MVC 3 application that is basically a rewrite of the existing production application (web forms) using MVC. The current site uses a URL rewriter to rewrite "friendly" urls with HTML extensions to their ASPX counterpart. i.e. http://www.site.com/products/18554-widget.html gets rewritten to http://www.site.com/products.aspx?id=18554 We're moving away from this with the MVC site, but the powers that be still want the HTML extension on the URLs. As a developer, that just feels wrong on an MVC site. I've written a quick and dirty HttpModule that will perform a 301 redirect from the .html URL to the same URL without the .html extension and it works fine, but I need to convince management that removing the .html extension is not going to hurt SEO. I'd prefer to have this sort of friendly URL: http://www.site.com/products/18554-widget Can anyone provide information to back up my position or am I actually trying to do something that WOULD hurt SEO, in which case can you provide references on that?

    Read the article
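
    For comparison, the same permanent redirect can be expressed declaratively with the IIS URL Rewrite module (assuming that module is installed) rather than a hand-written HttpModule; a web.config sketch:

        <!-- 301 /products/18554-widget.html -> /products/18554-widget -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Strip .html extension" stopProcessing="true">
                <match url="^(.+)\.html$" />
                <action type="Redirect" url="{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The argument usually made to management is that a single 301 per old URL is the mechanism Google itself recommends for moved content, so the extensionless URLs inherit the indexing signals of their .html twins.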

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and google results to report: A description for this result is not available because of this site's robots.txt – learn more. We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message. The same appears to be true for Bing. We've taken the following actions: submitted the site to be recrawled through Google Webmaster Tools, and submitted a sitemap to Google (basically doing everything possible to say "Hey we're here! and we're crawlable!"). Indeed a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed here. This redirect is due to a site redesign: wordpress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to wordpress for /blog to keep our google juice. However, the sitemap we submitted might have /blog URLs. I'm not sure how much this matters. We clearly want to preserve google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content - including the URLs linked to our site from both before and after the redesign - reporting descriptions again? How can we resolve this problem and get our search traffic back to where it used to be?

    Read the article
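
    For reference, a robots.txt that allows full crawling and advertises the sitemap looks like the sketch below (the domain is a placeholder); an empty Disallow line means "nothing is blocked":

        User-agent: *
        Disallow:

        Sitemap: http://www.example.com/sitemap.xml

    Beyond that, requesting the affected URLs through "Fetch as Googlebot" in Webmaster Tools and resubmitting the sitemap are the usual levers for hurrying the re-crawl; the stale "description is not available" message tends to persist until the pages are actually re-fetched.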

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us, that involves as little work as possible to implement from their side. We both have Google Analytics. Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and probably we will have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately. Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through the whole sign up process just to deal with us and pay us commission. I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign up and installation procedure is simple and fast.

    Read the article
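
    Whichever reporting route is picked, the arithmetic on top of the referral report is simple; a sketch (the CSV columns, the referrer value, and the 10% rate are all illustrative assumptions, not anything Google Analytics guarantees to export):

        # commission.py -- tally commission owed from an exported report of referred sales
        import csv

        COMMISSION_RATE = 0.10          # agreed rate, example value

        def commission_due(csv_path, referrer="ourclothingsite.example"):
            total = 0.0
            with open(csv_path, newline="") as f:
                for row in csv.DictReader(f):
                    if row["source"] == referrer:      # keep only our referred sales
                        total += float(row["revenue"])
            return total * COMMISSION_RATE

        if __name__ == "__main__":
            print("Commission owed: %.2f" % commission_due("referred_sales.csv"))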

  • Unified data source for k2 installed Joomla websites

    - by Özkan ÖZLÜ
    I am responsible for a few of my organization's web sites. I use Joomla! 2.5.9 for those web sites, and they all run on the same server. I use the K2 component for content management. I have a general website that shows all the staff information on its 'Staff' page. Some of those people and their content are also shown on another department's website. There is a separate database for each web site. For example: on the general website (let's say general.org), when I click the 'Staff' menu item, the page shows all of the people who work at my organization, across different departments. On another web site (e.g. education.general.org), when I click the 'Staff' menu item, it shows the people who work in the education department. But each web site has its own user accounts, which means a modification in one of them does not affect the others. If one of the education staff changes his profile picture on the education web site, he also has to do it on the general web site. And sometimes one person works in two departments, so he has to edit his data three times. Is it possible to merge the records for all websites? In other words, I want everyone to insert/update their data on the general web site, and have the other web sites updated automatically.

    Read the article

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk and I want to know if there is any way to access and view a website (with PHP and other processes functional) before the domain name has propagated. I have found countless forums on this, but they are all pretty old (circa 01-04) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carried out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com The only solution I could think of is storing the site in a new directory of an already functional site, then setting up databases and testing the site once it's complete. Once tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up the new databases, and then propagate the domain name. I can't believe that Plesk V10+ still does not have a site preview method that includes PHP, JS, and Flash ability.

    Read the article
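
    Failing a built-in Plesk preview, the usual workaround is to point only your own workstation at the new server by name before DNS changes, so PHP, virtual-host matching, and database connections all behave exactly as they will after propagation. A sketch (the IP is a placeholder):

        # /etc/hosts on the tester's machine
        # (C:\Windows\System32\drivers\etc\hosts on Windows); remove once DNS is live
        203.0.113.10    mydomain.com www.mydomain.com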

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as a reverse proxy which proxies requests to Apache (v2.2.19). So Nginx runs on port 80, Apache is on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example I have 'my-protected-directory' on 'www.site.com'. Inside it I have an .htaccess file with the following code:
        <Files *>
        order deny,allow
        deny from all
        allow from 1.2.3.4   <--- my ip address here
        </Files>
    When I try to access this page with my IP (1.2.3.4) I get a 404 error, which is not what I expect: http://www.site.com/my-protected-directory However everything works as expected when this page is served directly by Apache. I can see this page, everyone else can't. http://www.site.com:8080/my-protected-directory Update. Nginx config (7.1.3.7 is the site IP):
        user apache;
        worker_processes 4;
        error_log logs/error.log;
        pid logs/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;
            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;
                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }
                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }
    Could anyone please tell me what is wrong and how this can be fixed?

    Read the article
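
    One detail worth checking in this kind of setup: once requests arrive through the nginx proxy, Apache sees every connection as coming from the server's own address, so an IP-based "allow from" in .htaccess can never match the real visitor. The posted nginx config already forwards the client address in X-Forwarded-For, so a possible fix (sketched with the address from the question, and assuming mod_rpaf is available for Apache 2.2) is to restore REMOTE_ADDR from that header on the Apache side:

        # Apache 2.2 + mod_rpaf sketch -- trust X-Forwarded-For from the local nginx only
        RPAFenable     On
        RPAFproxy_ips  127.0.0.1 7.1.3.7
        RPAFheader     X-Forwarded-For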

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups, but I'm unable to decide if I should associate a separate subdomain or simply a subfolder with each group:
    Subfolders (www.site.com/groups/my-group) - Pros: Easier to maintain from a technical p.o.v. Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/)
    Subdomains (my-group.site.com) - Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site. Cons: Group and user names belong to the same name space, so we need to check for collisions when creating a new user/group. One cannot determine the content of a page by only reading the URL: is x.site.com a user page or a group page?
    What's your opinion on the matter? I should note that DeviantArt.com uses the 2nd option (that's where I got the idea). Thank you in advance!

    Read the article

  • Where to ask a question about startups?

    - by Wolfpack'08
    I've got some questions about how to better run my web application development business, which has only been running for a little more than two years. It's still in its early phases, as I consider the first five years the 'early years'. Being inexperienced with business in general, I always have a lot of questions as to whether I am making the right decisions (for example, I often worry about my hiring practices, and I often worry that I may have priced new products wrongly). Is there a good site on the Stack Exchange to ask questions about things like this (for example, this site, the Project Management site, the Salesforce site, or perhaps the Personal Finance site)? I'm combing through questions and answers on each of these sites, now, and I can see questions that mimic my own. Nothing precisely the same, but things that are similar on all sites. Apart from just reading through previously asked questions, what is a good way to get a sense of whether or not my question fits on a site in the exchange? If you recommend going out of the exchange, please also let me know.

    Read the article

  • Drupal migration failed

    - by Marco
    First of all, I'm new to Drupal and the work I have to do is rather over my head. My old colleague (the webmaster) had a server with a multisite Drupal 6 installation. The sites and their directories were (e.g.):
        b.a.mycompany.com -> /drupal_install_dir/sites/b.a.mycompany.com
        c.a.mycompany.com -> /drupal_install_dir/sites/c.a.mycompany.com
        d.a.mycompany.com -> /drupal_install_dir/sites/d.a.mycompany.com
    Unluckily my colleague has moved on and the server hard disks aren't in my hands: all I have is a backup of /drupal_install_dir and three SQL dumps (one for each site). I have to restore the three sites, but changed to
        z.mycompany.com/b
        z.mycompany.com/c
        z.mycompany.com/d
    Being a sysadmin, I extracted the tar.gz backup under the wwwroot (let's call the full path to the extracted directory /new_install_dir), restored the three databases, and created MySQL users with the correct GRANTs on each database. Then (trying to restore at least the first site) I changed /new_install_dir/sites/settings.php, putting in the correct database connection data and the new base path. But there is no way I can see my new site; it simply doesn't work. Watching /var/log/apache2/error.log I saw Drupal searching for the main Drupal database, so I created that DB too, setting its user and grants, but the dump file is empty. Well, now I can run something like install.php or update.php, but my site is not shown. Is there something I can do? Do I have to take another approach? I've searched the web, but I'm not able to find a guide that covers my problem. Ah, I forgot: before producing the backup, my colleague put the site in maintenance mode. When I try to load z.mycompany.com/?q=user (trying to log in), nothing happens. I'm really stuck...

    Read the article
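
    For a Drupal 6 multisite whose addresses change to path-based URLs, each site normally gets a sites/ directory named after the new address with the slash turned into a dot, and that directory's settings.php carries both the database URL and the new $base_url. A sketch for the first site (credentials and database name are placeholders):

        <?php
        // sites/z.mycompany.com.b/settings.php -- Drupal 6, sketch only
        $db_url   = 'mysqli://b_user:b_password@localhost/b_database';
        $base_url = 'http://z.mycompany.com/b';   // note: no trailing slash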

  • Best CMS for review-type sites

    - by Pru
    Is there an ideal CMS for making a review site? By review site, I mean like a restaurant review site where you have each entry belonging to different major categories like Cuisine and City. Then users can browse and filter by each or by combination (Chinese Food in Los Angeles, with suggestions of other Chinese restaurants in LA, etc.). Furthermore, I'd want it to support other fields like price, parking, kid-friendliness, etc., and to have users be able to filter by those criteria. I've been told that with a combination of custom taxonomies, plug-ins and many clever little queries, Wordpress 3.x can handle this. But I'm having a heck of a time with it getting into the nitty gritty, and that's where I find the community support is lacking. The sort of stuff you'd think would work in WP, like making one parent category for Cuisine and one for City, doesn't really work once you get further in and start trying to pull it all together. Then you find these blog posts where people say, "This example shows that one could create a huge movie review site using custom taxonomies..." but when you go and try it you hit all sorts of challenges and oddities that point a big long finger at Wordpress being in fact a blogging platform. The best I came up with was one category for the cuisine and one tag for the city, then I created a couple of custom tag-like taxonomies for the other features. It's quite a mess to try to figure out how to assemble all of that into a natural, intuitive site. I expect a few versions down the road WP will be able to do these sorts of sites out of the box. So I thought I'd take a step back before I run back into the Wordpress fray and find out if maybe there is another platform better suited to this sort of relational content site. Directory scripts in some ways offer many of the features I'm looking for, but I need something more flexible and, hopefully, interactive (comments, reviews). I'm especially looking for feedback from people who've crafted sites like this. Thanks!

    Read the article
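
    If the WordPress route gets another look, the pieces that usually carry a site like this are a custom post type plus one taxonomy per facet, registered from a plugin or functions.php; a sketch (the 'restaurant', 'cuisine', and 'city' names are illustrative):

        <?php
        // functions.php sketch -- WordPress 3.x custom post type with two facet taxonomies
        function myreviews_register_types() {
            register_post_type('restaurant', array(
                'public'   => true,
                'label'    => 'Restaurants',
                'supports' => array('title', 'editor', 'thumbnail', 'comments'),
            ));
            register_taxonomy('cuisine', 'restaurant',
                array('hierarchical' => true, 'label' => 'Cuisine'));
            register_taxonomy('city', 'restaurant',
                array('hierarchical' => true, 'label' => 'City'));
        }
        add_action('init', 'myreviews_register_types');

    Price range, parking, and kid-friendliness fit the same pattern as further taxonomies (or as post meta), and a combined filter such as "Chinese food in Los Angeles" then comes down to a WP_Query with a tax_query on both terms.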

  • Protecting a webpage with an authentication form

    - by Luke
    I have created an employee webpage with a lot of company info, links, etc., but I want to protect the page because it contains some confidential company information. I am running IIS 7.5 on Windows Server 2008 R2, and I already have the site set up as a normal, non-protected site. I want all Active Directory users to have access to the site. This is not an intranet site; it is exposed to the internet. I tried setting it up using Windows Authentication, but I had problems with multiple login prompts, etc. I just want a simple form for users to enter their credentials and have access to the site, and I need it to query AD for the login. I've searched the web for a guide on this, but I can't seem to find one that fits my situation. This is not a web app, it is just a simple HTML site. Does anyone have any suggestions or a link to a guide on this? Thanks so much! -LB

    Read the article

  • All email directed to 3rd party vendor except for one specific domain. How?

    - by jherlitz
    So we set up a site-to-site VPN tunnel with another company. We then proceeded to set up a DNS zone on each other's DNS servers and entered each other's mail server name and IP, MX record, and WWW record. This allowed us to send emails to each other's mail servers through the site-to-site VPN. Now recently the other company started using MX Logic to scan all outbound and incoming mail, so all their outbound email is directed to MX Logic. However, we still want email between us to travel across the site-to-site VPN tunnel. How can we specify that mail for just this one domain is not directed to MX Logic? Stumped on both ends and looking for help.

    Read the article

  • Programming ... where to start?

    - by agnesb
    For the last 4 months, I've been working tirelessly on a project with my partner, who is a super programmer. He did 100% of the mechanism that makes our site work. My job is to take care of the cosmetic aspects of the site, so I should say I am good enough at CSS and HTML. However, since we are using Drupal to build our site, from time to time I need his help to figure out how to do the customization. Sometimes I get frustrated. I know that as a partner I should know a little bit about how to program. However, during crunch time, when you have to deliver lightning fast (we had our site built from scratch to finish in 4 weeks ... and you are all welcome to come join the fun! It's a site for programmers!), there is no time to learn from the basics. All I can do is pick up whatever I need at the moment. Now that the site is launched, I am thinking it's time to do some learning. So, where should I start? My partner always said I need to start with Python. What's your take on this? Thanks.

    Read the article
