Search Results

Search found 37688 results on 1508 pages for 'site search'.

  • IE8 Unable to download files

    - by jetgunner
    I recently installed Windows 7. I can browse to any webpage using IE8, but if I click on any links to download files, I receive the following error: "Unable to download [filename] from [website]. Unable to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later." I can download files perfectly fine using Firefox; it's just IE that is having issues. There are no messages in the Windows event log. I have no add-ins installed and have made no security changes, as this is a fresh install. Any ideas?

  • IIS - Script for repeated hacks on a website

    - by dodegaard
    I currently have a site that is armored by ELMAH as its reporting mechanism. Each time someone hits a URL that is incorrect, it notifies me or logs to the system. This is annoying for someone fat-fingering the URL with a misspelling, but great when a hacker is trying to crack a site of mine. Has anyone ever written a script for IIS 7 on Win 2K8 that blocks an IP based on repeated attempts to hit a website? I've looked at Snort and other IDS systems, but if I could get a script that could be linked to my ELMAH system, it might be the perfect thing. PowerShell is what I was thinking of. Hints and recommendations are welcome, and if you think a true intrusion detection system is the way to go, give me your ideas. Thanks in advance.
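
    For illustration, a minimal PowerShell sketch of the blocking half (assuming the WebAdministration module and the IIS "IP and Domain Restrictions" feature are installed; the site name, the IP, and how ELMAH hands over the address are all placeholders):

        Import-Module WebAdministration

        # Hypothetical: an IP that ELMAH has flagged for repeated bad requests
        $badIp = "203.0.113.7"

        # Append a deny entry to the site's IP Security list
        Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
            -Location 'Default Web Site' `
            -Filter 'system.webServer/security/ipSecurity' `
            -Name '.' -Value @{ ipAddress = $badIp; allowed = $false }

    A scheduled task could run something like this against the ELMAH log on whatever threshold seems sensible.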

  • IIS not using available memory?

    - by Herb Caudill
    Recently launched an ASP.NET site running on a single 32-bit WS2003 box (SQL on a separate server). The server has 4GB installed, 3GB available. According to Task Manager, the w3wp.exe process is only using between 200 and 600MB. The site has tens of thousands of pages and makes heavy use of page output caching, so I would expect it to use a lot more of the available memory. The app pool isn't set to throttle memory usage. Is there anything else that might be limiting the amount of memory that IIS takes?

  • Startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace Cloud Servers or EC2 and firing up 3 nodes (all on CentOS): 2 x web (Apache) behind a load balancer, and 1 x MySQL (for the WordPress-powered part). The question is where to put Cassandra right now. Should it sit on each web node, or on the MySQL node? My thought right now is to put it on the web nodes. It's my understanding that Cassandra has the benefit of fault tolerance (i.e. if we take a node down, the site is still operational). So even with only 2 nodes, we'd have that benefit, as opposed to just putting it on the MySQL node. Also, as we scale up and add another node, a Cassandra instance can come along with it and the PHP can always run its queries on localhost. Is this a good idea?

  • How to set up a custom 401 error page or redirect in WSS3 SP2

    - by Stacy Vicknair
    I've got a WSS3 SharePoint site that requires Windows authentication both in IIS and via the SharePoint site. What I would like to do is, in the case that a user does not provide valid AD credentials, redirect them to a custom error page. Currently, if the user immediately hits cancel when prompted, they will see a plain-text response of "401 UNAUTHORIZED". If they make an attempt and then hit cancel, they instead see a blank page. I have looked into several options, such as customErrors, httpModule interception (I only saw examples for this after the user is authenticated), and IIS URL rewrites (I didn't see how this could help). Is there a good way to do this?
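
    For illustration, a sketch of one possibility (assuming the site runs on IIS 7 so the httpErrors section is available; the /errors/401.html page is hypothetical, and replacing 401 responses can interfere with the NTLM challenge/response handshake, so it needs testing against the login prompt flow):

        <system.webServer>
          <httpErrors errorMode="Custom">
            <remove statusCode="401" subStatusCode="-1" />
            <error statusCode="401" path="/errors/401.html" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>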

  • Backing up a Linux VPS with rsync to Vista

    - by Frank
    I've been working to set up a Linux VPS to host a couple of WordPress sites and eventually a Mercurial server. I've set up one site and things have gone well. However, before I start moving other things to the VPS, I need to set up a backup solution. My provider, Linode, suggests rsync (among a couple of other options) for backups. I've seen a few posts on this site that suggest other backup solutions, including going to the Amazon cloud, but that costs money and the VPS is all the money I want to spend on this for the time being. So instead, I want my backup computer to be my home desktop. Assuming I'm using rsync, is it possible to use my Vista-based home computer as the destination for the backup? And if it is possible, what type of command or connection would I need to configure on the Vista machine? Any insight would be helpful. It's probably obvious, but I've never used rsync.
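
    For illustration, a sketch of the pull-style approach (assuming an rsync port such as cwRsync or DeltaCopy is installed on the Vista machine and the VPS is reachable over SSH; the user, host, and paths are placeholders):

        rem Run on the Vista box: pull the VPS web root into a local backup folder
        rsync -avz -e ssh backupuser@my-vps.example.com:/var/www/ /cygdrive/c/backups/vps/

    Pulling from the desktop side means the VPS needs no knowledge of the home machine, which also sidesteps the home IP address changing.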

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are reporting an incomplete chain. We have tried the following to resolve this:

    - Upgraded HAProxy to the latest 1.5.3
    - Created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root)
    - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg

    The ".pem" file verifies OK using openssl, and the various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone suggest anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance. UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one: bind *:443 ssl crt /etc/ssl/domainaname.com.pem
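
    For illustration, the assembly order that usually satisfies chain checkers (a sketch; file names are placeholders): the leaf certificate first, then intermediates ordered from the leaf toward the root, then the private key, with the self-signed root itself normally left out since clients already have it:

        # order matters: leaf cert, then intermediate(s), then the key
        cat site.crt comodo_intermediate.crt site.key > /etc/ssl/domainaname.com.pem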

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach, given its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. I fully understand the need not to depend on whitelisting alone to keep the site away from harm; other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.

  • Building a List of All SharePoint Timer Jobs Programmatically in C#

    - by Damon Armstrong
    One of the most frustrating things about SharePoint is that the difficulty in figuring something out is inversely proportional to the simplicity of what you are trying to accomplish.  Case in point: yesterday I wanted to get a list of all the timer jobs in SharePoint.  Having never done this, nor having any idea of exactly how to do it right off the top of my head, I inquired to Google.  I like to think my Google-fu is fair to good, so I normally find exactly what I'm looking for in the first hit.  But on the topic of listing all SharePoint timer jobs, all it came up with was a PowerShell command (Get-SPTimerJob) and nothing more. Refined search after refined search continued to turn up nothing. So apparently I am the only person on the planet who needs to get a list of the timer jobs in C#.  In case you are the second person on the planet who needs to do this, the code to do so follows:

        using System.Collections.Generic;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            var timerJobs = new List<SPJobDefinition>();

            // Central Admin's job definitions come from SPAdministrationWebApplication directly
            foreach (SPJobDefinition job in SPAdministrationWebApplication.Local.JobDefinitions)
            {
                timerJobs.Add(job);
            }

            // Job definitions from every other service in the farm
            foreach (SPService curService in SPFarm.Local.Services)
            {
                foreach (SPJobDefinition job in curService.JobDefinitions)
                {
                    timerJobs.Add(job);
                }
            }
        });

    For reference, you have the two foreach loops because the Central Admin web application doesn't end up being in the SPFarm.Local.Services group, so you have to get it manually from the SPAdministrationWebApplication.Local reference.

  • How long does it take for a server to get 'off greylisting'?

    - by Michael
    Hi all, I asked a question regarding email delays a few months ago, and I think I found a workaround. I changed our email from "[email protected]" to "[email protected]", and it seems to work instantly again. After reading some articles, I believe this could be due to some form of greylisting, though some servers might call it something else -- if a server like Yahoo or Gmail receives email from a server that it is not used to receiving email from, then sometimes the delay occurs. But with a sender at a well-known domain like yahoo.com or gmail.com, which requires a user to sign up manually, this delay can be avoided. My question is this: does anyone know more about this issue -- especially since it would be nice to send email from our own site, instead of needing to use a whitelisted server? Thanks!

  • Enterprise Portal Issue with the Ax Demo VPCs

    - by ssmantha
    Microsoft’s Ax Demo VPC is configured for a static IP address, 192.168.0.1; this is because the VPC has a Domain Controller configured in it, which requires a static IP. When we put this VPC on a network with a different subnet and change the IP, you will observe that the sites http://sharepoint and http://sharepoint/EP cease to function and show "Page Not Found" errors in the browser. This is mainly due to the DNS configuration, which is not updated. The screenshots below show the changes that need to be made to get the sites functioning properly. Change the following entries in the Forward Lookup Zones of DNS management: the websites default, SharePoint and projectserver are all mapped to a single port in IIS, i.e. port number 80, and are distinguished by host headers. These host headers are configured in DNS, and their entries end up with incorrect IP addresses when you change the IP address of the VPC. Just change these values to point to the loopback adapter (127.0.0.1), and change the DNS to point to this address in the TCP/IP properties, as shown below. This will resolve the issue with the website rendering. Initially you may get timeout errors while browsing these websites; be patient and try again, and it should work.

  • Adding multiple websites with different SSL certificates in IIS 7

    - by Timka
    I'm having trouble using SSL for 2 different websites on my IIS 7 server. Please see my setup below:

        website1: my.corporate.portal.com
        SSL certificate for website1: *.corporate.portal.com
        https/443 bound to my.corporate.portal.com

        website2: client.portal.com
        SSL certificate issued for: client.portal.com

    When I try to bind https in IIS 7 with the client's certificate, I don't have the option to enter a host name (it's grayed out), and as soon as I select the 'client.portal.com' cert, I get the following error in IIS: "At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?" If I click 'yes', the my.corporate.portal.com website stops using the proper SSL cert. Could you suggest something?
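
    For illustration, the underlying constraint and one hedged workaround (a sketch; the IP is a placeholder): IIS 7 has no SNI support, so two different certificates can't share a single IP:443 binding -- the usual options are a second IP address, a SAN/wildcard certificate covering both names, or IIS 8+ with SNI. Given a spare IP, appcmd can pin each site's HTTPS binding to its own address, after which each binding can carry its own certificate:

        rem give website2 its own IP so it can carry its own certificate
        appcmd set site /site.name:"website2" /+bindings.[protocol='https',bindingInformation='10.0.0.2:443:']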

  • nginx connection reset

    - by Steve
    When first visiting my site after not visiting it for a few minutes, the connection is "reset" 100% of the time. With debug turned on, I get this message, along with a 400 Bad Request status: "client prematurely closed connection while reading client request line". I've read that this could be caused by the large_client_header_buffers setting. I have Google Analytics on my site. Using Live HTTP Headers, I get this as the request:

        GET /__utm.gif?utmwv=5.3.7&utms=35&utmn=745612186&utmhn=domain.com&utmcs=UTF-8&utmsr=1920x1080&utmvp=1841x903&utmsc=24-bit&utmul=en-us&utmje=1&utmfl=11.4%20r402&utmdt=2006Scape%20Forums%20-%20General&utmhid=2004697163&utmr=0&utmp=%2Fservices%2Fforums%2Fboard.ws%3F3%2C4&utmac=UA-25674897-2&utmcc=__utma%3D68455186.1647889527.1351640625.1352446442.1352451659.100%3B%2B__utmz%3D68455186.1352097329.64.2.utmcsr%3Ddomain.com%7Cutmccn%3D(referral)%7Cutmcmd%3Dreferral%7Cutmcct%3D%2Fservices%2Fforums%2Fboard.ws%3B&utmu=q~ HTTP/1.1

    My large_client_header_buffers in nginx is set to 4 8k, so I don't know if this is the problem. Requests immediately after the first "reset" request are all successful.
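
    For illustration, the directive in question (a sketch; the values are guesses intended to rule the setting in or out, not a verified fix -- the beacon URL above is well under 8k, so the buffers may not be the culprit):

        http {
            # 4 buffers of 16k each for long request lines such as the __utm.gif beacon
            large_client_header_buffers 4 16k;
        }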

  • nslookup gives wrong IP for my domain

    - by Werulz
    I am having some problems trying to set up DNS for my domain on my server. This tutorial normally works fine for me, but when I tried to look up my domain it gave the following output:

        Server:   4.2.2.1
        Address:  4.2.2.1#53

        Non-authoritative answer:
        119.100.79.64.in-addr.arpa  name = server.leech4ever.com.

        Authoritative answers can be found from:

    The server and the address are wrong according to the tutorial. Here is the tutorial: http://webcache.googleusercontent.com/search?q=cache:rR7Z4YU4GI0J:www.broexperts.com/2012/03/linux-dns-bind-configuration-on-centos-6-2/+broexperts+bind&cd=1&hl=en&ct=clnk&gl=mu

        /etc/hosts:
        127.0.0.1 localhost
        64.79.100.119 server.leech4ever.com server

        /etc/resolve.conf:
        search leech4ever.com
        nameserver 64.79.100.119

        /etc/resolv.conf:
        nameserver 4.2.2.1
        nameserver 4.2.2.2

    How do I solve this problem, guys? The tutorial was flawless until I did a server restore.

  • Unable to view 2 local sites over network

    - by gentrobot
    I have 2 websites running on my local machine that I'd like to view from other machines on the same network. In /etc/apache2/sites-available/site1.com:

        <VirtualHost *:80>
            ServerName site1.com
            DocumentRoot /var/www/answers/app/webroot
            DirectoryIndex index.php
            <Directory "/var/www/answers/app/webroot">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    And in /etc/apache2/sites-available/site2.com:

        <VirtualHost *:80>
            ServerName site2.com
            DocumentRoot /var/www/answers2/app/webroot
            DirectoryIndex index.php
            <Directory "/var/www/answers2/app/webroot">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have added 2 entries to /etc/hosts:

        127.0.0.1 site1.com
        127.0.0.1 site2.com

    Now, when I point the browser on my machine to site1.com, it shows me the first site, and pointing the browser to site2.com shows me the second site. However, when I type the local IP of my machine into the browser, it always shows site2. How can I make it switch between site1 and site2? Is there a way I can view both sites from another machine (esp. mobile devices over the wireless network)?
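
    For illustration, a hedged sketch for the second question (192.168.1.10 stands in for this machine's LAN address): name-based virtual hosts are selected by the request's Host header, so other machines need to resolve both names to this machine, e.g. in their own hosts files:

        # hosts file on the other machine (/etc/hosts, or C:\Windows\System32\drivers\etc\hosts)
        192.168.1.10 site1.com
        192.168.1.10 site2.com

    For mobile devices where the hosts file isn't editable, pointing the router's DNS at entries like these is the usual workaround. Browsing by bare IP sends no matching Host name, so Apache falls back to whichever VirtualHost it loaded first for that address.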

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence, they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

        /project1/search/getstuff?etc...
        /project1/analysis/getstuff?etc...
        /project1/cluster/getstuff?etc...
        /project2/search/getstuff?etc...
        /project2/analysis/getstuff?etc...
        /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class:

        class ProjectClient
        {
            SearchClient _searchclient;
            AnalysisClient _analysisclient;
            ClusterClient _clusterclient;

            public string Project { get; set; }

            public ProjectClient(string project)
            {
                Project = project;
            }
        }

    SearchClient, AnalysisClient and ClusterClient are my classes to support the respective services shown above. The problem with this approach is that ProjectClient will need to provide public methods for each of the APIs exposed by SearchClient, etc.:

        public void SearchGetStuff()
        {
            _searchclient.GetStuff();
        }

    Any suggestions how I can architect this better?
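
    For illustration, one common alternative (a sketch; the sub-client constructors taking the project name are an assumption): expose the sub-clients as read-only properties, so ProjectClient stays a thin aggregate and needs no per-API forwarding methods:

        class ProjectClient
        {
            public string Project { get; }
            public SearchClient Search { get; }
            public AnalysisClient Analysis { get; }
            public ClusterClient Cluster { get; }

            public ProjectClient(string project)
            {
                Project = project;
                Search = new SearchClient(project);
                Analysis = new AnalysisClient(project);
                Cluster = new ClusterClient(project);
            }
        }

        // usage by the app developer:
        //   var client = new ProjectClient("project1");
        //   client.Search.GetStuff();
        //   client.Analysis.GetStuff();

    New APIs added to SearchClient then surface automatically, without ProjectClient changing.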

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:

    - Making changes on a development domain and committing the code frequently
    - Archiving the entire site after a successful deployment from the development domain
    - Having automatic daily database and user-content backups

    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
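
    For illustration, a minimal sketch of the daily-backup item from the list above (the database name, paths, and offsite host are placeholders):

        #!/bin/sh
        # nightly: dump the DB, archive user content, copy both offsite
        DATE=$(date +%F)
        mysqldump --single-transaction forumdb | gzip > /backups/db-$DATE.sql.gz
        tar czf /backups/content-$DATE.tar.gz /var/www/site/uploads
        rsync -a /backups/ backupuser@offsite.example.com:/backups/site/

    Keeping sqldumps in git still works alongside this: commit the uncompressed dump (git delta-compresses text well) and let the binary user content go through tar/rsync instead.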

  • Enable Diagnostics in Oracle apps

    - by PRajkumar
    How do you enable Oracle Apps Diagnostics -> Examine for certain users?

    Step 1. Navigate to System Administrator responsibility > Profile > System.

    Step 2. Enter the profile name Utilities:Diagnostics and the application user for whom you want to enable Diagnostics -> Examine.

    Step 3. Set Yes at the User level and save the changes. Note - you can set Yes at the Site level instead if you want to enable this option for all Oracle application users.

    Step 4. Again navigate to System Administrator responsibility > Profile > System. Enter the profile name Hide Diagnostics menu entry and the application user for whom you do not want to hide the Diagnostics menu entry.

    Step 5. Set No at the User level and save the changes. Note - you can set No at the Site level instead if you do not want to hide the menu entry for all Oracle application users.

    Step 6. Congratulations, you have successfully enabled Diagnostics -> Examine. Log out of Oracle Applications and log in again; you can now see the Diagnostics -> Examine option.

  • Sticky Load Balancing with AWS

    - by John Wheal
    I have just set up a load balancer with AWS for a few instances, as search engine crawlers were bringing down the site (it has millions of pages). Parts of the site allow you to log in, so I selected "Enable Application Generated Cookie Stickiness" and everything works fine. I now wonder how this will affect my SEO and the crawlers. As I selected sticky load balancing, does this mean that a crawler will be stuck on one server and therefore defeat the point of the load balancer? Any recommendations will be appreciated.

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering if a framework is necessary, or a must-have, if I plan to make a large website. "Large website" could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment, but it will be entirely managed by the hosting company; I am pretty sure the control panel for me to manage the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, it just seems too complicated to take on a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

  • What is the R Language?

    - by TATWORTH
    I encountered the R language recently in O'Reilly books, and while I knew from the context that it was a language for dealing with statistics, a web search for its support site was initially futile. I have now located the web site: http://www.r-project.org/

    R is a free language available for a number of platforms, including Windows. CRAN mirrors are available at a number of locations worldwide. Here is the official description:

    "R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS."
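
    For a quick taste of the "statistical computing and graphics" the description mentions, a minimal sketch using only base R:

        # fit a simple linear model to noisy data and plot it
        x <- 1:10
        y <- 2 * x + rnorm(10)
        fit <- lm(y ~ x)
        summary(fit)   # classical regression output
        plot(x, y)     # base graphics...
        abline(fit)    # ...with the fitted line overlaid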

  • Browser which shows what's been downloaded so far, regardless of formatting and layout?

    - by Mick
    Every now and then a website becomes super-slow (but not broken) because too many people are looking at it at the same time. When I try to view such a site, say with Firefox, I can see that it is downloading all sorts of components of the site, because of the progress information printed at the bottom of the window, and I sit there thinking: "If only the browser would show me what it's got so far. I don't care if it's a jumbled mess, I just want to see what you've got." Does any browser offer such an option?

  • How do you QA and release software quickly with a large team?

    - by sadadasd
    My workplace used to have a smaller team. We had fewer than 13 devs for a while. We are now growing rapidly, and are over 20, with plans to be over 30 in a few months. Our process for QA'ing and releasing each build is no longer working. Currently, everyone develops the new code and puts it onto a staging environment. A few days before our weekly release, we freeze the staging environment and QA everything. By our normal release time, everything is usually deemed acceptable and pushed out the door to the main site. We reached a point where our code got too big, so we could no longer regression-test the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hours behind the normal release time. As the team grows further, we are going to drown under this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same.

  • How to whitelist a user agent for nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be shown a password prompt. In my naivety, I tried to put the following before "deny all":

        if ($http_user_agent ~* SpecialAgent) {
            allow;
        }

    but I'm told the "allow" directive is not allowed there (!). How can I make it work? A chunk of my config file:

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;

            allow 123.456.789.123;
            deny all;
            satisfy any;

            #other stuff...
        }

    Thanks for any help.
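
    For illustration, one commonly suggested pattern (a sketch; it assumes an nginx recent enough for auth_basic to accept a variable, and the map block must sit at the http level): map the user agent to either the realm string or the special value "off", so matching agents skip the password prompt entirely:

        # http-level: pick the auth realm per user agent
        map $http_user_agent $auth_realm {
            default         "Restricted";
            ~*SpecialAgent  off;          # whitelisted agent: basic auth disabled
        }

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic           $auth_realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }

    Worth remembering that the User-Agent header is trivially spoofable, so this gates convenience, not security.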

  • Artists and music - I need help deciding which CMS to use

    - by infty
    A friend has asked me to build a site with the following options:

    - staff members must be able to add new music and artists to the page
    - a gallery must be provided - it would also be good if each artist could have his/her own, smaller gallery
    - users must be able to vote for artists
    - users must be able to take part in discussions (forums or comments sections)
    - staff members must be able to blog
    - staff members must be able to write articles

    I did a small project where I actually implemented all of these features, but I want to use an existing content management system for them, so that future developers can, hopefully, extend the website more easily - and also so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after seeing hours of tutorial videos on both systems, I still can't make up my mind whether I should:

    a) use Drupal 7
    b) use WordPress 3
    c) create my own CMS

    I can only imagine that staff members would also want to create content using iPhone- or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with bigger projects like this? The site will have approximately 400,000-500,000 visitors in total (not daily visitors; based on numbers from a 4-month period last year).
