Search Results

Search found 14156 results on 567 pages for 'index maintenance'.


  • First of all, nice work. How do I redirect the URLs of old, modified directories?

    - by kath
    I really appreciate your hard work. After searching a lot of the web I cannot find the answer, so if you get time please take a look. The site is a classifieds-ads website written in PHP. I edited a category name, but the old URL still shows up in Google's index and even in the browser, so is there any way to redirect it to the new one? I tried .htaccess but couldn't get the result. Here are both URLs. Before modification: http://adsbuz.com/vehicles-cars/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm After editing the category name: http://adsbuz.com/vehicles-cars-for-sale/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm (the category segment was "vehicles-cars" before editing and is "vehicles-cars-for-sale" after). As you can see, both URLs open, which is not good for SEO. Also, is there a way that when someone opens the wrong URL, the page opens only under the correct URL automatically, just like on your site? Consider me new in this market; I'd appreciate a little help here. Thanks. Really appreciate your quick response. Thanks, kath
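
    A minimal .htaccess sketch for this kind of rename, assuming mod_rewrite is enabled and the only change is the first path segment (the paths come from the two URLs above):

        RewriteEngine On
        # permanently redirect the old category segment to the new one
        RewriteRule ^vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1 [R=301,L]

    The 301 status also tells search engines to replace the old URL in their index over time, which addresses both the duplicate-URL and SEO concerns.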

    Read the article

  • Second virtual host on Apache redirects to root

    - by Slytherin
    I tried to set up my second virtual host, but I'm getting the default /var/www/index.html (the one that says "It works!"). I followed the same procedure as the first time, but this time it didn't work. My configuration looks like this:

        <VirtualHost *:80>
            ServerName messup
            ServerAlias messup.loc
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/messup
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    My hosts file is the following:

        127.0.0.1 localhost
        127.0.1.1 SlytherinPC
        127.0.0.1 AFS.loc
        127.0.0.1 messup.loc

    After this, Apache wouldn't restart; it gave no message other than [fail], but stop and start worked. What am I missing?
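
    A quick diagnostic sketch, assuming a Debian-style Apache layout (the site name is taken from the config above):

        apache2ctl configtest   # surfaces the error hidden behind [fail]
        apache2ctl -S           # lists which vhosts Apache actually loaded
        a2ensite messup         # enables the site if it only exists in sites-available
        service apache2 restart

    If configtest passes but the vhost still doesn't load, check that NameVirtualHost *:80 (Apache 2.2) is declared exactly once and that the new file really landed in sites-enabled.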

    Read the article

  • Can't deploy an ejb bean on jboss

    - by leo
    I am trying to deploy a jar to a JBoss server. It works in my environment, but when I deploy the same jar on another server I keep getting an error saying that the persistence unit is already registered. No other bean uses the same name or the same persistence unit name. I tried restarting the server and removing the tmp, work, and data directories, but I still get the same error. Here is the error:

        ObjectName: jboss.j2ee:service=EJB3,module=wess_jpa.jar
        State: FAILED
        Reason: java.lang.RuntimeException: javax.management.InstanceAlreadyExistsException:
          persistence.units:unitName=dses_wess already registered.

    This is almost identical to this issue in the JBoss forum, but there is no solution: http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4211687#4211687
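
    One thing worth checking, sketched below under the assumption of a JBoss 4/5 server-profile layout: an "already registered" error often means a second copy of the same persistence unit is being scanned from a stale or duplicate archive somewhere in the profile.

        # look for duplicate or stale copies of the jar anywhere in the profile
        find $JBOSS_HOME/server/default -name "*wess*"
        # also check deploy/ for an exploded copy left behind by an earlier deployment
        ls $JBOSS_HOME/server/default/deploy | grep -i wess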

    Read the article

  • How do you handle the task of changing the schema of a production MySQL database?

    - by Continuation
    One of the biggest complaints I have heard about MySQL is that it locks a table if you try to change its schema, such as adding a column or an index. By "locking the table", does that mean I can neither read from nor write to the table? Sometimes for hours? That seems like a pretty severe limitation. I was going to use MySQL for my new project, but this gives me pause. Is there a workaround? How do you handle the task of changing the schema of your production MySQL database? By the way, someone told me PostgreSQL doesn't have this problem. Is that true? Can I both read and write to a PostgreSQL table while changing its schema? Is there any performance penalty incurred? I'd love to hear your experiences.
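
    A common workaround is an online schema-change tool that copies the table in the background and swaps it in at the end. A minimal sketch using Percona's pt-online-schema-change (the database, table, and column names below are placeholders):

        # add a column without blocking reads/writes; triggers keep the copy in sync
        pt-online-schema-change --alter "ADD COLUMN last_login DATETIME" \
          D=mydb,t=users --execute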

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com

        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;;foo.example.com. IN A

        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE rcvd: 31

    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". I have ns1 (the master) plus ns2/ns3. Currently ns1 and ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point: DNS is supposed to be resilient. Now the odd part: "1.2.3.4" had been serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it was non-responsive. After investigation I saw the SERVFAIL message above. I looked into the zone file and saw the following:

        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (

    I wondered at this point whether the nameserver thought it was not authoritative for example.com, and adjusted it to the following:

        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (

    After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it in the future? I've never had a similar problem, and because I don't understand it well I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: other domains I'm authoritative for still say ns1.example.com. in their SOA, not ns3.example.com., and those domains are serving requests just fine. Is it a matter of time before they stop too and I have to change their SOA to ns3.example.com? Is this also only required because ns1 and ns2 are currently offline?
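
    A short diagnostic sketch, assuming standard BIND 9 tooling (the zone file path below is a placeholder):

        # verify the slave's copy of the zone parses and loads cleanly
        named-checkzone example.com /var/cache/bind/example.com.zone
        # compare the SOA serial the slave holds with what the master publishes
        dig @1.2.3.4 example.com SOA +norecurse

    One hypothesis consistent with the timeline: if the master has been unreachable longer than the zone's SOA expiry allows, a slave must stop answering for that zone and returns SERVFAIL. That would also explain why the other zones still answer; their expiry timers simply haven't elapsed yet.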

    Read the article

  • Drupal 7 on Windows - File Module Problems

    - by TimothyP
    I installed Drupal 7 using the Web Platform Installer on Windows Server 2008. For some reason the File module, when you upload a file, uses the first few letters of the filename as the unique key to store in the database, which of course causes problems very quickly. Does anybody have a workaround for this?

        An AJAX HTTP request terminated abnormally. Debugging information follows.
        Path: /file/ajax/field_file/und/0/form-EBMatHzV5cZXcWvXJtdADSdyw7Id9-GIpFM_NCJg_a4
        StatusText: n/a
        ResponseText: Error message
        PDOException: SQLSTATE[23000]: [Microsoft][SQL Server Native Client 10.0][SQL Server]
        Cannot insert duplicate key row in object 'dbo.file_managed' with unique index 'uri_unique'.
        in drupal_write_record() (line 6776 of ..........\includes\common.inc).
        Error: The website encountered an unexpected error. Please try again later.
        ReadyState: undefined

    (PS: I hope Super User is the right place to ask.)
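
    A quick check, sketched under the assumption of the stock Drupal 7 schema on SQL Server, to confirm whether the stored URIs really are being truncated (which is what would make them collide on the uri_unique index):

        -- inspect the most recent uploads and their stored URIs
        SELECT TOP 10 fid, filename, uri FROM dbo.file_managed ORDER BY fid DESC;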

    Read the article

  • How are suspected DoS attacks handled by webservers?

    - by Jan Kuboschek
    I rent a server somewhere out in Canada that I use to host a website of mine. That website has close to 400,000 pages that I wanted to index today. For that, I wrote a crawler a while back (see JCrawler on stackoverflow.com). Now, I was greedy and didn't want it to take too long, so I ran multiple threads, resulting in some 60+ requests per second from my IP. A couple of minutes later, my server locked me out. I can still FTP into it, but I can't reach it over HTTP. As a server administrator or user, do you have any idea how servers usually handle these situations? Is it common to place a permanent or temporary ban on the IP, or what is typically done? Naturally, I'll re-run my software with fewer requests once I'm back on.
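
    For reference, one common server-side policy is a temporary, automatic rate limit at the firewall. A minimal sketch, assuming iptables on Linux (the 20-connections-per-second threshold is an arbitrary example):

        # track new connections to port 80 per source IP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        # drop a source that opens more than 20 new connections within one second
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent \
          --update --seconds 1 --hitcount 20 -j DROP

    Bans like this usually expire on their own once the flood stops, which matches the temporary lockout described above.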

    Read the article

  • How to report a malicious site (http://newss.gr) to Google, Microsoft, and Mozilla so that they warn users about it

    - by Jayapal Chandran
    Hi, I completed a project a year ago, and now a few modifications were needed. While testing, I found an index.html containing a malicious script with an iframe pointing to this site's jar file; Kaspersky antivirus blocked it. I browsed the FTP, found the file, deleted it, and also disabled directory listing. The site owner's FTP details may have been hacked. I want to report this site to Google, MSN, Mozilla, and the antivirus vendors. How do I do that? Any idea? I expect Kaspersky already has it in their database, but I still want to report it explicitly. Here is the popup Kaspersky showed.

    Read the article

  • Why is this MySQL FULLTEXT query returning 0 rows when matching rows are present?

    - by Don MacAskill
    I have a MySQL table with 200M rows and a FULLTEXT index on two columns (Title, Body). When I run a simple FULLTEXT query in the default NATURAL LANGUAGE mode for some popular terms (they should return 2M+ rows), I get zero rows back:

        SELECT COUNT(*) FROM itemsearch
        WHERE MATCH (Title, Body) AGAINST ('fubar');

    But when I run the same query in BOOLEAN mode, I can see the rows in question do exist (I get 2M+ back, depending on the term):

        SELECT COUNT(*) FROM itemsearch
        WHERE MATCH (Title, Body) AGAINST ('+fubar' IN BOOLEAN MODE);

    Some queries that return ~500K rows work fine in either mode, so if it's related to result size, the cutoff is somewhere between 500K and a little north of 2M. I've tried playing with the various buffer size variables, to no avail. It's clearly not the 50% threshold, since we're not getting 100M rows back for any term. Any ideas?
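
    A small diagnostic sketch for ruling out configuration effects (these are standard MySQL server variables):

        -- inspect the full-text settings in force on this server
        SHOW VARIABLES LIKE 'ft%';

    Changes to ft_min_word_len or ft_stopword_file only take effect after the index is rebuilt, so a mismatch between the current variables and the index as originally built can make natural-language matching misbehave while boolean mode still finds the rows.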

    Read the article

  • IIS7.5 is only resolving to the Default Web Site

    - by Dennis Burnham
    I am able to access only the Default Web Site on a Windows 2008 R2 server running IIS 7.5. The Default Web Site's binding is "All Unassigned", the same way I configured it on a different machine running IIS 6 under Windows Server 2003. The bindings of the desired web site have the IP address of the server and the correct home directory. Regardless of what I do, the only page content I can see is the default index page in the wwwroot directory, which belongs to the Default Web Site. What must I do to deliver the correct content from the other web sites configured in IIS 7.5?
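
    A sketch for inspecting what IIS actually has bound where, assuming the stock appcmd location on Windows Server 2008 R2:

        REM list every site with its state and bindings
        %windir%\system32\inetsrv\appcmd list sites

    If the desired site shows a specific IP binding while requests arrive on a different address or an unmatched host header, IIS falls through to the Default Web Site's "All Unassigned" binding, which would produce exactly this behavior.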

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with respect to server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things for me.

    Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build (Visual C++ builds) our release packages. As such these machines are critical to our company, because we do lots of releases and without a controlled environment to create them we can't ship fixes. (And currently there's no proper backup of these machines in place, because they don't hold any data as such; it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of hardware failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the master's case, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones. And here begin my questions:

    Should I go for one VM per hardware box, or for a single machine running multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or does it?

    Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores plus disk) will be fully utilized, so running more than one build node per physical machine doesn't seem to make much sense?

    With respect to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • How to setup Proxy Cache with Nginx and Passenger

    - by tiny
    I use nginx and Passenger for my Rails application. I want to use the proxy cache to cache my pages, but every request goes directly to my Rails application. I don't know what's wrong with my configuration. Below is my configuration:

        user www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_max_pool_size 6;
            passenger_max_instances_per_app 1;
            passenger_pool_idle_time 0;
            rails_spawn_method conservative;

            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 512;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_http_version 1.0;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

            proxy_cache_path /var/www/cache/webapp levels=1:2 keys_zone=webapp:8m max_size=1000m inactive=600m;

            include vhosts/*.conf;
            include /opt/nginx/conf/sites-enabled/*;
            root /var/www;
        }

        server {
            listen 127.0.0.1:3008;
            server_name localhost;
            root /var/www/yoolk_web_app/public;  # <--- be sure to point to 'public'!
            passenger_enabled on;
            rails_env development;
            passenger_use_global_queue on;
        }

        server {
            listen 80;
            server_name webpage.dev;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            error_page 503 http://$host/maintenance.html;

            location ~* (css|js|png|jpe?g|gif|ico)$ {
                root /var/www/web_app/public;
                expires max;
            }

            location / {
                proxy_pass http://127.0.0.1:3008/;
                proxy_cache webapp;
                proxy_cache_valid 200 10m;
            }

            # More locations
        }
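
    One likely culprit, sketched below as an assumption to verify: Rails responses typically carry Set-Cookie and Cache-Control headers, and nginx's proxy cache refuses to store any response containing them. Ignoring those headers (only where that is safe for the pages involved) lets the cache engage:

        location / {
            proxy_pass http://127.0.0.1:3008/;
            proxy_cache webapp;
            proxy_cache_valid 200 10m;
            # assumption: these pages are safe to cache for all users
            proxy_ignore_headers Set-Cookie Cache-Control Expires;
            proxy_hide_header Set-Cookie;
        }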

    Read the article

  • mod_pagespeed drops headers

    - by kuboslav
    I have installed mod_pagespeed in Apache, and it drops the X-UA-Compatible header. When I turn mod_pagespeed off, X-UA-Compatible appears in the headers again. Does anyone have an idea how to disable this behavior in mod_pagespeed? My .htaccess file:

        <IfModule mod_headers.c>
            Header set X-UA-Compatible "IE=Edge,chrome=1"
        </IfModule>

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>

    Thanks a lot!
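
    Two hedged workarounds to try; both use standard directives, but whether they survive mod_pagespeed's rewriting depends on its version, so treat this as an assumption to test. Moving the header to the "always" table puts it in a header set that some output filters leave untouched, and convert_meta_tags is the mod_pagespeed filter that reconciles meta tags with response headers:

        <IfModule mod_headers.c>
            Header always set X-UA-Compatible "IE=Edge,chrome=1"
        </IfModule>
        <IfModule pagespeed_module>
            ModPagespeedDisableFilters convert_meta_tags
        </IfModule>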

    Read the article

  • Windows Server 2008 R2 and OSX 10.5/.6/.7

    - by Keith Loughnane
    I'm handling a migration from an old Mac server to a Windows Server 2008 R2 machine running a 12 TB (10 TB usable) RAID 5 array. It's shared over SMB, and the OS X 10.5/10.6 users can sometimes search; when it works, it can take up to 10 minutes. The OS X 10.7 machine seems to be fine. I've looked in the root of the shared drive for a .Spotlight-V100 directory (ls -a) but it doesn't seem to be there. mdutil says indexing is on for that volume, and I have cleared the index using mdutil -E /Volumes/MeSharedVolume numerous times. Any ideas?

    Read the article

  • Why is one specific file not showing up in my document library?

    - by Jay Bazuzi
    In my "Documents" library on Windows 7, one file is not showing up in Windows Explorer. When I look in C:\Users\%USERNAME%\Documents\blah\blah all 24 files appear. But when I look in Libraries > Documents > blah > blah only 23 show up. I made a copy of the file and the copy appears. Refresh doesn't help. The "Arrange by" setting defaults to "Name". When I change it to "Folder" the extra file appears, but changing it back to "Name" the file disappears again. How can I make the file appear in all views? Why would it disappear? EDIT: I deleted the Windows Search Index and things seem to be working again. I say it's a bug in the Search Service.

    Read the article

  • Allowing users in from an IP address without certificate client authentication

    - by John
    I need to allow access to my site without SSL client certificates from my office network, and with certificates from outside. Here is my configuration:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            # office network static IP
            Allow from xxx.xxx.xxx.xxx
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            Satisfy Any
        </Directory>

    The results: when I'm inside the network with a certificate, I can access. When I'm inside the network without a certificate, I can't access; it requires a certificate. When I'm outside the network with a certificate, I can't access; it shows me the basic login screen. When I'm outside the network without a certificate, I can't access; it shows me the basic login screen. Meanwhile, the following configuration works perfectly:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from xxx.xxx.xxx.xxx
            AuthUserFile /srv/www/htpasswd
            AuthName "Restricted Access"
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>
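
    A sketch of one possible direction, offered as an assumption to verify rather than a known-good config: SSLVerifyClient require is enforced at the TLS layer, where Satisfy Any has no effect, so the certificate has to become optional and be checked at the auth layer instead. With +FakeBasicAuth, a verified certificate is translated into a basic-auth user (the certificate's DN, with the fixed password "password"), which Require valid-user can accept; the office IP passes via Allow, and Satisfy Any accepts either path.

        SSLVerifyClient optional
        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            # office network static IP
            Allow from xxx.xxx.xxx.xxx
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            # must contain the certificate DNs as usernames for FakeBasicAuth
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            Satisfy Any
        </Directory>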

    Read the article

  • Default DocumentRoot in Apache does not work

    - by James Wise
    I have Apache 2.2 and PHP 5.3.15 on a single server. I configured virtual hosting with a default vhost: 0_default_.conf goes to /var/www/default, and sub.domain.com.conf goes to /var/www/sub.domain.com. My question is: how can I make sub.domain.com the default document root permanently, so that all requests are redirected to sub.domain.com? I tried removing 0_default_.conf, but then viewing the page displays the PHP source code of sub.domain.com. Here are my configurations: http://pastebin.com/4e3awUJ4. I could create an index.php in /var/www/default that permanently redirects to the sub.domain.com site, but that's not a viable solution for me: what if the IP address of sub.domain.com doesn't point to this server, so users cannot view that subdomain? I would appreciate it if anyone could share their knowledge and wisdom. Thanks. JamesW
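
    A minimal sketch of a catch-all vhost that redirects instead of serving its own content (mod_alias's Redirect is standard; the ServerName below is a placeholder):

        <VirtualHost *:80>
            # the first-listed vhost catches every request with no matching ServerName
            ServerName default.invalid
            Redirect permanent / http://sub.domain.com/
        </VirtualHost>

    This keeps a default vhost in place, so nothing is ever served from the wrong docroot, while every unmatched request lands on sub.domain.com.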

    Read the article

  • Hide directory contents from showing when accessing the URL directly

    - by SoLoGHoST
    On my site, if you browse to http://example.com/images/ the contents of the entire directory are listed. How can I make it so this doesn't happen when people browse directly to http://example.com/images/? Can I create an .htaccess file in that directory, or is there a better way? I really don't want people being able to do this for any directory on the site. What can I do to prevent it? I figure it's either something that has to be done in Apache or a global .htaccess file placed in the public_html folder, perhaps? EDIT: I worked around this with an index.php file, but I still feel that security is an issue here; how can I fix this permanently?
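
    A one-line sketch for the global approach, assuming Apache honors Options overrides in an .htaccess placed in public_html (it can equally go in the main server config inside a <Directory> block):

        # disable automatic directory listings site-wide
        Options -Indexes

    With listings off, Apache returns 403 Forbidden for any directory without an index file, so the index.php workaround becomes unnecessary.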

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have two servers in different datacenters (different countries) and I want to use DNS load balancing, mainly for high availability of the website hosted on those two servers. It is just an ad-tracking site, which records each hit in a local database and returns a few lines of HTML. I want to return two A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also act as DNS servers for redundancy. Now comes my proposed solution: I will use BIND and have both servers as masters for the zone. On each server a script will run periodically, testing the availability (HTTP) of both servers and removing an IP from DNS in case of failure. Now the questions :)

    1) Is BIND suitable for this solution? I think BIND's performance is good and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure or maintenance, the modifications (and thus BIND reloads) won't be frequent.

    2) I plan to use a TTL of 5 minutes. The website will take about 1000-3000 req/s, but from distinct clients (each IP makes only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?

    3) Is my master-master approach good, or should I make one of the servers a master and the other a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.

    4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS server, so that at any time at least two of them would accept queries.

    5) Should I rewrite the zone file via a script, or just use dynamic DNS updates (for example via the nsupdate utility)?
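
    For question 5, a minimal sketch of a scripted dynamic update, assuming a TSIG key shared with the server being updated (the key path, zone, and addresses below are placeholders):

        # remove the failed server's A record, leaving only the healthy one
        nsupdate -k /etc/bind/ddns.key <<EOF
        server 127.0.0.1
        zone example.com
        update delete www.example.com. A 192.0.2.10
        send
        EOF

    Dynamic updates avoid hand-editing a zone file that BIND may also be writing to, which matters once two masters are mutating the same data.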

    Read the article

  • Windows Server 2003 Router, Good approach [closed]

    - by jM2.me
    Possible Duplicate: Windows Server 2003 Router with PortForwarding

    Situation: we have a Verizon FiOS 25/25 Internet connection, a server acting as a router, and around 12 office computers.

    Task: forward port 29000 to an office computer.

    Problem: once I connected the WAN and LAN cables, I just had to set a static LAN IP on the server and plug the switch with the office computers into the second NIC, then right-click the WAN NIC and select "share internet connection". All office computers were assigned an IP address 192.168.0.xxx with gateway 192.168.0.1 (the server). Now I have to open a port and forward it to the computer 192.168.0.190 (static IP, set manually). Using this guide (http://www.rosscode.com/blog/index.php?title=port_forwarding_in_windows_2003&more=1&c=1&tb=1&pb=1) I faced a problem: before enabling RRAS I had to un-share the Internet connection (WAN interface), and then I was able to set up the network. Now how do I set up the network within RRAS and share the Internet with the private network? Thank you much.
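
    Once RRAS NAT is in place, a sketch of the port forward itself, assuming the public RRAS interface is named "Internet" (interface names vary per setup):

        REM map public TCP port 29000 to the office machine
        netsh routing ip nat add portmapping "Internet" tcp 0.0.0.0 29000 192.168.0.190 29000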

    Read the article

  • Can GnomeKeyring store passwords unencrypted?

    - by antimeme
    I have a Fedora 15 laptop with the root and home partitions encrypted using LUKS. When it boots I have to enter a passphrase to unlock the master key, so I have it configured to log me in to my account automatically. However, GNOME Keyring remains locked, so I have to enter another passphrase for that. This is unpleasant and completely pointless, since the entire disk is encrypted. I've not been able to find a way to configure GNOME Keyring to store its passphrases without encryption. For example, I was not able to find an answer here: http://library.gnome.org/users/seahorse-plugins/stable/index.html.en. Is there a solution? If not, is there a mailing list where it would be appropriate to plead my case?

    Read the article

  • Why does try_files append each path together?

    - by Tom
    I'm using try_files like this:

        http {
            server {
                error_log /var/log/nginx debug;
                listen 127.0.0.1:8080;
                location / {
                    index off
                    default_type application/octet-stream;
                    try_files /files1$uri /files2/$uri /files3$uri;
                }
            }
        }

    In the error log, it's showing this:

        [error] 15077#0: 45399 rewrite or internal redirection cycle while internally
        redirecting to "/files1/files2/files3/path/to/my/image.png", client: 127.0.0.1,
        server: , request: "GET /path/to/my/image.png HTTP/1.1", host: "mydomain.com",
        referrer: "http://mydomain.com/folder"

    Can anyone tell me why nginx is looking for /files1/files2/files3/path/to/my/image.png instead of /files1/path/to/my/image.png, /files2/path/to/my/image.png and /files3/path/to/my/image.png? Thanks
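
    A sketch of the usual fix: every argument to try_files except the last names a file to test for existence, while the last one is an internal redirect taken when nothing matched. When that final URI doesn't correspond to a real file, the redirect re-enters the same location and loops. An explicit fallback ends the cycle:

        location / {
            try_files /files1$uri /files2/$uri /files3$uri =404;
        }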

    Read the article

  • htaccess not found

    - by clarkk
    I have installed Apache 2 (from Webmin) on Debian 6 and set up a virtual host, db.domain.com, which works fine. But .htaccess doesn't work when you come in via the IP address, and the directory is listed if no index.php is found: db.domain.com gives 403 Forbidden, while xxx.xxx.xxx.xxx gets access to the server. Why is .htaccess ignored when you access the server by its IP address?

    httpd.conf:

        <Directory *>
            Options -Indexes FollowSymLinks
        </Directory>

        <VirtualHost *:80>
            ServerName db.domain.com
            DocumentRoot /var/www
        </VirtualHost>

    .htaccess:

        order deny,allow
        deny from all
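
    A sketch of the usual suspects, assuming a stock Debian Apache 2.2: .htaccess files are ignored wherever AllowOverride is None (the compiled-in default), and a <Directory *> glob is an unusual way to match the docroot; an absolute-path block is the common form.

        <Directory /var/www>
            Options -Indexes +FollowSymLinks
            # without this, the .htaccess in /var/www is never read
            AllowOverride All
        </Directory>

    Note also that requests arriving by bare IP are served by whichever vhost is defined first, which may not be the one carrying these settings.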

    Read the article

  • Git Clone from SSH Repository

    - by Mike Silvis
    I used to be able to clone from my personal Git repository, but now I seem to be running into an error:

        user:dev.site.com mikesilvis$ git clone { my ssh directory }
        server@ipaddress's password:
        remote: Counting objects: 3622, done.
        remote: Compressing objects: 100% (2718/2718), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    Pushing files to the repository, however, still works.
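
    A diagnostic sketch to run on the remote repository over SSH (standard Git plumbing; the repository path is a placeholder):

        cd /path/to/repo.git
        git fsck --full   # reports corrupt or missing objects
        git gc            # repacks objects; a failing pack often surfaces here

    git-pack-objects dying during upload-pack usually points at a corrupt pack file or at the remote running out of memory while packing, so checking available memory on the server is worthwhile too.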

    Read the article

  • Nginx PHP-FPM Basic Auth

    - by Lari13
    I have nginx with PHP-FPM installed on Debian Squeeze. The directory tree is:

        /var/www/mysite
            index.php
            secret_folder_1
                admin.php
                static.html
            secret_folder_2
                admin.php
                static.html
            pictures
                img01.jpg

    I need to protect secret_folder_1 and secret_folder_2 with basic auth. Right now the config looks like this:

        location ~ /secret_folder_1/.+\.php$ {
            root /var/www/mysite/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
            include fastcgi_params;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

        location ~ /secret_folder_1/.* {
            root /var/www/mysite/;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

    with the same config for secret_folder_2. Is this normal? I mean, a first location for serving PHP files in the restricted folder and a second location for serving static files. Can it be simplified?
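
    A sketch of one common simplification, assuming nginx's support for nested locations: declare the auth once on the folder prefix and nest the PHP handler inside it, so both static and PHP requests inherit the same protection.

        location ^~ /secret_folder_1/ {
            root /var/www/mysite/;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;

            # PHP inside the protected folder inherits the auth above
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
                include fastcgi_params;
            }
        }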

    Read the article
