Search Results

Search found 10860 results on 435 pages for 'bad blocks'.

  • Bad ways to secure a wireless network

    - by Moshe
    I was wondering if anybody had any thoughts on this, as I recently saw a Verizon DSL network set up where the WEP key was the last 8 characters of the router's MAC address. (It's bad enough that they were using WEP in the first place...)
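
    Since the router's MAC address is broadcast as the BSSID in every beacon frame, a key derived from it is effectively public. A minimal sketch of the attack, assuming the scheme is literally the last 8 hex digits of the MAC (the BSSID below is made up):

        BSSID=00:1F:90:AB:CD:EF                 # visible to anyone who scans for networks
        echo "$BSSID" | tr -d ':' | tail -c 9   # prints 90ABCDEF -- the presumed WEP key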

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced last week; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

    - change cables between the wall jack and device
    - change patch cables between the patch panel and switch port(s)
    - try different switch ports within the 2960 stack
    - change end-user devices with known-good equipment (new phones, different PCs)
    - clear switch port interface counters and monitor incrementing errors closely (pastebin output of sh int); pored over the device logs and Observium RRD graphs; no link up/down issues from the switch side
    - change power strips on the end-user side
    - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
    - test cable runs with a Tripp-Lite cable tester (clean)
    - run diagnostics on the switch stack members (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool:

        Interface Speed Local pair Pair length       Remote pair Pair status
        --------- ----- ---------- ----------------- ----------- -----------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters   Pair B      Normal
                        Pair B     75 +/- 0 meters   Pair A      Normal
                        Pair C     77 +/- 0 meters   Pair D      Normal
                        Pair D     79 +/- 0 meters   Pair C      Normal
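
    For isolating a flaky port, one low-risk follow-up is to clear the counters on a suspect interface and watch whether the error columns climb again under normal traffic. A sketch using standard Catalyst commands (interface names taken from the question):

        clear counters GigabitEthernet4/0/9
        ! ...wait through a period of normal user traffic, then:
        show interfaces GigabitEthernet4/0/9 counters errors
        show interfaces GigabitEthernet4/0/9 | include errors|CRC

    Errors that keep incrementing after the cabling and end devices have been swapped point fairly convincingly at the port hardware itself.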

  • How to connect to a server continuously over a bad internet connection

    - by Nikhil
    I have a bad Internet connection; it disconnects frequently, and on reconnect I'm assigned a different IP address by the ISP. The problem is that I connect to a remote VPS (Ubuntu), and when the Internet connection is disrupted and reconnected, I can no longer do anything on the terminal. I have to restart the terminal and re-initiate the connection. Is there a way I can have a persistent connection with the server?
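
    A common way to survive drops like this is to run the session inside a terminal multiplexer on the server, so the shell keeps running across reconnects, and to let autossh redial automatically. A sketch, assuming tmux and autossh are installed, with a hypothetical hostname:

        # redial on failure and re-attach to a persistent server-side session
        autossh -M 0 -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
            -t user@my-vps.example.com 'tmux attach -t work || tmux new -s work'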

  • Intermittent 400 bad request header field is missing ':' with Apache and SSL

    - by David Tinker
    Apache is returning rare, intermittent 400 "bad request header field is missing ':' olhuaqv3o1t29flvr0 (random string)" errors. This seems to be related to HTTPS access and happens from Firefox, IE, Chrome, etc. I am using a certificate from RapidSSL.

        Apache/2.2.14 (Ubuntu) DAV/2 SVN/1.6.6 mod_jk/1.2.28 PHP/5.3.2-1ubuntu4.5 with Suhosin-Patch mod_ssl/2.2.14 OpenSSL/0.9.8k

    Does anyone know how to fix this?
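
    To see exactly what bytes Apache received when one of these 400s fires, mod_dumpio can log the raw request stream. A sketch for Apache 2.2 -- enable only temporarily, since it is extremely verbose, and the module path varies by distribution:

        LoadModule dumpio_module /usr/lib/apache2/modules/mod_dumpio.so
        DumpIOInput On
        LogLevel debug    # mod_dumpio logs at debug level on 2.2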

  • Eject a bad disk from optical drive

    - by Chuck
    I have an Alienware computer with one of those optical DVD drives that doesn't have a manual tray, just a slot to insert the disk. I recently inserted a disk that was apparently bad. It is unreadable and does not show up in Windows Explorer. I tried right-clicking on the drive letter and hitting Eject, but I get an error message that there is no disk in the drive. How do I get the d--ned disk out so I can use the drive?

  • Is Fedora a bad choice for a server?

    - by Jakobud
    I'm taking over IT responsibilities at a small company. Most of the servers appear to be running various releases of Fedora (file servers, backup servers, Oracle servers, etc.). I don't have much experience with Fedora, but I was under the impression it's geared toward end-user desktops/workstations/laptops. Is Fedora a bad choice for servers?

  • Nginx bad gateway and connection errors

    - by r2b2
    I've followed this tutorial for a basic installation of nginx. I always get bad gateway errors, and when I look at the logs I see:

        [error] 3226#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client

    Here are my nginx.conf and the contents of my sites-available/defaults: nginx.conf, defaults. I am also seeing this error:

        conflicting server name "explorable.com" on 0.0.0.0:80, ignored

    I am using Ubuntu 12.04 with PHP5-FPM. Thank you!
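
    "Connection refused" from the upstream usually means nothing is listening where nginx's fastcgi_pass points. A quick sanity check, assuming the stock Ubuntu 12.04 php5-fpm layout (adjust paths if your install differs):

        sudo netstat -lnp | grep -E ':9000|php'           # is php5-fpm listening, and where?
        grep fastcgi_pass /etc/nginx/sites-available/default
        grep '^listen' /etc/php5/fpm/pool.d/www.conf      # must match the fastcgi_pass target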

  • Images as links bad for SEO?

    - by Karl
    I am just about finished with my website, and as I was reading over Google's SEO information, they mentioned that images as links without text are bad. What if it is simply a link to enlarge the picture, such as the following?

        <a href="images/image1.jpg"><img src="images/image1.jpg" height="200" width="100"></a>

    Any help on this would be great! Thanks, -K
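
    Google's guidance for image links is essentially to give the image descriptive alt text, which then serves as the link's anchor text for crawlers. The same markup with that added (the alt text is a placeholder):

        <a href="images/image1.jpg">
          <img src="images/image1.jpg" alt="Enlarge: description of image1"
               height="200" width="100">
        </a>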

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says, I have a .NET GUI application that uses multiple threads to perform separate file I/O, and I notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from a serial COM port and another thread is devoted to copying files. What I experience is that occasionally, when the file-copy thread encounters a network delay, it blocks the other thread that is reading from the serial port. In addition to a slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path, e.g. PathFileExists("\\\\BadPath\\file.txt"); the COM port reading function will then block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried it under WinXP, Win7, and Server 2012.

    In a streamlined test project, if I replace the .NET application with an unmanaged MFC application and still utilize the same threads, I see no issue even when protected with Themida. I have contacted Oreans support and here is their response:

        The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files). Those hooks are not present in a native application, that's why it should be working fine on your native application.

    Oreans support tried other software protectors such as Enigma Protector, Enigma VirtualBox, and Molebox, and all exhibit the exact same problem. What I have found as a workaround is to separate out the file-copy logic (where the file-exists call is being made) into a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists -> System.IO.File.Exists and CreateFile/ReadFile -> System.IO.Ports.SerialPort.Open/Read) and still see the same serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously, but that had no effect. I believe I am dealing with some low-level Windows layer that, no matter the language, exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida, which evidently installs some hooks to allow .NET execution.

    At this time, converting the entire application away from .NET is not an option. Nor is separating out the file-copy logic to a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included example applications that show the problem:

        https://db.tt/cNMYfEIg - VB.NET
        https://db.tt/Y2lnTqw7 - MFC

    They are Visual Studio 2010 solutions. When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

  • Bash script: bad interpreter

    - by Quandary
    Question: I get this error message:

        export: bad interpreter: No such file or directory

    when I execute this bash script:

        #!/bin/bash
        MONO_PREFIX=/opt/mono-2.6
        GNOME_PREFIX=/opt/gnome-2.6
        export DYLD_LIBRARY_PATH=$MONO_PREFIX/lib:$DYLD_LIBRARY_PATH
        export LD_LIBRARY_PATH=$MONO_PREFIX/lib:$LD_LIBRARY_PATH
        export C_INCLUDE_PATH=$MONO_PREFIX/include:$GNOME_PREFIX/include
        export ACLOCAL_PATH=$MONO_PREFIX/share/aclocal
        export PKG_CONFIG_PATH=$MONO_PREFIX/lib/pkgconfig:$GNOME_PREFIX/lib/pkgconfig
        PATH=$MONO_PREFIX/bin:$PATH
        PS1="[mono-2.6] \w @ "

    But the bash path seems to be correct:

        asshat@IS1300:~/sources/mono-2.6# which bash
        /bin/bash
        asshat@IS1300:~# cd sources/
        asshat@IS1300:~/sources# cd mono-2.6/
        asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment
        export: bad interpreter: No such file or directory
        asshat@IS1300:~/sources/mono-2.6# ls
        download  mono-2.4  mono-2.4-environment  mono-2.6  mono-2.6-environment
        asshat@IS1300:~/sources/mono-2.6# cp mono-2.6-environment mono-2.6-environment.sh
        asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment.sh
        export: bad interpreter: No such file or directory
        asshat@IS1300:~/sources/mono-2.6# ls
        download  mono-2.4-environment  mono-2.6-environment  mono-2.4  mono-2.6  mono-2.6-environment.sh
        asshat@IS1300:~/sources/mono-2.6# bash mono-2.6-environment
        asshat@IS1300:~/sources/mono-2.6#

    What am I doing wrong? Or is this a Lucid bug? [I did chmod +x]
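
    A "bad interpreter" error where the script otherwise looks fine is very often caused by invisible characters in the file -- typically DOS \r\n line endings -- rather than by bash itself. A quick check, shown as a sketch:

        head -n 1 mono-2.6-environment | od -c    # a \r before the \n betrays DOS endings
        sed -i 's/\r$//' mono-2.6-environment     # strip them if present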

  • Bad value for type timestamp on production server

    - by Juan Javaloyes
    I'm working with: Seam 2.2.2 + Hibernate + RichFaces + JBoss 5.1 + Postgres. I have a module which needs to load some data from the database. Easy. The problem is, on development it works fine, 100%, but when I deploy on my production server and try to get the data, an error rises:

        could not read column value from result set: fechahor9_504_; Bad value for type timestamp : [C@122e5cf
        SQL Error: 0, SQLState: 22007
        Bad value for type timestamp : [C@122e5cf
        javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not execute query
        [more errors]
        Caused by: org.postgresql.util.PSQLException: Bad value for type timestamp : [C@122e5cf
            at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:232)
        [more errors]
        Caused by: java.lang.NumberFormatException: Trailing junk on timestamp: ''
            at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:226)

    I can't understand why it works on my machine (development) and not on production. Any clues? Has anyone gone through the same problem? It is exactly the same compilation.
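
    The "Trailing junk on timestamp: ''" detail is suggestive: the JDBC driver received a Java char array ([C@122e5cf) holding an empty string, which is what happens when the column is a character type rather than a real timestamp. One way to compare the two environments is to check the column's declared type on each side; table and column names below are placeholders for the entity's actual mapping:

        -- run against both development and production and compare
        SELECT column_name, data_type
        FROM information_schema.columns
        WHERE table_name = 'mytable'        -- hypothetical table name
          AND column_name = 'fechahora';    -- hypothetical column name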

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying PHP requests via FastCGI to PHP-FPM for processing. I will randomly receive 502 Bad Gateway error pages - I can reproduce this issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. Here is my setup:

        nginx/0.7.64
        PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
        Ubuntu 9.10 (Latest 2.6 Paravirt)

    I compiled PHP-FPM using this ./configure directive:

        ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl

    My php-fpm.conf looks like this (the relevant parts):

        ...
        <value name="pm">
        <value name="max_children">3</value>
        ...
        <value name="request_terminate_timeout">60s</value>
        <value name="request_slowlog_timeout">30s</value>
        <value name="slowlog">/var/log/php-fpm.log.slow</value>
        <value name="rlimit_files">1024</value>
        <value name="rlimit_core">0</value>
        <value name="chroot"></value>
        <value name="chdir"></value>
        <value name="catch_workers_output">yes</value>
        <value name="max_requests">500</value>
        ...

    I've tried increasing max_children to 10 and it makes no difference. I've also tried setting it to 'dynamic' with max_children at 50 and start_server at '5', without any difference. I have tried using both 1 and 5 nginx worker processes. My fastcgi_params conf looks like:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:

        [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:

        [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
        [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
        [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
        [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue. My free memory (free -m) is 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?

  • Bad interpreter: No such file or directory

    - by user1497462
    I'm working through Michael Hartl's tutorial trying to learn Rails for the first time, and I've run into some issues. I recently reinstalled the whole Rails Installer because I had apparently inadvertently deleted some important files. Now, when I try running a test, I get the following error:

        sh.exe": /c/Program Files (x86)/ruby-1.9.3/bin/bundle: "c:/Program: bad interpreter: No such file or directory

    I checked my PATH and attempted to use the solution outlined here: Bundle command not found. Bad Interpreter ...but putting quotation marks around "C:\Program Files (x86)\ruby-1.9.3\bin" didn't do anything for me. I ran rails -v and got the following output:

        $ rails -v
        Could not find multi_json-1.3.6 in any of the sources
        Run `bundle install` to install missing gems.

    So then I tried running bundle install and hit the same issue again:

        Tom@TOM-PC /c/sample_app (updating-users)
        $ bundle install
        sh.exe": /c/Program Files (x86)/ruby-1.9.3/bin/bundle: "c:/Program: bad interpreter: No such file or directory

    I'd really appreciate any help -- I've spent 5+ hours today trying to get back on track and am still at a loss. Thanks!

  • What's so bad about in-line CSS?

    - by ChessWhiz
    When I see website starter code and examples, the CSS is always in a separate file, named something like "main.css", "default.css", or "Site.css". However, when I'm coding up a page, I'm often tempted to throw the CSS in-line with a DOM element, such as by setting "float: right" on an image. I get the feeling that this is "bad coding", since it's so rarely done in examples. I understand that if the style will be applied to multiple objects, it's wise to follow "Don't Repeat Yourself" (DRY) and assign it to a CSS class to be referenced by each element. However, if I won't be repeating the CSS on another element, why not in-line the CSS as I write the HTML? The question: Is using in-line CSS considered bad, even if it will only be used on that element? If so, why? Example (is this bad?):

        <img src="myimage.gif" style="float:right" />
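
    For comparison, the conventional externalized form of that example would move the rule into the stylesheet and reference it by class; the class name below is an arbitrary choice:

        /* main.css */
        .float-right { float: right; }

        <!-- in the HTML -->
        <img src="myimage.gif" class="float-right" />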

  • SMART says disk failure is imminent due to bad blocks, what do I need to do?

    - by flix
    I have two OSes on my hard drive: Ubuntu 12.04 and Windows Vista (I keep it just because of school). Everything was OK on both OSes, but one day on Ubuntu I was getting awkward noises from my notebook's hard drive, and then everything stopped and I couldn't do anything. On Windows everything was OK. Every time I boot Ubuntu I get about 5 minutes of normal run time, without problems; after that the hard drive sounds crazy and nothing works. I could run S.M.A.R.T. tests from an older Ubuntu CD (10.04), both from the GUI (Disk Utility, or something like that) and from the terminal. From the GUI, I got that DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks; I ran that test a while ago) on my HDD. From the terminal (I don't remember if it was fsck or a SMART test command) I got that the HDD will fail in under 24 hours. 2-3 weeks have passed since then. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use Cygwin and other alternatives for my Linux apps on Windows. How can I separate the bad blocks from Ubuntu so it won't use them? Please help.
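
    On an ext filesystem, the standard way to fence off bad blocks is e2fsck's badblocks integration, which adds the hits to the filesystem's bad-block inode so they are never allocated again. A sketch, run from a live CD with the partition unmounted; /dev/sda2 is an assumed device name, and with SMART predicting imminent failure, copying the data off first matters far more than any remapping:

        sudo fdisk -l               # identify the Ubuntu partition (assumed /dev/sda2 below)
        sudo e2fsck -c /dev/sda2    # read-only badblocks scan; hits go on the bad-block list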

  • C++ file badbit

    - by user230911
    When I run this code, the open, seekg, and tellg operations all succeed, but the read fails; the eof, bad, and fail bits are 0, 1, 1. What can cause a file stream to go bad? Thanks.

        #include <fstream>
        #include <iostream>
        using namespace std;

        const int RI_BLOCK_SIZE = 4096;  // definition not shown in the original post

        int readriblock(int blockid, char* buffer)
        {
            ifstream rifile("./ri/reverseindex.bin", ios::in | ios::binary);
            rifile.seekg(blockid * RI_BLOCK_SIZE, ios::beg);
            if (!rifile.good()) { cout << "block not exist" << endl; return -1; }
            cout << rifile.tellg() << endl;
            rifile.read(buffer, RI_BLOCK_SIZE);
            cout << rifile.eof() << rifile.bad() << rifile.fail() << endl;
            if (!rifile.good()) { cout << "error reading block " << blockid << endl; return -1; }
            rifile.close();
            return 0;
        }
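
    One thing worth checking in code like this: seeking past the end of the file does not necessarily fail at the seekg itself, so the first sign of trouble is the read, and a short read reports how many bytes actually arrived via gcount(). A small diagnostic sketch along those lines (badbit, as opposed to eofbit/failbit, signals a low-level I/O error rather than an ordinary end-of-file):

        #include <fstream>
        #include <iostream>
        using namespace std;

        // hedged diagnostic: report how much of a block was actually read
        bool read_block(ifstream& in, char* buf, streamsize n)
        {
            in.read(buf, n);
            if (in.gcount() < n)  // short read: hit EOF or an I/O error mid-block
                cout << "only read " << in.gcount() << " of " << n << " bytes" << endl;
            return in.good();
        }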

  • My block is not retaining some of its objects

    - by Drew Crawford
    From the Blocks documentation:

        In a reference-counted environment, by default when you reference an Objective-C object within a block, it is retained. This is true even if you simply reference an instance variable of the object.

    I am trying to implement a completion handler pattern, where a block is given to an object before the work is performed and the block is executed by the receiver after the work is performed. Since I am being a good memory citizen, the block should own the objects it references in the completion handler, and then they will be released when the block goes out of scope. I know enough to know that I must copy the block to move it to the heap, since the block will survive the stack scope in which it was declared. However, one of my objects is getting deallocated unexpectedly. After some playing around, it appears that certain objects are not retained when the block is copied to the heap, while other objects are. I am not sure what I am doing wrong. Here's the smallest test case I can produce:

        typedef void (^ActionBlock)(UIView*);

    In the scope of some method:

        NSObject *o = [[[NSObject alloc] init] autorelease];
        mailViewController = [[[MFMailComposeViewController alloc] init] autorelease];
        NSLog(@"o's retain count is %d",[o retainCount]);
        NSLog(@"mailViewController's retain count is %d",[mailViewController retainCount]);
        ActionBlock myBlock = ^(UIView *view) {
            [mailViewController setCcRecipients:[NSArray arrayWithObjects:@"[email protected]",nil]];
            [o class];
        };
        NSLog(@"mailViewController's retain count after the block is %d",[mailViewController retainCount]);
        NSLog(@"o's retain count after the block is %d",[o retainCount]);
        Block_copy(myBlock);
        NSLog(@"o's retain count after the copy is %d",[o retainCount]);
        NSLog(@"mailViewController's retain count after the copy is %d",[mailViewController retainCount]);

    I expect both objects to be retained by the block at some point, and I certainly expect their retain counts to be identical. Instead, I get this output:

        o's retain count is 1
        mailViewController's retain count is 1
        mailViewController's retain count after the block is 1
        o's retain count after the block is 1
        o's retain count after the copy is 2
        mailViewController's retain count after the copy is 1

    o (a subclass of NSObject) is getting retained properly and will not go out of scope. However, mailViewController is not retained and will be deallocated before the block is run, causing a crash.
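
    One explanation that fits this output exactly: if mailViewController is an instance variable rather than a local, the block does not capture the controller at all -- per the quoted documentation, referencing an ivar captures and retains self, the object that owns the ivar. Under that assumption, the usual fix is to copy the ivar into a local so the block retains the controller directly; a sketch:

        // assuming mailViewController is an ivar: take a local reference first
        MFMailComposeViewController *mail = mailViewController;
        ActionBlock myBlock = ^(UIView *view) {
            // the block captures 'mail' directly, and retains it when copied
            [mail setCcRecipients:[NSArray arrayWithObjects:@"[email protected]", nil]];
        };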

  • heartbeat: Bad nodename in /etc/ha.d//haresources [node1]

    - by Richard
    I'm trying to start heartbeat on Ubuntu 10.04 with service heartbeat start, but I get the following errors:

        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Bad nodename in /etc/ha.d//haresources [node1]
        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Configuration error, heartbeat not started.

    On one server uname -n produces loadb1; on the second server uname -n produces loadb2. The two servers can ping each other okay with those names. This is /etc/ha.d/ha.cf on both servers:

        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility local0
        keepalive 2
        deadtime 10
        udpport 694
        bcast eth1
        ucast eth0 my.external.ip
        ucast eth0 my.external.ip
        ucast eth1 10.0.0.5
        ucast eth1 10.0.0.6
        #udp eth0
        node loadb1
        node loadb2
        auto_failback off

    And this is /etc/ha.d/haresources on both servers:

        node1 IPaddr::46.20.121.113 httpd smb dhcpd

    Authkeys is also set up. What am I doing wrong? The part where I'm least clear is the ucast/bcast lines.
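
    The error message itself names the problem: the first token of each haresources line must be a node name that matches uname -n (and the node directives in ha.cf), and node1 matches neither loadb1 nor loadb2. Assuming loadb1 is meant to be the preferred owner of the resources, the corrected haresources would read:

        loadb1 IPaddr::46.20.121.113 httpd smb dhcpd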

  • rpmbuild on CentOS 6: "cpio: bad magic"

    - by djhaskin987
    When I try to run this command:

        rpmbuild -bb SPECS/software.spec

    I get an error when the WAR file (a Tomcat Java web archive) is being added to the RPM:

        error: create archive failed on file /<filepath>/<filename>.war: cpio: Bad magic

    This didn't use to happen; the only thing that has changed since it last worked is an upgrade. Further, no problems like it are happening on my CentOS 5 box. I compile and build the exact same code set on both machines, but CentOS 6 won't create an RPM. How do I troubleshoot this? I have already googled it and received few (if any) useful links. This appears nowhere in the user guide for RPM as far as I can see, and Maximum RPM has no section on it.

  • Hard disk with bad clusters

    - by Dan
    I have been trying to back up some files to DVD recently, and the burn process failed, saying the CRC check failed for certain files. I then tried to browse to these files in Windows Explorer, and my whole machine locks up and I have to reboot. I ran chkdsk without the '/F /R' arguments and it told me I had bad sectors. So I re-ran it with the arguments, and chkdsk fails during the 'Chkdsk is verifying Usn Journal' stage with this error:

        Insufficient disk space to fix the usn journal $j data stream

    The hard disk is a 300GB partition on a 400GB disk, and there is 160GB of free space on the partition. My OS (Windows 7) is installed on the other partition and is running fine. Any idea how I fix this, or repair it enough to copy my files off it?
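
    One commonly suggested way past that specific USN journal error is to delete the journal so chkdsk no longer has to verify it; NTFS recreates the journal afterwards. A sketch from an elevated command prompt, with D: standing in for the affected partition (given the bad sectors, copying your files off first is the safer order of operations):

        fsutil usn deletejournal /d D:
        chkdsk D: /F /R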

  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can checkout, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.

    Versions:

    - Server OS: Debian 6.0.7 (Squeeze)
    - Apache 2.2.16-6+squeeze11
    - Server Subversion 1.6.12dfsg-7
    - Clients are running Windows
    - Clients tried: TortoiseSVN 1.8.2 Build 24708 64bit; SVN CLI client 1.8.3 (r1516576)

    Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On
            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>
            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    Tortoise produces the following error during a tag operation in its tag result window:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes this is a test repository on a test server, I am experiencing this same issue in this test setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
