Search Results

Search found 92198 results on 3688 pages for 'http error'.


  • heimdal error Decrypt integrity check failed for checksum type

    - by user880414
    When I try to authenticate with heimdal-kdc, I get this error in the KDC log: (enctype aes256-cts-hmac-sha1-96) error Decrypt integrity check failed for checksum type hmac-sha1-96-aes256, key type aes256-cts-hmac-sha1-96, and authentication fails! But authentication with kinit works correctly. My krb5.conf is:

        [logging]
            default = FILE:/var/log/krb5libs.log
            kdc = FILE:/var/log/krb5kdc.log
            krb5 = FILE:/var/log/krb5.log
        [libdefaults]
            default_realm = AUTH.LANGHUA
            clockskew = 300
        [realms]
            AUTH.LANGHUA = {
                kdc = AUTH.LANGHUA
            }
        [domain_realm]
            .langhua = AUTH.LANGHUA
        [kdc]

    When I add require-preauth = no to the [kdc] section of krb5.conf, I get this error instead: krb5_get_init_creds: Client have no reply key

    Read the article

  • Enforcing a specific order for cookie headers

    - by Paul
    We have an application that cares about the order of cookie headers. It shouldn't, since this isn't mandated by the standards, and indeed we're getting the headers in various different orders. So we would like to rewrite the headers in Apache so that the cookie headers always appear in a specific order. Is there any way of doing this? An ideal solution would be specifically about cookie headers, but something that lets us mess with the header order more generally would do too.
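
    One possible angle, sketched here under the assumption that what matters is the order of cookies within a single Cookie request header (rather than the order of several separate headers): mod_headers in Apache 2.2.4+ can edit request headers in place, so a particular cookie can be pulled to the front before the application sees it. The cookie name "session" and the regex are purely illustrative and would need testing against real traffic.

        # Sketch: move a (hypothetical) "session" cookie to the front of the
        # Cookie header when it is present but not already first.
        RequestHeader edit Cookie "^(.+);\s*(session=[^;]*)(.*)$" "$2; $1$3"

    This isn't a fully general reordering, but the same edit technique can be chained for the handful of cookies whose relative order the application depends on.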

    Read the article

  • Error in New-MailboxExportRequest - "Couldn't find the Enterprise Organizational Container" even though permissions seem right

    - by tacos_tacos_tacos
    When disabling users I am typically asked to retain a copy of their mailbox. I accomplish this by literally recreating their mailbox in Outlook and then exporting it to a PST. Is there some way around having to do this just to save a mailbox?
    Edit: I've tried New-MailboxExportRequest, but I keep getting the following after providing an alias:

        Supply values for the following parameters:
        FilePath: \\localhost\EXPORT_PST\myuser.pst
        Mailbox: myuser
        Couldn't find the Enterprise Organization container.   <--- the error

    I've also tried supplying [email protected] as the mailbox.
    Edit 2: I had already seen the post at http://www.mikepfeiffer.net/2010/10/error-couldnt-find-the-enterprise-organization-container-when-creating-a-new-mailbox-export-request/ and set the NTFS and sharing permissions as described there, but I am still getting that error.
    Final solution: In Exchange SP2 it does not warn you that you have not set role assignments; it just fails. So be sure to create a management role assignment for "Mailbox Import Export", add your user to it, and then restart PowerShell for the change to take effect.
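
    For reference, a sketch of the commands involved, with the assignment name and admin account as placeholders; run them from the Exchange Management Shell and reopen the shell after the assignment is created:

        # Grant the right needed by New-MailboxExportRequest (names are hypothetical).
        New-ManagementRoleAssignment -Name "Import Export Admins" -Role "Mailbox Import Export" -User "adminuser"

        # After restarting the shell, the export should go through.
        New-MailboxExportRequest -Mailbox myuser -FilePath "\\localhost\EXPORT_PST\myuser.pst"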

    Read the article

  • Error when trying to access Shared files from iMac via smb

    - by SatheeshJM
    I used to access all my Windows XP shared files on my Mac using Finder -- Window -- Connect to Server. Now, all of a sudden, an error crops up when I try to connect: "There was a problem connecting to the server "192.168.1.*". The server may not exist or it is unavailable at this time. Check the server name or IP address, check your internet connection and then try again." How can I get rid of this error and access my shared files from my Mac? P.S. My network connection is fine.

    Read the article

  • ProFTPD pam_ecryptfs: Error getting passwd

    - by Olirav
    I am getting this error in the syslog nearly every time any user connects via FTP:

        proftpd: pam_ecryptfs: Error getting passwd info for user [USERNAME]

    The user is able to connect and the session seems to continue without a hitch. proftpd.log shows no error; this warning only shows up in the syslog. My VPS is running Ubuntu 11.10 and ProFTPD 1.3.4rc2 from the Ubuntu repo, and I have made only a few changes to the config (no unusual auth methods). This has been going on for quite a while, but I can't find the cause. Anyone got any ideas?

    Read the article

  • Mount secure WebDAV with davfs2 ssl error

    - by Wouter0100
    I'm trying to mount my secure WebDAV share on my Ubuntu notebook. I've added the following to my fstab:

        https://[URL] /mnt/[folder] davfs user,auto,uid=wouter0100,file_mode=600,dir_mode=700 0 1

    But when I run sudo mount -a I keep getting:

        /sbin/mount.davfs: Mounting failed.
        Server certificate verification failed: issuer is not trusted

    I've tried many different things, but I couldn't get it working. The certificate is signed by Comodo and valid (when I load it in Chrome it's fine).
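
    A sketch of one direction worth trying, on the assumption that the failure is a missing Comodo intermediate certificate in davfs2's trust path rather than a bad server certificate: give davfs2 a PEM bundle containing the chain via /etc/davfs2/davfs2.conf. The option name differs between davfs2 versions, and the file path below is hypothetical.

        # /etc/davfs2/davfs2.conf (sketch)
        # Older davfs2 releases:
        servercert /etc/davfs2/certs/comodo-chain.pem
        # Newer releases use trust_ca_cert instead:
        # trust_ca_cert /etc/davfs2/certs/comodo-chain.pem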

    Read the article

  • Cannot mount USB disks due to "not authorized" error

    - by shadovv
    I have Ubuntu 12.04 (.2?) installed. I don't plan to need a GUI after setup, so I modified GRUB to boot to the console; whenever I want to start the GUI I type startx. Simple, basic stuff. However, I seem to be hitting a snag: I'm having trouble mounting/accessing USB flash drives in the GUI started this way, getting "Unable to mount location - not authorized". It seems like it should be an easy fix, but I can't figure out what I'm overlooking. Can someone help me out?

    Read the article

  • ubuntu-support-status error

    - by Robert Vila
    Running Natty, I read: "The ubuntu-support-status command will print the exact status of your system." I typed ubuntu-support-status and got:

        Traceback (most recent call last):
          File "/usr/bin/ubuntu-support-status", line 105, in <module>
            (still_supported, support_str) = get_maintenance_status(cache, pkg.name, support_tag)
          File "/usr/bin/ubuntu-support-status", line 37, in get_maintenance_status
            raise Exception("No date tag found")
        Exception: No date tag found

    Does anyone know why this can happen? Could it be because it is running on a Mac? How can this be fixed?

    Read the article

  • Winamp "Now Playing" POST to PHP script

    - by Brad
    I have tried/researched 8 or so different plugins for Winamp that allow this functionality, and none of them seem to work on Windows 7 x64! I need a plugin for Winamp that will send the currently playing track information to a PHP script via GET or POST. I have almost no requirements... it can be the name/artist, or the filename (preferred). No fancy functionality needed, just something basic! The four that I actually tried are: Now Playing XML HTML Server SongStat Currently Hearing. Any suggestions? (Note to moderators: No, I'm not looking for "shopping recommendations", I just need a plugin that works. There probably is only "one answer" to this question, and if there isn't, feel free to make it a CW.)

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:

        <Directory "/var/www/folder">
            <Files "index.php">
                order deny,allow
                Allow from all
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>

    I visit my server at 127.0.0.1/folder, but I can GET and POST the file just like normal. I've also tried reversing the order, order allow,deny, require, limitexcept and limit. How can I only allow POST requests to be processed by one file in a folder?
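
    A hedged sketch of one way this is often handled in Apache 2.2, assuming the goal is simply "deny everything except POST for this one file": with Order allow,deny the default result is deny, and the Allow is granted only inside a <Limit POST> block, so GET and the other methods never pick up an Allow at all.

        <Directory "/var/www/folder">
            <Files "index.php">
                Order allow,deny
                <Limit POST>
                    Allow from all
                </Limit>
            </Files>
        </Directory>

    The original config likely fails because Allow from all sits outside the LimitExcept block; with order deny,allow, a request that matches both Deny and Allow ends up allowed, for every method.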

    Read the article

  • Access Denied Error in PagesListCPVEventReceiver post SharePoint SP2 upgrade

    - by Jeff
    I am seeing the following error from one of the SharePoint web front ends after the SP2 upgrade. Has anyone else seen this error, or found a solution?

        Event Type:     Error
        Event Source:   Windows SharePoint Services 3
        Event Category: General
        Event ID:       6875
        Date:           2009-10-27
        Time:           13:09:57
        User:           N/A
        Computer:       XXXXXXX
        Description:    Error loading and running event receiver
        Microsoft.SharePoint.Publishing.PagesListCPVEventReceiver in
        Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral,
        PublicKeyToken=71e9bce111e9429c. Additional information is below.
        : Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Exclude pings from apache error logs (ran from PHP exec)

    - by fooraide
    For a number of reasons I need to ping several hosts on a regular basis for a dashboard display. I use this PHP function to do it:

        function PingHost($strIpAddr) {
            exec(escapeshellcmd('ping -q -W 1 -c 1 '.$strIpAddr), $dataresult, $returnvar);
            if (substr($dataresult[4], 0, 3) == "rtt") {
                // We got a ping result, let's parse it.
                $arr = explode("/", $dataresult[4]);
                return ereg_replace(" ms", "", $arr[4]);
            } elseif (substr($dataresult[3], 35, 16) == "100% packet loss") {
                // Host is down!
                return "Down";
            } elseif ($returnvar == "2") {
                return "No DNS";
            }
        }

    The problem is that whenever there is an unknown host, an error gets logged to my Apache error log (/var/log/apache/error.log). How would I go about disabling logging for this particular function? Disabling logs in the vhost is not an option, since the logs for that vhost are relevant, just not the pings. Thanks.
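
    One likely explanation, offered as an assumption rather than a certainty: commands run through exec() inherit Apache's stderr, so ping's "unknown host" complaint lands in the vhost's error log. A minimal sketch of a fix that only touches this one call is to redirect the command's stderr:

        // Sketch: discard ping's stderr so unknown-host noise never reaches
        // Apache's error log; stdout still comes back in $dataresult.
        exec(escapeshellcmd('ping -q -W 1 -c 1 '.$strIpAddr).' 2>/dev/null', $dataresult, $returnvar);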

    Read the article

  • Attempting to GREP details of a Java error

    - by BOMEz
    I'm running Ubuntu 11 and I'm having some issues with grep. I have a shell script (see below) which essentially checks whether a certain Java program of mine is running and, if not, runs it. That part works great! However, if my Java application throws any kind of exception, I would like to capture that information and email it to myself. How can I check whether the call to java -jar /bin/MyApp.jar fails? I tried piping it to grep, but that doesn't seem to work. Below is the full script I've written:

        # Check if MyApp.jar is running; if not, run it.
        if [ $(ps aux | grep 'java' | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
        then
            echo "PacketCapture Starting...\n"
            java -jar /bin/MyApp.jar
            echo "PacketCapture Started.\n"
        else
            echo "PacketCapture already running.\n"
        fi
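
    A hedged sketch of one approach, assuming a working mail command and a placeholder address: uncaught exceptions go to the JVM's stderr, which a plain pipeline to grep never sees, so capture stdout and stderr together and act on the exit status (or on the captured text) when the process ends.

        # Sketch: capture everything the JVM prints; mail it if the process
        # exits non-zero or a stack trace shows up (address is a placeholder).
        output=$(java -jar /bin/MyApp.jar 2>&1)
        if [ $? -ne 0 ] || echo "$output" | grep -q 'Exception'; then
            echo "$output" | mail -s "MyApp.jar failed" admin@example.com
        fi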

    Read the article

  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. Having checked the modem, there is no firewall or any odd feature enabled. But since I arrived at the apartment (the broadband was already installed), I cannot log into Twitter nor update any of my WordPress blogs (I can browse them and log in, but cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use here out to, say, a McDonald's or some other wifi access point, these sites work fine again. Does anyone know what could possibly be preventing access to the pages in question? The only thing common to these pages is the POST response they are expecting. But POST form submission works fine on other sites...

    Read the article

  • Recovering SQL Server 2008 Database From Error 2008

    MS SQL Server 2008 is the latest version of SQL Server. It has been designed with the SQL Server Always On technologies that minimize downtime and maintain appropriate levels of application availa... [Author: Mark Willium - Computers and Internet - May 13, 2010]

    Read the article

  • Is it possible to do a 301 redirect AND redirect to the requested resource?

    - by Pure.Krome
    For one of our projects we're doing a rebranding of the website name, logo, etc. As such, we need to 301 (Moved Permanently) redirect all users from the old domain to the new domain. With IIS7 that's pretty simple: we just create a new website that redirects all traffic for the host-headered old domain to the new one. But this loses the original destination resource. E.g.:

        Old domain: www.OldDomain.com
        New domain: www.NewDomain.com
        User: www.OldDomain.com/user/PureKrome -> 301 -> www.NewDomain.com

    Notice how it's going to the new domain BUT not to /user/PureKrome? How can I do this so it goes to the new domain and keeps the original resource request? I'm guessing URL Rewrite for IIS7 might help? Also, what happens if I want to do this:

        Current domain 1: Domain.com          Correct domain 1: www.Domain.com
        Current domain 2: AnotherDomain.com   Correct domain 2: www.AnotherDomain.com

    Is it also possible to have those in the same IIS website, so any URL to domain.com will 301 to www.domain.com? Right now I'm making two IIS websites with a hardcoded 301 (which still means I lose the original resource request, too). Help!
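
    A sketch of how this is commonly handled with the IIS URL Rewrite module, assuming it is installed (the rule goes inside <system.webServer> in web.config, and the domain names below are placeholders): the {R:1} back-reference carries the original path through to the redirect target, and the host-name condition can be duplicated per domain so several of these rules can coexist in one site.

        <rewrite>
          <rules>
            <rule name="Redirect old domain, keep path" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^(www\.)?olddomain\.com$" />
              </conditions>
              <action type="Redirect" url="http://www.newdomain.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    The same pattern, with the condition changed to ^anotherdomain\.com$ and the action pointing at www.anotherdomain.com/{R:1}, covers the naked-domain to www case without a second IIS site.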

    Read the article

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache-control headers. In particular: if we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent modified date (or ETag value)? Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy stale, and from that point forward not cache it? Would a best practice be to set a cache-control lifetime of 1 week, then, 8 days before I know I am going to make a change, drop it to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?
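
    For reference, a hedged sketch of where the knob lives in IIS 7.5, assuming static content served straight from disk: the <clientCache> element in web.config corresponds to the expire-web-content setting in IIS Manager and emits the Cache-Control max-age header that drives the browser behaviour asked about above. Force-expiring an item that is already in clients' caches generally means changing its URL (a version suffix), since a shorter max-age only takes effect after the old copy has expired on its own.

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Sketch: ask clients to cache static files for 7 days. -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>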

    Read the article

  • How to fix 0x800CCC0E Error Codes?

    - by greenber
    I recently started receiving the above-mentioned error, which is apparently a Winsock error message. It is preventing me from checking my Gmail, although there is no problem with my ATT e-mail or MSN mail. I found a number of supposed fix-it programs which found a great number of errors in my registry (although Wyse and Glary did not find anything wrong with my registry?) and offered to fix them for a fee. I would much rather not pay! :-) Does anybody here know what is causing this error and how to fix it? Oh – I am using Windows 7 Ultimate and Live Mail as my e-mail reader. Thank you. Ross

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites where, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset-management concept?

    Read the article

  • Drive stopped working on windows server 2003 and I receive a "controller error"

    - by Durden81
    I can access the server in safe mode. I have an HP ProLiant 360 server with Windows Server 2003 R2. The Event Viewer is completely filled up with this error: "The driver detected a controller error on Device\Harddisk3\DR3". I identified the affected drive: it is drive H, a secondary, non-mirrored drive. When I access anything on that drive I receive: "The request could not be performed because of an I/O device error". What should I do? Is this just a driver issue or a hard drive failure? Please help quickly, as my websites are offline because of this. Any suggestion is welcome!

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"
        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool for what I want?

    Read the article

  • google webmaster soft 404 on 301

    - by Daniel
    I'm looking through Google Webmaster Tools and see that my site is generating soft 404 errors (https://support.google.com/webmasters/answer/181708?hl=en). Google says: "We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page." But I've got redirects set up that send old pages to the proper new pages using a 301. The website links changed because of the use of a framework, which makes them more consistent, but there are still links out there to the old pages. Should I be worried about this? Is Google penalizing the site for this? (Using IIS 8, Tomcat, CF10, Windows)

    Read the article

  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?

    Read the article

  • Parsing glGetShaderInfoLog() to get error info. Is this reliable, or is there a better way?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string, and also show the offending line. It looks easy enough to just parse the result of glGetShaderInfoLog(): look for "ERROR:", read the next number up to ":", then the next, and then the error description up to the next newline. But the OpenGL docs say "Application developers should not expect different OpenGL implementations to produce identical information logs.", which makes me worry that my code may behave incorrectly on different systems. I don't need them to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line number separated out, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this? Thanks!
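
    For what it's worth, a minimal sketch of the defensive approach, assuming a C++ host and the common "ERROR: <file>:<line>: <message>" shape: parse the lines that match, and keep any line that doesn't as raw text, so an implementation with a different log format degrades to showing the original string instead of breaking.

        #include <regex>
        #include <sstream>
        #include <string>
        #include <vector>

        struct ShaderError { int file; int line; std::string message; };

        // Parse lines shaped like "ERROR: 0:123: 'foo' : syntax error".
        // Lines that don't match are returned verbatim via `unparsed`.
        std::vector<ShaderError> ParseInfoLog(const std::string& log,
                                              std::vector<std::string>& unparsed) {
            std::vector<ShaderError> errors;
            static const std::regex pattern(R"(ERROR:\s*(\d+):(\d+):\s*(.*))");
            std::istringstream lines(log);
            std::string line;
            while (std::getline(lines, line)) {
                std::smatch m;
                if (std::regex_search(line, m, pattern)) {
                    errors.push_back({std::stoi(m[1]), std::stoi(m[2]), m[3].str()});
                } else if (!line.empty()) {
                    unparsed.push_back(line);  // unknown format: keep the raw text
                }
            }
            return errors;
        }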

    Read the article
