Search Results

Search found 3118 results on 125 pages for 'fragment caching'.


  • Problems Running Cherokee Web Server Admin - config_reader.c:249 - Parsing error

    - by Sebastian
    I'm running Cherokee web server 0.99.30 on Ubuntu Hardy and I have been having some issues getting the admin to run properly. When I run sudo cherokee-admin -b:

        Login:
          User:              admin
          One-time Password: {password}

        Web Interface:
          URL:               http://localhost:9090/

        [20/11/2009 22:57:29.733] (error) config_reader.c:249 - Parsing error
        Cherokee Web Server 0.99.30 (Nov 20 2009): Listening on port ALL:9090,
        TLS disabled, IPv6 disabled, using epoll, 4096 fds system limit,
        max. 2041 connections, caching I/O, single thread

    When I go to the admin page I get a 503 Service Unavailable error page. Any idea about how I could fix this? Thanks
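
    A hedged first step, since config_reader.c is complaining about a parse error in the existing configuration: move the current config aside and let cherokee-admin regenerate a fresh one. The path below is the usual default and may differ on your install.

        # back up the possibly-broken config, then restart the admin (assumed default path)
        sudo mv /etc/cherokee/cherokee.conf /etc/cherokee/cherokee.conf.bak
        sudo cherokee-admin -b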

    Read the article

  • Setting up a router as a DNS server only

    - by Jacob R
    I have a Linksys WRT54GL router that I don't need anymore, since I had to buy a 3G-capable router (Dovado 3GN). As I only have a 3G connection at home, I want to optimize it as much as possible. I want to set up a caching DNS server, including some blacklisting of ad domains. The router currently runs the DD-WRT firmware. Is it possible to use this router as an ordinary computer, running only a DNS server, disabling all other features such as DHCP, WLAN, etc.? Connecting it to my other router, should I simply run a cable into the WAN port of the Linksys router?
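
    A hedged sketch of what this can look like on DD-WRT, which already runs dnsmasq as its DNS/DHCP daemon: disable DHCP and WLAN in the web UI, then add options like these under Services > DNSMasq > Additional Options. The ad domains and cache size below are illustrative assumptions.

        no-dhcp-interface=br0              # serve DNS but not DHCP on the LAN bridge
        cache-size=2048                    # enlarge the DNS cache
        address=/doubleclick.net/0.0.0.0   # blackhole an ad domain
        address=/ads.example.com/0.0.0.0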

    Read the article

  • Command-line HTTP crawler for Windows?

    - by Pekka
    Would somebody have a recommendation for a web site crawler that can be invoked and equipped with settings from the command line? This would need to run in a Windows environment. Saving the data, following stylesheet links etc. is not an issue. I only need the crawler to start with a page, parse it, and follow all the links on the same domain so that in the end, all pages on the site have been requested once. Background: I'm setting up a web site that gets frequently uploaded from an office location. Combining data from various sources, it has several levels of caching. I don't want the first user to visit the site after a fresh upload to have to wait until the page has been generated and saved in the cache.
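
    A hedged sketch with wget (which runs on Windows, e.g. via GnuWin32): spider mode fetches and parses pages without keeping them, following every same-domain link once, which is exactly what a cache-warming pass needs. The URL is a placeholder.

        wget --spider --recursive --level=inf --no-parent http://www.example.com/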

    Read the article

  • Better performance with memcached cluster or local memcaches?

    - by Nicholas Tolley Cottrell
    I have a small cluster of servers balancing a Java web app. Currently I have 3 memcached servers caching data, and all web apps share all 3 memcached instances. I often get strange slowdowns and timeouts to some of the memcacheds, and I am wondering if there is a good way of analyzing the performance. I am wondering whether my iptables rules (or some other system limitation) are blocking/slowing connections. I am considering reconfiguring the web apps so that they only query the memcached process on their own localhost.
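
    A hedged way to get numbers before reconfiguring anything: time a plain memcached "stats" round trip from each web host to each instance, which separates network/iptables latency from application-level slowness. The hostnames and the default port 11211 are placeholders.

        for h in mc1 mc2 mc3; do
            echo -n "$h: "
            (time (printf 'stats\r\nquit\r\n' | nc "$h" 11211 > /dev/null)) 2>&1 | grep real
        done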

    Read the article

  • How to use WPA2 client mode in the Linux-based Cisco WAP4410N access point

    - by joechip
    I have a Cisco WAP4410N access point that I want to use as a client to connect to a WPA2 wireless network (for WLAN service monitoring purposes). Supposedly this access point supports a "Wireless Client/Repeater" mode that allows this. The Repeater function is optional (I have that box unchecked so that nobody can connect to this access point wirelessly). I have verified through SSH that the access point gets configured as a client and not as a Master. But it never associates to the SSID I ask it to. This is what iwconfig shows:

        ath04     IEEE 802.11ng  ESSID:"myownssid"
                  Mode:Managed  Channel:0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power:14 dBm   Sensitivity=1/3
                  Retry:off   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=0/94  Signal level=161/162  Noise level=161/161
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    Although I've never done this from the command line, I suppose I could use wpa_supplicant or wpa_client to associate it, but I don't know how to do that without editing configuration files, and the filesystem is read-only. Besides, I would have to run those commands manually after every reboot. I'd like to know how to do this the Cisco way, if possible. If not, any trick to make this work would be useful (see the sketch below). Edit: This is with the latest firmware, 2.0.4.2. And I found that not all of the filesystem is read-only, since /var and /tmp are mounted with type ramfs.
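
    Not the Cisco way, but a hedged command-line sketch using wpa_supplicant with its config written to the writable ramfs noted above (it will not survive a reboot). The interface name ath04 comes from the iwconfig output; whether wpa_passphrase exists on the box and whether -D wext is the right driver backend are assumptions.

        # generate a minimal network block for the SSID (if wpa_passphrase is present)
        wpa_passphrase myownssid 'the-passphrase' > /tmp/wpa.conf
        # associate on the client VAP; -D wext is a guess for this driver
        wpa_supplicant -B -i ath04 -D wext -c /tmp/wpa.conf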

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up:

        <location path="MyWebService.asmx">
          <system.webServer>
            <security>
              <access sslFlags="Ssl, SslNegotiateCert"/>
              <authentication>
                <windowsAuthentication enabled="false"/>
                <anonymousAuthentication enabled="false"/>
                <digestAuthentication enabled="false"/>
                <basicAuthentication enabled="false"/>
                <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true">
                  <oneToOneMappings>
                    <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/>
                  </oneToOneMappings>
                </iisClientCertificateMappingAuthentication>
              </authentication>
            </security>
          </system.webServer>
        </location>

    The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?
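
    One hedged sanity check: the access and authentication sections used above are locked at the server level by default, and the standard unlock commands are the appcmd calls below (run from an elevated prompt). A locked section normally surfaces as a 500.19 rather than a 401, so whether this is the cause here is an assumption.

        %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/security/access
        %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/security/authentication/iisClientCertificateMappingAuthentication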

    Read the article

  • NVRAM for journals on Linux?

    - by symcbean
    I've been thinking about ways of speeding up disk I/O, and one of the bottlenecks I keep coming back to is the journal. There's an obvious benefit to using an SSD for the journal - over and above just write caching - unless of course I just disable the journal along with the write cache (after all, devicemapper doesn't seem to support barriers). In order to get the benefits from using a BB write cache on the controller, I'd need to disable journalling - but then the OS would try to fsck the system after an outage. Of course, if the OS knows what's in the battery-backed memory then it could use it as the journal - but that means it must be exposed as a block device and be only under the control of the operating system. However, I've not been able to find a suitable low-cost device (no, wear-levelling for Flash is not adequate for a journal, at least not one which uses SmartMedia). While there's no end of flash devices and disk/array controllers with BB write caches, so far I've not found anything which just gives me non-volatile memory addressable as a block storage device.
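
    For reference, a hedged sketch of what "NVRAM as the journal" looks like once such a device does appear as a block device; /dev/nvram0 is hypothetical and /dev/sdb1 is a placeholder filesystem.

        mke2fs -O journal_dev /dev/nvram0        # format the NVRAM device as an external journal
        tune2fs -O ^has_journal /dev/sdb1        # drop the filesystem's internal journal
        tune2fs -J device=/dev/nvram0 /dev/sdb1  # attach the external journal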

    Read the article

  • AWS ELB as backend for Varnish Accelerator

    - by addisonj
    I am working on a large deployment on AWS that has high uptime requirements and variable loads throughout the day. Obviously, this is the perfect use case for ELB (Elastic Load Balancer) and autoscaling. However, we also rely on varnish for caching of API calls. My initial instinct was to structure the stack so that varnish uses ELB as a backend, which in turn hits an appGroup:

        Varnish -> ELB -> AppServers

    However, according to a few sources that isn't possible, as ELB constantly changes the IP address behind its DNS hostname, which varnish caches on start, meaning changes to the IP won't be picked up by varnish. Reading around, however, it looks like people are doing this, so I am wondering what workarounds exist. Perhaps a script to reload the VCL periodically? In case this really is just not a good idea, any ideas for other solutions?
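
    A hedged sketch of the periodic-reload workaround: re-resolve the ELB name from cron and reload the VCL only when the address has actually changed (varnish resolves backend hosts once, at VCL compile time). Names and paths are placeholders; Varnish 3's DNS director is the other commonly cited option.

        #!/bin/sh
        ELB=my-elb-1234567890.us-east-1.elb.amazonaws.com   # placeholder ELB hostname
        CUR=$(dig +short "$ELB" | head -n1)
        LAST=$(cat /var/run/elb_ip 2>/dev/null)
        if [ -n "$CUR" ] && [ "$CUR" != "$LAST" ]; then
            echo "$CUR" > /var/run/elb_ip
            NAME="elb_$(date +%s)"
            varnishadm vcl.load "$NAME" /etc/varnish/default.vcl && varnishadm vcl.use "$NAME"
        fi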

    Read the article

  • Using wget to download pdf files from a site that requires cookies to be set

    - by matt74tm
    I want to access a newspaper site and then download their epaper copies (in PDF). The site requires me to log in using my email address and password, and then it permits me to access those PDF URLs. I'm having trouble 'setting my session' in wget. When I log into the site from my browser, it sets two cookie values:

        [email protected]
        Password=12345

    I tried:

        wget --post-data "[email protected]&Password=12345" http://epaper.abc.com/login.aspx

    However, that just downloaded the login page and saved it locally. The FORM on the login page has two fields, txtUserID and txtPassword, and radio buttons like this:

        <input id="rbtnManchester" type="radio" checked="checked" name="txtpub" value="44">

    Another button:

        <input id="rbtnLondon" type="radio" name="txtpub" value="64">

    If I post this to the login.aspx page, I get the same output:

        wget --post-data "[email protected]&txtPassword=12345&txtpub=44" http://epaper.abc.com/login.aspx

    If I do --save-cookies abc_cookies.txt, it doesn't seem to have anything other than the default content. For the last one, if I do --debug as well, it says:

        ...
        Set-Cookie: ASP.NET_SessionId=05kphcn4hjmblq45qgnjoe41; path=/; HttpOnly
        ...
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId 05kphcn4hjmblq45qgnjoe41
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx'
        ...
        Saving cookies to abc_cookies.txt.

    However, abc_cookies.txt shows ONLY the following:

        # HTTP cookie file.
        # Generated by Wget on 2011-08-16 08:03:05.
        # Edit at your own risk.

    (Not sure why I'm not getting any responses on SO - perhaps SU is a better forum - http://stackoverflow.com/questions/7064171/using-wget-to-download-pdf-files-from-a-site-that-requires-cookies-to-be-set)

    EDIT 1

        C:\Temp>wget --cookies=on --keep-session-cookies --save-cookies abc_cookies.txt --post-data "txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7" http://epaper.abc.com/login.aspx --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        --2011-08-18 08:15:59--  http://epaper.abc.com/login.aspx
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00a2ae80 (new refcount 1).
        ---request begin---
        POST /login.aspx HTTP/1.0
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 100
        ---request end---
        [POST data: txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:17 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Set-Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145; path=/; HttpOnly
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 107253
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx.1'
        100%[==========>] 107,253     24.9K/s   in 4.2s
        2011-08-18 08:16:05 (24.9 KB/s) - `login.aspx.1' saved [107253/107253]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

        C:\Temp>wget --referer=http://epaper.abc.com/login.aspx --cookies=on --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        --2011-08-18 08:16:12--  http://epaper.abc.com/PagePrint/16_08_2011_001.pdf
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00598290 (new refcount 1).
        ---request begin---
        GET /PagePrint/16_08_2011_001.pdf HTTP/1.0
        Referer: http://epaper.abc.com/login.aspx
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:30 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        content-disposition: attachement; filename=Default_logo.gif
        Cache-Control: private
        Content-Type: image/GIF
        Content-Length: 4568
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Length: 4568 (4.5K) [image/GIF]
        Saving to: `16_08_2011_001.pdf'
        100%[==========>] 4,568       7.74K/s   in 0.6s
        2011-08-18 08:16:14 (7.74 KB/s) - `16_08_2011_001.pdf' saved [4568/4568]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

    Contents of abc_cookies.txt:

        epaper.abc.com  FALSE  /  FALSE  0  ASP.NET_SessionId  owcrje55yl45kgmhn43gq145
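
    A hedged sketch of the usual fix for ASP.NET form logins with wget: the login page carries hidden __VIEWSTATE / __EVENTVALIDATION fields that the server expects posted back, so fetch the page first, extract them, and include them in the POST. The txtUserID/txtPassword/txtpub names come from the question; the extraction patterns, and whether this particular site validates those hidden fields, are assumptions.

        # step 1: fetch the login page, keeping the session cookie
        wget --keep-session-cookies --save-cookies abc_cookies.txt -O login.html http://epaper.abc.com/login.aspx
        # step 2: pull out the hidden ASP.NET fields (may still need URL-encoding of '+', '/', '=')
        VS=$(sed -n 's/.*id="__VIEWSTATE" value="\([^"]*\)".*/\1/p' login.html)
        EV=$(sed -n 's/.*id="__EVENTVALIDATION" value="\([^"]*\)".*/\1/p' login.html)
        # step 3: post the full form, then fetch the PDF with the authenticated session
        wget --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt \
             --post-data "__VIEWSTATE=${VS}&__EVENTVALIDATION=${EV}&txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44" \
             -O response.html http://epaper.abc.com/login.aspx
        wget --load-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf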

    Read the article

  • Prevent Ultrabay HDD from ejecting on sleep

    - by Bryce Evans
    I have a Lenovo T430s ThinkPad with a small SSD primary drive and a 500 GB Ultrabay drive. When I put the computer to sleep and then return, I get a message titled "Problem ejecting <drive name>": "Windows can't stop your 'Generic volume' device because a program is still using it." This pop-up is very annoying every time I use the computer. I don't want to disable write caching (D: > Hardware > [drive] > Policies > Quick removal) because I want best performance and never remove the drive. Any ways to avoid this pop-up?

    Read the article

  • Does scheduling recycling app pool in IIS7 help the server conserve memory better?

    - by user29266
    Hello, I have a VPS (IIS7 with Win 2008). It's got 40 websites and a SQL Server 2008 powering them, with only 2 gigs of RAM. None of the sites are mission critical; they are all just demos. I often have RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much!, Aron
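
    A hedged sketch of setting that up with appcmd (pool name, time, and memory cap are placeholders). A private-memory limit achieves the same goal more adaptively, recycling a pool only when it actually grows too large.

        appcmd set apppool "DemoSite" /+recycling.periodicRestart.schedule.[value='03:00:00']
        appcmd set apppool "DemoSite" /recycling.periodicRestart.privateMemory:102400   # limit in KB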

    Read the article

  • Why is IIS 7.5 flushing file cache very often?

    - by Steffen
    We're running a Win 2008 R2 server with IIS 7.5 for serving image files. It's only used for static content, and file caching has been set up to cache files for 10 minutes. However, IIS frequently flushes the cache completely (seen by using Perfmon). It's not application pool recycling, and it's not because the TTL has expired, so now I'm at a loss :-( I've included a screenshot of the perfmon graph where you can clearly see the issue. Is there anywhere I can see WHY it's doing these flushes? (Note: I'm aware I could maybe detect it by attaching a debugger to the process, but that's not an option because it's a production server, and it cannot handle the slowdown a debugger would cause.)
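
    A hedged, debugger-free idea: IIS 7.x reports file-cache activity through ETW, so a short trace on the box may show the flush reason with far less overhead than a debugger. The provider name is real, but the flag/level values here are assumptions - check "logman query providers" first.

        logman query providers | findstr /i "WWW Server"
        logman start iistrace -p "IIS: WWW Server" 255 5 -ets -o iistrace.etl
        rem ...wait for a flush to happen, then:
        logman stop iistrace -ets
        tracerpt iistrace.etl -o iistrace.xml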

    Read the article

  • Bypassing SQUID on freebsd with PF

    - by epema
    I have PF+SQUID31 on FreeBSD 9.0, and I want to have some hosts (aka goodguys) bypass the proxy, so that torrents are not logged. Also, I am not sure about transparent mode - it means that I don't have to configure proxy settings on the client side, right? I have tried doing a redirect exception:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck :( Here is a quick look at my conf files.

    SQUID, /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    RC, /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    PF, /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }

        set block-policy drop
        set skip on lo0

        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440

        block in on $ext_if
        pass out on $ext_if keep state

        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
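
    A hedged sketch of the missing piece (and yes, transparent interception means clients need no proxy settings): for interception to happen at all, pf has to redirect outbound port 80 to Squid, with the "no rdr" exception for the <goodguys> table listed before it, since pf uses the first matching translation rule. The exact rules below are assumptions built from the config above.

        # in pf.conf - translation rules go before the filter rules
        no rdr on $int_if inet proto tcp from <goodguys> to any port 80
        rdr on $int_if inet proto tcp from $int_if:network to any port 80 -> 192.168.1.1 port 8080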

    Read the article

  • I can't work locally unless connected to the internet - how to fix?

    - by Rodney
    Hi, in Firefox, when I am disconnected from the net, I want to work locally on my local IIS server (Win XP, Firefox 3.5.10). I do NOT have Work Offline checked, but FF says that it cannot find my site (i.e. the message FF shows if you try to access an online site while offline). This applies to any localhost URL. I tried 127.0.0.1 and checked my hosts file - that does not work either. If I check Work Offline, then it shows the Firefox message that the site cannot be reached because I have Work Offline checked; unchecking it does not help. Then I load up Safari, copy and paste the URL into that browser, and it connects to my development localhost site. It is not just browser caching, as I can log in etc. So Firefox will not let me develop locally unless I am connected to the internet, which is a problem. Suggestions please?
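
    A hedged thing to try: Firefox watches the OS link status and can silently flip itself offline when no network connection is up, which matches these symptoms. The about:config preference below (present in Firefox 3.5) disables that auto-detection; whether it is the cause here is an assumption.

        // in about:config
        network.manage-offline-status = false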

    Read the article

  • Creating an ec2 image on amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my ec2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
                [-i bytes-per-inode] [-I inode-size] [-J journal-options]
                [-G meta group size] [-N number-of-inodes]
                [-m reserved-blocks-percentage] [-o creator-os]
                [-g blocks-per-group] [-L volume-label]
                [-M last-mounted-directory] [-O feature[,...]]
                [-r fs-revision] [-E extended-option[,...]]
                [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume name. But ec2-bundle-vol doesn't seem to take in a volume name as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?
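
    A hedged workaround sketch: the trailing "-L " in the failing command suggests ec2-bundle-vol read an empty volume label from the root filesystem and passed it straight through to mkfs.ext3. Giving the root filesystem a non-empty label first may avoid the empty argument; the device name /dev/sda1 is an assumption - check with "df /".

        sudo e2label /dev/sda1 rootvol     # label the root filesystem
        sudo ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536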

    Read the article

  • Keeping rackspace vserver alive

    - by mit
    It appears to me that rackspace somehow freezes cloud VMs after some idle time. This means the first page request to a php page takes much longer to respond than the subsequent requests. I am actually querying a machine with wget from a different host now to keep it "alive". But I wonder what frequency would be necessary. Does anyone know the time period after which they send a VM to "sleep"? I guess it would be some minutes. EDIT: There is no caching involved on the php site. It just recently moved from another vhost and there was never such latency on the first request.
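
    A hedged keep-alive sketch: a cron entry on any other host that fetches a cheap page every five minutes. The frequency is a guess - shorten the interval until the first-request latency disappears; the URL is a placeholder.

        */5 * * * * wget -q -O /dev/null http://myserver.example.com/ping.php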

    Read the article

  • IIS cannot access itself

    - by dave
    We are on a corporate network that uses ISA, and I am having issues trying to keep requests from going through ISA. I have IIS7 on my local Windows 7 machine, with websites and a service layer. The websites access the service layer using a xxxx.servicelayer.local address that is set up in my HOSTS file to point to 127.0.0.1. I have the Windows Firewall client, which I have disabled. I have tried both adding this address into IE so that it does not go through ISA, and also disabling this section altogether. When the website (which is actually IIS making the request to itself) tries to access the service layer, I receive an ISA error that proxy authentication has failed. Considering that everything I can see to configure is set to not go through the proxy (ISA), I cannot see how this is actually going through the proxy and giving this error. Is there something within Windows 7 that forces the proxy setting, some sort of caching or similar?
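
    One hedged thing to check: server-side code often picks up the machine-level WinHTTP proxy settings rather than the per-user IE settings, which would explain why fixing IE changes nothing. The netsh commands are standard; the proxy name and bypass entry below are placeholders.

        netsh winhttp show proxy
        netsh winhttp set proxy proxy-server="isaproxy:8080" bypass-list="*.servicelayer.local;<local>"
        rem or, to remove the machine-level proxy entirely:
        netsh winhttp reset proxy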

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing, a Squid cluster for caching, and application data is on an IIS cluster. We load balance HAProxy by URI to optimize the Squid hit-rate, but we know that Squid is holding different copies of each page based on the Accept-Encoding header passed to it by the browser, and so IE (gzip, deflate) will have a different copy of a cached page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers and I think the best place to do so would be in HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
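
    A hedged HAProxy 1.4-style sketch: rewrite any Accept-Encoding header that mentions gzip down to the single canonical value "gzip", so all mainstream browsers collapse into one Squid variant, while clients without gzip keep their original header (and requests without the header are untouched). The regex is an assumption to verify against real traffic.

        # in the relevant frontend or backend section
        reqirep ^Accept-Encoding:.*gzip.*  Accept-Encoding:\ gzip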

    Read the article

  • Redirect some URL requests to CloudFront and the rest direct to the normal server?

    - by indiehacker
    Say I have two types of URL requests that must be handled by my REST API:

        http://query.restapi.com/image.png?apikey=abc123
        http://query.restapi.com/2.0/<apiKey>/resource.json?from=umi.us_census00.state_geometry

    Is it possible to redirect only the URL requests for static images (i.e., regex: *.png?.*) to take advantage of CloudFront's caching, and have the rest of the requests go directly to the normal EC2 server (or at least take a speedier indirect route to it)? Perhaps the added request time for the misses to CloudFront is irrelevant to worry about? Or perhaps my situation is not a good fit for CloudFront? I understand I will need to make a DNS change so that current URL requests like http://query.restapi.com/some.png?apikey=0123 get redirected to http://d1234.cloudfront.net/some.png, but I am hoping there is some way to redirect just the static .png requests so they take advantage of CloudFront.
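
    A hedged sketch of the redirect half, in nginx syntax (the distribution hostname is taken from the question; whether nginx fronts the API is an assumption). This should only match requests arriving on the public API hostname, so CloudFront's own origin fetches are served directly rather than redirected back in a loop; the trailing "?" drops the query string, matching the example above.

        # on the EC2 origin, public-facing server block only
        location ~* \.png$ {
            rewrite ^ http://d1234.cloudfront.net$request_uri? permanent;
        }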

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

        - e2retrieve
        - testdisk
        - photorec
        - dd_rescue/dd_rhelp
        - ddrescue
        - fsck.ext2
        - e2salvage

    Output of dumpe2fs:

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    And fsck:

        e2fsck 1.41.3 (12-Oct-2008)
        fsck.ext3: Group descriptors look bad... trying backup blocks...
        fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3

    I also tried backup superblocks, with the same error as the result. Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
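
    For completeness, a hedged sketch of the standard backup-superblock sequence (always work on a dd image, never the only copy; block size 4096 comes from the dumpe2fs output above). Since backup superblocks were already tried, this mainly pins down the exact invocation.

        dd if=/dev/sda3 of=/backup/sda3.img bs=4M     # work on a copy of the partition
        mke2fs -n /dev/sda3                           # -n only prints; lists backup superblock locations
        e2fsck -b 32768 -B 4096 /backup/sda3.img      # retry fsck against a backup superblock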

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the Write-Ahead Log might not be saved correctly and in fact might get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync; however, I was unable to make any configuration changes with hdparm, since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf:
         HDIO_DRIVE_CMD(identify) failed: Invalid argument

    Looks like that option is not available in the kind of setup you get in EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
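
    A hedged sketch of what can be checked instead, since EBS volumes do not expose an ATA drive cache for hdparm to toggle: make sure the filesystem issues write barriers/flushes, and measure what fsync actually costs on the volume. The mount point is a placeholder; pg_test_fsync ships with PostgreSQL 9.1+.

        mount | grep xvdf                       # confirm the volume's current mount options
        sudo mount -o remount,barrier=1 /data   # ext3/ext4: ensure barriers are enabled
        pg_test_fsync -f /data/pgtest.out       # compare wal_sync_method costs on this volume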

    Read the article

  • Wireless does not work on Ubuntu 9.04

    - by Yongwei Xing
    Hi all, I installed Ubuntu 9.04 on my old Lenovo Y520 laptop, and the wireless does not work. My wireless card is an Intel PRO/Wireless 2100, but I can not enable it. My wired card is working well. Has anyone met this before? The ifconfig output is:

        eth0      Link encap:Ethernet  HWaddr 00:0a:e4:5f:6c:30
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:973 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1025 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:574701 (574.7 KB)  TX bytes:169249 (169.2 KB)
                  Interrupt:10

        eth1      Link encap:Ethernet  HWaddr 00:0c:f1:58:79:b5
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:10 Base address:0x8000 Memory:d0202000-d0202fff

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

    The output of iwconfig is:

        eth1      unassociated  ESSID:off/any  Nickname:"ipw2100"
                  Mode:Managed  Channel=0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power:off
                  Retry short limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I have another question. On 9.04 there was an icon for the network connection on the panel at the top. After I upgraded to 9.10, that icon disappeared. How can I get it back? Best Regards,
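
    A hedged first-diagnosis sketch for the ipw2100: the driver needs its firmware loaded, and the radio must not be blocked by the laptop's wireless kill switch. The first three commands are standard; the firmware package name is an assumption that varies by Ubuntu release.

        dmesg | grep -i ipw2100        # look for firmware or "radio disabled" errors
        rfkill list                    # check for soft/hard radio blocks
        sudo ifconfig eth1 up          # try bringing the interface up
        sudo apt-get install linux-firmware   # assumed home of the ipw2100 firmware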

    Read the article

  • How to cache streaming video and silverlight with squid windows reverse proxy

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network using the squid BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned - requests are forwarded and the application is working - but it doesn't seem to cache anything (no space is used on the drive where squid is installed). I found no explicit setting to turn caching on in squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated. Any info you need to help me will be provided at once. Thanks in advance!
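
    A hedged squid.conf sketch: an accelerator caches nothing to disk without a cache_dir, and media responses often carry headers (Cache-Control: private/no-cache) that defeat caching unless overridden. Paths, sizes, and extensions below are assumptions; the refresh_pattern options shown exist in Squid 2.7 but not in every 3.x build.

        cache_dir ufs c:/squid/var/cache 10240 16 256
        maximum_object_size 512 MB
        refresh_pattern -i \.(wmv|mp4|flv|xap)$ 10080 90% 43200 override-expire ignore-no-cache ignore-private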

    Read the article

  • "RPC Server is unavailable" on Windows 7 Client with Samba3 making logon impossible

    - by HalloDu
    I am running a Samba 3.4.* server with an LDAP backend on Ubuntu 10.04 LTS as a PDC for purely Windows 7 clients (about 50). It worked quite fine for a long time, but now sometimes on some machines (really strange to replicate) I cannot log in, because it tells me that there are no logon servers. When I look in the machine's event log after logging in locally, the only error it shows is: "RPC Server is unavailable". The system was ported quite a while back from another machine running the same software versions to this new one. Are there maybe trust or caching issues between the workstations and the domain controller? The number of machines affected is steadily increasing, so do you know any reason why this problem occurs? Any information on workstation and domain trust would also be welcome (after googling I did not find anything really enlightening).
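
    A hedged check to run on an affected Windows 7 client: verify the machine account's secure channel to the PDC, the usual suspect behind "RPC Server is unavailable" at logon. MYDOM is a placeholder for the NetBIOS domain name.

        nltest /sc_query:MYDOM     # show the current secure channel state
        nltest /sc_verify:MYDOM    # re-verify and, if needed, re-establish it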

    Read the article
