Search Results

Search found 3604 results on 145 pages for 'bigced 21'.


  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed via the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com"), which are both configured in a single VirtualHost entry. The server has multiple IP addresses that it listens on. Each IP address serves identical content for the example.com domain (sharing a VirtualHost entry). I want the folder to be accessible via the IP addresses. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced in the .htaccess file for the hidden folder using the rewrite rule shown below.

    /www/hidden/.htaccess:

        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]

    I know that %{SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this?

    What should be allowed:

        http://87.65.43.21/hidden/
        https://87.65.43.21/hidden/
        http://12.34.56.78/hidden/
        https://12.34.56.78/hidden/
        http://hidden.example.com/
        https://hidden.example.com/
        http://hidden.com/
        https://hidden.com/
        http://www.hidden.com/
        https://www.hidden.com/

    What should be 404-ed with 404.php:

        http://example.com/hidden/
        https://example.com/hidden/
        http://www.example.com/hidden/
        https://www.example.com/hidden/
        http://example.com/hidden/hiddenfile.php
        https://example.com/hidden/hiddenfile.php
        etc.

    Thanks.
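
    One way to sketch this with mod_rewrite, assuming the Host header is what distinguishes the cases (requests for example.com arrive with that hostname, while the other domains and bare-IP requests do not): branch on %{HTTP_HOST} in /www/hidden/.htaccess and force a 404 only for the example.com names.

        ErrorDocument 404 /404.php
        RewriteEngine On
        # only the example.com hostnames get the 404 treatment
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^ - [R=404,L]

    Requests via hidden.example.com, hidden.com, or a raw IP carry a different Host header, so they fall through to the existing HTTPS-enforcement rule. Whether /404.php resolves correctly here depends on which docroot serves the request, so treat that path as an assumption to verify.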


  • Can't properly shut down Ubuntu, and sound problems, on an HP Mini 311 netbook

    - by Viele
    Hi, I recently bought an HP Mini 311 netbook. I replaced its hard drive and installed Ubuntu 10.04. Since then, I have encountered some very strange problems with its sound and with shutdown/reboot. At times, when I start the computer, it has no sound: in the GUI the volume is at max, but no sound comes out. This sometimes also happens after upgrades, hibernate, and toggling the wireless radio button. Strangely, when the sound is out, the machine also refuses to shut down. If I shut down using the GUI, it simply goes back to the login screen without actually shutting down. If I use "sudo shutdown 0", the computer hangs on the loading screen of the shutdown process. I have to force the PC off by holding the power button down. Usually (probably always) after I force a power-off and start up again, the sound and shutdown are back to normal. I wonder if anyone has clues regarding the cause of this problem. This is the info about the computer:

        1) installed Ubuntu 10.04 LTS RC, later upgraded to the formal released version
        2) cat /proc/asound/version == "Advanced Linux Sound Architecture Driver Version 1.0.21."
           however, when running 'alsaconf' the version displayed is 1.0.23

    Any help is appreciated. Thanks
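
    A hedged first experiment for when the sound drops out, before reaching for the power button: reload the ALSA stack in place and see whether the audio driver is wedging (the module name below assumes the usual Intel HDA codec on this class of hardware).

        # restart ALSA without a power cycle (Debian/Ubuntu alsa-utils script)
        sudo alsa force-reload
        # or reload the suspected driver module directly
        sudo modprobe -r snd-hda-intel && sudo modprobe snd-hda-intel
        # any errors from the audio driver around the failure?
        dmesg | grep -i -E 'hda|alsa'

    If a stuck audio process is what blocks the shutdown scripts, "sudo fuser -v /dev/snd/*" should name it.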


  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision. I set up the repo as follows:

        git svn init http://svn.ip.addr/repo
        git svn fetch svn-remote

    My script looks like this:

        cd /gitsvn/dir
        git svn fetch svn-remote
        git push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:

        fatal: unable to run 'git-svn'
        Counting objects: 21, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (10/10), done.
        Writing objects: 100% (11/11), 59.08 KiB, done.
        Total 11 (delta 8), reused 0 (delta 0)
        To /gitpub/repo.git
           360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
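
    A plausible reading of that log, offered as a sketch rather than a diagnosis: cron runs the script with a minimal PATH and no HOME, so the git front end can't find its git-svn helper (hence "fatal: unable to run 'git-svn'"), the fetch silently does nothing, and the stray zero-byte files would then be temp files the aborted run never cleaned up. Making the environment explicit in the script is the usual cure:

        #!/bin/bash
        # cron's PATH is minimal; add wherever git and its helpers live
        # (path is an assumption; query it interactively with `git --exec-path`)
        export PATH=/usr/local/bin:/usr/bin:/bin:$(git --exec-path)
        export HOME=/home/paul    # hypothetical; whichever user owns the repo

        cd /gitsvn/dir || exit 1
        git svn fetch svn-remote
        git push pub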


  • Connection timeout when trying to SSH

    - by dan
    The other day I tried to connect to my remote server via SSH as I always have. But now when I try to connect, it just times out after about 60 seconds. I ran

        service ssh start

    which tells me "Job is already running: ssh". I then ran

        $ netstat -tnlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp        0      0 0.0.0.0:993      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:995      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 127.0.0.1:3306   0.0.0.0:*        LISTEN  2030/mysqld
        tcp        0      0 0.0.0.0:110      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:143      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:10000    0.0.0.0:*        LISTEN  2157/perl
        tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  3028/sshd
        tcp        0      0 0.0.0.0:25       0.0.0.0:*        LISTEN  2273/master
        tcp6       0      0 :::80            :::*             LISTEN  2618/apache2
        tcp6       0      0 :::21            :::*             LISTEN  2291/proftpd: (acce
        tcp6       0      0 :::22            :::*             LISTEN  3028/sshd

    I am able to access subdomains on my site, and FTP, but I can't SSH or even ping remotely. Any thoughts?
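
    With sshd listening on port 22 yet both SSH and ping timing out from outside, suspicion falls on packet filtering rather than the daemon itself; a hedged checklist from the server console:

        # any DROP/REJECT rules that would swallow port 22 or ICMP?
        sudo iptables -L INPUT -n -v
        # does a loopback connection work, isolating the network path?
        ssh -v localhost

    If the local connection succeeds and the rules look clean, the block is likely upstream (a hosting-provider firewall or security group).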


  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure squid with http_access allow all, everything works fine. My current configuration, mostly default, is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80          # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21          # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443         # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70          # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210         # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535  # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280         # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488         # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591         # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777         # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I start to get a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    The squid server fails to start. Any suggestions on what I am missing in the config? Please help!
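
    Both symptoms are consistent with squid (which runs as the unprivileged "proxy" user on Ubuntu, an assumption worth verifying against cache_effective_user) being unable to write under /home/panshul/squidCache; a hedged fix sketch:

        # let the squid runtime user own the log and cache trees
        sudo chown -R proxy:proxy /home/panshul/squidCache
        # a new cache_dir must be initialized before squid will start with it
        sudo squid3 -z
        sudo service squid3 restart
        # if it still dies, the real reason is usually spelled out in
        # /var/log/squid3/cache.log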


  • Using "route add" to tell my computer to use a direct ethernet connexion instead of wifi ?

    - by TheSamFrom1984
    Two PCs are involved. Both are connected to the Internet via WiFi on the same router. I can ping to/from each other and share folders flawlessly, but I'd like to set up a direct Ethernet link between them to speed up file transfers AND keep the WiFi connections (no gateway). So I plugged in my RJ45 cable and set up the connection. It works, but the PCs only use this connection if one of them is disconnected from the WiFi. PC1's local address is 192.168.0.7 on its Ethernet interface and 192.168.1.21 on the WiFi one. PC2's local address is 192.168.0.6 on its Ethernet interface and 192.168.1.22 on the WiFi one. My question is: I'd like to use the route add command to tell PC1 to use the Ethernet interface when it needs to connect to PC2, by specifying "IF 2" at the end of the route add command. How can I do this? I don't know what to put in the "gateway" parameter of the command, and everything I tried returns "the parameter is incorrect" (I don't know which one). ipconfig /all on PC1:

        Windows IP Configuration

        Host Name . . . . . . . . . . . . : Sam-PC
        Primary Dns Suffix  . . . . . . . :
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No

        Wireless LAN adapter Wireless Network Connection:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : NETGEAR WG111v3 54Mbps Wireless USB 2.0 Adapter
        Physical Address. . . . . . . . . : 00-22-3F-DA-51-56
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        Link-local IPv6 Address . . . . . : fe80::1d33:60b:476c:d396%12(Preferred)
        IPv4 Address. . . . . . . . . . . : 192.168.1.21(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Lease Obtained. . . . . . . . . . : vendredi 27 novembre 2009 15:38:48
        Lease Expires . . . . . . . . . . : dimanche 29 novembre 2009 07:33:04
        Default Gateway . . . . . . . . . : 192.168.1.1
        DHCP Server . . . . . . . . . . . : 192.168.1.1
        DHCPv6 IAID . . . . . . . . . . . : 301998655
        DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-7E-58-EA-00-1A-4D-59-B2-71
        DNS Servers . . . . . . . . . . . : 192.168.1.1
        NetBIOS over Tcpip. . . . . . . . : Enabled

        Ethernet adapter Local Area Connection:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
        Physical Address. . . . . . . . . : 00-1A-4D-59-B2-71
        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes
        Link-local IPv6 Address . . . . . : fe80::f598:c3a0:df8d:706e%11(Preferred)
        IPv4 Address. . . . . . . . . . . : 192.168.0.7(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . :
        DHCPv6 IAID . . . . . . . . . . . : 234887757
        DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-7E-58-EA-00-1A-4D-59-B2-71
        DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                            fec0:0:0:ffff::2%1
                                            fec0:0:0:ffff::3%1
        NetBIOS over Tcpip. . . . . . . . : Enabled
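
    A sketch of the route command, with a caveat: the two subnets already differ (192.168.0.x on Ethernet vs 192.168.1.x on WiFi), so simply addressing PC2 as 192.168.0.6 should take the Ethernet path with no route changes at all. If an explicit host route is still wanted, a directly attached destination needs no real gateway; Windows accepts the interface's own address there, and the IF number comes from "route print" (the %11 in the ipconfig output above suggests the Ethernet interface is index 11, not 2):

        route add 192.168.0.6 mask 255.255.255.255 192.168.0.7 if 11

    Adding -p makes the route survive reboots.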


  • Is it possible to install ffmpeg and x264 on a Synology Diskstation 209?

    - by Kieran Benton
    Hi, Complete Linux novice here! :) I'm trying to get my brilliant DS209 NAS box to do some transcoding for me of a few AVI videos to a format suitable for my Apple iPod touch (yes, I could do it with another machine and Handbrake, but it would be really useful to offload some of this to the NAS to do overnight). I've managed to install ipkg onto my DS209 NAS box and have played around with installing some packages (binutils, mono, bash etc). I've even managed to install ffmpeg from ipkg and put together the correct command line profile to do the encoding as a .sh file:

        time ffmpeg -y -i $1 -f mp4 -title $2 -vcodec libx264 -level 21 \
             -s 426x320 -b 512k -bt 512k -bufsize 4M -maxrate 4M -g 250 \
             -coder 0 -threads 0 -acodec libfaac -ac 2 -ab 64k $3

    However, running this I get a missing dependency on libx264. I've tried building it from the latest source in git, but I get errors during the make process that I just don't understand (way out of my depth):

        encoder/set.c: In function 'x264_sei_version_write':
        encoder/set.c:491: error: 'X264_VERSION' undeclared (first use in this function)
        encoder/set.c:491: error: (Each undeclared identifier is reported only once
        encoder/set.c:491: error: for each function it appears in.)
        make: *** [encoder/set.o] Error 1

    Can anyone else try building it or give me a pointer as to what I can do to get this going? It's been a good learning experience so far! Thanks.
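
    On the X264_VERSION error, a hedged pointer rather than a certain fix: that macro is normally generated at build time (via the version.sh script) from git metadata, and a source tree without its .git directory, or a box without git on the PATH, can leave it undefined. Two things worth trying:

        # 1) regenerate the version header by hand inside the x264 source tree
        ./version.sh
        # 2) or, if version.sh has no git metadata to work with, stub it out
        #    (hypothetical values, just enough to satisfy the compiler)
        printf '#define X264_VERSION ""\n#define X264_POINTVER "0.0.x"\n' > version.h

    Before going further down the cross-compiling road, it may be worth checking whether ipkg simply offers a prebuilt libx264 package for this box.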


  • Unable to get squid working for remote users

    - by Sean
    I am trying to set up squid 3.2.4, but I have not been able to get it working for remote users. It works fine locally. I am unable to figure out what I am doing wrong...

        http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern . 0 20% 4320
        acl localnet src 10.0.0.0/8      # RFC 1918 possible internal network
        acl localnet src 172.16.0.0/12   # RFC 1918 possible internal network
        acl localnet src 192.168.0.0/16  # RFC 1918 possible internal network
        acl localnet src fc00::/7        # RFC 4193 local private network range
        acl localnet src fe80::/10       # RFC 4291 link-local (directly plugged) machines
        acl SSL_ports port 443
        acl Safe_ports port 80           # http
        acl Safe_ports port 21           # ftp
        acl Safe_ports port 443          # https
        acl Safe_ports port 70           # gopher
        acl Safe_ports port 210          # wais
        acl Safe_ports port 1025-65535   # unregistered ports
        acl Safe_ports port 280          # http-mgmt
        acl Safe_ports port 488          # gss-http
        acl Safe_ports port 591          # filemaker
        acl Safe_ports port 777          # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access allow localhost
        http_access allow localnet
        http_access allow all
        cache deny all
        via off
        forwarded_for off
        header_access From deny all
        header_access Server deny all
        header_access WWW-Authenticate deny all
        header_access Link deny all
        header_access Cache-Control deny all
        header_access Proxy-Connection deny all
        header_access X-Cache deny all
        header_access X-Cache-Lookup deny all
        header_access Via deny all
        header_access Forwarded-For deny all
        header_access X-Forwarded-For deny all
        header_access Pragma deny all
        header_access Keep-Alive deny all
        acl ip1 localip 1.1.1.90
        acl ip2 localip 1.1.1.91
        acl ip3 localip 1.1.1.92
        acl ip4 localip 1.1.1.93
        acl ip5 localip 1.1.1.94
        tcp_outgoing_address 1.1.1.90 ip1
        tcp_outgoing_address 1.1.1.91 ip2
        tcp_outgoing_address 1.1.1.92 ip3
        tcp_outgoing_address 1.1.1.93 ip4
        tcp_outgoing_address 1.1.1.94 ip5
        tcp_outgoing_address 1.1.1.90
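
    One hedged observation rather than a definitive diagnosis: in squid 3.2 the "transparent" flag (renamed "intercept") marks a port for NAT-redirected traffic, and such ports reject ordinary forward-proxy requests; locally intercepted traffic can appear fine while remote browsers pointed explicitly at port 3128 are refused. Keeping a second, plain port for explicit clients is a cheap experiment:

        # intercepted LAN traffic keeps using 3128
        http_port 3128 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem
        # explicit (remote) clients get an untouched forward-proxy port
        http_port 3129

    Squid 3.x also replaced header_access with request_header_access/reply_header_access, so those lines are worth checking against warnings in cache.log.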


  • Why is the 2nd IP in my traceroute not answering pings anymore?

    - by Pedro77
    My Internet is really laggy today. I did a traceroute and realized that I'm getting no answer from an IP at the beginning of the route. See:

        Tracing route to 12.129.202.154 over a maximum of 30 hops

          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     *        *        *     Request timed out.
          3     8 ms     8 ms     8 ms  bd044008.virtua.com.br [189.4.64.8]
          4     9 ms     8 ms     8 ms  bd044009.virtua.com.br [189.4.64.9]
          5    26 ms    26 ms    24 ms  embratel-T0-1-5-0-tacc01.cas.embratel.net.br [200.174.243.21]
          6   360 ms    15 ms    12 ms  ebt-T0-15-0-12-tcore01.ctamc.embratel.net.br [200.244.140.218]
          7   330 ms   349 ms   261 ms  ebt-Bundle-POS11942-intl04.mianap.embratel.net.br [200.230.220.10]
          8   139 ms   141 ms   139 ms  sl-st30-mia-.sprintlink.net [144.223.64.221]

    Connection diagram: PC - router configured as access point - router (192.168.0.1) - cable modem (192.168.100.1).

    Well, I think it is odd that the 2nd IP is not returning the ping. I looked at some old traceroute logs to see what the 2nd IP used to be: it was 10.19.0.1. So, what does this 2nd IP stand for? How can I find out why it is not answering the ping? I don't understand it: if it does not answer the ping, how can the packets continue (yeah, newbie question)?

    Edit: well, because hop 3 answers in 8 ms, the hop 2 "Request timed out" should not really be a problem in itself. But it is still odd that the 2nd hop stopped answering ping requests. So my doubts are: 1. Where is the IP 10.19.0.1 from? 2. Why did it stop answering ping requests? 3. How can hop 7's time be smaller than hop 6's, and hop 8's smaller than 7's and 6's? Shouldn't the times be higher for each hop? Like: hop 3's time should be the sum of the hops before it plus its own time (hop 3 = 1+2+3)??
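
    For separating "router that just won't answer ICMP TTL-exceeded" from real packet loss, Windows ships pathping, which sends a longer probe train per hop and reports per-hop loss; a silent hop 2 showing 0% loss at the hops beyond it means nothing is actually broken there:

        pathping -n 12.129.202.154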


  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance. One website, "abc", was down for a few hours: it sometimes threw database connection errors and sometimes just took too long to respond. Another website, "def", was incredibly slow but still up and running. The rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it, and associate my Elastic IP to the new instance, or "launch more like this"?

    Background on what may have happened to my EC2: the last time I made changes was 21 hours ago. A cron job to create snapshots ran around 19 hours ago, and it has been running for a long time. Google Analytics shows traffic to my websites such as kidlander.sg has been nothing exceptional. Are there any other actions I should take or better options I could have? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I can get.)

    Update: I got everything back up and running and CPU utilisation is back to normal, around 30%. There is one difference between "def" and "abc" (as well as my other websites): "def"'s database is hosted on RDS, while "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.


  • Where is my CPU usage going?

    - by Josh
    My Ubuntu 10.04 Lucid virtual machine is saying it's at 100% CPU usage... but all I'm running is Thunderbird. According to top, CPU usage should be ~25.9%... How do I interpret this conflicting output from top?

        top - 13:55:26 up  3:35,  4 users,  load average: 3.03, 2.59, 2.48
        Tasks: 178 total,   1 running, 177 sleeping,   0 stopped,   0 zombie
        Cpu(s): 16.0%us, 79.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  1.3%hi,  3.0%si,  0.0%st
        Mem:    509364k total,   479108k used,    30256k free,     3092k buffers
        Swap:  2096440k total,    58380k used,  2038060k free,   225116k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         7708 jnet      20   0  480m 109m  17m S 18.4 22.1  21:59.14 thunderbird-bin
         4615 jnet      20   0  5488 1268 1040 S  2.3  0.2   5:00.03 nx-rootless-ses
         7124 jnet      20   0 56688  27m 4812 S  2.0  5.5   6:35.09 nxagent
         6724 nx        20   0  9628 1400  636 S  1.6  0.3   3:26.59 sshd
        30106 root      20   0  2544 1236  908 R  0.7  0.2   0:00.33 top
           19 root      20   0     0    0    0 S  0.3  0.0   0:22.45 ata/0
           38 root      20   0     0    0    0 S  0.3  0.0   0:05.53 scsi_eh_1
          345 root      20   0     0    0    0 S  0.3  0.0   0:04.72 kjournald
         1719 root      20   0  3260 1192  944 S  0.3  0.2   0:17.36 vmware-guestd
            1 root      20   0  2804 1356  940 S  0.0  0.3   0:01.99 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.01 kthreadd
            3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.15 ksoftirqd/0
            5 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
        ...

    Specifically I'm referring to the fact that the CPU usage totals show 0% idle time:

        Cpu(s): 16.0%us, 79.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  1.3%hi,  3.0%si,  0.0%st

    Yet when adding up the percentages in the %CPU column I get 25.9%, not 100%!
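
    A hedged pointer for digging further: the Cpu(s) summary and the per-process %CPU column are sampled differently, and the 79.7%sy here is kernel time that is not attributed to any listed process (interrupt handling, I/O, or, in a VM, virtualization overhead). Tools that break the system time out can make the gap visible:

        # per-CPU and kernel/user breakdown over time (sysstat package)
        mpstat -P ALL 1
        # which system calls is the busiest process making? (PID from the
        # listing above, here used purely as an example)
        sudo strace -c -p 7708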


  • Rsync run from cron doesn't work: rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it, and off it goes, runs and completes. Now that I know it's working, I set out to fully automate it via cron. First off, I created an SSH authorized key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copied the rsync-key.pub file across to the Linux server, placed it in the rsync user's .ssh folder, and renamed it to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I then made sure that the authorized_keys file is chmod 600 and the folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    then gave the shell file execute permissions and added the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    I checked my cron log file after the above command should have run, and I get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should. But when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time it's through the Mac Keychain system rather than typing into the console window. With the Mac Keychain I can set it to save the password so that it doesn't ask for it again, but I'm sure this password isn't picked up when run with cron. This is where I assume rsync in cron is failing: it needs a password to connect, but I thought the whole idea of making the SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks Scott
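
    The detail that gives it away is ssh-keygen asking for a password: that was a passphrase on the private key, so the key itself is locked and only usable where an agent or the Keychain can unlock it, which cron cannot do. A hedged fix is a key with an empty passphrase (or "ssh-keygen -p" to strip the existing one in place):

        # regenerate with no passphrase (-N "") so unattended use works
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
        # re-copy rsync-key.pub into /home/philosophy/.ssh/authorized_keys, then:
        ssh -i /Users/admin/Documents/Backup/rsync-key philosophy@example.com true

    If that last command returns without prompting, the cron job should too.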


  • Setting up git on Cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1; when I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl files, which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
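
    The whole cascade follows from the first three missing headers: without the iconv and OpenSSL development headers, SHA_CTX and friends are never declared, and everything downstream of them falls over. A hedged way forward, assuming Cygwin's stock package names:

        # in Cygwin's setup.exe, install: libiconv-devel, openssl-devel, zlib-devel
        # then rebuild cleanly
        make clean && make install

    Alternatively, git's Makefile has knobs to build without those libraries entirely:

        make NO_OPENSSL=1 NO_ICONV=1 install

    Installing the prebuilt git package offered by Cygwin's setup.exe instead of compiling from source is also worth considering.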


  • Windows 7 fails to install

    - by Brian Ortiz
    I'm upgrading from Vista SP1 (which was itself upgraded from XP over a year ago) to Windows 7 RTM (64-bit Ultimate to 64-bit Ultimate). After 4 hours or so, the install fails with the message "This version of Windows could not be installed. Your previous version of Windows has been restored, and you can continue to use it." After this error I'm back at my Vista desktop; there was no error that I could see during the install, just a message indicating that it was reverting everything. I tracked down the error logs (from C:\$WINDOWS.~BT\Sources\Panther) and uploaded them onto Pastebin. Here is an excerpt:

        2009-08-09 02:54:57, Error    Number of Enumerated Devices = 21[gle=0x00000103]
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x[gle=0x80092004]
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x[gle=0x80092004]

    It was suggested that I upgrade to SP2 before upgrading to Windows 7, but this made no difference. I have since uninstalled SP2, as it was creating some problems with a piece of hardware. I know a fresh install is best, but I'm hoping to avoid that because I'd need a new hard drive. Per Reuben's instruction, I found the install's dump and uploaded it here. (266 KB)


  • Formatting an external HDD stuck at 70%

    - by mahmood
    My external HDD, a 250GB WD (powered by USB), seems to have a problem! Whenever I try to copy some files, it gets stuck while copying. I decided to format it, so I used the Windows tool and performed a format (not the quick one); however, at nearly 70% it got stuck. Then I decided to perform a low-level format with lowlevel. Again it got stuck at 70%. I concluded that the HDD has bad sectors. So is there any tool that marks the bad sectors and bypasses them? It is not very reasonable to throw away 250GB because of some bad sectors!

    P.S: I saw a similar topic, but there was no conclusion there either. The SMART data is:

        Attribute                                Raw value  Value  Threshold  Status
        Read Error Rate                          50         200    51         OK
        Spin-Up Time                             3275       154    21         OK
        Start/Stop Count                         2729       98     0          OK
        Reallocated Sectors Count                0          200    140        OK
        Seek Error Rate                          0          100    51         OK
        Power-On Hours (POH)                     1057       99     0          OK
        Spin Retry Count                         0          100    51         OK
        Recalibration Retries                    0          100    51         OK
        Power Cycle Count                        1385       99     0          OK
        Power-off Retract Count                  425        200    0          OK
        Load/Unload Cycle Count                  12974      196    0          OK
        Temperature                              43         43     0          OK
        Reallocation Event Count                 0          200    0          OK
        Current Pending Sector Count             23         200    0          Degradation
        Uncorrectable Sector Count               0          100    0          OK
        UltraDMA CRC Error Count                 6          200    0          OK
        Write Error Rate/Multi-Zone Error Rate   0          100    51         OK

    It seems that the most important thing is this line:

        Current Pending Sector Count             23         200    0          Degradation

    Any idea on that?
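
    Those 23 pending sectors back up the bad-sector theory. NTFS can map bad clusters out after the fact, which is essentially the "mark and bypass" being asked for; a hedged sketch from an elevated Command Prompt (the drive letter is an assumption):

        chkdsk E: /R

    A full (non-quick) NTFS format also retests each cluster and records failures in the $BadClus metadata file. That said, a nonzero and growing Current Pending Sector count is usually a drive on its way out, so treat the remap as a stopgap for non-critical data, not a repair.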


  • Apache web logs "GET http/1.1" 200 error?

    - by songdogtech
    I've got some puzzling (at least to me) errors that show up in my webserver logs. What's puzzling is that they show my error page being referenced, but that error page doesn't show up as being displayed in hits on statcounter.com. Can someone tell me the difference between these two kinds of entries? Or is only one a "real" error? This is in WordPress, and I have the 404.php file in my theme redirecting to a static WordPress page at mydomain.com/error/

    These first two log entries show /error/, but a "hit" on my /error/ page doesn't show in my webstats. The referenced page, /2009/08/delete-preference-files/, exists and shows a "hit" at statcounter.

        92.17.232.242 - - [13/Mar/2010:01:45:43 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/2009/08/delete-preference-files/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 732 4432
        92.17.232.242 - - [13/Mar/2010:01:47:11 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/wp-content/themes/default/style.css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 861 4432

    These next two are entries that display my /error/ page, because /error/ gets a "hit" in statcounter. The page /cookies/ doesn't exist.

        72.174.16.202 - - [13/Mar/2010:10:53:37 -0800] "GET /cookies HTTP/1.1" 302 21 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 394 642
        72.174.16.202 - - [13/Mar/2010:10:53:38 -0800] "GET /error/ HTTP/1.1" 200 4012 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 393 4432
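
    For eyeballing patterns like this in bulk, a quick tally of status codes per URL from the Apache combined log can separate requests with real referers (likely humans, who also run statcounter's JavaScript) from empty-referer requests (often bots, which never execute it):

        # count (status, URL) pairs; fields 9 and 7 in combined log format
        awk '{print $9, $7}' access.log | sort | uniq -c | sort -rn | head -20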


  • How to fix Apache crashing with PHP+cURL on an SSL request?

    - by Jason Cohen
    My Apache process segfaults whenever I call curl_exec() from PHP with an "https://" URL. If I use http instead of https as the URL transport, it works perfectly, so I know curl and the other curl options are correct. I can use curl from the command line on that server with the https version of the URL and it works perfectly, so I know the remote server is responding correctly, the cert isn't expired, etc.

    My server is:

        Linux 2.6.32-21-server #32-Ubuntu SMP Fri Apr 16 09:17:34 UTC 2010 x86_64 GNU/Linux

    My Apache version is:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:21:26

    My PHP version is:

        PHP 5.3.2-1ubuntu4.2 with Suhosin-Patch (cli) (built: May 13 2010 20:03:45)
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    My PHP curl module info is:

        cURL support => enabled
        cURL Information => 7.19.7
        Age => 3
        Features
        AsynchDNS => No
        Debug => No
        GSS-Negotiate => Yes
        IDN => Yes
        IPv6 => Yes
        Largefile => Yes
        NTLM => Yes
        SPNEGO => No
        SSL => Yes
        SSPI => No
        krb4 => No
        libz => Yes
        CharConv => No
        Protocols => tftp, ftp, telnet, dict, ldap, ldaps, http, file, https, ftps
        Host => x86_64-pc-linux-gnu
        SSL Version => OpenSSL/0.9.8k
        ZLib Version => 1.2.3.3
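
    A hedged line of investigation rather than a known fix: https-only segfaults in mod_php on this era of Ubuntu were often two SSL stacks colliding inside one Apache process (for instance an OpenSSL-linked libcurl alongside GnuTLS- or NSS-linked modules), so seeing exactly which libraries get mapped in can be telling:

        # which SSL flavour does PHP's curl extension pull in?
        # (the 20090626 API directory is typical for PHP 5.3; verify the path)
        ldd /usr/lib/php5/20090626/curl.so | grep -Ei 'ssl|gnutls|nss'
        # and what the running Apache worker actually has loaded
        sudo grep -Ei 'ssl|tls|curl' /proc/$(pgrep -o apache2)/maps

    A backtrace from the segfault (CoreDumpDirectory in Apache plus gdb on the core file) would settle it definitively.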


  • How to recover a Linksys WRT54GL router that has a blinking green power LED and no response from the web interface

    - by Peter Mounce
    I was flashing the router with the Tomato firmware, but something went wrong; I'm not sure what. Now the router responds to ping at 192.168.1.1 (my Mac's on a static IP, 192.168.1.21), but the web interface doesn't come up. I have read in a couple of places that this situation is recoverable, but I haven't been having much success, and so I wondered whether anyone could help.

    From my Mac (OS X 10.5) I have tried to tftp a new vanilla Linksys firmware to the router and reboot; according to the trace, this sends it, but the router behaves no differently after a reboot. I've read that if boot_wait is turned on, I'll have an easier time, but I haven't been able to find any instructions that tell me how I can check whether I turned it on (I don't think I have, but I might have when I tinkered the first time months ago - the router has worked since then, though).

    I have found a couple of references to something called JTAG, which seems to be some kind of homebrew diagnostic cable, but that's a little beyond my ken. Happy to try it, with muppet-level instructions, though (I do software, not hardware!). So, I'm at a bit of a loss, really, and wondered whether anyone could provide me with the route (ha. ha.) out of this mess? Hm, I can't post all the links I wanted to until I have some more reputation.
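
    For what it's worth, timing matters with this bootloader: it only listens for TFTP in the first seconds after power-on. A hedged version of the classic WRT54GL rescue from an OS X terminal (the IP and the boot-window behaviour are the usual defaults; treat them, and the firmware filename, as assumptions):

        tftp 192.168.1.1
        tftp> binary
        tftp> rexmt 1
        tftp> timeout 60
        tftp> put linksys-firmware.bin

    Type the put command first, then plug the router's power in, so the 1-second retransmits catch the boot window; if boot_wait is off, the window is very short, and a JTAG cable becomes the fallback.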


  • Official end of support date for Internet Explorer 6 on Windows XP

    - by scunliffe
    If I read the docs on Windows Service Pack support policies, the specific Internet Explorer lifecycle support page, and the Wikipedia page, I've deduced that IE6 support ends/ended at:

        Windows 2000: ended (date unknown)
        Windows XP SP0 (RTM): ended (Home: 30-Aug-2003, Pro: 30-Sep-2004)
        Windows XP SP1: ended (Home: 11-Jul-2004, Pro: 11-Jul-2004)
        Windows XP SP2: Home: 13-Jul-2010, Pro: 13-Jul-2010
        Windows XP SP3 (released April 21, 2008): Home: ???, Pro: ???

    What isn't clear is the Windows XP SP3 scenario. In "human" terms, when is the end of support for IE6 on Windows XP SP3? E.g. if there is never a Windows XP SP4... or, heaven forbid, an SP4 is released. I realize this doesn't force people to upgrade etc.; however, I'm trying to get a "semi" official word on when IE6 moves into the "not supported" category. I'm not interested in philosophical answers, e.g. "big enterprise won't upgrade but they will expect support into 2017" stuff... I just want the "clear answer" in terms of official Microsoft support.


  • CentOS TCP/IP connection very slow

    - by yuli chika
    I have a VPS (CentOS 6.1, 64-bit) with 4 GB RAM. It has always run well, but in the last few days the server has become slow: opening a small CSS file (2 KB) takes 22 seconds. I tested from home/office/phone with IE, Chrome, Safari and Firefox. Firebug's Net panel shows:

        DNS Lookup    4 ms
        Connecting    21.18 s
        Sending       1 ms
        Waiting       115 ms
        Receiving     9 ms

    The connection phase costs 21.18 seconds. I have checked all the log files and there are no errors. The top command shows there is still free memory:

        top - 00:23:15 up 8 days,  3:57,  1 user,  load average: 3.60, 3.42, 3.83
        Tasks: 221 total,   4 running, 217 sleeping,   0 stopped,   0 zombie
        Cpu(s): 19.3%us,  3.2%sy,  0.0%ni, 76.1%id,  1.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4194304k total,  3247724k used,   946580k free,        0k buffers
        Swap:        0k total,        0k used,        0k free,        0k cached

          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        32357 mysql   15   0 3710m 835m 6268 S 34.5 20.4 39:14.40 mysqld
         9780 apache  15   0  442m  59m  12m S 33.2  1.4  0:05.69 httpd
         9842 apache  15   0  403m  26m  10m S 16.9  0.7  0:01.23 httpd
         9847 apache  15   0  412m  45m  22m R 15.3  1.1  0:01.00 httpd
         9834 apache  15   0  426m  46m  11m R 13.0  1.1  0:02.22 httpd
         9891 apache  15   0  407m  43m  19m S  8.0  1.1  0:00.33 httpd
         9845 apache  15   0  414m  51m  24m S  6.0  1.3  0:01.53 httpd
         9827 apache  15   0  402m  28m  11m S  3.3  0.7  0:02.69 httpd
         9768 apache  16   0  414m  51m  24m S  3.0  1.3  0:06.51 httpd
         9889 root    15   0  211m  12m 8160 S  2.7  0.3  0:00.32 php
         9702 apache  15   0  415m  55m  26m S  1.7  1.4  0:10.67 httpd
         9844 apache  15   0  413m  47m  21m S  1.7  1.2  0:01.21 httpd
         9697 apache  15   0  414m  51m  24m S  1.3  1.3  0:11.05 httpd
         9778 apache  15   0  414m  53m  25m S  1.3  1.3  0:05.38 httpd
         9772 apache  15   0  414m  51m  23m R  0.7  1.3  0:05.04 httpd
         9823 apache  15   0  415m  50m  23m S  0.7  1.2  0:03.97 httpd
         9837 apache  15   0  402m  27m  11m S  0.3  0.7  0:01.04 httpd

    So how do I find where the problem is and fix it? I haven't changed any config files in these days. Thanks.
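
    A 21-second "Connecting" phase is suspicious in a specific way: it is roughly the sum of the kernel's TCP SYN retransmission delays (3 + 6 + 12 s), which points at dropped connection attempts, e.g. a full listen backlog, rather than a slow application. Two hedged checks on the server:

        # SYN-related drops and listen-queue overflows since boot
        netstat -s | grep -iE 'syn|listen'
        # Recv-Q vs Send-Q on the listening :80 socket (current queue vs limit)
        ss -lnt

    If the overflow counters climb while the slowness is happening, raising Apache's backlog/MaxClients, or finding what stalls the accept loop, would be the next step.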


  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of success of a rebuild, ignoring mechanical problems, is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bit_read

    With 3 TB drives I get:

        38% probability of experiencing an URE (unrecoverable read error) for a 2+1 disk RAID5 (4.7% for enterprise drives)
        21% for a RAID1 (2.4% for enterprise drives)
        51% probability of error during recovery for the 3+1 RAID5 often used by users of SOHO products like Synology's

    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant to multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of an URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
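
    A worked instance of the formula, assuming the usual consumer URE spec of 1 error per 10^14 bits read: rebuilding a 2+1 RAID5 of 3 TB drives reads the two surviving disks, 2 x 3 TB = 4.8 x 10^13 bits, so

        error_probability = 1 - (1 - 10^-14)^(4.8 x 10^13) ≈ 1 - e^-0.48 ≈ 38%

    which reproduces the figure above; the enterprise number follows from a 10^-15 rate (1 - e^-0.048 ≈ 4.7%), and the RAID1 case from reading a single 3 TB disk (1 - e^-0.24 ≈ 21%).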


  • Unable to write DVD-R (blank DVDs)

    - by FrozenKing
    I have a problem with my DVD drive: it can read CDs/DVDs and can write CDs and all CD/DVD-RWs, but it cannot write DVD-R. The drive is a Samsung SH-S203B. I also have a log file created by Nero Burning ROM 11. Actually, the fact is that no blank DVDs are recognized in my drive; only previously written DVDs can be read! Is this a problem with the OS, should I try cleaning the drive, or is my drive failing? It is 4 years old, so is it going to give out now, since it is showing these symptoms?

        OS = WinXP
        AV = KIS 2012
        DVD Drive = Samsung SH-S203B (also tried the latest firmware and downgrade versions)

        IA32 Nero Version: 11.2.4.100
        Internal Version: 11,2,4,100
        Recorder: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
          Adapter driver: <Serial ATA> HA 1
          Drive buffer : 2048kB
          Bus Type : via Inquiry data
        CD-ROM: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
          Adapter driver: <Serial ATA> HA 1

        18:58:10 #37 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data: 0x28 00 00 00 00 00 00 00 10 00
          Sense Key: 0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000

        18:58:10 #38 SectorVerify 20 File Cdrdrv.cpp, Line 12057
          Read errors from sector 0 to 14 <Padding>

        18:58:19 #39 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data: 0x28 00 00 00 00 10 00 00 10 00
          Sense Key: 0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000

        18:58:19 #40 SectorVerify 21 File Cdrdrv.cpp, Line 12057
          Read error at sector 15 <Virtual Multisession Info>

        18:58:19 #41 SectorVerify 20 File Cdrdrv.cpp, Line 12057
          Read errors from sector 16 to 18 <Volume Structure Descriptor Sequence>

        18:58:28 #42 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data: 0x28 00 00 00 00 20 00 00 10 00
          Sense Key: 0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000


  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/).

    For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder to root:root, since I wanted to jail in the user and... everything else, by the way. Also, in the magento folder, I chowned everything to sio2104:magento so they can access what they want. Finally, I added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
          ChrootDirectory /usr/share/nginx/www/magento
          ForceCommand internal-sftp
          AllowTCPForwarding no
          X11Forwarding no
          PasswordAuthentication yes
        #UsePAM yes

    And the result is... well, I can enter my login and password, and it all ends with a "broken pipe" error.

        $ sftp sio2104@10.20.0.50
        [....some debug....]
        sio2104@10.20.0.50's password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I log in with SSH or SFTP with my personal user, everything works fine.
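
    A likely suspect, stated as a sketch: sshd insists the ChrootDirectory and every parent be root-owned and not group/world-writable, which is satisfied here, but the 700 mode on magento then locks the jailed user out of their own jail root, and the session dies right after authentication. Loosening the top of the jail and giving the user a writable area inside usually cures this exact "broken pipe":

        # jail root stays root-owned, but must be enterable by the jailed user
        chmod 755 /usr/share/nginx/www/magento
        # a user-owned directory inside the jail for actual uploads
        # ("files" is a hypothetical name; any subdirectory works)
        mkdir -p /usr/share/nginx/www/magento/files
        chown sio2104:magento /usr/share/nginx/www/magento/files

    Running a second daemon in the foreground with "/usr/sbin/sshd -d -p 2222" prints the chroot complaint verbatim if that's not it.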


  • Website cannot be accessed with Google DNS because of unsigned DNS

    - by Sinan Samet
    I get this error:

        Inconsistent security for stakeholdergame.com - DS found at parent, but no DNSKEY found at child.

    on http://dnscheck.pingdom.com/?domain=stakeholdergame.com. People can't access my site with Google Public DNS because of this. How do I solve this problem?

    dig @ns1.haveabyte.nl stakeholdergame.com DS shows me this:

        ; <<>> DiG 9.8.3-P1 <<>> @ns1.haveabyte.nl stakeholdergame.com DS
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42223
        ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;stakeholdergame.com.           IN      DS

        ;; AUTHORITY SECTION:
        stakeholdergame.com.    14400   IN      SOA     ns1.haveabyte.nl. hostmaster.stakeholdergame.com. 2014030300 14400 3600 1209600 86400

        ;; Query time: 21 msec
        ;; SERVER: 79.170.93.174#53(79.170.93.174)
        ;; WHEN: Tue Jun 10 11:20:41 2014
        ;; MSG SIZE  rcvd: 100
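
    The error describes a broken chain of trust: the .com parent zone publishes a DS record saying "this zone is signed", but the nameservers serve no DNSKEY, so validating resolvers such as Google Public DNS must treat every answer as bogus. There are two consistent end states, with a check for each (a sketch):

        # confirm the zone really serves no keys
        dig +dnssec DNSKEY stakeholdergame.com @ns1.haveabyte.nl
        # and that the parent still publishes a DS
        dig DS stakeholdergame.com @a.gtld-servers.net

    Either re-enable DNSSEC signing at the DNS host so a matching DNSKEY exists, or have the registrar delete the stale DS record; once parent and child agree, validation succeeds again.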


  • FTP server on CentOS 5.8 - transfers fail randomly

    - by Diego
    I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a failed transfer on a random file. It's never the same one that fails; it just fails. Sometimes it's a .gif, sometimes a .css, sometimes a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

        COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
                  [27/11/2012 11:43:53] 500 Invalid command: try being more creative
        ERROR:>   [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

        ServerName "ProFTPD"
        #ServerType standalone
        ServerType inetd
        DefaultServer on

        <Global>
          DefaultRoot ~ psacln
          AllowOverwrite on
        </Global>

        DefaultTransferMode binary
        UseFtpUsers on
        TimesGMT off
        SetEnv TZ :/etc/localtime
        Port 21
        Umask 022
        MaxInstances 30
        ScoreboardFile /var/run/proftpd/scoreboard
        TransferLog /usr/local/psa/var/log/xferlog

        #Change default group for new files and directories in vhosts dir to psacln
        <Directory /var/www/vhosts>
          GroupOwner psacln
        </Directory>

        # Enable PAM authentication
        AuthPAM on
        AuthPAMConfig proftpd

        IdentLookups off
        UseReverseDNS off
        AuthGroupFile /etc/group
        Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help.

    Update: the issue occurs with both CuteFTP and FileZilla.

    Update: I replaced ProFTPD with Pure-FTPd and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
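
    The fact that the problem survived a switch to Pure-FTPd supports the network theory: garbled control-channel commands and intermittent data-connection failures are classic symptoms of an FTP-aware firewall or NAT helper mangling the stream somewhere on the path. Two hedged things to try:

        # on the server: is a conntrack FTP helper loaded and interfering?
        lsmod | grep -i ftp        # ip_conntrack_ftp on CentOS 5
        # pin the passive data ports (range is only an example) in proftpd.conf
        # and open exactly that range in the firewall, bypassing the helper:
        #   PassivePorts 49152 50192

    Testing a transfer from a different network would also confirm or rule out the path between this PC and the server.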

