Search Results

Search found 3154 results on 127 pages for 'debian etch'.


  • not able to install g++ and gcc on debian

    - by austin powers
    Hi, I want to use DirectAdmin as my web control panel and it needs several packages such as g++ and gcc. As usual I started by typing apt-get install g++, and that is where the problems start: a dependency error. Then I tried apt-get -f install and got this error: (Reading database ... 15140 files and directories currently installed.) Removing libc6-xen ... ldconfig: /etc/ld.so.conf.d/libc6-xen.conf:6: hwcap index 0 already defined as nosegneg dpkg: error processing libc6-xen (--remove): subprocess post-removal script returned error exit status 1 Errors were encountered while processing: libc6-xen E: Sub-process /usr/bin/dpkg returned an error code (1) What should I do? I need to install g++ and all of its dependencies because DirectAdmin requires them. Regards.
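
    A hedged sketch of one common way past this particular libc6-xen failure: the post-removal script chokes on the duplicate hwcap entry in /etc/ld.so.conf.d/libc6-xen.conf named in the error, so neutralising that line first usually lets dpkg finish (check the file before editing; its exact contents may differ).

      # comment out the duplicate hwcap line the error message points at
      sed -i 's/^hwcap 0 nosegneg/# hwcap 0 nosegneg/' /etc/ld.so.conf.d/libc6-xen.conf
      ldconfig                    # should now run cleanly
      apt-get -f install          # lets dpkg finish removing libc6-xen
      apt-get install gcc g++     # then the original install should proceed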

    Read the article

  • How to determine where debian package was sourced

    - by user169309
    How would I trace which archive(s) in sources.list a given installed deb package was (or could have been) sourced from? I understand that the same package may be indexed by multiple archives. Does "aptitude" log any of this type of information when it's installing packages? My aim is to pare down my current sources.list to the minimum set of archives needed to maintain the current set of installed packages.
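
    As a partial sketch (not a full audit tool), apt itself can already report which configured archive provides a given package, which helps map installed packages back to sources.list entries; the package name below is only an example.

      apt-cache policy openssh-server    # installed/candidate version and the archive it comes from
      apt-cache madison openssh-server   # every archive/version combination apt knows about
      less /var/log/apt/history.log      # what past apt/aptitude runs actually installed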

    Read the article

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded PhpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I did: Downloaded the newest phpmyadmin and unzipped all the files to /var/vhosts/phpmyadmin/www/ Created a new php5-fpm pool and a server block on nginx Changed the owner of all the files inside phpmyadmin/ Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup The phpmyadmin is running inside a chroot, and all the files are owned by www-data so it shouldn't be a permission error. I made a new php file in the same directory to produce an error and it logs just fine so it has to be just phpmyadmin. Here's my php5-fpm pool: [phpmyadmin] listen = /var/vhosts/phpmyadmin/tmp/.php.sock; user = www-data group = www-data chroot = /var/vhosts/phpmyadmin/ chdir = / php_admin_value[error_reporting] = E_ALL php_admin_value[error_log] = error.log php_admin_flag[log_errors] = on php_admin_flag[display_errors] = on php_value[session.save_handler] = files php_value[session.save_path] = /tmp And Nginx server block: server { listen 80; root /var/vhosts/phpmyadmin/www; server_name pma.domain; location / { try_files $uri $uri/ /index.html; autoindex on; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock; fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param DOCUMENT_ROOT /www; } index index.html index.htm index.php; try_files $uri $uri/ =404; } Any ideas what could be wrong? Why is it not producing any errors even though I've forced them to be on?
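
    A minimal debugging sketch, assuming the 500 comes from PHP dying inside the chroot before anything reaches the web-visible logs (all paths below are the ones quoted in the question):

      # the pool saves sessions to /tmp *inside* the chroot -- make sure it exists and is writable
      ls -ld /var/vhosts/phpmyadmin/tmp /var/vhosts/phpmyadmin/www
      chown www-data:www-data /var/vhosts/phpmyadmin/tmp
      # error_log = error.log is a relative path, so it may land inside the chroot rather than /var/log
      tail -n 50 /var/vhosts/phpmyadmin/error.log /var/log/php5-fpm.log 2>/dev/null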

    Read the article

  • sudo suddenly stopped working on debian

    - by chovy
    I've been using 'sudo' since I set up my server about a week ago. It suddenly stopped working with no explanation. I am in the 'sudo' group, so there should be no config change required to /etc/sudoers. $ sudo apt-get install tsocks [sudo] password for me: me is not in the sudoers file. root@host:/etc# groups me me : me sudo The only thing it could possibly be related to was that I added the following line to sshd_config: PermitRootLogin without-password But I have since changed that back to PermitRootLogin yes Permissions on the file: ls -l /etc/sudoers -r--r----- 1 root root 491 Sep 28 21:52 /etc/sudoers No idea why it stopped working, or how to fix it.
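
    A sketch of the usual checks from a root shell; the %sudo line shown is the Debian default grant and is an assumption to compare against, not something to paste blindly.

      visudo                  # the file should contain a line like:  %sudo ALL=(ALL:ALL) ALL
      getent group sudo       # confirm the membership is really recorded
      su - me -c 'sudo -l'    # group changes only take effect in a fresh login session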

    Read the article

  • Locale misconfiguration on Debian

    - by JakeTheFish
    perl -e 'print "Hello\n";' perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = "UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). Hello I've tried to do export LC_ALL=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 and it works, until I log out. Is there a permanent solution?
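
    A hedged sketch of making it permanent on the Debian side (the stray LC_CTYPE="UTF-8" is often pushed by an SSH client via SendEnv, so clearing it client-side is an alternative):

      apt-get install locales                              # if not already installed
      echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen
      locale-gen                                           # build the locale
      update-locale LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8  # writes /etc/default/locale
      # or interactively: dpkg-reconfigure locales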

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e9623 Device Boot Start End Blocks Id System /dev/sda1 * 2048 480278527 240138240 83 Linux /dev/sda2 480280574 488396799 4058113 5 Extended /dev/sda5 480280576 488396799 4058112 82 Linux swap / Solaris This seems correct. But df -h /dev/sda (and /dev/sda1 and /dev/sda2 and /dev/sda5) returns: Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of two entries under /dev/disk/by-uuid returns the correct volume size: df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796 Filesystem Size Used Avail Use% Mounted on /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796 229G 22G 196G 11% / Contents of /etc/fstab: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda1 during installation UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none swap sw 0 0 /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto 0 0 It seems all other references than the uuid points to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?
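
    Most likely not an error: a short sketch of what is going on is that df, given a path that is not a mount point, reports the filesystem *containing* that path, and the /dev/sda* nodes live on the udev tmpfs mounted at /dev. Asking about mount points (or the block layer) gives the expected sizes.

      df -h /            # the mounted root filesystem, not the device node in /dev
      lsblk              # partition sizes as the kernel sees them
      findmnt /dev/sda1  # shows where (if anywhere) a given device is mounted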

    Read the article

  • "Network is unreachable" When pinging google, can connect to internal computers on debian VM

    - by musher
    Similar to this SU question: "Network is unreachable" when attempting to ping google, but internal addresses work Actually, it's pretty much the same base issue. I went through that thread trying to find a solution, I changed my resolv.conf: before: domain [my work domain] search [my work domain] nameserver [my gateway] nameserver [my gateway2] I changed it to: after: domain [my work domain] search [my work domain] nameserver 8.8.8.8 nameserver 8.8.4.4 However, any time I reboot the computer the resolv.conf gets overwritten to the previous version (the 'before' above). The issues began after I installed virtualbox additions, X server and (specifically) LXDE: Cat of apt history.log: Start-Date: 2014-08-21 10:03:42 Commandline: apt-get install virtualbox-guest-utils virtualbox-guest-dkms Install: x11-xkb-utils:amd64 (7.7+1, automatic), libxaw7:amd64 (1.0.12-2, automatic), xfonts-utils:$ End-Date: 2014-08-21 10:03:56 Start-Date: 2014-08-21 10:18:39 Commandline: apt-get install lxde Install: desktop-base:amd64 (7.0.3, automatic), libgoa-1.0-0b:amd64 (3.12.4-1, automatic), lxmenu-d$ End-Date: 2014-08-21 10:21:52 Start-Date: 2014-08-21 10:26:40 Commandline: apt-get upgrade Upgrade: libio-socket-ssl-perl:am ifconfig on the guest: root@Peridot:~# ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:89:c9:20 og inet addr:172.31.2.102 Bcast:172.31.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe89:c920/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2281 errors:0 dropped:1 overruns:0 frame:0 TX packets:463 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:266507 (260.2 KiB) TX bytes:120554 (117.7 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:240 (240.0 B) TX bytes:240 (240.0 B) The adapter in VBox is a bridged adapter directly onto my ethernet connection; as are my other 2 VMs (which work) Other SU questions I've tried: "connect: Network is unreachable" in VirtualBox VM
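
    A sketch of one way to stop the overwriting, assuming eth0 is configured via DHCP and it is dhclient that rewrites resolv.conf at boot (which the symptom suggests):

      # force the resolvers regardless of what the DHCP server hands out
      echo 'supersede domain-name-servers 8.8.8.8, 8.8.4.4;' >> /etc/dhcp/dhclient.conf
      ifdown eth0 && ifup eth0    # re-run the DHCP client to pick up the change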

    Read the article

  • Set source address to use tun device does not work (Debian Squeeze)

    - by A. Donda
    there have been similar questions on StackExchange but none of the answers helped me, so I'll try a question of my own. I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this: 198.144.156.141 192.168.2.1 255.255.255.255 UGH 0 0 0 eth0 10.30.92.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun1 10.30.92.1 10.30.92.5 255.255.255.255 UGH 0 0 0 tun1 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 10.30.92.5 0.0.0.0 UG 0 0 0 tun1 0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 eth0 And the interface configuration is like this: # ifconfig eth0 Link encap:Ethernet HWaddr XX-XX- inet addr:192.168.2.100 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:394869 errors:0 dropped:0 overruns:0 frame:0 TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:388519578 (370.5 MiB) TX bytes:148817487 (141.9 MiB) Interrupt:20 Base address:0x6f00 tun1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.30.92.6 P-t-P:10.30.92.5 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:64 errors:0 dropped:0 overruns:0 frame:0 TX packets:67 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:9885 (9.6 KiB) TX bytes:4380 (4.2 KiB) plus the lo device. The routing table has two default routes, one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the route chosen is the direct one (because it has less hops?): # traceroute 8.8.8.8 -n traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 192.168.2.1 0.427 ms 0.491 ms 0.610 ms 2 213.191.89.13 17.981 ms 20.137 ms 22.141 ms 3 62.109.108.48 23.681 ms 25.009 ms 26.401 ms ... This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option: # traceroute 8.8.8.8 -n -s 10.30.92.6 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 * * * 2 * * * 3 * * * ... This I take to mean that connection using the tunnel's local address as source is not possible. What is possible (though only as root) is to specify the source interface: # traceroute 8.8.8.8 -n -i tun1 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 10.30.92.1 129.337 ms 297.758 ms 297.725 ms 2 * * * 3 198.144.152.17 297.653 ms 297.652 ms 297.650 ms ... So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
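
    A sketch of the usual iproute2 answer, using the addresses from the tables above: ordinary routing only looks at the destination, so packets sourced from the tun1 address need a policy rule that points them at a table whose default route is the tunnel.

      ip route add default via 10.30.92.5 dev tun1 table 100
      ip rule add from 10.30.92.6 lookup 100
      sysctl -w net.ipv4.conf.all.rp_filter=2   # loose reverse-path filtering, or replies may be dropped
      traceroute 8.8.8.8 -n -s 10.30.92.6       # should now go via the VPN gateway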

    Read the article

  • How to flash Dell Precision 390 from linux (debian)

    - by malat
    I am trying to update my BIOS: $ sudo dmidecode -s bios-version 2.1.2 With a newer one: 2.6.0. I went to this page Dell Precision System BIOS, 2.6.0 After downloading the file WS390-020600.BIN, here is what it states: $ ./WS390-020600.BIN --help Usage: WS390-020600.BIN [options] Options: --help Print this text. --version Print package versions. If no options, update the BIOS. and $ ./WS390-020600.BIN --version Dell BIOS Update Installer 1.2 Copyright 2006 Dell Inc. All Rights Reserved. ./WS390-020600.BIN: 60: ./WS390-020600.BIN: ./flash: not found Does anyone know where this flash command can be found? Update: it looks like this is a self-extracting archive (it needs bash, as per a comment in the header). $ head -30 WS390-020600.BIN [...] Extract() { tail -n +`awk '/^__ARC__/ { print NR + 1; exit 0; }' $0` $0 | gzip -cd >$_PRG So the flash command should have been auto-generated; however, the above command does not appear to run as the original author intended. I do not see anything wrong with the command itself, though.

    Read the article

  • VPN PPTPD with MPPE Support for Debian or Ubuntu

    - by user78395
    Setting up an unencrypted VPN connection from a Windows client to Linux is pretty easy using pptpd. When I was looking for a solution for an encrypted (MPPE) connection, I found a lot of information about patching the kernel, etc. - so it definitely works after some effort. But all of that information is pretty old (2005-2006). Is it still the same solution nowadays? I am not asking for complete instructions (only if they're short) - I am mostly asking for a link to the right solution.
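
    For what it's worth, a hedged summary rather than a full guide: the kernel-patching instructions date from the 2.4/early-2.6 era; on current Debian and Ubuntu kernels MPPE ships as the stock ppp_mppe module, so it is mostly a matter of turning it on in pptpd's ppp options.

      modprobe ppp_mppe && lsmod | grep mppe   # module is already in the distribution kernel
      # then in /etc/ppp/pptpd-options (file as shipped by the pptpd package):
      #   require-mschap-v2
      #   require-mppe-128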

    Read the article

  • How to securely connect to multiple different LDAPS servers (Debian)

    - by Pickle
    I'm trying to connect to multiple different LDAPS servers. A lot of the documentation I've seen recommends setting TLS_REQCERT never, but that strikes me as horribly insecure, since it doesn't verify the certificate. So I've set it to demand. All the documentation I've seen says I need to update ldap.conf with a TLS_CACERT directive pointing to a .pem file. I've got that .pem file set up with the certificate from LDAP server #1, and ldaps connections are happening fine. I've now got to communicate securely with another LDAP server in another branch of my organization that uses a different certificate. I've seen no documentation on how to do this, except one page that says I can simply put multiple (not chained) certificates in the same .pem file. I've done this and everything is working hunky-dory. However, when I told a colleague what I did, he sounded like the sky was falling - putting two non-chained certificates into one .pem file is apparently the worst thing since ... ever. Is there a more acceptable way to do this? Or is this the only accepted way?
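
    A sketch for reassurance rather than a ruling: a TLS_CACERT file is simply a bundle of trusted CA certificates (the system-wide /etc/ssl/certs/ca-certificates.crt is built exactly the same way), so concatenating the two CAs is normal practice; a hashed directory is the other common layout, though TLS_CACERTDIR support depends on which TLS library libldap was built against. The file names below are placeholders.

      # option 1: one bundle file, referenced by TLS_CACERT
      cat branch1-ca.pem branch2-ca.pem > /etc/ldap/cacerts.pem
      # option 2: a hashed directory, referenced by TLS_CACERTDIR
      mkdir -p /etc/ldap/cacerts && cp branch1-ca.pem branch2-ca.pem /etc/ldap/cacerts/
      c_rehash /etc/ldap/cacerts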

    Read the article

  • Debian Squeeze - Monitor outgoing traffic

    - by Sam W.
    I have a small webserver running Lighttpd 1.4 that has steadily used 250GB or less of bandwidth for the past couple of months. But since May the traffic has spiked to more than triple what it was, and nothing special on my site would explain a spike like that. When I checked with vnstat I found that 70% of the bandwidth is tx. I suspect I've been hacked and my webserver is becoming some sort of bot. ClamAV comes up with nothing and I already replaced the Joomla installation with a fresh one early in June, but the traffic has stayed the same. My question: how can I monitor my server and see what is transmitting all that data out? I need to pinpoint the culprit. Can someone please point me to the right way to solve this? Thank you.
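
    A sketch of the usual per-connection and per-process tools (package names are Debian's; replace the placeholder IP):

      apt-get install iftop nethogs
      iftop -i eth0 -P     # live per-connection rates, with port numbers
      nethogs eth0         # per-process bandwidth, good for spotting a rogue process
      # keep a sample of outbound traffic for offline inspection
      tcpdump -i eth0 -n -c 500 'src host YOUR.SERVER.IP and not port 22' -w sample.pcap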

    Read the article

  • Debian: Give users permission

    - by 50ndr33
    I have a www-data user that was automatically set up when I installed Apache, and an ftpuser that I configured myself to use with ProFTPd. I use a MySQL database of FTP accounts that log on as this user. The problem is that Apache with PHP is working as it should, but I cannot add files over FTP. I tried chmod 777 mysite.com, and it worked, but then Apache gave me a 500 internal error. I suppose chmod isn't the correct way to go. I deleted my folder and made a new one. How can I give ftpuser permission to read and write, while www-data keeps its permissions? I don't have much experience with the Linux command line. Thanks!
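
    A sketch of the group-based approach normally used instead of chmod 777 (the site path is a placeholder; substitute the real document root):

      adduser ftpuser www-data                      # let ftpuser write files www-data can read
      chown -R www-data:www-data /var/www/mysite.com
      chmod -R u=rwX,g=rwX,o=rX /var/www/mysite.com
      find /var/www/mysite.com -type d -exec chmod g+s {} +   # new files inherit the www-data group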

    Read the article

  • Debian/Linux backup files changed by user

    - by verhogen
    I would like to back up my server, which is hosting a few websites, in such a way that I can restore everything to the way it was from a fresh format. I know that I should back up all the home folders and probably my /etc/ folder. Is there a way to figure out all the folders that are relevant for backup, in the sense that they were not automatically generated or installed from apt-get? Ideally it would also restore all the users with their current passwords. Basically, enough to clone the system while only copying configuration files.
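
    A minimal sketch of capturing "what was changed by hand" rather than imaging the disk: record the package selections, then archive configuration and user data (the directory list is an assumption -- extend it for databases, mail spools, etc.).

      dpkg --get-selections > /root/package-selections.txt   # replay later with dpkg --set-selections
      tar czpf /root/config-backup.tgz /etc /home /root /var/www 2>/dev/null
      # /etc/passwd, /etc/shadow and /etc/group inside the archive preserve users and their passwords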

    Read the article

  • How to install Firefox and its dependencies on Debian without root privileges

    - by Ivar Lugtenburg
    I have a problem installing Firefox without root privileges. Mozilla says the following: Firefox will not run at all without the following libraries or packages: * GTK+ 2.10 or higher * GLib 2.12 or higher * Pango 1.14 or higher * X.Org 1.0 or higher Of course I need to install all these dependencies without root privileges as well, but the thing is I don't know exactly how to do this. I've tried a few things I found on the internet, but to no avail.
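
    A hedged sketch of the path of least resistance: Mozilla's official tarball is self-contained apart from the system GTK/GLib/Pango/X libraries, so if a graphical desktop is already installed system-wide, no root is needed -- just unpack into $HOME (the URL is Mozilla's generic "latest" redirect; adjust os/lang as needed).

      mkdir -p ~/apps && cd ~/apps
      wget -O firefox.tar.bz2 'https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US'
      tar xf firefox.tar.bz2
      ~/apps/firefox/firefox &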

    Read the article

  • Debian out-of-memory errors causing server crashes

    - by user42700
    Hi, the server keeps crashing because of Apache. Is there any way I can stop this? The server has 2GB of swap space and 3GB of RAM. May 25 03:33:41 server kernel: [ 3513.200719] [<c015959c>] out_of_memory+0x14e/0x17f May 25 03:33:41 server kernel: [ 3513.211491] Out of memory: kill process 2936 (apache2) score 87364 or a child May 25 04:35:30 server kernel: [ 7239.936995] [<c015959c>] out_of_memory+0x14e/0x17f May 25 04:35:30 server kernel: [ 7239.948878] Out of memory: kill process 2936 (apache2) score 88236 or a child May 25 05:42:57 server kernel: [11210.572510] [<c015959c>] out_of_memory+0x14e/0x17f May 25 08:13:23 server kernel: [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 0000000000100000
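
    A sketch of the usual triage, assuming the stock prefork MPM: measure how much each Apache child really uses, then cap the number of children so the total fits in RAM (the figures in the comments are placeholders, not recommendations).

      # average resident size per apache2 child, in MB
      ps -C apache2 -o rss= | awk '{s+=$1; n++} END {if (n) printf "%.1f MB avg over %d processes\n", s/n/1024, n}'
      # then in the prefork section of /etc/apache2/apache2.conf, something like:
      #   MaxClients          60     # roughly (RAM available to Apache) / (per-child size)
      #   MaxRequestsPerChild 2000   # recycle children that leak memory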

    Read the article

  • Is there a debian/ubuntu policy on softlinking things to another location in opt once they're installed?

    - by AbrahamVanHelpsing
    Is there a debian/ubuntu policy on softlinking things to another location in opt once they're installed properly in usr/share or usr/lib? Here's a simple example: Packaging up dnsenum. It's a REALLY simple package (4 files). A perl script, two wordlists, and a readme. So from what I gather: The wordlists should go in usr/share/dnsenum/* The perl script itself would go in usr/lib/dnsenum/ The readme would go in usr/share/doc/dnsenum/ Add a wrapper bash script that goes in bin and just passes arguments to dnsenum.pl. The question is this: If there are various tools that provide wordlists or some other shared resource, is there a policy on linking all the wordlists from different packages in to /opt/wordlists/ ? It seems like the "right" thing to do respecting the directory structure while still making things convenient.
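
    On the wrapper part of that layout, a minimal sketch of what the /usr/bin shim usually looks like (paths follow the split proposed above):

      #!/bin/sh
      # /usr/bin/dnsenum -- thin launcher so the real script can live under /usr/lib
      exec perl /usr/lib/dnsenum/dnsenum.pl "$@"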

    Read the article

  • Getting rails application from github to debian server

    - by Micke
    Hello. I've been developing my first Rails application on my Windows computer, and now I have set up a Debian server with nginx and Passenger. I've been using GitHub to keep track of my application, and I am wondering how I can get the GitHub version of my application onto the Debian server and put it in production mode. Does anybody have a good guide about this, or something similar?
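
    A bare-bones sketch of the usual git + Bundler + Passenger flow (repository URL, deployment path and database steps are placeholders; a deploy tool such as Capistrano is the longer-term answer):

      cd /var/www && git clone git@github.com:you/yourapp.git yourapp
      cd yourapp
      bundle install --deployment --without development test
      RAILS_ENV=production bundle exec rake db:migrate
      touch tmp/restart.txt    # tells Passenger to (re)load the application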

    Read the article

  • Merging MP3 files in Linux Debian using PHP

    - by pako
    What's the easiest way to merge the contents of several MP3 files into one using PHP 5.2 on a Debian Linux system? I found some scripts that are supposed to do it in PHP only, but they seem to be buggy. Perhaps there is a way to accomplish this task using command-line programs that I could install on my Debian machine?
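
    A sketch of the command-line route, which PHP can drive via shell_exec(); it assumes ffmpeg or the mp3wrap package can be installed, and that the input files share the same sample rate.

      # lossless concatenation of the MPEG audio streams
      ffmpeg -i "concat:part1.mp3|part2.mp3|part3.mp3" -acodec copy merged.mp3
      # alternative, purpose-built tool packaged by Debian (output name gains an _MP3WRAP suffix)
      mp3wrap merged.mp3 part1.mp3 part2.mp3 part3.mp3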

    Read the article

  • How to write a package (developed on Ubuntu 9.10) for installation on a Debian server

    - by Stick it to THE MAN
    I have written a number of applications and libraries (some of which depend on third-party libraries) on my home workstation (Ubuntu 9.10). I now want to create packages (one package per application/library) so that I can install them on my server, which will be running Debian. Any guidelines or gotchas on how to go about creating installation packages for Debian on Ubuntu?
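
    A hedged sketch of the chroot-build approach, which avoids the Ubuntu-versus-Debian library skew by building inside a clean Debian environment (the release name and mirror are assumptions -- use whatever the target server runs):

      apt-get install pbuilder debootstrap devscripts
      sudo pbuilder create --distribution squeeze \
          --mirror http://ftp.debian.org/debian --basetgz /var/cache/pbuilder/squeeze.tgz
      # from inside each unpacked source tree (debian/ directory already in place):
      pdebuild -- --basetgz /var/cache/pbuilder/squeeze.tgz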

    Read the article

  • Backup broken PostgreSQL 8.4 without pg_dump

    - by Daniil
    So, I have a problem. PostgreSQL 8.4 won't start or restart, and gives no output. It worked for three months, until the hosting provider rebooted the server. Now it is completely broken: it won't start and doesn't write any output or log. pg_dump: [archiver (db)] connection to database "postgres" failed: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? Now I want to back up my database (or just get the PostgreSQL socket running) so I can reinstall PostgreSQL. How?
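
    Since the server is down anyway, a file-level copy of the cluster directory is the usual fallback when pg_dump cannot connect; a minimal sketch using Debian's default 8.4 paths (verify them on the machine first):

      /etc/init.d/postgresql stop 2>/dev/null    # make sure nothing is half-running
      tar czpf /root/pg84-cluster-backup.tgz \
          /var/lib/postgresql/8.4/main /etc/postgresql/8.4/main
      # restore the tarball only onto a server running the same 8.4 binaries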

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a big amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes, which indicated a huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20ms instead of 300ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually office => server is faster than server <=> server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to add bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say it is related to a cheap network the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm intermediate at networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or they're just trying to abdicate their responsibility. The thing is that the problem exists in both directions (albeit over different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
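
    A sketch of the two tools usually reached for here: mtr to watch per-hop loss and latency over time, and iperf to measure raw throughput with no disk, rsync or ssh in the path, run from each data center towards the other so both directions (and both carriers) can be compared. The hostname is a placeholder.

      mtr --report --report-cycles 100 other-dc-host     # per-hop loss/latency summary
      iperf -s                                           # on one end
      iperf -c other-dc-host -t 60 -i 10                 # on the other end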

    Read the article

  • supervisord failed to start nagiosapi after reboot, need to run reload manually

    - by Bajingan Keparat
    I have supervisord start nagios-api every time the server starts. The API relies on a status dump file called status.dat, which gets updated periodically. The following is the conf file that starts the API. [program:nagapi] directory = /home/nagapi user = api command = /bin/bash -c "source /home/nagapi/.virtualenvs/nagapi/bin/activate; /home/nagapi/nagios-api/nagios-api" stdout_logfile = /home/nagapi/supervisor_nagios-api_stdout.log stderr_logfile = /home/nagapi/supervisor_nagios-api_stderr.log Every time I restart the server, supervisord cannot start the API. stderr.log claims that it cannot find the status.dat file located in /var/cache/nagios3. It seems like the file was not yet created when supervisord tried to run the API the first time. I'm saying this because if I do a supervisorctl reload, everything reloads just fine, and the API runs OK about 50 seconds after the reload command completes. Should I change the command option of the conf file to check for
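
    On that last question: yes, wrapping the command so it waits for Nagios to publish status.dat is a common workaround; a sketch is below (the script name and the 5-minute timeout are arbitrary choices), with supervisord's own startretries/startsecs options as an alternative.

      #!/bin/bash
      # hypothetical /home/nagapi/start-nagios-api.sh, used as the [program:nagapi] "command"
      for i in $(seq 1 60); do                      # wait up to ~5 minutes
          [ -s /var/cache/nagios3/status.dat ] && break
          sleep 5
      done
      source /home/nagapi/.virtualenvs/nagapi/bin/activate
      exec /home/nagapi/nagios-api/nagios-api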

    Read the article

  • Compiz: Switching focus by application instead of by window

    - by Ivan Vucica
    I got used to the OS X way of doing things (separate shortcuts for switching between applications and for switching between the current application's windows). Is there a way to get Compiz to have a shortcut (such as Super+Tab) to switch between applications ("window groups") instead of between windows? I already got the "Scale" plugin (an Exposé clone) to display only windows from the current window group, proving there is a way to group by application, but I cannot find a way to get the "Application Switcher" to switch between these groups instead of between the windows themselves.

    Read the article
