Search Results

Search found 17260 results on 691 pages for 'folder tree'.


  • package issue with ubuntu 10.10 and passenger requirements

    - by user368937
    I'm trying to get Passenger working with Ubuntu 10.10 and I'm running into a problem: the Passenger installer does not seem to recognize the virtual package. I'm getting this error:

        passenger-install-apache2-module
        ...
        * OpenSSL support for Ruby... not found
        ...

    It then says to run this:

        * To install OpenSSL support for Ruby:
          Please run apt-get install libopenssl-ruby as root.

    When I run that command, apt refers me to the libruby package instead:

        sudo apt-get install libopenssl-ruby
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libruby' instead of 'libopenssl-ruby'
        libruby is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 43 not upgraded.

    When I look at the details for libruby, it says it provides libopenssl-ruby:

        Provides: libbigdecimal-ruby, libcurses-ruby, libdbm-ruby, libdl-ruby, libdrb-ruby, liberb-ruby, libgdbm-ruby, libiconv-ruby, libopenssl-ruby, libpty-ruby, libracc-runtime-ruby, libreadline-ruby, librexml-ruby, libsdbm-ruby, libstrscan-ruby, libsyslog-ruby, libtest-unit-ruby, libwebrick-ruby, libxmlrpc-ruby, libyaml-ruby, libzlib-ruby

    And when I rerun the Passenger installer, it gives the same "OpenSSL support for Ruby... not found" error. Let me know if you need more info. How do I fix this?
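
    A quick way to tell whether the Ruby that Passenger probes really lacks OpenSSL support is to load the extension directly; a minimal check, assuming the ruby on the PATH is the same interpreter Passenger tests:

        # prints a version string if Ruby's OpenSSL bindings are present;
        # a LoadError means the installer's complaint is accurate
        ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'
        which ruby   # confirm which interpreter was just tested

    If the check fails, the bindings may live in a different interpreter than the one Passenger finds first on the PATH.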


  • One email user keeps disconnecting from our exchange server

    - by Funky Si
    I have one user who keeps reporting that Outlook is disconnecting from our email server. All other users are fine. Our email server runs Exchange 2010 and the client runs Outlook 2003. The disconnection only lasts a moment. I have checked the logs on the client and on the Exchange server and cannot see any reason for the disconnect; on the client I get Event ID 26 telling me Outlook has disconnected and reconnected, but no reason why. Can anyone give me some suggestions of things to try to track down where the problem could be?

    --Update--

    I have found the log file at C:\Program Files\Microsoft\Exchange Server\V14\Logging\RPC Client Access, which suggests it is a problem with RPC sessions. An excerpt is below:

        2013-01-31T15:21:24.015Z,6413,15,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;",
        2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,PublicLogoff,0,00:00:00,LogonId: 1,
        2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;",
        2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0x6BA (rpc::Exception),00:02:54.7668000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0; SessionDropped,"RpcEndPoint: [ServerUnavailableException] Connection must be re-established - [SessionDeadException] Connection doesn't have any open logons, but has client activity. This may be masking synchronization stalls. Dropping a connection."
        2013-01-31T15:21:24.015Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00.0156000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,
        2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.2364000,,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.4392000,,
        2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,

    Can anyone help point me in the right direction for a solution?


  • I get a Segmentation fault when doing apt-get util-linux

    - by Adam
    I've found that a lot of upgrade commands, and Apache, on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
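
    Widespread segfaults across unrelated binaries often point to corrupted files or failing hardware rather than one bad package; a few non-destructive first checks, assuming the package tools still run well enough to use them:

        # re-fetch the archive in case the cached .deb itself is corrupt
        apt-get clean
        apt-get install --reinstall util-linux

        # verify installed files against their package checksums
        apt-get install debsums && debsums -s

        # the kernel log records each segfault; a scattershot pattern across
        # many different binaries is a classic bad-RAM signature
        dmesg | grep -i segfault

    If the pattern does look random, a memtest pass would be the next step before trusting any further package operations.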


  • Cannot Install Bootcamp on Mac Fusion Drive

    - by user3377019
    I have two drives installed in my mid-2013 MacBook Pro, running OS X Mavericks. I set the two drives (an SSD and a regular HD) up as a Fusion Drive, following the DIY Fusion Drive guides out there. I went ahead and created a 40 GB partition to host my Windows install. I removed the SSD, installed Windows, then reinstalled the SSD. As soon as the SSD was placed back in, the Boot Camp partition stopped booting: I get a blinking cursor on a black screen. I checked the partition info in Disk Utility and it appears that the Windows partition is not marked bootable. Below is some info I managed to gather. I am wondering if there is a way to fix the partition table so my Boot Camp install will boot.

        /dev/disk0
        #:  TYPE                       NAME            SIZE        IDENTIFIER
        0:  GUID_partition_scheme                      *120.0 GB   disk0
        1:  EFI                        EFI             209.7 MB    disk0s1
        2:  Apple_CoreStorage                          119.7 GB    disk0s2
        3:  Apple_Boot                 Boot OS X       134.2 MB    disk0s3

        /dev/disk1
        #:  TYPE                       NAME            SIZE        IDENTIFIER
        0:  GUID_partition_scheme                      *500.1 GB   disk1
        1:  EFI                        EFI             209.7 MB    disk1s1
        2:  Microsoft Basic Data       BOOTCAMP        40.0 GB     disk1s2
        3:  Apple_CoreStorage                          459.2 GB    disk1s3
        4:  Apple_Boot                 Boot OS X       650.0 MB    disk1s4

        /dev/disk2
        #:  TYPE                       NAME            SIZE        IDENTIFIER
        0:  Apple_HFS                  Macintosh HD    *573.4 GB   disk2

        Name: BOOTCAMP
        Type: Partition
        Disk Identifier: disk1s2
        Mount Point: /Volumes/BOOTCAMP
        File System: Windows NT File System (NTFS)
        Connection Bus: SATA
        Device Tree: IODeviceTree:/PCI0@0/SATA@1F,2/PRT1@1/PMP@0
        Writable: No
        Universal Unique Identifier: 584BAED6-4C46-4F18-93B3-957F6E27003C
        Capacity: 40 GB (39,998,980,096 Bytes)
        Free Space: 16.34 GB (16,339,972,096 Bytes)
        Used: 23.66 GB (23,659,003,904 Bytes)
        Number of Files: 86,424
        Number of Folders: 0
        Owners Enabled: No
        Can Turn Owners Off: No
        Can Be Formatted: No
        Bootable: No
        Supports Journaling: No
        Journaled: No
        Disk Number: 1
        Partition Number: 2
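
    Windows of that era BIOS-boots from an MBR on these Macs, so a pure-GPT disk1 with no hybrid MBR entry for the NTFS partition would produce exactly this blinking cursor. Rebuilding the hybrid MBR with gdisk (gptfdisk) is the usual repair. The walkthrough below is a sketch from memory, not a tested recipe, and rewriting partition tables can destroy data, so back everything up first:

        # DANGER: partition-table surgery; have a verified backup before trying this
        sudo gdisk /dev/disk1
        # at the gdisk prompt:
        #   r   -> recovery/transformation menu
        #   h   -> build a hybrid MBR
        #   2   -> include partition 2 (the BOOTCAMP NTFS partition)
        #   answer Y to placing the EFI GPT (0xEE) partition first,
        #   accept the suggested MBR type code (07 = NTFS),
        #   and answer Y when asked to set the bootable flag on it
        #   w   -> write the new tables and exit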


  • rsync not copying hard links

    - by A.Ellett
    I have two computers (both MacBook Airs) on which I sync one directory tree, but not the entire hard drive or any other directories. Let's say on computer A the directory is /Users/aellett/projects and on computer B the directory is /Users/bellett/projects. Generally, I'll log into computer B and then remotely connect to computer A as user 'aellett'. As superuser I sync the two project directories as follows:

        rsync -av /Volumes/aellett/projects/ /Users/bellett/projects/

    and this works as expected.

    On both computers I have another file, letter.txt, in a different directory which is not getting synced. On computer A the file is found in /Users/aellett/letters, and on computer B in /Users/bellett/correspondence. Generally, I don't want to share what's not included in /Users/<username>/projects, but I do want to share this particular file. So on both computers I made a correspondence directory inside projects, and then made hard links as follows.

    On computer A:

        ln /Users/aellett/letters/letter.txt /Users/aellett/projects/correspondence/letter.txt

    On computer B:

        ln /Users/bellett/correspondence/letter.txt /Users/bellett/projects/correspondence/letter.txt

    The next time I synced the two computers, I did the following:

        rsync -av -H /Volumes/aellett/projects/ /Users/bellett/projects/

    When I checked on computer B, /Users/bellett/projects/correspondence/letter.txt was correctly synced. But the hard link to /Users/bellett/correspondence/letter.txt was no longer there. In other words, /Users/bellett/projects/correspondence/letter.txt was identical to /Users/aellett/projects/correspondence/letter.txt, but it differed from /Users/bellett/correspondence/letter.txt. Since these two files were hard-linked on both computers, I expected them to still have the hard link. Why are my hard links not being preserved?
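
    For context on what -H can and cannot do: rsync only preserves hard links between files that are both inside the same transfer. Here the other name for the inode (letters/letter.txt on A, correspondence/letter.txt on B) lies outside the synced tree, so rsync on B has no way to know about it; it recreates projects/correspondence/letter.txt as a fresh file, which severs the local link. A sketch of the single-transfer workaround, under the hypothetical assumption that the outside directory had the same name and place on both machines:

        # --relative keeps each source's position under the destination root,
        # and with both trees in one run, -H sees both names for the inode
        rsync -avH --relative /Volumes/aellett/./projects /Volumes/aellett/./letters /Users/bellett/

    Since the outside directories are actually named differently on the two machines, a more robust arrangement is to let the copy inside projects be the only real file and make the outside path a symlink to it on each machine.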


  • ssh authentication nfs

    - by user40135
    Hi all. I would like to ssh from machine "ub0" to another machine "ub1" without using passwords. I set this up using NFS on "ub0", but I am still asked for a password. Here is my scenario:

    * Machines ub0 and ub1 have the same user "mpiu", with the same password, the same user ID, and the same group ID.
    * The two servers share a folder that is the HOME directory for "mpiu".
    * I did a chmod 700 on the .ssh directory.
    * I created a key using ssh-keygen -t dsa.
    * I did cat id_dsa.pub >> authorized_keys. On this last file I also tried chmod 600 and chmod 640.
    * Of course I can guarantee that on machine ub1 the user "shared_user" can see the same folder, which was mounted with no problem.

    Below is the content of my .ssh folder:

        authorized_keys  id_dsa  id_dsa.pub  known_hosts

    After all of this, whatever I call ("ssh ub1 hostname") I am asked for my password. Do you know what I can try? I also uncommented this line in the ssh_config file on both machines:

        IdentityFile ~/.ssh/id_dsa

    I also tried:

        ssh -i $HOME/.ssh/id_dsa mpiu@ub1

    Below is the ssh -vv output:

        OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ub1 [192.168.2.9] port 22.
        debug1: Connection established.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh
        debug1: no match: lshd-2.0.4 lsh - a GNU ssh
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1
        debug2: fd 3 setting O_NONBLOCK
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa
        debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
        debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
        debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
        debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
        debug2: kex_parse_kexinit: none,zlib
        debug2: kex_parse_kexinit: none,zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client 3des-cbc hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server 3des-cbc hmac-md5 none
        debug2: dh_gen_key: priv key bits set: 183/384
        debug2: bits set: 1028/2048
        debug1: sending SSH2_MSG_KEXDH_INIT
        debug1: expecting SSH2_MSG_KEXDH_REPLY
        debug1: Host 'ub1' is known and matches the RSA host key.
        debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1
        debug2: bits set: 1039/2048
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098)
        debug1: Authentications that can continue: password,publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: password,publickey
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: password
        mpiu@ub1's password:

    It hangs here!
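
    Two things stand out in that transcript. First, the offered key is rejected ("we did not send a packet, disable method" right after the publickey attempt); on an NFS-shared home that is classically a permissions problem, since key auth silently falls back to password when the home directory or key files are writable by group or other. Second, and probably decisive: the remote banner is "lshd-2.0.4 lsh - a GNU ssh", so ub1 is not running OpenSSH at all, and lshd does not read ~/.ssh/authorized_keys. A hedged sketch of both fixes (the lsh tool names are from memory, so check the lsh-utils man pages):

        # permissions that both sshd and lshd will accept on a shared home
        chmod 755 ~                        # must not be group/world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # register the key with lshd on ub1: convert OpenSSH -> SPKI, then install
        ssh-conv < ~/.ssh/id_dsa.pub > /tmp/id_dsa.spki
        lsh-authorize /tmp/id_dsa.spki     # installs under ~/.lsh, which lshd reads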


  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good number of duplicates, as researchers copy each other's data for different reasons and then just hang on to the copies.

    I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data; I'll probably throttle it anyway to go easy on the filesystems. But I need a tool that's either somehow super-efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900GB tree; it's 25% of the way through, and RAM usage has been slowly creeping up the whole time, now at 700MB.

    Or, is there a way to direct a process to use disk-mapped RAM, so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
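
    At this scale a two-pass pipeline of standard GNU tools keeps the working set in files and only ever holds per-size counters in memory; a rough sketch (it assumes GNU find/awk/xargs and newline-clean paths, so adjust before trusting it on real data):

        # pass 1: record sizes; only same-size files can possibly be duplicates
        find /data -type f -printf '%s\t%p\n' > /tmp/sizes.tsv

        # pass 2: checksum only files whose size occurs more than once
        awk -F'\t' 'NR==FNR {n[$1]++; next} n[$1] > 1 {print $2}' \
            /tmp/sizes.tsv /tmp/sizes.tsv \
          | xargs -d '\n' md5sum | sort > /tmp/hashes.txt

        # adjacent equal hashes are duplicate sets
        uniq -w32 --all-repeated=separate /tmp/hashes.txt > /tmp/dupes.txt

    The first awk read only counts distinct sizes, so memory stays proportional to the number of unique file sizes rather than the number of files, and everything bulky lives in /tmp.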


  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft SharePoint 2010 server for task management, and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing the end users really, really, really want is the ability to link to network shares (readable by anyone who will be using SharePoint) from a SharePoint web page. To do this, I have manually edited the HTML with several lines that look like the following:

        <a href="file://server/share">Server Share</a>

    This works (sometimes), but the link reported by SharePoint is often wrong, and editing pages that contain these links mangles the code, so that when I reopen it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "it worked that way at the last place I worked," but I haven't found out how yet. Any ideas on how this would work, or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell

    P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer, since we mostly use Firefox).
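
    One thing worth trying, offered as an assumption rather than anything SharePoint documents: IE has historically been more reliable with the five-slash UNC form of a file URL, and the IE-only behaviour noted in the P.S. is expected anyway, since Firefox and Chrome refuse to follow file:// links from http pages by default for security reasons.

        <!-- hypothetical share name; five slashes = file: + empty host + UNC path -->
        <a href="file://///server/share/folder">Server Share</a>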


  • How to get robocopy running in powershell?

    - by Moo MinTroll
    I'm trying to use robocopy inside PowerShell to mirror some directories on my home machines. Here's my script:

        param ($configFile)

        $config = Import-Csv $configFile
        $what = "/COPYALL /B /SEC/ /MIR"
        $options = "/R:0 /W:0 /NFL /NDL"
        $logDir = "C:\Backup\"

        foreach ($line in $config) {
            $source = $($line.SourceFolder)
            $dest = $($line.DestFolder)
            $logfile = $logDir
            $logfile += Split-Path $dest -Leaf
            $logfile += ".log"
            robocopy "$source $dest $what $options /LOG:MyLogfile.txt"
        }

    The script takes in a CSV file with a list of source and destination directories. When I run the script, I get these errors:

        -------------------------------------------------------------------------------
           ROBOCOPY     ::     Robust File Copy for Windows
        -------------------------------------------------------------------------------

          Started : Sat Apr 03 21:26:57 2010

           Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\
             Dest -

            Files : *.*

          Options : *.* /COPY:DAT /R:1000000 /W:30

        ------------------------------------------------------------------------------

        ERROR : No Destination Directory Specified.

          Simple Usage :: ROBOCOPY source destination /MIR

                   source :: Source Directory (drive:\path or \\server\share\path).
              destination :: Destination Dir  (drive:\path or \\server\share\path).
                     /MIR :: Mirror a complete directory tree.

          For more usage information run ROBOCOPY /?

        ****  /MIR can DELETE files as well as copy them !

    Any idea what I need to do to fix this? Thanks, Mark.
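
    The banner's Source line gives the diagnosis away: everything is arriving as a single quoted string, so robocopy treats the whole command line as one source path and never sees a destination. A sketch of the loop body with the arguments split out (it also drops the stray trailing slash on /SEC/ and actually uses the per-destination $logfile that the script builds):

        # hedged rewrite: pass each switch as its own argument, not one string
        robocopy $source $dest /COPYALL /B /SEC /MIR /R:0 /W:0 /NFL /NDL "/LOG:$logfile"

    If the switch lists need to stay in variables, they would have to be arrays (e.g. $what = @("/COPYALL","/B","/SEC","/MIR")) splatted into the call, since a single string is never re-tokenized by PowerShell.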


  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND offering a fast list/access table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single-file archives are a hard requirement, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    * Listing the archive content takes time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast).
    * Extraction of a target file/directory takes (filesystem permitting) time linear in the target size (e.g. if I'm extracting a 2MB PDF file from a 40GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and the index were well organized (e.g. a tree structure).
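
    Plain tar can at least be given an external index cheaply, which addresses the listing half of the requirement; a sketch (extraction is still a linear scan through the archive, so this only speeds up browsing):

        # write the member list to a sidecar file while creating the archive
        # (with -f pointing at a real file, -v's listing goes to stdout)
        tar -cvf /backup/data.tar /data > /backup/data.tar.idx

        # later: search the sidecar instead of scanning 100 GB
        grep 'report.pdf' /backup/data.tar.idx

        # extraction still reads the archive sequentially up to the member
        tar -xf /backup/data.tar data/docs/report.pdf

    Getting seek-to-offset extraction as well would need a format that stores a central directory, which is exactly the index-at-known-offset design described above.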


  • Can't connect to MS SQL Server database using SSMS

    - by Charles
    I have a database online with GoDaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and I can apparently log in successfully from SSMS (after giving my hostname and access data). However, now from in SSMS, when attempting to expand the "Databases" folder tree, I get the following error:

        Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        The server principal "cmitchell" is not able to access the database "3pointdb" under the current security context. (Microsoft SQL Server, Error: 916)

    The irony is that 3pointdb isn't my database. It is just another in a long list of databases that show up when I access my GoDaddy backend. From SSMS, I set the default database to the name of my database, which it did locate when I browsed. Still the same error message; it is trying to connect to a database that isn't mine. :( GoDaddy support, after a bit of testing, said the problem isn't on their end, it's on mine. - Charles


  • update from debian lenny to squeeze

    - by Daniel
    I'm trying to upgrade from Debian lenny to squeeze on my 64-bit root server. I modified sources.list, then did the following so far:

        apt-get update
        apt-get upgrade
        apt-get install linux-image-2.6-amd64

    The last step leads to the following error output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          linux-image-2.6-amd64: Depends: linux-image-2.6.32-5-amd64 but it is not going to be installed
        E: Broken packages

    UPDATE: here's my sources.list:

        deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
        deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
        deb http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb-src http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb http://security.debian.org/ squeeze/updates main contrib non-free
        deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    How can I fix that safely? Thanks.
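
    One relevant detail of apt's behaviour: apt-get upgrade never installs new packages, and a targeted install of the metapackage can refuse when the new kernel drags in packages that conflict with what is still at lenny versions. The usual squeeze upgrade path finishes with dist-upgrade, which is allowed to add and remove packages; a hedged sequence to try, with a dry run first:

        apt-get update
        apt-get dist-upgrade -s    # simulate: review what would be added/removed
        apt-get dist-upgrade
        # or probe the concrete blocker directly; apt will name the real conflict:
        apt-get install linux-image-2.6.32-5-amd64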


  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third-party applications hard-coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0.

    I've tried rebuilding /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear whether i86pc platforms use this file at all. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to make devfsadm start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try creating the symlinks manually in /dev/dsk and /dev/rdsk to point to the correct /devices entries. I feel like I am going way off the beaten path here. Any suggestions? Thanks

    Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already-linked controllers at their current IDs.
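
    Since the /dev/dsk names are themselves just symlinks into /devices, the manual-symlink idea mentioned above may be the least invasive fix: give the application the c0t0d0 names it wants without renumbering anything. A hypothetical sketch (bash syntax for the substitution; note a later devfsadm rebuild may remove the extra links again):

        # create c0 aliases that point at the existing c1 links;
        # a symlink chaining to another symlink resolves fine here
        cd /dev/dsk  && for l in c1t0d0*; do ln -s "$l" "${l/c1/c0}"; done
        cd /dev/rdsk && for l in c1t0d0*; do ln -s "$l" "${l/c1/c0}"; done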


  • Unable to install mysql-server in Ubuntu

    - by Arihant
    I am unable to install mysql-server on my Ubuntu 9.10 server machine. When using apt-get install mysql-server, the output is:

        # apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 120 not upgraded.
        2 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up mysql-server-5.1 (5.1.37-1ubuntu5.4) ...
         * Stopping MySQL database server mysqld            [ OK ]
         * Starting MySQL database server mysqld            [fail]
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I can't find a satisfactory solution to this problem anywhere. Many sites say to reinstall it, but that's not working. Any help will be appreciated. Thank you.
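
    The dpkg errors here are secondary: the packages are left unconfigured only because mysqld itself failed to start. The MySQL error log usually states the real reason (a bad my.cnf directive, an InnoDB log-size mismatch, a stale socket, a full disk); a first pass, assuming the default Ubuntu log locations:

        tail -n 50 /var/log/mysql/error.log   # or check /var/log/syslog on some setups
        # after fixing whatever mysqld complains about, finish the configure step:
        dpkg --configure -a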


  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.)

    Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ............................................................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        sudo apt-get install php5-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that the version isn't entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
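
    Worth noting when reading that message: the line is a Conflicts:, not a Depends:. php5-dev refuses to install *because* libtool is at or above 2.2, so the version comparison is working exactly as designed; this usually means the php5-dev candidate comes from a different release than the installed libtool. Checking where each one originates should expose the mismatch; a quick look (hedged, as the exact output varies by release):

        # do the candidate versions come from the same distribution release?
        apt-cache policy libtool php5-dev
        apt-cache show php5-dev | grep -i '^Conflicts'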


  • Unsigned lenny packages with aptitude safe-upgrade

    - by Liam
    I have several Debian lenny computers. Two have nearly identical sources.list files. On both, I do regular update/safe-upgrades. On one it always goes smoothly. On the other, much of the time I get the following:

        $ sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages will be upgraded:
          krb5-clients krb5-ftpd krb5-rsh-server krb5-telnetd krb5-user libimlib2
          libkadm55 libkrb53 libpng12-0 libpulse0 xpdf xpdf-common xpdf-reader
        13 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 2906kB of archives. After unpacking 36.9kB will be used.
        Do you want to continue? [Y/n/?]
        WARNING: untrusted versions of the following packages will be installed!

        Untrusted packages could compromise your system's security.
        You should only proceed with the installation if you are certain that
        this is what you want to do.

          krb5-rsh-server krb5-user krb5-ftpd krb5-clients libkrb53 xpdf-reader
          libpng12-0 libkadm55 xpdf libpulse0 libimlib2 krb5-telnetd xpdf-common

        Do you want to ignore this warning and proceed anyway?
        To continue, enter "Yes"; to abort, enter "No": no
        Abort.

    Needless to say, I don't proceed. What is going on? How do I fix it? These are the non-comment lines in the sources.list for this computer:

        deb ftp://ftp.debian.org/debian/ lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian/ lenny main contrib
        deb http://security.debian.org/ lenny/updates main contrib non-free

    Thank you.
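
    When two machines share a sources.list but only one complains about untrusted packages, the usual difference is the archive signing keys: the complaining machine is missing them (or holds expired ones), so apt cannot verify the Release file its package lists came from. A hedged first attempt:

        # refresh the official archive keys, then re-fetch and re-verify the lists
        apt-get install debian-archive-keyring
        apt-get update
        # compare what both machines trust; any key present only on the good box
        # is a likely culprit
        apt-key list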


  • What is the difference between Startup programs in windows and the same programs being started manually

    - by sup
    I am no Windows guy, but I am trying to get seamless integration of Windows programs through a VirtualBox Windows guest onto my Ubuntu machine. I more or less followed this tutorial: https://nowhere.dk/articles/running-windows-applications-natively-with-seamlessrdp

    Basically I start up Windows in VirtualBox and then try to launch an application (on the Ubuntu host) like this:

        rdesktop -A -s "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe notepad.exe" 192.168.123.103:3389 -u user -p password

    That just gives me the full Windows desktop, which I do not want. However, when I run this on the Windows guest, it works and I get just the window I want:

        "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe" "notepad"

    So I thought I would put this command into the Startup folder of the Windows machine and everything would be fine. But it says "Unable to set up the virtual channel". (By googling, I traced it to this file: https://sourceforge.net/p/rdesktop/code/1686/tree/seamlessrdp/trunk/ServerExe/vchannel.c -- the warning is triggered by main.c in the same directory when vchannel_open() returns a value the if condition treats as true.) I have no idea why it works when I launch the command manually via a .bat file but not when I put it in Startup programs. Any ideas?


  • php5-mysqlnd on debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem:

        root@debian:/var/www/lottery1# apt-get install php5-mysqlnd
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-mysqlnd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up php5-mysqlnd (5.4.0-3) ...
        ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql
        ucfr: Aborting.
        dpkg: error processing php5-mysqlnd (--configure):
         subprocess installed post-installation script returned error exit status 4
        Processing triggers for libapache2-mod-php5 ...
        configured to not write apport reports
        Reloading web server config: apache2.
        Errors were encountered while processing:
         php5-mysqlnd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!
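
    The blocker is ucf's registry rather than dpkg's: /etc/php5/mods-available/mysql.ini is still recorded as belonging to php5-mysql even after the package is purged, so php5-mysqlnd's postinst aborts when it tries to claim the file. Clearing that stale association before reconfiguring is a plausible workaround; a hedged sketch (the ucf/ucfr flags are from memory, so check the man pages first):

        ucf --purge /etc/php5/mods-available/mysql.ini
        ucfr --purge php5-mysql /etc/php5/mods-available/mysql.ini
        apt-get install php5-mysqlnd    # retry the failed configure step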


  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of file access times to analyse a build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated

        # waiting a few minutes...
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(

    I suppose the file is buffered by Linux in free memory, and only this cached copy is accessed on subsequent reads, for speed. A solution would be to discard the read buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems to only sync up the write buffers, not the read ones. Maybe it is due to some custom kernel configuration on my distro (Fedora 9)? Or am I missing something here? Is there a way to achieve this access-time refresh? Note also that I do not want to simulate writes on my entire file tree: because I am using a makefile-based build system, that would cause the entire project to be rebuilt.
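
    The usual culprit for this exact pattern is not caching at all but the relatime mount option, the default on most distros of that era: atime is only updated when it is older than mtime/ctime or more than a day old, which matches "updated once, then frozen" precisely, and explains why dropping caches changes nothing. A hedged check and workaround (option names vary by kernel version; the explicit strictatime form only exists on 2.6.30 and later):

        grep ' / ' /proc/mounts           # look for "relatime" among the mount flags
        mount -o remount,norelatime /     # ask for classic per-read atime updates
        # on newer kernels the explicit form is:
        # mount -o remount,strictatime /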


  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

        # MANAGED BY PUPPET
        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        # setup server with or without https depending on gitlab::gitlab_ssl variable
        server {
          listen *:80;
          server_name gitlab.localdomain;
          server_tokens off;
          root /nowhere;
          rewrite ^ https://$server_name$request_uri permanent;
        }

        server {
          listen *:443 ssl default_server;
          server_name gitlab.localdomain;
          server_tokens off;
          root /home/git/gitlab/public;
          ssl on;
          ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
          ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then proxy-pass the request to the upstream (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl on;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitlab;
          }
        }

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? I've also tried the following configurations:

        listen *:80;
        server_name <%= @fqdn %>;
        #root /nowhere;
        #rewrite ^ https://$server_name$request_uri? permanent;
        #rewrite ^ https://$server_name$request_uri permanent;
        #return 301 https://$server_name$request_uri;
        #return 301 http://$server_name$request_uri;
        #return 301 http://192.168.33.10$request_uri;
        return 301 http://$host$request_uri;

    The logs:

        $ tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        $ tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:
    * http://wiki.nginx.org/Pitfalls
    * How to make nginx redirect
    * How to force or redirect to SSL in nginx?
    * nginx ssl redirect
    * Nginx & Https Redirection
    * https://www.tinywp.in/301-redirect-wordpress/
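
    Going by the symptom (the stock welcome page on port 80 while 443 works), the redirect block is probably never consulted: the site is browsed by bare IP, which matches no server_name, and since the listen *:80 block is not marked default_server, nginx hands the request to whichever port-80 server is the default, typically the distro's stock site. A sketch of a catch-all redirect, assuming no other default_server is declared on port 80 (remove or edit /etc/nginx/sites-enabled/default if the stock site declares one):

        server {
            listen 80 default_server;
            server_name _;                         # catch-all name
            return 301 https://$host$request_uri;  # preserves whatever name or IP was used
        }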


  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?

    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)

        Description Language:
                         File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                          MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    Edit:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
                500 http://security.debian.org/ stable/updates/main amd64 Packages
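
    One likely explanation for the real-vs-virtual distinction asked about above: in squeeze-era dpkg/apt, a virtual package can never satisfy a *versioned* dependency; Provides: carries no version, so any "origpackage (>= x)" relationship ignores it entirely. If xen-utils-4.0 pins its xen-utils-common dependency to a version, the Provides: from tao-xen-utils-common is invisible to the resolver. Worth confirming before restructuring anything:

        # if this shows "xen-utils-common (>= ...)", the virtual package cannot
        # satisfy it; the replacement would need to be a real (e.g. equivs-built)
        # package carrying an actual version number
        apt-cache show xen-utils-4.0 | grep -i '^Depends'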


  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.)

    I realise Outlook is not the best place to store all my information, and I'd prefer not to. In an ideal world I'd like to organise all of the information currently stored in Outlook in a mind map (my software of choice being Freemind) or a wiki. To maintain an email audit trail, I've considered saving individual emails as files and using a mind map or wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!)

    Whatever I do is going to involve some pain, whether that's setting up a wiki/mind map or sticking with what Outlook currently provides. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?


  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly applied in the cloud.

    For deployment, I use the Elastic Beanstalk CLI utility. I ran the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments, the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are considered when I start or update the environment. When I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values of wsgi.conf; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents of the .ebextensions/python.config file, none of the options I specify in it affect the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.


  • Identify differences between MP3 files

    - by Thingomy
    I have two old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical; I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many).

    I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10kB of each file, but I don't know how to do this. I have on the order of a hundred files that differ across the directory tree.

    I found How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.? but I can't run AllDup since I'm on Linux, and from the sounds of it, it would only partially solve my issue anyway.
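
    Hashing just the decoded audio sidesteps the tag problem entirely: if two files contain the same MPEG frames, their decoded streams hash identically no matter what a player scribbled into the ID3 blocks, while any bit flip inside the audio shows up immediately. A sketch, assuming ffmpeg is installed (ffprobe/avconv-era variants differ slightly in flags):

        # identical output for both files => the audio payloads match
        ffmpeg -loglevel error -i old/song.mp3 -map 0:a -f md5 -
        ffmpeg -loglevel error -i new/song.mp3 -map 0:a -f md5 -

        # for the byte-location idea: list differing offsets; differences
        # confined to the first few kB usually mean tag-only changes
        cmp -l old/song.mp3 new/song.mp3 | head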

