Search Results

Search found 20140 results on 806 pages for 'remote management'.


  • How to reset password for Dell PowerConnect 2708?

    - by oherrala
    I do as the user manual says and press the "Managed Mode" button to get into unmanaged mode, then press the "Managed Mode" button again for managed mode. This should reset the device to factory defaults, with username "Admin" and no password. However, the device resets (I think) and I can access the web console at IP 192.168.2.1, but the username and password don't work. Maybe the device doesn't reset after all, or maybe the username/password was changed in some firmware upgrade? What should I do to get into the management interface of this switch?

    Read the article

  • Internet cafe software for linux

    - by pehrs
    I have gotten a request to roll out a total of 8 internet cafés in a large network. The budget is non-existent, as it will all be done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet cafe system that is Ubuntu based. The requirements are pretty basic: It needs to keep track of logged-in time and log out users when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?

    Read the article

  • Can't connect to VPN in Windows XP mode

    - by darkstar13
    I have Windows 7 x32 installed on my laptop, and I also have Windows XP Mode installed. My setup is that my work-remote programs are in Windows XP Mode, because my VPN installer is for Windows XP only. Lately, I have been having trouble getting on / logging in to the VPN. I can access the internet in WinXP Mode, but when I ping the IP address of the target IP of my VPN network (or even just Google.com), I always get a 'Request timed out'. However, when I ping the same IP address at a command prompt in Windows 7, I get 100% of data sent. Is there anything I need to adjust? Before, I was able to connect instantly; now it's like trial and error, or I have to wait for hours just to be able to enter logon credentials in the Cisco VPN dialer. NAT is my network adapter in XP Mode.

    Read the article

  • GlusterFS with CIFS, quotas and LDAP

    - by lpfavreau
    Has anyone had experience plugging GlusterFS and Openfiler together, or something similar? Here is the motivation: disk space on multiple servers aggregated using GlusterFS; centralized access using LDAP/AD and quota management using Openfiler as the GlusterFS client; an SMB/CIFS server for easy sharing to multiple users on Mac and Windows. I know I can install Gluster on Openfiler (rPath Linux) successfully, but Openfiler seems to be very picky about what it can use as a shared drive. Mounting the Gluster volume inside an existing share does not seem to allow quotas based on the mounted folder's free space. If this is not possible, is there any alternative that gives the same capabilities?
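
    For reference, a hedged sketch of the client-side piece being described - mounting a Gluster volume on the box that will re-share it over CIFS. The server name "gluster1" and volume name "datavol" are placeholders, and this assumes the GlusterFS 3.x-style native client tools are installed; older releases used volume spec files instead.

        # Mount the distributed Gluster volume on the CIFS/Openfiler host
        # ("gluster1" and "datavol" are placeholder names, not from the question).
        mkdir -p /mnt/gluster/datavol
        mount -t glusterfs gluster1:/datavol /mnt/gluster/datavol

        # Equivalent /etc/fstab line so the mount survives reboots:
        #   gluster1:/datavol  /mnt/gluster/datavol  glusterfs  defaults,_netdev  0 0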

    Read the article

  • Exposing a WebServer behind a firewall without Port Forwarding

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that people connected to the central server can access the web applications that are behind a firewall? The central server has a static IP address and we have full control over it. Right now it is a Windows box, but it could be changed to a Linux box if necessary. Our clients are running Windows XP and up. We don't need to access the filesystem; we only want to access the web application through a browser. We have looked at reverse SSH tunneling, but it has scaling problems since every packet would have to pass through the central server.
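
    Since reverse SSH tunneling is already on the table, here is a minimal sketch of what that looks like for a single client; the hostname, account and ports are placeholders, not part of the original question.

        # Run on the client machine behind the firewall: keep an outbound SSH connection
        # to the central server and expose the local Tomcat port there.
        ssh -N -R 8080:localhost:8080 tunneluser@central.example.com

        # To let hosts other than the central server itself reach the forwarded port,
        # the server's sshd_config needs:   GatewayPorts yes
        # and the -R option becomes:        -R 0.0.0.0:8080:localhost:8080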

    Read the article

  • Replies to requests coming over a relay go to the relay's internal IP, not to the original request's source IP

    - by seaquest
    dhcpd running on Linux gets a DHCP request over dhcrelay, which is running on another remote machine:

        Oct 6 10:09:46 2012 dhcpd: DHCPDISCOVER from 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81

        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
        10:35:01.112500 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.81.67 > 192.168.0.1.67: BOOTP/DHCP, Request from 00:1e:68:06:eb:37, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
          Gateway IP: 172.16.17.81
          Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    It matches a subnet and sends a reply. However, the reply does not go to the requesting dhcrelay's external IP (192.168.0.81). Instead, it goes to the internal interface IP of the machine running dhcrelay, and I think because of this the remote machine running dhcrelay, or dhcrelay itself, is discarding the packet:

        Oct 6 10:09:46 2012 dhcpd: DHCPOFFER on 172.16.17.11 to 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81
        10:35:02.050108 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.1.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
          Your IP: 172.16.17.11
          Gateway IP: 172.16.17.81
          Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    Is this normal behaviour?

    Machine running dhcrelay:

        eth1(ext) Link encap:Ethernet HWaddr 00:90:0B:21:43:F4 inet addr:192.168.0.81 Bcast:192.168.0.255 Mask:255.255.255.0
        eth2(int) Link encap:Ethernet HWaddr 00:90:0B:21:43:F5 inet addr:172.16.17.81 Bcast:172.16.17.255 Mask:255.255.255.0
        3582 ? Ss 0:00 /usr/sbin/dhcrelay -i eth2 192.168.0.1

    Machine running dhcpd:

        eth1 Link encap:Ethernet HWaddr 00:90:0B:23:97:D1 inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0

        option domain-name "test.com";
        option subnet-mask 255.255.255.0;
        authoritative;
        ignore client-updates;
        ddns-update-style ad-hoc;
        default-lease-time 86400;
        max-lease-time 86400;
        subnet 192.168.0.0 netmask 255.255.255.0 {
            range 192.168.0.135 192.168.0.169;
            option broadcast-address 192.168.0.255;
            option domain-name-servers 192.168.0.1;
            option domain-name "test.com";
            option routers 192.168.0.1;
        }
        subnet 172.16.17.0 netmask 255.255.255.0 {
            local-address 192.168.0.1;
            server-identifier 192.168.0.1;
            range 172.16.17.10 172.16.17.11;
            option broadcast-address 172.16.17.255;
            option routers 172.16.17.81;
        }

    (I put local-address and server-identifier in, but this does not help.) Regards, -- Oguz YILMAZ

    UPDATE: The first problem is found. I had configured dhcrelay to listen only on the internal interface. It seems (of course) it should also listen on the external interface for replies; it appears it is not important where the packet is destined to, dhcrelay will forward it to the internal net. HOWEVER, I have deleted the route on the dhcpd server for reaching the 172.16.17.x subnet, and it again tries to send the reply to 172.16.17.81. Because it does not know the route, it sends it out of the default gateway to the internet:

        eth0: IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.1.2.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0x32830125, secs:3, flags: [none] (0x0000)
        eth0: Your IP: 172.16.17.11
        eth0: Gateway IP: 172.16.17.81
        eth0: Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    How can I force dhcpd to send replies to the requesting IP? It is not very meaningful to add routes to the subnet we distribute IPs for. The topology is:

        Internet - dhcpd - 192.168.0.1 - SOMENET - 192.168.0.81 - dhcrelay - 172.16.17.0/24

    192.168.0.1 has no route for 172.16.17.0 and has no interface directly attached to that net.
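
    A minimal sketch of the usual workaround (the routing approach the asker is trying to avoid): give the dhcpd host an explicit route to the relayed subnet via the relay's external address, so offers addressed into 172.16.17.x go back through 192.168.0.81 rather than out the default gateway. This assumes iproute2 is available and eth1 is the interface facing SOMENET.

        # On the dhcpd server (192.168.0.1): send traffic for the relayed subnet
        # back through the relay's external address instead of the default gateway.
        ip route add 172.16.17.0/24 via 192.168.0.81 dev eth1

        # To make it persistent on a Debian-style box, something like this in
        # /etc/network/interfaces (illustrative, adjust to the actual distro):
        #   up ip route add 172.16.17.0/24 via 192.168.0.81 dev eth1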

    Read the article

  • Unable to install SQL Server 2008 Express SP1

    - by dahacker89
    I am facing difficulties installing MS SQL Server Express 2008 Service Pack 1. I already have MS SQL Server Express 2008 installed, and all I want to do is install SP1; however, I get the following error message, and even though all features are selected it still tells me to select one or more features: Also, just for information, when I open the SQL Server Configuration Manager to manage my SQL Server services, the following error message is displayed: If anyone has faced this and has a solution, please let me know. My aim is to install Management Studio, but for that I must have at least SP1 installed, and I'm stuck at that point. Thanks.

    Read the article

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disc speeds bottlenecking the process, or am I doing something wrong?
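
    One way to narrow this down, offered as a sketch rather than anything from the original post: time the same zip run against a local target and against the mounted backup volume. If the local run is much faster, the network mount is the bottleneck rather than zip itself. The sample project path is a placeholder.

        # Compare compression speed to local disk vs. the network-mounted volume.
        # "projects/some-project" is a placeholder; substitute a representative directory.
        cd /Volumes/Non-RAID_Storage/
        time zip -ru9 /tmp/local-test.zip "projects/some-project" > /dev/null
        time zip -ru9 /Volumes/backup/net-test.zip "projects/some-project" > /dev/null
        rm -f /tmp/local-test.zip /Volumes/backup/net-test.zip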

    Read the article

  • Move drive from iomega home media nas to win 7 pc

    - by user41993
    My Iomega Home Media NAS would not boot, so I unscrewed the enclosure and removed the drive. It's a 500 GB SATA drive that I plugged into my Win 7 PC so that I could back up the data. Windows does recognize it (it's there in Disk Management), but I can't assign a letter to it in order to access it. The only option available is Delete Volume... which I obviously don't want to do :) How can I get the data off that drive? Thanks.
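
    One likely explanation, stated here as an assumption rather than anything from the question: the NAS formatted the disk with a Linux filesystem (such as ext3 or XFS), which Windows 7 cannot mount, so Disk Management only offers to delete the volume. A minimal sketch of reading the data from a Linux live CD/USB instead; the device name and partition number below are guesses to be confirmed against the actual disk layout.

        # From a Linux live environment:
        sudo fdisk -l                               # find the 500 GB drive, e.g. /dev/sdb
        sudo mkdir -p /mnt/nasdata
        sudo mount -o ro /dev/sdb2 /mnt/nasdata     # partition number is a guess; mount read-only
        ls /mnt/nasdata                             # then copy the data off with cp or rsync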

    Read the article

  • Database hidden in SQL Server

    - by Colin Desmond
    During an aborted TFS import (2008 into 2010), I have managed to "lose" a database in 2008. The database is not visible in Management Studio, but the SQL Server exe has a handle on the .mdf file (according to Unlocker). It says it cannot attach the file because it is in use, and it cannot attach a copy of the file (created when SQL Server was stopped) because it says a DB of the same name is already attached. Given that I am using the same TFS admin account I have always used, and have always been able to see the database with, why is this database missing and, more importantly, how do I get it back again?

    Read the article

  • ASP.NET Website not responding

    - by brinthhillerup
    Hello everybody, I am sitting with my server tech, trying to figure out what is going on here. When I try to access my site through a browser, the site hangs indefinitely. It does not respond with anything, not even after 15 minutes! The application pool has been restarted, and the website has been restarted; it doesn't help at all. It happened during an upload. A backup has been loaded and the problem is still the same. It is a very urgent matter, and I hope someone can help! I think we are on IIS 6 on a Windows 2003 Server (remote hosted, with a tech who has no idea what the issue is either, so I am trying to find the solution along with him). Any suggestions are VERY appreciated.

    Read the article

  • Stopping local drive mappings from transferring to an RDP session

    - by Chad
    We have a SQL server that has about 6 physical drives mapped locally. For example, G: is a mount point to the SAN; if I connect from my local machine and have a personal folder mapped locally as G:\userdata, that mapping transfers to the remote desktop session on the server, overwriting the 'NAME' (label) of the share. Here is the kicker: the G: on the server still has the right information, but it shows the wrong label, coming from the share on my PC. Does anyone know how to prevent this from happening? The tick box for local resources is unchecked in my Microsoft RDP client.

    Read the article

  • Better Method of Opening TTY Permissions

    - by VxJasonxV
    At work, I have a few legacy servers that I log into as root and then su down to a user. I keep running into an issue where, after doing so, I am unable to run screen as that user. I don't want to open screen as root, because then I have to consciously su down to the user in every new shell, and I often forget. The question is: is there an easier resolution to this than I'm currently aware of? My current solution is to find my terminal's pts number and then chmod it to 666. I'm looking for something akin to X11's xhost ACL management, if such a thing exists for this situation.
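
    One widely used workaround, offered as a sketch rather than the only answer: after su'ing down, run script with a throwaway output file. script allocates a fresh pseudo-terminal owned by the target user, so screen no longer needs access to root's pts and the chmod 666 step goes away. The account name below is a placeholder.

        # Typical interactive sequence ("webuser" is a placeholder account name):
        su - webuser
        script /dev/null      # spawns a shell on a new pty owned by webuser; output discarded
        screen                # screen can now open its controlling terminal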

    Read the article

  • RAID-0 problem with a Sony sporting a new HDD

    - by redrock
    Sony Windows 7 PC. It originally had 2 x 300 GB HDDs. One HDD completely died, so I have replaced it with a new 500 GB HDD. When both drives are connected, the 300 GB drive doesn't appear to be recognized as a separate entity. The BIOS sees it, but the operating system only sees a total of 465 GB of HD space. When both disks are attached, Disk Management shows one 465 GB volume as RAID 0 and the new drive as STxxxxxx 465 GB. My question, I guess, is what total HDD space should I see, and is this configured correctly? I thought I would see two separate drives, 1 x 500 GB and 1 x 300 GB. My customer insisted that prior to the HDD crash he saw two drives, both registering as 300 GB (a C: and a D: drive).

    Read the article

  • How to set Brocade 200E SAN Fabric Switch Port Health Monitoring to "monitored"

    - by Kenny
    Hi, I have a Brocade SAN fabric switch, a 200E. When using the web-based management interface "SwitchExplorer", I can click a port and I see "Port Administration Services". In the first screen of data that appears, there's a row called "Health" which has the value "Unmonitored". Do you know how to set this port to be monitored? And also, what does "Health" monitoring do? I'm hoping it'll email or log if there's a connection or disconnection. Many thanks for looking... Kenny

    Read the article

  • Connecting a LAN to an OpenVPN server via a windows 7 client gateway

    - by user705142
    I've got OpenVPN set up between my Windows 7 client and a Linux server. The goal is that I'll get secure access to a webapp running on the server from any computer on the client LAN. I'm using ccd to assign static IP addresses to each client connection, with key authentication. It's working on my client machine (10.83.41.9), and when you go to the gateway IP address (10.83.41.1), it loads up the webapp. Now I really need the other computers on the client LAN to be able to connect to the webapp as well, via the Windows machine. The client has a static IP address of 192.168.2.100 on the LAN, and I've enabled IP forwarding in Windows (confirmed by ipconfig /all). In my router I've forwarded 10.83.41.1 / 255.255.255.255 to 192.168.2.100. In server.conf I have:

        route 192.168.2.0 255.255.255.0

    And in the office ccd:

        ifconfig-push 10.83.41.9 10.83.41.10
        iroute 192.168.2.0 255.255.255.0

    The client log is as follows:

        Thu Mar 15 20:19:56 2012 OpenVPN 2.2.2 Win32-MSVC++ [SSL] [LZO2] [PKCS11] built on Dec 15 2011
        Thu Mar 15 20:19:56 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Thu Mar 15 20:19:56 2012 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
        Thu Mar 15 20:19:56 2012 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Thu Mar 15 20:19:56 2012 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Thu Mar 15 20:19:56 2012 LZO compression initialized
        Thu Mar 15 20:19:56 2012 Control Channel MTU parms [ L:1558 D:166 EF:66 EB:0 ET:0 EL:0 ]
        Thu Mar 15 20:19:56 2012 Socket Buffers: R=[8192->8192] S=[64512->64512]
        Thu Mar 15 20:19:56 2012 Data Channel MTU parms [ L:1558 D:1450 EF:58 EB:135 ET:0 EL:0 AF:3/1 ]
        Thu Mar 15 20:19:56 2012 Local Options hash (VER=V4): '9e7066d2'
        Thu Mar 15 20:19:56 2012 Expected Remote Options hash (VER=V4): '162b04de'
        Thu Mar 15 20:19:56 2012 UDPv4 link local: [undef]
        Thu Mar 15 20:19:56 2012 UDPv4 link remote: 111.65.224.202:1194
        Thu Mar 15 20:19:56 2012 TLS: Initial packet from 111.65.224.202:1194, sid=ceb04c22 8cc6d151
        Thu Mar 15 20:19:56 2012 VERIFY OK: depth=1, /C=NZ/O=XXX./CN=XXX
        Thu Mar 15 20:19:56 2012 VERIFY OK: nsCertType=SERVER
        Thu Mar 15 20:19:56 2012 VERIFY OK: depth=0, /C=NZ/O=XXX./CN=XXX
        Thu Mar 15 20:19:56 2012 Replay-window backtrack occurred [1]
        Thu Mar 15 20:19:56 2012 Data Channel Encrypt: Cipher 'AES-256-CBC' initialized with 256 bit key
        Thu Mar 15 20:19:56 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Thu Mar 15 20:19:56 2012 Data Channel Decrypt: Cipher 'AES-256-CBC' initialized with 256 bit key
        Thu Mar 15 20:19:56 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Thu Mar 15 20:19:56 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Thu Mar 15 20:19:56 2012 [server] Peer Connection Initiated with 111.65.224.202:1194
        Thu Mar 15 20:19:58 2012 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
        Thu Mar 15 20:19:59 2012 PUSH: Received control message: 'PUSH_REPLY,route 10.83.41.1,topology net30,ping 10,ping-restart 120,ifconfig 10.83.41.9 10.83.41.10'
        Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: timers and/or timeouts modified
        Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: --ifconfig/up options modified
        Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: route options modified
        Thu Mar 15 20:19:59 2012 ROUTE default_gateway=192.168.2.1
        Thu Mar 15 20:19:59 2012 TAP-WIN32 device [OpenVPN] opened: \\.\Global\{B32D85C9-1942-42E2-80BA-7E0B5BB5185F}.tap
        Thu Mar 15 20:19:59 2012 TAP-Win32 Driver Version 9.9
        Thu Mar 15 20:19:59 2012 TAP-Win32 MTU=1500
        Thu Mar 15 20:19:59 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 10.83.41.9/255.255.255.252 on interface {B32D85C9-1942-42E2-80BA-7E0B5BB5185F} [DHCP-serv: 10.83.41.10, lease-time: 31536000]
        Thu Mar 15 20:19:59 2012 Successful ARP Flush on interface [45] {B32D85C9-1942-42E2-80BA-7E0B5BB5185F}
        Thu Mar 15 20:20:04 2012 TEST ROUTES: 1/1 succeeded len=1 ret=1 a=0 u/d=up
        Thu Mar 15 20:20:04 2012 C:\WINDOWS\system32\route.exe ADD 10.83.41.1 MASK 255.255.255.255 10.83.41.10
        Thu Mar 15 20:20:04 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4
        Thu Mar 15 20:20:04 2012 Route addition via IPAPI succeeded [adaptive]
        Thu Mar 15 20:20:04 2012 Initialization Sequence Completed

    From the other machines I can ping 192.168.2.100, but not 10.83.41.1. The how-to also mentions "Make sure your network interface is in promiscuous mode." I can't find that in the Windows network config, so this may or may not be part of it. Ideally this would be achieved without any special configuration on the other LAN computers. Not sure how far I'm going to get on my own at this point, any ideas? Is there something I'm missing, or anything I should know?

    Read the article

  • Are Plesk server backups useful?

    - by Michael T. Smith
    I'm working for a startup now, and I'm the programmer. Because of our small team size, I'm also handling the server management for now (until we get a dedicated server administrator). I've never used Plesk before, and the server we're using (a Media Temple Dedicated Virtual server) had it installed when I got here. One of my first jobs was to set up backups: Plesk was already running its nightly server-wide backups. I created a small script to dump the web app, its DBs and any assets, tar them, store them, and then copy them to another small server we have (to back up the backups). But we're constantly running into hard drive space issues because of the Plesk backups, and I'm wondering: are they useful? If I have the web app and all of its assets, I could easily enough get another server up and running. Do we need to keep running Plesk's backups? Thoughts?
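
    For context, a minimal sketch of the kind of app-level backup the poster describes (dump the database, tar the app and assets, copy off-box). All paths, database names and hostnames below are placeholders, not the poster's actual setup.

        #!/bin/bash
        # Illustrative names only: "appdb", the vhost path and "backuphost.example.com"
        # are hypothetical.
        STAMP=$(date +%F)
        DEST=/var/backups/app
        mkdir -p "$DEST"
        mysqldump --single-transaction appdb | gzip > "$DEST/appdb-$STAMP.sql.gz"
        tar czf "$DEST/webapp-$STAMP.tar.gz" /var/www/vhosts/example.com/httpdocs
        scp "$DEST"/*-"$STAMP".* backup@backuphost.example.com:/backups/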

    Read the article

  • Internal+external interfaces with multiple default gateways on win2003

    - by fileitup
    I'm trying to set up several web servers for a load-balanced cluster and need to have each server connected to the internal network (for load balancing) as well as to an external network (the internet, for administration). I have two NICs, but since I can't set two default gateways, I have the external gateway as the default and the internal one as a route rule. This setup only works half way: the internal network is fine, but I can't log in from outside or see the web from the box. If I switch the gateways, remote login/web will work, but the internal network won't. I'm sure someone has encountered this before, but I wasn't able to find anything online. Any help will be appreciated.

    Read the article

  • Bash script for mysql backup - error handling

    - by Jure1873
    I'm trying to back up a bunch of MyISAM tables in a way that would allow me to rsync/rdiff the backup directory to a remote location. I've come up with a script that dumps only the recently changed tables and sets the date on each file so that rsync can pick up only the changed ones, but now I don't know how to do the error handling - I would like the script to exit with a non-zero value if there are errors. How could I do that?

        #!/bin/bash
        BKPDIR="/var/backups/db-mysql"
        mkdir -p $BKPDIR
        ERRORS=0
        FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME"
        W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'"
        mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;" | while read db table tstamp; do
            echo "DB: $db: TABLE: $table: ($tstamp)"
            mysqldump $db $table | gzip > $BKPDIR/$db-$table.sql.gz
            touch -d "$tstamp" $BKPDIR/$db-$table.sql.gz
        done
        exit $ERRORS
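
    A sketch of one way to get that non-zero exit status, offered as an assumption-laden example rather than the definitive fix: feed the while loop via process substitution so it runs in the current shell (a plain pipe puts it in a subshell, where changes to ERRORS are lost), and check each pipeline stage through PIPESTATUS. Paths and variable names follow the script above.

        #!/bin/bash
        BKPDIR="/var/backups/db-mysql"
        mkdir -p "$BKPDIR" || exit 1
        ERRORS=0
        FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME"
        W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'"

        # Process substitution keeps the while loop in the current shell, so ERRORS
        # incremented inside it is still visible after the loop finishes.
        while read -r db table tstamp; do
            echo "DB: $db TABLE: $table ($tstamp)"
            mysqldump "$db" "$table" | gzip > "$BKPDIR/$db-$table.sql.gz"
            # PIPESTATUS holds the exit codes of both pipeline stages (mysqldump, gzip).
            if [ "${PIPESTATUS[0]}" -ne 0 ] || [ "${PIPESTATUS[1]}" -ne 0 ]; then
                ERRORS=$((ERRORS + 1))
                continue
            fi
            touch -d "$tstamp" "$BKPDIR/$db-$table.sql.gz" || ERRORS=$((ERRORS + 1))
        done < <(mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;")

        exit $ERRORS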

    Read the article

  • Can you set CIFS permissions from EMC Command Line?

    - by TJ.
    I am in the process of migrating file shares from my EMC NS-20 to my new VNXe 3100. I am using a RoboCopy script to move the files but am getting errors on some files and folders. I have Domain Admin privileges but when I go to view the security permissions on the folders it says I don't have permissions. I have tried taking ownership to get around the permissions issue but that fails too. So as a last resort can I set permissions on this folder from the EMC console or Web management console?

    Read the article

  • SQL Merge Replication - Filter Sets

    - by Refracted Paladin
    I have a "working" replication set in SQL 2005 that we use in-house for our users at remote branches on SQL Express 2005. I want to apply a filter to our biggest set to help minimize the bandwidth impact. What considerations do I need to take into account before throwing a filter on there? Will it cause any issues I should be aware of? Does it affect compression adversely? Will everyone need to reinitialize after applying it? Any heads-up or insight would be appreciated. Thanks,

    Read the article

  • How to automatically restart RRAS service after OpenVPN

    - by JT
    I have OpenVPN set up on a Windows Server 2003 box using a routed configuration. This allows users to connect and access the work LAN subnet. There are remote hosts/services, however, that are only accessible when reached via the work network. To enable access to these, I push routes out to the clients to make sure traffic to those destinations goes across the VPN, and NAT the traffic using RRAS. This all works, except: if I restart the OpenVPN service, network traffic stops working until I restart the RRAS service as well. Is there a good way for me to make the RRAS service start/restart after OpenVPN? Are service dependencies the way to go? Obviously I could write a batch file to do this, but I'd like to make the process as bullet-proof and obvious as I can so it doesn't cause problems for other admins.

    Read the article

  • Problem upgrading kernel on debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm, so I have no direct access - only remote SSH (and, via SSH, a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel, and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect (via locales) dependency on glibc. So, to install glibc I need a new kernel, and for a new kernel I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process:

        [green:~]% sudo aptitude install linux-image-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages are unused and will be REMOVED: gcc-4.3-base
        The following NEW packages will be automatically installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 module-init-tools yaird
        The following packages have been kept back: adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd [...snip...] util-linux vacation vim vim-common wamerican wbritish wget whiptail whois wwwconfig-common zlib1g
        The following NEW packages will be installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird
        The following packages will be upgraded: hotplug libc6
        2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded.
        Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used.
        Do you want to continue? [Y/n/?]
        Writing extended state information... Done
        Preconfiguring packages ...
        (Reading database ... 34065 files and directories currently installed.)
        Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ...
        Checking for services that may need to be restarted...
        Checking init scripts...
        WARNING: init script for postgresql not found.
        [ --- libc6 config screen appears here --- ]
        WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later.
        If you use a kernel 2.4, please upgrade it before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc first,
        this is NOT a bug, and should *NOT* be reported. In that case, please add etch
        sources to your /etc/apt/sources.list and run:
          apt-get install -t etch linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack):
         subprocess pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        Ack!  Something bad happened while installing packages.  Trying to recover:
        dpkg: dependency problems prevent configuration of locales:
         locales depends on glibc-2.7-1; however:
          Package glibc-2.7-1 is not installed.
        dpkg: error processing locales (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         locales
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done

    Now, if I follow the instructions as prompted I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking. I did try with apt-get first, but that led me to the same problem.

        [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        E: Unable to correct problems, you have held broken packages.
        E: Unable to correct dependencies, some packages cannot be installed
        E: Unable to resolve some dependencies!
        Some packages had unmet dependencies.  This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following packages have unmet dependencies:
          linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or
                                             yaird (>= 0.0.13) but it is not installable or
                                             linux-initramfs-tool which is a virtual package.

    Any ideas?
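
    One possible way forward, sketched here as an assumption rather than a verified recipe for this box: fetch the etch kernel package and its initramfs dependency as .deb files by hand on another machine, copy them over SSH, and install them with dpkg so aptitude's circular glibc/kernel resolution is bypassed. The filenames below are illustrative and would need to match the real etch versions, and dpkg may still report further dependencies to pull in the same way.

        # On a machine with working web access, download the etch .debs, then copy
        # them to the stuck box (filenames and hostname are illustrative):
        scp initramfs-tools_*.deb linux-image-2.6.18-6-686_*.deb root@oldbox:/root/

        # On the old box, install the kernel and its helper directly with dpkg,
        # reboot into the 2.6 kernel, and only then continue the libc6 upgrade:
        dpkg -i /root/initramfs-tools_*.deb /root/linux-image-2.6.18-6-686_*.deb
        reboot
        # After rebooting into the 2.6 kernel:
        aptitude install libc6 locales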

    Read the article

  • Public folder emails not being delivered

    - by Rob
    Hello, we have just introduced an Exchange 2010 installation into our existing Exchange 2003 (all Standard) environment. We make a lot of use of our public folders in 2003, so I want to make a small PF tree in the 2010 system to test some applications against. I have created a few public folders in the 2010 Public Folder Management tool, mail-enabled them, given them email addresses, etc. However, mail will not be delivered; it queues in my existing 2003 Exchange server's 'Local Delivery' queue and eventually times out and bounces. I guess the Exchange 'system', including the new 2010 server, thinks that all public folder email must be delivered to the old 2003 server. Is it possible for me to have two public folder databases that each receive mail? If so, is there something I am missing to enable this? Thanks -R

    Read the article
