Search Results

Search found 44691 results on 1788 pages for 'first'.


  • How to set up a Django site with Cherokee, DynDNS and virtualenv?

    - by e-satis
    I have a Django project running with the dev server and would like to try running it in a production environment. I wanted to try Cherokee for a change, so I installed it. We don't have a domain name yet, so I set up a DynDNS hostname that looks like stuff.gotdns.org. It works fine; I can see the Cherokee welcome page (so red that I first thought I had hit an error :-p). I ran the wizard to create a new virtual server for Django. Now everything is set up, but I get nothing: still the default Cherokee welcome page. What should I do now if I want to go to "http://stuff.gotdns.org" and see my website? What should I do next if I want to make it available only at "http://project.stuff.gotdns.org"? Important fact: I use virtualenv, so you can't call Python directly, you have to activate it first.
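    One common way to deal with the virtualenv requirement is to launch the Django FastCGI/SCGI process through a small wrapper that activates the environment first. A minimal sketch, assuming a hypothetical project at /srv/myproject, a virtualenv at /srv/venv, and that flup is installed in that virtualenv (all of these names are illustrative, not taken from the question):

        #!/bin/bash
        # start-django.sh - hypothetical wrapper that Cherokee (or an init script) can call.
        # Activate the virtualenv so the right Python and packages are on PATH.
        source /srv/venv/bin/activate
        cd /srv/myproject
        # Serve the project over FastCGI on a local port; Cherokee's information
        # source for the virtual server would then point at 127.0.0.1:8080.
        exec python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 daemonize=false

    The Cherokee wizard's other back-end options (SCGI, uWSGI) would need an analogous wrapper so that the virtualenv is active before Python starts.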

    Read the article

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine.

        # du -hsx /
        11000283    /

        # df -kT /
        Filesystem               Type 1K-blocks      Used Available Use% Mounted on
        /dev/mapper/csisv13-root ext4 516032952 361387456 128432532  74% /

    There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains:

        # lsof -a +L1 /
        COMMAND    PID   USER FD TYPE DEVICE SIZE/OFF NLINK     NODE NAME
        zabbix_ag 4902 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4902 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)

    I rebooted to see if fsck does anything. But, from /var/log/boot.log, it seems there are no issues:

        /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks

    Thinking maybe someone overzealously reserved root space, I checked the master record:

        # tune2fs -l /dev/mapper/server-root
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          86430ade-cea7-46ce-979c-41769a41ecbe
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131064832
        Reserved block count:     6553241
        Free blocks:              5696264
        Free inodes:              28831903
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Feb 1 13:44:04 2013
        Last mount time:          Tue Aug 19 16:56:13 2014
        Last write time:          Fri Feb 1 13:51:28 2013
        Mount count:              9
        Maximum mount count:      -1
        Last checked:             Fri Feb 1 13:44:04 2013
        Check interval:           0 (<none>)
        Lifetime writes:          1215 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28836028
        Default directory hash:   half_md4
        Directory Hash Seed:      bca55ff5-f530-48d1-8347-25c004f66d43
        Journal backup:           inode blocks

    The system is:

        # uname -a
        Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

    Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
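    A minimal diagnostic sketch for one common cause of a du/df gap (files hidden underneath mount points), assuming root access; /mnt/rootfs is just an illustrative path:

        # Bind-mount the root filesystem somewhere else so that nothing mounted on
        # top of it (other filesystems, tmpfs, etc.) can hide files from du.
        mkdir -p /mnt/rootfs
        mount --bind / /mnt/rootfs

        # Measure usage of the bare filesystem and compare with df.
        du -hsx /mnt/rootfs
        df -h /

        # Clean up afterwards.
        umount /mnt/rootfs

    If the bind-mounted view matches df rather than du, the space is being consumed by files shadowed by a mount point; if not, checking the filesystem offline (fsck from a rescue environment) would be the next step to look at.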

    Read the article

  • Exchange 2007 Standard Edition

    - by Phrontiste
    We have Exchange 2007 Standard Edition (Version 8.1, Build 240.6) on an IBM System X3650 with 2 x Intel Xeon 5430 2.66 GHz. The Mailbox, Hub Transport, and Client Access roles are installed on one box, with 110 - 130 mailboxes in total. There are 6 physical disks:

        Disk 0,1 (68 GB)  = RAID-1, OS partition (C:)
        Disk 2,3 (279 GB) = RAID-1, Exchange databases, First and Second Storage Groups (D:)
        Disk 4,5 (68 GB)  = RAID-1, Exchange transaction logs (E:)

    Setup:

        Storage groups:   D:\First Storage group\Mailbox database.edb
        Storage groups:   D:\Second Storage Group\Public Folder Database.edb
        Transaction logs: E: partition

    Problem 1: On our D: partition (the mailbox database partition), the total size is 279 GB and the free space remaining is 64.7 GB, yet when I select the First Storage Group and Second Storage Group folders and check Properties they report a size of 165 GB. The mailbox database reports a size of 157 GB in Properties, whereas the size displayed in the folder is 164,893,456 KB. So we are missing around 50-54 GB, and there is nothing else on these drives: no page file, nothing at all. The partition housing the transaction logs reports its sizes accurately. Any suggestions / fixes for the above?

    Problem 2: As you may have already read in Problem 1, the size of the mailbox database is 157 GB (or 164 GB as reported), which is not recommended.
    a) What would you suggest we do to divide mailboxes into storage groups on this same server?
    b) How would we move mailboxes into different storage groups?
    c) Is this the information store size? (Am I right in thinking that this is not recommended?)
    d) Would having multiple storage groups, with one mailbox database in each, reduce the size of the information store?
    e) Any suggestions / how-tos for reducing the size of the information store?

    We didn't install this; we have inherited it. What other recommendations can you make to keep us better prepared for any server disaster? We are backing up with Yosemite Backup to an RD1000 (320 GB) at the moment, which is backing up successfully and flushing the logs daily. We haven't done a test restore YET. I have tried to provide as much info as possible; please let me know if you need anything further. Also, we haven't yet faced any problems with mail flow or access speeds; everything is working fine, and we have two to five people accessing OWA or Outlook via VPN only. Thanks for your time reading the above - I look forward to your expert suggestions.

    Read the article

  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently, and I'm wondering if there's a good, valid reason for doing so. First, my variables:

        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

    How I had the start commands:

        "${WAS_HOME}/bin/startManager.sh"
        "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER

    I was told that I should do it like this instead:

        WAS_DMGR="Dmgr01"  # Added variable
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER

    How is the second way of starting up everything for WebSphere any better or more correct than the first, original way?
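    For comparison, a minimal sketch of how either set of commands might be wrapped so the script stops as soon as a step fails, assuming the variables above (the helper function name and the ordering are illustrative, not taken from the original script):

        #!/bin/bash
        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_DMGR="Dmgr01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

        # Run a start script and abort early if it fails, so later servers are
        # not started against a half-initialized cell.
        run_step() {
            echo ">>> $*"
            "$@" || { echo "FAILED: $*" >&2; exit 1; }
        }

        run_step "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        run_step "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        run_step "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" "$WAS_APP_SERVER"
        run_step "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" "$WAS_WEB_SERVER"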

    Read the article

  • SQL like group by and sum for text files in command line?

    - by dnkb
    I have huge text files with two fields: the first is a string, the second is an integer. The files are sorted by the first field. What I'd like to get in the output is one line per unique string, with the sum of the numbers for the identical strings. Some strings appear only once while others appear multiple times. E.g. given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. Any suggestions on how to do this, either with GnuWin32 command-line tools or in a Linux shell? Thanks!

        glehnia 10
        glehnia 22
        glehniae 343
        glehnii 923
        glei 1171
        glei 2283
        glei 3466
        gleib 914
        gleiber 652
        gleiberg 495
        gleiberg 709
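    A minimal sketch with awk (available in a Linux shell and via GnuWin32), assuming the file is called input.txt and is whitespace-delimited as in the sample:

        # Sum the second field per distinct first field. Because the input is
        # sorted, each group can be emitted as soon as the key changes, so memory
        # use stays constant even for huge files.
        awk 'NR > 1 && $1 != prev { print prev, total; total = 0 }
             { prev = $1; total += $2 }
             END { if (NR) print prev, total }' input.txt

        # If the input were not sorted, an associative array works too:
        awk '{ sum[$1] += $2 } END { for (k in sum) print k, sum[k] }' input.txt | sort

    For the sample above, the glehnia line comes out as "glehnia 32".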

    Read the article

  • Double Filter in Excel

    - by Joe
    I'm trying to "stack" filters in Excel, so to speak. I want to filter column A to show anything greater than 30, and then I want to filter column B to show the top ten items. When I do this, however, it shows me all rows that fit both criteria (only five records). I want to first apply the criterion for column A and then filter those results to show the top ten items in column B (10 records total). I know that I could just copy the rows from my first filter to a new sheet and then filter the new worksheet, but is there any way to apply both filters so that I don't physically have to delete records this way? Thanks for your help!

    Read the article

  • Openfiler RAID 10 option not found

    - by chrisling106
    Hi, I'm building a NAS using Openfiler 2.3 (from the 32-bit ISO). First I want to experiment with it in a VM before going out and buying the hard drives needed. I created 5 virtual drives in VMware; sda is 2GB and the rest are 1GB each (sdb to sde). I left sda blank and want to set up a RAID 10 disk using sdb, sdc, sdd and sde. The 4 RAID partitions are set up successfully, but when I try to create a RAID device the only options for RAID level are 1, 0, 5 and 6. RAID 10 is not there! Can someone let me know what I have missed, please? TIA.
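    For reference, a minimal sketch of building the same array by hand with mdadm from Openfiler's shell; device names follow the question, but anything created outside the web GUI may not be managed by Openfiler afterwards:

        # Create a 4-device RAID 10 array directly from the RAID partitions.
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

        # Alternatively, nest the levels: two RAID-1 mirrors striped with RAID 0,
        # which is something a GUI offering only levels 0/1/5/6 can still express.
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
        mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2

        # Check the result.
        cat /proc/mdstat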

    Read the article

  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way to either require a password before using a recovery partition, or "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first. Here are the full details: I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well; for example, Office 2007 and other programs needed by the students were installed, all Windows updates are applied, and a default desktop is set up. All in all it's several hours of work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did I would want to avoid downtime even while an image is restored. Therefore, I've taken steps to lock them down, namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive.

    DeepFreeze is an amazing product: as long as you boot from the frozen hard drive, there is no way to actually make permanent changes to that hard drive. Anything you do is wiped after the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something. The problem is that even with the BIOS locked and set to only boot from the hard drive, these Acers still have a simple way to choose a different boot source: shut them down and put a paper clip in a little hole at the top while you turn them on again. This puts them into "Acer eRecovery" mode. That by itself is no big deal; you can still power cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent), it will wipe the hard drive and restore it to the original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable.

    I don't want to destroy the ability to do this entirely (which I could, by repartitioning the drives to remove the recovery partition), but I would like a way to require a password first, or "break" the recovery system in a way that I can "unbreak" only if I first unfreeze the hard drive in DeepFreeze. Any ideas?

    Read the article

  • Why doesn't my Postgres user have permissions to add a Postgres database?

    - by orokusaki
    First, I ran:

        sudo su postgres createuser -U postgres foouser -P

    which worked fine, and then I ran:

        createdb -U foouser -E utf8 -O foouser foodatabase -T template0

    and got "permission denied: cannot create database". Firstly, should I even su as postgres to do operations like the first one (assuming my postgres data dir is owned by postgres), or is -U postgres from any user (assuming trust is used in pg_hba.conf) sufficient? Secondly, why am I running into this error? Is it because the user foouser is a non-superuser? Should I create foodatabase as the postgres user and simply pass -O foouser?
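    A minimal sketch of the two usual ways out of this, assuming the error is simply that foouser lacks the CREATEDB privilege (which is the default for roles created without explicitly granting it):

        # Option 1: create the database as the postgres superuser and hand it to foouser.
        createdb -U postgres -E utf8 -O foouser -T template0 foodatabase

        # Option 2: give foouser the CREATEDB privilege, then re-run the original command.
        psql -U postgres -c "ALTER ROLE foouser CREATEDB;"
        createdb -U foouser -E utf8 -O foouser -T template0 foodatabase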

    Read the article

  • How to set the PHP Api Version for phpize

    - by Tom Frost
    I'm upgrading PHP on my server but I'm running into a problem with phpize and compiling external modules. phpize -v reports:

        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20090115
        Zend Extension Api No:   220090115

    But on my test server (which I'm trying to replicate) I get this:

        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626

    I'm running Debian squeeze, pulling the PHP 5.3.0-2 packages from the experimental repo. The difference between the two servers is that the first server has had old versions of PHP on it, and the test server was installed with PHP 5.3.0-2 from the start. I've attempted uninstalling all PHP packages from the first server (using --purge to get rid of all the config files) and re-installing 5.3 fresh, but I'm still having the same issue. Help!
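    A minimal diagnostic sketch for the usual suspect, a stale phpize or header tree left over from the older install and found earlier on the PATH (the paths below are only examples of places worth checking):

        # Which phpize is actually being run, and which package (if any) owns it?
        which -a phpize
        phpize -v
        dpkg -S "$(readlink -f "$(which phpize)")"

        # The API numbers phpize reports come from the installed PHP headers;
        # check whether an old include tree is still hanging around.
        ls -ld /usr/include/php* /usr/lib/php* 2>/dev/null
        dpkg -l | grep php5-dev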

    Read the article

  • Problems with Xen installation

    - by Rodnower
    Hello, I have CentOS 5.4 installed. Now I'm trying to install Xen without connecting to the Internet (I don't have any driver for the modem, so I can search the Internet only from Windows). All I have are the 7 installation disks. The first thing I did was look for some kind of add/remove programs wizard, but it needed a connection to the Internet. The second thing I tried was to find the Xen RPM on the disks and install it, but I ran into a dependency of a dependency. Third, I attempted to boot from the first disk and do an upgrade, but that was also unsuccessful... So my question is: is there some way to install Xen from the CentOS installation disks without a network? Thanks in advance.
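    A minimal sketch of the usual offline approach, pointing yum at the installation media as a local repository so it can resolve the dependency chain itself (the repo id and mount point below are assumptions, not from the question):

        # Mount an installation disk (swap disks as yum asks for packages, or copy
        # the CentOS/ directories from all 7 disks into one local directory first).
        mount /dev/cdrom /media/cdrom

        # Define a file:// repository over the mounted media.
        cat > /etc/yum.repos.d/local-media.repo <<'EOF'
        [local-media]
        name=CentOS 5.4 install media
        baseurl=file:///media/cdrom/
        enabled=0
        gpgcheck=1
        gpgkey=file:///media/cdrom/RPM-GPG-KEY-CentOS-5
        EOF

        # Install the Xen packages using only that repository.
        yum --disablerepo='*' --enablerepo=local-media install xen kernel-xen virt-manager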

    Read the article

  • Why does my $LD_LIBRARY_PATH get unset when using screen with bash?

    - by UltraNurd
    This is related to http://superuser.com/questions/27376/why-does-my-ld-library-path-get-unset-launching-terminal, but with a different set of symptoms. First, /usr/bin/screen is setuid as per the other question. Second, the default shell on this system is /bin/tcsh for various historical reasons, and we're not allowed to chsh to /bin/bash, so I typically run bash manually immediately after login. Third, I almost always use screen, but I want ctrl-a ctrl-c in screen to create a new bash "tab", so I always invoke bash first. That is:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ bash
        [~] echo $SHELL
        /bin/bash
        [~] screen -U
        [~]

    ...and when reconnecting:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ screen -dUr
        [~] echo $SHELL
        /bin/bash
        [~]

    However, my $LD_LIBRARY_PATH is there in tcsh, and there in bash, but empty once I run screen; it is still present if I just run screen from tcsh, but then I get new tcsh "tabs" when I use ctrl-a ctrl-c in screen. Any ideas?
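    For context, a setuid binary normally causes the dynamic linker to scrub LD_LIBRARY_PATH from the environment it passes on, which would explain the shells started by screen losing it. A minimal workaround sketch is to stash the value before screen starts and restore it in the shells screen spawns (the stash file name is just an example):

        # In the login shell, before starting screen: save the current value.
        echo "$LD_LIBRARY_PATH" > "$HOME/.ld_library_path.saved"
        screen -U

        # In ~/.bashrc, so every bash "tab" screen creates restores it:
        if [ -z "$LD_LIBRARY_PATH" ] && [ -r "$HOME/.ld_library_path.saved" ]; then
            export LD_LIBRARY_PATH="$(cat "$HOME/.ld_library_path.saved")"
        fi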

    Read the article

  • conditional formatting for subsequent rows or columns

    - by Trailokya Saikia
    I have data in a range of cells (say six columns and one hundred rows). The first four columns contain data and the sixth column has a limiting value. The limiting value is different for every row, and I have one hundred such rows. I am successfully using conditional formatting for the first row (e.g. cells in the first five columns containing data less than the limiting value are made red). But how can I copy this conditional formatting so that it applies to all one hundred rows, each with its respective limiting value? I tried the format painter, but it retains the same source cell (here, the limiting value) for the conditional formatting in the second and subsequent rows. So, for now, I am required to apply conditional formatting to each row separately.

    Read the article

  • How do the routers communicate with each other?

    - by Berkay
    Let's say that I want to make a request to a web page which is hosted in Europe (I live in the USA). My packets only contain the IP address of the web page; first the domain-name-to-IP-address translation is done, then my packets start their journey to Europe. I assume that MAC addresses are never used in this situation? Or are they? First, my packets pass through many routers on the way; how do these routers communicate with each other? Are router addresses added to my packet headers? Second, is there a specific path for router-to-router communication, or which conditions affect this route? Third, to cross the Atlantic Ocean, are cables used or... ?
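    As a concrete way to see the routers involved, a minimal sketch using traceroute (the hostname is just an example):

        # List every router (hop) a packet traverses on its way to the destination.
        traceroute www.example.com

        # On Windows the equivalent command is tracert:
        #   tracert www.example.com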

    Read the article

  • PostgreSQL: performance decrease due to index bloat

    - by Henry-Nicolas Tourneur
    I'm running PgSQL 8.1 on a CentOS 4.4 system (not upgradable, unfortunately). There's a Java app running on top of the PgSQL daemon, and we have to reindex the database every 2 months or so. Also important: the database isn't growing. It looks like the bloat is now coming back faster than before, and this tends to increase. My config is available here, and the autovacuum daemon is enabled and running quite often: pastebin.com/RytNj7dK

    You can also find the output of this query (wiki.postgresql.org/wiki/Show_database_bloat):

    3 hours after running reindex: http://pastebin.com/raw.php?i=75fybKyd
    72 hours after running reindex: http://pastebin.com/raw.php?i=89VKd7PC

    Does anyone have any idea what I should tweak to get rid of that growing bloat? Thanks for your help.

    PS: due to the antispam prevention system, I had to remove the http:// prefixes from my first two links.
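    Until the underlying cause is found, a minimal sketch for automating the periodic reindex from cron, assuming a database named mydb and a passwordless local connection for the postgres user (both assumptions, not from the question):

        #!/bin/bash
        # reindex-mydb.sh - rebuild all indexes, then refresh planner statistics.
        # REINDEX takes exclusive locks, so run this in a quiet maintenance window.
        psql -U postgres -d mydb -c "REINDEX DATABASE mydb;"
        vacuumdb -U postgres -d mydb --analyze

    An example crontab entry would be "0 3 * * 0 /usr/local/bin/reindex-mydb.sh" to run it weekly at 03:00.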

    Read the article

  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which was given to me by a frustrated friend of mine. The device is a Prestigio Leather 8GB, which identifies itself to a Linux host as:

        Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive

    Kernel messages as the USB device is plugged in:

        kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7
        kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0
        kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
        kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
        kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0
        kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0
        kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB)
        kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on
        kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB)
        kernel: [ 2770.731215] sdc:
        kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off
        kernel: [ 2770.880328]
        kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk
        kernel: [ 2770.887442] sdd: unknown partition table
        kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk

    So, the symptoms are typical for U3-like devices: two separate devices inside a single flash device. Windows also sees it as two identical USB devices and mounts two separate drives, where the first one presents itself as a CDROM device holding write-protected content, and the second is a regular flash-disk partition that "can" be written to. However, it seems to be broken in some weird way, since it won't let me write anything to it, format it, nothing; but that's not the issue right now.

    Question: How can I unlock the entire USB stick so it appears to the system as a single 8GB device which can be partitioned and used normally, without restrictions?

    Since it appeared to be a U3 device, I have tried the standard utilities: both the U3 Uninstaller by u3.com (found on SoftPedia), and the open-source u3_tool from SourceForge (on both Windows and Linux). The first utility failed to even detect the USB stick as a U3 device (it simply stood idle while I re-plugged the stick several times), while the second tool failed with some obscure error about a SCSI command being unable to do something (I might be able to provide the exact errors when I switch back to Windows).

        u3_tool -i /dev/sg3    (Display device info)

    fails with

        u3_partition_info() failed: Device reported command failed: status 1

    ...and every other option fails with the same error, minus the first part, which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I have read on a few occasions that this device protection is done by a special command sent to the device which tells it to lock itself, and so there should be an unlock command that would set the drive straight. Does anyone have any idea what I could do to this device to fix it?

    P.S. I also mentioned a problem with being unable to use the second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...

    Read the article

  • Why doesn't my sound card start? It reports code 10.

    - by Sarim Ali
    When I installed XP for the first time my sound card was working fine, but after the first reboot it stopped working. Under Device Manager, in the "Sound, video and game controllers" section, the entry "YAMAHA Native DS1 WDM Driver" has an exclamation mark on it. Clicking it and selecting Properties shows this message: "This device cannot start. (Code 10)". When I try to update the driver it says a better driver could not be found. Any idea how I can solve the issue?

    Update: As suggested by harrymc, I uninstalled the "YAMAHA Native DS1 WDM Driver" device and scanned for hardware changes. Windows detected and installed the sound card and it started working properly. BUT, after restarting, the same problem occurs. So now, every time I reboot the computer, I am stuck repeating the process of uninstalling the device to make it work.

    Read the article

  • Change the order of DNS lookup when connected in the VPN

    - by qwerty2010
    Hi, using Windows 7 Pro here. My LAN network adapter has DNS server 8.8.8.8 (Google's DNS). I also have an OpenVPN client to connect to my company's network. If I type "nslookup" while disconnected from the VPN, I get 8.8.8.8 (from my LAN network adapter). If I type "nslookup" while connected to the VPN, I get the DNS IP from my company's network. That makes me think that when connected to the VPN, all DNS resolution is routed first to my company's DNS. How can I change this order and make DNS resolution go to 8.8.8.8 first when I'm connected to the VPN? Thank you

    Read the article

  • Apache2: Trying to map a subdomain to a subdirectory

    - by user1561753
    So basically I want to have: sub.domain.com/anything -> domain.com/asub/anything. I'm a bit new to this and a bit confused. The first thing I did was configure my DNS settings so sub.domain.com goes to domain.com using a CNAME (would an A record and the IP be better?). Next I went into my VirtualHost file and have:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} www.(.+) [NC]
        RewriteRule ^/(.*) http://domain.com/$1 [R]
        RewriteCond %{HTTP_HOST} ^sub.domain.com
        RewriteRule ^/(.*) http://domain.com/asub/$1 [R]

    The first rule is meant to handle www. and make sure that is caught correctly, and it works. The second rule is what I've added for the subdomain mapping, and it doesn't seem to be doing anything.

    Read the article

  • Add a remote printer over ssh on OSX?

    - by GradGuy
    I have a printer at my office that is connected to a local network, and my Linux box at work can see it on the network. However, it is not visible to the outside world. I was trying to figure out a way to add it on my MacAir and so far have found two options:

    1) Using an ssh tunnel via the CLI: cat file.pdf | ssh user@linuxbox lpr
    2) With Chrome installed on the Linux box, using the Google Cloud Print service on the remote box and Automator on my MacAir, I can add the printer to the Cmd+P dialog box.

    I like the first method since it does not require Chrome to be installed, and the second one since it allows me to use Cmd+P inside all applications. I was wondering if there is a way to combine them by using Automator to run the first command-line script. What about port forwarding? Is it possible to forward the remote CUPS 631 port to a local port and then add the printer normally? What other methods would you recommend?
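    Port forwarding does work for this; a minimal sketch, assuming the printer is shared through the Linux box's CUPS server (the queue name officeprinter is just an example):

        # Forward the remote CUPS port (631) to a local port on the Mac.
        ssh -N -L 6631:localhost:631 user@linuxbox

        # While the tunnel is up, the remote queues are reachable locally, e.g.:
        #   http://localhost:6631/printers/officeprinter
        # Add that as an IPP printer in System Preferences, or print directly:
        lp -h localhost:6631 -d officeprinter file.pdf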

    Read the article

  • Weird noise while scanning, using scanimage and a Canon Lide 35

    - by Manu
    I'm trying to scan a bunch of images using SANE's scanimage:

        scanimage --format=tiff --batch --batch-prompt

    This command scans the first picture perfectly, but as soon as I press enter, the scanner makes a weird noise and the scanning "arm" moves very, very slowly. If I stop scanimage and start it again, it scans normally again. Is there another scanimage option that I need to add? I've checked the man page, but can't see what I'm missing.

    Edit: the problem seems to be that the scanning "arm" doesn't go back to its original position after the first scan.

    Read the article

  • Stable MSN messenger for mac?

    - by Console
    I use Adium on my Mac as a client for MSN Messenger. (I have also tried the "official" Live Messenger for Mac for a while, but it had even worse issues than what I am experiencing with Adium.) Everything works fine for the first 5-10 minutes, perhaps until the first automatic "away"; then suddenly any message I send is followed by a dozen or so messages saying "Message could not be sent because a connection error occurred:" and "Message may have not been sent because a timeout occurred:". If I send a few messages, they usually eventually arrive (all at once) after a long delay. Windows machines on the same network running Live Messenger work fine. Any ideas why this might be, or suggestions for a client that might work, when both Adium and Microsoft's own messenger for Mac seem to disconnect?

    Read the article

  • Windows 8.1 upgrade created a second recovery partition. Can I remove the original?

    - by Dave S
    The Windows 8.1 upgrade created a second recovery partition. Can I remove the original? Prior to this the partitions were Recovery, EFI, OS (C:), Data (D:). After the upgrade the partitions are Recovery, EFI, OS (C:), Recovery, Data (D:). The first Recovery partition is 1023 MB; the second is 350 MB. The "Create a system image" tool selects the EFI, OS (C:), and the second 350 MB Recovery partitions. The first 1023 MB Recovery partition is not listed, so I have to "assume" it is now redundant. The factory (HP) recovery partition was removed using the HP-provided tool after creating recovery disks, and the D: partition was created months ago.

    Read the article

  • Dell PR03X port replicator and DisplayPort to DVI adapter not detecting second monitor

    - by yothenberg
    Hi, I have a Dell M4400 connected to a PR03X port replicator/docking station. I use the DVI port to connect it to a first Dell 2208WFP monitor, and I'm trying to use a DisplayPort-to-DVI adapter to connect it to a second Dell 2208WFP monitor. The second monitor, connected via the DisplayPort-to-DVI adapter, immediately goes into sleep mode and the laptop doesn't detect it. What is really weird is that it did detect it the first time I plugged it in, but after I unplugged the monitor and plugged it back in, it stopped working. I swapped the monitors around and it detected them both, but after unplugging the monitor connected via the DisplayPort-to-DVI adapter and plugging it in again, it stopped working. Both monitors work if plugged directly into the DVI port. Is there some way to force re-detection? Any ideas? Thanks, Mark

    Read the article
