Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.

  • How to specialize a type-parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article I just wrote ( http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/ ) I stated my belief that in Scala you cannot declare a function that takes as an argument another function with an unbound type parameter. What I mean is that you can write: def g(f: Array[Double] => Double, xs: Array[Double]): Double but you cannot write something like: def g(f[Y]: Array[Y] => Double, xs: Array[Double]): Double because Y is not known. The intended use is that inside g() I will specialize f to multiple different types at different times. You can write: def g[Y](f: Array[Y] => Double, xs: Array[Double]): Double but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait instead of insisting on passing around a function. What I advocated in my article was: 1) creating a trait that imitates the structure of Scala's Function1 trait, something like: abstract trait VectorFN { def apply[Y](x: Array[Y]): Y } 2) declaring def g(f: VectorFN, x: Double): Double (using the trait as the type). This works (people here on StackOverflow helped me find it, and I am happy with it) - but am I misrepresenting Scala by missing an even better solution?

  • What is an efficient strategy for multiple threads posting jobs and waiting for a response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach? But for a job creator to find out that her job has been done, I'm not so sure. The job creators could sleep, and the worker could interrupt them when done. Or each job creator could have an atomic boolean that it checks and that the worker sets. I dunno, neither of those feels very nice. I'd like to do it with as few locks as possible (none, ideally). So to be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
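
    For illustration, here is that pattern sketched in Python - a blocking-queue fan-in with a per-job completion signal. (The question asks for Java; the direct java.util.concurrent equivalents would be an ArrayBlockingQueue for the fan-in and a per-job CountDownLatch for the "it's done" signal. This is a sketch of the shape of the solution, not a lock-free design.)

      import threading
      import queue

      jobs = queue.Queue()  # fan-in channel: many producers, one worker

      class Job:
          def __init__(self, payload):
              self.payload = payload
              self.done = threading.Event()  # per-job "it's done" signal

      def worker():
          while True:
              job = jobs.get()
              # ... perform the work here ...
              job.done.set()  # wake only the creator that posted this job

      threading.Thread(target=worker, daemon=True).start()

      def post_and_wait(payload):
          job = Job(payload)
          jobs.put(job)
          job.done.wait()  # block until the worker marks this job done

      post_and_wait("example job")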

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. Would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.

  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have 2 routers announcing a /22 subnet to different providers (one provider connects to each of the 2 routers). I have split the /22 into two /23s, announcing one /23 on each of the routers plus the /22 (the providers will take the more specific route). This allows me to fail over and keep traffic inside each /23 going in and out the same provider. What are other ways in which I could announce just the /22 with both routers and have packets from servers on the network behind the routers go back out the same router they came in through? EDIT: The main problem I come across, which end users and clients complain about the most, is that the least-hop route is sometimes not the "optimal" route. In my case, I know that Provider B may have better latency to country X. But when packets come in from Provider B, they may go out Provider A or Provider B. The reverse is also true: if I send a packet to country X out Provider A, even though it may have more hops back, the packet will likely come in from Provider B (which may have higher latency, packet loss, etc. to that country).

  • Creating multiple CSV files from data within a CSV file.

    - by S1syphus
    System: OS X or Linux. I'm trying to automate my workflow at work. Each week I receive an Excel file, which I convert to a CSV. An example (the first row carries the block labels L1-L11, the second row the per-block headers):

      ,,L1,,,L2,,,L3,,,L4,,,L5,,,L6,,,L7,,,L8,,,L9,,,L10,,,L11,
      Title,r/t,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst
      EXAMPLEfoo,60,6,6,6,0,0,0,0,0,0,6,6,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
      EXAMPLEbar,30,6,6,12,6,7,14,6,6,12,6,6,12,6,8,16,6,7,14,6,7.5,15,6,6,12,6,8,16,6,0,0,6,7,14
      EXAMPLE1,60,3,3,3,3,5,5,3,4,4,3,3,3,3,6,6,3,4,4,3,3,3,3,4,4,3,8,8,3,0,0,3,4,4
      EXAMPLE2,120,6,6,3,0,0,0,6,8,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
      EXAMPLE3,60,6,6,6,6,8,8,6,6,6,6,6,6,0,0,0,0,0,0,6,8,8,6,6,6,0,0,0,0,0,0,0,10,10
      EXAMPLE4,30,6,6,12,6,7,14,6,6,12,6,6,12,3,5.5,11,6,7.5,15,6,6,12,6,0,0,6,9,18,6,0,0,6,6.5,13

    What I need to do is create one CSV file for each label in row 1 (L1, L2, L3, L4, ...), and each of those files needs to contain the title, r/t, and needed columns. So for L1 an example output would look like:

      EXAMPLEfoo,60,6
      EXAMPLEbar,30,6
      EXAMPLE1,60,3
      EXAMPLE2,120,6
      EXAMPLE3,60,6
      EXAMPLE4,30,6

    And for L2:

      EXAMPLEfoo,60,0
      EXAMPLEbar,30,6
      EXAMPLE1,60,3
      EXAMPLE2,120,0
      EXAMPLE3,60,6
      EXAMPLE4,30,6

    And so on. I have tried playing around with sed and awk and hit Google, but I have found nothing that really solves the issue. I'd imagine Perl would be particularly suited to this, or maybe Python, so I would be more than happy to accept suggestions. So, any suggestions? Thanks in advance.
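
    A minimal sketch of one possible Python approach, using only the standard library (the input filename is a placeholder, and the column layout - each label sitting directly above its block's "needed" column - is inferred from the sample above, so verify it against a real export):

      import csv

      # Read the weekly export; row 0 carries the L1..L11 labels, row 1 the headers.
      with open("week.csv", newline="") as fh:
          rows = list(csv.reader(fh))

      labels = rows[0]
      data = rows[2:]  # actual records start after the two header rows

      for i, label in enumerate(labels):
          if not label:
              continue  # skip the padding columns between blocks
          # In the sample, each label sits over its block's "needed" column.
          with open(f"{label}.csv", "w", newline="") as out:
              writer = csv.writer(out)
              for row in data:
                  writer.writerow([row[0], row[1], row[i]])  # title, r/t, needed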

  • FTP server monitoring

    - by Supra Man
    I need to monitor some FTP servers for any changes in file structure. The things I need to monitor are: how many times a file is downloaded (not sure if possible), whether files are changed, whether files are deleted, and whether the FTP server still exists. I would like this to be something I can run server-side, and I would like an SMS message or email if any of the above changes have occurred. Does anyone have any experience with this, or would you recommend a particular language or script? Thanks. Just for reference: I don't want to install an FTP server; I just want something to help me monitor other remote FTP servers by periodically logging in.
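
    A minimal sketch of the "periodically log in and diff" idea in Python, using the standard-library ftplib and smtplib. The host, credentials, and addresses are placeholders; it assumes the server supports the MLSD command (fall back to ftp.nlst() otherwise); and download counts are not visible over plain FTP, so that part would need server logs:

      import ftplib
      import smtplib
      from email.message import EmailMessage

      HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"  # placeholders

      def snapshot():
          """Return a set describing the server's current file listing."""
          ftp = ftplib.FTP(HOST, timeout=30)
          ftp.login(USER, PASSWORD)
          listing = set()
          for name, facts in ftp.mlsd():  # requires MLSD support on the server
              listing.add((name, facts.get("size"), facts.get("modify")))
          ftp.quit()
          return listing

      def alert(body):
          msg = EmailMessage()
          msg["Subject"] = "FTP change detected"
          msg["From"], msg["To"] = "monitor@example.com", "me@example.com"
          msg.set_content(body)
          with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
              smtp.send_message(msg)

      previous = snapshot()
      # Run the block below from cron or a loop with time.sleep();
      # a failed FTP connection here doubles as the "server gone" alert.
      current = snapshot()
      if current != previous:
          alert(f"Added: {current - previous}\nRemoved: {previous - current}")
      previous = current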

  • Parallel.ForEach loop creating multiple DB connections throws connection errors?

    - by shawn.mek
    "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication." I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough: each iteration connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? Before trying to convert to Parallel.ForEach, I had split my list of objects into four groups (separate CSV files) and run four concurrent instances of my program, which ran faster overall than just one - thus the idea to parallelize. So it seems connecting to the DB shouldn't be the problem? Any ideas? EDIT: Here's before:

      var gtgGenerator = new CustomGtgGenerator();
      var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
      var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
      foreach (var cloneIdAndAccessions in allAccessionsFromObs)
          DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString);

    and after:

      var gtgGenerator = new CustomGtgGenerator();
      var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
      var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
      Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
          DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities context:

      using (var bioEntities = new BioEntities(connectionString)) { ... }

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs, all with Windows Server 2008 R2 x64:

      - Active Directory server
      - MS SQL Server 2008 R2 server
      - Automated build server
      - SharePoint 2010 server for hosting our public Web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
      - 2x SharePoint 2007 development servers
      - 2x SharePoint 2010 development servers

    Beyond that we will need to build several SharePoint farms for testing purposes. These VMs will only be started when needed. The specs of the new rig are:

      - Dell R610 rack server
      - 2x Intel Xeon E5620
      - 48GB RAM
      - 6x 146GB SAS drives
      - Dell H700 RAID controller

    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16GB RAM, 2x 500GB SATA in RAID 1). But we are not sure about the RAID level for the new rig. Should we go for having the 6x 146GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and lower risk of a RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
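
    For comparison, the raw arithmetic behind the trade-off (a sketch using the textbook write-penalty figures; real-world IOPS and rebuild behavior depend on the controller, cache, and workload):

      drives, size_gb = 6, 146

      # Usable capacity
      raid10_capacity = drives // 2 * size_gb  # mirrored pairs -> 438 GB
      raid5_capacity = (drives - 1) * size_gb  # one drive of parity -> 730 GB

      # Classic write penalties: each logical write costs 2 disk I/Os on
      # RAID 10 (data + mirror) and 4 on RAID 5 (read data, read parity,
      # write data, write parity).
      raid10_write_penalty = 2
      raid5_write_penalty = 4

      print(raid10_capacity, raid5_capacity)  # 438 730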

  • Adobe Acrobat: How to batch-combine multiple PDF files?

    - by Andrei Andre
    I have 3 folders: Folder 1, Folder 2, Folder 3. In each folder I have two PDF files:

      Folder 1: file1.pdf, file2.pdf
      Folder 2: file1.pdf, file2.pdf
      Folder 3: file1.pdf, file2.pdf

    I want each folder to end up with a combined version of its two files:

      Folder 1: binder.pdf
      Folder 2: binder.pdf
      Folder 3: binder.pdf

    Any idea? Don't tell me to do it manually - this case is just to explain the problem; think of it as hundreds of folders. :) Maybe I can use another tool instead of Adobe Acrobat?!
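
    Since another tool is on the table: one possibility is a short Python script using the third-party pypdf package (a sketch; the root path is a placeholder, and the folder layout mirrors the example above):

      from pathlib import Path
      from pypdf import PdfWriter

      root = Path("/path/to/folders")  # placeholder root directory

      for folder in sorted(p for p in root.iterdir() if p.is_dir()):
          pdfs = sorted(folder.glob("*.pdf"))
          if not pdfs:
              continue
          writer = PdfWriter()
          for pdf in pdfs:
              writer.append(str(pdf))  # append all pages of each source file
          with open(folder / "binder.pdf", "wb") as out:
              writer.write(out)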

  • In your ssh config, is it possible to have one host entry for multiple machines on the same domain?

    - by Joshua Olson
    I'd like to be able to do something like:

      Host *
          HostName *.mydomain.com
          ...

    so I can type something like:

      ssh test
      ssh ci
      ssh dev

    instead of having to type:

      ssh test.mydomain.com
      ssh ci.mydomain.com
      ssh dev.mydomain.com

    Right now I have separate entries for each one, but we have dozens of machines, so I'd rather have a default than have to duplicate everything so many times.
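
    For reference, one pattern that may do exactly this - an assumption to verify against ssh_config(5), which documents %h as a token that expands to the host name given on the command line:

      Host test ci dev
          HostName %h.mydomain.com

    Newer OpenSSH releases (6.5 and later) also offer CanonicalizeHostname and CanonicalDomains, which append the domain for any host rather than a fixed list.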

  • How to authenticate multiple entry points in a facebook app?

    - by Simon_Weaver
    I am using an IFrame application with XFBML and the new JavaScript API. I'd like to have a Facebook application with multiple entry points, which will most likely represent different links coming from a fan page tab. I can do this quite easily if the pages don't require authentication - for instance, I can create several pages under the app, and if a new user comes I can send them to any page: http://apps.facebook.com/myapp/offers http://apps.facebook.com/myapp/game http://apps.facebook.com/myapp/products The problem is that if I need authentication, then once the user is authenticated they get redirected to my default post-authorization URL. Is there a way for a user who comes to /game to stay on /game after they are authenticated, without redirecting? I thought I could do it with the AJAX login form, but I cannot find out how to do that in a Facebook IFrame application. I think the example using requirelogin only works for FBML: <a href="http://apps.facebook.com/mysmiley" requirelogin=1>Welcome to my app</a>. Is there a way to accomplish this with Facebook APIs, or will I have to do some kind of clever cookie handling?

  • Is it possible to guarantee a unique id for multiple items using the same id variable at a point in time?

    - by Scarface
    First of all, do not be overwhelmed by the long code; I just put it in for reference. I have a function that preg_replaces content and puts it in a jQuery dialog box with a matching open-link. For example, if there is a paragraph with two matches, they will be put inside two divs, and a jQuery dialog function will be echoed twice, one for each div. While this works for one match, with multiple matches it does not. I am not sure how to distribute unique ids, at a point in time, to each of the divs and their matching dialog open-scripts. Keep in mind I removed the preg_replace function since it complicates the problem. If anyone has any ideas, they will be greatly appreciated.

      <?php
      $id = uniqid();
      $id2 = uniqid();
      echo "<div id=\"$id2\"> </div>";
      ?>
      $.ui.dialog.defaults.bgiframe = true;
      $(function() {
          $("<?php echo "#$id2"; ?>").dialog({hide: 'clip', modal: true,
              width: 600, height: 350, position: 'center', show: 'clip',
              stack: true, title: 'title', minHeight: 25, minWidth: 100,
              autoOpen: false});
          $('<?php echo "#$id"; ?>').click(function() {
              $('<?php echo "#$id2"; ?>').dialog('open');
          })
          .hover(
              function(){ $(this).addClass("ui-state-hover"); },
              function(){ $(this).removeClass("ui-state-hover"); }
          ).mousedown(function(){
              $(this).addClass("ui-state-active");
          })
          .mouseup(function(){
              $(this).removeClass("ui-state-active");
          });
      });

  • How to Merge Data From Multiple Excel Files into a Single Excel File or Access Database?

    - by lalabeans
    I have a few dozen Excel files which all have the same format (i.e. 4 worksheets per Excel file). I need to combine all the files into 1 master file which must have just 2 of the 4 worksheets. The corresponding worksheets from each Excel file are named exactly the same, as are the column headers. While each file is structured the same, the information within sheets 1 and 2 (for example) is different, so it can't be combined into one file with everything in one sheet! I've never used VBA before and I'm wondering where I might start this task!
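
    Outside of VBA, one possible starting point is Python with the third-party pandas and openpyxl packages (a sketch; the folder path and the sheet names "Sheet1"/"Sheet2" are placeholders for the two worksheets actually needed):

      from pathlib import Path
      import pandas as pd

      folder = Path("/path/to/excel/files")  # placeholder
      sheets_to_keep = ["Sheet1", "Sheet2"]  # the 2 of 4 worksheets you need

      # Collect the matching sheet from every workbook, then stack them.
      merged = {name: [] for name in sheets_to_keep}
      for xlsx in sorted(folder.glob("*.xlsx")):
          if xlsx.name == "master.xlsx":
              continue  # skip our own output on re-runs
          for name in sheets_to_keep:
              df = pd.read_excel(xlsx, sheet_name=name)
              df["source_file"] = xlsx.name  # track where each row came from
              merged[name].append(df)

      with pd.ExcelWriter(folder / "master.xlsx") as writer:
          for name, frames in merged.items():
              pd.concat(frames, ignore_index=True).to_excel(
                  writer, sheet_name=name, index=False)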

  • DVI splitter not working as expected/confusion between DVI-D and -I

    - by Freakishly
    Hey guys, thanks for looking. I have an ATI FirePro V3700 in my desktop machine, and I have been running a dual-monitor setup quite effortlessly, thanks to the two DVI ports on the card. I came upon a third monitor and wanted to extend my desktop to 3 screens, so I purchased a DVI splitter from Amazon. Now, I can only duplicate the second monitor onto the third, not extend it. I've tried all possible combinations of input to no avail. Here's the setup: The ATI FirePro V3700 has two dual-link DVI-I outputs. The splitter splits a single dual-link DVI-I port into two dual-link DVI-I outputs. Two of the monitors are NEC E222W, and the third monitor is a Dell 2001FP. Each monitor has one D-Sub and one dual-link DVI-D input. The cables going from the video card to the monitors are two dual-link DVI-D to the NECs and one single-link DVI-D to the Dell. Is the problem likely with the DVI-D/DVI-I mismatch? Or is it with the cable on the Dell that is only single-link? The cables are easily replaceable, the monitors not so much. Thanks for your time, I really appreciate it. References: http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/v3700/Pages/v3700-specs.aspx http://www.amazon.com/Cables-Unlimited-DVI-D-Splitter-PCM-2260/product-reviews/B000H09RFM/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1 http://www.newegg.com/Product/Product.aspx?Item=N82E16824002495 http://accessories.us.dell.com/sna/PopupProductDetail.aspx?cs=19&l=en&c=us&sku=320-1578

  • Multiple keyboard shortcuts for the same command in OS X?

    - by Nik
    I am used to doing Ctrl-Shift-Tab to cycle through tabs, though I am slowly picking up the niceness of using Cmd-Shift-[ or ] to do the same on the Mac. I have currently set TextMate to use Ctrl-Shift-Tab for tab cycling, but I'd like Cmd-Shift-[ or ] in addition to that to perform tab cycling. Is this possible? The reason I ask is because I see that Chrome can do it both ways (not through any configuration of my own, of course).

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

      - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
      - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC.)
      - It must work in all directions (changes made on /any/ machine should be pushed to the others).
      - All data sent must be encrypted (I have ssh keypairs set up already).
      - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online).
      - It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
      - It must be immediate (using an event-based system, not a periodic one like cron - see the sketch after this list).
      - It must be flexible (if more machines are added, I should be able to incorporate them easily).

    I also have some preferences that I would like to be fulfilled, but do not have to be:

      - It should notify me somehow if there are conflicts or other errors.
      - It should recognize symbolic and hard links and create corresponding ones.
      - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
      - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments, as it might be useful in some way and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work:

      - rsync and lsyncd do not support bidirectional synchronization
      - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
      - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
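
    A minimal sketch of the event-detection half in Python, assuming the free third-party watchdog package (it only notices changes immediately; the queuing, propagation over ssh, and conflict handling described above would still have to be built around it, and the watched path is a placeholder):

      import time
      from watchdog.observers import Observer
      from watchdog.events import FileSystemEventHandler

      class ChangeHandler(FileSystemEventHandler):
          def on_any_event(self, event):
              # Hook point: enqueue event.src_path for pushing to the others.
              print(f"{event.event_type}: {event.src_path}")

      observer = Observer()
      observer.schedule(ChangeHandler(), "/path/to/synced/dir", recursive=True)
      observer.start()
      try:
          while True:
              time.sleep(1)
      except KeyboardInterrupt:
          observer.stop()
      observer.join()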

  • Why does some software not get load-balanced even when there are multiple cores?

    - by Nav
    While VTune Analyzer was running on a blade server with 8 cores, I observed the CPU usage percentage using mpstat -P ALL 1. mpstat showed me that VTune was taking up 100% of a single core, while all other cores were idle. Why does that happen? Shouldn't the OS (RHEL Server 5.2) automatically distribute load across cores? The same happened when I tried running MATLAB (even after enabling multithreading support in the MATLAB settings). P.S.: I'm a developer, not a sysadmin, so I felt it better to ask here rather than at Server Fault.

  • Is it safe to use a single switch for multiple subnets?

    - by George Bailey
    For a moment, forget about whether the following is typical or easy to explain: is it safe and sound?

      Internet
         |
      ISP-supplied router x.x.x.1 (public subnet)
         |
      switch---------------------------------------------+
         | (public subnet)                               | (public subnet)
      BVI router (switch with an access list)         NAT router
         | (public subnet)                               | (private subnet 192.168.50.1)
         +--------------------switch---------------------+  (both subnets)
                              |    |
      computer with IP 192.168.50.2    computer with IP x.x.x.2

    I don't plan to implement this setup, but I am curious about it. The 50.2 computer may send a packet to the x.2 computer, but it will use 50.1 as the router, since 50.2 knows that the subnet is different. Would this result in the packet being received twice by the x.2 machine - first directly through the switch, and second by way of the two routers? Do you see any problems with this, aside from how confusing it is and that it would have one switch doing the work of two subnets?

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups/archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk. Basically it creates Git repos which include packfiles of up to 1GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which:

      - will stay readable in future;
      - are portable, e.g. to Windows;
      - have known simple implementations, so I could re-implement them myself if necessary, as sketched below. (Using Bup packs will stretch my robustness budget, so I want to be confident about how other parts of the system would behave.)

    I heard split archives are possible with both ZIP and 7-Zip. Is that right?
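
    On the "simple implementations" point, raw fixed-size splitting is easy to re-implement from scratch; here is a minimal Python sketch, equivalent to what Unix split(1) and cat produce, with a roughly CD-sized chunk as an assumed target:

      from pathlib import Path

      CHUNK = 700 * 1024 * 1024  # ~ one CD-R per piece; adjust for DVDs etc.

      def split_file(path):
          """Write path.000, path.001, ... each at most CHUNK bytes."""
          path = Path(path)
          with open(path, "rb") as src:
              index = 0
              while True:
                  data = src.read(CHUNK)
                  if not data:
                      break
                  Path(f"{path}.{index:03d}").write_bytes(data)
                  index += 1

      def join_files(path):
          """Reassemble from path.000, path.001, ... (same as 'cat path.*')."""
          path = Path(path)
          parts = sorted(path.parent.glob(path.name + ".[0-9][0-9][0-9]"))
          with open(path, "wb") as dst:
              for part in parts:
                  dst.write(part.read_bytes())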

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything else that requires the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that pointed to the new libraries, but I think this would conflict with the existing ones; i.e., two references to libcrypto could cause everything to have issues. The main reason for doing this is because of issues with PHP cURLing to external servers when using the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.

  • Testing performance from around the world - how do I get a Linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed into 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference this makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors'. This is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of a service whose global infrastructure we could piggyback on to run some scripts that test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt
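
    For the "simple scripts" part, the measurement itself can be as small as timing a TCP connect; a Python sketch (the target hosts and port are placeholders, and a TCP handshake only approximates application-level latency):

      import socket
      import time

      def connect_time_ms(host, port=443, samples=5):
          """Median TCP connect time to host:port, in milliseconds."""
          times = []
          for _ in range(samples):
              start = time.perf_counter()
              with socket.create_connection((host, port), timeout=5):
                  pass  # connection established; close immediately
              times.append((time.perf_counter() - start) * 1000)
          return sorted(times)[len(times) // 2]

      for host in ["service.example.com", "competitor.example.com"]:
          print(host, round(connect_time_ms(host), 1), "ms")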

  • Where do client-only rules execute on a single profile with multiple Outlook instances open?

    - by Roger
    I have two instances of Outlook 2007 open on two different machines, both logged in to the same account. The issue I have is with client-only rules for sound alerts executing intermittently. All other rules work correctly, because they run on the Exchange server, but the client-only ones seem to pick and choose when to produce sounds. Is there something I can do to make my client-only rules run reliably on one particular instance of Outlook?

  • Outgoing email getting stuck in the outbox and sent out multiple times.

    - by Alain
    I have someone with 3 accounts in Outlook; one of them is giving trouble, in that when he sends mail from it, the message gets stuck in the outbox, saying it hasn't been sent yet. When it finally does go out, he gets reports that about 10 copies have been received. The webmail associated with that account works fine, so the hosting company (GoDaddy) hasn't been helpful in solving this problem; it seems to be an Outlook issue. The other two email accounts (one of which is also a GoDaddy account) work normally.

  • How do I open multiple windows when Outlook 2010 starts?

    - by Eric
    OS: Windows 7 64-bit App: Outlook 2010 32-bit Server: Exchange 2010 I'd like to modify Outlook's default startup behavior so that it shows both my Inbox and Calendar when I click my shortcut. I use both of them all day, and know how to just right-click the calendar and select "Open in New Window." I run my inbox on one screen and my calendar on another. I also configured my calendar to be the folder that opens by default when I start Outlook so I don't miss early appointments, but if I could somehow have BOTH open in two separate windows, that would be awesome. Is there a command-line interface or something that can accomplish this? Thanks in advance.

  • How to scan multiple pages from a book under Linux?

    - by rumtscho
    I want the process to look like:

      1. I choose the correct scan settings (dpi, color depth, etc.)
      2. I lay the first page on the scanner and trigger the process
      3. The scanner scans the page and waits for me to position the next page correctly
      4. I confirm that the next page is ready for scanning
      5. Repeat the above two steps until I tell the scanner that there are no more pages to come
      6. The scanner saves everything into a single PDF

    I tried both xsane and gscan2pdf. First problem: they want me to know how many pages will be scanned. This is already a nuisance, but I can do the counting if needed. The main problem is that in step 3, the scanner does not pause. It is probably optimised for being fed loose sheets. The next scan process is triggered automatically as soon as the CCD has returned to the start position. The time the scanner needs to return the CCD is very short, and I can't turn the page and position the book properly. Is there software which can do the scan process the way I described above, or did I just miss a setting in xsane or gscan2pdf that makes the scanner pause? If it makes any difference, the scanner is an Epson Stylus SX620FW, and I run it using the manufacturer-provided driver.
