Search Results

Search found 1794 results on 72 pages for 'embarrassingly parallel'.

Page 14/72

  • Russian-to-English Parallel Word Corpus?

    - by Cygorger
    Hi: I am looking for a simple Russian-to-English word corpus. It can be as simple as a CSV that lists a Russian word in the first column and the equivalent English word in the second. Any ideas where I can find such a thing? Does the NLTK toolkit have something like this? Thanks
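
    A minimal sketch of one place to look in NLTK, assuming the small Swadesh basic-vocabulary word lists it ships are close enough for your purpose (they are short lists, not a full dictionary); the download name and the 'ru'/'en' language codes are what recent NLTK releases use, so verify them against your install:

        import csv
        import nltk

        nltk.download("swadesh")          # fetch the Swadesh basic-vocabulary word lists
        from nltk.corpus import swadesh

        # Pair up the Russian and English entries of the Swadesh list.
        ru_en_pairs = swadesh.entries(["ru", "en"])   # list of (russian, english) tuples

        # Dump them to the two-column CSV described in the question.
        with open("ru_en.csv", "w", encoding="utf-8", newline="") as f:
            csv.writer(f).writerows(ru_en_pairs)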

    Read the article

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that will generate a report using PHPExcel from data queried from a MySQL DB. Currently the processing is linear: it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once, and there is very little processing done on the PHP side. The query returns hundreds of lines in less than .001 seconds, so it is running fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output. I would like to do this: spawn a thread/process to read the template, spawn a thread/process to fetch the data, return to the parent thread (which waits until both are complete), then proceed on as normal. My main questions are: is this possible, and is it worth it? If yes to both, how would you tackle it? Also, it is PHP 5 on CentOS.

    Read the article

  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split an image of around 4000x6000 into 4 parts (amongst many other tasks), and I need this to be as quick as possible for multiple users. My current code for doing this is:

        var bitmaps = new RenderTargetBitmap[elements.Length];
        using (var stream = blobService.Stream(key))
        {
            BitmapImage bi = new BitmapImage();
            bi.BeginInit();
            bi.StreamSource = stream;
            bi.EndInit();
            for (var i = 0; i < elements.Length; i++)
            {
                var element = elements[i];
                TransformGroup transformGroup = new TransformGroup();
                TranslateTransform translateTransform = new TranslateTransform();
                translateTransform.X = -element.Left;
                translateTransform.Y = -element.Top;
                transformGroup.Children.Add(translateTransform);
                DrawingVisual vis = new DrawingVisual();
                DrawingContext cont = vis.RenderOpen();
                cont.PushTransform(transformGroup);
                cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight)));
                cont.Close();
                RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default);
                rtb.Render(vis);
                bitmaps[i] = rtb;
            }
        }
        for (var i = 0; i < bitmaps.Length; i++)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                PngBitmapEncoder encoder = new PngBitmapEncoder();
                encoder.Frames.Add(BitmapFrame.Create(bitmaps[i]));
                encoder.Save(ms);
                var regionKey = WebPath.Variant(key, elements[i].Id);
                saveBlobService.Save("image/png", regionKey, ms);
            }
        }

    I am running multiple threads which take jobs off a queue. I am finding that if this part of the code is hit by 4 threads at once I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj), but this isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split) but I still get the out of memory exceptions (this part of the code executes quite quickly). Is this normal considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available?

    UPDATE: My new code as per Moozhe's help:

        public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements)
        {
            using (var stream = blobService.Stream(key))
            {
                foreach (var element in elements)
                {
                    stream.Position = 0;
                    BitmapImage bi = new BitmapImage();
                    bi.BeginInit();
                    bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height);
                    bi.StreamSource = stream;
                    bi.EndInit();
                    DrawingVisual vis = new DrawingVisual();
                    DrawingContext cont = vis.RenderOpen();
                    cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height)));
                    cont.Close();
                    RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default);
                    rtb.Render(vis);
                    using (MemoryStream ms = new MemoryStream())
                    {
                        PngBitmapEncoder encoder = new PngBitmapEncoder();
                        encoder.Frames.Add(BitmapFrame.Create(rtb));
                        encoder.Save(ms);
                        var regionKey = WebPath.Variant(key, element.Id);
                        blobService.Save("image/png", regionKey, ms);
                    }
                }
            }
        }

    Read the article

  • How to implement reject in parallel approval workflow?

    - by Dmitry Martynov
    I am developing a SharePoint workflow with a Replicator activity that replicates a custom activity for every approver. The custom activity implements an approval branch for a particular user. It has the classic form with CreateTask, While, OnTaskChanged and CompleteTask activities. I set up the UntilCondition on the replicator to cancel execution after one approver chooses to reject the approval, and the workflow then finishes. The problem happens with the other uncompleted tasks, which "hang" in their current state; the user does not see this state when opening the task. I put UpdateAllTasks after the replicator to set the task status to Cancelled, but since there are no event activities between CompleteTask (for the rejected task) and UpdateAllTasks, the UpdateAllTasks activity sets Cancelled for the rejected task as well. The question is: what can I do to flush the pending change made by CompleteTask before UpdateAllTasks runs? Or perhaps there is another way to implement such a workflow. I was thinking about implementing a Cancel handler for the custom activity with UpdateTask, but I do not know how to implement it, or how to tell the cancel handler that it is executing because of a rejection.

    Read the article

  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally these 16-17 jobs should fire at the same time; however the first statement of the job's execute method, which simply logs the time of execution, is being called very late. For example, let us assume we have 1000 jobs scheduled per minute from 05:00 to 05:04. The job scheduled at 05:03:50 should have logged the first statement of its execute method at 05:03:50, yet it is doing so at about 05:06:38. I have tracked down the time taken by the scheduled job itself, which comes to around 15-20 milliseconds. The scheduled job is fast enough, because we just send a message on an ActiveMQ queue. We have set the number of Quartz threads to 100 and even tried increasing it to 200 and more, but no gain. One more thing we noticed is that after the first minute the scheduler's logs come out sequentially, i.e.:

        [Quartz_Worker_28] <Some log statement>
        ..
        ..
        [Quartz_Worker_29] <Some log statement>
        ..
        ..
        [Quartz_Worker_30] <Some log statement>
        ..
        ..

    So it suggests that after some time Quartz is running the threads almost sequentially. Maybe this is happening due to the time taken in notifying job completion to the persistence store (which is a separate Postgres database in this case) and/or context switching. What can be the reason behind this strange behavior?

    EDIT: A more detailed log:

        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000

    I am doubting this section of the above log:

        scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000

    because this job was scheduled for 10:04:53, but it fired at 10:08:33 and Quartz still didn't consider it a misfire. Shouldn't it be a misfire?

    Read the article

  • QProcess, QEventLoop - of any use for parallel-processing

    - by dlib
    I wonder whether I could use QEventLoop (QProcess?) to parallelize multiple calls to the same function with Qt. What exactly is the difference from QtConcurrent or QThread? What, more precisely, are a process and an event loop? I read that QCoreApplication must call exec() as early as possible in the main() method, so I wonder how that differs from the main thread. Could you point me to a good reference on processes and threads with Qt? I went through the official docs and these things remain unclear. Thanks and regards.

    Read the article

  • High CPU usage when running several "java -version" in parallel

    - by Prateesh
    This is just out of curiosity, to understand what is going on. I have a small shell script:

        for ((i = 0; i < 50; i++))
        do
            java -version &
        done

    When I run this, my CPU usage as reported by sar is as below:

        07:51:25 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
        07:51:30 PM     all      6.98      0.00      1.75      1.00      0.00     90.27
        07:51:31 PM     all     43.00      0.00     12.00      0.00      0.00     45.00
        07:51:32 PM     all     86.28      0.00     13.72      0.00      0.00      0.00
        07:51:33 PM     all      5.25      0.00      1.75      0.50      0.00     92.50

    As you can see, on the third sample line the CPU is at 100% (86.28% user + 13.72% system). My Java version is 1.5.0_22-b03.

    Read the article

  • Unit testing with Mocks when SUT is leveraging Task Parallel Libaray

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar, implemented as Bar. Bar will call Start() on each IFoo in a new (System.Threading.Tasks.)Task when Start() is called on the Bar instance. Unit test (Moq):

        [Test]
        public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist()
        {
            //ARRANGE
            //Create a foo, and setup expectation
            var mockFoo0 = new Mock<IFoo>();
            mockFoo0.Setup(foo => foo.Start());
            var mockFoo1 = new Mock<IFoo>();
            mockFoo1.Setup(foo => foo.Start());
            //Add mock objects to a collection
            var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object };
            IBar sutBar = new Bar(foos);

            //ACT
            sutBar.Start(); //Should call mockFoo.Start()

            //ASSERT
            mockFoo0.VerifyAll();
            mockFoo1.VerifyAll();
        }

    Implementation of IBar as Bar:

        class Bar : IBar
        {
            private IEnumerable<IFoo> Foos { get; set; }

            public Bar(IEnumerable<IFoo> foos)
            {
                Foos = foos;
            }

            public void Start()
            {
                foreach (var foo in Foos)
                {
                    Task.Factory.StartNew(() => { foo.Start(); });
                }
            }
        }

    It appears that the issue is due to the fact that the call to foo.Start() is taking place on another thread (/task), and the mock (Moq framework) can't detect it. But I could be wrong. Thanks

    Read the article

  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: what would be preferable, both performance-wise and design-wise: a list of objects of a Python class, or several parallel lists of numerical properties? I am writing a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z coordinates, diameter, mass, velocity vector and so on. How should the system be stored? The two major options I can think of are: (1) make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; (2) make an array of numbers for each property, so for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on. To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
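
    A minimal sketch of what the second option can look like with a NumPy structured array, which keeps per-ball fields addressable by name while still storing each property as a contiguous column; the field names, sizes and time step here are illustrative assumptions, not part of the original question:

        import numpy as np

        # One record per ball; each named field behaves like a parallel array.
        ball_dtype = np.dtype([
            ("pos", np.float64, 3),    # x, y, z coordinates
            ("vel", np.float64, 3),    # velocity vector
            ("diameter", np.float64),
            ("mass", np.float64),
        ])

        n = 10000
        balls = np.zeros(n, dtype=ball_dtype)
        balls["mass"] = 1.0
        balls["pos"] = np.random.uniform(0.0, 1.0, size=(n, 3))

        # Vectorised update of every ball at once, with no per-object Python loop.
        dt = 0.01
        balls["pos"] += balls["vel"] * dt

    This keeps the cache-friendly layout of option 2 with much of the readability of option 1, and the same record layout maps fairly naturally onto an array of structs if the code is later rewritten in C or C++.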

    Read the article

  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by sharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I am sending out high-volume mail (registrations, forgotten passwords, newsletters etc.) from the website (www.mydomain.com) using the IIS SMTP service installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivers these emails to my local server. Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20                                       3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com    3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com       3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s

    Please help! Thanks.

    Read the article

  • load multiple directions in parallel in google map

    - by Maneesh
    My aim is to get the distance between many locations through the GDirections object. Since I am using multiple objects, I am not able to get the distance. Can someone help me out? Below is the code:

        for (ct = 0; ct < size; ct++) {
            var gd = new GDirections();
            gd.load("from: " + address + " to: " + offaddress[ct]);
            GEvent.addListener(directions, "load", function() {
                sor[ct] = gd.getDistance().html;
            });
        }

    Read the article

  • Unable to make 2 parallel TCP requests to the same TCP Client

    - by soldieraman
    Error: Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall. Situation: there is a TCP server, and my web application connects to this TCP server using the code below:

        TcpClientInfo = new TcpClient();
        _result = TcpClientInfo.BeginConnect(<serverAddress>, <portNumber>, null, null);
        bool success = _result.AsyncWaitHandle.WaitOne(20000, true);
        if (!success)
        {
            TcpClientInfo.Close();
            throw new Exception("Connection Timeout: Failed to establish connection.");
        }
        NetworkStreamInfo = TcpClientInfo.GetStream();
        NetworkStreamInfo.ReadTimeout = 20000;

    Two users use the same application from two different locations to access information from this server at the SAME TIME. The server takes around 2 sec to reply. Both connect, but one of the users gets the above error ("Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall") when trying to read data from the stream. How can I resolve this issue? Should I use a better way of connecting to the server, or can this not be fixed on the client because it is a server issue? If it is a server issue, how should the server handle requests to avoid this problem?

    Read the article

  • Existing LINQ extension method similar to Parallel.For?

    - by Joel Martinez
    The LINQ extension methods for IEnumerable are very handy... but not that useful if all you want to do is apply some computation to each item in the enumeration without returning anything. So I was wondering if perhaps I was just missing the right method, or if it truly doesn't exist, as I'd rather use a built-in version if it's available... but I haven't found one :-) I could have sworn there was a .ForEach method somewhere, but I have yet to find it. In the meantime, I did write my own version in case it's useful for anyone else:

        using System.Collections;
        using System.Collections.Generic;

        public delegate void Function<T>(T item);
        public delegate void Function(object item);

        public static class EnumerableExtensions
        {
            public static void For(this IEnumerable enumerable, Function func)
            {
                foreach (object item in enumerable)
                {
                    func(item);
                }
            }

            public static void For<T>(this IEnumerable<T> enumerable, Function<T> func)
            {
                foreach (T item in enumerable)
                {
                    func(item);
                }
            }
        }

    Usage is:

        myEnumerable.For<MyClass>(delegate(MyClass item) { item.Count++; });

    Read the article

  • TeamCity run Nunit tests in Parallel

    - by Bob Sinclar
    So I was thinking that there must be a better way to run NUnit tests for a .NET project via TeamCity. Currently the build of the project takes about 10 minutes, and the testing step takes 30-ish minutes. I was thinking about splitting the NUnit tests into 3 groups and assigning each group to a different agent, then making sure they have a build dependency on the initial build before they start. This is the best way I could think of; is there a different approach I should also consider? On a side note, is it possible to combine all the NUnit results at the end to get one report from the tests being run on 3 different machines? I don't think this is possible unless someone has thought of a clever hack.

    Read the article

  • Parallel Task In C#.net

    - by Test123
    I have a C#.NET application. I wanted to run my application in threads, but because of a third-party DLL it cannot be used multithreaded: there is one object in the third-party DLL which only allows a single instance to be created at a time. When I manually run the application exe multiple times and process my data, it processes successfully (it might be because each exe runs in its own application domain). I need to achieve the same thing from C# code. For that I have created a DLL which can be accessed via Type.GetTypeFromProgID(), but multiple DLL instances create the same problem. Is there any way I could achieve manual parallelism through code, processing the same exe code in multiple application domains?

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need is: n WebBrowsers, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form, without a browser; this one contains the control panel for my application. The key feature is that each browser logs on to a secured web page and needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: is the code below valid, or rather a nasty hack which can cause problems?

        internal class SessionList : List<Session>
        {
            public SessionList(Server main)
            {
                MyRecords.ForEach(record =>
                {
                    var st = new System.Threading.Thread((data) =>
                    {
                        var s = new Session(main, data as MyRecord);
                        this.Add(s);
                        Application.Run(s);
                        Application.ExitThread();
                    });
                    st.SetApartmentState(System.Threading.ApartmentState.STA);
                    st.Start(record);
                });
            }

            // some other uninteresting methods here...
        }

    What's going on here? Session inherits from Form, so it creates a form, puts a WebBrowser into it, and has methods to operate on websites. WebBrowser requires being run in an STA thread, so we provide one for each browser. The most interesting part is Application.Run(s): it makes the newly created forms alive and interactive. The following Application.ExitThread() is called after the browser window is closed and its controls disposed. The main application stays alive to perform the rest of the cleanup job. When the user selects the "Exit" or "Shutdown" option, the browser threads are ended first, so Application.ExitThread() is called. It all works, but everywhere I read about the "main GUI thread" - and here I've created many GUI threads. I handle communication between the main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is it OK to use Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form's thread; it beats me why. It works, however, if I start the WebBrowser (by changing its Url property) from any other thread. I'd like to know more about what is really happening in such an application. But most of all, I'd like to know if my idea of "applications in an application" is OK. I'm not sure what exactly Application.Run() does; without it, forms created in new threads were dead and unresponsive. How is it possible that I can call Application.Run() many times? It seems to do exactly what it should, but it feels like a slightly undocumented feature to me. I'm almost sure that the crashes are caused by the WebBrowser component itself (since it's not completely "managed" but rather "native"). But maybe it's something else.

    Read the article

  • timing control for parallel processes

    - by omrihsan
    How can I control two processes so that they run alternately in separate terminal windows? For example, I run the code for each in separate terminal windows at 11:59 and both of them wait for the time to be 12:00. At that moment process one starts execution and process two waits for, say, 10 seconds. Then they switch: process two executes and process one waits. In this way they take turns until the work is complete.
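
    A minimal sketch of one time-based way to do this in Python: run the same script in each terminal with a different slot number (0 or 1) and let both derive whose turn it is from the shared wall clock. The 12:00 start time, the 10-second slot length and the do_work() body are placeholder assumptions:

        import sys
        import time
        from datetime import datetime

        def do_work():
            # Placeholder for one chunk of this process's real job.
            print("working...", datetime.now().strftime("%H:%M:%S"))
            time.sleep(1)

        my_slot = int(sys.argv[1])        # pass 0 in one terminal, 1 in the other
        slot_seconds = 10
        start_at = datetime.now().replace(hour=12, minute=0, second=0, microsecond=0)

        # Both processes idle until the agreed start time (12:00).
        while datetime.now() < start_at:
            time.sleep(0.2)

        while True:
            elapsed = (datetime.now() - start_at).total_seconds()
            if int(elapsed // slot_seconds) % 2 == my_slot:
                do_work()                 # this process's 10-second turn
            else:
                time.sleep(0.2)           # the other process has the turn

    Because both processes compute the turn from the clock, no inter-process communication is needed; a lock file or a small socket handshake would be the next step if a stricter hand-off is required.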

    Read the article

  • python parallel computing: split keyspace to give each node a range to work on

    - by MatToufoutu
    My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible. I'm trying to code a cluster in Python which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and perform various operations on them. I'm looking for how to calculate, given the charset and the number of nodes, what range each node should work on (i.e. node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that computes this, and if so, how could I implement it in Python? I really don't know how to do that, so any help would be much appreciated.
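
    A minimal sketch of the counting approach, assuming fixed-length words: treat each word as a number in base len(charset), split the total count evenly across the nodes, and convert the boundary indices back into words. The function names index_to_word and node_ranges are just illustrative:

        def index_to_word(i, charset, length):
            """Return the i-th word of the keyspace (base-N counting, leftmost char most significant)."""
            base = len(charset)
            chars = []
            for _ in range(length):
                i, r = divmod(i, base)
                chars.append(charset[r])
            return "".join(reversed(chars))

        def node_ranges(charset, length, nodes):
            """Split the keyspace into `nodes` roughly equal (start_word, end_word) ranges."""
            total = len(charset) ** length
            per_node = total // nodes
            ranges = []
            for n in range(nodes):
                start = n * per_node
                end = total - 1 if n == nodes - 1 else (n + 1) * per_node - 1
                ranges.append((index_to_word(start, charset, length),
                               index_to_word(end, charset, length)))
            return ranges

        # Example: 3 nodes over 4-letter lowercase words
        print(node_ranges("abcdefghijklmnopqrstuvwxyz", 4, 3))
        # -> [('aaaa', 'iriq'), ('irir', 'rirh'), ('riri', 'zzzz')]

    Each node can then iterate from its start index to its end index and call index_to_word per index, so no node ever has to enumerate anyone else's range.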

    Read the article

  • Understanding configuration for parallel calling in web app (IIS + MS SQL)

    - by mmcteam.com.ua
    We have an ASP.NET MVC application + IIS 7.5 + SQL Server 2008 R2. We have to load a lot of aggregate counters on each page. We decided to use Ajax, calling from JavaScript for each counter (or group of counters) and returning the values as JSON results. This solves the problem of the user waiting for the page to load: the page loads fast, and the user waits for the counters while already seeing the rest of the page content. We assumed that calls made from JavaScript would be handled asynchronously, but we noticed they are not: all our JavaScript calls fire immediately, yet the actions they invoke are processed in a queue. If we use the Async Controller ability, all counters are calculated simultaneously, but then the user has to wait for the slowest counter calculation before the page loads. The question: we want to understand what happens when we use Ajax to call two or more actions simultaneously, and how we can configure this. (Also, in each action we make some queries to SQL Server.)

    Read the article

  • definition of wait-free (referring to parallel programming)

    - by tecuhtli
    In Maurice Herlihy's paper "Wait-free synchronization" he defines wait-free as follows: "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes." (www.cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf) Let's take one operation op from the universe. (1) Does the definition mean: "Every process completes a certain operation op in the same finite number n of steps."? (2) Or does it mean: "Every process completes a certain operation op in some finite number of steps, so that one process can complete op in k steps and another process in j steps, where k != j."? Just by reading the definition I would understand meaning (2). However this makes no sense to me, since a process executing op in k steps one time and in k + m steps another time still meets the definition, yet those m steps could be a waiting loop. If meaning (2) is right, can anybody explain to me why this describes wait-free? In contrast, meaning (1) would guarantee that op is always executed in the same number of steps k, so there can't be any additional m steps that are spent e.g. in a waiting loop. Which meaning is right, and why? Thanks a lot, sebastian

    Read the article

  • Automatic translation from fortran 90 to f77

    - by osgx
    Hello. Is there a converter from Fortran 90 down to Fortran 77? I have a Fortran-77-only compiler and want to run the NAS Parallel Benchmarks (NPB for short) on it. But NPB uses some features of F90, like do/enddo and a few other things. All the features used are rather simple. Is there a way to translate NPB to strict F77? Tags: fortran parallel convert programming-languages

    Read the article

  • How to SET TIMING ON for parallel upgrades to 12c?

    - by Mike Dietrich
    Have you asked yourself how to get timings for all statements in an Oracle Database 12c upgrade? When you run the parallel upgrade via catctl.pl, the Perl script that drives the parallel upgrade in Oracle Database 12c, you may also want timings written to your logfile during execution. As catctl.pl does not offer an option for this yet, the best way to achieve it is to edit the catupses.sql script in $ORACLE_HOME/rdbms/admin, as this script gets called over and over again throughout all steps of the upgrade run. Just add the SET TIMING ON block shown below to catupses.sql and start your upgrade:

        Rem =============================================
        Rem Call Common session settings
        Rem =============================================
        @@catpses.sql

        Rem =============================================
        Rem Set Timing On during the Upgrade
        Rem =============================================
        SET TIMING ON;

        Rem =============================================
        Rem Turn off PL/SQL event used by APPS
        Rem =============================================
        ALTER SESSION SET EVENTS='10933 trace name context off';

    -Mike
    PS: This may become the default in a future patch set.

    Read the article

  • How can I use a CanoScan N640Pex scanner over a USB/parallel cable?

    - by detly
    I have an old CanoScan N640Pex flatbed scanner and a USB-to-parallel-port cable through which I can connect it to my PC. Unfortunately neither Simple Scan nor XSane detects the scanner. I'm running Ubuntu 12.04 (plus updates and backports) with kernel 3.2.0-31-generic. dmesg tells me this when I plug the cable in:

        [256411.641910] usb 7-1: new full-speed USB device number 10 using uhci_hcd
        [256411.872392] usblp0: USB Bidirectional printer dev 10 if 0 alt 1 proto 2 vid 0x067B pid 0x2305
        [256411.872417] usbcore: registered new interface driver usblp

    lsusb shows this device for my cable:

        Bus 007 Device 010: ID 067b:2305 Prolific Technology, Inc. PL2305 Parallel Port

    The device node created is /dev/usb/lp0:

        crw-rw---- 1 root lp 180, 0 Sep 9 17:46 /dev/usb/lp0

    There is no extra information from any of these commands when I attach the scanner to the cable and power it on, though. I suspect I might need to change something in /etc/sane.d/canon_pp.conf, but I have no idea what to put on the ieee1284 line, since there is pretty much zero documentation for that parameter. So how can I get it to work?

    Read the article
