Search Results

Search found 2149 results on 86 pages for 'peter di cecco'.

  • DOMDocument grouping nodes with clone, nodeClone, importNode, fragment... What is the better way?

    - by Peter Krauss
    A "DOMNodeList grouper" (groupList() function below) is a function that envelopes a set of nodes into a tag. Example: INPUT <root><b>10</b><a/><a>1</a><b>20</b><a>2</a></root> OUTPUT of groupList($dom->getElementsByTagName('a'),'G') <root><b>10</b> <G><a/><a>1</a><a>2</a></G> <b>20</b></root> There are many ways to implement it, what is the better? function groupList_v1(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $T = $dom->createDocumentFragment(); $T->appendChild($dom->createElement($tag)); for($i=0; $i<$n; $i++) { $T->firstChild->appendChild( clone $list[$i] ); if ($i) $list[$i]->parentNode->removeChild($list[$i]); } $dom->documentElement->replaceChild($T,$list[0]); }//if return $n; }//func function groupList_v2(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $T = $dom->createDocumentFragment(); $T->appendChild($dom->createElement($tag)); for($i=0; $i<$n; $i++) $T->firstChild->appendChild( clone $list[$i] ); $dom->documentElement->replaceChild($T,$list[0]); for($i=1; $i<$n; $i++) $list[$i]->parentNode->removeChild($list[$i]); }//if return $n; }//func // ... YOUR SUGGESTION ... // My ugliest function groupList_vN(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $d2 = new DOMDocument; $T = $d2->createElement($tag); for($i=0; $i<$n; $i++) $T->appendChild( $d2->importNode($list[$i], true) ); $dom->documentElement->replaceChild( $dom->importNode($T, true), $list[0] ); for($i=1; $i<$n; $i++) $list[$i]->parentNode->removeChild($list[$i]); }//if return $n; }//func Related questions: at stackoverflow, at codereview.

  • The Excel Column Name assignment problem

    - by Peter Larsson
    Here is a generic algorithm to get the Excel column name according to its position. By changing the @Base parameter, you can do this for any sequence in the same style as Excel.

        DECLARE @Value INT = 8839,
                @Base  TINYINT = 26;

        WITH cteSequence(Value, Delta, Quote, Base, Chr) AS
        (
            SELECT  CAST(@Value AS INT) AS Value,
                    CAST(1 AS INT) AS Delta,
                    CAST(@Base AS INT) AS Quote,
                    CAST(@Base AS INT) AS Base,
                    CHAR(65 + (@Value - 1) % @Base) AS Chr

            UNION ALL

            SELECT  Value AS Value,
                    Quote AS Delta,
                    Base * Quote AS Quote,
                    Base AS Base,
                    CHAR(65 + ((Value - Delta) / Quote - 1) % Base) AS Chr
            FROM    cteSequence
            WHERE   CHAR(65 + ((Value - Delta) / Quote - 1) % Base) <> '@'
        )
        SELECT  CAST(Msg AS VARCHAR(MAX))
        FROM    (
                    SELECT      '' + Chr
                    FROM        cteSequence
                    ORDER BY    Delta DESC
                    FOR XML     PATH('')
                ) AS x(Msg)
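
    The T-SQL above is a recursive CTE formulation of a bijective base-26 conversion (1 -> A, 26 -> Z, 27 -> AA, ...). A minimal Python sketch of the same arithmetic, useful for sanity-checking the query's output, could look like this:

        def column_name(value, base=26):
            # Bijective base-26: there is no zero digit, hence the (value - 1).
            name = ""
            while value > 0:
                value, remainder = divmod(value - 1, base)
                name = chr(65 + remainder) + name
            return name

        print(column_name(8839))  # MAY, the column name for position 8839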

  • Does the type of prior employers matter when applying for a new job?

    - by Peter Smith
    Is there a bias in industry regarding the kind of previous employers an applicant has had (government contractors, researchers, small businesses, large corporations)? I'm currently working for a university as a generalist programmer and I like my job here. But I'm worried that if I had to switch jobs down the road and apply for a corporate job, my resume would be dismissed because I have been working in academia.

  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my new favorite things with SQL Server 2008. I am currently designing a new data warehouse. There are two restrictions on doing this: it has to be fed from the old legacy system with both historical data and new data, and it has to be fed from the new business system with new data. When we incorporate the new business system, we are going to do that for one market only. That means the old legacy business system will still produce new data for the other markets (together with historical data for all markets), and the new business system will produce new data for that one market only. Sound interesting so far? To accomplish this I did thorough research into the business requirements and the business intelligence needs. Then I went on to design the sucker. How does this relate to filtered indexes, you ask? I'll give one example: the stock transaction table. The key columns for the old legacy system are different from the key columns for the new business system. The old legacy system has a key of 5 columns: movement date, movement time, product code, order number, and sequence number within shipment. On top of that, I found out that the Movement Time column is not really a time. It starts out like a time (HH:MM:SS), but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for the shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accuracy down to 100 nanoseconds). So how to deal with this? One option would be to have two stock transaction tables, one for the legacy system and one for the new business system. But that would lead to maintenance overhead and using partitioned views for getting data out of the warehouse. Filtered indexes will be of great use here. The columns are:

        MovementDate   DATETIME2(7)
        MovementTime   CHAR(8)     NULL
        ProductCode    VARCHAR(15) NOT NULL
        OrderNumber    VARCHAR(30) NULL
        SequenceNumber INT         NULL

    The sequence number is not even used in the new system, so I added a new IDENTITY column, with the clustered index on it, which can be shared by both systems. Then I created one unique filtered index for the old system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy (MovementDate, MovementTime, ProductCode, SequenceNumber)
        INCLUDE (OrderNumber, Col5, Col6, ... )
        WHERE SequenceNumber IS NOT NULL

    And then I created a unique filtered index for the new business system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Business (MovementDate)
        INCLUDE (ProductCode, OrderNumber, Col12, ... )
        WHERE SequenceNumber IS NULL

    This way I can have multiple sets of key columns on the same base table, which is shared by both systems.

  • Bad at math, feeling limited

    - by Peter Stain
    Currently I'm a Java developer, making websites. I'm really bad at math; in high school I even got suspended because of it once. I didn't program then and had no interest in math. I started programming after high school and started feeling that my poor math skills are limiting me. I feel like the programming is not that hard for me, though web development in general is not that hard, I guess. I've been doing Spring and Hibernate a lot. What I'm trying to ask is: if I understand and can manage these technologies and programming overall, does it mean I have some higher-than-average aptitude for math and detail? Would there be any point, or would it be easy for me, to take some courses in high school math and maybe get a BSc in math? This web development is really starting to feel like it's not my cup of tea anymore; I would like to do something more interesting. I'm 25 now and feel stuck. Any help appreciated.

  • Record 8 separate Line IN Channels from M-Audio Delta 1010 Card

    - by Peter Hoffmann
    I want to record the 8 separate line-in channels from my M-Audio Delta 1010 card. The card is recognized nicely and I can record a single channel via

        arecord -d 10 -f cd -t wav -D channel1 out2.wav

    I've set up the different channels in ~/.asoundrc. Now if I want to record a second channel in parallel (arecord -d 10 -f cd -t wav -D channel2 out2.wav) I get the error

        arecord: main:564: audio open error: Device or resource busy

    As I understand it, the Delta 1010 is a single-access card, so only one application can access it at a time. Is this correct? The next step was to configure a dual-channel input in .asoundrc:

        # envy24 channel 1+2 only
        pcm.test {
            type plug
            ttable.0.0 1
            ttable.0.1 1
            slave.pcm ice1712
        }

    This works OK when I do arecord -d 10 -f cd -t wav -D test -c 2 out.wav. (BTW, can anyone point me to a tool to split a multi-channel wav into a file per channel?) But when I want to record the channels separately with the -I option,

        arecord -d 10 -f cd -t wav -D test -c 2 -I channel1.wav channel2.wav

    I get no recordings. Did I miss something in the configuration, or what are my options to record all 8 channels via arecord? I have no experience with jackd. Is it an option to install jackd and record the line-ins via jackd?
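
    On the side question about splitting a multi-channel wav (an aside, not from the original post): sox's remix effect can extract single channels, and a minimal Python sketch, assuming the third-party soundfile package and a capture named capture.wav, could look like this:

        # Sketch: write one mono wav per channel of a multi-channel capture.
        import soundfile as sf

        data, samplerate = sf.read("capture.wav", always_2d=True)
        for channel in range(data.shape[1]):
            sf.write(f"channel{channel + 1}.wav", data[:, channel], samplerate)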

  • Gnome, Desktop, GUI, Menu Panel: Upgrading from 10.04 to 11.04

    - by Avukonke Peter
    After upgrading from Ubuntu 10.04 to Ubuntu 11.04, my GNOME desktop is completely messed up. Because I was hesitant to remove all the packages that were on my desktop, I chose to keep all the dependent files during my upgrade to Ubuntu 11.04. After the upgrade my GUI is simply not working. I think it's because of the conflicting files that I chose to keep while upgrading. I can launch nautilus manually, but I still don't have access to any of the menus available in Ubuntu. Is there a way I can upgrade from 11.04 to 11.10 and restore my GUI? I tried to upgrade using aptitude, but it doesn't detect the latest Ubuntu release. Is there a way I can specify where to find the latest release, as well as get my GUI back?

  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well; I had good energy and the audience was interacting with me. When I had some time left over from networking and partying, I attended a few other presentations, at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a co-worker from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff, but for me, the best thing is one he didn't cover in his presentation. In versions of SSIS earlier than 2012, it is possible to use a stored procedure as a data source, as long as it doesn't have a temp table in it. If it does, you will get an error message from SSIS that "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature, it's a SQL Server 2012 feature: the EXECUTE ... WITH RESULT SETS feature! With this, you can have a stored procedure with a temp table deliver the result set to SSIS, if you execute the stored procedure from SSIS and add the WITH RESULT SETS option. If you do this, SSIS is able to take the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. Say you have a stored procedure that takes hours to run; in earlier versions, referencing that stored procedure in SSIS forced SSIS to call it just to retrieve the metadata. Now, with RESULT SETS, SSIS 2012 can continue in milliseconds! This is because you provide the metadata in the RESULT SETS clause, and if the data from the stored procedure doesn't match the RESULT SETS definition, you will get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
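
    As an illustration only (the procedure name, columns, and connection string below are hypothetical, not from the post), this is the general shape of such a call when issued from Python via pyodbc; the EXEC text is the part you would hand to SSIS as its SQL command:

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        # The WITH RESULT SETS clause declares the metadata up front,
        # even if the procedure builds its result in a temp table.
        cursor.execute("""
            EXEC dbo.GetSales
            WITH RESULT SETS
            ((
                SalesDate DATE,
                Amount    DECIMAL(18, 2)
            ));
        """)
        for row in cursor.fetchall():
            print(row.SalesDate, row.Amount)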

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable. It's an old, organically grown program where the only rationale for creating a new DLL was that one didn't previously exist to fill a given need (and namespaces didn't exist in Delphi, so it never crossed our minds to make dll1.main.pas, dll2.main.pas or something even more unique). What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. So, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory, but say I've got a project with about 4000 files and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about two-thirds form files, if not more, but beyond that there aren't a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help in answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 megs, normal working conditions are usually less than 40 megs, and he thinks it could climb as high as 120 megs.

  • Should Maven generate JAXB Java code or just use the Java code from source control?

    - by Peter Turner
    We're trying to plan how to mash together a build server for our shiny new Java backend. We use a lot of JAXB XSD code generation, and I was getting into a heated argument with whoever cared that the build server should delete the JAXB-created structures that were checked in, generate the code from the XSDs, and use the code generated from those XSDs. Everyone else thought that it made more sense to just use the code they checked in (we check in the code generated from the XSD because Eclipse pretty much forces you to do this, as far as I can tell). My only stale argument, from my reading of the Joel Test, is that "making the build in one step" means generating from the source code, and the source code is not the Java source but the XSDs, because if you're messing around with the generated code you're going to get pinched eventually. So, given that we all agree (you may not agree) we should probably be checking in our generated Java files, should we use them to build, or should we generate the code from the XSDs?

  • How to determine if you should use a full or a differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since the last full backup?" Here is a simple script that will tell you how much (in percent) of the database has changed since the last full backup.

        -- Prepare staging table for all DBCC outputs
        DECLARE @Sample TABLE
                (
                    Col1 VARCHAR(MAX) NOT NULL,
                    Col2 VARCHAR(MAX) NOT NULL,
                    Col3 VARCHAR(MAX) NOT NULL,
                    Col4 VARCHAR(MAX) NOT NULL,
                    Col5 VARCHAR(MAX)
                )

        -- Some intermediate variables for controlling the loop
        DECLARE @FileNum BIGINT = 1,
                @PageNum BIGINT = 6,
                @SQL VARCHAR(100),
                @Error INT,
                @DatabaseName SYSNAME = 'Yoda'

        -- Loop all files to the very end
        WHILE 1 = 1
            BEGIN
                BEGIN TRY
                    -- Build the SQL string to execute
                    SET @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '
                               + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'

                    -- Insert the DBCC output in the staging table
                    INSERT  @Sample
                            (
                                Col1,
                                Col2,
                                Col3,
                                Col4
                            )
                    EXEC    (@SQL)

                    -- DCM pages exist at an interval
                    SET @PageNum += 511232
                END TRY

                BEGIN CATCH
                    -- If error and the first DCM page does not exist, all files are read
                    IF @PageNum = 6
                        BREAK
                    ELSE
                        -- If no more DCM pages, increase filenum and start over
                        SELECT  @FileNum += 1,
                                @PageNum = 6
                END CATCH
            END

        -- Delete all records not related to diff information
        DELETE
        FROM    @Sample
        WHERE   Col1 NOT LIKE 'DIFF%'

        -- Split the range
        UPDATE  @Sample
        SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),
                Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)

        -- Remove the last parenthesis
        UPDATE  @Sample
        SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),
                Col5 = RTRIM(REPLACE(Col5, ')', ''))

        -- Remove the initial information about filenum
        UPDATE  @Sample
        SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),
                Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)

        -- Prepare data outtake
        ;WITH cteSource(Changed, [PageCount]) AS
        (
            SELECT      Changed,
                        SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]
            FROM        (
                            SELECT  CAST(Col3 AS INT) AS FromPage,
                                    CAST(NULLIF(Col5, '') AS INT) AS ToPage,
                                    LTRIM(Col4) AS Changed
                            FROM    @Sample
                        ) AS d
            GROUP BY    Changed
            WITH ROLLUP
        )
        -- Present the final result
        SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,
                [PageCount],
                100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage
        FROM    cteSource

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them as parameters to functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call the combination of a file name and a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually means I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).
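
    As a small illustration of the split/combine operations the question describes (the variable names here are just one possible convention, not an authoritative answer), Python's pathlib treats the combined value simply as a path:

        from pathlib import PureWindowsPath

        directory = PureWindowsPath(r"c:\somedir\somedir\somedir")
        file_name = "somefile.txt"

        full_path = directory / file_name   # the combined "directory + name" value
        print(full_path)                    # c:\somedir\somedir\somedir\somefile.txt
        print(full_path.parent)             # c:\somedir\somedir\somedir
        print(full_path.name)               # somefile.txt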

  • How does the GPL work with regard to languages like Dart that compile to other languages?

    - by Peter-W
    Google's Dart language is not supported by any web browsers other than a special build of Chromium known as Dartium. To use Dart for production code you need to run it through a Dart-to-JavaScript compiler/translator and then use the outputted JavaScript in your web application. Because JavaScript is an interpreted language, everyone who receives the "binary" (i.e., the .js file) has also received the source code. Now, the GNU General Public License v3.0 states that: "The “source code” for a work means the preferred form of the work for making modifications to it." This would imply that the original Dart code, in addition to the JavaScript code, must also be provided to the end user. Does this mean that any web application written in Dart must also provide the original Dart code to all visitors of the website, even though a copy of the source code has already been provided in a human readable/writable/modifiable form?

  • Maximum file size for iFrame in IE7

    - by Peter Turner
    I've got a "super secure" javascript downloader* that I wrote, and it usually works alright. But I noticed, while trying to download a 90 meg file with it on a client's machine that on IE7, it's getting hung up about 1/3rd of the way through. I've never tried to send a file that large through the iFrame and it works fine in other browsers. Is there a size restriction on files that IE7 can read in an iFrame? * It's really just a PHP line that sets header("location: http://someplace/downloadbigthing.exe"); after it does some logging and verification.

  • Installing LBP 2900 on Ubuntu -> lib folders wrong?

    - by Peter Smit
    I am trying to get my Canon LBP2900 printer to work on Ubuntu 11.10 64-bit. What I have done is try to follow the steps on https://help.ubuntu.com/community/CanonCaptDrv190 So I downloaded the version 2.3 driver, converted the rpm files to debian packages and installed them:

        sudo alien cndrvcups-capt-2.30-1.x86_64.rpm cndrvcups-common-2.30-1.x86_64.rpm
        sudo dpkg -i cndrvcups-capt-2.30-1.x86_64.deb cndrvcups-common-2.30-1.x86_64.deb

    Then I restarted cups and tried to install the printer with lpadmin:

        sudo service cups restart
        sudo /usr/sbin/lpadmin -p LBP2900 -m /usr/share/cups/model/CNCUPSLBP2900CAPTK.ppd -v ccp://localhost:59787 -E

    What I noticed, however, is that the lpadmin step fails with the error:

        lpadmin: Bad device-uri scheme "ccp"

    After trying to trace what has gone wrong, I think I nailed it down to the fact that dpkg installed a file /usr/lib64/cups/backend/ccp instead of /usr/lib/cups/backend/ccp. Checking the original rpm with the archive manager shows indeed that /usr/lib and /usr/lib64 are used, with the backend/ccp file only installed in lib64. If I understand correctly, Ubuntu 11.10 uses /usr/lib32 and /usr/lib instead, so the files are installed in the wrong place. Is there an automated method of converting the rpm/deb files with the wrong lib structure to one with the right lib structure for Ubuntu 11.10? Or am I completely on the wrong track for getting my printer installed?

  • What's the best language combo for code generation?

    - by Peter Turner
    I read through Code Generation in Action but never bothered to make anything of it because Ruby just doesn't fit with my lifestyle at this juncture. The book came out more on the cusp of the C# revolution, and it said that C# "was a language designed to be generated", apparently using Ruby as the generator language. In your experience, what is the ideal combination of languages to generate the most useful code?

  • Glassfish 3.1 has been released

    - by peter.nagy
    Since the blogosphere and most of the Java and open source community sites are full of this, I won't go into the details. But for anyone who needs more information, I definitely recommend Arun Gupta's blog as a starting point. On this topic his blog is, in my opinion, a better starting page than a Google search. The most important thing: clustering has arrived. Below are the components of the downloadable packages.

  • Applying WCAG 2.0 to Non-Web ICT: second draft published from WCAG2ICT Task Force - for public review

    - by Peter Korn
    Last Thursday the W3C published an updated Working Draft of Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies. As I noted last July when the first draft was published, the motivation for this guidance comes from the Section 508 refresh draft, and also the European Mandate 376 draft, both of which seek to apply the WCAG 2.0 level A and AA Success Criteria to non-web ICT documents and software. This second Working Draft represents a major step forward in harmonization with the December 5th, 2012 Mandate 376 draft documents, including specifically Draft EN 301549 "European accessibility requirements for public procurement of ICT products and services". This work greatly increases the likelihood of harmonization between the European and American technical standards for accessibility, for web sites and web applications, non-web documents, and non-web software. As I noted last October at the European Policy Centre event: "The Accessibility Act – Ensuring access to goods and services across the EU", and again last month at the follow-up EPC event: "Accessibility - From European challenge to global opportunity", "There isn't a 'German Macular Degeneration', a 'French Cerebral Palsy', an 'American Autism Spectrum Disorder'. Disabilities are part of the human condition. They’re not unique to any one country or geography – just like ICT. Even the built environment – phones, trains and cars – is the same worldwide. The definition of ‘accessible’ should be global – and the solutions should be too. Harmonization should be global, and not just EU-wide. It doesn’t make sense for the EU to have a different definition to the US or Japan." With these latest drafts from the W3C and Mandate 376 team, we've moved a major step forward toward that goal of a global "definition of 'accessible' ICT." I strongly encourage all interested parties to read the Call for Review, and to submit comments during the current review period, which runs through 15 February 2013. Comments should be sent to public-wcag2ict-comments-AT-w3.org. I want to thank my colleagues on the WCAG2ICT Task Force for the incredible time and energy and expertise they brought to this work - including particularly my co-authors Judy Brewer, Loïc Martínez Normand, Mike Pluke, Andi Snow-Weaver, and Gregg Vanderheiden; and the document editors Michael Cooper and Andi Snow-Weaver.

  • Simple script to get referenced table and their column names

    - by Peter Larsson
    -- Setup user supplied parameters
        DECLARE @WantedTable SYSNAME

        SET     @WantedTable = 'Sales.factSalesDetail'

        -- Wanted table is "parent table"
        SELECT      PARSENAME(@WantedTable, 2) AS ParentSchemaName,
                    PARSENAME(@WantedTable, 1) AS ParentTableName,
                    cp.Name AS ParentColumnName,
                    OBJECT_SCHEMA_NAME(parent_object_id) AS ChildSchemaName,
                    OBJECT_NAME(parent_object_id) AS ChildTableName,
                    cc.Name AS ChildColumnName
        FROM        sys.foreign_key_columns AS fkc
        INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.parent_column_id
                        AND cc.object_id = fkc.parent_object_id
        INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.referenced_column_id
                        AND cp.object_id = fkc.referenced_object_id
        WHERE       referenced_object_id = OBJECT_ID(@WantedTable)

        -- Wanted table is "child table"
        SELECT      OBJECT_SCHEMA_NAME(referenced_object_id) AS ParentSchemaName,
                    OBJECT_NAME(referenced_object_id) AS ParentTableName,
                    cc.Name AS ParentColumnName,
                    PARSENAME(@WantedTable, 2) AS ChildSchemaName,
                    PARSENAME(@WantedTable, 1) AS ChildTableName,
                    cp.Name AS ChildColumnName
        FROM        sys.foreign_key_columns AS fkc
        INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.parent_column_id
                        AND cp.object_id = fkc.parent_object_id
        INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.referenced_column_id
                        AND cc.object_id = fkc.referenced_object_id
        WHERE       parent_object_id = OBJECT_ID(@WantedTable)

  • Bug fix for Eclipse runtime plugin

    - by Peter Benedikovic
    This blog post is intended to inform you about a bug fix that solves this issue. Before continuing further, one important note – Linux and Mac users do not need to read further, because this bug appears only on Windows. The problem was that the runtime plugin registered a new runtime and server each time Eclipse started. Users ended up with a Servers view looking like this: I have created a new runtime plugin which is now available at the update site http://download.java.net/glassfish/eclipse/indigo (or the same ending with juno for Juno users). You will still need to uninstall the buggy plugin and (optionally, but recommended) remove the runtimes created by it. Here is the guide for installing the bug fix: Uninstall the buggy runtime plugin via the menu Help->About Eclipse->Installation details. Remove the runtimes created by the old plugin via Window->Preferences->Server->Runtime Environment. After pressing the remove button you may be asked if you also want to remove the servers based on the runtime being removed. It is recommended to do so. Now you can install the new runtime plugin: go to Help->Install New Software. You may ask why I haven't provided an update for the buggy runtime that could be installed via Eclipse's Check for Updates feature. There are two main reasons: the bug fix is needed only for Windows users, so I didn't want to bother other users by updating a working plugin, and the runtime plugin had a structure that was not quite suitable for an Eclipse update. This structure is now changed, so future bugs (I am sure that there will be no such ;)) can be fixed by a standard update. Have a good one!

  • What should developers know about Windows executable binary file compression?

    - by Peter Turner
    I'd never heard of this before, so shame on me, but programs like UPX can compress my files by 80%, which is totally sweet, but I have no idea what the disadvantages of doing this are, or even what the compressor does. The website linked above doesn't say anything about dynamically linked DLLs, but it does mention compressing DESCENT 2 and compressing Netscape 4.06. Also, it doesn't say what the tradeoffs are, only the benefits. If there weren't tradeoffs, why wouldn't my linker compress the file? If I have an environment where I have one executable and 20-30 DLLs, some of which are dynamically loaded and unloaded fairly arbitrarily, but not in loops (hopefully), do I take a big hit in processing time decompressing these DLLs when they're used?

  • Literature in programming and computer science

    - by Peter Turner
    I hope, gentle programmers, that you'll forgive me for not asking a "Soft Question" on theoreticalCS.SE and asking this here. It has recently come to my attention that the term big-endian came from Jonathan Swift's Gulliver's Travels. I was pretty surprised, when listening to the book on my commute, to hear something I'd only heard before in comp sci / engineering classes. I thought it was some sort of nouveau politically incorrect piece of holdover jargon, like master and slave drives or Polish notation. Are there any other instances, not of politically incorrect jargon, but of literature influencing aspects of computers, programming or software development?

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out half an hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)

  • What feature is at play when Ctrl+Shift+Alt+U,E "types" an unprintable hex 000E?

    - by Peter.O
    I tend to use Ctrl+Shift+Alt for my customized system-wide keybindings. When I tried Ctrl+Shift+Alt+U, it printed an underscored u and waited for more keyboard input!... Some keys were accepted and some were not... e.g. numbers were accepted, and they too were underlined, but only a few keys allowed me to break out. I then tried Ctrl+Shift+Alt+U immediately followed by Ctrl+Shift+Alt+E. This produced an unprintable hex 000E(?) and broke out of the loop... The unprintable character got me thinking that this may be Unicode related. If so, how so? What is happening here? Is this underscored u a trigger for an Input Method Editor? This behaviour occurs: here (as I type), in "gedit", in text-edit fields... (but not in the Terminal)... and "gvim" reported "pattern not found"...
