Search Results

Search found 2011 results on 81 pages for 'raw'.

Page 35/81

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested the new adaptor they discovered that the format of the data extracted from SAP was not identical to that of the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, given the current workload, was not even an option. I opted for a rather crude alternative: pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.

    Note the WCF-SAP data formats (the ‘ReceiveIdocFormat’ field on the binding tab of the configuration dialog box):
    Typed: Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
    RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
    String: Returns the iDoc in a format that is closest to the original flat file format but is still wrapped in some top-level XML tags.
    The files also contained some strange characters at the end of each line.

    I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code.

    I now tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP an exception was thrown: The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

    Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. While researching this problem I found an article showing how to get the data from SAP using the WCF-SAP adaptor without any XML tags: http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

    Reproduction of Mustansir’s blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place.
    If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

    <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

    (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:
    (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
    (b) Choose "String" for the Node Encoding.
    What we've done is used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.

    This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: when I followed Mustansir’s blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions:

    There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

    Although the new flat file looked the same as the old one, there was a difference. In the original file all lines in the document were exactly 1064 characters long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines.
    Execute method of the custom pipeline component:

    public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
    {
        // Convert the stream to a string
        Stream s = null;
        IBaseMessagePart bodyPart = inmsg.BodyPart;

        // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the HTTP adapter API and as a
        // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
        if (bodyPart != null)
            s = bodyPart.GetOriginalDataStream();

        string newMsg = string.Empty;
        string strLine;
        try
        {
            StreamReader sr = new StreamReader(s);
            strLine = sr.ReadLine();
            while (strLine != null)
            {
                // Pad each line to 1064 characters
                strLine = strLine.PadRight(1064, ' ') + "\r\n";
                newMsg += strLine;
                strLine = sr.ReadLine();
            }
            sr.Close();
        }
        catch (IOException)
        {
            throw new Exception("Error occurred trying to pad the message to 1064 characters");
        }

        // Convert back to a stream and set it on the Data property
        inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
        // Reset the position of the stream to zero
        inmsg.BodyPart.Data.Position = 0;
        return inmsg;
    }
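    As an aside, the namespace-aware xPath lookup mentioned earlier (reading RECPRN from the EDI_DC40 segment of the typed iDoc) can be done along these lines. This is only a minimal sketch: the variable names and the namespace URI below are placeholders rather than the actual values generated for our schemas, so substitute the namespace your typed iDoc schema really uses.

    // Hypothetical sketch - names and the namespace URI are placeholders
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(typedIdocXml); // the typed iDoc message body as a string

    // Register the iDoc namespace so the XPath query can resolve the prefixed element names
    XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
    nsmgr.AddNamespace("ns0", "http://Microsoft.LobServices.Sap/2007/03/Idoc/Placeholder");

    // Pull the receiver partner number out of the control record
    XmlNode recprn = doc.SelectSingleNode("//ns0:EDI_DC40/ns0:RECPRN", nsmgr);
    string partnerId = (recprn != null) ? recprn.InnerText : string.Empty;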

    Read the article

  • Feedback Filtration – Processing Negative Comments for Positive Gains

    - by D'Arcy Lussier
    After doing 7 conferences, 5 code camps, and countless user group events, I feel that this is a post I need to write. I actually toyed with other names for this post, however those names would just lend themselves to the type of behaviour I want people to avoid – the reactionary, emotional response that speaks to some deeper issue beyond immediate facts and context. Humans are incredibly complex creatures. We’re also emotional, which serves us well in certain situations but can hinder us in others. Those of us in leadership build up a thick skin because we tend to encounter those reactionary, emotional responses more often, and we’re held to a higher standard because of our positions. While we could react with emotion ourselves, as the saying goes – fighting fire with fire just makes a bigger fire. So in this post I’ll share my thought process for dealing with negative feedback/comments and how you can still get value from them.
    The Thought Process
    Let’s take a real-world example. This week I held the Prairie IT Pro & Dev Con event. We’ve gotten a lot of session feedback already, most of it overwhelmingly positive. But some not so much – and some to an extreme I rarely see but that isn’t entirely surprising to me. So here’s the example from a person we’ll refer to as Mr. Horrible:
    How was the speaker? Horrible! Worst speaker ever!
    Did the session meet your expectations? Hard to tell, speaker ruined it.
    Other Comments: DO NOT bring this speaker back! He was at this conference last year and I hoped enough negative feedback would have taught you to not bring him back...obviously not...I will not return to this conference next year if this speaker is brought back.
    Now those are very strong words. “Worst speaker ever!” “Speaker ruined it” “I will not return to this conference next year if the speaker is brought back”. The speakers I invite to speak at my conference are not just presenters but friends and colleagues. When I see this, my initial reaction is of course very emotional: I get defensive, I get angry, I get offended. So that’s where the process kicks in.
    Step 1 – Take a Deep Breath
    Take a deep breath, calm down, and walk away from the keyboard. I didn’t do that recently during an email convo between some colleagues and it ended up in my reacting emotionally on Twitter – did I mention those colleagues follow my Twitter feed? Yes, I ate some crow. Ok, now that we’re calm, let’s move on to step 2.
    Step 2 – Strip off the Emotion
    We need to take off the emotion that people wrap their words in and identify the root issues. For instance, if I see: “I hated this session, the presenter was horrible! He spoke so fast I couldn’t make out what he was saying!” then I drop off the personal emoting (“I hated…”) and the personal attack (“the presenter was horrible”) and focus on the real issue this person had – that the speaker was talking too fast. Now we have a root cause of the displeasure. However, we’re also dealing with humans who are all very different. Before I call up the speaker to talk about his speaking pace, I need to do some other things first. Back to our Mr. Horrible example, I don’t really have much to go on. There are no details of how the speaker “ruined” the session or why he’s the “worst speaker ever”. In this case, the next step is crucial.
    Step 3 – Validate the Feedback
    When I tell people that we really like getting feedback for the sessions, I really really mean it. Not just because we want to hear what individuals have to say but also because we want to know what the group thought.
    When a piece of negative feedback comes in, I validate it against the group. So with the speaker Mr. Horrible commented on, I go to the feedback and look at other people’s responses:
    2 x Excellent
    1 x Alright
    1 x Not Great
    1 x Horrible (our feedback guy)
    That’s interesting, it’s a bit all over the board. If we look at the comments more we find that the people who rated the speaker excellent liked the presentation style and found the content valuable. The one guy who said “Not Great” even commented that there wasn’t anything really wrong with the presentation, he just wasn’t excited about it. In that light, I can try to make a few assumptions:
    - Mr. Horrible didn’t like the speaker’s presentation style
    - Mr. Horrible was expecting something else that wasn’t communicated properly in the session description
    - Mr. Horrible, for whatever reason, just didn’t like this presenter
    Now if the feedback was overwhelmingly negative, there’s a different pattern – one that validates the negative feedback. Regardless, I never take something at face value. Even if I see really good feedback, I never get too happy until I see that there’s a group trend towards the positive.
    Step 4 – Action Plan
    Once I’ve validated the feedback, then I need to come up with an action plan around it. Let’s go back to the other example I gave – the one with the speaker going too fast. I went and looked at the feedback and sure enough, other people commented that the speaker had spoken too quickly. Now I can go back to the speaker and let him know so he can get better. But what if nobody else complained about it? I’d still mention it to the speaker, but obviously one person’s opinion needs to be weighed as such. When we did PrDC Winnipeg in 2011, I surveyed the attendees about the food. Everyone raved about it…except one person. Am I going to change the menu next time for that one person while everyone else loved it? Of course not. There’s a saying – a sure way to fail is to try to please everyone. Let’s look at the Mr. Horrible example. What can I communicate to the speaker with such limited information provided in the feedback from Mr. Horrible? Well, looking at the group’s feedback, I can make a few suggestions:
    - Ensure that people understand from the session description the style of the talk
    - Ensure that people understand the level of detail/complexity of the talk and what prerequisite knowledge they should have
    My read is that Mr. Horrible may have assumed a much more advanced talk and was disappointed, while the people who left positive feedback – whose comments suggested this was all new to them – were thrilled with the session level.
    Step 5 – Follow Up
    For some feedback, I follow up personally. Especially with negative or constructive feedback, it’s important to let the person know you heard them and are making changes because of their comments. Even if their comments were emotionally charged and overtly negative, it’s still important to reach out personally and professionally. When you remove the emotion, negative comments can be the best feedback you get. Also, people have bad days. We’ve all had one of “those days” where we talked more sternly than normal to someone, or got angry at something we’d normally shrug off. We have various stresses in our lives and sometimes they seep out in odd ways. I always try to give some benefit of the doubt, and re-evaluate my view of the person after they’ve responded to my communication. But, there is such a thing as garbage feedback. What Mr. Horrible wrote is garbage.
    It’s mean-spirited. It’s hateful. It provides nothing constructive at all. And a tell-tale sign that feedback is garbage – the person didn’t leave their name even though there was a field for it.
    Step 6 – Delete It
    Feedback must be processed in its raw form, and the end products should drive improvements. But once you’ve figured out what those things are, you shouldn’t leave raw feedback lying around. They are snapshots in time that, taken alone, can be damaging. Also, you should never rest on past praise. In a future blog post, I’m going to talk about how we can provide great feedback that, even when it’s critical, can still be constructive.

    Read the article

  • Cannot start session without errors in phpMyAdmin running Nginx with PHP-FPM

    - by Infinity
    Whenever I open phpMyAdmin from my VPS I get the following error: Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly. I have researched it, but can't seem to find a solution. I have done the following:
    Cleared cache and cookies
    Checked the php.ini (see below)
    Checked the logs (found nothing relevant)
    Given the correct permissions (by sudo chown -R root:nginx /home/humza/pma)
    I am running Nginx with PHP-FPM. I have php-mysql and all that working fine, but I can't get phpMyAdmin to work. I downloaded it off phpMyAdmin's website and extracted it, that's all.
    http://pastebin.com/raw.php?i=6n57cW8H - my php.ini sessions bit
    http://pastebin.com/raw.php?i=VaNP2TLi - my whole php.ini
    None of my logs have anything relevant. My error logs have other PHP errors but not this one, and my access logs don't have anything either. I have checked my nginx logs and my PHP-FPM logs. I tried installing phpMyAdmin via yum and got a whole lot of dependency errors.
    [root@infinity ~]# yum install phpmyadmin
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package phpMyAdmin.noarch 0:2.11.11.3-1.el5 set to be updated
    --> Processing Dependency: php-mcrypt >= 4.1.0 for package: phpMyAdmin
    --> Processing Dependency: php >= 4.1.0 for package: phpMyAdmin
    --> Processing Dependency: php-mbstring >= 4.1.0 for package: phpMyAdmin
    --> Running transaction check
    ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
    --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
    --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php
    ---> Package php-mbstring.i386 0:5.1.6-27.el5_5.3 set to be updated
    --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-mbstring
    ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
    --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
    --> Running transaction check
    ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
    --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
    ---> Package php-cli.i386 0:5.1.6-27.el5_5.3 set to be updated
    --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-cli
    ---> Package php-mbstring.i386 0:5.1.6-27.el5_5.3 set to be updated
    --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-mbstring
    ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
    --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
    --> Finished Dependency Resolution
    php-5.1.6-27.el5_5.3.i386 from base has depsolving problems
    --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
    php-cli-5.1.6-27.el5_5.3.i386 from base has depsolving problems
    --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
    php-mbstring-5.1.6-27.el5_5.3.i386 from base has depsolving problems
    --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-mbstring-5.1.6-27.el5_5.3.i386 (base)
    php-mcrypt-5.1.6-15.el5.centos.1.i386 from extras has depsolving problems
    --> Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
    Error: Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
    Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
    Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
    Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-mbstring-5.1.6-27.el5_5.3.i386 (base)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
                            package-cleanup --dupes
                            rpm -Va --nofiles --nodigest
    The program package-cleanup is found in the yum-utils package.
    [root@infinity ~]#
    Any ideas?

    Read the article

  • Ubuntu Apache 10 second timeout

    - by Andreas Jansson
    Hi, I'm debugging an API I'm building using netcat to send raw HTTP requests. The thing is that Apache closes the connection after 10 seconds, giving me very little time to type. I know that I could pipe a file to nc, or use any other workaround, but I'd like it to work as it's supposed to. The Timeout directive in apache2.conf is at its default of 300 seconds, KeepAliveTimeout at 15 seconds. Where could this 10 second timeout possibly be defined? I'm running Ubuntu 10.04 Desktop. Thanks, Andreas

    Read the article

  • Microsoft Word 2007 opening all docs with field codes toggled off

    - by WilliamKF
    Recently, something changed with my Microsoft Word 2007 installation/preferences on Windows XP, such that whenever I open a Word document, all the field codes are displayed raw instead of as their expanded values. For example, my header reads:
    My Name { TITLE \* MERGEFORMAT } Version { REVNUM \* MERGEFORMAT }
    But if I copy and paste it here, it reads expanded:
    My Name My Doc Title Version 42
    I expect to see the copy-and-paste version directly inside Word. I can work around this by right-clicking on each such field and choosing Toggle Field Codes; however, I never had to do that before, as previously the document opened with all such field codes expanded. Another example is the Table of Contents, which shows as { TOC \o "1-3" \h \z \u } instead of the full table of contents. I searched the Word Options dialog, but could not find anything that appeared relevant. Please suggest how to restore the old behavior.

    Read the article

  • Disable linux read and write file cache on partition

    - by complistic
    How do I disable the Linux file cache on an XFS partition (both read and write)? We have an XFS partition over a hardware RAID that stores our RAW HD video. Most of the shoots are 50-300 GB each, so the Linux cache has a hit rate of 0.001%. I have tried the sync option but it still fills up the cache when copying the files (about 30x over per shoot :P).
    /etc/fstab:
    /dev/sdb1 /video xfs sync,noatime,nodiratime,logbufs=8 0 1
    I'm running Debian Lenny if it helps.
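    As an illustration of the kind of cache-bypassing copy being asked about (a sketch only, assuming GNU coreutils dd; the paths below are placeholders), direct I/O keeps a one-off copy out of the page cache entirely:
    dd if=/video/shoot01.mov of=/backup/shoot01.mov bs=4M iflag=direct oflag=direct
    The iflag=direct/oflag=direct options open the files with O_DIRECT, so the transfer does not evict other cached data; the block size must respect O_DIRECT's alignment requirements, and bs=4M is just an example value.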

    Read the article

  • How easy is it to migrate a Linux VM image from one VM env to another?

    - by T.J. Crowder
    If I stick to one of the standard, well-supported VM disk images (like a raw image, or VDI, VMDK, ...), are Linux VMs typically easy to move between VM environments? E.g., between (say) VirtualBox and KVM, or VMWare and Xen? I'm talking here of fully virtualized environments, not paravirtualization requiring support within the guest OS. It seems to me that the kernels in most Linux distributions these days are configured to...keep an open mind and detect things at boot time, so you don't have the issue that you sometimes have moving a Windows VM from one virtualization system to another (I'm thinking particularly of HAL issues that Windows has, like ACPI vs. non-ACPI; I've also just had Windows VMs generally acting strangely when moved from VMWare to VirtualBox, for instance). I'm looking for a general answer, but if it helps, specifically I'm mostly going to be doing this with Ubuntu 8.04 LTS and 10.04 LTS guests. But that could change.
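    As a sketch of the mechanics involved (file names and formats here are only examples), converting a disk image between the common formats is usually a one-liner with qemu-img:
    qemu-img convert -O qcow2 guest-disk.vmdk guest-disk.qcow2
    qemu-img convert -O raw guest-disk.vdi guest-disk.img
    The -O flag names the output format; the input format is normally auto-detected.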

    Read the article

  • How to adjust and combine multiple lower quality photos into one better using FOSS?

    - by Vi
    I have multiple noisy photos (captured without a tripod) that need to be adjusted (moved/rotated) and averaged. What is the best way to do this in Linux with FOSS console-based programs? My current way is something like:
    mplayer mf://*.JPG -vo yuv4mpeg:file=qqq.yuv
    transcode -i qqq.yuv -y null -J stabilize=maxshift=500:fieldsize=100:fieldnum=6:stepsize=50:shakiness=10
    transcode -i qqq.yuv -J transform=smoothing=100000:sharpen=0:optzoom=0 -y raw -o www.yuv
    mplayer www.yuv -vo pnm
    gm convert -average 0*.ppm q.ppm
    i.e.:
    Convert the photos to video
    Apply Transcode's "Stabilize" filter
    Convert the video back to images
    Average the images.
    It works, but badly: the photos are still not perfectly aligned and the whole sequence is very slow. What is a better way of doing it?
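    For comparison, a sketch of an alternative that skips the video round-trip (assuming the hugin-tools package is installed; the output prefix is arbitrary): align_image_stack can register the stills directly, and the aligned TIFFs can then be averaged as before:
    align_image_stack -a aligned_ *.JPG
    gm convert -average aligned_*.tif averaged.ppm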

    Read the article

  • Install RT Failed: DateTime >= 0.44 ...MISSING

    - by javano
    I am trying to install RT-4.0.5 (Request Tracker) but I keep getting the following output:
    $ make fixdeps
    <output cut>
    SOME DEPENDENCIES WERE MISSING.
    CORE missing dependencies:
    DateTime >= 0.44 ...MISSING
    make: *** [fixdeps] Error 1
    The full output is here (it's quite long): http://pastebin.com/raw.php?i=Tn7GrkYw
    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 8.04.4 LTS
    Release: 8.04
    Codename: hardy
    $ perl --version
    This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux
    $ cpan --version
    /usr/local/bin/cpan version 1.57 calling Getopt::Std::getopts (version 1.06 [paranoid]), running under Perl version 5.14.2. [Now continuing due to backward compatibility and excessive paranoia. See ``perldoc Getopt::Std'' about $Getopt::Std::STANDARD_HELP_VERSION.]
    Nothing to install!
    I can't see why this is a problem:
    $ cpan DateTime
    Going to read '/root/.cpan/Metadata'
    Database was generated on Thu, 08 Mar 2012 16:11:26 GMT
    DateTime is up to date (0.72).

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder content when being moved to the same location. Imagine my surprise when that didn't happen, and now I have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted" they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see if they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Will increasing RAM improve Lightroom 3 large tiff loading times

    - by andy
    Setup: mid-2009 17" unibody MacBook Pro, 4 GB RAM, 2.66 GHz Core 2 Duo, Snow Leopard 10.6.6, Lightroom 3.
    When working with 12-megapixel RAW files from a Nikon D700, no problem: Lightroom is fine. Recently I've been scanning film, which results in large TIFF files, about 130 MB each. The TIFF files themselves are good, and I'm happy with my scanning workflow. Working with these files in Lightroom is perfectly fine, except for one step. When I choose one of these photos in the Develop module, Lightroom displays "Loading" on the image for about a minute or two, which is quite long. Once the image is loaded, everything is fine again, and applying effects is instant. So my only issue is reducing that "Loading" time in the Develop module (the Library module is fine too). Will increasing my RAM to 8 GB help? I'm worried about spending the money and it not making any difference. thanks andy

    Read the article

  • v4l - capture and watch at the same time

    - by John Barrett
    Capturing v4l and line-in audio using mencoder works very well, but I would like to record real-time gameplay video from consoles plugged into the video card. I've used xawtv for this (works quite well, can preview and record in real time), but when I enable any deinterlacing or aspect ratio options the video fails to record. I have to record raw and re-encode the video with the appropriate filters later to get something workable. Other things I have tried:
    tvtime with xvidcap and JACK audio capture - xvidcap drops frames and muxing the audio is impossible as it will go out of sync (I have not found muxer options that work to force a correct frame rate)
    mencoder capture to file, attempting to pipe the tail of the file to mplayer... mencoder works great, but piping the file is far too heavy to attempt gameplay.
    Soooo, v4l capture and preview simultaneously - recommendations?

    Read the article

  • WinXP How to Tunnel LPT over USB

    - by Michael Pruitt
    I have a Windows program that accesses a device connected to an LPT (1-3) 25-pin port. The communication is bidirectional, and I suspect the control lines are also accessed directly. I would like to migrate the device to a machine that does not have an LPT port. I saw the dos2usb software, but that takes the output (from a DOS program) and 'prints' it formatted for a specific printer. I need a raw LPT connection, and a cable that provides access to all the control signals. I do have a USB to 36-pin Centronics cable that may have the extra signals. I use it with a vinyl cutter that doesn't like most of the USB dongles. It comes up as USB001. Would adding and sharing a generic printer, then mapping LPT1 to the share, get me closer? Would that work for a parallel port scanner? My preferred solution is a USB cable with a driver that will map it to LPT1, LPT2, or LPT3.
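    For what it's worth, the "map LPT1 to the share" step mentioned above is typically done with net use (the machine and share names below are placeholders):
    net use LPT1: \\MYPC\SHAREDPRINTER /persistent:yes
    Note that this redirection only captures print output sent to LPT1; it does not emulate the port's status and control lines, so it is unlikely to help with the bidirectional access described above.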

    Read the article

  • Delay between printing via lp in opensuse

    - by adamweeks
    I am experiencing a 10-15 second delay when printing multiple documents to a barcode printer in openSUSE. I have had the same setup on other systems with older versions of openSUSE without any issue. The setup is as follows: the print queue is set up as a "generic with driver Raw Queue". The files being sent down to the printer are simple text files, sent with the lp command: lp -dprinter1 /path/file The printer is a JetDirect-compatible device (Intermec brand) with a standard port 9100 socket setup. If I send a multi-page document to the printer, it will print the multiple pages nonstop. If I send 2 or more text files down via separate "lp" commands, the delay occurs between each printout. I've tried multiple different printers and they all experience the same issue.

    Read the article

  • Sensei mouse sensitivity

    - by Marcelo
    I've recently acquired a SteelSeries Sensei. Despite being a great mouse, I'm having some trouble finding settings that I can get used to... The mouse engine allows me to set a CPI from 0 to 5700. It also allows me to set it even higher, calling it "DCPI" (Double CPI), from 5701 to 11400. In Windows' Control Panel, there's a "Pointer Speed" slider and an "Enhance Pointer Precision" checkbox (wording may be different as I use a non-English version). The majority of games allow me to set an in-game "Mouse Sensitivity". Some games let me use "Raw mouse input". I'm already familiar with the basics of CPI/DPI - "higher CPI means less hand movement" - but what are the differences between all those options? Is there a "better" or "worse" setting?

    Read the article

  • How to get better video quality in Lync?

    - by sinned
    I want to use a ConferenceCam from Logitech to stream talks live via Lync. When I view the RAW webcam image via VLC, the quality is very good (but the latency is high because of buffering). However, when I stream it using Lync, the video gets blurry. Is there a way to ensure QoS in Lync or otherwise improve the video quality to (near-)native? I would rather have some dropped frames than a lower resolution where I can't read the slides. In my setup, I use Lync with an Office365-E3 contract, so I have no Lync-Server in my network. I thought about replacing Lync completely with VLC, but I first want to try Lync because VLC will probably cause firewall issues. Also, I haven't looked up the VLC parameters for less buffering, faster encoding, a bit lower resolution (natively it's more than HD) and streaming.

    Read the article

  • Remote Desktop or Streaming Software/Services that Supports Gaming

    - by Griffin
    I've simply been amazed by the quality and speed of OnLive, as this technology has the potential to make hardware requirements irrelevant to the average user. However, at the moment OnLive is only for remotely controlling video games, and not desktops or other devices in general. I'm in pursuit of software or services that can accomplish this as well as OnLive does. I need:
    viewer (client) program portability (able to run on a USB stick)
    DirectX, OpenGL / full-screen game compatibility on the server side
    gaming-acceptable color/scaling quality and responsiveness
    I have a very powerful desktop at home and I want to be able to access this raw power from any other computer that I stick my USB drive into (in the same way OnLive gives gamers use of their powerful servers). What software/services have most of the above? NOTE: please specify what features your suggestion doesn't have.

    Read the article

  • Parse HTTP requests through Wireshark?

    - by diogobaeder
    Hi, guys, is there any way to parse HTTP request data in Wireshark? For example, can I expose the request parameters of an HTTP GET request (being sent by my machine), so that I don't need to read the (sometimes) truncated URL and find them by myself? I was using Tamper Data and Firebug on Firefox to analyse these requests, but they're not as reliable as a stand-alone tool for monitoring my network interface, and Wireshark keeps the HTTP flow data too raw. If you guys know any other stand-alone tool that does this (must be Linux-compatible), please tell me. Thanks!

    Read the article

  • How to improve the HTML formatting in Evolution mail client

    - by Tom
    I have a question about viewing HTML emails in the Evolution mail client. Basically, I am receiving some emails that look lovely in Thunderbird but not in Evolution, because Evolution's HTML rendering isn't as advanced. Does anyone know how to improve the HTML rendering of Evolution? e.g. a plugin, tip, code patch, etc... The closest I've got is to right-click the email, "Save As...", save as an HTML file, then open it in Firefox. Not exactly streamlined! What emails can't it display well? We use the Subversion revision control system, which is set up to send an email via svnnotify whenever someone commits, all nicely coloured via the --handler HTML::ColorDiff -d parameter. When Evolution fails to use the colours, I find it very hard to read the raw diff.

    Read the article

  • How to Recover HDD Formatted by "Create a Recovery Drive" Tool of Windows 8.1?

    - by ide
    I have a 2 TB USB HDD which had these drives:
    F: about 1 TB with 750 GB of data
    H: about 120 GB with 60 GB of data
    I: about 780 GB with 250 GB of data (for TV: it was raw in Windows but visible on the Smart TV)
    I took 521 MB from the last part of H to get a new G drive. Then I ran the "Create a Recovery Drive" tool of Windows 8.1 and chose the G drive. It said all data on the drive would be deleted. I thought it meant just the G drive, but it deleted my whole HDD. It created a new 32 GB F drive, writing 337 MB to it, and the rest of the HDD is unallocated. I tried these programs to get my first 3 drives back, but none of them helped with the 1st partition:
    TestDisk
    MiniTool Partition Wizard Home Edition
    EaseUS Partition Master 9.2.2 (I deleted the new F drive volume because it scans only the unallocated part)
    Recuva
    PC Inspector File Recovery

    Read the article

  • please explain my fio results - is O_SYNC|O_DIRECT misbehaving on linux?

    - by Zoltan
    I'm going mad over figuring out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840pro attached to an LSI HBA (3Gbit/s ports). This is the result I'm getting under FreeBSD 9.1:
    WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec
    This is regardless of sync being set to 0 or 1. On Linux, this is the result with sync=0:
    WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec
    and with sync=1:
    WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec
    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barrier, nothing between the writes and the drive itself. Especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:
    [global]
    bs=1M
    ioengine=sync
    iodepth=4
    size=16g
    direct=1
    runtime=60
    filename=/dev/sdh
    sync=1
    [rand-write]
    rw=randwrite
    stonewall

    Read the article

  • Socat and rich terminals (with Ctrl+C/Ctrl+Z/Ctrl+D propagation)

    - by Vi
    socat - exec:'bash -li',pty,stderr,ctty -
    bash: no job control in this shell
    What options should I use to get a fully fledged shell like I get with ssh/sshd? I want to be able to connect the shell to everything socat can handle (SOCKS 5, UDP, OpenSSL), but also to have a nice shell which correctly interprets all keys, various Ctrl+C/Ctrl+Z, tab completion, up/down keys (with remote history).
    Update: Found the "setsid" socat option. It fixes "no job control". Now trying to fix Ctrl+D.
    Update 2: socat file:`tty`,raw,echo=0 exec:'bash -li',pty,stderr,setsid,sigint,sane
    Now it handles Ctrl+D/Ctrl+Z/Ctrl+C well, I can start Vim inside it, and remote history is OK.

    Read the article

  • Duplicate incoming TCP traffic on Debian Squeeze

    - by Erwan Queffélec
    I have to test a homebrew server that accepts a lot of incoming TCP traffic on a single port. The protocol is homebrew as well. For testing purposes, I'd like to send this traffic both:
    - to the production server (say, listening on port 12345)
    - to the test server (say, listening on port 23456)
    My client apps are "dumb": they never read data back, and the server never replies anyway; my server only accepts connections and does statistical computations, then stores/forwards/services both raw and computed data. Actually, the client apps and hardware are so simple there is no way I can tell the clients to send their stream to both servers... And using "fake" clients is not good enough. What could be the simplest solution? I can of course write an intermediary app that just copies incoming data and sends it on to the testing server, pretending to be the client. I have a single server running Squeeze and have total control over it. Thanks in advance for your replies.

    Read the article

  • Is it possible to fake a Windows install for grub to add to boot menu?

    - by Mussnoon
    When doing a fresh install of a Linux distro (Ubuntu, for instance) on a fresh hard drive, if I want to install Linux first, and Windows later, is it possible to make grub think there's a Windows install on the first partition so that it'll be added to the boot menu after the installation is complete? To illustrate, I have a new hard drive and have created two primary partitions (both still raw) and two logical (Ext4 and Swap). I want to install Ubuntu on the Ext4 partition first, and some version of Windows on the first primary partition only after that (because I currently don't have a Windows install disk, but do have one for Ubuntu). Is it possible to make Ubuntu add an entry for Windows right now and avoid having to repair grub after I've installed Windows?

    Read the article
