Search Results

Search found 367 results on 15 pages for 'encoder'.

Page 11/15

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg. ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif It works great, but the output GIF file has very low quality. Any ideas how I can improve the quality of the converted GIF? Output of the command: $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Jan 24 2013 14:52:53 with gcc 4.7.2 *** THIS PROGRAM IS DEPRECATED *** This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s Metadata: creation_time : 2013-02-14 04:00:07 [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4 Output #0, gif, to 'output.gif': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 encoder : Lavf53.21.1 Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream mapping: Stream #0.0 -> #0.0 Press ctrl-c to stop encoding frame= 101 fps= 32 q=0.0 Lsize= 8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149 video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676% Thanks.
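    A rough, hedged sketch: on a recent ffmpeg build (not the deprecated libav 0.8 wrapper shown above), GIF quality usually improves a lot if you generate a per-clip palette first. The two-pass approach below assumes a build that has the palettegen and paletteuse filters.
        ffmpeg -ss 0 -t 10 -i input.flv -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
        ffmpeg -ss 0 -t 10 -i input.flv -i palette.png -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif
    The palette pass tunes the GIF's 256 colours to this particular clip instead of using a generic palette, which is where most of the visible quality loss in the one-pass conversion comes from.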

    Read the article

  • Do the same LAME settings (--alt-preset standard) have different names?

    - by erikric
    I've always used Windows, and therefore EAC, to rip my CDs, but since I've started using Ubuntu more often, I decided to try to rip some albums there. I ended up using k3b (since I found it in the Ubuntu Software Center. I tried to install RubyRipper first, but when 'sudo apt-get install ' or UDC fails, a Windows user like me is lost). The real question here is about the settings for the LAME encoder. I'm used to just writing --alt-preset standard, and everything works like a charm, but the default in k3b looks like this: lame -r --bitwidth 16 --little-endian -s 44.1 -h --tt %t --ta %a --tl %m --ty %y --tc %c --tn %n - %f I assume these are some sensible LAME settings, and not a malicious Perl script (although it looks like it). It seems to me like some of these ought to be there, and that I cannot overwrite the whole thing with my good ol' --alt-preset. So, the question is: do I need to replace anything, or is -h the same as the old --alt-preset? Is there a difference between '--preset standard' and '--alt-preset standard'? And are those the same as -V 2?
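    For reference, a rough sketch of how these options line up on the LAME command line (treat the exact equivalences as an assumption; lame --preset help on your version is authoritative):
        lame --preset standard in.wav out.mp3   # on current LAME this is the old --alt-preset standard
        lame -V 2 in.wav out.mp3                # roughly the same VBR quality target, spelled explicitly
        lame -h in.wav out.mp3                  # -h only means higher-quality CBR at the default bitrate, not a preset
    The k3b switches shown above (-r, --bitwidth, -s, the --tt/--ta tag options) mostly describe the raw input and the ID3 tags rather than the encoding quality, so a preset can be added alongside them.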

    Read the article

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage: avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4 avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers built on Jun 15 2012 13:54:35 with gcc 4.7.0 HandShake: client signature does not match! Metadata: height 480.00 remote_addr: sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0 start 30400239.52 timeshift_duration 319250.58 timeshift_size 120000.00 width 640.00 [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af': Duration: N/A, start: 0.000000, bitrate: N/A Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc Output #0, segment, to '%05d.mp4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc Stream mapping: Stream #0:0 -> #0:0 (copy) Press ctrl-c to stop encoding ^Cframe= 9566 fps= 36 q=-1.0 Lsize= -0kB time=318.25 bitrate= -0.0kbits/s video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071% Received signal 2: terminating. Result: serafim@yard:~/video2$ ls 00000.mp4 00001.mp4 00002.mp4 00003.mp4 00004.mp4 00005.mp4 Now try to play the files in a player, such as VLC. And this is what we get: the first fragment (00000.mp4) plays well, no problems, but from the second one (00001.mp4 and beyond) the bug manifests itself: the first 60 seconds of 00001.mp4 are a black screen, and only from second 61 does the video start playing. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How do I get rid of the delay with the black screen at the beginning of the segments? Maybe there are parameters to pass to ffmpeg, or third-party software that can correct the resulting mp4 segments?
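    If a newer ffmpeg build is an option, the segment muxer can rewrite timestamps so that each piece starts at zero, which is the usual cause of a leading black gap in players. A hedged sketch (the -reset_timestamps option is an assumption about the ffmpeg version; the avconv 0.8 build above may not have it):
        ffmpeg -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn \
            -f segment -segment_format mp4 -segment_time 60 -reset_timestamps 1 -y %05d.mp4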

    Read the article

  • converting to MXF using ffmpeg

    - by Prakash
    I have been trying to use the FFmpeg utility to convert an avi file to MXF format using DNxHD. I am using "FFmpeg" with the following params: ffmpeg -i ccvt_box.avi -vcodec dnxhd -video_size 1920x1080 -r 24 -b:v 115m ex.mxf The error it gives is: ffmpeg version N-43737-g76c3fff Copyright (c) 2000-2012 the FFmpeg developers built on Aug 20 2012 18:50:42 with llvm-gcc 4.2.1 (LLVM build 2336.11.00) configuration: libavutil 51. 70.100 / 51. 70.100 libavcodec 54. 53.100 / 54. 53.100 libavformat 54. 25.104 / 54. 25.104 libavdevice 54. 2.100 / 54. 2.100 libavfilter 3. 11.101 / 3. 11.101 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 15.100 / 0. 15.100 Input #0, avi, from 'ccvt_box.avi': Duration: 00:00:10.00, start: 0.000000, bitrate: 691 kb/s Stream #0:0: Video: indeo5 (IV50 / 0x30355649), yuv410p, 340x344, 10 tbr, 10 tbn, 10 tbc Metadata: title : bob.avi [dnxhd @ 0x7fcd60818e00] video parameters incompatible with DNxHD Output #0, mxf, to 'ex.mxf': Stream #0:0: Video: dnxhd, yuv422p, 340x344, q=2-1024, 90k tbn, 24 tbc Metadata: title : bob.avi Stream mapping: Stream #0:0 -> #0:0 (indeo5 -> dnxhd) Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
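    The "video parameters incompatible with DNxHD" message usually means the combination of frame size, frame rate, pixel format and bitrate does not match one of DNxHD's fixed profiles, and -video_size does not rescale an existing input. A hedged sketch that scales the source onto a 1080p24 / 115 Mbps profile (the exact profile table is an assumption about this ffmpeg build; the 340x344 source will also be stretched, so pad or crop if aspect matters):
        ffmpeg -i ccvt_box.avi -s 1920x1080 -r 24 -pix_fmt yuv422p -vcodec dnxhd -b:v 115M ex.mxf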

    Read the article

  • FFmpeg convert video w/ dropped frames, out of sync

    - by preahkumpii
    I recorded a video using Bandicam with the MJPEG encoder to get the least amount of lag. Now, I am trying to convert that massive file to a h264 avi using ffmpeg. I know there are dropped frames in the video stream...more than 100 in the first two minutes, which I assume is simply because Bandicam dropped some when it couldn't keep up. So, when I convert the file to h264, the video and audio are out of sync, and appear to be more and more out of sync as output video progresses. Here is my basic command in ffmpeg: ffmpeg -i "C:\...\input.avi" -vcodec libx264 -q 5 -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k "C:\...\output.avi" I have tried EVERYTHING I can think of including: -itsoffset [-]00:00:01 Tried this before and after input file. This doesn't work because as the video progresses it becomes more and more out of sync. -async 1 Doesn't work. -vsync 1 Doesn't work, but it does show dropped frames being duplicated. Two inputs of same file with mapping using -map 0:0 -map 1:1. Doesn't work. The source plays just fine. Any ideas how to convert it with ffmpeg and keep the audio and video synced? Thanks.

    Read the article

  • ffmpeg cutting video duration

    - by Steve Spence
    When using ffmpeg on Linux, my 4.3 GB, 2:21-long video is being chopped down to a duration of 1:56. I'm trying to reduce file size, but not lose frames. steve@steve-OptiPlex-170L:~/Desktop$ ffmpeg -i microbe.avi microbe.mp4 ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers built on Jun 12 2012 16:37:58 with gcc 4.6.3 * THIS PROGRAM IS DEPRECATED * This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. Input #0, avi, from 'microbe.avi': Duration: 00:02:21.80, start: 0.000000, bitrate: 242311 kb/s Stream #0.0: Video: rawvideo, bgr24, 1280x960, 10 tbr, 10 tbn, 10 tbc Incompatible pixel format 'bgr24' for codec 'mpeg4', auto-selecting format 'yuv420p' [buffer @ 0x9f861e0] w:1280 h:960 pixfmt:bgr24 [avsink @ 0x9f86440] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out' [scale @ 0x9f7d800] w:1280 h:960 fmt:bgr24 - w:1280 h:960 fmt:yuv420p flags:0x4 Output #0, mp4, to 'microbe.mp4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 10 tbn, 10 tbc Stream mapping: Stream #0.0 - #0.0 Press ctrl-c to stop encoding frame= 1164 fps= 6 q=31.0 Lsize= 3775kB time=116.40 bitrate= 265.7kbits/s video:3765kB audio:0kB global headers:0kB muxing overhead 0.272870% steve@steve-OptiPlex-170L:~/Desktop$
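    A hedged note and sketch: for a rawvideo AVI the reported 00:02:21.80 is estimated from the bitrate and can be wrong, and 1164 frames at 10 fps is about 1:56, so the output may already contain every frame the file holds. Decoding to the null muxer counts the real frames, and a -qscale value (an assumed starting point, tune to taste) controls size and quality better than the 200 kb/s default:
        ffmpeg -i microbe.avi -f null -               # decode only: the final frame= counter is the true frame count
        ffmpeg -i microbe.avi -qscale 4 microbe.mp4   # keep all frames at the source rate, trade size for quality via -qscale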

    Read the article

  • which grabber is good enough to get 1000fps?

    - by user261002
    I have two framegrabbers and a fast camera (1800+ fps). Can anybody who understands the hardware explain to me which of the following grabbers can better help me grab 1000 fps? Here are the features of the two grabbers: Inspecta-5 Full Camera Link® Version: · Support for line scan and area cameras. · Video data rate of up to 660 Mbytes/sec. · PCI – X bus interface for 64 Bit data width and 66 MHz clock frequency. · PCI bus interface for 32 Bit data width and 33 MHz clock frequency. · 2 Gigabyte Onboard Memory for fast video streams. · Four opto-coupled input/output ports for external trigger and encoder signals. · 528 Mbytes/sec. maximum data rate on the PCI–X Bus. · SDK for Windows 2000/XP SILICONSOFTWARE V-Series Camera Link : “microEnable IV VD4-CL” · Camera Pixel Clock Support 85 MHz · Area Scan Cameras 32k * 64k max. image size · Line Scan Cameras 64k max. image width · Acquisition Buffer: 512 MB DDR-RAM · Sustainable Transfer Rate (max.) 850 MBytes/sec. · microEnable SDK for Windows XP/Vista/ 7/ Linux

    Read the article

  • Codecs, Premiere Pro & Quicktime: Import or Play Error

    - by Nchpmn
    Original Question I've been using a FS-H200 (not the Pro variant) recorder with a JVC ProHD camera. I have been shooting with the DTE FORMAT to Quicktime (.mov). I copied the files to an external hard drive and am now trying to edit. The files will play back in VLC, as they would be expected to. However they will not import into Adobe Premiere CS5.5, instead giving an error: Unsupported format or damaged file. Quicktime gives the following error when attempting to play the files: Error -2002: a bad public movie atom was found in the movie (Filename) To try and fix this, I have installed the following codec packs: K-Lite Codec Pack 64-bit Full (version 5.9, latest) K-Lite Codec Pack 32-bit Full (version 8.4, latest) MainConcept Codec Suite (Broadcast) v5.1 for Adobe CS5 Reinstalled Quicktime with new download from Apple The same errors and problems still exist. From this I can assume that there is an issue with Quicktime and that is what Premiere is using as an encoder/decoder for the codec. Is there any way to fix this? From looking at the "Codec Information" from VLC: Stream 0 Type: Video Codec: MPEG-1/2 (mpgv) Language: English Resolution: 1280 x 720 Frame Rate: 25 Stream 1 Type: Audio Codec: PCM S16 BE (twos) Language: English Channels: Stereo Sample Rate: 48000 Hz Bits per sample: 16 Other computer specs: Windows 7 Professional 64-bit (SP1) Gigabyte Z68X-UD3-B3 Intel i7-2600K 16GB DDR3 2TB WD 7200RPM SATA 6Gb/s LaCie d2 Quadra 2TB v3 7200RPM (External HDD) NVIDIA GeForce GTX 560 Ti Golden Sample Updates 2012-03-11 @ 2050 AEDT MPEG Steamclip doesn't recognise, play or convert the footage. File open error: unrecognised file type. [Open Anyway] File open error: can't find video or audio tracks. 2012-03-24 @ 1920 AEDT Had to transcode the footage. :(

    Read the article

  • Silverlight Cream for June 08, 2010 -- #877

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, Chris Klug, Beau, Christian Schormann(-2-), Dan Wahlin, Pete Brown, Michael S. Scherotter, Philipp Sumi, Andy Wigley, and Phil Middlemiss. Shoutouts: Mark Tucker set about learning Caliburn, and in the process is writing a Caliburn Book: Chapters 1-3 Jesse Liberty has a great link-laden post up about why we should all be learning/using Blend: Why Developers Should, Must, Do Care About The New Expression Blend be sure to read what he says about WP7 development, however! Charlie Kindel announced an Install problem with the Developer Tools CTP Refresh and the WP7 tools... check this out if you're having problems. John Papa has a good post up on the happenings yesterday: Expression Studio 4 Launch of Blend, SketchFlow, Encoder and More! Erik Mork & Company's latest "This Week in Silverlight" is titled First Drop: Prism v4 – First Drop is Available From SilverlightCream.com: Animated navigation between Pages Miroslav Miroslavov has Part 8 of his "Silverlight in Action" series up, detailing cool things from the CompleteIT site... this one is on Animated navigation between pages. Subtitling videos Chris Klug got a gig adding subtitles to videos for Microsoft (sweet) ... and no, not *that* kind of subtitles... read how he approached the final solution. Silverlight Watermark TextBox I'm not sure we can have too many Watermark TextBoxes, and neither does Beau , who sent me a link to this one... give it a dance and decide. Blend 4: Collaborative SketchFlow Feedback with SharePoint With the new Blend release, Christian Schormann has a post up describing the lashup to Sharepoint for sharing Sketchflow and getting feedback. New Utility, Links, and Tutorials for Path-Based Layout Christian Schormann also has a collection of resources for Path-Based Layouts, including a utility "that lets you apply a whole bunch of position-specific effects without having to write any code"... lots of links to resources here. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application Dan Wahlin draws on his recent experience and lays out some of the fun and pitfalls of building LOB apps in Silverlight... WCF, MVVM, slides, and code included WPF (and Silverlight): Choose your Fonts and Text Rendering Options Wisely Pete Brown has a great post up on using fonts wisely across multiple platforms... lots of info and good discussion in the comments as well. Ball Watch USA Remember the awesome watch Michael S. Scherotter did in Silverlight 1 and then converted to Updated Ball Trainmaster Cannonball Watch to Silverlight 2? Well... there's now a contest underfoot and 8 videos to help you get started... all good stuff, and good luck! ... Michael has a post up about the contest: Enter to Win a Ball Watch by Creating One in Silverlight Announcing Sketchables – Rapid Mockup Creation with SketchFlow By way of Jesse Libertyhttp://jesseliberty.com/2010/06/08/why-developers-should-must-do-care-about-the-new-expression-blend/, this is a cool production by Philipp Sumi about a simple mockup framework he's created. Perst - a database for Windows Phone 7 Silverlight I think one of my first comments to Michael Washington back at the MVP Summit 2010 was that we'd need a database engine, and too cool, but we've got one, Andy Wigley discusses Perst in this post... to save you some time, here's the Perst site A Chrome and Glass Theme - Part 7 Phil Middlemiss has part 7 of his great theme-building series up... this time he's giving the accordian control a once-over. 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how do view/debug on-the-wire messages, or frames as they are called in WebSocket parlance, over this upgraded connection. This blog will use the application built in TOTD #183. First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183 then WebSocket is combination of Protocol and JavaScript API. A browser supporting WebSocket, or not, means they understand your web pages with the WebSocket JavaScript. caniuse.com/websockets provide a current status of WebSocket support in different browsers. Most of the major browsers such as Chrome, Firefox, Safari already support WebSocket for the past few versions. As of this writing, IE still does not support WebSocket however its planned for a future release. Viewing WebSocket farmes require special settings because all the communication happens over an upgraded HTTP connection over a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages. Lets get started! Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab and selecting the endpoint hosting the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received. You need to refresh the panel by clicking on the endpoint again. To see more detailed information about the WebSocket frames, you need to type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address to your WebSocket endpoint and see some basic information about connection and bytes exchanged between the client and the endpoint. Clicking on the blue text "source dependency ..." shows more details about the handshake. If you are interested in viewing the exact payload of WebSocket messages then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more details about the raw messages exchanged over the network. However because they provide lot more information so they need to be configured in order to view the relevant information. Wireshark (nee Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For this blog purpose, we'll assume that the WebSocket endpoint is hosted on the local machine. These tools do allow to sniff traffic across the network though. Wireshark is quite a comprehensive tool and we'll capture traffic on the loopback address. Start wireshark, select "loopback" and click on "Start". By default, all traffic information on the loopback address is displayed. That includes tons of TCP protocol messages, applications running on your local machines (like GlassFish or Dropbox on mine), and many others. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on "Say Hello" button once. 
The output in Wireshark lists the captured packets in order. Here is a description of the messages exchanged: Message #4: Initial HTTP request of the JSP page Message #6: Response returning the JSP page Message #16: HTTP Upgrade request Message #18: Upgrade request accepted Message #20: Request favicon Message #22: Responding with favicon not found Message #24: Browser making a WebSocket request to the endpoint Message #26: WebSocket endpoint responding back You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages? Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) TOTD #183 - Getting Started with WebSocket in GlassFish Subsequent blogs will discuss the following topics (not necessarily in that order) ... Binary data as payload Custom payloads using encoder/decoder Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API
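    If a command-line capture is preferred over the Wireshark GUI, something along these lines should show both the HTTP upgrade and the individual frames (a hedged sketch; the interface name, the port and the websocket display filter are assumptions about the setup and Wireshark version):
        tshark -i lo -f "tcp port 8080" -R "http || websocket"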

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and stream audio and video. It includes libavcodec – the leading audio/video codec library. FFmpeg hosting gets its name from a set of server side programs (modules) called FFmpeg. There are a number of applications or web scripts available, which allow webmasters to create their own video sharing websites. Video hosting typically requires: PHP 4.3 and above (including support of CLI) Mencoder and also Mplayer FFMpeg-PHP MySQL database server LAME MP3 Encoder Libogg + Libvorbis GD Library 2 or higher CGI-BIN There are number of web service providers who provide FFmpeg hosting service. Following is a list of some of the Best FFmpeg hosting providers for both Linux and Windows Server below. Dream Host Dreamhost provides for web based email access, mail filtering, spam filtering, unlimited email ids, vacation autoresponder, python support, full CGI access and many more services. Price: $7.95 View Details Micfo It offers unlimited disk space and bandwidth. Other services include free domain for life and free Website Transfer with many more services. All in all one of the best option to consider. Price: $5 View Details Host Upon HostUpon offers FFMpeg Hosting on all their hosting packages, with readily installed modules to start a Video website or Social Network with Video uploading. These scripts such as Boonex Dolphin / PHPMotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their ffmpeg. Their FFMPEG hosting plan offers 24/7/365 support with typical response time of 15min or less. Price: $5.95 View Details DownTown Host DownTown Host provides full and exceptional support by live chat and telephone. It has high-power, modern servers and the finest web server technology. It offers free search engine Submission and continuous data backup protection with free email forwarding and site move. There are many more services too. Site5 This ffmpeg service provider offers uptime guarantee, a real time stats on each server and many more attractive services. Price: $4.95 View Details Cirtex Hosting Cirtex Hosting allows to host 7 websites & domains and provides for unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49 View Details FLV Hosting FLV hosting supplies RTMP SERVER STREAMING for large size video streaming and server side recording. It is flexible and costs less. They customize to the clients requirements. Price: $9.95 View Details AptHost This hosting service provides for 24x7x365 Premium Support and fully ffmpeg enabled services. Price: $4.95 View Details HostMDS Great Support, Priced Low. It provides for SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, front page extentions, 24/7 Support, FREE Domain transfer and spam filtering. It offers instant account setup, low latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95 View Details Related posts:Best WordPress Video Themes for a Video Blog Free Web Based Applications 24+ Coda Alternatives for Windows and Linux

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard

    - by Stuart Brierley
    Recently I decided to try out the BizTalk Server Pipeline Component Wizard when creating a new pipeline component for BizTalk 2009. There are different versions of the wizard available, so be sure to download the appropriate version for the BizTalk environment that you are working with. Following the download and expansion of the zip file, you should be left with a Visual Studio solution.  Open this solution and build the project. Following this installation is straight foward - locate and run the built setup.exe file in the PipelineComponentWizard Setup project and click through the small number of installation screens. Once you have completed installation you will be ready to use the wizard in Visual Studio to create your BizTalk Pipeline Component. Start by creating a new project, selecting BizTalk Projects then BizTalk Server Pipeline Component.  You will then be presented with the splash screen. The next step is General Setup, where you will detail the classname, namespace, pipeline and component types, and the implementation language for your Pipeline Component. The options for pipeline type are Receive, Send or Any. Depending on the pipeline type chosen there are different options presented for the component type, matching those available within the BizTalk Pipelines themselves: Receive - Decoder, Disassembling Parser, Validate, Party Resolver, Any. Send -  Encoder, Assembling Serializer, Any. Any - Any. The options for implementation language are C# or VB.Net Next you must set up the UI settings - these are the settings that affect the appearance of the pipeline component within Visual Studio. You must detail the component name, version, description and icon.  Next is the definition of the variables that the pipeline component will use.  The values for these variables will be defined in Visual Studio when creating a pipeline. The options for each variable you require are: Designer Property - The name of the variable. Data Type - String, Boolean, Integer, Long, Short, Schema List, Schema With None Clicking finish now will complete the wizard stage of the creation of your pipeline component. Once the wizard has completed you will be left with a BizTalk Server Pipeline Component project containing a skeleton code file for you to complete.   Within this code file you will mainly be interested in the execute method, which is left mostly empty ready for you to implement your custom pipeline code:          #region IComponent members         /// <summary>         /// Implements IComponent.Execute method.         /// </summary>         /// <param name="pc">Pipeline context</param>         /// <param name="inmsg">Input message</param>         /// <returns>Original input message</returns>         /// <remarks>         /// IComponent.Execute method is used to initiate         /// the processing of the message in this pipeline component.         /// </remarks>         public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)         {             //             // TODO: implement component logic             //             // this way, it's a passthrough pipeline component             return inmsg;         }         #endregion Once you have implemented your custom code, build and compile your Custom Pipeline Component then add the compiled .dll to C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components . 
When creating a new pipeline, in Visual Studio reset the toolbox and the custom pipeline component should appear ready for you to use in your Biztalk Pipeline. Drop the pipeline component into the relevant pipeline stage and configure the component properties (the variables defined in the wizard). You can now deploy and use the pipeline as you would any other custom pipeline.

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the ringer getting his BIP barcoding sorted out but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:   This is an example of the GS1_128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements – custom template for each - a pain!). I am using custom java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the rtf template in MS Word, and it replaces my xsl code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the xsl code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn’t like the gibberish if I upload the template that has it). Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MSWord and not a plugin problem. That's all well and good but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, getting a barcode font to appear in the PDF output, you need employ the use of the xdo.cfg file in the template builder config directory.You would normally have an entry such as this:         <font family="Code 128" style="normal" weight="normal">        <truetype path="C:\windows\fonts\128R00.TTF" />       </font>to map a barcode font to get it to render in the PDF output when testing from the template builder plugin.   Mike's issue is only present when the formfield is highlighted with a barcode font. The other fields in the template are OK. What you can do to get around the issue is to bend the config entry to get around having to use the barcode font in the template at all. Changing the entry to something like:         <font family="Calibri" style="normal" weight="normal">        <truetype path="C:\windows\fonts\128R00.TTF" />       </font>   Note that we are mapping the Calibri; a humanly readable and non 'erroring' font in the template, to the code 128 barcode font. Where you used to highlight the field with the barcode in MSWord, you now use the Calibri font instead. At run time, BIP will go look for the Calibri font mapping and will drop in the Code128 font. Of course, Calibri is an example; you need to pick a font that you are not going to use any where else in the layout.

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF using wsHttp. I'm using message security and would like to continue to do so. Using this setup I would like to transfer serialized object graph which can sometimes approach around 300MB or so but when I try to do so I've started seeing a exception of type System.InsufficientMemoryException appear. After a little research it appears that by default in WCF that a result to a service call is contained within a single message by default which contains the serialized data and this data is buffered by default on the server until the whole message is completely written. Thus the memory exception is being caused by the fact that the server is running out of memory resources that it is allowed to allocate because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem however it is not clear to me what that involves and whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/Message Security). So far I understand that to use streaming message security would not work because message encryption and decryption need to work on the whole set of data and not a partial message. Chunking however sounds like it might be possible however it is not clear to me how it would be done with the other constraints that I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing it I would greatly appreciate it. Related resources: Chunking Channel How to: Enable Streaming Large attachments over WCF Custom Message Encoder Another spotting of InsufficientMemoryException I'm also interested in any type of compression that could be done on this data but it looks like I would probably be best off doing this at the transport level once I can transition into .NET 4.0 so that the client will automatically support the gzip headers if I understand this properly.

    Read the article

  • Problem with my whiteboard application

    - by swift
    I have to develop a whiteboard application in which both the local user and the remote user should be able to draw simultaneously, is this possible? If possible then any logic? I have already developed a code but in which i am not able to do this, when the remote user starts drawing the shape which i am drawing is being replaced by his shape and co-ordinates. This problem is only when both draw simultaneously. any idea guys? Here is my code class Paper extends JPanel implements MouseListener,MouseMotionListener,ActionListener { static BufferedImage image; int bpressed; Color color; Point start; Point end; Point mp; Button elipse=new Button("elipse"); Button rectangle=new Button("rect"); Button line=new Button("line"); Button empty=new Button(""); JButton save=new JButton("Save"); JButton erase=new JButton("Erase"); String selected; int ex,ey;//eraser DatagramSocket dataSocket; JButton button = new JButton("test"); Client client; Point p=new Point(); int w,h; public Paper(DatagramSocket dataSocket) { this.dataSocket=dataSocket; client=new Client(dataSocket); System.out.println("paper"); setBackground(Color.white); addMouseListener(this); addMouseMotionListener(this); color = Color.black; setBorder(BorderFactory.createLineBorder(Color.black)); //save.setPreferredSize(new Dimension(100,20)); save.setMaximumSize(new Dimension(75,27)); erase.setMaximumSize(new Dimension(75,27)); } public void paintComponent(Graphics g) { try { g.drawImage(image, 0, 0, this); Graphics2D g2 = (Graphics2D)g; g2.setPaint(Color.black); if(selected==("elipse")) g2.drawOval(start.x, start.y,(end.x-start.x),(end.y-start.y)); else if(selected==("rect")) g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); else if(selected==("line")) g2.drawLine(start.x,start.y,end.x,end.y); } catch(Exception e) {} } //Function to draw the shape on image public void draw() { Graphics2D g2 = image.createGraphics(); g2.setPaint(color); if(selected=="line") g2.drawLine(start.x, start.y, end.x, end.y); if(selected=="elipse") g2.drawOval(start.x, start.y, (end.x-start.x),(end.y-start.y)); if(selected=="rect") g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); repaint(); g2.dispose(); start=null; } //To add the point to the board which is broadcasted by the server public synchronized void addPoint(Point ps,String varname,String shape,String event) { try { if(end==null) end = new Point(); if(start==null) start = new Point(); if(shape.equals("elipse")) selected="elipse"; else if(shape.equals("line")) selected="line"; else if(shape.equals("rect")) selected="rect"; else if(shape.equals("erase")) { selected="erase"; erase(); } if(end!=null && start!=null) { if(varname.equals("end")) end=ps; if(varname.equals("mp")) mp=ps; if(varname.equals("start")) start=ps; if(event.equals("drag")) repaint(); else if(event.equals("release")) draw(); } } catch(Exception e) { e.printStackTrace(); } } //To set the size of the image public void setWidth(int x,int y) { System.out.println("("+x+","+y+")"); w=x; h=y; image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); Graphics2D g2 = image.createGraphics(); g2.setPaint(Color.white); g2.fillRect(0,0,w,h); g2.dispose(); } //Function which provides the erase functionality public void erase() { Graphics2D pic=(Graphics2D) image.getGraphics(); pic.setPaint(Color.white); pic.fillRect(start.x, start.y, 10, 10); } //Function to add buttons into the panel, calling this function returns a panel public JPanel addButtons() { JPanel buttonpanel=new JPanel(); JPanel row1=new JPanel(); JPanel 
row2=new JPanel(); JPanel row3=new JPanel(); JPanel row4=new JPanel(); buttonpanel.setPreferredSize(new Dimension(80,80)); //buttonpanel.setMinimumSize(new Dimension(150,150)); row1.setLayout(new BoxLayout(row1,BoxLayout.X_AXIS)); row1.setPreferredSize(new Dimension(150,150)); row2.setLayout(new BoxLayout(row2,BoxLayout.X_AXIS)); row3.setLayout(new BoxLayout(row3,BoxLayout.X_AXIS)); row4.setLayout(new BoxLayout(row4,BoxLayout.X_AXIS)); buttonpanel.setLayout(new BoxLayout(buttonpanel,BoxLayout.Y_AXIS)); elipse.addActionListener(this); rectangle.addActionListener(this); line.addActionListener( this); save.addActionListener( this); erase.addActionListener( this); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row1.add(elipse); row1.add(Box.createRigidArea(new Dimension(5,0))); row1.add(rectangle); buttonpanel.add(row1); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row2.add(line); row2.add(Box.createRigidArea(new Dimension(5,0))); row2.add(empty); buttonpanel.add(row2); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row3.add(save); buttonpanel.add(row3); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row4.add(erase); buttonpanel.add(row4); return buttonpanel; } //To save the image drawn public void save() { try { ByteArrayOutputStream bos = new ByteArrayOutputStream(); JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(bos); JFileChooser fc = new JFileChooser(); fc.showSaveDialog(this); encoder.encode(image); byte[] jpgData = bos.toByteArray(); FileOutputStream fos = new FileOutputStream(fc.getSelectedFile()+".jpeg"); fos.write(jpgData); fos.close(); //add replce confirmation here } catch (IOException e) { System.out.println(e); } } public void mouseClicked(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseEntered(MouseEvent arg0) { } public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } public void mousePressed(MouseEvent e) { if(selected=="line"||selected=="erase") { start=e.getPoint(); client.broadcast(start,"start", selected,"press"); } else if(selected=="elipse"||selected=="rect") { mp = e.getPoint(); client.broadcast(mp,"mp", selected,"press"); } } public void mouseReleased(MouseEvent e) { if(start!=null) { if(selected=="line") { end=e.getPoint(); client.broadcast(end,"end", selected,"release"); } else if(selected=="elipse"||selected=="rect") { end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); client.broadcast(end,"end", selected,"release"); } draw(); } //start=null; } public void mouseDragged(MouseEvent e) { if(end==null) end = new Point(); if(start==null) start = new Point(); if(selected=="line") { end=e.getPoint(); client.broadcast(end,"end", selected,"drag"); } else if(selected=="erase") { start=e.getPoint(); erase(); client.broadcast(start,"start", selected,"drag"); } else if(selected=="elipse"||selected=="rect") { start.x = Math.min(mp.x,e.getX()); start.y = Math.min(mp.y,e.getY()); end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); client.broadcast(start,"start", selected,"drag"); client.broadcast(end,"end", selected,"drag"); } repaint(); } @Override public void mouseMoved(MouseEvent arg0) { // TODO Auto-generated method stub } public void actionPerformed(ActionEvent e) { if(e.getSource()==elipse) selected="elipse"; if(e.getSource()==line) selected="line"; if(e.getSource()==rectangle) selected="rect"; if(e.getSource()==save) save(); if(e.getSource()==erase) { selected="erase"; erase(); } } } class Button extends JButton { String name; 
public Button(String name) { this.name=name; Dimension buttonSize = new Dimension(35,35); setMaximumSize(buttonSize); } public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D)g; g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); //g2.setStroke(new BasicStroke(1.2f)); if (name == "line") g.drawLine(5,5,30,30); if (name == "elipse") g.drawOval(5,7,25,20); if (name== "rect") g.drawRect(5,5,25,23); } }

    Read the article

  • Trying to save file from Flash to PHP using $GLOBALS["HTTP_RAW_POST_DATA"]

    - by jolyonruss
    Let me start by saying PHP isn't my forte; I'm usually reluctant to try working with it because of problems exactly like this. The code works fine on my local machine under MAMP and on my server, but doesn't on the client's server :'( So what am I trying to do? Well - save an image from Flash onto the server, simple right?! I'm using the method described on this site here: http://designreviver.com/tutorials/actionscript-3-jpeg-encoder-revealed-saving-images-from-flash/ but have made a small alteration so that instead of echoing the jpg, causing the browser to download it locally, I do an fwrite and an fclose to save it to the server. Here is my PHP: $imageFile = '../images/' . $_GET['name']; $imageHandle = fopen($imageFile, "w"); fwrite($imageHandle, $jpg); fclose($imageHandle); } ? I've done a phpinfo() on my client's server and it's running 5.2.2; my host is running 5.2.11. I don't know if much can have changed in those 9 minor revisions? I've also read another question on here which suggests making sure always_populate_raw_post_data is set to ON, but it's set to OFF on all of the server environments I've been testing in. I'm doing some XML saving using file_get_contents('php://input') which I've tried but failed to get working with images. Any help would be gratefully received. I'm happy to post the AS3 as well, but it's EXACTLY the same as the example I've linked above and works locally. As far as I can tell the problem lies with the PHP. Cheers.

    Read the article

  • Percent-Encoded Percent in URI

    - by Lukas
    In our application, it is possible for a user to upload files then download them later. We don't restrict them from having any special characters in the file name. The problem comes in when we create the link for the user to download the file. I use the Java URL encoder to encode the file name that gets put into the href of the link, but I'm still having problems with percent (%) signs. For example, if the user uploads a file named fi%le.jpg, the href that gets generated is fi%25le.jpg, and everything is fine. The problem is when the percent sign is right before the period (i.e., file%.jpg, which gets converted to file%25.jpg). When the user clicks on the link, they get a 404 (Not Found) error. The strange thing is that it is not a problem if the two characters following the percent sign are hex characters.... Weird, eh? Any help is appreciated. I am using Tomcat/Struts. Could the built-in URL decoder have anything to do with this problem?

    Read the article

  • VLC desktop streaming

    - by StackedCrooked
    Edit I stopped using VLC and switched to GMax FLV Encoder. It does a much better job IMO. Original post I am sending my desktop (screen) as an H264 video stream to another machine that saves it to a file using the following command lines: Sender of the stream: vlc -I dummy --sout='#transcode{vcodec=h264,vb=512,scale=0.5} :rtp{mux=ts,dst=192.168.0.1,port=4444}' Receiver of the stream: vlc -I rc rtp://@:4444 --sout='#std{access=file,mux=ps,dst=/home/user/output.mp4}' --ipv4 This works, but there are a few issues: The file is not playable with most players. VLC is able to play back the file but with some weirdness: = it takes about 10 seconds before the playback actually begins. = seeking doesn't work. Can someone point me in the right direction on how to fix these issues? EDIT: I made a little progress. The initial delay in playback is because the player is waiting for a keyframe. By forcing the sender of the stream to create a new key-frame every 4 seconds I could decrease the delay: :screen-fps=10 --sout='#transcode{vcodec=h264,venc=x264{keyint=40},vb=512,scale=0.5} :rtp{mux=ts,dst=192.168.0.1,port=4444}' The seeking problem is not solved, but I understand it a little better. The RTP stream is saved as a file in its original streaming format, which is normally not playable as a regular video file. VLC manages to play this file, but most other players don't. So I need to convert it to a regular video file. I am currently investigating whether I can do this with ffmpeg if I provide it with an SDP file for the recorded stream. All help is welcome!
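    Since the receiver dumps the raw stream into a PS container, a plain remux after recording may already give a file that other players can seek in (a hedged sketch; whether this build fixes up the start timestamps on a stream copy is an assumption):
        ffmpeg -i /home/user/output.mp4 -vcodec copy remuxed.mp4
    Remuxing into MP4 writes a proper index (the moov atom), which is what most players rely on for seeking; the stream dumped straight to disk has none.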

    Read the article

  • Segmentation fault while feeding in an mpeg file through ffmpeg

    - by angel6
    Hi, I've set up FFserver as the streaming server. I'm trying to feed in an mpeg file. But it comes up with a segmentation fault. Does anyone know how to fix this? The following is the command-line output I get $ ./ffmpeg -i test1.mpg http://localhost:8090/feed1.ffm FFmpeg version SVN-r22945, Copyright (c) 2000-2010 the FFmpeg developers built on Apr 22 2010 19:18:45 with gcc 4.4.1 configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libx264 --enable-libxvid --enable-x11grab libavutil 50.14. 0 / 50.14. 0 libavcodec 52.66. 0 / 52.66. 0 libavformat 52.61. 0 / 52.61. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.10. 0 / 0.10. 0 libpostproc 51. 2. 0 / 51. 2. 0 [mpeg @ 0xab0c420]max_analyze_duration reached Input #0, mpeg, from 'test1.mpg': Duration: 00:00:20.96, start: 0.768300, bitrate: 269 kb/s Stream #0.0[0x1e0]: Video: mpeg1video, yuv420p, 160x120 [PAR 1:1 DAR 4:3], 104857 kb/s, 30 fps, 30 tbr, 90k tbn, 30 tbc Stream #0.1[0x1c0]: Audio: mp2, 32000 Hz, 2 channels, s16, 64 kb/s Output #0, ffm, to 'http://localhost:8090/feed1.ffm': Metadata: encoder : Lavf52.61.0 Stream #0.0: Audio: mp2, 22050 Hz, 1 channels, s16, 48 kb/s Stream #0.1: Video: mpeg1video, yuv420p, 160x128, q=2-31, 40 kb/s, 1000k tbn, 50 tbc Stream #0.2: Audio: libmp3lame, 22050 Hz, 1 channels, s16, 64 kb/s Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 256 kb/s, 1000k tbn, 15 tbc Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Stream #0.1 -> #0.2 Stream #0.0 -> #0.3 Press [q] to stop encoding Segmentation fault

    Read the article

  • Copy EXIF Metadata from TIF to JPEG in C# / VB.NET

    - by George
    Hello! I would really appreciate it if you could shed light on this problem. I have 2 images: one was created from a TIF file with metadata, the other is an in-memory image that will be saved as JPEG. Then I use this routine to transfer EXIF metadata from the first image to the second one (that is, from the one created from the TIF file to the in-memory image): For Each _p In image1.PropertyItems image2.SetPropertyItem(_p) Next And this works perfectly fine. All EXIF items are successfully copied. I confirmed this by using watches in debug mode. The problem comes when you save image2 as JPEG using this: Dim eps As EncoderParameters = New EncoderParameters(1) eps.Param(0) = New EncoderParameter(Encoder.Quality, 85) Dim ici As ImageCodecInfo = GetEncoderInfo("image/jpeg") image2.Save("C:\1.jpg", ici, eps) However, only very few EXIF properties are saved with the image2 JPEG file, namely camera model and camera maker. If I save image2 as TIF, all properties from the original TIF will be there. Can anyone explain why that is? Thanks.

    Read the article

  • ASP.NET error on Bitmap.Save "Exception (0x80004005): A generic error occurred in GDI+."

    - by Batu
    Hi, I have a function which first reads an image from disk, resizes it and then saves to another directory. when i use the Bitmap.Save(directory + theimagename) it returns the error as i stated in the question title. i checked the directory is right, and the given image name doesn't exist in that dir. what is weird, is that the same code works great on the local machine. but when i upload it to my shared server. it just doesn't work. the code is below. bmpOut = new Bitmap(Size, Size); Graphics g = Graphics.FromImage(bmpOut); g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic; g.FillRectangle(Brushes.White, 0, 0, Size, Size); int topBottomPadding = 0; int leftRightPadding = 0; if (Size > lnNewWidth + 1) leftRightPadding = Convert.ToInt32((Size - lnNewWidth) / 2); else if (Size > lnNewHeight + 1) topBottomPadding = Convert.ToInt32((Size - lnNewHeight) / 2); g.DrawImage(loBMP, leftRightPadding, topBottomPadding, lnNewWidth, lnNewHeight); Bitmap bmp = new Bitmap(bmpOut); if (bmp != null) bmp.Save(ResizedOutput); bmp.Dispose(); bmpOut.Dispose(); g.Dispose(); loBMP.Dispose(); stack trace: [ExternalException (0x80004005): A generic error occurred in GDI+.] System.Drawing.Image.Save(String filename, ImageCodecInfo encoder, EncoderParameters encoderParams) +377630 System.Drawing.Image.Save(String filename, ImageFormat format) +69 System.Drawing.Image.Save(String filename) +25 Utilities.ResizeImage(String fileName, String mode) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Utilities.cs:181 Link.ToProductImage(String fileName) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Link.cs:79 Product.PopulateControls(ProductDetails pd) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:37 Product.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:20

    Read the article

  • How can you make a PHP application require a key to work?

    - by jasondavis
    About 4 years ago I used a PHP product called amember pro; it is a membership script which has plugins for like 30 different payment processors. It was an easy way to set up an automated membership site where users would make a payment and get access to a certain area. The script used ionCube http://www.ioncube.com/sa_encoder.php to prevent non-paying users from using the script; it required that you register the domain that the script would be used on, and you were then given a key to enter into the file that would make the system/script work. Now I am wanting to know how to do such a task. I know the ionCube encoder just makes it hard to see the code; in the script I mention, they would just have a small section at the top of one of the included pages that was encrypted, and without that part of the code it would break. In addition, if the owner of the script did not put your domain in the list and give you a valid key it would not work, and if you tried to use the script on a different domain it would not work. I realize that somewhere in the encrypted code it must have sent your key to their server and checked that it was valid for the domain name it is on, or possibly it did not even do that; maybe the key would just verify that it matched the domain the script was on, which is more likely what it did. Here is where the real question is: how would you make a script require the portion that is encrypted? If I made a script and had a small encrypted part at the top, it would seem a user would be able to easily just remove the encrypted part and figure out what the non-encrypted part is doing and fix it to work. Any ideas?

    Read the article

  • Get Image from Byte Array.

    - by Arun Thakkar
    Hello everyone!! Hope you all are fine and also in one of your best moods!! Hope you all are enjoying iPhone development. I have one issue that I am not able to solve; maybe I don't know iPhone concepts in depth. So it's my humble request to you to guide me, or suggest or share your ideas. I have an issue with getting an image from a byte array. I am calling a web service which returns an image in the form of a byte array as the response. I have converted this byte array into NSData. Now I have NSData, but when I try to get an image from this NSData, it shows nil. I did lots of R&D and found one suggestion to use a base64 encoder, but unfortunately, for lack of proper guidance, I was not able to implement that. I was also suggested to use the OpenSSL library for base64 from the url http://www.cocoadev.com/index.pl?BaseSixtyFour but again I was not able to include those two #include files, as in newer versions of the SDK 3.X family Apple has deprecated those (as I guess). So now I need help from you guys. Kindly help me if you have a solution or if you know the steps to solve this. Looking forward. Regards, Arun Thakkar

    Read the article

  • "A generic error occurred in GDI+" error while showing uploaded images

    - by Prasad
    i am using the following code to show the image that has been saved in my database from my asp.net mvc(C#) application:. public ActionResult GetSiteHeaderLogo() { SiteHeader _siteHeader = new SiteHeader(); Image imgImage = null; long userId = Utility.GetUserIdFromSession(); if (userId > 0) { _siteHeader = this.siteBLL.GetSiteHeaderLogo(userId); if (_siteHeader.Logo != null && _siteHeader.Logo.Length > 0) { byte[] _imageBytes = _siteHeader.Logo; if (_imageBytes != null) { using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes)) { imgImage = Image.FromStream(imageStream); } } string sFileExtension = _siteHeader.FileName.Substring(_siteHeader.FileName.IndexOf('.') + 1, _siteHeader.FileName.Length - (_siteHeader.FileName.IndexOf('.') + 1)); Response.ContentType = Utility.GetContentTypeByExtension(sFileExtension.ToLower()); Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.BufferOutput = false; if (imgImage != null) { ImageFormat _imageFormat = Utility.GetImageFormat(sFileExtension.ToLower()); imgImage.Save(Response.OutputStream, _imageFormat); imgImage.Dispose(); } } } return new EmptyResult(); } It works fine when i upload original image. But when i upload any downloaded images, it throws the following error: System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+. System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+. at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams) at System.Drawing.Image.Save(Stream stream, ImageFormat format) For. Ex: When i upload the original image, it shows as logo in my site and i downloaded that logo from the site and when i re-upload the same downloaded image, it throws the above error. It seems very weird to me and not able to find why its happening. Any ideas on this?

    Read the article

  • Handling Corrupted JPEGs in C#

    - by ddango
    We have a process that pulls images from a remote server. Most of the time, we're good to go, the images are valid, we don't timeout, etc. However, every once and awhile we see this error similar to this: Unhandled Exception: System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+. at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderPa rameters encoderParams) at ConsoleApplication1.Program.Main(String[] args) in C:\images\ConsoleApplic ation1\ConsoleApplication1\Program.cs:line 24 After not being able to reproduce it locally, we looked closer at the image, and realized that there were artifacts, making us suspect corruption. Created an ugly little unit test with only the image in question, and was unable to reproduce the error on Windows 7 as was expected. But after running our unit test on Windows Server 2008, we see this error every time. Is there a way to specify non-strictness for jpegs when writing them? Some sort of check/fix we can use? Unit test snippet: var r = ReadFile("C:\\images\\ConsoleApplication1\\test.jpg"); using (var imgStream = new MemoryStream(r)) { using (var ms = new MemoryStream()) { var guid = Guid.NewGuid(); var fileName = "C:\\images\\ConsoleApplication1\\t" + guid + ".jpg"; Image.FromStream(imgStream).Save(ms, ImageFormat.Jpeg); using (FileStream fs = File.Create(fileName)) { fs.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length); } } }

    Read the article
