Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • Best Design pattern for social media file transfer

    - by Onema
    Our system would like our clients to link their accounts with different social media sites like YouTube, Vimeo, Facebook, MySpace and so on. One of the benefits we would like to give the user is the ability to transfer, update and delete files they have uploaded to our sites and push them to the social media sites mentioned above. These files could be videos, images or audio. We started thinking about using a strategy pattern, as all of these sites share a common process (authentication, connection, use the API to transfer/edit/delete the file), but we soon realized that it may not work, as we may want to use some of the extended functionality that is specific to each service (e.g. associate a YouTube video with a channel, or upload images to a specific album on Facebook, and much, much more...). My question is, what would be the best structural design pattern to use for this scenario?
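
    A minimal Java sketch of the strategy idea described above (all type and method names are illustrative, not part of any real API): the shared steps live in one interface, while service-specific extras sit in narrower capability interfaces that individual providers can opt into.

        // Hypothetical interfaces -- a sketch of the shared process plus optional capabilities.
        interface MediaService {
            void authenticate(String user, String token);      // common step: authentication/connection
            String upload(java.io.File file);                  // returns a service-side id
            void update(String remoteId, java.io.File file);
            void delete(String remoteId);
        }

        // Extras that only some services support go into separate interfaces.
        interface ChannelSupport {
            void assignToChannel(String remoteId, String channelId);
        }

        interface AlbumSupport {
            void uploadToAlbum(java.io.File image, String albumId);
        }

        // A concrete provider implements the core strategy plus whatever extras it offers.
        class YouTubeService implements MediaService, ChannelSupport {
            public void authenticate(String user, String token) { /* calls to the YouTube API */ }
            public String upload(java.io.File file) { /* ... */ return null; }
            public void update(String remoteId, java.io.File file) { /* ... */ }
            public void delete(String remoteId) { /* ... */ }
            public void assignToChannel(String remoteId, String channelId) { /* ... */ }
        }

    Client code can then program against MediaService for the common flow and only look up a capability interface when a site-specific feature is needed.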

    Read the article

  • Git: How do I push a project that was downloaded from source?

    - by JZ
    I worked with a graphic designer who did not clone from my GitHub account; he downloaded the project source rather than using "git clone". A month has gone by since he pulled his files, and I now want to do the following tasks: create a new branch, push the graphic designer's project into that branch, and merge his branch with master. I've tried following the GitHub forking guide without much luck; when I attempt to push the files into a new branch I get an error: fatal: Not a git repository (or any of the parent directories): .git How do I do this?

    Read the article

  • SQL CE not loading from network share

    - by David Veeneman
    I installed VS 2010 RC yesterday, and suddenly, SQL Server CE isn't loading files from a network share. In projects compiled with VS 2008, if I try to open a SQL CE file located on a network share, I get an error that reads like this: Internal error: Cannot open the shared memory region. If I try to create a data connection in VS 2010 to a SQL CE file on a network share, I get this error: SQL Server Compact does not support opening database files on a network share. Can anyone shed any light on what's going on? Thanks.

    Read the article

  • nginx with passenger

    - by Luc
    Hello, I'm trying to move from Apache + Passenger to Nginx + Passenger on my Ubuntu Lucid Lynx box. When I install Passenger with sudo gem install passenger and then run sudo ./passenger-install-nginx-module from /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin, everything is fine (no errors). Nginx is downloaded, compiled and installed at the same time (when selecting the first option during the Passenger installation). By default it is installed in /opt/nginx, and I end up with the configuration file /opt/nginx/conf/nginx.conf (this conf file was automatically updated with the Passenger config). The thing I do not understand is that I also have the configuration file /etc/nginx/nginx.conf... what is the purpose of this one when the conf file in /opt/... seems to be the main one? When I run /etc/init.d/nginx start, it starts correctly saying that /etc/nginx/nginx.conf is OK... Does that mean it does not check the other conf file? I have updated the /etc/init.d/nginx script to add /opt/nginx/sbin at the beginning of the PATH, and now it seems the correct conf file is taken into account. It seems like I have two nginx installations, when I only relied on Passenger to install it... Thanks a lot for your help, I am kind of lost here :) Luc

    Read the article

  • Make two servers talk to each other

    - by Maksim
    I have an application written in GWT and hosted on Google App Engine/Java. In this application the user will have an option to upload video/audio/text files to the server. Those files could be big, up to 1 GB or so, and because GAE/J does not support large files I have to use another server to store them. This would be easy to implement if there were no cross-domain security restrictions in browsers. So what I'm thinking is to make the GAE server talk to my server (Glassfish, or any other Java server if needed) to tell it the URL of the file and, if possible, send the status of the uploaded file (what percentage has been uploaded) so I can show the status on the client's screen. Here is what I'm thinking of doing: when a user loads the GWT page that is hosted on GAE/J, he/she will upload the file to my server; my server will then send a response back to GAE, and GAE will send a response to the client. If this scenario is possible, what would be the best way to implement the GAE-to-Glassfish conversation?
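
    One way to picture the server-to-server leg described above is a plain HTTP callback from the storage server to the GAE app. The sketch below is only an illustration under assumed names (the /uploadStatus handler, its parameters and the app URL are made up), not a recommendation of a specific design.

        // Hypothetical sketch: the upload server reports progress to a GAE handler over HTTP.
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class ProgressNotifier {
            public static void notifyProgress(String fileUrl, int percentDone) throws Exception {
                URL url = new URL("http://example-app.appspot.com/uploadStatus"); // assumed endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                byte[] body = ("file=" + URLEncoder.encode(fileUrl, "UTF-8")
                        + "&percent=" + percentDone).getBytes("UTF-8");
                OutputStream out = conn.getOutputStream();
                out.write(body);                     // send the progress update
                out.close();
                int status = conn.getResponseCode(); // a real implementation would check this
                conn.disconnect();
            }
        }

    The GAE side would expose a matching handler and pass the value on to the GWT client, for example via polling.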

    Read the article

  • Handling missing resources

    - by Domchi
    I've just found myself in a situation where I needed to handle an exception I'll probably never get, so out of curiosity, let's do a small poll. Do you validate the presence of resources in your programs? I mean those resources which are installed with your program, like icons, images and similar. Generally, if those are missing, either your installer didn't do its job or the user randomly deleted files in your app. If you do validate their presence, what do you do when the files are not there? Of course, for web apps you'll have a nice 404 page or a broken link, but what about the rest? Fail early, yes, but leave handling failures to your compiler, or what?
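
    As a concrete illustration of the fail-early option (a minimal Java sketch; the resource paths are hypothetical), a desktop app can verify its bundled files at startup and abort with a clear message instead of failing later at some arbitrary point:

        // Check a fixed list of bundled resources at startup; the paths are made up for the example.
        final class ResourceCheck {
            private static final String[] REQUIRED = { "/icons/app.png", "/icons/save.png" };

            static void verifyOrFail() {
                for (String path : REQUIRED) {
                    if (ResourceCheck.class.getResource(path) == null) {
                        throw new IllegalStateException(
                                "Missing bundled resource: " + path + " (broken installation?)");
                    }
                }
            }
        }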

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this?

    The problem

    Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state. Windows' Disk management / Diskpart:

        Generic Flash Disk USB Device
        Disk ID: 33FA33FA
        Type : USB
        Status : Online
        Path : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : Yes
        Read-only : No
        Boot Disk : No
        Pagefile Disk : No
        Hibernation File Disk : No
        Crashdump Disk : No
        Clustered Disk : No

    What really confuses me is "Current Read-only State : Yes" and "Read-only : No".

    Attempted solutions

    So far, I've tried:

    - Formatting it in Windows (in Disk management, the format options are greyed out when right clicking).
    - DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk.):
        DISKPART> clean
        DiskPart has encountered an error: The media is write protected.
        See the System Event Log for more information.
      There was nothing in the event log.
    - Windows command line format:
        >format G:
        Insert new disk for drive G: and press ENTER when ready...
        The type of the file system is FAT32.
        Verifying 7740M
        Cannot format. This volume is write protected.
    - Windows chkdsk: see below for details
    - Kubuntu fsck (through VirtualBox USB passthrough): see below for details
    - Acronis True Image to format, to convert to GPT, to destroy and rebuild MBR, basically anything: failed (could not write to MBR)

    Details (and a nice story)

    Background: This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows. I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption.

    Leading up to the current issue: I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading it. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.
    I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between boot sector and its backup. I chose No action. It told me FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of Free cluster summary wrong. If I chose Correct, it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later. Cause?

    My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built-in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful; however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal, but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion.

    Attempts to fix

    This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However, the inability to do so even on a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses the first FAT automatically after telling me FATs differ. It still does the same Free cluster summary wrong afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots?

    I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive.

    Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded, labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died.

    Update 2: I've pulled the card off, and Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.

    Read the article

  • Cannot login to drupal in Chrome or Firefox, but Safari works

    - by WmasterJ
    Problem: Login is not working in Firefox and Chrome, but it does work in Safari.

    Details: We just moved a Drupal 6 installation to another host and followed some steps: we moved sites/site1/Themes/themeFolder to sites/all/Themes/themeFolder, then made these changes in the page-node-NNN.tpl.php files (searched all files in themes/themeFolder):

        1) find: /oldpath/ replace: /newpath/
        2) find: oldsubdomain. replace: www.
        3) find: .com/sites/ replace: .com/newpath/sites/

    Now when I log in, it fails in any browser when the wrong information is entered, but when it is correct it simply redirects to that user's profile page... and then nothing. There are no admin menus, no edit buttons for content, and it is as though it authenticated but somehow never stored anything that would help with the authentication later. The strange thing is that for three people with three different systems, Firefox and Chrome don't work but Safari does. We have ruled out the database and old cookies. Anyone have a good guess?

    Read the article

  • problem using base64 encoder and InputStreamReader

    - by karoberts
    I have some CLOB columns in a database that I need to put Base64 encoded binary files in. These files can be large, so I need to stream them; I can't read the whole thing in at once. I'm using org.apache.commons.codec.binary.Base64InputStream to do the encoding, and I'm running into a problem. My code is essentially this:

        FileInputStream fis = new FileInputStream(file);
        Base64InputStream b64is = new Base64InputStream(fis, true, -1, null);
        InputStreamReader reader = new InputStreamReader(b64is);
        preparedStatement.setCharacterStream(1, reader);

    When I run the above code, I get one of these during the execution of the update:

        java.io.IOException: Underlying input stream returned zero bytes

    It is thrown deep in the InputStreamReader code. Why would this not work? It seems to me like the reader would attempt to read from the Base64 stream, which would read from the file stream, and everything should be happy.
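
    For reference, here is a self-contained sketch (assuming only Apache Commons Codec on the classpath) that exercises the same encode-and-read pipeline outside the database; running something like this can help show whether the zero-byte reads come from the Base64 stream itself or from the way the JDBC driver consumes the Reader.

        // Standalone test of the FileInputStream -> Base64InputStream -> InputStreamReader chain.
        import java.io.FileInputStream;
        import java.io.InputStreamReader;
        import java.io.Reader;
        import org.apache.commons.codec.binary.Base64InputStream;

        public class Base64PipelineTest {
            public static void main(String[] args) throws Exception {
                FileInputStream fis = new FileInputStream(args[0]);
                Base64InputStream b64is = new Base64InputStream(fis, true, -1, null);
                Reader reader = new InputStreamReader(b64is, "US-ASCII"); // Base64 output is ASCII
                char[] buf = new char[8192];
                long total = 0;
                int n;
                while ((n = reader.read(buf)) != -1) {
                    total += n;  // consume and count the encoded characters
                }
                reader.close();
                System.out.println("Encoded length: " + total);
            }
        }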

    Read the article

  • Accessing a network shared folder with a username and password string in VB.NET

    - by Irene
    I am using the following code to read files from a network folder which is restricted to only one user:

        Shell("net use q: \\serveryname\foldername /user:admin pwrd", AppWinStyle.Hide, True, 10000)
        Process.Start(path)
        Shell("net use q: /delete")

    When I run this to open any PDF or JPG or any other file except Word/Excel/PowerPoint, everything works fine, but the problem comes when I access a Word file. In step one, I am granting permission to access the Word file. In step two, the Word file is opened. In step three, I am deleting the q: drive. The problem is that the Word file is still open, so I get a DOS window saying something like "some connections are still connected or searching some folders, do you want to force disconnect". Please help... how can I access a Word file (editable files) providing a user name and password from the code, while at the same time the user should not have direct access to any other folders?

    Read the article

  • Can FileInputStream.available() fool me?

    - by Tom Brito
    The FileInputStream.available() javadoc says: "Returns an estimate of the number of remaining bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes. In some cases, a non-blocking read (or skip) may appear to be blocked when it is merely slow, for example when reading large files over slow networks." I'm not sure about this check: if (new FileInputStream(xmlFile).available() == 0) — can I rely on empty files always returning zero?
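
    A small hedged sketch of the check in question (xmlFile is assumed to be a java.io.File, as in the original snippet); since available() is documented only as an estimate, the sketch also shows the more direct File.length() comparison one might use instead, and closes the stream rather than leaking it:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class EmptyFileCheck {
            public static boolean looksEmpty(File xmlFile) throws IOException {
                // Alternative that does not depend on available(): the file size itself.
                if (xmlFile.length() == 0) {
                    return true;
                }
                FileInputStream in = new FileInputStream(xmlFile);
                try {
                    return in.available() == 0;  // the estimate the question asks about
                } finally {
                    in.close();                  // avoid leaking the file descriptor
                }
            }
        }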

    Read the article

  • System.ArgumentException: Invalid hex character at DecryptAssemblyResource

    - by Radu094
    My webapp has been throwing these exceptions intermittently ever since we migrated to Mono + Apache. The error sounds more like a problem reading/processing some assembly, so I was wondering if I should be worried that there might be a problem with the hard drive?

        System.ArgumentException: Invalid hex character
          at System.Web.Configuration.MachineKeySectionUtils.ToHexValue (Char c, Boolean high) [0x00000] in <filename unknown>:0
          at System.Web.Configuration.MachineKeySectionUtils.GetBytes (System.String key, Int32 len) [0x00000] in <filename unknown>:0
          at System.Web.Handlers.ScriptResourceHandler.GetBytes (System.String val) [0x00000] in <filename unknown>:0
          at System.Web.Handlers.ScriptResourceHandler.DecryptAssemblyResource (System.String val, System.String& asmName, System.String& resName) [0x00000] in <filename unknown>:0
          at System.Web.Handlers.ScriptResourceHandler.ProcessRequest (System.Web.HttpContext context) [0x00000] in <filename unknown>:0
          at System.Web.Handlers.ScriptResourceHandler.System.Web.IHttpHandler.ProcessRequest (System.Web.HttpContext context) [0x00000] in <filename unknown>:0
          at System.Web.HttpApplication+<Pipeline>c__Iterator2.MoveNext () [0x00000] in <filename unknown>:0
          at System.Web.HttpApplication.Tick () [0x00000] in <filename unknown>:0

        Method: Void Application_Error(System.Object, System.EventArgs) at File: at Line Number: 0
        Method: Void ProcessError(System.Exception) at File: at Line Number: 0
        Method: Void Tick() at File: at Line Number: 0
        Method: Void Start(System.Object) at File: at Line Number: 0
        Method: Void System.Web.IHttpHandler.ProcessRequest(System.Web.HttpContext) at File: at Line Number: 0
        Method: Void Process(System.Web.HttpWorkerRequest) at File: at Line Number: 0
        Method: Void RealProcessRequest(System.Object) at File: at Line Number: 0
        Method: Void ProcessRequest(System.Web.HttpWorkerRequest) at File: at Line Number: 0
        Method: Void ProcessRequest() at File: at Line Number: 0
        Method: Void ProcessRequest(Mono.WebServer.MonoWorkerRequest) at File: at Line Number: 0
        Method: Void ProcessRequest(Int32, System.String, System.String, System.String, System.String, System.String, Int32, System.String, Int32, System.String, System.String[], System.String[], System.Object) at File: at Line Number: 0
        Method: Void InnerRun(System.Object) at File: at Line Number: 0
        Method: Void Run(System.Object) at File: at Line Number: 0

    Read the article

  • Setting up gcov in Xcode 3.1

    - by Algorithmic
    I'm trying to setup my Xcode project to be instrumented with gcov so I can determine the code coverage of my unit tests. All of the documentation I find online talks about settings that I don't find in Xcode 3.1, though. An example: To work with Coverstory, first you need to set up your target to work with gcov. This requires turning on "Instrument Program Flow", "Generate Test Coverage Files" and linking with the gcov library. (Using Coverstory) The closest thing I can find to "Instrument Program Flow" and "Generate Test Coverage Files" in my build settings is "Generate Profiling Code", which doesn't appear to do what I want it to do. Am I looking in the wrong place for these settings or are all of the examples I'm finding online stale?

    Read the article

  • iPhone progressive download audio player

    - by joynes
    Hi! I'm trying to implement a progressive download audio player for the iPhone, i.e. using HTTP and fixed-size MP3 files. I found the AudioStreamer project, but it seems very complicated and works best with endless streams. I need to be able to find out the total length of the audio files, and I also need to be able to seek within the files. I found a hacked derivative of AudioStreamer (http://www.saygoodnight.com/?p=14), but it doesn't seem to work very well for me. I'm wondering if there is a simpler way to achieve my goals, or if there are better working samples out there? I found the BASS library, but not much documentation about it. /Br Johannes

    Read the article

  • How to create a "hybrid" USB stick?

    - by rdesign
    Hey guys, I was wondering how to make a hybrid USB stick. That means a USB stick that runs under Mac and Windows and displays platform-specific content. Example: plug it in on Windows and index.html opens, while the Mac OS X files are invisible; plug it in on a Mac and indexMac.html opens, while the Windows files are invisible. I know that every USB stick can be read by both platforms. The Apple Mac OS X CD is something which inspired me. Thanks a lot.

    Read the article

  • Python file-io code listing current folder path instead of the specified one

    - by Tom Brito
    I have the code:

        import os
        import sys

        fileList = os.listdir(sys.argv[1])
        for file in fileList:
            if os.path.isfile(file):
                print "File >> " + os.path.abspath(file)
            else:
                print "Dir >> " + os.path.abspath(file)

    located in my music folder ("/home/tom/Music"). When I call it with python test.py "/tmp", I expected it to list my "/tmp" files and folders with the full path, but it printed lines like:

        Dir >> /home/tom/Music/seahorse-gw2jNn
        Dir >> /home/tom/Music/FlashXX9kV847
        Dir >> /home/tom/Music/pulse-DcIEoxW5h2gz

    That is, the correct file names but the wrong path (and these files are not in my Music folder either). What's wrong with this code?

    Read the article

  • Rails3 and `cd somewhere && do something`

    - by Samer Abukhait
    I have a Rails project that has other projects under it; the sub-projects have their own rake and bundler files. When I do ruby -e `cd sub-project && rake` or ruby -e `cd sub-project && bundle`, the commands work as expected and use the sub-project's rake/bundler files. However, when I do the same thing from a Rails 3 console (rails 3.0.3), rake gives the error "no such file to load -- initializer", and bundle operates as if it was fired from the root directory. I tried the same commands from a Rails 2.3.10 console and they worked as expected. Is Rails 3 doing something wrong here? I am using Ruby 1.9.2 via RVM:

        $ ruby -v
        ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]

    Read the article

  • Use DivX settings to encode to mp4 with ffmpeg

    - by sjngm
    I'm used to using VirtualDub to encode a video to an AVI container with the DivX codec (and MP3 for audio). Now I'm planning to use ffmpeg to encode videos to an MP4 container with the h264 codec. What I've figured out is that I need to use libx264 and one of those presets to make anything work. However, I'm amazed at the video bitrate ffmpeg uses for encoding. What I currently have is this little batch file:

        @ECHO OFF
        SETLOCAL
        SET IN=source.avs
        SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg
        SET PRESET=-fpre "%FFMPEG_PATH%\presets\libx264-lossless_slow.ffpreset"
        SET AUDIO=-acodec libmp3lame -ab 128000
        SET VIDEO=-vcodec libx264 -vb 1978000
        "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %VIDEO% %PRESET% test.mp4
        ENDLOCAL

    With this I tell ffmpeg to use 1978k as the bitrate, but ffmpeg uses 15000k+! I tried other presets, but they don't use my specified bitrate. Here are the presets I have:

        libx264-baseline.ffpreset
        libx264-ipod320.ffpreset
        libx264-ipod640.ffpreset
        libx264-lossless_fast.ffpreset
        libx264-lossless_max.ffpreset
        libx264-lossless_medium.ffpreset
        libx264-lossless_slow.ffpreset
        libx264-lossless_slower.ffpreset
        libx264-lossless_ultrafast.ffpreset

    ffmpeg version:

        FFmpeg git-N-29181-ga304071
        libavutil 50. 40. 1 / 50. 40. 1
        libavcodec 52.120. 0 / 52.120. 0
        libavformat 52.108. 0 / 52.108. 0
        libavdevice 52. 4. 0 / 52. 4. 0
        libavfilter 1. 79. 0 / 1. 79. 0
        libswscale 0. 13. 0 / 0. 13. 0

    Note that I don't use the latest version as it has problems with spaces in filenames. Here's what seems to be the full parameter list DivX 6.9.2 uses:

        -bvnn 1978000 -vbv 218691200,100663296,100663296 -dir "C:\Users\sjngm\AppData\Roaming\DivX\DivX Codec" -w -b 1 -use_presets=1 -preset=10 -windowed_fullsearch=2 -thread_delay=1

    What command line parameters would that be for ffmpeg?

    EDIT: Going with slhck's suggestion I tried a new 32-bit version. I have no idea if that is 0.9 or newer, I can't find that info.

        ffmpeg version N-36890-g67f5650
        libavutil      51. 34.100 / 51. 34.100
        libavcodec     53. 56.105 / 53. 56.105
        libavformat    53. 30.100 / 53. 30.100
        libavdevice    53.  4.100 / 53.  4.100
        libavfilter     2. 59.100 /  2. 59.100
        libswscale      2.  1.100 /  2.  1.100
        libswresample   0.  6.100 /  0.  6.100
        libpostproc    51.  2.100 / 51.  2.100

    I reworked my batch file to look like this (interestingly enough, I can't find the parameter -vprofile in the documentation):

        @ECHO OFF
        SETLOCAL
        SET IN=VTS_01_1.avs
        SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg
        SET PRESET=-vprofile high -preset veryslow
        SET AUDIO=-acodec libmp3lame -ab 128000
        SET VIDEO=-vcodec libx264 -vb 1978000
        "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %PRESET% %VIDEO% test.mp4
        ENDLOCAL

    I see that it now uses the bitrate properly (thanks to LongNeckbeard for pointing out that the lossless stuff ignores the bitrate!). Just in case you wonder how I came up with the 1978000, I'm using this formula, which I found valid for DivX files (I'm guessing the bitrate won't change that much for h264): width * height * 25 * 0.22 / 1000. I'm not sure if the 0.22 correlates with the CRF somehow. Overall, I forgot to say that I will use a two-pass scenario, which is why I don't use CRF here. I will try to read more about this. Currently I'm just trying to get something running that shows me that I'm doing something right (ffmpeg isn't the easiest tool to understand ;)).
C:\Program Files (x86)\ffmpeg\ffmpeg.exe" -i VTS_01_1.avs -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow test.mp4 The output is now: ffmpeg version N-36890-g67f5650 Copyright (c) 2000-2012 the FFmpeg developers built on Jan 16 2012 21:57:13 with gcc 4.6.2 configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 Input #0, avs, from 'VTS_01_1.avs': Duration: 00:58:46.12, start: 0.000000, bitrate: 0 kb/s Stream #0:0: Video: rawvideo (YV12 / 0x32315659), yuv420p, 576x448, 77414 kb/s, 25 tbr, 25 tbn, 25 tbc Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s File 'test.mp4' already exists. Overwrite ? [y/N] y w:576 h:448 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param: [libx264 @ 05A2C400] using cpu capabilities: MMX2 SSE2Fast FastShuffle SSEMisalign LZCNT [libx264 @ 05A2C400] profile High, level 3.1 [libx264 @ 05A2C400] 264 - core 120 r2120 0c7dab9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=abr mbtree=1 bitrate=1978 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'test.mp4': Metadata: encoder : Lavf53.30.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 576x448, q=-1--1, 1978 kb/s, 25 tbn, 25 tbc Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 48000 Hz, 2 channels, s16, 128 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #0:1 -> #0:1 (pcm_s16le -> libmp3lame) Press [q] to stop, [?] 
for help frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 3 fps= 1 q=22.0 size= 39kB time=00:00:00.04 bitrate=8063.8kbits/ frame= 8 fps= 2 q=22.0 size= 82kB time=00:00:00.24 bitrate=2801.3kbits/ frame= 13 fps= 3 q=23.0 size= 120kB time=00:00:00.44 bitrate=2229.5kbits/ frame= 16 fps= 4 q=23.0 size= 147kB time=00:00:00.56 bitrate=2156.7kbits/ frame= 20 fps= 4 q=22.0 size= 175kB time=00:00:00.72 bitrate=1987.4kbits/ : video:4387kB audio:273kB global headers:0kB muxing overhead 0.260038% [libx264 @ 05A2C400] frame I:2 Avg QP:19.53 size: 29850 [libx264 @ 05A2C400] frame P:76 Avg QP:22.24 size: 19541 [libx264 @ 05A2C400] frame B:359 Avg QP:25.93 size: 8210 [libx264 @ 05A2C400] consecutive B-frames: 0.5% 0.5% 0.0% 8.2% 17.2% 52.2% 16.0% 5.5% 0.0% [libx264 @ 05A2C400] mb I I16..4: 5.4% 75.3% 19.3% [libx264 @ 05A2C400] mb P I16..4: 1.3% 16.5% 2.2% P16..4: 36.3% 28.6% 12.7% 1.8% 0.2% skip: 0.4% [libx264 @ 05A2C400] mb B I16..4: 0.4% 3.8% 0.3% B16..8: 40.0% 18.4% 4.7% direct:18.5% skip:13.9% L0:45.4% L1:38.1% BI:16.5% [libx264 @ 05A2C400] final ratefactor: 20.35 [libx264 @ 05A2C400] 8x8 transform intra:83.1% inter:68.5% [libx264 @ 05A2C400] direct mvs spatial:99.2% temporal:0.8% [libx264 @ 05A2C400] coded y,uvDC,uvAC intra: 64.9% 83.4% 49.2% inter: 49.0% 50.4% 4.4% [libx264 @ 05A2C400] i16 v,h,dc,p: 25% 22% 27% 26% [libx264 @ 05A2C400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 7% 23% 9% 10% 10% 10%10% 13% [libx264 @ 05A2C400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 13% 9% 12% 11% 10% 9% 12% [libx264 @ 05A2C400] i8c dc,h,v,p: 42% 28% 16% 14% [libx264 @ 05A2C400] Weighted P-Frames: Y:18.4% UV:7.9% [libx264 @ 05A2C400] ref P L0: 29.1% 11.3% 15.7% 7.3% 6.9% 4.9% 5.1% 3.4%3.9% 2.7% 2.8% 1.8% 1.7% 1.2% 1.4% 0.9% [libx264 @ 05A2C400] ref B L0: 68.8% 11.4% 5.5% 2.9% 2.3% 1.9% 1.5% 1.1%1.1% 1.0% 0.9% 0.7% 0.5% 0.3% 0.1% [libx264 @ 05A2C400] ref B L1: 91.9% 8.1% [libx264 @ 05A2C400] kb/s:2055.88 As far as I'm concerned it doesn't look that bad to me.

    Read the article

  • PHP - Processing Invalid XML

    - by Paul
    I'm using SimpleXML to load in some XML files (which I didn't write/provide and can't really change the format of). Occasionally (e.g. one or two files out of every 50 or so) they don't escape special characters (mostly &, but sometimes other random invalid things too). This creates an issue because SimpleXML in PHP just fails, and I don't really know of any good way to handle parsing invalid XML. My first idea was to preprocess the XML as a string and put ALL fields in as CDATA so it would work, but for some ungodly reason the XML I need to process puts all of its data in attribute fields, so I can't use the CDATA idea. An example of the XML:

        <Author v="By Someone & Someone" />

    What's the best way to process this and replace all the invalid characters in the XML before I load it with SimpleXML?

    Read the article

  • simple Java "service provider frameworks"?

    - by Jason S
    I refer to "service provider framework" as discussed in Chapter 2 of Effective Java, which seems like exactly the right way to handle a problem I am having, where I need to instantiate one of several classes at runtime, based on a String to select which service, and an Configuration object (essentially an XML snippet): But how do I get the individual service providers (e.g. a bunch of default providers + some custom providers) to register themselves? interface FooAlgorithm { /* methods particular to this class of algorithms */ } interface FooAlgorithmProvider { public FooAlgorithm getAlgorithm(Configuration c); } class FooAlgorithmRegistry { private FooAlgorithmRegistry() {} static private final Map<String, FooAlgorithmProvider> directory = new HashMap<String, FooAlgorithmProvider>(); static public FooAlgorithmProvider getProvider(String name) { return directory.get(serviceName); } static public boolean registerProvider(String name, FooAlgorithmProvider provider) { if (directory.containsKey(name)) return false; directory.put(name, provider); return true; } } e.g. if I write custom classes MyFooAlgorithm and MyFooAlgorithmProvider to implement FooAlgorithm, and I distribute them in a jar, is there any way to get registerProvider to be called automatically, or will my client programs that use the algorithm have to explicitly call FooAlgorithmRegistry.registerProvider() for each class they want to use?

    Read the article

  • visualize irregular data in vtk

    - by aaron berry
    I have irregular data: x dimension 384, y dimension 256 and z dimension 64. The coordinates are stored in 3 separate binary files, and I have a data file holding a data value for each of these points. I want to know how I can represent such data so it can be easily visualized in VTK. Until now we were using AVS, which has fld files that can read such data easily. I don't know how to do it in VTK. I would appreciate any pointers in this direction.

    Read the article

  • How to solve this RPC fault when using Flex Builder 3 and BlazeDS?

    - by Teerasej
    Hi, everyone. Thank you for your interest in my question; I think you can help me out with this little problem. I am using Flex Builder 3, BlazeDS, and Java with the Spring and Hibernate frameworks. I am using a remote object to load a string from Spring's configuration files, but in testing I get a fault event like this:

        RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"

    I have checked the configuration in remote-config.xml and services-config.xml, and it looks good. Some people have talked about this problem around the internet, and I think you can help me and them. I am using this environment: Flex Builder 3, BlazeDS 3.2.0, JBoss server.

    Full stacktrace:

        [RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:220]
            at mx.rpc::Responder/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\Responder.as:53]
            at mx.rpc::AsyncRequest/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AsyncRequest.as:103]
            at NetConnectionMessageResponder/statusHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:569]
            at mx.messaging::MessageResponder/status()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\MessageResponder.as:222]

    Read the article

  • Localization of strings in static lib

    - by AO
    I have a project that uses a static library (SL). In that SL there are a couple of strings I'd like to localize, and the project includes all of the localization files. The localization works just fine when storing all text translations in the same file. The thing is that I'd like to separate the SL strings from the other strings. I have tried to put two different *.strings files (Localizable.strings and Localizable2.strings) in the language folder of interest, but that did not work. I have also tried to use two *.strings files with the same name (Localizable.strings) but with different paths. That didn't work either. It seems that only one localization file is supported, right? Could anyone suggest a good way of doing this? I'm using SDK 3.2 beta 2.

    Read the article
