Search Results

Search found 8187 results on 328 pages for 'mind mapping'.


  • Email test deferred (mail transport unavailable) with ClamAV

    - by dirt
    I'm trying to set up a simple new mail server. When I send a test email to the server, the message gets hung up during delivery (the user mapping is found) and never appears in /home/user/Maildir/new. Here is my maillog after a fresh reboot and a test email; there are a few warnings I am unfamiliar with. Can you please point me in the right direction?

        Oct 25 14:54:57 loki dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Oct 25 14:54:58 loki postfix/postfix-script[1369]: starting the Postfix mail system
        Oct 25 14:54:58 loki postfix/master[1370]: daemon started -- version 2.6.6, configuration /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: request to update table btree:/etc/postfix/smtpd_scache in non-postfix directory /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: redirecting the request to postfix-owned data_directory /var/lib/postfix
        Oct 25 14:56:00 loki postfix/smtpd[1455]: connect from mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/smtpd[1455]: 1CF5E20A8B: client=mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/cleanup[1461]: 1CF5E20A8B: message-id=
        Oct 25 14:56:01 loki postfix/qmgr[1379]: 1CF5E20A8B: from=, size=1788, nrcpt=1 (queue active)
        Oct 25 14:56:01 loki postfix/qmgr[1379]: warning: connect to transport private/scan: No such file or directory
        Oct 25 14:56:01 loki postfix/error[1462]: 1CF5E20A8B: to=, orig_to=, relay=none, delay=0.18, delays=0.15/0.02/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)
        Oct 25 14:56:01 loki postfix/smtpd[1455]: disconnect from mail-ob0-f180.google.com[209.85.214.180]

    master.cf snippets:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n  -  n  -  -   smtpd
        submission inet  n  -  n  -  -   smtpd
          -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        smtps      inet  n  -  n  -  -   smtpd
          -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        scan       unix  -  -  n  -  16  smtp
          -o smtp_data_done_timeout=1200
          -o smtp_send_xforward_command=yes
          -o disable_dns_lookups=yes
        127.0.0.1:10026 inet n - n - 16  smtpd
          -o content_filter=
          -o local_recipient_maps=
          -o relay_recipient_maps=
          -o smtpd_restriction_classes=
          -o smtpd_client_restrictions=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks_style=host
          -o smtpd_authorized_xforward_hosts=127.0.0.0/8
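
    The deferral traces back to the "connect to transport private/scan" warning: Postfix is told to hand mail to a service called scan, but the master daemon has no such socket. A hedged first check (assuming the ClamAV filter is wired in through a content_filter setting, which the post does not show) might look like this:

        postconf -n | grep content_filter   # confirm main.cf routes mail through the scan transport
        postfix reload                      # have master(8) re-read master.cf and create the socket
        ls /var/spool/postfix/private/      # a "scan" entry should now be listed here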

  • Networking DOS within Windows 7 XP Mode, with a Windows XP/7 Networked Share

    - by theonlylos
    For a while now, one of my clients has been stuck with Corel Paradox 4.0 (it was the biggest database system in the DOS days, until Microsoft released Access in the early 90's). I managed to keep it on life support on Windows XP for a few years; however, since switching to Windows 7 x64, I've had to resort to using XP Mode as the sandbox to keep it up and running. While I am able to run Paradox as usual in XP Mode, I'm having a serious issue: if I try connecting the install to the network share (which is located on the Windows 7 portion of the system), Paradox keeps exiting because it says the serial number is invalid. I know for a fact that this is an issue with the virtual loopback adapter and with having the VM linked to the physical ethernet adapter -- and while I have solved this issue before, most of my fixes have been bandages, since after a few weeks the issue pops up again. Long story short: is there a permanent way to link a DOS program to a network share address? For example, when I try using \\tsclient\paradox (the Windows 7 address), I keep getting an error saying I need a valid network address. I've tried mapping that folder to various drive letters such as P:\Paradox -- but for some reason that keeps failing over time. For what it's worth, Paradox uses a .SOM file to store the network settings; it isn't editable in Notepad but rather is controlled by a wizard in Paradox. If that extension rings any bells, I'd welcome any insights.
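
    DOS-era programs generally cannot address UNC paths at all, so the share has to be presented as a drive letter before Paradox starts. A hedged sketch (the share path is the one from the question; the switch simply makes the mapping survive reboots, which may address the "fails over time" symptom):

        REM run inside the XP Mode guest, e.g. from a logon batch file
        net use P: \\tsclient\paradox /persistent:yes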

  • Network share not always available on Windows 2003

    - by JP Hellemons
    Hello everybody, we have a Windows 2003 server with a shared directory/folder. I've seen this thread, but it wasn't any help: http://superuser.com/questions/58890/the-specified-network-name-is-no-longer-available I have a ping -t running from 3 PCs (one Vista and two Windows 7); they all work. The problem occurs when two users enter the network share: the 'network share is no longer available' message appears and the Explorer windows turn white. After F5 or refresh, the shared directory is back. This is really strange. There is no antivirus such as Kaspersky running on either end, and this is all in the same LAN. The internet connection is really stable, which should imply that the local network connection is also stable and that this is a Windows issue. Can it be a router issue? I have checked the event log on the server for disk-failure related messages, but there are none. EDIT: can this be related to mapping a shared directory to a drive letter, with a router between me and the mapped network drive? Or is it just Windows not working well with two users on the same shared folder? Should I install Samba or something?
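
    One hedged thing to rule out on the server side: Windows file servers drop sessions they consider idle after 15 minutes by default, which can produce exactly this "gone until refresh" behaviour. Disabling the idle timeout is a documented one-liner, run on the Windows 2003 server:

        net config server /autodisconnect:-1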

  • QNAP NAS 509 (LINUX) - how to unmount busy volume and find physical disk?

    - by Horst Walter
    On my QNAP TS-509 NAS I have a technical issue: I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below] In order to further track down the issue, I need to sort out the physical-disk-to-device relationship. How can I find this out? E.g. md0 is a striped volume on 2 disks, but I need to find out which physical disks. Remark: as you can easily derive from my questions, I am not a Linux expert, but manage to get along.

        /dev/ram0   124.0M  94.1M  29.8M  76%  /
        tmpfs        32.0M  80.0k  31.9M   0%  /tmp
        /dev/sda4   310.0M 103.9M 206.1M  34%  /mnt/ext
        /dev/md9    509.5M  39.2M 470.2M   8%  /mnt/HDA_ROOT
        /dev/md0      1.8T   1.4T 444.7G  76%  /share/MD0_DATA
        tmpfs        32.0M      0  32.0M   0%  /.eaccelerator.tmp

    -- Added -- QNAP seems to be based on BusyBox. I do not find something like init / telinit / runlevel. The BusyBox docs say that I need to run the below, but sv is not available in /var/service. I want to go to single-user mode to unmount the devices.

        # cd /var/service
        # sv d *
        # sv u getty*

    -- Added, thanks A4L -- This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services; after that the drive could be unmounted. The information passed by A4L is valid and worth reading, maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why So the unmount issue is solved, still looking for the best option to find the physical-disk-to-volume mapping.
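
    For the remaining question, the md layer itself can report which partitions back each array. A hedged sketch (both are standard Linux/mdadm commands, assuming the QNAP firmware ships them):

        cat /proc/mdstat          # lists every md device together with its member partitions
        mdadm --detail /dev/md0   # shows the backing /dev/sdXn devices for md0 in detail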

  • Laissez les bons temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler!" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.

    Notwithstanding some jabs here and there to point out that the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.

    Self-Service BI

    Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy-tale vision of self-service BI that he told in 2008. At that time, the fairy-tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively", and he transitioned to the present time, in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves -- from simple changes to a report, like formatting or an additional data visualization, to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:

    PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)

    Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)

    One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)

    Another product demonstration will be available within the next 30 days -- Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)

    Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough -- overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

    This may yet be the dawning of the age of self-service BI -- a phrase I've heard repeated from time to time over the last decade -- but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users.

    It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and there was even acknowledgment that users will get the data they want even if it means they have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

    Collaborative BI

    I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest -- as they say -- is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step, and who was at the BI Conference last week, where I got to reminisce with him for a bit), and that began a career that I never imagined at the time.

    Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective on the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

    The Microsoft BI Stack in General

    A question I hear all the time from students when I'm teaching is how to know which tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

    Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compare to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provides innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

    Expo Hall

    I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversation with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, which I'll share here.

    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

    Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, and I think it's a really great move: when you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

    Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or because multiple sessions I wanted to see were running concurrently. Fortunately, many of the sessions are available for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation.

  • How to extract a Vorbis stream from a WAVE file?

    - by H.B.
    I would like to move the Vorbis stream into an Ogg container, but ffmpeg does not seem to recognize the stream, even though MPlayer gives this output upon playback:

        Opening audio decoder: [acm] Win32/ACM decoders
        Loading codec DLL: 'vorbis.acm'
        Loaded DLL driver vorbis.acm at 10000000
        Warning! ACM codec reports srcsize=0
        AUDIO: 44100 Hz, 2 ch, s16le, 128.0 kbit/9.07% (ratio: 16000-176400)
        Selected audio codec: [vorbisacm] afm: acm (OggVorbis ACM)

    ffmpeg:

        ffmpeg -i Source.wav -acodec copy Target.ogg

        Input #0, wav, from 'Source.wav':
          Duration: 00:02:15.17, bitrate: 128 kb/s
            Stream #0.0: Audio: qg[0][0] / 0x6771, 44100 Hz, 2 channels, 128 kb/s
        [ogg @ 00000000003096C0] Unsupported codec id in stream 0
        Output #0, ogg, to 'Target.ogg':
          Metadata:
            encoder         : Lavf53.6.0
            Stream #0.0: Audio: qg[0][0] / 0x6771, 44100 Hz, 2 channels, 128 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
        Could not write header for output file #0 (incorrect codec parameters ?)

    Of course this does not necessarily need to be done via ffmpeg; any method that is workable would be fine... I have cut down one of the files to 512KB: sample.wav (I changed two chunk size fields in the wave header to account for this; the embedded stream is cut "without notice")
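
    If a true stream copy stays out of reach, a hedged fallback is to decode through the same ACM codec MPlayer already uses and then re-encode (lossy-to-lossy, so not ideal, but it yields a standard Ogg file):

        mplayer -ao pcm:file=decoded.wav Source.wav   # decode via the vorbis.acm codec to plain PCM
        oggenc -q 5 decoded.wav -o Target.ogg         # re-encode the PCM to Vorbis in an Ogg container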

  • No LPT port in Windows 7 virtual machines

    - by KeyboardMonkey
    Windows 7 has MS Virtual PC integrated, but the VM settings don't offer a parallel LPT port mapping to the physical machine. Where did it go? Has anyone else noticed this, and found a solution? Update: after much digging, I found the one and only reference to this issue, on the VPC Blog: "Parallel port devices are not supported, as they are relatively rare today." -More details- It's an XP VM I've been using since the VPC 2007 days, which did have this functionality. This is for configuring barcode printers via the LPT port. Since the (new) MS VM can't map to my physical LPT port, I'm having a hard time configuring printers. My physical ports are enabled in the BIOS. It has worked for the past 3 years, before switching to Win 7. Any help is appreciated. A screen shot of the VM settings shows COM ports, but LPT is no more; in contrast, a screen shot of VPC 2007 (before it got integrated into Win 7) shows LPT support.
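
    One hedged workaround while the feature is gone: if the barcode printer can be shared from the host (the share name below is an assumption for illustration), classic LPT1 output can be redirected over the network from inside the XP guest:

        REM run inside the XP Mode guest; \\host\barcodeprinter is hypothetical
        net use LPT1: \\host\barcodeprinter /persistent:yes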

  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP), and I would like to be able to send email from each site's contact page through Postfix to my Google Apps hosted email (different accounts for each site), so that Apps can then send an auto-reply to the person filling in the contact form, etc. I have a bare-bones Postfix installation with the following added to the main.cf config file, following this guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (both of these files have been converted into db files using postmap) I have configured DNS correctly for each site and set up SPF records. (I'm aware reverse DNS will still reference my actual hostname, not the domain name, and cause a possible spam issue, but one thing at a time.) I can telnet to localhost, HELO, and send a command-line email from an address in the virtual_alias_domains file to an address in the virtual_alias_maps file; it seems to send without giving an error, but it is delivered to my local Linux account, not the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
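
    A hedged sketch of what the two lookup files are usually expected to contain (the domain and addresses here are placeholders, not taken from the post), plus the rebuild steps that have to follow every edit:

        # /etc/postfix/virtual_alias_domains -- one entry per hosted domain (value is ignored)
        site-one.example    OK

        # /etc/postfix/virtual_alias_maps -- a remote right-hand side leaves the box
        contact@site-one.example    some.account@googleapps.example

        postmap /etc/postfix/virtual_alias_domains
        postmap /etc/postfix/virtual_alias_maps
        postfix reload

    One hedged suspicion consistent with the symptom: if the right-hand side of a mapping resolves to a local user or to a domain listed in mydestination, Postfix delivers locally instead of relaying out.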

  • IIS doesn't serve certain file extensions

    - by Alekc
    Hi, I have this weird issue on a Win 2k3 server and IIS. IIS has several sites; in one of them I need to create a subdirectory and set it up as a web application. I've noticed that if I create a new directory and put some .js/.txt files into it, they will not be served by IIS (IE gives the error "Internet Explorer cannot display the webpage"). If I put the same files in another, older site's subdirectory, they show correctly. By sniffing traffic I've seen that IIS replies with status 200 and then drops the connection completely:

        http://domain.com/test2/prova.txt

        GET /test2/prova.txt HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.x 200 OK

    If I rename prova.txt to prova.asp, for example, it shows without problems, so it shouldn't be a permissions issue. After doing some research I've found that this can be caused by missing MIME types; I've checked that .txt and .js are present and served by aspnet_isapi.dll. And here comes another weird thing: if I remove the MIME mapping from the directory's properties, .txt is served correctly, but the same thing doesn't work with .js. I'm really beginning to run out of ideas; does someone have a hint? Thanks in advance.

  • FFMPEG: how to add watermark to video?

    - by DocWiki
    My platform: Ubuntu 10.10 + FFmpeg 0.5.3 (installed from source). I am trying to add a watermark to a .MOV video with FFmpeg 0.5.3's imlib2.so vhook (please note that FFmpeg 0.6+ doesn't support imlib2.so, so I use 0.5.3). Here is my command:

        ffmpeg -sameq -i example.mov -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' newexample.mov

    Here is the output:

        FFmpeg version 0.5.3, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-avfilter --enable-filter=movie --enable-avfilter-lavf
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          built on Jul 3 2011 12:05:08, gcc: 4.4.5
        Seems stream 1 codec frame rate differs from container frame rate: 59.94 (5994/100) -> 29.97 (30000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'example.mov':
          Duration: 00:03:14.06, start: 0.000000, bitrate: 3350 kb/s
            Stream #0.0(eng): Audio: aac, 48000 Hz, stereo, s16
            Stream #0.1(eng): Video: h264, yuv420p, 1150x647, 29.97 tbr, 29.97 tbn, 59.94 tbc
        Output #0, mov, to 'newexample.mov':
            Stream #0.0(eng): Video: mpeg4, yuv420p, 1150x647, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
            Stream #0.1(eng): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
        Unsupported codec for output stream #0.1

    What could be the problem? Is it AAC or H264 that is not supported? I installed libavcodec-extra-52, libfaac, libfaad, etc., but the error is the same. Do I have to install following this instruction: HOWTO: Install and use the latest FFmpeg and x264? Or is there a simpler solution?
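
    A hedged alternative that avoids vhook entirely: this 0.5.3 build was configured with --enable-avfilter and the movie filter, and later FFmpeg versions replaced vhook with the overlay filter. Assuming the avfilter path actually works in this build (untested here), the same watermark would be expressed as:

        ffmpeg -sameq -i example.mov \
            -vf "movie=/var/www/files/watermark.png [wm]; [in][wm] overlay=0:0 [out]" \
            newexample.mov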

  • Xen domU passwd file overwritten with console log output

    - by malfy
    I was setting up a Debian Xen domU and after booting it fine, I added basic configuration to /etc/network/interfaces and ran /etc/init.d/networking restart. This failed, so I decided to reboot. After the reboot I also ran xm shutdown box. When dropped to a shell prompt it wouldn't let me log in. Upon further inspection, I now have garbage in some critical files in /etc:

        root@box:/# tail +1 mnt/etc/{passwd-,shadow}
        tail: cannot open `+1' for reading: No such file or directory

        ==> mnt/etc/passwd- <==
        0000000000100000 (reserved)
        Nov 23 02:02:39 box kernel: [ 0.000000] Xen: 0000000000100000 - 0000000004000000 (usable)
        Nov 23 02:02:39 box kernel: [ 0.000000] DMI not present or invalid.
        Nov 23 02:02:39 box kernel: [ 0.000000] last_pfn = 0x4000 max_arch_pfn = 0x1000000
        Nov 23 02:02:39 box kernel: [ 0.000000] initial memory mapped : 0 - 033ff000
        Nov 23 02:02:39 box kernel: [ 0.000000] init_memory_mapping: 0000000000000000-0000000004000000
        Nov 23 02:02:39 box kernel: [ 0.000000] NX (Execute Disable) protection: active
        Nov 23 02:02:39 box kernel: [ 0.000000] 0000000000 - 0004000000 page 4k
        Nov 23 02:02:39 box kernel: [ 0.000000] kernel direct mapping tables up to 4000000 @ 7000-2c000
        Nov 23 02:02:3

        ==> mnt/etc/shadow <==
        32 nr_cpumask_bits:32 nr_cpu_ids:1 nr_node_ids:1
        Nov 23 02:02:39 box kernel: [ 0.000000] PERCPU: Embedded 15 pages/cpu @c15b0000 s37688 r0 d23752 u65536
        Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: s37688 r0 d23752 u65536 alloc=16*4096
        Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: [0] 0
        Nov 23 02:02:39 box kernel: [ 0.000000] Xen: using vcpu_info placement
        Nov 23 02:02:39 box kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16160
        Nov 23 02:02:39 box kernel: [ 0.000000] Kernel command line: root=/dev/mapper/xen-guest_root ro quiet root=/dev/xvda1 ro
        Nov 23 02:02:39 box kernel: [ 0.000000] PID hash table entries:

    The garbage is also present in the passwd file and the group file (although I didn't paste that above, since I have since run debootstrap on the filesystem again). Does anyone have any insight into what happened and why?

  • GA 8KNXP Rev1.0: 4 GB installed, only 3.5 GB recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory into my GA-8KNXP system, which should sum to 4 GB. The specification from the manual says: "Maximum memory support: 4 GB. If all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules." And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory-mapping issue or a hardware issue. I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think the chipset should technically be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German) but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act.

    Brief introduction to AVM Fritz!Box devices

    The Fritz!Box series from AVM has been around for more than a decade, and those little 'red boxes' have a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the Fritz!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want peace of mind in your small network, then a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system.

    Limitations in the administrative Web UI

    Unfortunately, there is no possibility to adjust the timezone settings in the Web UI at all -- not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe and the implicit time handling is correct for your location.

    Adjusting the timezone

    I got my device through an order at Amazon Germany already some time ago, and honestly I wasn't bothered too much about the pre-configured (fixed) timezone setting -- CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So, finally I sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device.

    You've been warned - No support

    Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So, be warned and continue only in case you're sure about what you are going to do. The following solutions are 'as-is' and they worked on my box flawlessly, but they may cause an issue in your case. Don't blame me...

    Solution 1 - Backup, modify and restore

    That's the way described in the c't article and a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration to a local file on your machine. Despite some online postings, it is not necessary to specify a password in order to secure or encrypt your backup; IMHO, this only adds another unnecessary layer of complexity to the process. Anyway, next you should create another copy of your settings and keep it unmodified. That's our safety net to restore the current settings in case we have to issue a factory reset on the box.

    Now, open the configuration file with an advanced text editor which is capable of dealing with Unix carriage returns properly -- Windows Notepad doesn't do the job, but WordPad or Notepad++ do. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file -- one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device-internal checksum validation. Without this line, your settings won't be accepted. Caution: the directives are case-sensitive, and your outcome should read something like this:

        **** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
        Password=$$$$<ignore>
        FirmwareVersion=84.05.52
        CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
        OEM=avm
        Country=049
        Language=de
        NoChecks=yes
        **** CFGFILE:ar7.cfg
        /*
         * /var/flash/ar7.cfg
         * Mon Jul 29 10:49:18 2013
         */
        ar7cfg {
        ...

    Then search for the expression 'timezone' and you should find a section like this one (~ line 1113):

        timezone_manual {
                enabled = no;
                offset = 0;
                dst_enabled = no;
                TZ_string = "";
                name = "";
        }

    We would like to handle the timezone setting manually in our device, and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:

        timezone_manual {
                enabled = yes;
                offset = 0;
                dst_enabled = no;
                TZ_string = "MUT-4";
                name = "";
        }

    We specify the designation and the offset in hours of the timezone we would like to have. Caution: the offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck!

    Solution 2 - Telnet

    A more elegant -- read: technically interesting -- way to adjust configuration settings on your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel, and you have to enable it with a connected telephone. In order to activate the telnet service, dial the following combination:

        #96*7*
        #96*8*    (to disable telnet again after work has been completed)

    If you're using an AVM handset like the Fritz!Fon, you will receive a confirmation message on the display like so: telnetd ein. Next, depending on your favourite operating system, you either launch a Command Prompt in Windows or a terminal in Linux, get your admin password ready, and connect to your box like so:

        $ telnet fritz.box
        Trying 192.168.1.1...
        Connected to fritz.box.
        Escape character is '^]'.
        password:

        BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        ermittle die aktuelle TTY
        tty is "/dev/pts/0"
        Console Ausgaben auf dieses Terminal umgelenkt
        #

    That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi -- nvi. Let's open a second browser window with the fine manual page of nvi and start to edit our configuration file:

        # nvi /var/flash/ar7.cfg

    In our configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing in the following:

        /timezone    (press Enter/Return)

    Now we should see the exact lines of code as in the backed-up version:

        timezone_manual {
                enabled = no;
                offset = 0;
                dst_enabled = no;
                TZ_string = "";
                name = "";
        }

    And of course, we apply the same changes as described in the previous section:

        timezone_manual {
                enabled = yes;
                offset = 0;
                dst_enabled = no;
                TZ_string = "MUT-4";
                name = "";
        }

    Finally, we have to write our changes back to the file and apply the new settings:

        :wq    (press Enter/Return)
        # ar7cfgchanged

    That's it! Finally, close the telnet session by pressing Ctrl+] and entering 'quit'.

    Additional ideas...

    There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and the installation of an SSH server on the box would be a first step to improve this situation, also to avoid running telnet after all. Sometimes there might be problems in your VoIP connections; feel free to adjust the settings of codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.

  • Cannot find network path for computer in workgroup of home Windows XP PCs

    - by John Galt
    VMware Workstation 6.5 is running as an app on a Windows Vista 64-bit PC host. Thanks to Workstation we have 2 guest machines running: TerriVM and MattVM (both run Windows XP SP2). We are attempting to get virtual networking configured so we can access the files of both of these VM guests from other real PCs connected to this home network. We think we are close but can't quite get it right... Here is what we've done so far:

    On VM Workstation, we set "Host Virtual Network Mapping" to use VMnet0 with the setting "Bridge to an automatically chosen adapter". On each VM guest (i.e. using Windows Explorer on XP), we right-click the C disk, click the "Sharing" tab, set the share name to "C_Disk" and check both boxes labeled "Share this folder on the network" and "Allow network users to change my files".

    Symptoms: on the "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\TerriVM\C_Disk into the Folder textbox and assign drive letter T. We see all the folders on this shared drive and can open files on them. So that is good. On the same "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\MattVM\C_Disk into the Folder textbox and assign drive letter M. We get a message box: "The network path \\mattvm\C_Disk could not be found". Alternatively, we type just \\mattvm\ into the Folder box and click "Browse", and get a dialog box where we drill down from "Entire Network" to "Microsoft Windows Network" to "Workgroup", where both TerriVM and MattVM are listed as computers on the network. Clicking the + sign next to MattVM gives an hourglass, never enables the OK button, and I have to cancel.

    In summary, I think we've attempted to share both of these virtual machines using the same techniques and to connect to them in similar fashion, but one connects properly while the other can be seen but none of its shared resources can be accessed. Can anyone suggest something possibly overlooked or something to try? Thanks so much in advance.
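
    Since the two guests were set up identically, a hedged way to narrow down where MattVM diverges (typically name resolution or the XP firewall's File and Printer Sharing exception) is to compare the two guests from the real PC step by step:

        REM run on JohnsRealXP; repeat each line against TerriVM to compare
        ping MattVM                     REM does the name resolve, and does ICMP get through?
        net view \\MattVM              REM can the share list be enumerated at all?
        net use M: \\MattVM\C_Disk     REM the same mapping, but with a more specific error text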

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin... I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:

    1. Created the GPO in MyOU - Users
    2. Removed the default Authenticated Users entry under Security Filtering
    3. Added the security group with my account to Security Filtering
    4. Set up the mapping via the User Configuration option
    5. Changed GPO Status to "Computer configuration settings disabled"
    6. Left WMI filtering unchanged
    7. Closed the GPO at this point...
    8. Logged in as the target user; ran gpupdate /force
    9. Logged out, logged in, ran gpresult /r; no mention of my GPO
    10. Rebooted
    11. Logged in, re-ran gpupdate /force
    12. Logged out, logged in, ran gpresult /r; still no mention of my GPO

    If I log in with another, completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working for. Is there anything else I can do, short of rebooting endlessly and crossing my fingers?
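
    One hedged check before more reboots: membership in a freshly created security group only lands in the user's logon token at the next logon, and gpresult's HTML report gives more per-GPO detail than /r. The group name below is hypothetical, standing in for the one created in this scenario:

        whoami /groups | findstr /i "MyDriveMapGroup"   REM is the new group actually in the token?
        gpresult /h gpo-report.html                     REM per-GPO applied/denied detail (Vista/Win7+)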

  • Vlans and subinterfaces

    - by Adeodatus
    I've inherited a moderate-size network that I'm trying to bring some sanity to. Basically, it's 8 public class Cs and a slew of private ranges, all on one VLAN (vlan1, of course). Most of the network is located throughout dark sites. I need to start separating some of the network. I've changed the ports from the main Cisco switch (3560) to the Cisco router (3825), and from the other remote switches, to trunking with dot1q encapsulation. I'd like to start moving a few select subnets to different VLANs. To get some of the different services provided on our address space (and to separate customers) onto different VLANs, do I need to create a subinterface on the router for each VLAN, and if so, how do I get the switch port to work on a specific VLAN? Keep in mind, these are dark sites and getting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each VLAN, then setting the ports with services I want to move to a different VLAN to allow only that VLAN. Example of vlan3 on the 3825:

        interface GigabitEthernet0/1.3
         description Vlan-3
         encapsulation dot1Q 3
         ip address 192.168.0.81 255.255.255.240

    The connection between the switch and the router:

        interface GigabitEthernet0/48
         description Core-router
         switchport trunk encapsulation dot1q
         switchport mode trunk

        show interfaces gi0/48 switchport
        Name: Gi0/48
        Switchport: Enabled
        Administrative Mode: trunk
        Operational Mode: trunk
        Administrative Trunking Encapsulation: dot1q
        Operational Trunking Encapsulation: dot1q
        Negotiation of Trunking: On
        Access Mode VLAN: 1 (default)
        Trunking Native Mode VLAN: 1 (default)
        Administrative Native VLAN tagging: enabled
        Voice VLAN: none
        Administrative private-vlan host-association: none
        Administrative private-vlan mapping: none
        Administrative private-vlan trunk native VLAN: none
        Administrative private-vlan trunk Native VLAN tagging: enabled
        Administrative private-vlan trunk encapsulation: dot1q
        Administrative private-vlan trunk normal VLANs: none
        Administrative private-vlan trunk private VLANs: none
        Operational private-vlan: none
        Trunking VLANs Enabled: ALL
        Pruning VLANs Enabled: 2-1001
        Capture Mode Disabled
        Capture VLANs Allowed: ALL
        Protected: false
        Unknown unicast blocked: disabled
        Unknown multicast blocked: disabled
        Appliance trust: none

    So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer-2 switch, all within the 192.168.0.82-95 range, and are using 192.168.0.81 as their gateway, what is left to do, especially on gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?
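
    A hedged sketch of the missing piece (standard IOS; the port and VLAN numbers are taken from the question): with the router subinterface and the trunk already in place, gi0/18 just needs to become an access port in VLAN 3, so everything behind the unmanaged switch lands in that VLAN untagged:

        conf t
        interface GigabitEthernet0/18
         switchport mode access
         switchport access vlan 3
        end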

  • Postfix: How do I Make Email Aliases Work?

    - by Nick
    The documentation claims that I can add aliases to a file (like /etc/postfix/virtusertable) and then use the "virtual_maps" directive to point to it. This does not appear to be working, however. My mail is bouncing with:

        Recipient address rejected: User unknown in local recipient table;

    If I mail the user from the server using the mail command, it works:

        mail myuser

    The message goes through Postfix and lands in the Cyrus inbox correctly. When I use fetchmail to get the user's messages off a POP3 server, Postfix fails. The user's email is "[email protected]", but it doesn't seem to be mapping correctly to "myuser", the Cyrus mailbox name.

    /etc/postfix/main.cf:

        myhostname = localhost
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        #lmtp:unix:/var/run/lmtp
        virtual_alias_domains = mydomain.com
        virtual_maps = hash:/etc/postfix/virtusertable

    /etc/fetchmailrc:

        set syslog;
        set daemon 20;
        poll "mail.pop3server.com" with protocol pop3
        user "[email protected]" password "12345" is "myuser" fetchall keep

    /etc/postfix/virtusertable:

        [email protected]    myuser

    postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_size_limit = 0
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        mydestination = localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        virtual_alias_domains = mydomain.com

    Why is it ignoring my alias?
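
    A hedged observation plus the usual rebuild steps: the postconf -n output above does not list virtual_maps at all, which suggests the running daemon never picked the directive up (postconf -n prints every setting that differs from the default). Assuming the edit went into the right main.cf, the sequence after any change would be:

        postmap /etc/postfix/virtusertable   # rebuild virtusertable.db
        postfix reload                       # have the daemons re-read main.cf
        postconf -n | grep virtual           # virtual_maps should now be listed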

  • FastCGI Error when installing PHP on IIS7.5

    - by ytoledano
    I'm trying to install MediaWiki on a Win2008r2 server, but can't manage to install PHP. Here's what I did:

    1. Grabbed a Zip archive of PHP and unzipped it into C:\PHP.
    2. Created two subdirs: C:\PHP\sessiondata and C:\PHP\uploadtemp.
    3. Granted modify rights to the IUSR account for the subdirs.
    4. Copied php.ini-production as php.ini.
    5. Edited php.ini and made the following changes:

        fastcgi.impersonate = 1
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0
        open_basedir = "c:\inetpub\wwwroot;c:\PHP\uploadtemp;C:\PHP\sessiondata"
        extension = php_mysql.dll
        extension_dir = "./ext"
        upload_tmp_dir = C:\PHP\uploadtemp
        session.save_path = C:\php\sessiondata

    6. Installed the Web Server role with the CGI and HTTP Redirection options selected.
    7. In Handler Mappings, added a Module Mapping with the following values: Path = *.php, Module = FastCgiModule, Executable = c:\php\php-cgi.exe, Name = PHP via FastCGI.
    8. Created a test page, phpinfo.php, in the wwwroot directory with contents: <?php phpinfo(); ?>
    9. Browsed to http://localhost/phpinfo.php

    But then I get:

        HTTP Error 500.0 - Internal Server Error
        An unknown FastCGI error occurred

        Detailed Error Information
        Module: FastCgiModule
        Notification: ExecuteRequestHandler
        Handler: PHP via FastCGI
        Error Code: 0x800736b1
        Requested URL: http://localhost:80/phpinfo.php
        Physical Path: C:\inetpub\wwwroot\phpinfo.php
        Logon Method: Anonymous
        Logon User: Anonymous

    Does anyone know what I'm doing wrong here? Thanks.
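
    A hedged lead rather than a confirmed fix: 0x800736b1 is the Windows "side-by-side configuration is incorrect" error, which with PHP on IIS usually means the Visual C++ runtime the PHP build was linked against is missing. Running the binary by hand surfaces the same error outside IIS; if it appears, installing the matching VC++ redistributable (e.g. VC9 for PHP 5.3 Windows builds) is the usual cure:

        C:\PHP>php-cgi.exe -v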

  • How to convert an MKV to AVI with minimal loss

    - by Linux Jedi
    To convert an MKV to AVI, I do two things. The first thing I do is this:

        ffmpeg -i filename.mkv -vcodec copy -acodec copy output.avi

    This converts the MKV to an AVI, but the problem is that the video does not play smoothly for some reason. That's fine, because if I do one more thing it gets fixed:

        ffmpeg -i output.avi -vcodec mpeg4 -b 4000k -acodec mp2 -ab 320k converted.avi

    After I do this, the file plays without problem. I had success doing it this way for one file, but then I tried it on another file, and there is a slight but noticeable loss in video quality. This is the output I get when doing the second step:

        FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
          built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration:
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.11. 0 /  0.11. 0
        Seems stream 0 codec frame rate differs from container frame rate: 359.00 (359/1) -> 29.92 (359/12)
        Input #0, avi, from 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
          Duration: 00:04:17.21, start: 0.000000, bitrate: 3074 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], 29.92 fps, 29.92 tbr, 29.92 tbn, 359 tbc
            Stream #0.1: Audio: vorbis, 48000 Hz, stereo, s16
        Output #0, avi, to 'nidome_no_kanojo.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], q=2-31, 4000 kb/s, 29.92 tbn, 29.92 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, stereo, s16, 320 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1

    I just used arbitrarily large settings in the second step, and it worked nicely before but not in this case. What settings should I use?
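
    A hedged variant worth trying: a constant quantiser (-qscale, lower is better; 2-4 is close to visually lossless for MPEG-4 ASP) adapts to the content instead of a fixed -b bitrate, and the two steps can usually be collapsed into one, since the re-encode can decode the MKV directly:

        ffmpeg -i filename.mkv -vcodec mpeg4 -qscale 2 -acodec mp2 -ab 320k converted.avi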

  • How is a subdomain passed to the webserver?

    - by Joshua Frank
    I know that DNS resolves an address like example.com to an IP address like 11.22.33.44, but I'm a little confused about how subdomains are resolved, so that when you type http://subdomain.example.com, what actually gets passed to the server at 11.22.33.44? In other words, example.com = 11.22.33.44, but subdomain.example.com/path = ??? Are "subdomain" and "path" passed as HTTP headers, or mapped in the URL in some way, or what? Thanks in advance.

    Edit: If I'm understanding correctly, BloodPhilia says that subdomain.example.com actually is a different domain that in principle could resolve to a totally different IP. But if that's so, then what about hosts that have huge numbers of (what look like) subdomains, but which actually map to some path on the site? For instance, blogspot hosts millions of blogs, and they all look like this:

        aaa.blogspot.com
        bbb.blogspot.com
        ...millions more...
        yyy.blogspot.com
        zzz.blogspot.com

    Those are clearly not subdomains with their own IPs, but rather some mapping like aaa.blogspot.com -> www.blogspot.com/aaa. But how is this accomplished? What actually gets passed to the web server at blogspot.com?
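
    A hedged illustration of the two halves of the answer (all records and addresses here are made up): a single wildcard DNS record points every subdomain at the same IP, and the browser then names the exact subdomain in the HTTP Host header, which the web server uses to pick the right content:

        ; one wildcard A record in the blogspot.com zone
        *.blogspot.com.   300  IN  A  11.22.33.44

        ; what the browser actually sends to 11.22.33.44
        GET /path HTTP/1.1
        Host: aaa.blogspot.com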

  • avconv gets killed if mkv has subtitles

    - by Lukas Knuth
    What I'm trying to do is take a movie (in a Matroska container), convert all audio tracks to AC3, and not touch anything else. I'm using this line:

        avconv -i infile.mkv -map 0 -vcodec copy -scodec copy -acodec ac3 -ab 256k outfile.mkv

    This works fine, except when there are subtitles embedded. Then, after some time processing with no progress, avconv just "dies" (output shortened; these seem to be the interesting parts):

        [matroska,webm @ 0xf867a0] max_analyze_duration reached
        [matroska,webm @ 0xf867a0] Estimating duration from bitrate, this may be inaccurate
        ...
        Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
        ...
            Stream #0.0(eng): Video: H264 / 0x34363248, yuv420p, 1280x536 [PAR 1:1 DAR 160:67], q=2-31, 1k tbn, 1k tbc (default)
            Stream #0.1(ger): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s (default)
            Stream #0.2(eng): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s
            Stream #0.3(ger): Subtitle: dvdsub (default) (forced)
            Metadata:
              title           : forced
            Stream #0.4(ger): Subtitle: dvdsub
            Metadata:
              title           : complete
        Stream mapping:
          Stream #0:0 -> #0:0 (copy)
          Stream #0:1 -> #0:1 (dca -> ac3)
          Stream #0:2 -> #0:2 (dca -> ac3)
          Stream #0:3 -> #0:3 (copy)
          Stream #0:4 -> #0:4 (copy)
        Input stream #0:2 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6
        Input stream #0:1 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6
        frame= 2606 fps=1303 q=-1.0 size=       3kB time=107.36 bitrate=   0.2kbits/s
        ...
        frame=96141 fps=813 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
        frame=96251 fps=810 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
        ...
        frame=97015 fps=397 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
        Getötet ["Killed", in English]

    I have no idea why this happens, as there is no error output. I'd like to just copy the subtitles over, not touch them at all. If that won't work, they can be completely dropped.
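
    A hedged workaround if the copy keeps dying: map the wanted streams explicitly and leave the subtitles out entirely (the stream indices are taken from the output above; -scodec is then unnecessary):

        avconv -i infile.mkv -map 0:0 -map 0:1 -map 0:2 \
            -vcodec copy -acodec ac3 -ab 256k outfile.mkv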

  • Convert decrypted .vobs to .avi with ffmpeg on Ubuntu

    - by Arcath
    I have a .vob file that has been ripped from a DVD. When I watch the .vob, it's very good quality video with 5.1 English audio, but when I use ffmpeg, the result has rubbish video and mono French audio. That was using this command:

        ffmpeg -i /samba/ripping/vobs/12161840#2.vob -f avi /samba/ripping/avis/test.avi

    I've tried a few different variations on that, but it never comes back with anything good, just bigger files with bad video and incorrect sound. I know the video is good and the correct audio streams exist, so how do I select a 5.1 track and get good video? ffmpeg gives the .vob details as:

        Input #0, mpeg, from '/samba/ripping/vobs/12161840#2.vob':
          Duration: 00:42:05.56, start: 0.287267, bitrate: 5738 kb/s
            Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x576 [PAR 64:45 DAR 16:9], 8436 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
            Stream #0.2[0x81]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
            Stream #0.3[0x82]: Audio: ac3, 48000 Hz, mono, s16, 192 kb/s
        Output #0, avi, to '/samba/ripping/avis/test.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 720x576 [PAR 64:45 DAR 16:9], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, mono, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.3 -> #0.1
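
    A hedged sketch against that output: the mapping shows ffmpeg picked the mono track (#0.3) by itself and encoded video at its 200 kb/s default, which explains both complaints. Selecting the video plus one of the 5.1 AC3 tracks (whichever of #0.1/#0.2 is the English one) and setting a real quality target should fix it; note that this ffmpeg generation used dotted -map arguments:

        ffmpeg -i '/samba/ripping/vobs/12161840#2.vob' \
            -map 0.0 -map 0.1 \
            -vcodec mpeg4 -qscale 4 -acodec copy \
            /samba/ripping/avis/test.avi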

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg:

        ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif

    It works great, but the output gif file has very low quality. Any ideas how I can improve the quality of the converted gif? Output of the command:

        $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif
        ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:52:53 with gcc 4.7.2
        *** THIS PROGRAM IS DEPRECATED ***
        This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2013-02-14 04:00:07
          Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s
            Metadata:
              creation_time   : 2013-02-14 04:00:07
        [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p
        [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4
        Output #0, gif, to 'output.gif':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2013-02-14 04:00:07
            encoder         : Lavf53.21.1
            Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        Stream mapping:
          Stream #0.0 -> #0.0
        Press ctrl-c to stop encoding
        frame=  101 fps= 32 q=0.0 Lsize=    8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149
        video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676%

    Thanks.
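
    The usual quality killer here is GIF's 256-colour limit combined with the encoder's default palette. A hedged sketch for newer ffmpeg builds (the palettegen/paletteuse filters do not exist in this 0.8/Libav version): generate a palette tuned to the clip first, then apply it:

        ffmpeg -i input.flv -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
        ffmpeg -i input.flv -i palette.png \
            -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif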

  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it in my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using is:

        ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi

    And the ffmpeg output is:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4'
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
          Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc
            Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
        Output #0, avi, to 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
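
    A hedged explanation and sketch: despite its name, -sameq does not mean "same quality"; it reuses the source's quantiser scale, which across different codecs routinely inflates the output. Replacing it with a fixed quantiser (or a target bitrate below the source's 1387 kb/s) should bring the size down:

        ffmpeg -i input.mp4 -vcodec libxvid -qscale 5 \
            -acodec libmp3lame -ab 128k -ac 2 output.avi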
