Search Results

Search found 2956 results on 119 pages for 'sources'.


  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of the configuration under Wheezy, but I need some debs that are only available in Jessie, so I updated sources.list and did a dist-upgrade. Everything went well, but after the reboot I noticed that I had lost all networking: the repositories are unreachable, and even a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue working? I have a poor connection and cannot afford to download the whole set of install DVDs.

        root@vm:~# ifconfig
        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128  Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:164238 (160.3 KiB)  TX bytes:164238 (160.3 KiB)
        root@vm:~#

    I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian kernel 3.14.4-1). Please help. Thanks.
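
    Only the loopback interface shows up above, so a first hedged step is to check whether the physical interface simply was not brought up after the upgrade (eth0 and DHCP below are assumptions; check the real interface name with ip link):

        # list every interface the kernel actually sees
        ip link
        # bring the interface up and request a DHCP lease (assumes eth0 and DHCP)
        ip link set eth0 up
        dhclient eth0
        # to make it persistent, /etc/network/interfaces needs something like:
        #   auto eth0
        #   iface eth0 inet dhcp

    If ip link shows no non-loopback interface at all, the problem is more likely the VM's virtual network adapter or a missing driver than the Debian configuration.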


  • Files built with a makefile are disappearing (including the binary)

    - by Reid
    I am building a program on a TS-7800 (SBC), and when I run make (shown below), it appears to go through all of the steps normally, but in the end I do not get a binary file. Why is this, and how can I get my file?

    makefile:

        CC= /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc
        # compiler options
        #CFLAGS= -O2
        CFLAGS= -mcpu=arm9
        #CFLAGS= -pg -Wall
        # linker
        LN= $(CC)
        # linker options
        LNFLAGS=
        #LNFLAGS= -pg
        # extra libraries used in linking (use -l command)
        LDLIBS= -lpthread
        # source files
        SOURCES= HMITelem.c Cpacket.c GPS.c ADC.c Wireless.c Receivers.c CSVReader.c RPM.c RS485.c
        # include files
        INCLUDES= Cpacket.h HMITelem.h CSVReader.h RS485.h
        # object files
        OBJECTS= HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o

        HMITelem: $(OBJECTS)
                $(LN) $(LNFLAGS) -o $@ $(OBJECTS) $(LDLIBS)

        .c.o: $*.c
                $(CC) $(CFLAGS) -c $*.c

        RUN : ./HMITelem

        #clean:
        #       rm -f *.o
        #       rm -f *~

    Output:

        root@ts7800:ReidTest# make
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c HMITelem.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Cpacket.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c GPS.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c ADC.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Wireless.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Receivers.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c CSVReader.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RPM.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RS485.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -o HMITelem HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o -lpthread

    Thank you.
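
    One hedged observation about the makefile above (not a confirmed diagnosis): the line RUN : ./HMITelem does not run the program; it declares ./HMITelem as a prerequisite of a target named RUN with no recipe. The link step in the output clearly did execute, so it is worth checking ls -l HMITelem in the build directory immediately after make returns. A conventional run/clean pair would look like this (recipe lines must start with a tab):

        .PHONY: run clean
        run: HMITelem
        	./HMITelem
        clean:
        	rm -f *.o *~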


  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at SuperUser, as my other sources (mainly Google ;-)) didn't prove very helpful... So basically, I would like to create a VHD image of a physical disk, to be archived, accessed, or maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve:

    - I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...)
    - The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like a solution that doesn't require booting that Windows XP install (i.e. keep the disk read-only)
    - Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network)

    However, all the solutions that I've come across either make use of disk2vhd or use the dd command under Linux, which makes a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
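
    A hedged sketch of one route from a live system, assuming the live environment can install qemu-utils (the Debian/Ubuntu package name; it provides qemu-img, whose name for the VHD format is "vpc"):

        # convert the read-only source disk straight to a dynamic (sparse) VHD;
        # qemu-img can read a block device directly, and it skips all-zero regions
        sudo qemu-img convert -f raw -O vpc -o subformat=dynamic /dev/sda disk.vhd

    A two-step variant (dd to a raw image first, then qemu-img convert on the image) trades disk space for a resumable workflow. Note the empty-block saving only applies where the disk actually contains zeros; deleted-but-nonzero data still gets allocated in the dynamic VHD.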


  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites. One of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity, or if there are sources or methods I don't know about. I could think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":

    - keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up to a proper library); when you need a picture, there's one in there
    - maybe have pictures on paper, too, like from magazines or ads
    - when you are looking for a picture, search online (Flickr, Google, stock photos). This has never really worked for me, I don't know why.
    - produce the pictures yourself, i.e. have a good library of source material or find some online, and apply some creativity and suitable software. I could imagine that this could work well once you have the practice.


  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the error below:

        $ sudo gem install sqlite3
        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3:
            ERROR: Failed to build gem native extension.

        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal'
        or 'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is an Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite   (or sqlite3 -- same result)
        (see breakage above)
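
    A hedged thing to try: pass the header and library locations to the gem's build step explicitly, in case extconf is not searching the 64-bit paths (the /usr/include and /usr/lib64 locations are assumptions for this AMI -- verify that sqlite3.h actually landed there):

        # everything after the bare -- goes to extconf.rb, not to gem itself
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include --with-sqlite3-lib=/usr/lib64

    If that still fails, mkmf.log (path shown in the error output) records the exact compile commands that failed to find sqlite3.h.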


  • .NET 3.5 installation comes up with error 0x800F0906, then 0x800F081F using DISM

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup thing to download/install .NET 3.5, and always get error code 0x800F0906. Upon further research, I found I would have to pop in my Windows 8 CD and install it with this command, where E:\ is where my CD is mounted:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This, and any derivative of it (e.g., removing /LimitAccess), has not worked for me, and has either given me the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, just in case something was going on with the CD drive, only to get the same results. In that case, I used this command line:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there, but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :)

    EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the commands in the script, such as "featurename" or "source".
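
    Two hedged checks using standard DISM options (whether they reveal the cause here is a guess). The unrecognized "featurename"/"source" note in the edit would also be consistent with a mistyped switch character or a non-elevated prompt, which is worth ruling out first:

        rem confirm the feature name and its current state
        Dism /online /Get-Features | findstr /i NetFx3
        rem retry with an explicit, verbose log for inspection (the log path is arbitrary)
        Dism /online /Enable-Feature /FeatureName:NetFx3 /All /Source:E:\sources\sxs /LimitAccess /LogPath:C:\dism-netfx3.log /LogLevel:4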


  • Online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month it has cost me about $90 to back up a little over 500 GB... This amount of data will increase over time, too, since I am backing up photos (24 MB RAW images + 4-8 MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again) and source code...

    I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems:

    - Mozy seems overly expensive per gig at 50c
    - CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!)
    - Backblaze doesn't support Windows Server
    - Carbonite business pricing is $600 up front for 500 GB of storage... for $229, they will not back up Windows Servers.

    So, other than those, JungleDisk (at 15c per gig) or Zmanda (also at 15c per gig), what other options are there? What are other people using?


  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain, and which is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate. The firewall on the SQL Server machine is very restrictive. 1433 is open to my web server, but I'm getting conflicting information from the web on what additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails: I can talk on 1433 to request NTLM, but the actual authentication always fails. One source says 137, 138, 139, but those are just the NetBIOS ports. Do I really need those? Another source says 135. Still others seem to say 1434... I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator!

    EDIT: The exact error message:

        Msg 18452, Level 14, State 1, Server , Line 0
        Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
        Msg 20002, Level 9, State -1, Server OpenClient, Line -1
        Adaptive Server connection failed

    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.
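
    For reference, a minimal freetds.conf entry of the kind involved here (the server alias and host are placeholders). FreeTDS only attempts NTLM when the username is given in servername\username form and the TDS protocol version is 7.0 or later, so the version line is worth checking alongside the firewall:

        [sqlbox]
            host = sqlbox.example.com
            port = 1433
            tds version = 7.1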


  • Postfix selective header_checks: smtpd_relay_restrictions vs. smtpd_recipient_restrictions

    - by luke
    Some of my customers have implemented commercial software that violates email RFCs, such that we have had to relax our header checks. In consequence, we receive more spam.

    Prolog: I know the domains (customer.com) and IP addresses (a.b.c.d/C) these emails come from.

    Kind request for help: is it possible to set up one Postfix (2.11) instance on Linux such that it applies only some header checks for emails from .*@customer.com, but applies all header checks for all other email sources? I thought of a combination of mynetworks that includes the subnet a.b.c.d/C in smtpd_recipient_restrictions -- allowing all these messages through -- while simultaneously avoiding an open relay with smtpd_relay_restrictions. However, this has not worked out as expected. Any idea or help is highly appreciated. Thanks in advance. Luke

    ==EDIT== For the current issue, I solved the problem by prepending REDIRECTs to header_checks as follows:

        /^received: from.*customer.com.*by mail.own.com.*for.*luke@own.*/ REDIRECT [email protected]

    This works so far as needed. Irrespective thereof, I am still looking for a Postfix configuration that would turn this text-based setting into an IP-address-range based forwarding rule... Thanks. Luke
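
    One hedged building block for the address-range part: Postfix access maps can match CIDR ranges via check_client_access (the file path is arbitrary, and 192.0.2.0/24 stands in for a.b.c.d/C). Note that header_checks themselves are global to a cleanup instance, so making them truly per-client generally requires a second smtpd/cleanup instance or a milter; the map below only exempts the client range within the restriction lists:

        # /etc/postfix/customer_clients.cidr
        192.0.2.0/24    OK

        # main.cf
        smtpd_client_restrictions = check_client_access cidr:/etc/postfix/customer_clients.cidr

    Because each smtpd restriction list is evaluated separately, an OK here does not bypass relay control in smtpd_relay_restrictions.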


  • SPDIF passthrough not working in Windows 7

    - by adriangrigore
    Hi, I'm running Windows 7 on a computer with an Audigy Platinum eX sound card connected to a surround receiver via optical cabling. Sound works fine when listening to non-surround audio sources, such as Windows sounds or MP3. However, when I view a DVD in Media Center and the SPDIF passthrough kicks in, I can only hear an awful noise instead of the movie soundtrack. Also, the receiver does not show the Dolby Digital or DTS symbol, but stays at Dolby Pro Logic, so it seems it doesn't identify the sound encoding properly. I could switch off SPDIF passthrough and use the sound card's decoder instead, but that's not an option for me, since it would create more problems with regular MP3 playback via an additional stereo receiver which is also connected to the same sound card. I've tried both the default Audigy drivers that come with Windows 7 and the latest drivers from the Sound Blaster website, but the problem remains unchanged. Also, I have ensured that the receiver's Dolby Digital decoder is not broken, by successfully connecting it to my PS3 to view a Dolby Digital DVD. Besides, SPDIF passthrough was working fine in Vista before I upgraded to Windows 7. Is there anything else I could try?


  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point, all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

    - Use linux-ha/Pacemaker and failover IPs (so the DNS IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience, Pacemaker lowers availability without fencing).
    - Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    - Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
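
    For concreteness, the resolver tuning mentioned above looks like this (a sketch; the addresses are illustrative, and even with these options a dead first server still adds roughly one timeout per uncached lookup):

        # /etc/resolv.conf
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        options timeout:1 attempts:2 rotate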


  • Can't install PHP after apt-get dist-upgrade

    - by WASD42
    I had a server with a classical LAMP installation on Ubuntu 8.04 that had been running perfectly for months:

        Linux localhost 2.6.24-23-generic #1 SMP Wed Apr 1 21:47:28 UTC 2009 i686 GNU/Linux
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=8.04
        DISTRIB_CODENAME=hardy
        DISTRIB_DESCRIPTION="Ubuntu 8.04.4 LTS"

    I don't know why, but I started with apt-get update and apt-get upgrade, and everything ended with apt-get dist-upgrade :) Everything went alright... but now I can't start Apache or PHP, because PHP was simply deleted. When I try to install it:

        > apt-get install php5
        <...>
        The following packages have unmet dependencies:
          php5: Depends: libapache2-mod-php5 (>= 5.2.4-2ubuntu5.17) but it is not going to be installed or
                         php5-cgi (>= 5.2.4-2ubuntu5.17) but it is not going to be installed
        E: Broken packages

    When I try to install libapache2-mod-php5:

        The following packages have unmet dependencies:
          libapache2-mod-php5: Depends: php5-common (= 5.2.4-2ubuntu5.17) but 5.3.6-6~dotdeb.1 is to be installed
        E: Broken packages

    I don't know what 5.3.6-6~dotdeb.1 is or where this package comes from, because I've already removed the dotdeb repository from the APT sources :/ I've tried apt-get update, apt-get upgrade and apt-get install php5 php5-common php5-cli with no success... Don't know what to try next :(
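
    A hedged way out (the version string is taken from the error above; verify with apt-cache policy first): the ~dotdeb suffix suggests a dotdeb build of php5-common is still installed and outranks the Ubuntu one, so forcing the Ubuntu version back usually unblocks the dependency chain:

        apt-cache policy php5-common
        # downgrade php5-common to the hardy version explicitly
        sudo apt-get install php5-common=5.2.4-2ubuntu5.17
        sudo apt-get install php5 libapache2-mod-php5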


  • Apt pin and self-hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages where we want to control the version. Crucially, this includes puppet, which can be sensitive to versions being different. I want our desktops to only get puppet from our repository, but also for people to be able to add their own PPAs, enable backports, etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://deb.example.org/apt/ubuntu/lucid/ binary/

    And in /etc/apt/preferences.d/puppet:

        Package: puppet puppet-common
        Pin: release a=binary
        Pin-Priority: 800

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    Currently, policy says:

        $ sudo apt-cache policy puppet
        puppet:
          Installed: (none)
          Candidate: (none)
          Package pin: 2.7.1-1ubuntu3.6~lucid1
          Version table:
             2.7.1-1ubuntu3.6~lucid1 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages
                100 /var/lib/dpkg/status
             2.6.14-1puppetlabs1 -10
                500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages
             0.25.4-2ubuntu6.8 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages
                500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages
             0.25.4-2ubuntu6 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages

    If I use n= instead of a=, then I get Package pin: (not found). I'm just plain confused at this point as to what I should use. Any help appreciated.
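
    One hedged suggestion: the a= and n= selectors are read from the repository's Release file, and a flat "binary/" repo often ships no Release file at all, which would explain the failed matches. Pinning by origin (the host name in the deb line) sidesteps that:

        Package: puppet puppet-common
        Pin: origin "deb.example.org"
        Pin-Priority: 1001

    A priority above 1000 also lets apt downgrade to the pinned 2.6.14-1puppetlabs1 version if something newer is already installed.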


  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server as a VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or using WinSCP. The message I get when connecting is "Network error: Connection timed out". Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help.

    The commands I used to install PHP-FPM were from "Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian", and I guess it has something to do with these commands:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The websites that are hosted from my server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess that I have to re-install Ubuntu Server from my "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.
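
    An aside on one of those commands (a general shell fact, not necessarily the cause of the lockout): sudo echo ... >> /etc/apt/sources.list does not run the redirection as root; the calling shell opens the file before sudo starts, so from a non-root shell it fails with "permission denied". The usual form is:

        echo "deb http://php53.dotdeb.org stable all" | sudo tee -a /etc/apt/sources.list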


  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message:

        All TAP-Win32 adapters on this system are currently in use.

    Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. Now I've run addtap.bat, which is provided with OpenVPN, but I still don't see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:

        C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
        Device node created. Install is complete when drivers are updated...
        Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
        Drivers updated successfully.

    I've used Run As Administrator for both the OpenVPN setup and addtap.bat. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7 4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I first got it. Everything else seems to work fine.

    == UPDATE == The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with 64-bit Windows 7. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because the GUI installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.


  • Windows 7 fails to install

    - by Brian Ortiz
    I'm upgrading from Vista SP1 (which was itself upgraded from XP over a year ago) to Windows 7 RTM (64-bit Ultimate to 64-bit Ultimate). After 4 hours or so, the install fails with the message "This version of Windows could not be installed. Your previous version of Windows has been restored, and you can continue to use it." After this error I'm back at my Vista desktop. There was no error that I could see during the install; I just got a message indicating that it was reverting everything. I tracked down the error logs (from C:\$WINDOWS.~BT\Sources\Panther) and uploaded them to Pastebin. Here is an excerpt:

        2009-08-09 02:54:57, Error Number of Enumerated Devices = 21[gle=0x00000103]
        2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x[gle=0x80092004]
        2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x[gle=0x80092004]

    It was suggested that I upgrade to SP2 before upgrading to Windows 7, but this made no difference. I have since uninstalled SP2, as it was creating some problems with a piece of hardware. I know a fresh install is best, but I'm hoping to avoid that because I'd need a new hard drive. Per Reuben's instruction, I found the install's dump and uploaded it here. (266 KB)


  • Codec problems when trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to allow users to quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and each attempt has failed:

    - AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them works. I have tried setting the VFW decompressor for the 'Other MPEG4' setting to XVID or LIBAVCODEC. There is no change in VirtualDub.
    - AVI file B.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XVID codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change.
    - AVI file C.avi will load, and it will split, but won't split using "Direct Stream Copy", claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option, and it created a 27 GB file out of what was supposed to be about a 300-400 MB file.

    Can someone please give me some insight into what I'm messing up?


  • SQL Server agent job to execute SSIS package fails, package succeeds if run manually

    - by growse
    I've got an SSIS package installed on a SQL Server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it into a local table. The remote connection string uses SQL Server authentication, while the local connection uses Windows auth. The remote connection password is protected, and the package was imported with the protection level set to "Rely on server storage and roles for access control".

    - If I run the SSIS package manually, it works.
    - If I run it from the command line using dtexec, it works.
    - If I use runas to switch to the domain account that the SQL Server agent is running under, and then run the package using dtexec, it works.
    - If I create a SQL agent job with a single step to run the package, it fails, providing very little detail as to what's going on.

    I'm guessing it's not able to get the password to log into the remote SQL Server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following:

        Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.".

    If I try to add the password to the connection string manually under data sources in the job step dialog, it refuses to save it, always seeming to remove the 'password' part of the connection string. I thought that SQL Server agent jobs always ran under the context of the account which the SQL Server agent is running under. This account is a sysadmin on the local SQL Server, and the package works using dtexec under that account, so why would it fail when trying to run as an agent job?
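
    A hedged avenue to rule out the execution context: run the job step under an explicit proxy tied to the same domain account, which makes the security context unambiguous (all names and the secret below are placeholders):

        -- credential mapped to the domain account the agent runs under
        CREATE CREDENTIAL SSISRunnerCred WITH IDENTITY = N'DOMAIN\svc_sqlagent', SECRET = N'account-password';
        GO
        -- expose the credential as an agent proxy for the SSIS subsystem (subsystem id 11)
        EXEC msdb.dbo.sp_add_proxy @proxy_name = N'SSISProxy', @credential_name = N'SSISRunnerCred';
        EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SSISProxy', @subsystem_id = 11;

    The proxy then appears in the job step's "Run as" dropdown. If the job still fails under the proxy, the problem is more likely the package protection level than the agent account.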


  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command:

        # firewall-cmd --zone=public --add-port=3000/tcp --permanent

    However, I can't access the server on port 3000. From an external box:

        telnet 178.62.16.244 3000
        Trying 178.62.16.244...
        telnet: connect to address 178.62.16.244: Connection refused

    There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000, which works fine externally. My app is definitely listening on the port, too:

        Proto Recv-Q Send-Q Local Address    Foreign Address  State   User  Inode  PID/Program name
        tcp   0      0      0.0.0.0:3000     0.0.0.0:*        LISTEN  99    36797  18662/node

    firewall-cmd doesn't seem to show the port either -- see how "ports" is empty. You can see the forward rule I mentioned earlier:

        # firewall-cmd --list-all
        public (default, active)
          interfaces: eth0
          sources:
          services: dhcpv6-client ssh
          ports:
          masquerade: no
          forward-ports: port=80:proto=tcp:toport=8000:toaddr=
          icmp-blocks:
          rich rules:

    However, I can see the rule in the XML config file:

        # cat /etc/firewalld/zones/public.xml
        <?xml version="1.0" encoding="utf-8"?>
        <zone>
          <short>Public</short>
          <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
          <service name="dhcpv6-client"/>
          <service name="ssh"/>
          <port protocol="tcp" port="3000"/>
          <forward-port to-port="8000" protocol="tcp" port="80"/>
        </zone>

    What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
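
    The symptom above matches one firewalld detail worth checking first: --permanent writes only to the XML configuration, not to the running firewall, so the runtime state stays unchanged until a reload (conversely, a rule added without --permanent disappears at the next reload). A hedged fix:

        # load the permanent configuration into the running firewall
        firewall-cmd --reload
        # the runtime state should now list the port
        firewall-cmd --zone=public --list-ports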


  • Shrink a Volume Group in LVM / Linux in order to install Windows on the freed space

    - by Stephan Kristyn
    I have a volume group with unused space. This 40 GB should become a partition so that I can install Microsoft Windows 7 on it. I do not have extra space on the drive - that is why I want to shrink the VG! The volume group berta resides on sda2 and consists of:

        lv_root
        lv_swap
        unused_space

    I want it to become lv_root and lv_swap, with a separate partition made out of unused_space, and Microsoft Windows 7 installed on that partition. I do not understand why Linux made simple things complicated. I utterly hate LVM and think it's absolute bollocks.

    Useful sources: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-system-config-lvm.html

    Edit: I found the answer; the necessary steps show how complicated LVM really is. In my opinion, it is best to avoid LVM until pvresize matures as promised in its man pages. Answer: http://fedorasolved.org/Members/zcat/shrink-lvm-for-new-partition

    If you run into problems when you want to remove lv_swap, even in rescue mode, then try:

        swapoff /dev/vg_1/lv_swap
        lvchange -an /dev/vg_1/lv_swap
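
    A compressed sketch of the linked procedure (run from a rescue/live environment, with current backups; the device name follows this question and the 40G figure is illustrative -- every step can destroy data if the free extents are not where pvs says they are):

        # confirm the free extents sit at the end of the PV; pvmove them there if not
        pvs -v --segments /dev/sda2
        # shrink the physical volume into the freed space
        pvresize --setphysicalvolumesize 40G /dev/sda2
        # then shrink the sda2 partition itself with fdisk/parted and
        # create a new partition in the freed region for the Windows installer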


  • During Vista Repair - No operating system is listed.

    - by Jack Marchetti
    After a Windows update, my brother's Gateway computer loads to "Step 3 of 3: 0%" and reboots. Safe Mode does not work. I placed a Vista DVD in the drive and rebooted. (Note: this is my Vista DVD, not the recovery/system disc that would come with the computer. Gateway does not ship CDs anymore; I believe they store the recovery image on a partition, but that partition has been wiped out.) I chose "Repair Your Computer". I get a dialog box, but no operating system is listed. I'm then prompted to "Load Drivers". What drivers am I supposed to be loading here, and from where? I placed a CD in the drive to "load drivers", but I didn't see my DVD drive listed. All I saw was X:\Sources, along with several removable media slots that were empty. On another screen I tried Startup Repair, which didn't do anything. I attempted to use System Restore, but it doesn't detect the hard drive. I'm guessing that I'm missing some sort of SATA driver, and that is why the hard disk is not being found. Any ideas on this?


  • Why does Apache throw 403 on the index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using the following commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
        --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
        --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
        --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
        --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to a request for http://localhost:8060/index.html (presuming that port 8060 is used). These are the directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that I've got Apache on a mounted (default auto mount configured while installing Ubuntu) partition.

    Log files -- access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
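
    (13)Permission denied is a filesystem-level error, so a hedged first check is whether the user Apache runs as can traverse every directory above htdocs on the mounted partition (each path component needs the execute bit). namei shows the whole chain at once:

        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        # any component missing x for the apache user can then be fixed, e.g.:
        # chmod o+x /mnt/workspace /mnt/workspace/servers ...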


  • Error while installing boost_1_54

    - by Farhat
    On trying to install Boost, I get this error during the configuration checks. Googling did not give any pointers.

        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks
            - 32-bit                   : no  (cached)
            - 64-bit                   : yes (cached)
            - arm                      : no  (cached)
            - mips1                    : no  (cached)
            - power                    : no  (cached)
            - sparc                    : no  (cached)
            - x86                      : yes (cached)
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - has_icu builds           : no  (cached)
        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam
            - zlib                     : yes (cached)
            - iconv (libc)             : yes (cached)
            - icu                      : no  (cached)
            - icu (lib64)              : no  (cached)
            - compiler-supports-ssse3  : yes (cached)
            - compiler-supports-avx2   : no  (cached)
            - gcc visibility           : yes (cached)
            - long double support      : yes (cached)
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - zlib                     : yes (cached)

    How can the alternative for allocator_sources be located? Thanks.
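
    Since the "No best alternative" error is confined to Boost.Coroutine, a hedged workaround (if that library is not needed) is simply to exclude it from the build:

        ./b2 install --without-coroutine

    Another variant sometimes suggested for this 1.54 check is forcing the properties the unmatched alternatives require (e.g. ./b2 link=static threading=multi install); whether either cures the underlying check here is a guess.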


  • Ubuntu questions - important

    - by asdasd
    Our school installed some modified Edubuntu machines, so I have some questions about setting a few things up:

    1. How can we play HD videos? They were made for Windows machines and are in .wmv format, but we need to play them in our multimedia class and don't know how - which player, which codecs, etc.
    2. How do we properly edit the /etc/apt/sources.list file? Anything we try to install via apt-get just fails with an "E: ... is not available" error. Please tell me which repositories to put in there so we can install some tools.
    3. Where are viruses/trojans usually hidden in Ubuntu? I mean, in which directories? Our computers are behaving really slowly and we need to check for malware manually - we are not even allowed to install any AV software. So tell me the usual directories and places for hiding such files, how they are hidden, how to recognize them, etc.
    4. Any other nice tricks/tips that we need to know.

    Thank you very much in advance.
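
    For question 2, a minimal sources.list of the usual shape (a sketch: "precise" is a placeholder release name - replace it with the output of lsb_release -sc on these machines):

        deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse

    followed by sudo apt-get update. For the .wmv question, VLC (package vlc) typically plays WMV without extra codec hunting.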


  • Debian - starting UFW (Uncomplicated Firewall) before network interfaces are operational

    - by Tomasz Zielinski
    I want to install UFW on Debian Lenny. Everything looks straightforward, except that I don't know where to plug in the UFW startup script so that it configures iptables before hax0rs can break in. I've reviewed the runlevel directories, and in /etc/rc0.d, /etc/rc6.d and /etc/rcS.d there are items like these:

        S35networking -> ../init.d/networking
        S36ifupdown -> ../init.d/ifupdown

    Runlevels 0 and 6 are for shutdown and reboot, so I guess nothing should be changed there, but runlevel S advertises itself (in its README) as something for me:

        The scripts in this directory whose names begin with an 'S' are executed
        once when booting the system, even when booting directly into single
        user mode.

        The following sequence points are defined at this time:
        * After the S40 scripts have executed, all local file systems are
          mounted and networking is available. All device drivers have been
          initialized.

    (What bothers me is that both rc0/6.d and rcS.d point to the same networking and ifupdown scripts, but after looking at the sources I believe those scripts are smart enough to figure out where to start and where to stop networking.)

    Now, I think that I should plug my /lib/ufw/ufw-init into /etc/rcS.d, with a priority higher than that of ifupdown and networking, i.e. <= 38 for my /etc/rcS.d. Am I right in this "analysis"?
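
    A sketch of that plan, under the asker's own assumptions (the S34 number and the use of /lib/ufw/ufw-init as an init-style script are unverified on Lenny - check that the script accepts start/stop arguments first):

        # run the ufw init script in rcS before networking (S35networking)
        sudo ln -s /lib/ufw/ufw-init /etc/rcS.d/S34ufw-init

    On the S34 versus <= 38 question: before S35 the interfaces are down anyway, so loading the ruleset earlier only narrows the exposure window; anything that runs before S40 satisfies the README's sequence point.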

