Search Results


  • When is an object oriented program truly object oriented?

    - by Syed Aslam
    Let me try to explain what I mean. Say I present a list of objects and need to get back the one a user selects. The classes I can think of right now are: ListViewer, Item, and App [the calling class]. In a GUI application, clicking on a particular item usually selects it; in a command-line application, the selection is some input, say an integer representing that item. Let us go with the command-line application here. A function lists all the items and waits for the choice of object, an integer. So here, when I get the choice, should the choice itself be conceived as an object? And based on the choice, the function returns the corresponding object from the list. Does writing the program the way explained above make it truly object oriented? If yes, how? If not, why? Or is the question itself wrong, and I shouldn't be thinking along those lines?

  • Dual Screen with 12.04 ATI randr extension is not present

    - by Trevor Pearson
    Here's my problem: I have two screens. One is a 22" LED monitor and the other is a 42" LED TV. In Windows 7 I run XBMC on the second monitor, and I'm trying to mimic the same function by setting XBMC to display on the TV only. So I installed the latest x64 Linux drivers from ATI and configured (using the Catalyst control panel) a single display (multi desktop). I then got a white screen with a black X, so I enabled Xinerama. With Xinerama enabled I got the displays to work correctly; however, I received an error when I tried to enter display settings to change the launcher location. The error message was "randr extension not present". So I tried to install libxandr2 using the terminal, but here's what I get:

      trev@Lrig:~$ sudo get-apt install libxander2
      [sudo] password for trev:
      sudo: get-apt: command not found

    I'm at a loss now because I can't find a solution for the xrandr error message. If my specs are important: AMD Athlon X2 6400+, 8 GB RAM, Radeon HD6950.
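
    Update: two typos may be at play in that transcript - the command is apt-get (not get-apt), and the RandR library package is presumably libxrandr2 (not libxander2) - so the corrected attempt would look like:

      sudo apt-get update
      sudo apt-get install libxrandr2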

  • gnome-terminal and logging

    - by UAdapter
    Is there any way to log everything that is displayed in gnome-terminal? For example, I have a complex command: doSomethingThatPrintoutsAlot ; doSomethingThatPrintoutsAlot2 ; doSomethingThatPrintoutsAlot3 I can add > file, but then I would have to do it for each command, and I would have to use tail in another console to see the output. Maybe gnome-terminal supports logging everything? There is .bash_history, so it might also support something like this.
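
    Update: a shell-level sketch that would presumably cover this without gnome-terminal's help (filenames are only examples):

      # Record everything displayed in the session into session.log
      script session.log
      doSomethingThatPrintoutsAlot ; doSomethingThatPrintoutsAlot2 ; doSomethingThatPrintoutsAlot3
      exit    # stop recording

      # Or log a single pipeline while still watching it live
      { doSomethingThatPrintoutsAlot ; doSomethingThatPrintoutsAlot2 ; } 2>&1 | tee combined.log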

  • How to move files over samba share with gnomevfs cli

    - by Allan
    OK, I am in the process of backing up my film collection to a NAS, and I want to automate this as much as possible as I have to work at the same time. I am trying to set up a daily dump of ISOs ready to be converted overnight, and I would like to do this as a cron job using gnomevfs. I have been able to connect and do an ls successfully with gnomevfs-ls smb://user:WORKGROUP:password@media-centre/videos/ but I am having trouble setting up a mv command from a local folder to the same shared folder. I keep getting the Usage: gnomevfs-mv <from> <to> message, which isn't particularly informative ;) Any ideas?
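
    Update: for the record, my best guess at the invocation, assuming gnomevfs-mv wants a full URI on both sides (local files as file:// URIs) just like gnomevfs-ls does - the local path here is illustrative:

      gnomevfs-mv file:///home/allan/rips/film.iso \
          smb://user:WORKGROUP:password@media-centre/videos/film.iso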

  • How can I repair a ttf-umefont installation

    - by Julio C Morales
    I have Ubuntu 13.04 and I started to upgrade to 13.10. The upgrade started well and the files were successfully downloaded, but then a crash message appeared: <type 'exceptions.SystemError'> (E: The package ttf-umefont needs to be reinstalled, but I can't find an archive for it) I looked for some info in forums, then I tried to update from the terminal with apt-get -f update; nothing changed. Then I tried to delete the ttf-umefont file, and nothing changed. Now I can't update anything because of this error message. Could anyone help me?
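
    Update: one commonly suggested recovery for the "needs to be reinstalled" state, sketched on the assumption that the package can still be downloaded from the repositories:

      sudo apt-get update
      sudo apt-get install --reinstall ttf-umefont
      # If apt still refuses, clear the broken status entry first, then reinstall:
      sudo dpkg --remove --force-remove-reinstreq ttf-umefont
      sudo apt-get install ttf-umefont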

  • Is this Ubuntu One DBus signal connection code correct?

    - by Chris Wilson
    This is my first time using DBus, so I'm not entirely sure if I'm going about this the right way. I'm attempting to connect to the Ubuntu One DBus service and obtain login credentials for my app; however, the slots I've connected to the DBus return signals detailed here never seem to be firing, despite a positive result being returned during the connection. Before I start looking for errors in the details relating to this specific service, could someone please tell me if this code would even work in the first place, or if I've done something wrong here?

    int main()
    {
        UbuntuOneDBus *u1Dbus = new UbuntuOneDBus;

        if( u1Dbus->init() ){
            qDebug() << "Message queued";
        }
    }

    UbuntuOneDBus::UbuntuOneDBus()
    {
        service = "com.ubuntuone.Credentials";
        path = "/credentials";
        interface = "com.ubuntuone.CredentialsManagement";
        method = "register";
        signature = "a{ss} (Dict of {String, String})";

        connectReturnSignals();
    }

    bool UbuntuOneDBus::init()
    {
        QDBusMessage message = QDBusMessage::createMethodCall( service, path, interface, method );
        bool queued = QDBusConnection::sessionBus().send( message );
        return queued;
    }

    void UbuntuOneDBus::connectReturnSignals()
    {
        bool connectionSuccessful = false;

        connectionSuccessful = QDBusConnection::sessionBus().connect( service, path, interface, "CredentialsFound", "a{ss} (Dict of {String, String})", this, SLOT( credentialsFound() ) );
        if( ! connectionSuccessful )
            qDebug() << "Connection to DBus::CredentialsFound signal failed";

        connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface, "CredentialsNotFound", "(nothing)", this, SLOT( credentialsNotFound() ) );
        if( ! connectionSuccessful )
            qDebug() << "Connection to DBus::CredentialsNotFound signal failed";

        connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface, "CredentialsError", "a{ss} (Dict of {String, String})", this, SLOT( credentialsError() ) );
        if( ! connectionSuccessful )
            qDebug() << "Connection to DBus::CredentialsError signal failed";
    }

    void UbuntuOneDBus::credentialsFound()
    {
        std::cout << "Credentials found" << std::endl;
    }

    void UbuntuOneDBus::credentialsNotFound()
    {
        std::cout << "Credentials not found" << std::endl;
    }

    void UbuntuOneDBus::credentialsError()
    {
        std::cout << "Credentials error" << std::endl;
    }
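
    Update: as a sanity check outside Qt, a dbus-send call along these lines - service, path, interface, and method names taken from the code above; the dict payload is a made-up placeholder - should at least show whether the service answers on the session bus:

      dbus-send --session --print-reply \
          --dest=com.ubuntuone.Credentials \
          /credentials \
          com.ubuntuone.CredentialsManagement.register \
          dict:string:string:key,value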

  • How to reinstall many removed packages at once?

    - by Logan
    I used the sudo apt-get remove python command and accidentally removed a bunch of packages that were required. I logged in via the command line and installed ubuntu-desktop again, but there are other packages still missing, and I'm looking for a way to easily reinstall those removed packages. Since there's a log in software-center, I wanted to ask what the easiest way might be to roll back the changes or extract the removed-packages list from the Software Center. Note: I typed sudo apt-get install .... .... ... ... for about two dozen of those removed programs in that list, but when I pressed enter it didn't install any of them, because some package names couldn't be found. The programs were all removed on the same date.
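
    Update: a sketch of rebuilding the list from apt's own history log instead (the log path is standard; the sed clean-up is my guess at the "Remove:" line format, and installing one package at a time keeps a single bad name from blocking the rest):

      # The removals are recorded on "Remove:" lines in apt's history log
      grep '^Remove:' /var/log/apt/history.log | tail -n 1

      # Strip version noise and reinstall each package individually
      for pkg in $(grep '^Remove:' /var/log/apt/history.log | tail -n 1 \
                   | sed -e 's/^Remove: //' -e 's/([^)]*)//g' -e 's/,/ /g'); do
          sudo apt-get install -y "$pkg" || echo "skipped: $pkg"
      done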

  • How To Specify Bitrate, Codec and Demultiplexing for VLC Video Capture or Recording

    - by Subhash
    I capture video from an old TV tuner card - a Pinnacle PCTV - using VLC. The video comes from the composite input, and the audio from, I guess, the mixer or line-in. The command I use is: vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 :input-slave="alsa://hw:0,0" In VLC I have enabled the Advanced Controls toolbar, which allows me to record videos when I want to. However, these videos are uncompressed - very big, and they play only with VLC. Totem throws the "Could not demultiplex stream" error, so I need to convert them using WinFF to reduce their size and make them playable with Totem and other software. My question is whether I can configure the recording settings - the codecs and the bitrate - and also get the stream demultiplexed. If I pass any -sout parameter with the command, I get a "Segmentation fault". I use 64-bit Ubuntu 10.10.
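
    Update: for reference, this is the kind of sout chain I have been trying to pass - the transcode/std form is straight from VLC's streaming documentation, but the codec names and bitrates are illustrative, not verified against this card:

      vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 \
          :input-slave="alsa://hw:0,0" \
          ":sout=#transcode{vcodec=mp4v,vb=1800,acodec=mpga,ab=128}:std{access=file,mux=mp4,dst=capture.mp4}"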

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft, you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS, with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant they had to back out of the upgrade and wait for a couple of hotfixes:

      KB983504 - Hotfix
      KB983578 - Patch
      KB2401992 - Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around, it takes time to do the upgrade, and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not show up

    This was a dire situation, as taking 20+ hours to repeat the upgrade would overrun the window and leave the customer with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

      TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"
      - http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command!

    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?

    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "TFSConfig Recover", but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema:

      KB          Schema    Successful
      KB983504    341       Yes
      KB983578    344       sort of
      KB2401992   360       nope

    Figure: KB / schema table, with notation as to each hotfix's success

    The schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward

    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be, and the engineer gave us the only way forward: the removal of the servicing number from the database, so that the re-attach process would apply the latest schema. If this sounds a little like the "TFSConfig Recover" command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it

    Now that we have done that, we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice, we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again.

    What would you do? Would you take the "cut and shut" to production and hope for the best?

  • To what extent do code-signing certificates boost sales of your software?

    - by Dan W
    In the experiences of everyone here, have you found a certificate to boost sales of your (downloadable) program? I produce .NET software, and upon clicking the installation file, Windows 7 pops up a message saying the software is from an "unknown publisher" and to proceed with caution. On Windows 8 this appears to be even more prominent, and may adversely affect the number of downloads, and therefore the number of sales. A certificate will help soften this 'warning' by (for example) changing the warning's colour from orange to blue and giving the publisher's name instead of 'unknown'. But I'd like more tangible evidence, since many people are obviously used to that message and may not care and download anyway. So has anyone noticed a jump in sales after the switch?

  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today, to version 6.24-1build0.10.10.1, after being prompted by the update manager. However, this now causes some compilation failures, so I'd like to revert to the previous version that I had installed. I've tried using Synaptic, but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version: sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10 But I'm not sure that I have the correct version:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found

    I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 changelog?
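
    Update: a sketch of asking apt which versions it can actually see before pinning one (output will vary by mirror):

      apt-cache madison sun-java6-jdk
      # or the more verbose form:
      apt-cache showpkg sun-java6-jdk
      # then install an exact version string that madison actually listed:
      sudo apt-get install sun-java6-jdk=<version-from-madison>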

  • java.net.SocketException: Connection reset; No available router to destination

    - by adejuanc
    Sometimes a WebLogic Server will be unreachable. There could be several reasons: the server may be hung or down, and thus not responding to a ping, or it could be related to a network issue. A possible point of failure in the network layer is a firewall. You should contact your network team if you see messages like the following:

      java weblogic.Admin -adminurl t3://adminServer.mydomain.com:7777 -username admin -password lockdown PING
      Failed to connect to t3://adminServer.mydomain.com:7777: Destination unreachable; nested exception is:
      java.net.SocketException: Connection reset; No available router to destination

    A possible workaround for the ping command is to use the HTTP protocol instead of the t3 protocol. To enable this you must configure WebLogic to do HTTP tunneling: access the administration console and click Servers -> (the server you want to reach) -> Protocols -> HTTP -> Enable Tunneling. No restart is necessary. Then the following command will work:

      java weblogic.Admin -adminurl http://adminServer.mydomain.com:7777 -username igmadmin -password l0ckdown PING

  • How to workaround or diagnose a kernel panic when "safely removing" external hdd?

    - by Shawn
    I'm experiencing an issue when using the "Safely Remove" option to remove my 1TB external HDD from the Unity launcher. Not every time, but occasionally, my screen will go black and display LARGE amounts of text (which I obviously cannot screen-cap). The gist of the info displayed is that unmounting or 'safely removing' the drive causes a kernel panic. Is there a command-line command to remove mounted drives, or at least one that would show me some sort of error output when the drive is removed? I'm trying to narrow down the cause. I could be imagining this, but it seems to happen most often when I have other programs running when I remove the drive (i.e. Firefox, Transmission). Please note that my external drive is not in use when I attempt to remove it, and it is not being used by either Firefox or Transmission at these times. Any help would be appreciated.
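
    Update: the closest command-line equivalent I know of, sketched assuming the drive shows up as /dev/sdb1 mounted under /media (adjust to the real device) - at least any error text stays visible in the terminal this way:

      sudo umount /media/MyDrive        # or: udisks --unmount /dev/sdb1
      sudo hdparm -y /dev/sdb           # spin the disk down before unplugging
      dmesg | tail -n 50                # look for kernel messages from the unmount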

  • Remove bottom panel from a remastered LiveCD

    - by Uri Herrera
    How can I remove, or auto-hide to 0, the bottom GNOME panel on a remastered livecd, and autostart AWN (or any dock, for that matter) to replace it - just as moonOS 4 (the distro that gave me the idea to try this) does? I am using this command to enable auto-hide: gconftool-2 -t bool -s /apps/panel/default_setup/toplevels/bottom_panel/auto_hide "true" I figured out how to autostart AWN; however, that involves removing BOTH panels, so I'm not quite there yet. Using UCK to run gconf-editor and manually edit the option to auto-hide the bottom panel doesn't work: when testing the livecd, BOTH panels are still there, even though I use the command and then run gconf-editor to check that the option is enabled (which it is). Once the livecd starts, the auto-hide feature is disabled.
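
    Update: a sketch of writing the key into the system-wide gconf defaults instead of the user profile, which is presumably what the livecd session falls back to (the config source path is the stock defaults database):

      sudo gconftool-2 --direct \
          --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
          -t bool -s /apps/panel/default_setup/toplevels/bottom_panel/auto_hide true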

  • Updated 11.10 - now trackpad and wifi won't work. How can these be fixed?

    - by Aron Standley
    I reinstalled 11.10 on my Asus EEE 1005HA, and my trackpad hasn't been working since. I've tried the workarounds mentioned in this post: http://ubuntuforums.org/showthread.php?t=1867357 but I get a message on the command line that says "Synaptic isn't installed." When I go to the network settings so I can get online and install Synaptic, I can't turn on wifi - no wireless options exist. How can I turn on wifi from the command line on a new 11.10 install? Or is there a way to turn on the trackpad and wifi from the install thumb drive? How can I move forward?
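
    Update: the usual command-line checks for blocked wifi, as a sketch (interface names will vary; the nmcli form is the older syntax that 11.10 ships):

      rfkill list                 # is the radio soft- or hard-blocked?
      sudo rfkill unblock wifi    # clear a soft block
      nmcli nm wifi on            # NetworkManager's wifi switch
      sudo ifconfig wlan0 up      # bring the interface up directly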

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.

    Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is as shown below:

      # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

      number_of_days : the number of days to be considered for scanning the historical logs.

    The command returns a list of events in the following format, with each field separated by a semicolon [;]:

      resource_group_name;source_nodes;target_nodes;time_stamp

      source_nodes : node names from which the resource group failed over or was switched manually.
      target_nodes : node names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag.

    In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

      # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
      geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
      geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
      oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
      asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
      asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
      oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
      geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
      geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
      asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
      asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
      oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
      oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
      oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some of the entries may have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya
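
    Since the output is semicolon-delimited, it splits cleanly in a shell. For instance, a quick sketch along these lines renders the events in a friendlier form:

      /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3 |
      while IFS=';' read -r rg source target stamp; do
          printf '%s: [%s] -> [%s] at %s\n' "$rg" "$source" "$target" "$stamp"
      done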

  • changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that runs at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include these commands:

      sudo hdparm -y /dev/sdc
      sudo hdparm -y /dev/sdd
      exit 0

    While I think this should work, it seems the allocated drive nodes keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde, but they keep getting jumbled around, making the drive I wish to shut down different from sdd, which makes the task of shutting down the right drive on start-up quite cumbersome. I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into /etc/:

      # <file system> <mount point> <type> <options> <dump> <pass>
      # Entry for /dev/sda1 :
      UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1
      # Entry for /dev/sdd1 :
      UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
      # Entry for /dev/sdb1 :
      UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
      # Entry for /dev/sdf1 :
      UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
      # Entry for /dev/sdc1 :
      UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0

    It seems every time I boot, the order of the drives changes, and I do not know how to resolve this. A quick workaround was to go with the UUID instead of the dev letter, by editing /etc/rc.local to include:

      hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
      hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    So I thought I was in the clear, as I heard both hard drives die down during the boot sequence, BUT as soon as I log in, both drives start up again! So now I have to figure out what is making them start up again after login, or perhaps another way to get them to turn off. Is there some kind of command I can get to execute after login? I tried editing the startup applications to include an autossh with:

      autoshh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
      autoshh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945

    but this did not seem to work to turn off the disks after login.
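
    Update: two things that helped while poking at this - /dev/disk/by-id gives stable, model-based names as an alternative to UUIDs, and hdparm -C confirms whether a disk actually stayed spun down after login (the UUID is from my fstab above):

      ls -l /dev/disk/by-id/                               # stable names that survive reboots
      sudo hdparm -C /dev/disk/by-uuid/443AFBAD7FE50945    # reports "active/idle" or "standby"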

  • MOSS 2007 WSP Retraction 'Error"

    - by juanlarios
    This one is a quick post, but I thought I would post this information as I could not find anything that helped me with this specific scenario. Please read the entire article before taking action, as there are some irreversible or very troublesome routes I caution about!

    Problem: I had a client trying to retract a WSP from Central Admin, and it would eventually go to an 'Error' state. I could not retract it, and after looking at event logs I figured it was a problem with security. I tried several accounts, checked the databases to see if there was some issue with read-only databases, and nothing was working.

    Solution: Delete the solution from Central Admin! Yes, I said it. With stsadm, just delete the solution from Central Admin using this command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deletesolution -name "yoursolution.wsp"

    What has just happened is that Central Admin does not know about the WSP anymore, but the feature and any deployed files are still on the server. For whatever reason, SharePoint was not able to retract the files as it normally does. Now you can do one of two things: you can add the solution again to Central Admin and deploy over top of the deployed files so it overrides them, or simply clean up the files manually. I re-added the solution through stsadm, then deployed through stsadm using the -force option in the command. This overrides the existing files on the server. If you deploy through Central Admin, it will tell you that you need the -force option, which is not offered as part of the UI in Central Admin. Use the following command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deploysolution -name "YourSolution.wsp" -immediate -allowgacdeployment -force

    Just to make sure everything was good, I retracted the solution again, and it worked! Then I deleted the solution from Central Admin altogether. I checked the server and noticed all the files that were deployed with the WSP were cleaned up properly, and I then re-added the new WSP the client was looking to install (an updated WSP).

    Conclusion: I have no idea why it was not able to retract; I have seen this several times, and I don't know if it has to do with the security of certain accounts. Although it's annoying at times, it is fairly easy to fix if you have good instructions. Hope it helps you out!

    *** WORD OF CAUTION - if you clean up the files manually, you might want to uninstall the features through stsadm commands, as SharePoint might still recognize the features that were deployed as part of the WSP. You don't want to get into the mess of deleting files that are still part of activated or installed features. This is why I suggest doing what I did.

  • Out of memory on MATLAB

    - by Eric Sánchez
    I'm trying to run a script in MATLAB 2011a which calculates some means for a 50-year climatology. When I started to run the script for all the years, it worked fine until the 20th iteration, and then this message appeared: Out of memory. Type HELP MEMORY for your options. Then I used clear v1 v2 v3 ... to clear all the variables inside the function; I also used clear train because I saw it in another forum. With or without these modifications, I ran the script again (from the 21st iteration), and the result is the same message - but curiously, sometimes it runs a year and then stops. Any ideas about solving this problem? What do I have to clear for it to run correctly? (This MATLAB version doesn't have the memory command, which maybe could help me.)

  • DockbarX Applet not loading

    - by Nik
    I used to have the DockbarX applet installed on my GNOME panel. However, one day when I logged in I got an error message, which can be seen in the screenshot below. So I removed the applet and then tried adding it to the gnome panel again, but I still get the error message. I am running the latest version of DockbarX, 0.43, with the helpers enabled and media buttons etc. I did not update it recently and have been using this version for a couple of weeks now, but got this problem only now. How can I solve this problem?

  • Scripting with variables from file

    - by Nooster
    I have several videos on my PC that I would like to shorten. For instance, I have a 30 sec video where I want the section from second 15 to second 20 (a 5 sec video). To cut this, I use avconv: avconv -i input.mp4 -ss 15 -acodec copy -vcodec copy -t 5 output.mp4 This command works pretty well. I have many videos I want to cut the same way, which is why I created a text file containing the information: input name, start of cut, length of cut, output name. Those are written into in.txt, which looks like this:

      input.mp4 15 5 output.mp4
      input1.mp4 32 10 output1.mp4
      input2.mp4 10 7 output2.mp4
      ...

    My question is: how do I have to modify the avconv command to cut my videos automatically? What I tried was this, but it didn't work at all: avconv -i $1 -ss $2 -acodec copy -vcodec copy -t $3 $4 < in.txt Any idea?
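
    Update: a sketch of the while-read pattern that should fit here - the < /dev/null matters, because otherwise avconv itself reads the rest of in.txt from stdin:

      while read -r infile start length outfile; do
          avconv -i "$infile" -ss "$start" -acodec copy -vcodec copy \
                 -t "$length" "$outfile" < /dev/null
      done < in.txt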

  • How to properly set up Sun's JDK?

    - by jurchiks
    I'm trying to manually install the Sun JDK package (I have my reasons, don't bother asking why). I've successfully extracted the .bin file into /usr/lib/jvm/jdk1.6.0_23, but the problem is the PATH variable. I added this line to the /etc/environment file: JAVA_HOME="/usr/lib/jvm/jdk1.6.0_23" and added JAVA_HOME/bin to the PATH variable, BUT the OS still doesn't recognise the java command; it says it's not installed and offers me gcj and openjdk. There was another way, by using java-package to convert the .bin into a .deb installer, but unfortunately that package is not available for maverick, so I can't do it that way. How can I make the PATH variable work, and is there anything else required apart from the environment variables to make it all work? When I try to use the update-java-alternatives -l command, it says the following:

      awk: cannot open /usr/lib/jvm/*.jinfo (No such file or directory)
      jdk1.6.0_23 /usr/lib/jvm/jdk1.6.0_23

    What should be the name of the file, and what should it contain?
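
    Update: a route around this that I'm considering is registering the binaries with update-alternatives directly, which needs no .jinfo file; a sketch (the priority and the list of binaries are illustrative):

      sudo update-alternatives --install /usr/bin/java  java  /usr/lib/jvm/jdk1.6.0_23/bin/java  1
      sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_23/bin/javac 1
      sudo update-alternatives --config java    # then pick the Sun JDK from the menu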

  • How to prevent thunderbird gmail/IMAP from marking deleted/archived emails as read

    - by Jesse
    I have Thunderbird set up to use Gmail IMAP. I followed the various guides that recommend setting Edit -> Account Settings -> Server Settings -> When I delete a message -> Remove it immediately. Unfortunately, this didn't have quite the effect I wanted, and after more digging I discovered here that:

      [Thunderbird]             [Gmail]
      Mark Deleted            = Archive
      Copy to [Gmail]/Trash   = Delete Permanently
      Remove it immediately   = Archive

    Unfortunately, whenever I deleted (archived) a message, it was also marked as read, whether it had been or not. I didn't want this, because I like to keep my inbox clean and archive anything that doesn't actually matter, such as funny emails, etc., and then go back and look through the archive later when I have time. What settings do I need to prevent messages from being marked read when I delete them? I'm using Thunderbird 17.0 on Ubuntu 12.04.

  • Can't install Oracle J2EE due to "An internal Error has occurred"

    - by Gabriel Mendez
    I am trying to install the Oracle Java EE 6 SDK with GlassFish on Ubuntu 12.04 with Java SDK 7. I have already downloaded java_ee_sdk-6u4-jdk7-linux-x64.sh, but when I run it in a terminal, the wizard appears and, after a few steps, I get an error message dialog: An internal Error has occurred. Please contact your system administrator... null. Meanwhile, the terminal shows something like: WARNING: Could not process a navigation event for command=AC_NEXT [Command=AC_NEXT Error=null] What can I do? How can I install Java EE/GlassFish under 64-bit Linux?

  • Printer State: Idle - /usr/lib/cups/backend/dnssd failed

    - by bilbo88
    A printer was installed and used to work, but it does not now. A job is submitted successfully but keeps waiting there, never printing (as seen with the lpq command). Pinging the printer works fine. (The printer is on and works, as it prints from Windows.) But in the printer properties, it shows Printer State: Idle - /usr/lib/cups/backend/dnssd failed. The following command does not help either: sudo service avahi-daemon restart Does anyone have a solution? Thank you for the help.
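
    Update: since it is the dnssd (discovery) backend that is failing, one sketch of a workaround is to re-point the queue at the printer's IP address directly rather than a dnssd:// URI - the queue name and address here are placeholders:

      sudo lpadmin -p MyPrinter -E -v socket://192.168.1.50:9100
      # or, if the printer speaks IPP:
      sudo lpadmin -p MyPrinter -E -v ipp://192.168.1.50/ipp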
