Search Results

Search found 23555 results on 943 pages for 'command timeout'.


  • Why does this script not open parallel gnome-terminals on a server?

    - by broiyan
    Why am I not able to open parallel gnome-terminals on my server, while I can on my client? Here is a test that illustrates the problem. The parent script:

        #!/bin/bash
        # this is the parent script
        gnome-terminal --command "./left.sh"
        sleep 10
        gnome-terminal --command "./right.sh"

    left.sh:

        #!/bin/bash
        echo "this is the left script"
        read -p "press any key to close this terminal" key

    right.sh:

        #!/bin/bash
        echo "this is the right script"
        read -p "press any key to close this terminal" key

    When I run this on a regular Ubuntu desktop (Maverick) I see two terminals after 10 seconds. When I run it on a Maverick server at a server farm, the second window does not appear until after I close the first one and wait 10 seconds. I am using tightvncserver to view the server desktop. (I could have simplified further: the 10-second sleep is extraneous to the problem. In my real-world application I need the first terminal to do some real work before starting the second; the problem probably still exists even with no sleep.)
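
    One variation that may be worth a try (my suggestion, not part of the original question): GNOME Terminal 2.x has a --disable-factory option that makes each invocation run as its own process instead of handing the request to an already-running terminal factory, which changes how the windows are opened. A minimal sketch:

        #!/bin/bash
        # sketch only: force each terminal into its own process and start both immediately
        gnome-terminal --disable-factory --command "./left.sh" &
        gnome-terminal --disable-factory --command "./right.sh" &
        wait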

    Read the article

  • Screen problems

    - by Erick
    I struggled a lot to install Ubuntu because of a problem with the video driver: after installing the driver and turning the machine on, all I get is a black screen. What worked for me was running sudo setpci -s 00:02.0 F4.B=0 from the terminal, which brings the screen back up, but the change does not survive a reboot and I have to run the command again. My question is: how can I make the change permanent, so that I don't have to execute that command every time I shut down and start the machine? (The question was originally asked in Spanish; the text above is a translation.)
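
    A common way to run a command like this once at every boot (a sketch, assuming the classic /etc/rc.local mechanism is present and executable on this release) is to place it just before the final exit 0:

        #!/bin/sh -e
        # /etc/rc.local -- runs once at the end of each multiuser boot
        setpci -s 00:02.0 F4.B=0
        exit 0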

    Read the article

  • Having trouble locating charms - pointing to store.juju.ubuntu.com? DNS error

    - by dt2511
    I can't seem to work out how to get juju to point at the right source for charms. On a base install, issuing the command

        juju deploy --repository=examples mysql

    yields:

        DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -2] Name or service not known.
        2011-10-12 18:38:39,946 ERROR DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -2] Name or service not known.

    When I try to run it with juju deploy --repository=examples local:mysql I get this error instead:

        Charm 'local:oneiric/mysql' not found in repository /root/juju/examples
        2011-10-12 18:53:57,311 ERROR Charm 'local:oneiric/mysql' not found in repository /root/juju/examples

    I've put the charm itself in the directory /root/juju/examples and am running the command from /root/juju. What is wrong?
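
    Judging from the second error message, juju expects a local charm to live under a series-named subdirectory of the repository rather than at its top level. A sketch of the layout it appears to be looking for (the oneiric series name is taken from the error; the source path of the charm is hypothetical):

        mkdir -p /root/juju/examples/oneiric
        cp -r /path/to/mysql-charm /root/juju/examples/oneiric/mysql
        cd /root/juju
        juju deploy --repository=examples local:oneiric/mysql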

    Read the article

  • How to fully remove a package?

    - by user471011
    I installed Eclipse with the command apt-get install eclipse and it completed correctly. After that I ran Eclipse and added some configuration: a new URL under "Available Software Sites". Next I removed Eclipse with apt-get remove eclipse and installed it again, and here was the surprise: the freshly installed Eclipse still shows my old URL under "Available Software Sites". So I guess the configuration files were not removed. After this I tried different commands, something like:

        sudo dpkg -r eclipse
        sudo apt-get --purge remove eclipse
        sudo apt-get autoremove

    but after installing Eclipse again I still see my URL. So, how can I fully remove Eclipse together with its configuration files?
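
    Part of the explanation may be that dpkg and apt only manage the files a package installed; settings created at run time under the user's home directory are never touched by remove or purge. A sketch of a fuller cleanup (the ~/.eclipse path is an assumption about where this Eclipse build keeps its per-user state):

        sudo apt-get purge eclipse
        sudo apt-get autoremove --purge
        # per-user settings live outside the package system and have to be removed by hand
        rm -rf ~/.eclipse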

    Read the article

  • tail stops displaying in case of a log rotation

    - by Rudy Vissers
    I have to tail the log of a server (servicemix), and log rotation is enabled. As soon as the rotation happens, tail stops displaying. I did some investigation and it is a bug in Debian: Debian Bug Report. The bug has been around for a long time. Does anyone know whether this bug is going to be fixed in Ubuntu? I'm on Ubuntu 12.04 64-bit. I hardly need to mention that this bug is total hell! Every time it happens, I have to interrupt tail and run the command again.
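
    For what it is worth, GNU tail has a mode that follows the file name rather than the open file descriptor, which is meant to survive rotation (the log path below is illustrative, not taken from the question):

        # --follow=name --retry: reopen the file whenever it is rotated or recreated
        tail -F /var/log/servicemix/servicemix.log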

    Read the article

  • How to cut audio file with avconv?

    - by x-yuri
    I have a hard time trying to figure out how to cut a file with avconv. Here's the command I use: avconv -ss 52:13:49 -t 01:13:52 -i RR119Accessibility.wav RR119Accessibility-2.wav But it doesn't work. I get the whole file as a result. Well, almost the whole file. Somehow the resulting file has duration 1:16:31 instead of 1:17:23. Also I believe I executed this command in every possible way: with -ss and -t after -i, with -t specifying ending point, with mp3 files, with specifying audio codec, with ffmpeg. Am I doing it wrong?
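
    One detail that sometimes matters with avconv/ffmpeg is where the trim options sit relative to -i: before -i they act as input seeking, which is fast but can be coarse; after -i they apply to the decoded output and are more precise. A sketch using the same timestamps as above (not a verified fix for this particular file):

        # seek and duration as output options; the audio is re-encoded unless a codec copy is requested
        avconv -i RR119Accessibility.wav -ss 52:13:49 -t 01:13:52 RR119Accessibility-2.wav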

    Read the article

  • Why do I have man pages of commands that don't exist?

    - by Robert Vila
    What is panel-test-applets for? I don't know. But if I want to know, I type: $ whatis panel-test-applets The answer is: panel-test-applets (1) - display installed applets I have the man page too, and I can read it with $ man panel-test-applets. But there is no program with that name. Maybe the command is not very useful, but is its man page any more useful? Does anyone know how to install that program, or what its man page is for when you cannot execute the program? I can't even run $ panel-test-applets or $ panel-test-applets --help, which is the only thing its man page talks about!
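
    A way to find out which installed package ships the stray manual page, and whether the matching binary exists in a package that simply isn't installed (a sketch; apt-file is an extra package and needs an updated cache):

        man -w panel-test-applets                  # print the path of the man page itself
        dpkg -S "$(man -w panel-test-applets)"     # which installed package owns that file
        apt-file search bin/panel-test-applets     # which package, installed or not, ships the binary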

    Read the article

  • Scan problem - Fujitsu ScanSnap S1300i

    - by user214312
    I'm trying to install a ScanSnap S1300i scanner on Ubuntu 13.10, and it is still not working. My question: with a sudo scanimage -L command, I receive the response

        device `hpaio:/net/Photosmart_C5100_series?zc=HPBA915D' is a Hewlett-Packard Photosmart_C5100_series all-in-one
        device `epjitsu:libusb:002:011' is a FUJITSU ScanSnap S1300i scanner

    while with a plain scanimage -L command I receive only

        device `hpaio:/net/Photosmart_C5100_series?zc=HPBA915D' is a Hewlett-Packard Photosmart_C5100_series all-in-one

    What can be going wrong? This is probably part of the problem. Update: simple-scan does not work, while sudo simple-scan works. Thanks.
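
    The pattern of a device showing up under sudo but not as a normal user usually points at USB device permissions. A sketch of the checks one might run (the scanner group is the usual Debian/Ubuntu convention for SANE access, but an assumption for this particular setup; the bus/device numbers come from the epjitsu line above):

        lsusb | grep -i fujitsu            # confirm the bus/device numbers, e.g. 002/011
        ls -l /dev/bus/usb/002/011         # see which owner and group can reach the device node
        sudo adduser "$USER" scanner       # then log out and back in for the new group to apply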

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:

    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type

    A request type can be one of the following categories:

    - Calculations, including the default calculation as well as both server and client side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually. Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
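
    The alter database MAXL grammar mentioned above is not shown in the article; the sketch below is my own illustration, run through the MaxL shell. The application and database names, credentials, host and timestamp are all placeholders, and the exact grammar should be checked against the MaxL reference for the release in use:

        essmsh <<'EOF'
        login admin identified by password on localhost;
        /* replay everything logged after the given point in time */
        alter database Sample.Basic replay transactions after '10_12_2011:18:00:00';
        logout;
        EOF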

    Read the article

  • Why can't I upgrade my kernel via the terminal?

    - by Alvar
    If I type sudo apt-get update && sudo apt-get upgrade, I can only see that the kernel packages are kept back and not installed, as the screenshot shows. If I then start the update manager I can install the kernel with no problems at all, as the second screenshot shows. Why is this? The kernel is a new package and not an upgrade of an old one, which is why you can't use the upgrade command, which only upgrades existing packages. You need to use the dist-upgrade command to install new packages.
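
    For completeness, a sketch of the two-step sequence described above (dist-upgrade is allowed to install new packages, and to remove obsolete ones, in order to satisfy changed dependencies, which plain upgrade never does):

        sudo apt-get update
        sudo apt-get dist-upgrade   # installs the new kernel packages that plain 'upgrade' holds back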

    Read the article

  • Error when adding indicator in quickly

    - by tachyons
    I just started a new project using quickly, and everything worked perfectly. Then I decided to add an indicator to my program, using the command quickly add indicator. After that my project stopped working; quickly run shows the following error:

        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwargs)
        Traceback (most recent call last):
          File "bin/mytube", line 33, in <module>
            mytube.main()
          File "/home/aboobacker/mytube/mytube/__init__.py", line 33, in main
            window = MytubeWindow.MytubeWindow()
          File "/home/aboobacker/mytube/mytube_lib/Window.py", line 35, in __new__
            new_object.finish_initializing(builder)
          File "/home/aboobacker/mytube/mytube/MytubeWindow.py", line 24, in finish_initializing
            super(MytubeWindow, self).finish_initializing(builder)
          File "/home/aboobacker/mytube/mytube_lib/Window.py", line 75, in finish_initializing
            self.indicator = indicator.new_application_indicator(self)
          File "/home/aboobacker/mytube/mytube/indicator.py", line 52, in new_application_indicator
            ind = Indicator(window)
          File "/home/aboobacker/mytube/mytube/indicator.py", line 20, in __init__
            self.indicator = AppIndicator3.Indicator('mytube', '', AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
        TypeError: GObject.__init__() takes exactly 0 arguments (3 given)

    How do I fix it?

    Read the article

  • Problems creating a debdiff

    - by Chris Wilson
    I'm following this guide to create a debdiff for a package I'm patching. Everything goes fine until step number 8 and I attempt to create the debdiff after committing the changes. The package in question is Zim, pulled form Launchpad using bzr branch lp:zim and according to this guide I should execute the following command to create the debdiff: debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff however, when I actually try to execute this command, I get the following error: debdiff: fatal error at line 314: Can't read file: zim_0.49.dsc Upon inspection of the directory in which the files created from debuild -S (step 6) are deposited, I find zim_0.49ubuntu1_source.changes zim_0.49ubuntu1.dsc zim_0.49ubuntu1.tar.gz zim_0.49ubuntu1_source.build but no sign of zim_0.49.dsc. I could probably create one by debuilding the package as soon as I check out the code, before starting work, but that would add an extraneous entry in the changelog. Is there a step missing from the guide that creates zim_0.49.dsc or is the file itself missing from the source?
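
    One way to obtain the original .dsc to diff against, without adding an extra changelog entry, is to download the source package as currently published (a sketch; pull-lp-source comes from the ubuntu-dev-tools package, and the exact published version string may differ from plain 0.49):

        pull-lp-source zim 0.49        # fetches zim_0.49*.dsc and the tarball into the current directory
        debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff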

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
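
    As a concrete illustration of the mechanism described above (my own sketch, with illustrative object and library names, not a line taken from the OSnet makefiles): point -L at the workspace proto areas, make the warnings fatal, and exempt only the libraries that legitimately come from elsewhere, such as libc in the article's own example.

        # sketch: any dependency other than libc.so that resolves from the default
        # /lib or /usr/lib paths now causes a fatal link error
        # ($ROOT stands in for the workspace proto area; foo.o, bar.o and -lexample are placeholders)
        ld -G -o libfoo.so.1 foo.o bar.o \
            -L$ROOT/lib -L$ROOT/usr/lib \
            -z assert-deflib=libc.so -z fatal-warnings -lc -lexample
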
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • 127.0.0.1:9051 doesn't work after apache, mysql, php installation?

    - by Rana Muhammad Waqas
    I have installed apache2, mysql, and php, and now Vidalia won't run on localhost. I tried to change the TCP connection (ControlPort) to another IP, 192.168.0.40, and tried to change the default port 9051 to another one, but that doesn't work. I thought apache was the problem, so I used the command sudo service apache2 stop, but that still doesn't work. Now when I type 127.0.0.1:9051 in the browser it shows an error, and if I type only 127.0.0.1 after stopping the apache2 service with the command mentioned above, it says "unable to connect". I am not sure what to do now. Help!
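
    As a first check, it may help to see what, if anything, is actually listening on the control port, and whether the tor service Vidalia talks to is running (a sketch; the assumption here is that Vidalia connects to the ControlPort of a system-wide tor instance):

        sudo netstat -tlnp | grep 9051   # or: sudo ss -tlnp | grep 9051
        sudo service tor status
        sudo service tor restart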

    Read the article

  • Function inside .profile results in no log-in

    - by bioShark
    I've created a custom function in my .profile, added right at the bottom, after my custom aliases:

        # custom functions
        function eclipse-gtk {
            cd ~/development/eclipse-juno
            ./eclipse_wb.sh &
            cd -
        }

    The function starts a custom version of my Eclipse. After adding it, because I didn't want to log out and back in, I reloaded my profile with the command . ~/.profile, then tested the function by calling eclipse-gtk, and it worked without any issue. Today when I booted, I couldn't log in: after providing my password, within a few seconds I was back at the log-in screen. Dropping to the command line using Ctrl + Alt + F1, I commented out the function in my .profile and logging in was possible again without any issue. My question is: what did I do wrong when I wrote the function? And if there is something wrong, why did it work yesterday after reloading the profile? Thanks in advance. Using: Ubuntu 12.04.
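
    A possible explanation (my reading, not confirmed in the question): the graphical log-in on Ubuntu sources ~/.profile with /bin/sh (dash), and both the function keyword and a hyphen in a function name are bashisms that dash rejects, so the profile aborts and the session never starts; it worked interactively because the reload was done in bash. A POSIX-friendly sketch of the same function:

        # portable form: no 'function' keyword, no '-' in the name, and a subshell so
        # the caller's working directory is left untouched
        eclipse_gtk() {
            (cd ~/development/eclipse-juno && ./eclipse_wb.sh &)
        }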

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table as a nullable column or with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        GO
        --Drop the database if it exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
            DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if it exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
            DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         DateTime1 DATETIME DEFAULT GETDATE(),
         DateTime2 DATETIME DEFAULT GETDATE(),
         DateTime3 DATETIME DEFAULT GETDATE(),
         DateTime4 DATETIME DEFAULT GETDATE(),
         Gendar CHAR(1) DEFAULT 'M',
         STATUS1 CHAR(1) DEFAULT 'Y'
        )
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a Column

    Before adding a column, let us look at some of the details of the database.

        DBCC IND (AddColumn,ExistingTable,1)

    By running the above query, you will see 637 pages for the created table.

    Adding a Column

    You can add a column to the table with the following statement.

        ALTER TABLE ExistingTable Add NewColumn INT NULL

    The above will add a column with a null value for the existing records. Alternatively, you could add a column with a default value.

        ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1

    The above statement will add a column with the value 1 for the existing records. In the table below I measured the performance difference between the two statements.

        Parameter   Nullable Column   Default Value
        CPU         31                702
        Duration    129 ms            6653 ms
        Reads       38                116,397
        Writes      6                 1329
        Row Count   0                 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it takes more duration and CPU to add a column with a default value. We can verify this by several methods.

    Number of Pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk".

        DBCC IND (AddColumn,ExistingTable,1)

        Before Adding the Columns            637
        Adding a Column with NULL            637
        Adding a column with DEFAULT value   1270

    This clearly shows that pages are physically modified. Please note that the high value for the "Adding a column with DEFAULT value" case is also a result of page splits. Continues…

    Read the article

  • Can I force window to open on top of other windows when opened by keyboard shortcut?

    - by Rasmus
    I use SpaceFM as my primary file manager on Ubuntu. I typically open folders directly with keyboard shortcuts, so, e.g., Ctrl+Super+W opens my Work folder. Specifically, the above shortcut executes the command spacefm -w /home/rasmus/Work/, with the -w ensuring that SpaceFM opens a new window. However, this new window does not always open on top of the last active window on the workspace. This is rather annoying, as it means I sometimes have to "dig" for the newly opened window. So, my question is: is there something additional I can add to the executed command that will ensure the fresh window opens on top? Alternative solutions to the same effect are welcome.
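
    One workaround along the "alternative solutions" line (a sketch, assuming the wmctrl utility is installed and that the new window's title contains the folder name) is to raise the window explicitly right after launching it:

        #!/bin/sh
        # bound to Ctrl+Super+W instead of the bare spacefm command
        spacefm -w /home/rasmus/Work/ &
        sleep 1
        wmctrl -a "Work"   # activate/raise the first window whose title matches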

    Read the article

  • How to generate xorg.conf? (X -configure segfaults)

    - by Nicolas Raoul
    My video card is working fine, I have no screen problem. I am trying to generate an xorg.conf so I did: [ Logout ] sudo service gdm stop [ Move away xorg.conf.back and xorg.conf.fglrx-0 that were in /etc/X11 ] sudo dpkg-reconfigure xserver-xorg sudo X -configure But this last command segfaults: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux nico 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 20:26:08 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-25-generic root=UUID=7447ab16-3406-442d-81e5-bb6a2d795205 ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Fri Oct 15 16:06:11 2010 List of video drivers: i740 ark geode siliconmotion mach64 s3 r128 apm intel neomagic vesa trident chips s3virge fglrx sis savage rendition i128 tseng ztv mga openchrome radeon ati nv v4l vmware cirrus tdfx nouveau sisusb voodoo fbdev (EE) Can't load FireGL DRM library (libfglrxdrm.so). Backtrace: 0: X (xorg_backtrace+0x3b) [0x80e938b] 1: X (0x8048000+0x61c8d) [0x80a9c8d] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0x34d410] 3: X (xf86CallDriverProbe+0x182) [0x80b82d2] 4: X (DoConfigure+0x1c8) [0x816b898] 5: X (InitOutput+0x1da) [0x80b98aa] 6: X (0x8048000+0x1ebbb) [0x8066bbb] 7: /lib/tls/i686/cmov/libc.so.6 (__libc_start_main+0xe6) [0x467bd6] 8: X (0x8048000+0x1e961) [0x8066961] Segmentation fault at address (nil) Caught signal 11 (Segmentation fault). Server aborting Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Note the line Can't load FireGL DRM library (libfglrxdrm.so) Note: I do have file /usr/lib/fglrx/xorg/modules/linux/libfglrxdrm.so It is strange that it segfaults whereas I can use Gnome with no problem but well... Might be related: I tried to install the driver from ATI's website recently, and from then glxgears crashes at start. How can I generate xorg.conf in those conditions? It might or might not involve solving the segfault problem.
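
    Since the fglrx driver is installed here, an alternative to X -configure that sidesteps the segfaulting probe (my suggestion; it assumes the Catalyst control utility shipped with the driver is present) is to let the driver generate the configuration itself:

        sudo aticonfig --initial          # writes a fresh /etc/X11/xorg.conf for the fglrx driver
        sudo aticonfig --initial -f       # same, but overwrite an existing xorg.conf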

    Read the article

  • When is an object oriented program truly object oriented?

    - by Syed Aslam
    Let me try to explain what I mean: say I present a list of objects and I need to get back the object a user selects. The following are the classes I can think of right now: ListViewer, Item, App [the calling class]. In a GUI application, clicking a particular item usually selects it; in a command-line application, some input, say an integer, represents that item. Let us go with a command-line application here. A function lists all the items and waits for the choice of object, an integer. So here I get the choice; is the choice itself to be conceived of as an object? And based on the choice, the corresponding object in the list is returned. Does writing the program the way explained above make it truly object oriented? If yes, how? If not, why? Or is the question itself wrong, and I shouldn't be thinking along those lines?

    Read the article

  • gnome-terminal and logging

    - by UAdapter
    Is there any way to log everything that was displayed in gnome-terminal? For example, I have a complex command: doSomethingThatPrintoutsAlot ; doSomethingThatPrintoutsAlot2 ; doSomethingThatPrintoutsAlot3 I can add > file, but then I would have to do it for each command, and I would have to use tail in another console to see the output. Maybe gnome-terminal supports logging everything? There is .bash_history, so .... it might also support this.
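
    Two standard approaches, neither specific to gnome-terminal (a sketch; the file names are illustrative): the script utility records a whole session, and tee captures one pipeline while still showing it on screen.

        script ~/session.log     # everything displayed from now until 'exit' goes into the file
        doSomethingThatPrintoutsAlot 2>&1 | tee -a ~/run.log   # log one command, keep watching it live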

    Read the article

  • How to move files over samba share with gnomevfs cli

    - by Allan
    OK, I am in the process of backing up my film collection to a NAS, and I want to automate this as much as possible as I have to work at the same time. I am trying to set up a daily dump of ISOs ready to be converted overnight, and I would like to do this as a cron job using gnomevfs. I have been able to connect and do an ls successfully with gnomevfs-ls smb://user:WORKGROUP:password@media-centre/videos/ but I am having trouble setting up a mv command from a local folder to the same shared folder; I keep getting the Usage: gnomevfs-mv <from> <to> message, which isn't particularly informative ;) Any ideas?
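
    A sketch of what the move might look like with the same share and credentials as the working gnomevfs-ls call (the local path is illustrative, and it assumes gnomevfs-mv accepts a plain local path as the source; if not, prefix it with file://). gnomevfs-mv takes exactly one source and one destination URI, so wildcards have to be expanded by the shell and the files passed one at a time:

        for f in /home/allan/rips/*.iso; do
            gnomevfs-mv "$f" "smb://user:WORKGROUP:password@media-centre/videos/$(basename "$f")"
        done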

    Read the article

  • How to reinstall many removed packages at once?

    - by Logan
    I used the command sudo apt-get remove python and accidentally removed a bunch of packages that were required. I logged in via the command line and installed ubuntu-desktop again, but there are other packages that are still missing, and I'm looking for a way to easily reinstall those removed packages. Since there's a log in the Software Center, I wanted to ask what the easiest way might be to roll back the changes or extract the list of removed packages from the Software Center... Note: I typed sudo apt-get install .... .... ... ... for about two dozen of the removed programs in that list, but when I pressed Enter it didn't install any of them because some package names couldn't be found. The programs were all removed on the same date.
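
    Besides the Software Center log, apt keeps its own history of every transaction, including the full list of packages each remove took out, which can be used to rebuild the install command (a sketch; the grep pattern is approximate and the package names on the last line are placeholders for whatever the Remove: line actually lists):

        grep -B1 -A3 'apt-get remove python' /var/log/apt/history.log   # find the transaction and its Remove: line
        # then reinstall, pasting the names from that Remove: line, for example:
        sudo apt-get install ubuntu-desktop nautilus gnome-terminal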

    Read the article

  • How To Specify Bitrate, Codec and Demultiplexing for VLC Video Capture or Recording

    - by Subhash
    I capture video from an old TV tuner card - a Pinnacle PCTV - using VLC. The video comes from the Composite input and the audio, I guess, from the mixer or Line in. The command I use is: vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 :input-slave="alsa://hw:0,0" In VLC, I have enabled the Advanced Controls toolbar, which allows me to record videos when I want to. However, these videos are uncompressed - very big, and they play only in VLC. Totem throws the "Could not demultiplex stream" error. I need to convert them using WinFF to reduce their size and make them playable with Totem and other software. My question is whether I can configure the recording settings - the codecs and the bitrate - and also get the stream demultiplexed. If I pass any -sout parameter with the command I get a "Segmentation fault". I use 64-bit Ubuntu 10.10.
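
    For reference, the usual way to ask VLC to transcode and mux while recording is a --sout chain appended to the capture command. This is a sketch only (codec names, bitrates and the output file are illustrative, and module availability differs between VLC builds), especially since the question reports that adding a sout parameter currently segfaults on this setup:

        vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 \
            :input-slave="alsa://hw:0,0" \
            --sout '#transcode{vcodec=mp4v,vb=1800,acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=capture.mp4}'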

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version. You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant they had to back out of the upgrade and wait for a couple of hotfixes.

    - KB983504 – Hotfix
    - KB983578 – Patch
    - KB2401992 – Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around. It takes time to do the upgrade and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available “just in case”. They did not expect any problems, as they already had 6 successful trial upgrades. The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not

    This was a dire situation, as 20+ hours to repeat would leave the customer over time with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"

    - http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command! Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message “Re-Attaching”. Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded “tfsconfig restore”, but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema.

        KB          Schema   Successful
        KB983504    341      Yes
        KB983578    344      sort of
        KB2401992   360      nope

    Figure: KB, Schema table with notation to its success

    The Schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward

    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: the removal of the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the “tfsconfig recover” command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick

    Figure: Changing the status and dropping the version should do it

    Now that we have done that, we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, “hosed”. There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer and made them aware that in all likelihood the repaired database was more like a “cut and shut” than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the “cut and shut” to production and hope for the best?

    Read the article

  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today to version 6.24-1build0.10.10.1 after being prompted by the update manager. However this now causes some compilation failures so I'd like to revert back to the previous version that I had installed. I've tried using Synaptic but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10 But I'm not sure that I have the correct version: Reading package lists... Done Building dependency tree Reading state information... Done E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 change log?
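
    A quick way to see which versions apt can actually install right now, before guessing at the exact version string (a sketch; versions that have already left the enabled archives cannot be installed this way at all):

        apt-cache policy sun-java6-jdk      # shows the installed and candidate versions plus every version in the enabled sources
        apt-cache madison sun-java6-jdk     # same information, one line per available version and source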

    Read the article
