Search Results

Search found 8408 results on 337 pages for 'cgi bin'.

Page 228/337 | < Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >

  • hProduct microformats not working in Google

    - by silverfox
    I'm trying to work with hProduct and was testing it with Google's testing tool for microformats (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data: it does not recognize the photo, the price, or the category; it only recognizes the rating. HTML: <div class="hproduct"> <span class="brand">ACME</span> <span class="fn">Executive Anvil</span> <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" /> <span class="review hreview-aggregate"> Average rating: <span class="rating">4.4</span>, based on <span class="count">89 </span> reviews </span> Regular price: $179.99 Sale: $<span class="price">119.99</span> (Sale ends 5 November!) <span class="description">Sleeker than ACME's Classic Anvil, the Executive Anvil is perfect for the business traveler looking for something to drop from a height.</span> Category: <span class="category"> <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span> </span> </div> The tool still shows this warning: "warning: In order to generate a preview with rich snippets, either price or review or availability needs to be present." I used Google's own example: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036 I also tested the example from microformats.org: http://microformats.org/wiki/google-rich-snippets-examples

    Read the article

  • How can I install Celtx 2.9.7 properly on Ubuntu 12.04 LTS?

    - by cruxfilm
    I am new to Ubuntu and Linux and I want to install and use the newest version of the screenwriting software Celtx on Ubuntu 12.04 LTS. After trying https://answers.launchpad.net/ubuntu/+source/ubiquity/+question/206295 and using sudo add-apt-repository ppa:dreamstudio/video sudo apt-get update sudo apt-get install celtx I unfortunately had to find out that it was a rather old version with a fairly screwed up UI. I then downloaded the newest version from http://download.celtx.com/2.9.7/Celtx-2.9.7-64.tar.bz2 but now I don't know how to properly install it. I extracted it to /home/username/ (there was no ~/bin/) as described here, and I can now launch the application by running the file celtx within that folder (I get asked whether I want to display it, run it, or run it in a terminal) and it works fine. But I can't get it to launch from Unity. I tried right-clicking the launcher button and choosing "Lock to Launcher" while it's running, and it does create an icon, but clicking it to launch the program does nothing. Also, searching for celtx in the Dash doesn't find the app. Any advice on how to properly install Celtx 2.9.7 on Ubuntu 12.04 LTS?
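    One way to make a tarball-installed application show up in the Dash and Launcher is to write a .desktop entry for it by hand. A minimal sketch, assuming the archive was extracted to /home/username/celtx (the Exec and Icon paths are assumptions; point them at wherever the celtx binary and an icon actually live):

      # file: ~/.local/share/applications/celtx.desktop  (paths are assumptions)
      [Desktop Entry]
      Type=Application
      Name=Celtx
      Comment=Screenwriting software
      Exec=/home/username/celtx/celtx
      Icon=/home/username/celtx/chrome/icons/default/default48.png
      Terminal=false
      Categories=Office;

    After saving the file (and logging out and back in if the Dash does not pick it up immediately), Celtx should be searchable from the Dash and can be locked to the Launcher like any packaged application.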

    Read the article

  • Why do I need to create a bios-grub partition when I install 12.04?

    - by raj
    Is the bios-grub partition in Ubuntu 12.04 mandatory? I have used 11.04, 11.10 and 12.04, but I was never asked for this. Today I tried a fresh installation of Ubuntu 12.04 and for the first time I was asked for this GRUB partition of minimum 1 MB. I first tried to reinstall 12.04, but the error continued. So I installed Fedora 16, keeping all partitions as they were (replaced Ubuntu with Fedora), and then did another fresh installation of 12.04. Is it OK to continue with this grub partition, or is there a fault in my system's hardware? If this is a (hardware) fault, how can I fix it? I'm using a Lenovo S10-2 Ideapad. The only OS installed right now is Ubuntu 12.04. Edit: let me add some detail. It was a /usr/bin/xorg issue that I had with the first Precise install. I used Fedora 16 basically to remove Precise completely (my experience tells me Ubuntu can't completely erase and reinstall itself). This 1 MB grub partition was created by Fedora. I then wanted to remove it while reinstalling Ubuntu but got a caution that the bootloader may fail, hence I have kept this 1 MB partition. But prior to yesterday I used both Fedora and Ubuntu, even the same CDs, and had no such partition. My question is whether this partition is necessary or not. If not, how can I safely remove it from my system? I am using only Ubuntu 12.04 -- before and after (now).
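    This usually comes down to the partition table type rather than to faulty hardware: when GRUB is installed on a GPT-partitioned disk on a machine booting in BIOS (non-UEFI) mode, it needs a small bios_grub partition to embed its core image, while a traditional MS-DOS/MBR table does not. Fedora's installer may have rewritten the disk with a GPT label, which would explain why the partition suddenly appeared. A quick hedged check (assuming the install disk is /dev/sda):

      # shows "Partition Table: gpt" or "Partition Table: msdos", plus any bios_grub flag
      sudo parted /dev/sda print

    If the table is GPT, keep the 1 MB bios_grub partition; removing it would leave GRUB nowhere to embed itself, and the installer's warning about the bootloader failing would come true.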

    Read the article

  • I can't uninstall Ubuntu software

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: fern-wifi-cracker 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 3,514kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 167661 files and directories currently installed.) Removing fern-wifi-cracker ... dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error dpkg: error processing fern-wifi-cracker (--remove): subprocess installed pre-removal script returned error exit status 2 Errors were encountered while processing: fern-wifi-cracker E: Sub-process /usr/bin/dpkg returned an error code (1) how to uninstall?
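    The "Exec format error" means dpkg cannot execute the package's pre-removal script, so the removal never completes. A common workaround (a sketch, not an official fix) is to replace the broken prerm script with a no-op and let dpkg finish:

      # neutralize the broken pre-removal script named in the error above, keeping a backup
      sudo mv /var/lib/dpkg/info/fern-wifi-cracker.prerm /var/lib/dpkg/info/fern-wifi-cracker.prerm.bak
      printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/fern-wifi-cracker.prerm
      sudo chmod 755 /var/lib/dpkg/info/fern-wifi-cracker.prerm
      sudo apt-get remove fern-wifi-cracker
      sudo apt-get autoremove

    Any cleanup the original prerm would have done may be skipped, so treat this as a last resort when the script itself is what is broken.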

    Read the article

  • Xubuntu 14.04 with Compton, strange screen tearing, only when playing videos though (advice needed)

    - by LinuxDudester
    Hello beloved community, yet again I am in need of your great expertise. I ran into a very strange issue and just can't wrap my mind around it. I'm running Xubuntu 14.04 exclusively, with Compton installed. The OS runs great and I have absolutely no screen tearing when I move my windows around, scroll in my web browser, work in Gimp or Photoshop (Wine), or even when I play very graphically demanding games like Metro Last Light, Euro Truck Driver 2 and so on. There's not a tiny bit of tearing to see, but as soon as I play videos in XBMC, VLC or Parole Media Player the tearing begins (strangely enough this does not apply to YouTube videos). I followed all available workarounds on Ask Ubuntu and the Ubuntu forum, like the 50-xserver-command.conf, startx /etc/X11/Xsession /usr/bin/xbmc-standalone -- -bs, the libsdl1.2debian fix and many more, but to no avail. I also tried the open-source Nouveau display drivers, but for some odd reason they don't work so great on my system, or at least with my graphics card. Even with Compton installed and configured I have an extreme amount of screen tearing; as soon as I switch to the proprietary Nvidia drivers the screen tearing is gone completely, except for video playback with XBMC, VLC or Parole Media Player. System info for your reference: OS: Xubuntu 14.04 Linux-x86_64 - Processor: Intel Core i7-4770S CPU @ 3.10GHz - Ram: 16 GB - GeForce GT 750M 1024 MB - Nvidia Driver: 331.38 Has anyone experienced such an odd issue or do you have any advice on how I could fix this? I would appreciate any help! Have a nice day!
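    Tearing that shows up only during video playback is often the compositor not enforcing vsync on the (possibly unredirected) video window. A hedged sketch of a Compton configuration that is frequently suggested for the proprietary Nvidia driver (the exact option values are assumptions to experiment with, not a verified fix):

      # ~/.config/compton.conf
      backend = "glx";
      vsync = "opengl-swc";
      paint-on-overlay = true;
      glx-no-stencil = true;
      unredir-if-possible = false;

    Restart the compositor afterwards (for example, compton -b --config ~/.config/compton.conf) and test the same video; if the player's own video output fights the compositor, switching VLC's output module to OpenGL/GLX can also be worth a try.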

    Read the article

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem. The "Disk Usage Analyser" is opened. In the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB available: 33.4 GB) Folder -- Usage -- Size / -- 100% -- 12.5 GB usr -- 44.8 % -- 5.6 GB home -- 30.3 % -- 3.8 GB lib -- 13.0 % -- 1.6 GB var -- 9.1 % -- 1.1 GB boot 2.5 % 309.5 GB and a lot of small contributors like: etc, opt, sbin, bin etc. I do not really understand this problem since the analyser in the top says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda7 610800 576874 33926 95% / udev 213451 563 212888 1% /dev tmpfs 218524 486 218038 1% /run none 218524 3 218521 1% /run/lock none 218524 7 218517 1% /run/shm /dev/sda8 2264752 16371 2248381 1% /home The output of df -h Filesystem Size Used Avail Use% Mounted on /dev/sda7 9,3G 7,8G 1,1G 88% / udev 993M 4,0K 993M 1% /dev tmpfs 401M 884K 400M 1% /run none 5,0M 0 5,0M 0% /run/lock none 1003M 152K 1002M 1% /run/shm /dev/sda8 35G 4,0G 29G 13% /home /dev/sda2 101G 64G 37G 64% /media/A2C8E28BC8E25CD3 Running sudo fdisk -l gives Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000080 Device Boot Start End Blocks Id System /dev/sda1 63 96389 48163+ de Dell Utility /dev/sda2 * 98304 210434488 105168092+ 7 HPFS/NTFS/exFAT /dev/sda3 210436094 312576704 51070305+ f W95 Ext'd (LBA) /dev/sda5 306279288 312576704 3148708+ dd Unknown /dev/sda6 210436096 214341631 1952768 82 Linux swap / Solaris /dev/sda7 214343680 233873407 9764864 83 Linux /dev/sda8 233875456 306278399 36201472 83 Linux Partition table entries are not in disk order
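    The df -i output is the real clue: /dev/sda7 is at 95% inode usage, so the root filesystem can report "out of space" even though df -h still shows free blocks. A hedged way to find which top-level directories hold the most files (and therefore consume the most inodes):

      # count files per top-level directory, staying on the root filesystem only
      sudo find / -xdev -type f | cut -d/ -f2 | sort | uniq -c | sort -rn | head

    Typical culprits are old kernels and their headers (many small files under /usr and /lib/modules) and caches under /var; removing unused linux-image and linux-headers packages usually frees a large number of inodes and lets the upgrade proceed.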

    Read the article

  • Automatically create bug resolution task using the TFS 2010 API

    - by Bob Hardister
    My customer requires bug resolution to be approved and tracked.  To minimize the overhead for developers I implemented a TFS 2010 server-side plug-in to automatically create a child resolution task for the bug when the “CCB” field is set to approved. The CCB field is a custom field.  I also added the story points field to the bug WIT for sizing purposes. Redundant tasks will not be created unless the bug title is changed or the prior task is closed. The program writes an audit trail to a log file visible in the TFS Admin Console Log view. Here’s the code. BugAutoTask.cs /* SPECIFICATION * When the CCB field on the bug is set to approved, create a child task where the task: * name = Resolve bug [ID] - [Title of bug] * assigned to = same as assigned to field on the bug * same area path * same iteration path * activity = Bug Resolution * original estimate = bug points * * The source code is used to build a dll (Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll), * which needs to be copied to * C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins * on ALL TFS application-tier servers. * * Author: Bob Hardister. */ using System; using System.Collections.Generic; using System.IO; using System.Xml; using System.Text; using System.Diagnostics; using System.Linq; using Microsoft.TeamFoundation.Common; using Microsoft.TeamFoundation.Framework.Server; using Microsoft.TeamFoundation.WorkItemTracking.Client; using Microsoft.TeamFoundation.WorkItemTracking.Server; using Microsoft.TeamFoundation.Client; using System.Collections; namespace BugAutoTaskCreation { public class BugAutoTask : ISubscriber { public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties) { statusCode = 0; properties = null; statusMessage = String.Empty; // Error message for for tracing last code executed and optional fields string lastStep = "No field values found or set "; try { if ((notificationType == NotificationType.Notification) && (notificationEventArgs.GetType() == typeof(WorkItemChangedEvent))) { WorkItemChangedEvent workItemChange = (WorkItemChangedEvent)notificationEventArgs; // see ConnectToTFS() method below to select which TFS instance/collection // to connect to TfsTeamProjectCollection tfs = ConnectToTFS(); WorkItemStore wiStore = tfs.GetService<WorkItemStore>(); lastStep = lastStep + ": connection to TFS successful "; // Get the work item that was just changed by the user. WorkItem witem = wiStore.GetWorkItem(workItemChange.CoreFields.IntegerFields[0].NewValue); lastStep = lastStep + ": retrieved changed work item, ID:" + witem.Id + " "; // Filter for Bug work items only if (witem.Type.Name == "Bug") { // DEBUG lastStep = lastStep + ": changed work item is a bug "; // Filter for CCB (i.e. 
Baseline Status) field set to approved only bool BaselineStatusChange = false; if (workItemChange.ChangedFields != null) { ProcessBugRevision(ref lastStep, workItemChange, wiStore, ref witem, ref BaselineStatusChange); } } } } catch (Exception e) { Trace.WriteLine(e.Message); Logger log = new Logger(); log.WriteLineToLog(MsgLevel.Error, "Application error: " + lastStep + " - " + e.Message + " - " + e.InnerException); } statusCode = 1; statusMessage = "Bug Auto Task Evaluation Completed"; properties = null; return EventNotificationStatus.ActionApproved; } // PRIVATE METHODS private static void ProcessBugRevision(ref string lastStep, WorkItemChangedEvent workItemChange, WorkItemStore wiStore, ref WorkItem witem, ref bool BaselineStatusChange) { foreach (StringField field in workItemChange.ChangedFields.StringFields) { // DEBUG lastStep = lastStep + ": last changed field is - " + field.Name + " "; if (field.Name == "Baseline Status") { lastStep = lastStep + ": retrieved bug baseline status field value, bug ID:" + witem.Id + " "; BaselineStatusChange = (field.NewValue != field.OldValue); if ((BaselineStatusChange) && (field.NewValue == "Approved")) { // Instanciate logger Logger log = new Logger(); // *** Create resolution task for this bug *** // ******************************************* // Get the team project and selected field values of the bug work item Project teamProject = witem.Project; int bugID = witem.Id; string bugTitle = witem.Fields["System.Title"].Value.ToString(); string bugAssignedTo = witem.Fields["System.AssignedTo"].Value.ToString(); string bugAreaPath = witem.Fields["System.AreaPath"].Value.ToString(); string bugIterationPath = witem.Fields["System.IterationPath"].Value.ToString(); string bugChangedBy = witem.Fields["System.ChangedBy"].OriginalValue.ToString(); string bugTeamProject = witem.Project.Name; lastStep = lastStep + ": all mandatory bug field values found "; // Optional fields Field bugPoints = witem.Fields["Microsoft.VSTS.Scheduling.StoryPoints"]; if (bugPoints.Value != null) { lastStep = lastStep + ": all mandatory and optional bug field values found "; } // Initialize child resolution task title string childTaskTitle = "Resolve bug " + bugID + " - " + bugTitle; // At this point I can check if a resolution task (of the same name) // for the bug already exist // If so, do not create a new resolution task bool createResolutionTask = true; WorkItem parentBug = wiStore.GetWorkItem(bugID); WorkItemLinkCollection links = parentBug.WorkItemLinks; foreach (WorkItemLink wil in links) { if (wil.LinkTypeEnd.Name == "Child") { WorkItem childTask = wiStore.GetWorkItem(wil.TargetId); if ((childTask.Title == childTaskTitle) && (childTask.State != "Closed")) { createResolutionTask = false; log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID: " + bugID + ". 
Task not created as open one of the same name already exist, ID:" + childTask.Id); } } } if (createResolutionTask) { // Define the work item type of the new work item WorkItemTypeCollection workItemTypes = wiStore.Projects[teamProject.Name].WorkItemTypes; WorkItemType wiType = workItemTypes["Task"]; // Setup the new task and assign field values witem = new WorkItem(wiType); witem.Fields["System.Title"].Value = "Resolve bug " + bugID + " - " + bugTitle; witem.Fields["System.AssignedTo"].Value = bugAssignedTo; witem.Fields["System.AreaPath"].Value = bugAreaPath; witem.Fields["System.IterationPath"].Value = bugIterationPath; witem.Fields["Microsoft.VSTS.Common.Activity"].Value = "Bug Resolution"; lastStep = lastStep + ": all mandatory task field values set "; // Optional fields if (bugPoints.Value != null) { witem.Fields["Microsoft.VSTS.Scheduling.OriginalEstimate"].Value = bugPoints.Value; lastStep = lastStep + ": all mandatory and optional task field values set "; } // Check for validation errors before saving the new task and linking it to the bug ArrayList validationErrors = witem.Validate(); if (validationErrors.Count == 0) { witem.Save(); // Link the new task (child) to the bug (parent) var linkType = wiStore.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy]; // Fetch the work items to be linked var parentWorkItem = wiStore.GetWorkItem(bugID); int taskID = witem.Id; var childWorkItem = wiStore.GetWorkItem(taskID); // Add a new link to the parent relating the child and save it parentWorkItem.Links.Add(new WorkItemLink(linkType.ForwardEnd, childWorkItem.Id)); parentWorkItem.Save(); log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID:" + bugID + ", which automatically created child resolution task, ID:" + taskID); } else { log.WriteLineToLog(MsgLevel.Error, "Error in creating bug resolution child task for bug ID:" + bugID); foreach (Field taskField in validationErrors) { log.WriteLineToLog(MsgLevel.Error, " - Validation Error in task field: " + taskField.ReferenceName); } } } } } } } private TfsTeamProjectCollection ConnectToTFS() { // Connect to TFS string tfsUri = string.Empty; // Production TFS instance production collection tfsUri = @"xxxx"; // Production TFS instance admin collection //tfsUri = @"xxxxx"; // Local TFS testing instance default collection //tfsUri = @"xxxxx"; TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new System.Uri(tfsUri)); tfs.EnsureAuthenticated(); return tfs; } // HELPERS public string Name { get { return "Bug Auto Task Creation Event Handler"; } } public SubscriberPriority Priority { get { return SubscriberPriority.Normal; } } public enum MsgLevel { Info, Warning, Error }; public Type[] SubscribedTypes() { return new Type[1] { typeof(WorkItemChangedEvent) }; } } } Logger.cs using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Windows.Forms; namespace BugAutoTaskCreation { class Logger { // fields private string _ApplicationDirectory = @"C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs"; private string _LogFileName = @"\CFG_ACCT_AT_OWS_BugAutoTaskCreation.log"; private string _LogFile; private string _LogTimestamp = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss"); private string _MsgLevelText = string.Empty; // default constructor public Logger() { // check for a prior log file FileInfo logFile = new FileInfo(_ApplicationDirectory + _LogFileName); if (!logFile.Exists) { 
CreateNewLogFile(ref logFile); } } // properties public string ApplicationDirectory { get { return _ApplicationDirectory; } set { _ApplicationDirectory = value; } } public string LogFile { get { _LogFile = _ApplicationDirectory + _LogFileName; return _LogFile; } set { _LogFile = value; } } // PUBLIC METHODS public void WriteLineToLog(BugAutoTask.MsgLevel msgLevel, string logRecord) { try { // set msgLevel text if (msgLevel == BugAutoTask.MsgLevel.Info) { _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Warning) { _MsgLevelText = "[Warning @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Error) { _MsgLevelText = "[Error @" + MsgTimeStamp() + "] "; } else { _MsgLevelText = "[Error: unsupported message level @" + MsgTimeStamp() + "] "; } // write a line to the log file StreamWriter logFile = new StreamWriter(_ApplicationDirectory + _LogFileName, true); logFile.WriteLine(_MsgLevelText + logRecord); logFile.Close(); } catch (Exception) { throw; } } // PRIVATE METHODS private void CreateNewLogFile(ref FileInfo logFile) { try { string logFilePath = logFile.FullName; // write the log file header _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; string cpu = string.Empty; if (Environment.Is64BitOperatingSystem) { cpu = " (x64)"; } StreamWriter newLog = new StreamWriter(logFilePath, false); newLog.Flush(); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText + "Team Foundation Server Administration Log"); newLog.WriteLine(_MsgLevelText + "Version : " + "1.0.0 Author: Bob Hardister"); newLog.WriteLine(_MsgLevelText + "DateTime : " + _LogTimestamp); newLog.WriteLine(_MsgLevelText + "Type : " + "OWS Custom TFS API Plug-in"); newLog.WriteLine(_MsgLevelText + "Activity : " + "Bug Auto Task Creation for CCB Approved Bugs"); newLog.WriteLine(_MsgLevelText + "Area : " + "Build Explorer"); newLog.WriteLine(_MsgLevelText + "Assembly : " + "Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll"); newLog.WriteLine(_MsgLevelText + "Location : " + @"C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins"); newLog.WriteLine(_MsgLevelText + "User : " + Environment.UserDomainName + @"\" + Environment.UserName); newLog.WriteLine(_MsgLevelText + "Machine : " + Environment.MachineName); newLog.WriteLine(_MsgLevelText + "System : " + Environment.OSVersion + cpu); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText); newLog.Close(); } catch (Exception) { throw; } } private string MsgTimeStamp() { string msgTimestamp = string.Empty; return msgTimestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss:fff"); } } }

    Read the article

  • Can't update kernel to 2.6.35.27

    - by Uri Herrera
    When I try to update I get this message, I'm guessing I'm missing something here? Filesystem Type Size Used Avail Use% Mounted on /dev/sdb6 ext4 43G 7.7G 33G 20% / none devtmpfs 1.6G 349k 1.6G 1% /dev none tmpfs 1.6G 5.9M 1.6G 1% /dev/shm none tmpfs 1.6G 218k 1.6G 1% /var/run none tmpfs 1.6G 0 1.6G 0% /var/lock /dev/sdb2 fuseblk 258G 198G 60G 77% /media/Backup /dev/sda1 fuseblk 321G 175G 146G 55% /media/Media /dev/sdb1 ext4 96M 84M 6.7M 93% /boot /dev/sdb7 ext4 175G 81G 86G 49% /home Here's the output: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.35-22-generic 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 5 not fully installed or removed. After this operation, 107MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 282211 files and directories currently installed.) Removing linux-image-2.6.35-22-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic /etc/default/grub: 23: Syntax error: newline unexpected run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.35-22- generic.postrm line 328. dpkg: error processing linux-image-2.6.35-22-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.35-22-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
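    The removal fails because /etc/default/grub has a shell syntax error on line 23, which makes update-grub abort inside the kernel's post-removal script. A hedged recovery sketch (the file's contents vary per system, so the actual fix is whatever quoting or stray character is wrong on that line):

      # inspect the offending line, then repair it in an editor
      sed -n '20,26p' /etc/default/grub
      sudo nano /etc/default/grub
      # once the file parses, let grub and dpkg finish the pending work
      sudo update-grub
      sudo dpkg --configure -a
      sudo apt-get -f install

    After that, removing the old kernel image should complete normally.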

    Read the article

  • Add static ARP entries when network is brought up

    - by jozzas
    I have some pretty dumb IP devices on a subnet with my Ubuntu server, and the server receives streaming data from each device. I have run into a problem in that when an ARP request is issued to the device while it is streaming data to the server, the request is ignored, the cache entry times out and the server stops receiving the stream. So, to prevent the server from sending ARP requests to these devices altogether, I would like to add a static ARP entry for each, something like arp -i eth2 -s ip.of.the.device mac:of:the:device But these "static" ARP entries are lost if networking is disabled / enabled or if the server is rebooted. Where is the best place to automatically add these entries, preferably somewhere that will re-add them every time the interface eth2 is brought up? I really don't want to have to write a script that monitors the output of arp and re-adds the cache entries if they're missing. Edit: here is what my final script ended up being. I created the file /etc/network/if-up.d/add-my-static-arp with the contents: #!/bin/sh arp -i eth0 -s 192.168.0.4 00:50:cc:44:55:55 arp -i eth0 -s 192.168.0.5 00:50:cc:44:55:56 arp -i eth0 -s 192.168.0.6 00:50:cc:44:55:57 And then obviously add the permission to allow it to be executed: chmod +x /etc/network/if-up.d/add-my-static-arp These ARP entries are now automatically added or re-added every time any network interface is brought up.
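    Scripts in /etc/network/if-up.d/ run every time any interface comes up, but ifupdown exports the interface name in the IFACE environment variable, so the script can be limited to the one interface these devices actually sit behind. A small hedged refinement of the script above:

      #!/bin/sh
      # /etc/network/if-up.d/add-my-static-arp -- only act when the intended interface comes up
      [ "$IFACE" = "eth0" ] || exit 0
      arp -i "$IFACE" -s 192.168.0.4 00:50:cc:44:55:55
      arp -i "$IFACE" -s 192.168.0.5 00:50:cc:44:55:56
      arp -i "$IFACE" -s 192.168.0.6 00:50:cc:44:55:57

    This avoids re-adding the entries when, for example, the loopback or a VPN interface is brought up.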

    Read the article

  • How do I install a WiMAX USB driver?

    - by kakaz
    My WiMAX USB modem works properly in Ubuntu 9.04. I am now trying Ubuntu 10.04 and tried to install the same deb file to use the modem, but it could not be installed and gives me the following error message: $ sudo dpkg -i green-packet-wimax-usb_i386.iso.deb (Reading database ... 206628 files and directories currently installed.) Preparing to replace green-packet-wimax-usb 1.12 (using green-packet-wimax-usb_i386.iso.deb) ... /var/lib/dpkg/info/green-packet-wimax-usb.prerm: 45: /etc/init.d/wimaxd: not found Removing any system startup links for /etc/init.d/wimaxd ... FATAL: Module mt7118_usb_os not found. Unpacking replacement green-packet-wimax-usb ... Setting up green-packet-wimax-usb (1.12) ... FATAL: Error inserting mt7118_usb_glue (/lib/modules/2.6.32-28-generic/kernel/drivers/net/mt7118_usb_glue.ko): Invalid module format dpkg: error processing green-packet-wimax-usb (--install): subprocess installed post-installation script returned error exit status 1 Processing triggers for ureadahead ... Processing triggers for desktop-file-utils ... Processing triggers for python-gmenu ... Rebuilding /usr/share/applications/desktop.en_US.utf8.cache... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Processing triggers for python-support ... Errors were encountered while processing: The error (line 9) gives me a clue that the mt7118_usb_glue.ko kernel object cannot be inserted. So I think this may be due to its kernel dependencies. Can anybody tell me how I can install this kernel object for my new Ubuntu 10.04 kernel?
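    "Invalid module format" almost always means the .ko was built against a different kernel than the one now running, so it refuses to load. A hedged way to confirm the mismatch before hunting for a rebuilt driver:

      # compare the running kernel with the kernel the module was built for
      uname -r
      modinfo /lib/modules/2.6.32-28-generic/kernel/drivers/net/mt7118_usb_glue.ko | grep vermagic

    If the vermagic line does not match uname -r, the module has to be rebuilt (from the vendor's source, if available) for the 10.04 kernel; the binary packaged for the 9.04 kernel will not load as-is.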

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with the Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines, one on VirtualBox on Ubuntu and the other an office laptop. The Windows machine on VirtualBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content. The issue on the laptop is the following. Shared folders on ext3 drives are accessible without issues, but content shared from mounted NTFS and FAT32 drives is not accessible. When I try to open such a shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details). I changed the workgroup value to the domain_name on the office laptop, but the problem persists. Here is the smb.conf I am using: [global] workgroup = XXX.XXX.ORG server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d guest ok = Yes [homes] comment = Home Directories [printers] comment = All Printers path = /var/spool/samba read only = No create mask = 0700 printable = Yes browseable = No [print$] comment = Samba server's CD-ROM path = /cdrom force user = nobody force group = nobody locking = No The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.
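    The shares for the NTFS/FAT32 mount points are not defined in the config above, and those filesystems do not carry Unix ownership, so everything on them belongs to whoever mounted them (often root) with fixed permissions; Samba then rejects the laptop's credentials when it tries to map them onto those files. One hedged approach is to share each mounted drive explicitly with a guest/force-user stanza (the share name and path below are assumptions, adjust to the actual mount point):

      # example addition to /etc/samba/smb.conf for a mounted NTFS volume
      [ntfs-data]
         comment = NTFS data drive
         path = /media/ntfs-data
         guest ok = Yes
         read only = No
         force user = nobody
         create mask = 0777
         directory mask = 0777

    Restart Samba afterwards so the new share is picked up. Alternatively, mounting the NTFS/FAT32 volumes with uid/gid/umask options that match the Samba user achieves the same thing at the filesystem level.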

    Read the article

  • lirc_zilog IR transmission no longer working with HD-PVR on 12.04

    - by johnf
    I have been running a ubuntu 10.04 with a patched version of lirc_zilog for two years. I upgraded to 12.04 and lirc_zilog is no longer working with my HD-PVR. The MythTV wiki reports that it did work out of the box with 11.04. The error message I get on irsend is as follows: johnf@carbon:~$ /usr/local/bin/irsend SEND_ONCE blaster 0_130_KEY_POWER irsend: command failed: SEND_ONCE blaster 0_130_KEY_POWER irsend: hardware does not support sending The lircd daemon, run interactively, reports the following: lircd: accepted new client on /var/run/lirc/lircd lircd: could not get hardware features lircd: this device driver does not support the LIRC ioctl interface lircd: major number of /dev/lirc0 is 250 lircd: LIRC major number is 61 lircd: check if /dev/lirc0 is a LIRC device lircd: WARNING: Failed to initialize hardware lircd: error processing command: SEND_ONCE blaster 0_130_KEY_POWER lircd: hardware does not support sending lircd: removed client Checking dmesg seems to indicate that the kernel module is loading properly: [56497.730743] lirc_zilog: module is from the staging directory, the quality is unknown, you have been warned. [56497.730999] lirc_zilog: Zilog/Hauppauge IR driver initializing [56497.732484] lirc_zilog: ir_probe: ir_rx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x71 [56497.732493] lirc_zilog: ir_probe: ir_tx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x70 [56497.732496] lirc_zilog: probing IR Tx on Hauppage HD PVR I2C (i2c-0) [56497.756822] lirc_zilog: firmware of size 302355 loaded [56497.756945] lirc_zilog: 743 IR blaster codesets loaded [56497.757030] i2c i2c-0: lirc_dev: driver lirc_zilog registered at minor = 0 [56497.757033] lirc_zilog: IR unit on Hauppage HD PVR I2C (i2c-0) registered as lirc0 and ready [56497.757035] lirc_zilog: probe of IR Tx on Hauppage HD PVR I2C (i2c-0) done [56497.757056] lirc_zilog: initialization complete Here is my /etc/lirc/hardware.conf #Chosen IR Transmitter TRANSMITTER="HD-PVR" TRANSMITTER_MODULES="lirc_dev lirc_zilog" TRANSMITTER_DRIVER="" TRANSMITTER_DEVICE="/dev/lirc0" TRANSMITTER_SOCKET="" TRANSMITTER_LIRCD_CONF="" TRANSMITTER_LIRCD_ARGS="" My lircd.conf is a copy of the recommended one. Examination of the kernel source seems to indicate that the lirc_zilog module should support transmission, it's newer than the patched version I was manually compiling on 10.04. I was previously using a manually built version of lirc 0.8.7 and not the packaged one. I'm now running the packaged version 9.0. I can provide any additional information required and will perform tests quickly. I'm very eager to get this issue resolved.

    Read the article

  • Troubleshooting wireless network connectivity

    - by taserian
    I'm currently running Ubuntu 10.10, and I'm running into trouble keeping my wireless connection alive. After rebooting, I get about 5-10 minutes of a good speed connection; afterwards, the connection just zeroes out. I've gotten around to stirring up a shell script that gives me another 10 minutes or so of connectivity. Script contents below: #! /bin/bash sudo ifconfig wlan0 up echo "Enabling wireless device . . ." sleep 5 sudo iwconfig wlan0 essid MyNetworkName echo "Connecting to network. . ." sleep 10 sudo dhclient wlan0 echo "Getting IP address. . ." sleep 5 echo "Done. Closing window. . ." sleep 5 Shortly after running the line "sudo iwconfig wlan0 essid MyNetworkName" from the script, I notice the speed pick up. Other computers in my home running Windows XP are not affected by this problem, so all indications point to my Ubuntu machine. Does anyone have any pointers as to how to resolve this?
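    One thing worth ruling out (a guess at the cause, not a confirmed one) is the wireless card's power-saving mode, which on some chipsets throttles the link to a crawl a few minutes after boot:

      # disable power management on the wireless interface and watch whether the speed holds
      sudo iwconfig wlan0 power off
      iwconfig wlan0

    If that keeps the connection alive, the command can be made permanent, for example from /etc/rc.local or an /etc/network/if-up.d/ script, instead of re-running the manual reconnect script.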

    Read the article

  • Software Center not opening at all (error)

    - by Newbie
    When I open Software Center from the menu, it says "cannot open software database. Please reinstall the software-center package." When I run software-center in a terminal, this error comes up: 2014-05-28 09:11:20,584 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2014-05-28 09:11:20,593 - softwarecenter.ui.gtk3.app - ERROR - xapian open failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 302, in __init__ if self.db.schema_version() != DB_SCHEMA_VERSION: File "/usr/share/software-center/softwarecenter/db/database.py", line 289, in schema_version return self.xapiandb.get_metadata("db-schema-version") File "/usr/share/software-center/softwarecenter/db/database.py", line 177, in xapiandb self._db_per_thread[thread_name] = self._get_new_xapiandb() File "/usr/share/software-center/softwarecenter/db/database.py", line 190, in _get_new_xapiandb xapiandb = xapian.Database(self._db_pathname) File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3667, in __init__ _xapian.Database_swiginit(self,_xapian.new_Database(*args)) DatabaseCorruptError: /var/cache/software-center/xapian/iamchert: Chert version file should be 28 bytes, actually 0 Now, when I run the command sudo apt-get remove software-center, I get: dpkg: error: corrupt info database format file '/var/lib/dpkg/info/format' E: Sub-process /usr/bin/dpkg returned an error code (2) I had Ubuntu before but it kind of got corrupted. Now I have freshly reinstalled it, and even from the start Software Center does not open and this error appears. I hope you have a solution. Thanks.
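    The traceback points at two separate problems: the Software Center's Xapian index under /var/cache/software-center/xapian is corrupt, and dpkg's own info database format file is damaged. The Xapian part is only a cache, so a hedged first step is to delete it and let it be rebuilt:

      # remove the corrupt cache and rebuild it
      sudo rm -rf /var/cache/software-center/xapian
      sudo update-software-center

    If the update-software-center helper is not present on this release, simply launching Software Center again should trigger the rebuild. The dpkg error about /var/lib/dpkg/info/format is a separate and more serious corruption and is worth handling on its own before removing or reinstalling packages.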

    Read the article

  • CodePlex Daily Summary for Saturday, February 26, 2011

    CodePlex Daily Summary for Saturday, February 26, 2011Popular ReleasesDirectQ: Release 1.8.7 Beta 2: Beta 2 release to fix some early reported problems with the original 1.8.7 Beta.Chiave File Encryption: Chiave 0.9.2: Release Notes Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.1 to 0.9.2: ==================== Added: > Now it displays number of files added in the wizard to the Window Title bar. > Added support to Windows XP. > Minor UI tweaks. I...Claims Based Identity & Access Control Guide: Drop 1 - Claims Identity Guide V2: Highlights of drop #1 This is the first drop of the new "Claims Identity Guide" edition. In this release you will find: All previous samples updated and enhanced. All code upgraded to .NET 4 and Visual Studio 2010. Extensive cleanup. Refactored Simulated Issuers: each solution now gets its own issuers. This results in much cleaner and simpler to understand code. Added Single Sign Out support. Added first sample using ACS ("ACS as a Federation Provider"). This sample extends the ori...HFR7 - Forum Hardware.fr pour Windows Phone 7: HFR7 - v1.1.1: NOUVELLES FONCTIONS : - Aucune. AMÉLIORATIONS : - La page "Catégories" est désormais intégrée au pivot "Bienvenue". - Apparition de boutons de changement de page en bas du sujet en consultation. - Les en-têtes sur les sujets sont désormais de la même couleur que la couleur d'accentuation de votre téléphone. BUGFIXES : - Les catégories s'affichent correctement. - Cliquer sur le bouton "drapeau" alors qu'on est dans la liste des topics d'une page n'amenait pas directement au dernier post non ...Simple Notify: Simple Notify Beta 2011-02-25: Feature: host the service with a single click in console Feature: host the service as a windows service Feature: notification cient application Feature: push client application Feature: push notifications from your powershell script Feature: C# wrapper libraries for your applicationsMono.Addins: Mono.Addins 0.6: The 0.6 release of Mono.Addins includes many improvements, bug fixes and new features: Add-in engine Add-in name and description can now be localized. There are new custom attributes for defining them, and can also be specified as xml elements in an add-in manifest instead of attributes. Support for custom add-in properties. It is now possible to specify arbitrary properties in add-ins, which can be queried at install time (using the Mono.Addins.Setup API) or at run-time. Custom extensio...patterns & practices: Project Silk: Project Silk Community Drop 3 - 25 Feb 2011: IntroductionWelcome to the third community drop of Project Silk. For this drop we are requesting feedback on overall application architecture, code review of the JavaScript Conductor and Widgets, and general direction of the application. Project Silk provides guidance and sample implementations that describe and illustrate recommended practices for building modern web applications using technologies such as HTML5, jQuery, CSS3 and Internet Explorer 9. This guidance is intended for experien...PhoneyTools: Initial Release (0.1): This is the 0.1 version for preview of the features.Minemapper: Minemapper v0.1.5: Now supports new Minecraft beta v1.3 map format, thanks to updated mcmap. 
Disabled biomes, until Minecraft Biome Extractor supports new format.Smartkernel: Smartkernel: ????,??????Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7 Important when upgradingUpgrade as if it was a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation 4.7 requires the .NET 4.0 framework Web.Config changes Update the web web.config to include the 4 changes found in (they're clearly marked in...HubbleDotNet - Open source full-text search engine: V1.1.0.0: Add Sqlite3 DBAdapter Add App Report when Query Cache is Collecting. Improve the performance of index through Synchronize. Add top 0 feature so that we can only get count of the result. Improve the score calculating algorithm of match. Let the score of the record that match all items large then others. Add MySql DBAdapter Improve performance for multi-fields sort . Using hash table to access the Payload data. The version before used bin search. Using heap sort instead of qui...Silverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight??????。DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0 Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense ( further testing needed). Hightlight Field and Table Names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in Tables. Fixed comment / uncomment bug as reported by tareq. Included Synonyms in scripting engine ( nickt_ch ).IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython TeamCaliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candicate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates The templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet This release does not have a corresponding NuGet package. The NuGet pack...Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. 
If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...New Projectsconvert digit to word upto thousand: convert digit to word upto thousandCustom XSLT with recursive loop in Biztalk 2009: Custom XSLT with recursive loop in Biztalk 2009ECO contrib: ECO ist a excellent framework for domain driven design developed by CapableObjects AB. Share your additional features in this contrib project.euler21: euler 21euler22: euler 22 problemflowless: Some workflow thingHFR7 - Forum Hardware.fr pour Windows Phone 7: HFR7 vous permet de surfer sur le forum de Hardware.fr via votre Windows Phone. Il est développé sous Silverlight (C#/XAML).ISP.NET: ISP.NET is a code level verification tool for MPI programs. It includes a Visual Studio 2010 extension that allows for push button verification of user programs that are written in C, C++ and C#. ISP checks for deadlocks, assertion violations, and other MPI program issues. IVY Frameworks: Ivy Frameworks is an architectural framework to create multi-tire application in a consistent way. We are creating specific application blocks. When completed it will help to create a software development pattern, to mass produce .Net applications in a software factory model. Jobeet: Just another Symfony Tutorial I'm following.Lurity: Project Codename "Lurity".Metro Toolkit: Metro ToolkitMySchoolManager: MySchoolManager es un programa de fines didácticos, de código fuente abierto y documentado en español, que simula un sistema de administración del personal de una escuela. NewRapp: Project at KTH in course DD2388NHibernateCMS: NHibernateCMSParametero - Yet Another Console Arguments Parsing Library: Parametero - Yet Another Console Arguments Parsing LibraryRasifiel's test: Test projectSilverlight Gantt Chart: Don't waste time creating your own Gantt Chart. We've already done it!SKWebSite: This is my first ASP.NET MVC3 projectTechChat: TechChat is a ASP.net AJAX based stylish chat room. It demonstrates the use of database and use of AJAX. It is developed in VB.netUDK Development Kit Addons: UDK Development Kit Addons makes it easier for UDK users to develop and debug UDK scripts. It's developed in C# with using Visual Studio Shell.UltraLight.mvvm Windows Phone 7 MVVM Framework: A lightweight (< 300 lines, < 25KB) framework for developing MVVM Silverlight applications with support for tombstoning on the Windows Phone 7.Website Panel: Website PanelxMS (eXtensible Management System): The xMS is a simple console that can be extended using plugins. Those plugins can be written by developers using a simple interface. It's a good starting for who want to develop a plugin based application.Xna Midi Project: A project that will allow midi functionality using only the compact framework for windows and Xbox solutions.

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes, made after Solaris 11 Express was released, brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality: /usr/bin/netcat alternative program name (symlink) -b bufsize I/O buffer size -E use exclusive bind for the listening socket -e program program to execute -F no network close upon EOF on stdin -i timeout extension of timeout specification -L timeout linger on close timeout -l -p port addr previously not allowed usage -m byte_count Quit after receiving byte_count bytes -N file pattern for UDP scanning -I bufsize size of input socket buffer -O bufsize size of output socket buffer -R redir_spec port redirection addr/port[/{tcp,udp}] syntax of redir_spec -Z bypass zone boundaries -q timeout timeout after EOF on stdin Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self-explanatory, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example: the port redirector allows converting a TCP stream to UDP datagrams; the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily; the socket linger option can be used to produce TCP RST segments by setting the timeout to 0; the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script; etc. If you find other helpful ways to use them, please share via comments. The manual page nc(1) contains more details, along with examples on how to use some of these new options.
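    As a small hedged illustration of the execute option (exactly how -e combines with the listening flags is an assumption here; nc(1) on Solaris 11 is the authoritative reference), a one-line TCP echo service and a client to poke it:

      # listen on TCP port 2048 and hand each connection to /bin/cat, i.e. a trivial echo server
      nc -l -p 2048 -e /bin/cat
      # in another shell: connect, type a line, and it should come straight back
      nc localhost 2048

    The same pattern with a small script in place of /bin/cat is what the post means by simulating TCP/UDP servers or clients.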

    Read the article

  • Laptop battery life drastically decreased compared to Windows 7

    - by Aron Rotteveel
    I am running Ubuntu 10.10 on my Dell Studio XPS 1640 and have about one hour of battery life in it, compared to about 2.5 hours running on Windows 7. This is with wireless and bluetooth on, but still, the difference seems incredible. What could be causing such a difference and is there a way to close the gap without losing core functionality? EDIT: here's some output from powertop. This is with bluetooth turned off and Wifi turned on. The output seems pretty normal to me, but as indicated, this is about 1 hour of battery life on a full battery... Wakeups-from-idle per second : 476.2 interval: 10.0s Power usage (ACPI estimate): 2.5W (1.2 hours) Top causes for wakeups: 30.0% (167.2)D chrome 21.0% (117.3) [extra timer interrupt] 13.9% ( 77.4) [kernel scheduler] Load balancing tick 3.4% ( 18.9)D xchat 7.1% ( 39.8) [iwlagn] <interrupt> 5.9% ( 32.9) AptanaStudio3 3.9% ( 21.6)D java 2.7% ( 14.9) [TLB shootdowns] <kernel IPI> 2.5% ( 14.1) docky 1.8% ( 10.0) nautilus 1.6% ( 9.0) thunderbird-bin 1.0% ( 5.5) [ahci] <interrupt> 0.9% ( 5.0) syndaemon 0.8% ( 4.3) [kernel core] hrtimer_start (tick_sched_timer) EDIT: after changing /proc/sys/vm/laptop_mode to 5 (it was set to 0), wakeups seem to have decreased, although usage still seems far too high: Wakeups-from-idle per second : 263.8 interval: 10.0s Power usage (ACPI estimate): 2.6W (0.9 hours) EDIT: I seem to have discovered the main cause: I was using the open source ATI Drivers. I recently installed the official ATI drivers and laptop battery life seems to have doubled since. EDIT: last edit. The previous 'solution' of installing the official ATI drivers turns out to be a non-solution. Although it does increase battery life, my laptop resolution is maxed out at 1200x800 after a reboot. (Please note that this problem does not need answering in this question as it is a seperate case)

    Read the article

  • How to remove unmet dependencies created by VLC player in Ubuntu 12.04 LTS?

    - by Anti
    Output on trying to remove vlc with sudo apt-get remove vlc: niranjan@niranjan-OEM:~$ sudo apt-get remove vlc Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libvlccore5 : Depends: vlc-data (= 2.0.8-0ubuntu0.12.04.1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). Trying sudo apt-get -f install niranjan@niranjan-OEM:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: vlc-data The following NEW packages will be installed: vlc-data 0 upgraded, 1 newly installed, 0 to remove and 452 not upgraded. 8 not fully installed or removed. Need to get 0 B/10.3 MB of archives. After this operation, 30.4 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 95% dpkg: unrecoverable fatal error, aborting: files list file for package 'libavutil51' is missing final newline E: Sub-process /usr/bin/dpkg returned an error code (2)
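    The dpkg abort is not really about VLC: the per-package files list for libavutil51 under /var/lib/dpkg/info is truncated ("missing final newline"), which stops every dpkg run. A commonly used hedged workaround is to move the damaged list file aside and let dpkg regenerate it by reinstalling that one package:

      # back up the truncated files list, then reinstall the package so dpkg rewrites it
      sudo mv /var/lib/dpkg/info/libavutil51.list /var/lib/dpkg/info/libavutil51.list.bak
      sudo apt-get -f install
      sudo apt-get install --reinstall libavutil51
      # once dpkg is healthy again, the original removal should go through
      sudo apt-get remove vlc

    If more packages report the same "files list file ... missing final newline" error, each one's .list file needs the same treatment.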

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. By default, Eclipse seems to output the incrementally built class files to the projRoot/bin folder. I've also noticed that many projects come with Ant files that build the project using the Java compiler installed on the system for production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its intellisense/incremental building/etc.)? Is it typical that for the final "release" build of a project, Ant, Maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?

    Read the article

  • Update Manager won't open (error related to pythonverbose)

    - by Mateus Machado Luna
    I'm having an issue with update-manager. Last night my computer restarted suddenly during the update process. Now it won't open, and it keeps appearing in the notifier with a message warning that an error occurred. The error is the same one displayed when I try to open it from the terminal: Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected Traceback (most recent call last): File "/usr/bin/update-manager", line 26, in <module> from __future__ import print_function EOFError: EOF read where not expected I've already seen some questions here, but most of them are related to problems with PPAs and the sources.list file. This seems to be a bug in update-manager itself. I've already tried removing it and installing it again, but the problem persists. I also noted another bug: my source-center doesn't open either. The message for it is similar to the other one: Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected Traceback (most recent call last): File "/usr/lib/command-not-found", line 5, in <module> from __future__ import absolute_import, print_function EOFError: EOF read where not expected For now I'm using apt-get update && upgrade for updating and Synaptic for source management, but I really would like to fix this. Can anyone help? I'm on Ubuntu 12.10, GNOME remix, 64-bit.
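    An EOFError while importing __future__ usually means a compiled .pyc file was truncated when the machine restarted in the middle of the upgrade. The error message itself names the first diagnostic step; a hedged sketch:

      # ask Python to report every import so the truncated module identifies itself
      PYTHONVERBOSE=1 update-manager 2>&1 | tail -n 40
      # if a specific .pyc file is reported as bad, remove it and Python will regenerate it
      # (the path below is purely an example; use whatever the verbose output points at)
      sudo rm /usr/lib/python2.7/sitecustomize.pyc

    Reinstalling the package that owns the damaged file (dpkg -S <path> will name it) is the cleaner follow-up once the culprit is known.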

    Read the article

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to TRIM with my new ThinkPad and was wondering about the difference between manual and online trimming. Here is my setup: a ThinkPad T430s with a Samsung 830 SSD, 128 GB, and Xubuntu 12.10. Here are some outputs to check whether TRIM will work on my system (I got these from here: http://wiki.ubuntuusers.de/SSD/TRIM) root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM * Data Set Management TRIM supported (limit 8 blocks) First, I tried online trimming ("How to enable TRIM?"); here is my fstab with discard inserted: UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba / ext4 discard,noatime,errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 I tried to test if it works (but I don't get any zeroes when I try it with /dev/sda), but found out that this method is only possible with SSD type 2 and I seem to have type 3. So I don't know if it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cronjob instead of discard: #!/bin/sh LOG=/var/log/batched_discard.log echo "*** $(date -R) ***" >> $LOG fstrim -v / >> $LOG The wiki article suggests weekly or daily. Now to my questions: how often does the automated trim execute? How often is recommended? Online vs. manual trimming? Thank you for your help.

    Read the article

  • Configuring log4j on weblogic server for web applications.

    - by adejuanc
    To configure WebLogic Server: 1.- Read the following link: How to Use Log4j with WebLogic Logging Services http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html#wp1014610 Here is the step by step: 2.- Go to WL_HOME/server/lib and copy wllog4j.jar to the server CLASSPATH; to do this, copy the file into DOMAIN_NAME/lib 3.- Download the log4j jar (in my case I did not have the file) from http://logging.apache.org/log4j/1.2/download.html; in this case the last available version is log4j-1.2.17.jar. Copy the file into DOMAIN_NAME/lib (as in step 2). 4.- In this case I activated log4j using WLST (WebLogic Scripting Tool), as below: 4.1.- As you're using Windows, open a terminal window, go to DOMAIN_NAME/bin and run the file setDomainEnv.cmd (this file will set the environment to run Java). 4.2.- Execute the following commands: C:\>java weblogic.WLST wls:/offline> connect('username','password') wls:/mydomain/serverConfig> edit() wls:/mydomain/edit> startEdit() wls:/mydomain/edit !> cd("Servers/$YOUR_SERVER_NAME/Log/$YOUR_SERVER_NAME") wls:/mydomain/edit/Servers/myserver/Log/myserver !> cmo.setLog4jLoggingEnabled(true) wls:/mydomain/edit/Servers/myserver/Log/myserver !> save() wls:/mydomain/edit/Servers/myserver/Log/myserver !> activate() You can use ls() to list the objects under the WLS directory. This will activate log4j for use with WLS. Configuring WebLogic Logging Services http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html To configure applications: 1. Create a log4j.properties file as below log4j.debug=TRUE log4j.rootLogger=INFO, R log4j.appender.R=org.apache.log4j.RollingFileAppender log4j.appender.R.File=/home/server.log log4j.appender.R.MaxFileSize=100KB log4j.appender.R.MaxBackupIndex=5 log4j.appender.R.layout=org.apache.log4j.PatternLayout log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c – %m%n 2. Copy the file to the /WEB-INF/classes directory of your application. 3.- Also perform the WLST steps above to activate log4j on WLS.

    Read the article

  • Brother MFC-J470DW scan function "Check Connection"

    - by user292599
    I have a Brother MFC-J470DW printer that I have connected to a Linux desktop (running Ubuntu 14.04) using a wireless router network. The printer works fine for printing and copying, but now I want to add the scan function. To set up the scan function, I went to the Brother web page for this printer: http://support.brother.com/g/b/downloadlist.aspx?c=eu_ot&lang=en&prod=mfcj470dw_us_eu_as&os=128 and under Scanner Drivers selected "Scanner driver 64bit (deb package)", "Scan-key-tool 64bit (deb package)", and "Scanner Setting file (deb package)". For each package, I clicked the EULA, and selected "open with Ubuntu Software Center". Then after the USC window pops up, I click on Install and the red line goes from left to right. In each case, the USC window then had a green checkmark and the Install box changes to Reinstall (that's how you know it worked). So now I try it out. Hitting the Scan button on the printer, selecting "Scan to file", and hitting ok produces the message "Check Connection". I checked the Brother Linux Information FAQ (scanner) page and the 14th question seems the same as mine: When I try to use the scan key on my network connected machine, I receive the error "Check connection" or I can not select anything except "scan to FTP". I explored the solution given for this FAQ, but found from ifconfig that I am already using eth0, the default setting, so presumably that is not the problem. I also found brscan-skey installed in /usr/bin and did drrm@drrmlinux2:~$ brscan-skey -t drrm@drrmlinux2:~$ brscan-skey but that didn't help - I still get the "Check connection" message. What can you suggest to fix this problem?
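    With a network-connected Brother device, the scanner driver also has to be told the machine's IP address; "Check Connection" on the panel typically means the brscan driver has no entry for this computer to reach. A hedged sketch using Brother's configuration tool (the tool name brsaneconfig4 and its syntax are assumptions based on Brother's brscan4 driver package, and the IP address is a placeholder):

      # register the network scanner with the brscan4 driver (replace the IP with the printer's actual address)
      sudo brsaneconfig4 -a name=MFC-J470DW model=MFC-J470DW ip=192.168.1.50
      # list the configured scanners to confirm the entry
      brsaneconfig4 -q

    After registering the device, restart brscan-skey and try "Scan to file" from the printer's panel again.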

    Read the article

  • Set up an Autoreply-Only Account

    - by dabrain
    For some very good reason you might want to set up an 'autoreply'-only account, without storing the incoming mail in a mailbox. If not already done, create an account via the Delegated Admin GUI or the commadmin command-line tool. Example: /opt/sun/comms/da/bin/commadmin user create -D admin -d vmdomain.tld -w enigma -F Mike -l mparis -L Paris -W tester -E [email protected] -S mail -H mars.vmdomain.tld Set mailDeliveryOption to autoreply mode only, so no email will be stored in the user mailbox; skip this step if you want incoming emails stored in the mailbox. ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/modfile [/tmp/modfile] dn: uid=mparis,ou=People,o=vmdomain.tld,o=red changetype: modify replace: mailDeliveryOption mailDeliveryOption: autoreply Set up mailSieveRuleSource with the autoreply text and a 'do-not-reply' From address. The "Thank you ..." part becomes the subject. The next string in quotes is the body part of the message. The ":hours 0" denotes that we want a reply sent for every message. Finally, \n is used for the wanted newlines in the body. ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/addfile [/tmp/addfile] dn: uid=mparis,ou=People,o=vmdomain.tld,o=red changetype: modify add: mailSieveRuleSource mailSieveRuleSource: require "vacation"; vacation :hours 0 :reply :from "do-not-reply@domain.com" :subject "Thank you for contacting webpost" "Your Mail is being reviewed.\nTo access contact information please visit: http://www.domain.com\nPlease do not reply to this e-mail as it is an automated response on your mail being accessed.\n\nPublic Response Unit.\n"

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (seeing boot.log) I think it's the / fs, which is located at /dev/sdb5 Several questions altoghether, here (hope this does not break any rule): Is this normal? Can I (or even should I) prevent this anyhow? According to boot.log (below) the fs does not seem to be 'clean', or, at least, it's in an state or condition that makes fsck always can it for errors for a while (just a few seconds). How can I fix it? Edit: This is my boot.log: fsck desde util-linux-ng 2.17.2 udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules' /dev/sdb5: 249045/32841728 ficheros (0.3% no contiguos), 20488485/131338752 bloques init: ureadahead-other main process (1111) terminated with status 4 init: ureadahead-other main process (1116) terminated with status 4 Password: * Starting AppArmor profiles [160G Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox [154G[ OK ] * Setting sensors limits [160G [154G[ OK ] And this is dumpe2fs results for the filesystem being checked (well, the relevant part of the log): Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32841728 Block count: 131338752 Reserved block count: 6566937 Free blocks: 110850356 Free inodes: 32592701 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Fri Dec 10 19:44:15 2010 Last mount time: Mon Feb 14 17:00:02 2011 Last write time: Mon Feb 14 16:59:45 2011 Mount count: 1 Maximum mount count: 33 Last checked: Mon Feb 14 16:59:45 2011 Check interval: 15552000 (6 months) Next check after: Sat Aug 13 17:59:45 2011 Lifetime writes: 331 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 28049496 Default directory hash: half_md4 Directory Hash Seed: d3d24459-514b-4413-b840-e970b766095b Journal backup: inode blocks Journal features: journal_incompat_revoke Tamaño de fichero de transacciones: 128M Journal length: 32768 Journal sequence: 0x0005e0c4 Journal start: 1 This is the relevant (at least I think this is the fs being checked) line in fstab: #Entry for /dev/sdb5 : UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 / ext4 errors=remount-ro 0 1
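    The dumpe2fs output explains most of this: periodic checking is enabled ("Maximum mount count: 33", "Check interval: ... 6 months"), and the listed "First orphan inode" suggests the previous shutdown left orphaned inodes to clean up at mount time, which also produces a brief check at boot. A quick pass at boot is part of the normal mount sequence; if the longer periodic checks are unwanted, they can be inspected and relaxed with tune2fs (disabling them entirely trades a little safety for boot speed):

      # show the current check counters for the root filesystem
      sudo tune2fs -l /dev/sdb5 | grep -i -e 'mount count' -e 'check'
      # raise the limits instead of checking every 33 mounts / 6 months
      sudo tune2fs -c 100 -i 12m /dev/sdb5
      # or disable the periodic checks completely (generally not recommended)
      # sudo tune2fs -c 0 -i 0 /dev/sdb5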

    Read the article

< Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >