Search Results

Search found 870 results on 35 pages for 'gz'.

Page 24/35

  • High CPU usage from system [on hold]

    - by user195641
    Can anyone explain this? Is it bad for my CPU? Can anyone help me? I don't know what to do! There is a high delay when I open a new task, even though it's an Intel Core Duo Extreme at 3.0 GHz. Thanks. There are about 100 items monitored with the agent. They are also monitored on other identical hosts where the Zabbix agent does not consume so much CPU. The agents send collected data to a Zabbix proxy. The agent configuration is the default. The host CPU has 8 cores (2.4 GHz). The smallest interval for monitored items is 60 seconds.

    Read the article

  • ssh (openSSH) questions

    - by Camran
    I have an Ubuntu 9.10 server. Firstly, is OpenSSH the same as sshd? Secondly, in the terminal, typing whereis sshd gives me /usr/sbin/sshd, and typing whereis openssh gives me /usr/lib/openssh. How do I know if I have OpenSSH? Also, some tutorials online suggest opening sshd_config, but whereis sshd_config only returns /usr/share/man/man5/sshd_config.5.gz. What should I do? As you pointed out in your answer to my other question about security, it is how you configure your ssh and so on that matters. Is there any guide for this? How should I configure it? I will be the only user of this server, by the way. If you need more input let me know and I will update this question. Thanks
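
    For reference, sshd is the OpenSSH server daemon, and its live configuration file is /etc/ssh/sshd_config; whereis only found the man page because the config lives in /etc rather than on the PATH. A minimal way to verify the install, assuming the stock Ubuntu packages:

        dpkg -l | grep openssh          # lists the installed OpenSSH packages
        ssh -V                          # prints the installed OpenSSH version
        sudo nano /etc/ssh/sshd_config  # the daemon's actual configuration file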

    Read the article

  • Resizing Images In a Table Using JavaScript

    - by Abluescarab
    Hey, all. I apologize beforehand if none of this makes sense, or if I sound incompetent... I've been making webpages for a while, but don't know any JavaScript. I am trying to make a webpage for a friend, and he asked whether I could write some code to resize the images on the page based on the user's screen resolution. I did some research on this question, and it's kind of similar to this, this, and this. Unfortunately, those articles didn't answer my question entirely, because none of the methods described worked. Right now, I have a three-column webpage with 10 images in a table in the left sidebar, and when I use percentages for their sizes, they don't resize correctly between monitors. As such, I'm trying to write a JavaScript function that changes their sizes after detecting the screen resolution. The code gets stripped from my post if I include all of it, so I'll just say that each image links to a website and uses a separate image to change color when you hover over it. Would I have to address both images to change their sizes correctly? I use a JavaScript function to switch between them. Anyway, I tried both methods in this article and neither worked for me. If it helps, I'm using Google Chrome, but I'm trying to make this page as cross-browser as possible. Here's the code I have so far in my JavaScript function:

        function resizeImages() {
            var w = window.width;
            var h = window.height;
            var yuk = document.getElementById('yuk').style;
            var wb = document.getElementById('wb').style;
            var tf = document.getElementById('tf').style;
            var lh = document.getElementById('lh').style;
            var ko = document.getElementById('ko').style;
            var gz = document.getElementById('gz').style;
            var fb = document.getElementById('fb').style;
            var eg = document.getElementById('eg').style;
            var dl = document.getElementById('dl').style;
            var da = document.getElementById('da').style;
            if (w = "800" && h = "600") {
            } else if (w = "1024" && h = "768") {
            } else if (w = "1152" && h = "864") {
            } else if (w = "1280" && h = "720") {
            } else if (w = "1280" && h = "768") {
            } else if (w = "1280" && h = "800") {
            } else if (w = "1280" && h = "960") {
            } else if (w = "1280" && h = "1024") {
            }
        }

    Yeah, I don't have much in it because I don't know if I'm doing it right yet. Is this the right way to detect the width and height of the window? The 'yuk', 'wb', etcetera are the images whose sizes I'm trying to change. To sum it up: I want to resize images based on screen resolution using JavaScript, but my research attempts have been... futile. I'm sorry if that was long-winded, but thanks ahead of time!
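
    For what it's worth, two things in the snippet above would stop it from working in any browser: window.width and window.height are not standard properties (use window.innerWidth/innerHeight, or screen.width/screen.height for the monitor resolution), and a single = in the conditions assigns rather than compares (use ===, or range checks). A minimal corrected sketch, with the image IDs and pixel sizes as placeholders:

        function resizeImages() {
            var w = window.innerWidth;              // viewport width; use screen.width for the monitor
            var h = window.innerHeight;
            var yuk = document.getElementById('yuk');  // one of the sidebar images
            if (w <= 800 && h <= 600) {
                yuk.style.width = '100px';          // placeholder size for small screens
            } else {
                yuk.style.width = '150px';          // placeholder size for everything larger
            }
        }
        window.onresize = resizeImages;             // re-run when the window size changes
        resizeImages();                             // and once on page load

    And yes, if the hover effect swaps in a second image element, both images (or a CSS class they share) need the new size.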

    Read the article

  • XDocument + IEnumerable is causing an out-of-memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list loads an XML document (as XDocument). The program then reads data out of it into a container class (storing it as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't cleaning up the container class, which, because it stores IEnumerables, seems to lead to the XDocument staying in memory (not sure if this is the reason, but the task manager shows that the memory from the XDocument isn't being freed). As the program loops through multiple files, it eventually throws an out-of-memory exception. To mitigate this I've ended up using System.GC.Collect() to force the garbage collector to run after the container goes out of scope. This works, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is disposed of? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks. Edit: Code samples. Container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    Custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                // Temporary, to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            // Export simply dumps the data from the container class into a database.
            // The container class (and any passed container classes) goes out of scope
            // at the end of the export.
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            // For each customClass in containerClass,
            // read all data for customClass from the XDocument.
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks. Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Does anyone have any idea why this might be?
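
    One likely culprit, assuming the container's IEnumerables are built with LINQ queries over the XDocument: LINQ-to-XML queries are deferred, so each stored IEnumerable still references the document, keeping it alive as long as the container is. Materializing the results with ToList() when populating the container severs that link; a sketch with placeholder element and attribute names:

        // Requires: using System.Linq; using System.Xml.Linq;
        // "itemOne" and "id" are placeholders for whatever the real XML uses.
        containerClass.CustomClassOne =
            document.Descendants("itemOne")
                    .Select(e => new CustomClassOne { VariableOne = (long)e.Attribute("id") })
                    .ToList();   // query runs now; the XDocument can be collected afterwards

    With the lists materialized this way, the forced GC.Collect() calls should become unnecessary.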

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner
        # medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free
        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main
        # opera
        deb http://deb.opera.com/opera/ lenny non-free
        # google
        deb http://dl.google.com/linux/deb/ stable non-free main
        # Dropbox official source
        deb http://linux.dropbox.com/ubuntu karmic main
        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages
          404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.
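
    Note that the failing URL points at ppa.launchpad.net/bisig/ppa, which does not appear in the sources.list above, so APT is almost certainly also reading a file under /etc/apt/sources.list.d/. One way to locate and disable the dead PPA, assuming standard APT file locations:

        grep -r "bisig" /etc/apt/sources.list /etc/apt/sources.list.d/   # find the line referencing the dead PPA
        # comment out or delete the matching line(s), then refresh:
        sudo apt-get update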

    Read the article

  • What's consuming HDD space?

    - by Umair Mustafa
    I have a single partition of 92 GB on which I installed Ubuntu 12.04, and for some unknown reason a message pops up saying that I only have 1 GB of HDD space left. I ran the command sudo du -hscx * on / and on /home. /home gave me this result:

        4.0K  C:\nppdf32Log\debuglog.txt
        0     convertedvideo.avi
        176M  Desktop
        16K   Documents
        169M  Downloads
        4.0K  examples.desktop
        17M   file.txt
        4.0K  Music
        984K  Pictures
        4.0K  Public
        320K  Red Hat 6.iso
        2.5M  syslog-ng_3.3.6.tar.gz
        4.0K  Templates
        8.0K  terminal.png
        1.2M  Thunderbird Attachments
        698M  ubuntu10.04LTS.iso
        16K   Ubuntu One
        4.0K  Untitled Folder
        4.0K  Videos
        21G   VirtualBox VMs
        22G   total

    And / gave me this result:

        81G   home
        0     initrd.img
        0     initrd.img.old
        833M  lib
        16K   lost+found
        68K   media
        4.0K  mnt
        260M  opt
        du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
        du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
        du: cannot access `proc/8339/fd/4': No such file or directory
        du: cannot access `proc/8339/fdinfo/4': No such file or directory
        0     proc
        640K  root
        908K  run
        8.6M  sbin
        4.0K  selinux
        4.0K  srv
        0     sys
        148K  tmp
        3.3G  usr
        436M  var
        0     vmlinuz
        0     vmlinuz.old
        86G   total

    If you look at the result returned for /, it shows that /home is consuming 81 GB, but on the other hand /home itself reports only 22 GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines. Perpetrator found using Disk Usage Analyzer.
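
    One common cause of exactly this gap: the shell glob * never matches hidden dot-directories (trash, caches, hidden VM state), so du -hscx * run inside /home undercounts. Asking du for per-directory totals includes them; a sketch assuming GNU coreutils:

        sudo du -xh --max-depth=1 /home | sort -h   # dotfiles included, largest entries last
        sudo du -xh --max-depth=1 /     | sort -h   # same for the root filesystem (-x stays on one filesystem)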

    Read the article

  • How to boot live ISO images?

    - by virpara
    I found that it can be done with loopback as follows:

        menuentry "Lucid ISO" {
            loopback loop (hd0,1)/boot/iso/ubuntu-10.04-desktop-i386.iso
            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/ubuntu-10.04-desktop-i386.iso noprompt noeject
            initrd (loop)/casper/initrd.lz
        }

    But it works only with Ubuntu or its derivatives. How should the entry be written if I want to boot other live images like Fedora, CentOS, openSUSE, etc.? Edit: I found some other entries, but all of them are probably Debian-based.

        menuentry "Linux Mint 10 Gnome ISO" {
            loopback loop /linuxmint10.iso
            linux (loop)/casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper initrd=/casper/initrd.lz iso-scan/filename=/linuxmint10.iso noeject noprompt splash --
            initrd (loop)/casper/initrd.lz
        }

        menuentry "DBAN ISO" {
            loopback loop /dban.iso
            linux (loop)/DBAN.BZI nuke="dwipe" iso-scan/filename=/dban.iso silent --
        }

        menuentry "Tinycore ISO" {
            loopback loop /tinycore.iso
            linux (loop)/boot/bzImage --
            initrd (loop)/boot/tinycore.gz
        }

        menuentry "SystemRescueCd" {
            loopback loop /systemrescuecd.iso
            linux (loop)/isolinux/rescuecd isoloop=/systemrescuecd.iso setkmap=us docache dostartx
            initrd (loop)/isolinux/initram.igz
        }

    Edit 2: How do I chainload GRUB and syslinux from GRUB 2? Edit 3: I want to boot other live images without any removable devices and use GRUB 2, so I need menu entries specific to GRUB 2.
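
    An untested sketch of the Fedora pattern, for comparison (the kernel and initrd paths inside the ISO and the volume label are assumptions that have to match the specific image; Fedora's dracut initramfs understands iso-scan/filename= in recent releases):

        menuentry "Fedora Live ISO" {
            loopback loop /boot/iso/fedora-live.iso
            linux (loop)/isolinux/vmlinuz0 root=live:CDLABEL=Fedora-Live iso-scan/filename=/boot/iso/fedora-live.iso rd.live.image
            initrd (loop)/isolinux/initrd0.img
        }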

    Read the article

  • How to Run Pam Face Authentication

    - by Supriyo Banerjee
    I am using Ubuntu 11.10. I went to the following URL to download the software 'Pam Face Authentication': http://ppa.launchpad.net/antonio.chiurazzi/ppa/ubuntu/pool/main/p/pam-face-authentication/ and downloaded the version for Natty Narwhal. I installed the software using the following commands:

        sudo apt-get install build-essential cmake qt4-qmake libx11-dev libcv-dev libcvaux-dev libhighgui2.1 libhighgui-dev libqt4-dev libpam0g-dev checkinstall
        cd /tmp && wget http://pam-face-authentication.googlecode.com/files/pam-face-authentication-0.3.tar.gz
        sudo add-apt-repository ppa:antonio.chiurazzi
        sudo apt-get update
        sudo apt-get install pam-face-authentication
        cat << EOF | sudo tee /usr/share/pam-configs/face_authentication > /dev/null
        Name: face_authentication profile
        Default: yes
        Priority: 900
        Auth-Type: Primary
        Auth: [success=end default=ignore] pam_face_authentication.so enableX
        EOF
        sudo pam-auth-update --package face_authentication

    The software installed and I can run the Qt face trainer. But the problem is that when I restarted my system, I saw the default login screen asking for my password. The webcam does not start at all, and I cannot log in with my face. This means, I think, that the PAM face authentication program is not starting at all. Please let me know how I can log in with my face using the PAM face authentication program.
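
    A quick hedged check of whether the module is being invoked at all (assuming it logs through the standard auth facility):

        grep -i pam_face /var/log/auth.log                 # shows whether the module loads or errors at login
        grep face_authentication /etc/pam.d/common-auth    # confirms pam-auth-update actually enabled the profile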

    Read the article

  • Upload to PPA succeeded but packages don't appear

    - by lorin
    I'm trying to upload packages to my PPA for the first time. I want to use the PPA for customized versions of the OpenStack Compute (nova) project, so I tried to do a test by uploading packages corresponding to the bexar release of this project (lp:nova/bexar), with a new version number and changelog entry. I signed the source packages using my OpenPGP key, which has been uploaded to the Ubuntu keyserver:

        $ dch -v 2011.1-0ubuntu2-isi1 -D lucid "ISI bexar build #1"
        $ dpkg-buildpackage -s -rfakeroot -tc -D -k4C8A14AB

    When I tried to upload the files to the repository, it seemed to work (real email obscured):

        $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes
        Checking signature on .changes
        gpg: Signature made Fri 11 Feb 2011 03:52:50 PM EST using RSA key ID 4C8A14AB
        gpg: Good signature from "Lorin Hochstein <lorin@...>"
        Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1_source.changes.
        Checking signature on .dsc
        gpg: Signature made Fri 11 Feb 2011 03:52:44 PM EST using RSA key ID 4C8A14AB
        gpg: Good signature from "Lorin Hochstein <lorin@...>"
        Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1.dsc.
        Uploading to ppa (via ftp to ppa.launchpad.net):
          Uploading nova_2011.2~bzr663-1isi1.dsc: done.
          Uploading nova_2011.2~bzr663-1isi1.tar.gz: done.
          Uploading nova_2011.2~bzr663-1isi1_source.changes: done.

    However, the packages aren't listed on my PPA page. If I try to upload again, I get the error:

        $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes
        Package has already been uploaded to ppa on ppa.launchpad.net
        Nothing more to do for nova_2011.2~bzr663-1isi1_source.changes

    Am I supposed to do something next? How do I track down what went wrong? As of this writing, it's been a day and a half since I did the upload.

    Read the article

  • Canon Pixma 432 network scanner

    - by Donald Cutler
    I have a problem with a Canon Pixma MX432 printer/scanner. I just removed Windows 7 and installed Ubuntu 14.04 (8/17/2014) on an older desktop that I built (an AMD build with an ASUS motherboard). The printer/scanner is an all-in-one unit that is networked in my house via WiFi. All of the computers in my house can access it: my MacBook, my wife's Windows 8 laptop, and my kids' iPad minis. I am giving Linux a test drive with some success as far as setting devices up, but for the life of me I cannot figure out my scanner issue. If anyone can help I would appreciate it.

        Make/Model: Canon Pixma MX432
        PPD driver: I have no idea how to get this info.
        Supported?: no, from the information I gathered from old forum posts.
        Works?: the printer works via WiFi perfectly, but not the scanner.
        Linux version: Ubuntu 14.04

    The Simple Scan program sees the scanner but produces an error when I attempt to scan. I also tried XSane, but that program does not even detect the scanner. Note: the printer is working off of an Ubuntu driver, not a Canon driver. I tried the steps in this post and downloaded the "scangearmp-mx430series-1.90-1-deb.tar.gz" file, but could not get the scanner to work: http://ubuntuforums.org/archive/index.php/t-2096430.html Any suggestions?
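
    A hedged sketch of the usual install-and-run sequence for that Canon ScanGear MP tarball (the install.sh name is what Canon's Linux driver tarballs conventionally ship, so treat it as an assumption and check the extracted contents):

        tar xzf scangearmp-mx430series-1.90-1-deb.tar.gz
        cd scangearmp-mx430series-1.90-1-deb
        sudo ./install.sh        # Canon's bundled installer script
        scangearmp               # then select the networked scanner in the GUI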

    Read the article

  • Lexmark X5650 doesn't print (scanning works)

    - by unor
    I want to use a Lexmark X5650 under Ubuntu. When I connect it via USB, Ubuntu recognizes the device, but can't find a driver. On the Lexmark support site, you can select "Unix/Linux" and then "Ubuntu 9.04", "9.10", "10.04" and "10.10". However, all versions lead to the same driver: lexmark-08z-series-driver-1.0-1.i386.deb.sh.tar.gz (last updated 2012-04-13). While installing this driver, the installer asks me to connect the device. As soon as I connect it via USB, Ubuntu adds the printer automatically, so after the installation completed, I had two printers set up. But I can't print anything (I tried it with both printer configurations). Scanning works fine, though, so I guess the driver installation and the device connection are successful. When I try to print something, the print job is listed in the printing queue, but nothing happens. After some time, I get an error message which starts a debug wizard, but in the end it reads something like "Sorry, couldn't find the reason". I had several different error messages (unfortunately, I didn't record them); one was approximately like "printer cannot communicate with computer". Another said that my color ink was low (which is true, but black is full). Another said there was an input/output error. I tried it with Ubuntu 11.10. Because I read that some people had success with 10.04, I installed it in QEMU and installed the driver there, too. Same problem, though. I'd like to upgrade to 12.04 if there is any reason to think it will work there, but I read that for some people this printer stopped working in 12.04 :/ So, it would be best if the printer worked in 12.04. If that's not possible, I'd be fine with starting a QEMU virtual machine with any GNU/Linux distribution that works with my printer.

    Read the article

  • How to find the location of an installed library

    - by Raven
    Background: I'm trying to build my program, but first I need to set up the libraries in NetBeans. My project uses GLU, and therefore I installed libglu-dev. I didn't note the location where the libraries were installed, and now I can't find them. I switched to Linux just a few days ago and so far I'm very content with it, but I couldn't google this one out and I'm becoming frustrated. Is there a way to find out where the files of a package were installed without running the installation again? I mean, if I have library xxx and installed it some time ago, is there some command that will print this info? I've already tried the locate, find and whereis commands, but either I'm missing something or I just can't do it correctly. For libglu, locate returns:

        /usr/share/bug/libglu1-mesa
        /usr/share/bug/libglu1-mesa/control
        /usr/share/bug/libglu1-mesa/script
        /usr/share/doc/libglu1-mesa
        /usr/share/doc/libglu1-mesa/changelog.Debian.gz
        /usr/share/doc/libglu1-mesa/copyright
        /usr/share/lintian/overrides/libglu1-mesa
        /var/lib/dpkg/info/libglu1-mesa:i386.list
        /var/lib/dpkg/info/libglu1-mesa:i386.md5sums
        /var/lib/dpkg/info/libglu1-mesa:i386.postinst
        /var/lib/dpkg/info/libglu1-mesa:i386.postrm
        /var/lib/dpkg/info/libglu1-mesa:i386.shlibs

    The other two commands fail to find anything. Now, locate did its job, but I'm sure none of those paths is where the library actually resides (at least, everything I have linked so far was in /usr/lib or /usr/local/lib). libglu was introduced just as an example; I'm looking for a general solution to this problem.
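
    dpkg answers this directly for anything installed from a package; a minimal sketch (the package names below are taken from the locate output above):

        dpkg -L libglu1-mesa       # lists every file the package installed
        dpkg -L libglu1-mesa-dev   # the -dev package carries the headers and .so symlink needed for linking
        dpkg -S libGLU.so          # reverse lookup: which package owns a file matching this name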

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error:

        W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz
        407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )

    I am running Ubuntu 10.10, installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/; there were no authentication data. I checked this with Firefox: the browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here. Unfortunately it does not work. Could it somehow be related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?
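
    apt has its own proxy setting, which, unlike /etc/environment here, can carry credentials; a minimal sketch (user, password and port are placeholders; percent-encode any special characters in the password, and '-' in a hostname is perfectly legal):

        # /etc/apt/apt.conf.d/95proxy
        Acquire::http::Proxy "http://user:password@http-proxy:8080/";

    If the ISA server insists on NTLM authentication, which apt does not speak, the usual workaround is to run a local cntlm proxy and point apt at that instead.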

    Read the article

  • Packages not showing up in created APT repository

    - by David
    I created an APT repository using dpkg-scanpackages, and it seemed to go well. When I did an apt-get update on another server, the Packages.gz file was retrieved and all seemed well, until I went to search for the packages contained in that repository (all packages are created locally). Several recommendations suggested reprepro; I tried that. Same result, except I had to rebuild the packages with the Priority and Section lines in the control file (nothing says this anywhere). The reprepro utility also generates a complicated directory structure, which required rewriting the repository entry on the requesting server. I then found that the arch directory referenced i386 and not amd64 (which was requested by the requesting server). Is it possible that the AMD64 system isn't seeing packages compiled for i386? Searching the *Packages files in /var/lib/apt/lists shows that the only packages for i386 are those I added (the other files are for the server, Ubuntu 10.04.2 LTS). The server the packages were built on is Ubuntu 10.04.2 LTS i686; the requesting server is x86_64. I found some discussion at the Debian AMD64FAQ but it claims to be obsolete. It mentions an extended syntax for repository listings for APT, and a command dpkg-subarchitecture, neither of which works on the local AMD64 server. Do I have to build two different sets of packages?
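
    Yes, an amd64 client only fetches the amd64 index, so i386-only packages are invisible to it. For locally built scripts and configs, declaring Architecture: all in the control file lets one .deb serve both machines; otherwise each architecture needs its own index. A hedged sketch with dpkg-scanpackages (the directory layout is an assumption and must match what the client's sources.list entry requests):

        dpkg-scanpackages -a i386  . /dev/null | gzip -9c > dists/stable/main/binary-i386/Packages.gz
        dpkg-scanpackages -a amd64 . /dev/null | gzip -9c > dists/stable/main/binary-amd64/Packages.gz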

    Read the article

  • Hybrid USB Install Method - netboot and iso

    - by Samus Arin
    I was following the steps here ("Preparing Files for USB Memory Stick Booting") to create an installation USB drive for 12.1: https://help.ubuntu.com/10.04/installation-guide/i386/boot-usb-files.html The very first paragraph of the article states: "The second is to also copy a CD image onto the USB stick and use that as a source for packages, possibly in combination with a mirror." However, the only instruction mentioned regarding an ISO image is to simply copy one somewhere onto the drive (after it has been made bootable and syslinux, vmlinuz and initrd.gz have been installed/copied): "you should now copy an Ubuntu ISO image onto the stick." I thought it strange that there were no configuration steps for "pointing" the kernel to the ISO (like a line in syslinux.cfg, or a boot: option, or something), but went ahead with the install anyway. I don't think the ISO was used at all; it appeared that all the OS files were downloaded during the install process. Therefore, I was wondering if anyone knows how to use this local ISO image with this particular installation technique (I know the image can be written with dd, but that's a different technique), because I need to reinstall. (I installed Unity, but it's way too much for my little Atom-based netbook.) Thank you.
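
    For what it's worth, the hd-media initrd includes the installer's iso-scan component, which searches the stick's filesystems for a valid installer ISO at boot, so nothing needs to point at the image explicitly; the syslinux side only has to start the kernel. A minimal, untested syslinux.cfg sketch along those lines:

        default vmlinuz
        append initrd=initrd.gz

    If the ISO is never found, the usual suspects are a netboot (rather than hd-media) vmlinuz/initrd.gz pair, or an ISO release that doesn't match the installer's version.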

    Read the article

  • Is there a way to disable the hardware on/off switch for the Wireless interface?

    - by avee
    I have an HP 520 running the latest Ubuntu 11.10. The hardware works fine with Ubuntu, with one exception: the device has a hardware switch for turning the WiFi on and off. Every time the WiFi is disabled through the hardware switch, I am unable to bring it on again; the message in the networking popup is "device not ready". What I am looking for is a way to disable the hardware switch altogether, so that when users accidentally press the button, the WiFi is not disabled. There is no setting to disable the switch in the BIOS. Hardware info from lspci -nn:

        00:00.0 Host bridge [0600]: Intel Corporation Mobile 945GME Express Memory Controller Hub [8086:27ac] (rev 03)
        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 945GME Express Integrated Graphics Controller [8086:27ae] (rev 03)
        00:02.1 Display controller [0380]: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller [8086:27a6] (rev 03)
        00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 01)
        00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 01)
        00:1c.1 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 2 [8086:27d2] (rev 01)
        00:1d.0 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 01)
        00:1d.7 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 01)
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e1)
        00:1f.0 ISA bridge [0601]: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge [8086:27b9] (rev 01)
        00:1f.2 IDE interface [0101]: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller [8086:27c4] (rev 01)
        02:06.0 CardBus bridge [0607]: ENE Technology Inc CB1410 Cardbus Controller [1524:1410] (rev 01)
        02:08.0 Ethernet controller [0200]: Intel Corporation 82562ET/EZ/GT/GZ - PRO/100 VE (LOM) Ethernet Controller Mobile [8086:1068] (rev 01)
        10:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4222] (rev 02)

    The output from lsmod | grep iwl:

        iwl3945     73329  0
        iwl_legacy  71499  1 iwl3945
        mac80211   272785  2 iwl3945,iwl_legacy
        cfg80211   172392  3 iwl3945,iwl_legacy,mac80211
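
    For the record, rfkill distinguishes the "soft" block that software sets from the "hard" block a physical switch raises, and a hard block cannot be cleared from software; the most the command line can do is confirm which one is active:

        rfkill list                # per-radio Soft blocked / Hard blocked state
        sudo rfkill unblock all    # clears soft blocks only; a hard block needs the switch itself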

    Read the article

  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04:

        sudo lxc-execute -n test /bin/bash -l debug -o outout

    Got error message:

        lxc-execute: failed to spawn 'test'

    cat outout:

        lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
        lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
        lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
        lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Ran command sudo lxc-checkconfig:

        Kernel config /proc/config.gz not found, looking in other places...
        Found kernel config file /boot/config-2.6.38.7-1.0emulab
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: enabled
        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot.
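
    One thing that stands out in the outout log: the lxc_cgroup lines walk every mounted filesystem and never find a cgroup mount, and LXC cannot spawn a container without one. A hedged check and fix (the mount point and controller set may need adjusting on a custom 2.6.38 kernel):

        grep cgroup /proc/mounts || {
            sudo mkdir -p /sys/fs/cgroup            # create a mount point if absent
            sudo mount -t cgroup cgroup /sys/fs/cgroup
        }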

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure-HTML website which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website and in general, so far they appeared to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase when doing a Google search. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, like three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt, and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are: 1. Should I simply delete this robots.txt? 2. Was the file there all along, placed there because of some commercial tie-in between Attracta and Hostmonster? 3. Does the Attracta robots file explain the lack of re-caching?
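
    On question 1: if the file does get deleted, replacing it with a deliberately minimal allow-everything robots.txt is a reasonable move, since it documents intent and makes any future tampering easy to spot. The canonical permissive form:

        User-agent: *
        Disallow: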

    Read the article

  • Create a .deb package from scripts or binaries

    - by tdeutsch
    I searched for a simple way to create .deb packages for things which have no source code to compile (configs, shell scripts, proprietary software). This was quite a problem, because most of the packaging tutorials assume you have a source tarball you want to compile. Then I found this short tutorial (German). Afterwards, I created a small script to create a simple repository, like this:

        rm /export/my-repository/repository/*
        cd /home/tdeutsch/deb-pkg
        for i in $(ls | grep my); do dpkg -b ./$i /export/my-repository/repository/$i.deb; done
        cd /export/avanon-repository/repository
        gpg --armor --export "My Package Signing Key" > PublicKey
        apt-ftparchive packages ./ | gzip > Packages.gz
        apt-ftparchive packages ./ > Packages
        apt-ftparchive release ./ > /tmp/Release.tmp; mv /tmp/Release.tmp Release
        gpg --output Release.gpg -ba Release

    I added the key to the apt keyring and included the source like this:

        deb http://my.default.com/my-repository/ ./

    It looks like the repo itself is working well (I ran into some problems; to fix them I needed to add the Packages file twice and make the temp-file workaround for the Release file). I also put some downloaded .deb files into the repo, and they appear to work without problems. But my self-created packages don't... When I do sudo apt-get update, they cause errors like this:

        E: Problem parsing dependency Depends
        E: Error occurred while processing my-printerconf (NewVersion2)
        E: Problem with MergeList /var/lib/apt/lists/my.default.com_my-repository_._Packages
        E: The package lists or status file could not be parsed or opened.

    Has anyone an idea what I did wrong?
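
    "Problem parsing dependency Depends" usually points at a malformed DEBIAN/control file in the self-built packages rather than at the repository; a Depends: field that is present but empty is a classic trigger. A minimal control file that parses cleanly, with placeholder values:

        Package: my-printerconf
        Version: 0.2-1
        Architecture: all
        Maintainer: Your Name <you@example.com>
        Depends: bash (>= 4.0)
        Description: example printer configuration package
         Continuation lines of the long description are indented by one space.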

    Read the article

  • How to create a deb package that installs a series of files

    - by fossfreedom
    I would like to create a brand-new .deb package to install a series of files. If at all possible, I would like to untar the folder containing these files, as part of the installation, into a known folder location. Failing that, some knowledge of how to package the source folders and files would be very useful. The question is: is this possible, and if so, how? Let's give an example: ~/mypluginfolder/ contains the files x and y, a subfolder called abc, and inside that another file called z. I want to tar this folder:

        tar -cvf myfiles.tar ~/mypluginfolder

    I presume my Debian package would look like:

        myfiles.tar.gz
        myfiles+ppafoss_0.1-1/
            myfiles.tar
            DEBIAN
                changelog, compat, control, install, rules
            source

    Is it possible to somehow untar myfiles.tar to a known folder location, for example /usr/share/rhythmbox/plugins/? Thus the final result would be:

        /usr/share/rhythmbox/plugins/mypluginfolder
        /usr/share/rhythmbox/plugins/mypluginfolder/x
        /usr/share/rhythmbox/plugins/mypluginfolder/y
        /usr/share/rhythmbox/plugins/mypluginfolder/abc/z

    Presuming Launchpad needs source, advice is also sought as to where I should drop the source folders and files in the deb package structure. This will eventually become a series of individual Launchpad PPA packages. What I would prefer (but may not be able to achieve...) is to keep my packaging to a minimum: create a series of packages from a template and adjust the bare minimum (changelog etc. plus the tar file and folder structure).
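
    For a binary package built with dpkg-deb, no tar step is needed at all: whatever sits in the staging tree at an absolute path is installed to that same path on the target. A minimal sketch using the paths from the example above:

        mkdir -p mypkg/usr/share/rhythmbox/plugins/mypluginfolder
        cp -r ~/mypluginfolder/. mypkg/usr/share/rhythmbox/plugins/mypluginfolder/
        mkdir -p mypkg/DEBIAN            # holds the control file
        dpkg-deb -b mypkg myfiles.deb    # everything under mypkg/ lands at the same path on install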

    Read the article

  • Freescale One Box Unboxing (then installing Java SE Embedded technology)

    - by hinkmond
    So, I get a FedEx delivery the other day... "What cool device could be inside this FedEx Overnight Express Large Box?" I was wondering... Could it be a new Linux/ARM target device board, faster than a Raspberry Pi and better than a BeagleBone Black??? Why, yes! Yes, it was a Linux/ARM target device board, faster than anything around! It was a Freescale i.MX6 Sabre Smart Device Board (SDB)! Cool... Quad Core ARM Cortex A9 1GHz with 1GB of RAM. So, cool... I installed the Freescale One Box OpenWRT Linux image onto its SD card and booted it up into Linux. But, wait! One thing was missing... What was it? What could be missing? Why, it had no Java SE Embedded installed on it yet, of course! So, I went to the JDK 7u45 download link. Clicked on "Accept License Agreement", and clicked on "jdk-7u45-linux-arm-vfp-sflt.tar.gz", installed the bad boy, and all was good. Java SE Embedded 7u45 on a Freescale One Box. Nice... Hinkmond
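
    For anyone repeating the install on their own Linux/ARM board, it is the usual untar-and-set-PATH dance; a sketch assuming the tarball unpacks to a jdk1.7.0_45 directory (verify the name after extraction):

        tar xzf jdk-7u45-linux-arm-vfp-sflt.tar.gz -C /opt
        export PATH=/opt/jdk1.7.0_45/bin:$PATH
        java -version     # should report Java SE Embedded 7u45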

    Read the article

  • Ubuntu One, compressed files

    - by user8179
    I have uploaded some files to my Ubuntu One account, and it seems to work great most of the time. I usually upload them directly from Nautilus by right-clicking the folder, using the "Synchronize this folder" option, and then I make sure that the file I want to upload is published. Then I usually test the whole thing by trying to download it: I right-click the file again to get its URL and paste it into my web browser. This usually works fine. But yesterday I uploaded two compressed files (.tar.bz2). When I tried to open them after downloading them with my web browser (Opera), it failed. I found that the file was bigger than the original (2358 B instead of 2335 B: 15 B added at the beginning of the file and 8 B added to the end), and someone in the Opera channel (IRC) at OperaNet (Europe) figured out that the reason is that the server compresses the file again, "without telling Opera". So to be able to extract the file I need to add ".gz" to the file name and then extract it twice. If I download it with Firefox, however, I don't need to do that, so maybe Firefox figures this out somehow in a way that Opera does not. Someone also tried to download the file with wget and some other browser, and he got the same result as I did with Opera; that is, the file is compressed a second time by the server. I guess "the server" is the Ubuntu One server, right? So why is this? Could it be done better somehow? Or did I do something wrong when uploading the files? It also seems like this extra compression does not always happen, because when I tried again a few minutes ago, the file came down with its right size (2335 B), without the extra compression. But the other file (114 MiB) was still compressed twice.
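
    The 15 bytes added at the front and 8 at the end are consistent with a gzip header and trailer, i.e. the server delivering the file with HTTP Content-Encoding: gzip; browsers that honor the header (apparently Firefox here) decompress transparently, while clients that ignore it save the raw stream. A way to confirm and undo it by hand:

        file mydata.tar.bz2                 # reports "gzip compressed data" if a gzip layer was added
        mv mydata.tar.bz2 mydata.tar.bz2.gz
        gunzip mydata.tar.bz2.gz            # strips the transport gzip layer
        tar xjf mydata.tar.bz2              # the original bzip2 archive extracts normally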

    Read the article
