Search Results

Search found 17566 results on 703 pages for 'package manager'.


  • Missing libcurl.so.3 on updating to PHP 5.2.13

    - by exentric
    Hi, I am trying to update my PHP to 5.2.13, but when I run yum update it gives me this dependency error:

        php-5.2.13-jason.1.i386 from utterramblings has depsolving problems
          --> Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings)
        Error: Missing Dependency: libcurl.so.3 is needed by package php-cli-5.2.13-jason.1.i386 (utterramblings)
        Error: Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings)

    I believe this problem was caused by my updating libcurl some time ago (to version 7.16.4-8.el5), but I have no idea how to solve this dependency issue. Some time ago a friend asked me about a missing libcurl.so.3 when running some script. I can't remember the details, but he did say he managed to solve it (at least on his end), so I paid no more attention to the libcurl.so.3 issue. But now when I try to update my PHP, the problem arises again. This file does indeed exist (and is presumably what solved my friend's issue):

        /usr/lib/libcurl.so.3

    Any thoughts on this matter? I'm using CentOS 5.3, PHP 5.2.11 and LightTPD. -Regards
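
    A hedged line of attack (package versions below are assumptions): yum is complaining about the RPM-level "libcurl.so.3" provide, which disappeared when curl was updated, even though the file itself is still on disk. Checking what provides it, and rolling curl back to the stock CentOS 5 package, is one way out:

        rpm -q --whatprovides 'libcurl.so.3'
        yum provides 'libcurl.so.3'
        # If only the newer curl is installed, rolling back to the base package
        # restores the libcurl.so.3 provide (exact version is an assumption):
        rpm -Uvh --oldpackage curl-7.15.5-*.el5.i386.rpm curl-devel-7.15.5-*.el5.i386.rpm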

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and the various packages required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory, so that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0...
        checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib packages into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
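
    A hedged note: glib-config shipped only with glib 1.x; glib 2.x replaced it with pkg-config metadata, which is why no Karmic package contains the script. The configure output above is from an mc release that predates glib 2, so two routes exist: build a newer mc (which looks for glib 2 via pkg-config), or build glib 1.2 from source into your prefix. A minimal sketch of the first route, assuming the unpacked tree lives in $HOME/mc-build:

        export PKG_CONFIG_PATH="$HOME/mc-build/usr/lib/pkgconfig"
        export CPPFLAGS="-I$HOME/mc-build/usr/include"
        export LDFLAGS="-L$HOME/mc-build/usr/lib"
        ./configure --prefix="$HOME/mc-build/usr"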

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive, attached to my computer via a USB adapter, using DriveImage XML. I'm running Win7 Ultimate 64 bit. DriveImage XML is returning:

        Could not initialize Windows Volume Shadow Service (VSS).
        ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start.
        ERROR TIMEOUT
        Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage, this turned up:

        Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled:
        MS Software Shadow Copy Provider
        Volume Shadow Copy
        Also make sure you are able to stop and start these services.
        Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image.
        You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register "oleaut32.dll" returns successful, but "oleaut.dll" returns:

        The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
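
    A hedged first check, using only standard Windows tooling (nothing specific to DriveImage XML): ask VSS itself whether its providers and writers are healthy before blaming the imaging tool:

        rem Run from an elevated command prompt
        vssadmin list providers
        vssadmin list writers
        rem Any writer stuck in a "Failed" state points at VSS itself rather than DriveImage XML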

  • Squid proxy - open FTP (and other ports)

    - by gaffcz
    How can I open ports other than HTTP and HTTPS using a Squid proxy? I have the latest version of Squid running on Fedora 10, but I'm not able to open the FTP port. Part of my squid.conf:

        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl ftp proto FTP
        acl ftp_port port 21
        always_direct allow FTP
        acl SSL_ports port 443 20 21 22
        acl Safe_ports port 20          # ftp
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 22          # sftp
        acl Safe_ports port 80          # http
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 443         # https
        acl Safe_ports port 1025-65535  # unregistered ports
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        # USER privileges (encoded in file passwd)
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        acl AUTHUSERS proxy_auth REQUIRED
        # BLACKLIST (in file denied.conf)
        acl denied_domains dstdomain "/etc/squid/DNDdomains.conf"
        acl denied_regex url_regex "/etc/squid/DNDregex.conf"
        http_access deny denied_regex
        http_access deny denied_domains
        http_access allow AUTHUSERS
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow ftp_port CONNECT
        http_access allow ftp
        http_access allow localhost
        http_access deny all
        #http_reply_access allow all
        #http_access allow all
        http_port 3128
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 10000 16 256
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern . 0 20% 4320

    I've tried to add "acl ftp proto FTP" / "acl ftp_port port 21" / "http_access allow ftp", to add/remove ports 20 and 21 from the SSL_ports list, and to set iptables, but nothing helped. Is it even possible to use a recent version of Squid for FTP transfers?
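
    A hedged note on what Squid can and cannot do here: Squid is an HTTP proxy, so it will fetch ftp:// URLs requested by a browser over HTTP, but it cannot proxy a native FTP client on port 21 (CONNECT tunnels are intended for TLS). A minimal sketch of the browser-style case, assuming the proxy listens at 192.168.1.1:3128:

        # Squid fetches the FTP listing itself when asked over HTTP:
        curl -x http://192.168.1.1:3128 ftp://ftp.kernel.org/pub/
        # Native FTP clients need an FTP-aware proxy (e.g. frox) or a NAT rule instead of Squid.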

  • How can I copy a VMware Fusion virtual machine to a FAT32 partition?

    - by Michael Prescott
    I created the virtual machine on a host running OS X. I then moved the machine to a FAT32 partition on an external drive. It moved the first time without error. Then I moved it from the external drive to a host running Ubuntu 9.10. I had to move it to a FAT32 partition first because Ubuntu doesn't recognize Mac OS Extended partitions on the drive. The virtual machine (VM) ran on the Ubuntu host for a while, then I moved it back to the FAT32 partition and from there back to the OS X host. I worked on the VM for a while on the OS X host and then attempted to move it back to the FAT32 partition. I get the following system error:

        The Finder can’t complete the operation because some data in “my-virtual-machine” can’t be read or written. (Error code -36)

    Interestingly, I can move the file to another OS X partition, just not FAT32. I also perused VMware's forums and found advice to set permissions on all files and folders to 777. I did this, but have had no success. I notice that the files within the VM package are 777 now, but there is an extended-attributes symbol in their permission details: "rwxrwxrwx@". Since I can copy the VM between OS X partitions but not to non-OS X partitions, and all files and folders within the VM package (and the package itself) have permissions of 777, I speculate that the "@" is the problem. How can I remove the "@", or is there something else I need to modify to allow me to copy/move the VM to other hosts?
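
    Two hedged checks, sketched for Terminal on the OS X side (the bundle path and name are assumptions): extended attributes can be stripped, and FAT32 refuses any single file over 4 GiB, which also surfaces as error -36 once a virtual disk grows past that limit:

        cd ~/Documents/Virtual\ Machines.localized
        # Strip extended attributes (the "@" in ls -l) from the whole bundle
        xattr -rc my-virtual-machine.vmwarevm
        # (on older OS X releases without -r: find my-virtual-machine.vmwarevm -exec xattr -c {} \;)
        # Look for files too large for FAT32 (4 GiB limit):
        find my-virtual-machine.vmwarevm -size +4194304k -exec ls -lh {} \;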

  • Fix Linux-made png file for use on Windows

    - by BGM
    There is a particular icon library that I really like. I have downloaded the package that has the png files inside (I know the ico files are there too, but I want the png files). However, my Windows 7 computer tells me that about 1/3 of the png files are corrupt. I usually use XnView to view the files, and it won't display the "corrupt" files. I've tried other editors and viewers and I get the same issue. The png package was originally designed for Linux, as an OS icon package for the entire system, so I figure the png files were built on Linux. So, is there a way I can "fix" the "corrupted" png files for my Windows 7 computer? Maybe when the files were created there was some bit that was off-colour or something? Any clues?

    [edit] I have read in this thread that the "corruption" could happen during the extraction process. I did all the extraction with 7-zip; it was a zip containing a tar. I will try another extractor, but I don't think it will make any difference.
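
    A hedged guess worth testing: Linux icon themes are full of symbolic links (several sizes of the same icon point at one real file), and when a tar full of symlinks is unpacked on Windows, each link becomes a tiny text file holding only the target path - which viewers then report as a corrupt PNG. Opening one "corrupt" file in a text editor is the quick test. If that is the cause, materializing the links on a Linux box (or live CD) before re-zipping fixes it; a minimal sketch (the archive name is an assumption):

        tar xf icon-theme.tar
        cp -rL icon-theme icon-theme-flat   # -L follows symlinks, copying real file contents
        zip -r icon-theme-flat.zip icon-theme-flat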

  • Windows Feature List blank, Updates fail, system readiness tool says files are corrupt

    - by Chris T
    I tried to install IIS and to my surprise the feature/components list was blank. I tried the System Update Readiness Tool and it creates the following log:

        =================================
        Checking System Update Readiness.
        Binary Version 6.1.7100.4104
        Package Version 5.0
        2009-09-30 23:38

        Checking Deployment Packages

        Checking Package Manifests and catalogs.
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.cat
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat

        Checking package watchlist.
        Checking component watchlist.
        Checking packages.

        Checking component store
        (f) CSI Catalog Corrupt 0x800B0003 winsxs\Catalogs\efdfd17ac9909b9d81e1455d9abf291319237877c23df8a67a3f5a1f2f9e034f.cat 5fbf0b9691b..6772f1b0a58_31bf3856ad364e35_6.1.7100.4127_c2160c1f90006ee6
        (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294.manifest amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294
        (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f.manifest wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f

        Summary:
        Seconds executed: 240
        Found 9 errors
        CSI Manifest All Zeros Total Count: 2
        CSI Catalog Corrupt Total Count: 1
        CBS MUM Corrupt Total Count: 3
        CBS Catalog Corrupt Total Count: 3

    How can I fix this?
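
    A hedged sketch of the usual CheckSUR repair path (file names come from the log above; the replacement directory is the one the tool's KB article names, and where you source the good copies is an assumption - re-download KB973540 or copy from a healthy machine at the same build): drop known-good .mum/.cat files where CheckSUR looks for replacements, re-run the tool, then let SFC verify:

        rem Elevated command prompt
        mkdir %windir%\Temp\CheckSUR\servicing\packages
        copy /y Package_*_for_KB973540*.mum %windir%\Temp\CheckSUR\servicing\packages\
        copy /y Package_*_for_KB973540*.cat %windir%\Temp\CheckSUR\servicing\packages\
        rem Re-run the System Update Readiness Tool, then:
        sfc /scannow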

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected; however, iwlist scan returns

        wlan0 Failed to read scan data : Network is down

    and the network-manager log says

        <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k')
        <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
        <info> (wlan0): now managed
        <info> (wlan0): device state change: 1 -> 2 (reason 2)
        <info> (wlan0): bringing up device.
        <info> (wlan0): preparing device.
        <info> (wlan0): deactivating device (reason: 2).
        supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
        <info> modem-manager is now available
        <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
        <info> Trying to start the supplicant...
        <info> (wlan0): supplicant manager state: down -> idle
        <info> (wlan0): device state change: 2 -> 3 (reason 0)
        <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto APIs needed for wi-fi to work?
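
    A hedged checklist (the option names are standard kernel config symbols): wpa_supplicant needs more than the driver and crypto - notably AF_PACKET sockets (CONFIG_PACKET) and, depending on the driver, wireless extensions or cfg80211/nl80211. "couldn't grab this interface" is the classic symptom of CONFIG_PACKET being off. Diffing against the working generic kernel's config is the fastest check:

        # On the custom kernel (if /proc/config.gz is enabled):
        zgrep -E 'CONFIG_PACKET=|CONFIG_WIRELESS_EXT|CONFIG_CFG80211|CONFIG_LIB80211' /proc/config.gz
        # Against the generic kernel that works:
        grep -E 'CONFIG_PACKET=|CONFIG_WIRELESS_EXT|CONFIG_CFG80211|CONFIG_LIB80211' /boot/config-2.6.31-*-generic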

  • Enabling JMX for proxool with tomcat

    - by dialt0ne
    I am trying to get proxool's MBeans available so that I can see/manipulate them with jconsole. I have jconsole working, but I don't see anything related to proxool. The system is using Sun Java 1.5.0_17 (I know, I know... I'm working with the developers to upgrade). JMX is enabled by modifying $JAVA_OPTS in my tomcat 5.5 startup script:

        SJO="$SJO -Dcom.sun.management.jmxremote"
        SJO="$SJO -Dcom.sun.management.jmxremote.port=4998"
        SJO="$SJO -Dcom.sun.management.jmxremote.authenticate=false"
        SJO="$SJO -Dcom.sun.management.jmxremote.ssl=false"
        JAVA_OPTS="$JAVA_OPTS $SJO"

    I have proxool configured with JNDI in server.xml:

        <GlobalNamingResources>
          <Resource name="jdbc/database"
                    auth="Container"
                    type="javax.sql.DataSource"
                    factory="org.logicalcobwebs.proxool.ProxoolDataSource"
                    user="username"
                    password="password"
                    proxool.driver-url="jdbc:oracle:thin:@fqdn.example.com:1521:MYSID"
                    proxool.driver-class="oracle.jdbc.driver.OracleDriver"
                    proxool.alias="mysid"
                    proxool.maximum-connection-count="20"
                    proxool.statistics="20s,5m,15m"
                    proxool.statistics-log-level="INFO"
                    proxool.jmx="true"
                    proxool.verbose="true" />
        </GlobalNamingResources>

    My test .jsp can run queries and I can see it using the connections with the proxool admin servlet, but I'm unsure if there's more I need to configure in tomcat or proxool to get JMX functioning. Advice?

    jmxproxy info edit: The jmxproxy servlet is working - when I go to the URL http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=*:type%3DRequestProcessor,* the results are:

        OK - Number of results: 2

        Name: Catalina:type=RequestProcessor,worker=http-8080,name=HttpRequest0
        modelerType: org.apache.coyote.RequestInfo
        bytesSent: 0
        requestBytesSent: 0
        contentLength: -1
        bytesReceived: 0
        requestProcessingTime: 1297983483666
        globalProcessor: org.apache.coyote.RequestGroupInfo@32dc51c8
        requestBytesReceived: 0
        serverPort: -1
        stage: 0
        requestCount: 0
        maxTime: 0
        processingTime: 0
        errorCount: 0

        Name: Catalina:type=RequestProcessor,worker=jk-127.0.0.1-8009,name=JkRequest794
        modelerType: org.apache.coyote.RequestInfo
        virtualHost: tomcatserver.example.com
        bytesSent: 0
        method: GET
        remoteAddr: 172.30.3.51
        requestBytesSent: 0
        contentLength: -1
        workerThreadName: TP-Processor15
        bytesReceived: 0
        requestProcessingTime: 9
        globalProcessor: org.apache.coyote.RequestGroupInfo@1e7d3b8e
        protocol: HTTP/1.1
        currentQueryString: qry=*%3Atype%3DRequestProcessor%2C*
        requestBytesReceived: 0
        serverPort: 4999
        stage: 3
        requestCount: 0
        maxTime: 0
        processingTime: 0
        currentUri: /manager/jmxproxy/
        errorCount: 0

    And more to the point, http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Catalina:type%3DEnvironment,resourcetype%3DGlobal,name%3DProxool yields:

        OK - Number of results: 0
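
    A hedged thing to try (the domain name is an assumption about how proxool registers its MBeans): if proxool registers under its own "Proxool" domain rather than under Catalina, a query scoped to Catalina will always return zero results. Listing the Proxool domain, or dumping every registered MBean, should confirm whether registration happened at all:

        curl 'http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Proxool:*'
        # or dump every MBean the server knows about and grep:
        curl 'http://tomcatserver.example.com:4999/manager/jmxproxy/' | grep -i proxool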

  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer which has no CDROM drive or internet access, and I have installed Ubuntu Server onto it. I have that all up and running nicely, but now I'd like to install Xfce, GNOME or something similar so I can load up a desktop environment from the command line if I wish. Obviously with internet access or a CDROM this would be a simple task of using apt-get and letting it find and retrieve the packages for me, but I have neither. I do however have a USB drive, and I have used Unetbootin to make it into a bootable drive with the Ubuntu Server disk image files on there. I have mounted the USB drive to /media/usb0 and tried

        sudo apt-cdrom add -d /media/usb0

    to get apt to recognise the USB drive as an "Ubuntu CD" - a source of package files - but apt-get doesn't seem to be finding Xfce. I try "sudo apt-get install xfce" and "sudo apt-get install xfce4", but neither finds the package. I would prefer Xfce, but GNOME would be OK too. My question is, am I doing something wrong? I figured that the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop environment packages on there, so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop" but it couldn't find the package - even though it is listed in a MANIFEST file under the /casper/ directory. Anyone see where I'm going wrong? Maybe apt-get install is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on!? Thanks in advance guys, any input would be much appreciated. Cheers - James
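
    A hedged explanation and workaround: the server ISO simply doesn't carry desktop packages, and the Xubuntu desktop ISO is a live image - its /casper directory holds a squashfs filesystem, not an apt repository - which is why apt-cdrom finds nothing to index. One sketch, assuming a second Ubuntu machine of the same release and architecture with internet access is available:

        # On the online machine (same release/architecture):
        sudo apt-get clean
        sudo apt-get install --download-only xubuntu-desktop
        # The fetched .debs land in /var/cache/apt/archives/ - copy them to the USB drive.
        # (Packages already installed on the helper machine won't be fetched;
        #  a clean install or chroot on the helper avoids that gap.)
        # On the offline server:
        sudo dpkg -i /media/usb0/archives/*.deb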

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects" - a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for rating and comments site-wide for this first phase, which has got me thinking: UGC is tied to Tridion published pages and components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, not sit in some custom db table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers, so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing Core Service to live servers" bit. Without this, though, I'd have to write a custom way to "transfer" it back over to the Content Manager - maybe like Audience Manager Sync works? Anyone done this before?

  • Timeout option not working on EFI Windows 7/Windows 8 dual-boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H motherboard and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to press RETURN manually to have the computer boot into the chosen OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no params) output. But it doesn't fire... Here is my current bcdedit output (headers are in German, but the values should be readable):

        Windows-Start-Manager
        ---------------------
        Bezeichner              {bootmgr}
        device                  partition=O:
        path                    \EFI\Microsoft\Boot\bootmgfw.efi
        description             Windows Boot Manager
        locale                  de-DE
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {default}
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        displayorder            {default}
                                {current}
                                {5ad2802a-c60a-11e2-acdb-80331c501b11}
                                {5ad28028-c60a-11e2-acdb-80331c501b11}
                                {5ad28029-c60a-11e2-acdb-80331c501b11}
        toolsdisplayorder       {memdiag}
        timeout                 5
        displaybootmenu         Yes

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {default}
        device                  partition=W:
        path                    \Windows\system32\winload.efi
        description             Windows 7
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad2802e-c60a-11e2-acdb-80331c501b11}
        recoveryenabled         Yes
        osdevice                partition=W:
        systemroot              \Windows
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {current}
        device                  partition=C:
        path                    \Windows\system32\winload.efi
        description             Windows 8
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad28033-c60a-11e2-acdb-80331c501b11}
        integrityservices       Enable
        recoveryenabled         Yes
        isolatedcontext         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {5ad28031-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    (This output is from Win8; the Win7 output looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?
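
    A hedged observation: the Windows 8 loader above has bootmenupolicy set to Standard, the new graphical menu drawn after a default-boot attempt, and mixing that with a Windows 7 entry is a known way for the timeout to appear dead. One sketch to try from an elevated prompt in Windows 8 (standard bcdedit switches): force the legacy text menu, then re-set the timeout:

        bcdedit /set {default} bootmenupolicy legacy
        bcdedit /set {current} bootmenupolicy legacy
        bcdedit /timeout 5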

  • Can't access folder on server - Permission denied

    - by Michal Korzeniowski
    I am running a VPS with Ubuntu 11.04. After a clean MODX install I tried to access http://www.encepence.pl/manager and got a permission denied error from my server. The thing is that I can easily access any other folder under that domain, and I can modify this folder's (manager's) content via FTP. I've tried modifying the virtual host with this:

        <Directory /var/www/blackflow/data/www/encepence.pl/manager/>
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    But it didn't work. The rest of the configuration:

        <Directory /var/www/blackflow/data/www/encepence.pl>
            Options -ExecCGI -Includes
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_flag engine on
        </Directory>

        <VirtualHost 192.166.219.34:80 >
            ServerName encepence.pl
            CustomLog /var/www/httpd-logs/encepence.pl.access.log combined
            DocumentRoot /var/www/blackflow/data/www/encepence.pl
            ErrorLog /var/www/httpd-logs/encepence.pl.error.log
            ServerAdmin [email protected]
            ServerAlias www.encepence.pl
            SuexecUserGroup blackflow blackflow
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/var/www/blackflow/data/mod-tmp"
            php_admin_value session.save_path "/var/www/blackflow/data/mod-tmp"
            VirtualDocumentRoot /var/www/blackflow/data/www/%0
        </VirtualHost>

    Any ideas on what might have gone wrong?
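
    A hedged first step (paths taken from the vhost above): the error log usually names the exact cause - a filesystem permission, an open_basedir denial, or a Deny rule - and with suexec in play, file ownership must match the configured SuexecUserGroup:

        tail -n 50 /var/www/httpd-logs/encepence.pl.error.log
        # If ownership turns out to be the problem:
        ls -la /var/www/blackflow/data/www/encepence.pl/manager
        chown -R blackflow:blackflow /var/www/blackflow/data/www/encepence.pl/manager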

  • How to set up a virtual machine in Ubuntu desktop to run a Debian server

    - by stickman
    I want to run a virtual machine in my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages. I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to:

        - build Deb packages for deployment on a Debian server
        - make sure that the applications will definitely work on a Debian server

    The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable) and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10).

    [Edit] Following the advice I got so far, I have installed debootstrap and Debian 'Lenny' in /srv/chroot/debian_lenny on my machine. I am not sure this is the server version, but in any case I don't think that matters for my purposes (though it would be useful to know how to specifically install the server version). At the moment, though, I am like a fish out of water, since there is no GUI and only a console in the chroot jail. I had a look in the home folder (I cheated, by using the KNavigator in Ubuntu), and there are no folders there - which presumably means that no users have been set up as yet in the Debian "system". I would like to know how to do the following (see the sketch below):

        - download and install the dev tools needed for (re)compiling my C++ apps
        - copy my projects from the Ubuntu "system" to the Debian "system"
        - after building the binaries, create a Debian binary package containing all of my binaries, so that I can install the package on a Debian server (my remote server)
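
    A minimal sketch of those three steps inside the chroot (the tool packages are the standard Debian build chain; project paths are assumptions):

        # Enter the chroot as root
        sudo chroot /srv/chroot/debian_lenny /bin/bash
        # 1. Dev tools for C++ builds and packaging
        apt-get update
        apt-get install build-essential devscripts debhelper fakeroot
        # 2. From the host (outside the chroot), copy the project in:
        #    sudo cp -r ~/projects/myapp /srv/chroot/debian_lenny/root/myapp
        # 3. Inside the chroot, with a debian/ packaging directory in place:
        cd /root/myapp && dpkg-buildpackage -us -uc -rfakeroot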

  • AJP connector Apache-Tomcat with PHP and Java applications

    - by Safari
    I have a question about the proxy and AJP modules. On my machine I have an Apache web server and a Tomcat servlet container; my Java web application runs on Tomcat. On Apache I have some services, which I can call this way:

        http://myhost/service1
        http://myhost/service2
        http://myhost/service3

    I want to configure an AJP connector to call my Tomcat web application from Apache, so that http://myhost reaches the Tomcat webapp. I configured Apache this way, and I get what I wanted - I can use http://myhost to view the Tomcat webapp through Apache:

        <VirtualHost *:80>
            ProxyRequests off
            ProxyPreserveHost On
            ServerAlias myserveralias
            ErrorLog logs/error.log
            CustomLog logs/access.log common
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /server-status !
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1
            <Proxy balancer://mycluster>
                BalancerMember ajp://myIp:8009 min=10 max=100 route=portale loadfactor=1
                ProxySet lbmethod=bytraffic
            </Proxy>
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from localhost
            </Location>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
        </VirtualHost>

    But now I can't use the Apache services: if I use http://myhost/service1, I get an error because Apache tries to find service1 on my Tomcat webapp. Is there a way to fix this?
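
    A hedged fix using the same exclusion mechanism the config already applies to /server-status: mod_proxy uses the first matching ProxyPass, so declaring each local service path as an exception before the catch-all / rule should keep those paths on Apache:

        ProxyPass /server-status !
        ProxyPass /balancer-manager !
        ProxyPass /service1 !
        ProxyPass /service2 !
        ProxyPass /service3 !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1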

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi. I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for extracting the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc). I then added the package to my deployment GPO, applied the relevant .mst file, and proceeded to deploy across the network. The software package is computer-assigned, to be installed prior to logon, to avoid user permission issues. The package deploys correctly to computers and will run perfectly fine from a shortcut, but when trying to view a PDF from within a web browser it fails with the following message:

        The Adobe Acrobat/Reader that is running can not be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again.

    I have found many pages on Google referring to this problem, but none appear to relate to the problems I have found:

        http://kb2.adobe.com/cps/405/kb405461.html

    These fixes recommend correcting a registry entry (which, I should mention, is missing after the deployed installation), however this does not work. Switching off display in a browser seems to defeat the object of fixing the problem. Removing old versions - there aren't any. Trying with a different user - this affects all users of all privilege levels on all computers. On my workstation I uninstalled Acrobat Reader 9.1 and then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version.

  • Squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        # Recommended minimum configuration:
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128
        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        # Recommended minimum Access Permission configuration:
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        http_access allow localnet
        http_access allow localhost
        # And finally deny all other access to this proxy
        http_access deny all
        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local
        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?
        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256
        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid
        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern . 0 20% 4320
        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
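
    A hedged footnote on the distinction (both directives are standard Squid configuration): always_direct only controls whether requests bypass cache peers; to stop Squid from caching a domain's responses at all, a cache deny rule is the usual tool:

        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains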

  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    Hi. My company has two Exchange 2010 SP1 servers in a DAG configuration, running Windows Server 2008 R2, in a testing environment. Today I wanted to test my backups, so I restored the backup data to a location other than the original one. I dismounted the database, deleted all files under the database location, and finally copied the files back from the backup location to the database location. When I try to mount the database, the error below comes up:

        Microsoft Exchange Error
        --------------------------------------------------------
        Failed to mount database 'mail2'.

        mail2
        Failed
        Error: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].

        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]

        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]

        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestion? Thanks!
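
    A hedged sequence to try (standard ESE/Exchange tooling; the paths and log prefix are assumptions): a database restored by plain file copy is usually in a Dirty Shutdown state and needs soft recovery against its transaction logs before it will mount:

        rem Check the state of the restored database
        eseutil /mh "D:\Databases\mail2\mail2.edb"
        rem If it reports "Dirty Shutdown", replay the logs (E00 = log prefix, an assumption):
        eseutil /r E00 /l "D:\Databases\mail2\Logs" /d "D:\Databases\mail2"
        rem Then, from the Exchange Management Shell:
        rem   Mount-Database -Identity mail2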

  • .htaccess: modify rules and redirect if there's .php in the URL

    - by Ron
    Hello everyone. I have the following code in my .htaccess:

        Options +FollowSymlinks
        RewriteBase /temp/test/
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule ^about/(.*)/$ $1.php [L]
        RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/downloadfile/$ file-download.php?product=$1&version=$2&os=$3&method=$4 [L]
        RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/$ download-donate.php?product=$1&version=$2&os=$3&method=$4 [L]
        RewriteRule ^(.*)/download/(.*)/$ download.php?product=$1&version=$2 [L]
        RewriteRule ^newsletter-confirm/(.*)/$ newsletter-confirm.php?email=$1 [L]
        RewriteRule ^newsletter-remove/(.*)/$ newsletter-remove.php?email=$1 [L]
        RewriteRule ^(.*)/screenshots/$ screenshots.php?product=$1 [L]
        RewriteRule ^(.*)/(.*)/$ products.php?product=$1&page=$2 [L]
        RewriteRule ^schedule-manager/$ products.php?product=schedule-manager&page=view [L]
        RewriteRule ^visual-command-line/$ products.php?product=visual-command-line&page=view [L]
        RewriteRule ^windows-hider/$ products.php?product=windows-hider&page=view [L]
        RewriteRule ^(.*)/$ $1.php [L]
        RewriteRule ^products/$ products.php [L]

    Everything works perfectly. I would like to know how I can trim this down. I am pretty sure I can remove at least 4-5 lines, but I don't know how (merge the schedule-manager, visual-command-line and windows-hider rules, and some more). I know that the order of the rules is important; this order works - although I have no idea why, I just played with the rules until it worked. If you think there'll be a bug with this order, please tell me where. Another thing - I would like to redirect, for example, www.myweb.com/products.php to www.myweb.com/products/ (I mean that the URL in the address bar will change). I don't know whether that redirect can coexist with my rewrite rules. Thank you.
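
    A hedged sketch of both ideas (untested against this exact rule set, so verify the ordering still holds): the three single-product rules collapse into one alternation, and an external redirect for direct .php requests can be keyed off %{THE_REQUEST} so it doesn't loop with the internal rewrites:

        # Merge the three single-product rules into one:
        RewriteRule ^(schedule-manager|visual-command-line|windows-hider)/$ products.php?product=$1&page=view [L]

        # Redirect /foo.php in the address bar to /foo/ (place before the internal rules):
        RewriteCond %{THE_REQUEST} \s/temp/test/([^.]+)\.php[\s?] [NC]
        RewriteRule ^ /temp/test/%1/ [R=301,L]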

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end

    However, when I run chef-client on the server installing the cookbook, I get a permission denied writing to the home directory of the user that runs chef-client, not the hubot user. For some reason, npm is trying to run under the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (installs hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

        Saving to: `hubot-2.1.0.tar.gz'
        0K ...... 100% 563K=0.01s
        2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]

        npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! Failed creating the tarball.
        npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
        ...
        npm not ok
        ---- End output of "bash" "/tmp/chef-script20120123-25024-u9nps2-0" ----
        Ran "bash" "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
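
    A hedged fix for the likely cause: Chef's user attribute switches the uid but does not rewrite the environment, so npm still sees the chef-client user's $HOME and writes its cache there. Setting the environment on the resource usually resolves the EACCES; a sketch, assuming hubot_user's home is /home/hubot:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          # HOME must point at the target user's home, or npm caches under the wrong account
          environment ({ 'HOME' => '/home/hubot', 'USER' => hubot_user })
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end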

  • CentOS 6.2 postfix install dependency issues

    - by Mishari
    I am administrating a VPS running cPanel and I'm trying to install postfix. /etc/redhat-release says the version is CentOS release 6.2 (Final), and uname -a says:

        Linux server.mydomain.com 2.6.32-220.el6.i686 #1 SMP Tue Dec 6 16:15:40 GMT 2011 i686 i686 i386 GNU/Linux

    This is how I'm installing postfix (I had tried to solve the problem earlier by installing epel):

        # yum install postfix
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
         * epel: mirror.cogentco.com
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package postfix.i686 2:2.6.6-2.2.el6_1 will be installed
        --> Processing Dependency: mysql-libs for package: 2:postfix-2.6.6-2.2.el6_1.i686
        --> Finished Dependency Resolution
        Error: Package: 2:postfix-2.6.6-2.2.el6_1.i686 (centos-burstnet)
               Requires: mysql-libs
        You could try using --skip-broken to work around the problem

    Attempts to install mysql-libs tell me several files conflict with "MySQL-server-5.1.61-0.glibc23.i386". I'm not sure why or how this is happening; does anyone know how to resolve it? Surely CentOS 6.2 could not have shipped with a broken postfix.
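
    A hedged explanation and workaround (the package names are real, but whether they fit this box is an assumption): cPanel installs MySQL from upstream "MySQL-*" RPMs, which conflict file-for-file with CentOS's mysql-libs. The upstream compatibility package sometimes bridges the gap; failing that, since the client libraries are actually present, installing postfix without dependency checks is the blunt last resort:

        # See what currently claims to provide the client libraries
        rpm -qa | grep -i mysql
        yum provides mysql-libs
        # The upstream compat package supplies the shared libraries without the conflicts:
        yum install MySQL-shared-compat
        yum install postfix
        # Last resort, only if the dependency is purely nominal:
        # rpm -ivh --nodeps postfix-2.6.6-*.rpm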

  • What are these weird IP address connections in Resource Monitor?

    - by bill
    I decided to check out Resource Monitor (on the 'Performance' tab in Task Manager, Windows 7) and I noticed in the "Network" section that the 'System' image name kept making a bunch (~5 at a time) of connections to random IP addresses, showing anywhere from 1-500 bytes/sec 'sent'. They would stay connected for 1-2 minutes. All web browsers are closed.

    So, the first thing I did was run a trace from network-tools.com on some of these IP addresses. 8 out of 10 were outside the US and did not resolve to any host name. Of the 10 IP addresses I traced, 2 were in the US, 4 showed origins in China, and one each in Algeria, Russia, Pakistan and Korea. (!) The next thing I did was turn off my wireless card and watch the connections disappear, then turn the card back on; within 30 seconds more random connections were created by System, with different IP addresses from the first time.

    Next I opened Task Manager, chose Show Processes From All Users, then killed just about everything that wasn't (what appeared to be) a Windows process. I turned on wi-fi, and again within 30 seconds random IP addresses connect for ~1 min at a time, new ones coming and going. I occasionally use BitTorrent on this machine, but there was definitely no process that seemed related to BitTorrent running after I went through Task Manager, and it wasn't open to begin with. So, any ideas on what these connections might be for? I have been using Ad-Aware Free and AVG Free on this computer for a while now, always up to date.
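
    A hedged way to pin each connection to its owner (built-in Windows tooling): netstat can print the executable and PID behind every socket, which helps separate leftover BitTorrent swarm traffic (peers keep probing a previously advertised IP and port for a while after the client closes, which the kernel answers as 'System') from anything genuinely suspicious:

        rem Elevated command prompt; -b requires admin rights
        netstat -bno
        rem -b executable, -n numeric addresses, -o owning PID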

  • Java constructor using generic types

    - by user37903
    I'm having a hard time wrapping my head around Java generic types. Here's a simple piece of code that in my mind should work, but I'm obviously doing something wrong. Eclipse reports this error in BreweryList.java:

        The method initBreweryFromObject() is undefined for the type <T>

    The idea is to fill a Vector with instances of objects that are a subclass of the Brewery class, so the invocation would be something like:

        BreweryList breweryList = new BreweryList(BrewerySubClass.class, list);

    BreweryList.java:

        package com.beerme.test;

        import java.util.Vector;

        public class BreweryList<T extends Brewery> extends Vector<T> {
            public BreweryList(Class<T> c, Object[] j) {
                super();
                for (int i = 0; i < j.length; i++) {
                    T item = c.newInstance();
                    // initBreweryFromObject() is an instance method
                    // of Brewery, of which <T> is a subclass (right?)
                    c.initBreweryFromObject();
                    // "The method initBreweryFromObject() is undefined
                    // for the type <T>"
                }
            }
        }

    Brewery.java:

        package com.beerme.test;

        public class Brewery {
            public Brewery() {
                super();
            }

            protected void breweryMethod() {
            }
        }

    BrewerySubClass.java:

        package com.beerme.test;

        public class BrewerySubClass extends Brewery {
            public BrewerySubClass() {
                super();
            }

            public void androidMethod() {
            }
        }

    I'm sure this is a complete-generics-noob question, but I'm stuck. Thanks for any tips!
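
    A hedged sketch of the fix (assuming Brewery actually declares an initBreweryFromObject() instance method, which the classes above don't show): the method has to be called on the new instance item, not on the Class object c, and Class.newInstance() throws checked exceptions that must be handled:

        public BreweryList(Class<T> c, Object[] j) {
            super();
            for (int i = 0; i < j.length; i++) {
                try {
                    T item = c.newInstance();     // reflective no-arg construction
                    item.initBreweryFromObject(); // instance method, so call it on item
                    add(item);                    // actually store it in the Vector
                } catch (InstantiationException e) {
                    throw new IllegalArgumentException(e);
                } catch (IllegalAccessException e) {
                    throw new IllegalArgumentException(e);
                }
            }
        }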

  • Can't get Squid proxy to work

    - by danielgratz
    I need a Squid proxy on my CentOS server, but I just can't get it to work. I did yum install squid. Here is my squid.conf file (I removed all comments):

        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443
        acl Safe_ports port 80
        acl Safe_ports port 21
        acl Safe_ports port 443
        acl Safe_ports port 70
        acl Safe_ports port 210
        acl Safe_ports port 1025-65535
        acl Safe_ports port 280
        acl Safe_ports port 488
        acl Safe_ports port 591
        acl Safe_ports port 777
        acl CONNECT method CONNECT
        acl our_networks src 192.168.1.0/24 192.168.2.0/24
        http_access allow our_networks
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        http_port 3128
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern . 0 20% 4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        coredump_dir /var/spool/squid

    Then I just put my server's public IP and port 3128 into my web browser's proxy settings, but it isn't working - I can't visit any website. Please help. Thanks.
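
    A hedged diagnosis, based only on the ACLs above: the only sources allowed are 192.168.1.0/24 and 192.168.2.0/24, so a browser reaching the proxy over the public internet falls through to "http_access deny all". A sketch of the change, assuming the client's public address is 203.0.113.7 (a documentation address - substitute your own):

        acl our_networks src 192.168.1.0/24 192.168.2.0/24 203.0.113.7/32
        http_access allow our_networks

    After editing, reload Squid and watch the log while connecting:

        service squid reload
        tail -f /var/log/squid/access.log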
