Search Results

Search found 9078 results on 364 pages for 'package'.

Page 285/364 | < Previous Page | 281 282 283 284 285 286 287 288 289 290 291 292  | Next Page >

  • Mount a VHD on Mac OS X

    - by janm
    Is it possible (and how) to mount a VHD file created by Windows 7 in OS X? I found some information about how to do this on Linux: there is a FUSE filesystem, "vdfuse", which uses the VirtualBox libraries to mount any filesystem VirtualBox supports. However, I was unable to compile the package on OS X because nearly all headers are missing, and I doubt it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at the example file systems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"
        gcc -arch i386 "${infile}" \
            "${INSTALL_DIR}"/VBoxDD.dylib \
            "${INSTALL_DIR}"/VBoxDDU.dylib \
            "${INSTALL_DIR}"/VBoxVMM.dylib \
            "${INSTALL_DIR}"/VBoxRT.dylib \
            "${INSTALL_DIR}"/VBoxDD2.dylib \
            "${INSTALL_DIR}"/VBoxREM.dylib \
            -o "${outfile}" \
            -I"${incdir}" -I"/usr/local/include/fuse" \
            -Wl,-rpath,"${INSTALL_DIR}" \
            -lfuse_ino64 \
            -Wall ${CFLAGS}

    You don't actually need to compile VirtualBox on your machine; just install a recent version of VirtualBox. So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and MacFUSE's loopback fs does not work with block files, so we still need a loopback fs to mount the block files as actual partitions.
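
    For completeness, a minimal sketch of how the compiled binary might then be invoked; the flags are from memory of the vdfuse usage text (-f for the image file, -r for read-only) and may differ in your build, so check ./vdfuse --help first:

        # Mount the VHD read-only into a local directory (flag names are assumptions; verify with --help)
        mkdir -p ~/vhdmount
        ./vdfuse -r -f ~/disks/win7.vhd ~/vhdmount

        # The partitions then appear as block files inside the mount point
        ls ~/vhdmount        # e.g. EntireDisk, Partition1, Partition2, ...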

    Read the article

  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of a large farm of Unix servers. So far I have come up with a way to assess the basic configuration, but not the installed updates. The core problem is that I just can't trust the package management tools on those machines: some of them have not synced with their repository for a long time (so I can't just run "yum check-update" on Red Hat, for example), and some of the servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions differ (AIX) and the available tools differ too. And, last but not least, I can't install anything on the target systems. So I need a script to retrieve the information and either process it directly or save it so it can be processed later on a server (which may well run a different distribution than the one the information was retrieved from). The best ideas I could come up with were: either retrieve the list of installed packages on each machine (dpkg -l on Debian, for example) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories), though the problem remains the same for AIX and Red Hat; or use Nessus' scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty. Does anyone know a better or more efficient way of doing this? P.S.: I already took time to review some answers to similar problems. Unfortunately Chef, Puppet, etc. don't meet the requirements I have to meet. Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems as they're not mine. All I have are scripting languages. Thanks.
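
    A rough sketch of the kind of collection script this could start from; it only uses tools already present on each platform and writes a plain "name<TAB>version" inventory that can be copied off and compared against repository metadata elsewhere:

        #!/bin/sh
        # Dump installed packages in a uniform format without installing anything on the target.
        out="/tmp/pkg-inventory-$(hostname).txt"

        if command -v dpkg-query >/dev/null 2>&1; then        # Debian / Ubuntu
            dpkg-query -W -f='${Package}\t${Version}\n' > "$out"
        elif command -v rpm >/dev/null 2>&1; then             # CentOS / Red Hat
            rpm -qa --qf '%{NAME}\t%{VERSION}-%{RELEASE}\n' > "$out"
        elif command -v lslpp >/dev/null 2>&1; then           # AIX (colon-separated fileset list)
            lslpp -Lc > "$out"
        else
            echo "no known package manager found" >&2
            exit 1
        fi

        echo "wrote $out"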

    Read the article

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Recommended programming language for Linux server management and web UI integration

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face administering many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the ssh level but offers the power of a web interface for easily applying the same infrastructure-wide changes; they should not be mutually exclusive. What are some recommended programming languages for this kind of development and for tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but I view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically whether one language is harder than another; I am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), eBox (not entirely free, and more than I am looking for), and Webmin (I don't like it; it feels clunky and does not integrate well with the "debian way" of maintaining a server, imo). Thanks for any ideas!
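
    On the "reuse existing command line tools" angle, whatever language ends up driving the web UI can simply shell out over SSH; a sketch of that idea (the host list, and the assumption of key-based ssh with passwordless sudo, are placeholders):

        #!/bin/bash
        # Run the same update step on every server listed in hosts.txt (one hostname per line).
        while read -r host; do
            echo "== $host =="
            ssh -n "$host" 'sudo apt-get update -qq && sudo apt-get -y upgrade'
        done < hosts.txt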

    Read the article

  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet.

    Error on the puppet master:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    Error on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':    # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    The nagios::host class, in module/nagios/host.pp, is:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode    = "755",
                require = Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    The file module/nagios/init.pp is blank. How can I fix this?

    Read the article

  • Debian Wheezy 7.5 64-bit xfce4 install error (no desktop environment installed yet)

    - by GeoMind
    I wrote a CD with an ISO image from debian.org, the debian-7.5.0-amd64-CD-1.iso from the Debian Wheezy 7.5 stable 64-bit folder. There was an error at the "Select and install software" step: it said "Retrieving file 770 of 800" and then that step of the installation failed. I continued the install, and when I booted the computer, Ctrl + Alt + F7 did not work as I expected. It starts at tty1, and after logging in I edited the config file because apt reported a lot of errors such as "E: Unable to correct problems, you have held broken packages" or that it couldn't find the package.

    File /etc/apt/sources.list:

        # deb cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/ wheezy main
        #deb cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/ wheezy main

        deb http://security.debian.org/ wheezy/updates main contrib non-free
        deb-src http://security.debian.org/ wheezy/updates main contrib non-free

        deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free
        deb-src http://ftp.us.debian.org/debian/ squeeze main contrib non-free

    After that I tried to install xfce4 as the desktop environment, following the guide found at Linux Panda, but it just printed errors at the terminal. What should I do? How can I fix this problem?
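
    One thing that stands out in the file above is that the last two lines point at 'squeeze' while the installed system is Wheezy; a sketch of the kind of cleanup one might try, assuming that mismatch is what apt is choking on (back up the file first):

        # Point the US mirror lines at wheezy instead of squeeze, then retry the desktop install
        cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sed -i 's/ squeeze / wheezy /' /etc/apt/sources.list
        apt-get update
        apt-get install xfce4          # or task-xfce-desktop for the full desktop task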

    Read the article

  • Goal setting/tracking packages for software projects

    - by Avi
    I'm a developer working by myself, and I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it: I've started many "projects" but could never keep using it. It's too complex and heavyweight for me. I use MS Outlook tasks, but they are not what I need; there's no planning capability, and tracking is not nice. I'm using the Pomodoro Technique and I like it, but I'm looking for something more comprehensive and with better computerized support: something that would allow me to define goals with dependencies and time estimates, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I'm uneasy because I could not find a cross-product "top ten"-style review. Are you using any goal-setting package such as GoalPro? Which one? Does it help? Pros and cons?

    Read the article

  • Puppet variables best practice, generalise or specialise?

    - by Andrei Serdeliuc
    I'm trying to figure out which things should be in git within the Puppet manifest and which should be in env vars like FACTER_my_var and used in the manifest instead. Scenario: you are deploying 3 PHP apps and you've already built all the layers up to the app in other manifests (base system, PHP extensions, users, etc.), and all that's left is installing the correct app (from an apt repo) and creating a vhost. I'm tempted to have something along the lines of:

        apache::vhost { $::project_hostname:
            priority    => '10',
            port        => '80',
            docroot     => $::project_document_root,
            logroot     => "/var/log/apache2/${::project_name}",
            serveradmin => '[email protected]',
            require     => Package[httpd],
            ssl         => false,
            override    => 'all',
            setenv      => ["APP_KERNEL dev"],
        }

    This would run on each server, and the FACTER_project_* vars would be set on a per-server basis. An obvious restriction of this is that you can't run more than one app with this specific example. Or would you rather have project_x.pp and project_y.pp, which have hardcoded paths and names?
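
    For reference, a small sketch of how the per-server facts in this approach are usually injected: FACTER_-prefixed environment variables show up as top-scope variables inside the manifest (the project_* values below are hypothetical):

        # Set on each server, in the agent's environment, before the run
        export FACTER_project_hostname="app1.example.com"
        export FACTER_project_name="app1"
        export FACTER_project_document_root="/var/www/app1/web"

        # One-off run to test the catalog with those facts
        puppet agent --test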

    Read the article

  • Why won't PHP 5.3.3 compile libphp5.so on Red Hat Enterprise?

    - by spatel
    I'm trying to upgrade from PHP 5.2.13 to PHP 5.3.3. However, the Apache module, libphp5.so, is not being compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        ** ** ** Warning: inter-library dependencies are not known to be supported.
        ** ** ** All declared inter-library dependencies are being dropped.
        ** ** ** Warning: libtool could not satisfy all declared inter-library
        ** ** ** dependencies of module libphp5. Therefore, libtool will create
        ** ** ** a static module, that should work as long as the dlopening
        ** ** ** application is linked with the -dlopen flag.

        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc

        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!

    Read the article

  • Ubuntu cannot resolve unmet dependency

    - by DisgruntledGoat
    I'm trying to install a package on my Ubuntu 8.10 server. However, I get this message:

        The following packages have unmet dependencies.
          webmin: Depends: apt-show-versions but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I run apt-get -f install, which offers to install apt-show-versions and libapt-pkg-perl. After selecting to install without verification, I get these errors:

        Err http://gb.archive.ubuntu.com intrepid/universe libapt-pkg-perl 0.1.22build1
          404 Not Found
        Err http://gb.archive.ubuntu.com intrepid/universe apt-show-versions 0.13
          404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/liba/libapt-pkg-perl/libapt-pkg-perl_0.1.22build1_i386.deb  404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/a/apt-show-versions/apt-show-versions_0.13_all.deb  404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I've tried running apt-get update and adding --fix-missing as suggested, but neither works. Where do I go from here?
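
    For what it's worth, Ubuntu 8.10 (intrepid) is an end-of-life release whose packages get moved off the country mirrors to old-releases.ubuntu.com, which would explain the 404s; a sketch of repointing the sources under that assumption:

        # Back up, then repoint the mirror references at the old-releases archive
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i -e 's/gb\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
                    -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get -f install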

    Read the article

  • How can I use HAproxy with SSL and get X-Forwarded-For headers AND tell PHP that SSL is in use?

    - by Josh
    I have the following setup:

        (internet) ---> [ pfSense Box    ]    /-> [ Apache / PHP server ]
                        [running HAproxy ] --+--> [ Apache / PHP server ]
                                             +--> [ Apache / PHP server ]
                                              \-> [ Apache / PHP server ]

    For HTTP requests this works great; requests are distributed to my Apache servers just fine. For SSL requests, I had HAProxy distributing the requests using TCP load balancing, and it worked. However, since HAProxy wasn't acting as an HTTP proxy in that mode, it didn't add the X-Forwarded-For HTTP header, and the Apache / PHP servers didn't know the client's real IP address.

    So I added stunnel in front of HAProxy, having read that stunnel could add the X-Forwarded-For HTTP header. However, the package I could install into pfSense does not add this header... and this apparently also kills my ability to use keep-alive requests, which I would really like to keep. But the biggest issue, which killed that idea, was that stunnel converted the HTTPS requests into plain HTTP requests, so PHP didn't know that SSL was enabled and tried to redirect to the SSL site.

    How can I use HAProxy to load balance across a number of SSL servers, allowing those servers to both know the client's IP address and know that SSL is in use? And if possible, how can I do it on my pfSense server? Or should I drop all this and just use nginx?

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.

    How do I ensure that my packages are up to date? It sounds silly, but I already tried the obvious methods. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in and just upgrade to them. Even though I chose to skip the packages with unmet dependencies, and it seemed like quite a bit happened in the background, when I opened Firefox afterwards the version was 7! I am guessing things did not go as expected. Of course this is probably not a problem with SuSE itself; I am just not understanding the system right. How do I do it right?

    When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! It always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be? Likewise, when I try to install something and start writing its name, even though I am sure the package exists, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is this not happening?

    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
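
    A few hedged pointers in command form; the repo aliases below are whatever names zypper lr shows on this machine, the --from form of dup needs a reasonably recent zypper, and the bash-completion part is an assumption (openSUSE ships command completion as a separate package rather than enabling it out of the box):

        # Check which repos are enabled and refresh them
        zypper lr -u
        sudo zypper ref

        # Limit the distribution upgrade to specific repos instead of a blanket 'dup'
        sudo zypper dup --from tumbleweed --from packman

        # TAB completion for commands and package names usually comes from this package
        sudo zypper install bash-completion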

    Read the article

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following two commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.), but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top two commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was moaning about wanting to install vzkernel, with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
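
    A sketch of how one might check what is still wired into the runlevels and re-register the services that should start, assuming this container is still using plain sysv-rc (the service names below are examples):

        # Which start links exist for the default runlevel?
        ls -l /etc/rc2.d/

        # Re-register the services that should come up at boot
        for svc in ssh cron rsyslog; do
            sudo update-rc.d "$svc" defaults
        done

        # /etc/rc.local is run from its own init script, so make sure that is enabled and executable too
        sudo update-rc.d rc.local defaults     # assumes Debian's stock /etc/init.d/rc.local is present
        sudo chmod +x /etc/rc.local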

    Read the article

  • User not in the sudoers file. This incident will be reported

    - by Sergiy Byelozyorov
    I need to install a package, and for that I need root access. However, the system says that I am not in the sudoers file, and when I try to edit it, it complains in the same way! How am I supposed to add myself to the sudoers file if I don't have the right to edit it? I installed this system and am the only administrator. What can I do?

    Edit: I have tried visudo already. It requires me to be in sudoers in the first place.

        amarzaya@linux-debian-gnu:/$ sudo /usr/sbin/visudo

        We trust you have received the usual lecture from the local System
        Administrator. It usually boils down to these three things:

            #1) Respect the privacy of others.
            #2) Think before you type.
            #3) With great power comes great responsibility.

        [sudo] password for amarzaya:
        amarzaya is not in the sudoers file.  This incident will be reported.
        amarzaya@linux-debian-gnu:/$
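
    A minimal sketch of the usual way out of this, assuming the root password set during installation is known (Debian asks for one unless it was deliberately left blank):

        # Become root directly with the root password instead of sudo
        su -

        # Then either add the user to the sudo group (takes effect after logging out and back in)...
        adduser amarzaya sudo

        # ...or add an explicit rule through visudo
        visudo       # add a line such as:  amarzaya ALL=(ALL) ALL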

    Read the article

  • How can I install git on RHEL 6?

    - by JR.Xyza
    I'm trying to install Git on a RHEL 6 development server. I have experience with Ubuntu, but this is my first time working with RHEL (I'm a developer trying to fill in for a recently departed Linux sysadmin). I've set up two additional repos (EPEL and IUS) for other packages needed for a Magento install. Output of yum repolist:

        [root@box]# yum repolist
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        repo id   repo name                                        status
        epel      Extra Packages for Enterprise Linux 6 - x86_64   7,841
        ius       IUS for RHEL 6Server - x86_64                      135

    Most of what I've read indicates a simple 'yum install git' should work with EPEL enabled, but I get the dreaded:

        [root@box]# yum install git
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        Setting up Install Process
        No package git available.
        Error: Nothing to do

    Same goes for git-daemon, etc. I've tracked down a number of git RPMs, such as this one at repoforge, but they require a train of dependencies that seems to never end. I've also toyed with compiling it manually, but the rabbit hole to get make working seems to go even deeper. I'm convinced there's a simple oversight somewhere keeping me from being able to install from the EPEL repo, but I'm a rookie at all this. Thanks in advance for help/pointers/additional resources.
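
    For what it's worth, git normally comes from the base RHEL channel rather than EPEL, and the yum repolist above shows no Red Hat channel at all; a hedged sketch of how one might confirm and address that (the repo id and the IUS package name are assumptions):

        # Is any RHEL base repo registered or enabled at all?
        yum repolist all
        subscription-manager repos --list

        # If the subscription is in place, enabling the base channel should make git available
        subscription-manager repos --enable=rhel-6-server-rpms
        yum install git

        # Alternatively, IUS ships newer git builds under versioned names
        yum search git | grep -i ius      # e.g. git2u*, if provided for this release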

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: with the same config, load times are between two and three times faster. I'm trying to speed things up a little more, and I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. I've also included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading (Cmd+R on Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to improve performance even more. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing the same thing I'm currently doing. Any other components out there? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm asking from the server perspective.)
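
    Two quick sanity checks that might be worth doing first, assuming memcached listens on the default 127.0.0.1:11211; the object-cache.php drop-in can only work if the engine actually serving the site (HHVM here) has a memcache/memcached client available:

        # Is memcached up and answering?
        echo stats | nc -w 1 127.0.0.1 11211 | head

        # Does HHVM expose a memcache(d) client extension?
        printf '<?php var_dump(extension_loaded("memcached"), extension_loaded("memcache"));\n' > /tmp/cache-check.php
        hhvm /tmp/cache-check.php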

    Read the article

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:         211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
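
    Since this is an OpenVZ container, one hedged place to look is the bean counters enforced by the host node; a non-zero failcnt there would explain a SIGKILL that never shows up in the guest's own logs:

        # Run as root inside the container: any resource limit hits? (last column is failcnt)
        awk '$NF > 0' /proc/user_beancounters

        # Note the exact moment cron disappears, to correlate with the host's logs if you can get them
        while pgrep -x cron >/dev/null; do sleep 10; done; date; echo "cron is gone"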

    Read the article

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125,000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML, which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort.) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already; I just can't find it using Google.

    Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip

    Update: Follow-up question here.

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server runs Apache), with two subsites, test.FQDN.com and live.FQDN.com. What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would push to test.FQDN.com. Once the features have been tested and merged into the master branch, it would push to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to pull onto the test.FQDN.com site; however, that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
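
    A minimal post-receive hook sketch of the branch-to-site mapping described above; the paths are placeholders, and it assumes the hook lives in the bare repository and that the git user can write to both docroots:

        #!/bin/bash
        # hooks/post-receive: deploy master to the live site, any other branch to the test site
        # (remember: chmod +x hooks/post-receive)
        LIVE=/var/www/live.FQDN.com
        TEST=/var/www/test.FQDN.com

        while read -r oldrev newrev ref; do
            branch=${ref#refs/heads/}
            if [ "$branch" = "master" ]; then
                GIT_WORK_TREE="$LIVE" git checkout -f master
            else
                GIT_WORK_TREE="$TEST" git checkout -f "$branch"
            fi
        done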

    Read the article

  • Recommend a UK based VPS host equivalent to Dreamhost [closed]

    - by Pez Cuckow
    I appreciate this question could be considered subjective and argumentative, so please make recommendations rather than arguing about the best; I believe the "correct" answer is the one closest to what I am looking for. Basically, I live in the UK but have been using the US-based DreamHost for about 6 years now, and my web projects are getting to the scale where the websites need to be UK-based to cope with the demand and load. I originally had shared hosting with DreamHost but upgraded to a VPS a while ago, getting 512 MB of RAM and unlimited disk space, bandwidth and domains for $30. Their control panel is a custom, easy-to-use build they created in house, and it offers features very similar to other web panels (as far as I am aware). So basically my question boils down to: is there anywhere that offers an equivalent package? In all honesty, as long as I have over 50 GB of HDD space and unlimited domains it doesn't really matter. Are there any VPS providers you would recommend as reliable? I promise to check every link posted; many thanks for your time!

    Read the article

  • Postfix connects to wrong relay?

    - by Eric
    I am trying to set up Postfix on my Ubuntu server in order to send emails via my ISP's SMTP server. I seem to have missed something, because mail.log tells me:

        Jan 19 11:23:11 mediaserver postfix/smtp[5722]: CD73EA05B7: to=<[email protected]>, relay=new.mailia.net[85.183.240.20]:25, delay=6.2, delays=5.7/0.02/0.5/0, dsn=4.7.0, status=deferred (SASL authentication failed; server new.mailia.net[85.183.240.20] said: 535 5.7.0 Error: authentication failed: )

    The relay "new.mailia.net[85.183.240.20]:25" was not set up by me; I use "relayhost = smtp.alice.de". Why is Postfix trying to connect to a different server?

    Here is my main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = mediaserver
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = mediaserver, localhost.localdomain, , localhost
        relayhost = smtp.alice.de
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        myorigin = /etc/mailname
        inet_protocols = all
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
        smtp_sasl_security_options = noanonymous

    Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = mediaserver, localhost.localdomain, , localhost
        myhostname = mediaserver
        mynetworks = 127.0.0.0/8
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter =
        relayhost = smtp.alice.de
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_generic_maps = hash:/etc/postfix/generic
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
        smtp_sasl_security_options = noanonymous
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
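
    One hedged explanation: unless the relayhost is written in square brackets, Postfix does an MX lookup on it rather than using the hostname literally, which can land the connection on a different machine than expected. A sketch of forcing the literal host (the credentials line is a placeholder, and the lookup key in sasl_password has to match the new relayhost spelling):

        # Use the hostname literally, with no MX lookup
        sudo postconf -e 'relayhost = [smtp.alice.de]'

        # Make sure the SASL password map has a matching key (placeholder credentials shown)
        echo '[smtp.alice.de]    username:password' | sudo tee -a /etc/postfix/sasl_password
        sudo postmap /etc/postfix/sasl_password
        sudo service postfix restart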

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I am looking for the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out, along with a URI or the package proper, to facilitate installation of the Windows Store app on the user side. I am well aware that numerous guides exist from Microsoft on pushing out Line-of-Business (LOB, AKA Enterprise) applications, that is, apps developed in-house and not for consumption through the Windows Store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and for them to be installed in a silent manner. I've seen this guide: http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the 'System Center 2012' bundle installed. The systems I target are considerably more decentralised than this, making this guide inappropriate. I have a hunch that Microsoft have deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.

    Read the article

  • How to autorun wpa_supplicant on Debian startup

    - by The Electric Muffin
    I'd like to run wpa_supplicant -D wext -i wlan0 -c /etc/wpa_supplicant.conf on Debian startup (runlevels 2-5). I found some vague instructions in a related question that said to put a script in /etc/init.d/ and then symlink to it from the appropriate /etc/rcRUNLEVEL.d/ directories. However, I noticed that there are already some files named "wpasupplicant" that probably run at startup:

        /etc/network/if-down.d/wpasupplicant
        /etc/network/if-post-down.d/wpasupplicant
        /etc/network/if-pre-up.d/wpasupplicant
        /etc/network/if-up.d/wpasupplicant

    They are all symlinks to the same script, /etc/wpa_supplicant/ifupdown.sh. It has a comment at the beginning saying it "[...] allows ifup(8), and ifdown(8) to manage wpa_supplicant(8) and wpa_cli(8) processes running in daemon mode." However, the closest it gets to calling wpa_supplicant itself is (in functions.sh):

        WPA_SUP_BIN="/sbin/wpa_supplicant"
        [snip]
        start-stop-daemon --start --oknodo $DAEMON_VERBOSITY \
            --name $WPA_SUP_PNAME --startas $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE \
            -- $WPA_SUP_OPTIONS $WPA_SUP_CONF
        [snip]
        start-stop-daemon --stop --oknodo $DAEMON_VERBOSITY \
            --exec $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE

    Does that mean it's safe to make an init.d script for wpa_supplicant, and if so, what would it look like?

    General info: Debian Squeeze (5.0), official wpasupplicant package (v0.6.10-2.1). The full contents of my system's functions.sh and ifupdown.sh are here (dependent, of course, on my system's uptime; it's a five-year-old laptop that greatly enjoys overheating): functions.sh ifupdown.sh
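
    A bare-bones sketch of what such an init script could look like, modelled on the start-stop-daemon calls quoted above; the header values, paths and options are assumptions to adapt rather than a drop-in Debian package script:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          wpa-supplicant-wlan0
        # Required-Start:    $local_fs
        # Required-Stop:     $local_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: wpa_supplicant on wlan0
        ### END INIT INFO

        DAEMON=/sbin/wpa_supplicant
        PIDFILE=/var/run/wpa_supplicant.wlan0.pid
        OPTS="-B -P $PIDFILE -D wext -i wlan0 -c /etc/wpa_supplicant.conf"   # -B daemonises, -P writes the pidfile

        case "$1" in
          start)
            start-stop-daemon --start --oknodo --exec "$DAEMON" -- $OPTS
            ;;
          stop)
            start-stop-daemon --stop --oknodo --pidfile "$PIDFILE"
            ;;
          restart)
            "$0" stop; sleep 1; "$0" start
            ;;
          *)
            echo "Usage: $0 {start|stop|restart}" >&2; exit 1
            ;;
        esac

    Installing it would then be along the lines of copying it to /etc/init.d/wpa-supplicant-wlan0, making it executable, and running update-rc.d wpa-supplicant-wlan0 defaults.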

    Read the article

  • Can't install MPlayer or VLC on Ubuntu

    - by mirko4
    I am trying to install MPlayer or VLC on Ubuntu Feisty, but I can't do it. I tried with apt-get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          mplayer:
            Depends: libasound2 (> 1.0.16) but 1.0.13-1ubuntu5 is to be installed
            Depends: libavcodec51 (>= 0.svn20080206-8) but it is not going to be installed or
                     libavcodec-unstripped-51 (>= 0.svn20080206-8) but it is not installable
            Depends: libavformat52 (>= 0.svn20080206-8) but it is not going to be installed or
                     libavformat-unstripped-52 (>= 0.svn20080206-8) but it is not installable
            Depends: libavutil49 (>= 0.svn20080206-8) but it is not going to be installed or
                     libavutil-unstripped-49 (>= 0.svn20080206-8) but it is not installable
            Depends: libcaca0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
            Depends: libcdparanoia0 (>= 3.10.2+debian) but 3.10+debian~pre0-4build1 is to be installed
            Depends: libcucul0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
            Depends: libfaad0 (>= 2.6.1) but it is not going to be installed
            Depends: libfribidi0 (>= 0.10.9) but 0.10.7-4build1 is to be installed
            Depends: libgif4 (>= 4.1.6) but it is not going to be installed
            Depends: libjack0 (>= 0.109.2) but it is not going to be installed
            Depends: liblzo2-2 but it is not going to be installed
            Depends: libopenal1 but it is not going to be installed
            Depends: libpostproc51 (>= 0.svn20080206-8) but it is not going to be installed or
                     libpostproc-unstripped-51 (>= 0.svn20080206-8) but it is not installable
            Depends: libspeex1 (>= 1.2~beta3-1) but 1.1.12-3 is to be installed
            Depends: libsvga1
            Depends: libswscale0 (>= 0.svn20080206-8) but it is not going to be installed or
                     libswscale-unstripped-0 (>= 0.svn20080206-8) but it is not installable
            Depends: mplayer-skin
          python-apt:
            Depends: libapt-inst-libc6.7-6-1.1
            Depends: libapt-pkg-libc6.7-6-4.6
          scim-gtk2-immodule:
            Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
          scim-modules-socket:
            Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    I tried apt-get -f install but it doesn't work either. What should I do? Please help me!

    Read the article
