Search Results

Search found 1861 results on 75 pages for 'loss'.


  • How useful is mounting /tmp noexec?

    - by Novelocrat
    Many people (including the Securing Debian Manual) recommend mounting /tmp with the noexec,nodev,nosuid set of options. This is generally presented as one element of a 'defense-in-depth' strategy, by preventing the escalation of an attack that lets someone write a file, or an attack by a user with a legitimate account but no other writable space. Over time, however, I've encountered arguments (most prominently by Debian/Ubuntu developer Colin Watson) that noexec is a useless measure, for a couple of potential reasons: the user can run /lib/ld-linux.so <binary> in an attempt to get the same effect, and the user can still run system-provided interpreters on scripts that can't be executed directly. Given these arguments, the potential need for more configuration (e.g. debconf likes an executable temporary directory), and the potential loss of convenience, is this a worthwhile security measure? What other holes do you know of that enable circumvention?
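
    (A minimal illustration of the setup being discussed, assuming /tmp is a tmpfs or its own partition; the device name, the remount workaround for package installs, and the loader path are assumptions, not taken from the question.)

        # /etc/fstab entry applying the recommended options to /tmp
        tmpfs  /tmp  tmpfs  defaults,noexec,nodev,nosuid  0  0

        # temporarily relax the restriction for tools that want an executable /tmp
        # (e.g. debconf/apt), then tighten it again afterwards
        mount -o remount,exec /tmp
        mount -o remount,noexec /tmp

        # the circumvention mentioned above: asking the dynamic loader to run the file
        # (loader path varies by architecture, e.g. /lib64/ld-linux-x86-64.so.2)
        /lib/ld-linux.so.2 /tmp/some_binary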

    Read the article

  • Is it safe to operate a laptop without battery?

    - by leladax
    I know it's 'unsafe' in terms of data loss, but I noticed that motherboards still keep some of their circuits powered while plugged in (e.g. a circuit that waits for the power-on signal is certainly one of them). Hence, I wondered whether it would increase the life of the laptop if the battery were simply removed. It might also increase the battery's life, but that's the least of my concerns. The main point is to unplug the machine once it hibernates and have no power source whatsoever for the duration of being off (apart from the clock battery), without having to remove the battery every time.

    Read the article

  • GreenGeeks Drupal install: ImageMagick 'path /usr/bin/convert does not exist' error

    - by letapjar
    I just signed up with GreenGeeks. I have a Drupal install (6.19) in my public_html directory. The ImageMagick toolkit can't find the binary; the error I get is "the path /usr/bin/convert does not exist". When I use a terminal and do 'which convert', it shows /usr/bin/convert. I also have a second Drupal install in an addon domain; its home directory is above the public_html directory (in a directory called '/home/myusername/addons/seconddomain'). The Drupal install in the addon domain finds the ImageMagick binary just fine. I am at a total loss as to why the original install cannot find the binary. The tech support guys at GreenGeeks have no clue either. Any ideas of things to try?
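
    (A few hedged checks that might narrow this down; the web-server user name and the availability of sudo on a shared host are assumptions.)

        # confirm the binary exists and is world-executable
        ls -l /usr/bin/convert

        # check whether PHP restrictions differ between the two sites
        php -i | grep -E 'open_basedir|disable_functions'

        # try resolving the path as the web-server user ('nobody' is only a guess)
        sudo -u nobody which convert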

    Read the article

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume but it appears that one of them went offline for some time, something I've noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc[1]
              2096116 blocks super 1.2 [2/1] [_U]

        md127 : active (auto-read-only) raid1 sdb[0]
              2096116 blocks super 1.2 [2/1] [U_]

        unused devices: <none>

    Not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
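
    (A hedged sketch of one common way to rejoin a split mirror, assuming md126/sdc turns out to hold the data worth keeping; verify that first, e.g. by mounting both halves read-only and comparing. The device names below come straight from the output above.)

        # stop the stale half so its member disk becomes free
        mdadm --stop /dev/md127

        # sometimes needed so the stale member rejoins cleanly (destroys its metadata!)
        # mdadm --zero-superblock /dev/sdb

        # add the freed disk back into the surviving array and let it resync
        mdadm /dev/md126 --add /dev/sdb

        # watch the rebuild
        watch cat /proc/mdstat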

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, excessive load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise with it, so using OpenIndiana introduces another risk. Colleagues are trying to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Postgres restore time in a Linux environment?
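
    (A rough illustration of the PITR option being weighed here: continuous WAL archiving plus a point-in-time restore that stops just before the bad migration. Paths, the example timestamp and the parameter names, which assume a 9.x server, are placeholders, and restore time still depends on how much WAL has to be replayed since the last base backup.)

        # postgresql.conf: ship WAL segments to an archive as they are produced
        #   wal_level = archive
        #   archive_mode = on
        #   archive_command = 'cp %p /backup/wal/%f'

        # take base backups regularly so fewer WALs need replaying on restore
        pg_basebackup -D /backup/base/$(date +%F)

        # recovery.conf on the restored copy: replay up to a point before the mistake
        #   restore_command = 'cp /backup/wal/%f %p'
        #   recovery_target_time = '2013-06-01 12:34:00'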

    Read the article

  • Passenger package not found after adding phusion repo Ubuntu 12.04

    - by speshak
    I'm trying to install the official Passenger (and Nginx) packages from Phusion on an Ubuntu 12.04.3 server. I have the following in /etc/apt/sources.list.d/passenger:

        deb https://oss-binaries.phusionpassenger.com/apt/passenger precise main

    Even after running apt-get update, there is no passenger package found by apt. I did verify that the package info appears in /var/lib/apt/lists/oss-binaries.phusionpassenger.com_apt_passenger_dists_precise_main_binary-amd64_Packages, but at this point I'm at a loss as to why the package isn't available via apt-get. There are some packages (libapache2-mod-passenger, passenger-docs) that are available. These packages seem to also exist in universe, but apt-cache show lists both locations.
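
    (A few hedged things to check from the shell; whether any of them is the actual culprit here is a guess.)

        # apt only reads files in sources.list.d whose names end in .list
        ls /etc/apt/sources.list.d/

        # an https:// deb line needs the https transport on 12.04
        apt-get install apt-transport-https && apt-get update

        # show every candidate version and origin apt knows about for the package
        apt-cache policy passenger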

    Read the article

  • HP Media Smart remote access

    - by Coov
    I just purchased this box for home backups of my PCs and Macs. Everything works great except for the remote access part. I can RD into the machine locally, but I can't get to it from outside of my network. I've enabled port forwarding on my router but it doesn't seem to matter. I checked with Qwest and they don't block these ports, so I'm at a loss. I do have Vonage in front of my router, but I've taken it out and it didn't make a difference. I suspect I've made an error with my router setup. I'm a programmer and I'm playing in the world of the unknown here. I'm lost. Any suggestions?

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a high-availability setup (e.g. a small cluster) for a web server, i.e. it will run Apache, PHP and MySQL. There will be 2-8 small websites running with only very little traffic and workload. High availability is, however, very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime, or only a minimum of downtime, and no data loss. I have considered Amazon AWS with Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html, it totals $185/month for the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to build this HA setup? Best regards

    Read the article

  • vi visual mode doesn't work

    - by BobMarley
    I'm running vim (7.0.237) after sshing to a remote CentOS box, and it just won't enter visual mode. When I press 'v', it just beeps and does nothing. I'm running Ubuntu with GNOME Terminal, and the local copy of vi works fine, so I don't see how this could be a problem with the terminal. I have the same .vimrc file on the local and remote machines, and the only settings are: set nocompatible; set tabstop=4. I'm at a total loss here, any ideas?
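
    (A couple of hedged checks to see whether the remote binary simply lacks the feature; none of this is a confirmed cause.)

        # which build actually runs when typing 'vi' on the CentOS box?
        readlink -f $(which vi)
        rpm -qf $(which vi)

        # a '-visual' in the feature list would mean visual mode was compiled out
        vim --version | grep -o '[-+]visual'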

    Read the article

  • IP issue with Heartbeat & DRBD

    - by adam0345
    I'm in the process of setting up 3-node stacked DRBD, and I'm experiencing a rather bizarre issue. Two nodes are located at the data center, and the 3rd node is located locally. The primary and secondary nodes are working as expected; however, the 3rd node won't connect to the primary. If I ping the IP provided by Heartbeat on the 3rd node, it returns 100% packet loss. If I reset the network interfaces, ping then returns a few successful packets, but soon stops returning any packets at all. I can't work out any reason why it would behave like this. All nodes are running Debian Squeeze and the latest version of DRBD.
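
    (Some hedged first steps for narrowing down where the packets die; the interface name and the 192.0.2.10 address are placeholders.)

        # is the Heartbeat-managed IP actually configured on the expected node right now?
        ip addr show

        # does anything answer ARP for that IP? A stale ARP entry elsewhere would explain
        # a few replies right after an interface reset, then silence.
        arping -I eth0 192.0.2.10

        # watch whether the pings even arrive on the node that should own the IP
        tcpdump -ni eth0 icmp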

    Read the article

  • SSRS Errors "Use Local", even though I am

    - by Corey Coogan
    I am at a loss. I posted this on SO, but think this is probably a better place. I have searched high and low and don't know what to do. I am running SQL Server Web Edition on Server 2008, which only supports local databases. I am trying to connect to localhost, but when I test my connection, I get this error. The feature: "The edition of Reporting Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services. The DB was upgraded from SQL Express and when I select @@version, it says it's Web Edition. I've tried rebooting and that seemed to fix it, but only for a little while.
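
    (Since the error is edition-specific, one hedged sanity check is to ask the engine directly what it thinks it is; the server name assumes a default local instance.)

        sqlcmd -S localhost -Q "SELECT SERVERPROPERTY('Edition'), SERVERPROPERTY('EngineEdition'), @@VERSION"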

    Read the article

  • Application windows have colossal fonts in Enlightenment 17, while system windows are untouched

    - by Matt
    I'm trying to get used to using Enlightenment instead of KDE on my Slackware64 multilib computer, but I'm having a terrible time getting one problem fixed. My fonts are HUGE on application windows - from Firefox to Gimp to Xchat to anything else, all the fonts are 3x the size they should be. But at the same time, the system menu is the correct size. I'm at a loss - I want the applications to have the same DPI as the system menu. When I'm in KDE, they all look normal. I've included a screenshot to show what I'm talking about.
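
    (One hedged thing to compare is the DPI handed to Xft, since GTK/Qt applications size their fonts from it while E17's own menus may not; the 96 is only an example value.)

        # what DPI does the X server report?
        xdpyinfo | grep -B1 resolution

        # pin Xft's DPI for client applications
        echo "Xft.dpi: 96" >> ~/.Xresources
        xrdb -merge ~/.Xresources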

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that when streaming over RTMPS, connections are drastically slower (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slowly over SSL?

    Read the article

  • Redundant Router and Load Balancing vs. DDoS attack

    - by colgatta
    With a small server farm at a hosting provider with great support and terms, I worry about the increasing number of DDoS attacks against this provider (not against my web project, but against other clients at the same location). I have booked a redundant router and load balancer as a managed service with this provider to share the load across all the dedicated servers. However, I was lost again today because another client's project was hit by a DDoS attack for hours :-( Each hour my ad server and tracking are unreachable means hundreds of dollars in losses; even advertising that times out has to be paid for by me, but it cannot be resold to my clients while the servers are unavailable. The servers, their load and their traffic are fine and healthy the whole time, but there is no way to keep things stable and online if the hosting provider is vulnerable. Does anyone have ideas or suggestions for how to protect against this, even against DDoS?

    Read the article

  • Cannot connect to internet with Clearwire modem.

    - by ide
    I'm currently using a Motorola WiMAX modem (CPEi 25725) and cannot connect to the internet. I can connect to the modem at 192.168.15.1 and check its status. It says that it has good/excellent connectivity to the internet and shows all five signal bars. Additionally it has sent and received some WiMAX packets so I believe it is connected to a tower. I'm at a loss for what the problem is. Unplugging the modem, restarting it from software, and restarting my computer (Windows 7) have not helped. Windows still reports that it is not connected to the internet. Alternatively, could this be an ISP issue? I have heard that Clearwire is a not-so-reputable ISP that blocks VoIP, and I was using Skype recently.

    Read the article

  • When copying VM filesystem over netcat, dd copies double the disk size

    - by JivanAmara
    I'm attempting to copy the disk of a working headless VirtualBox VM (VM1) on one server to a new VM (VM2) on a vCloud server. I don't have access to the host of VM2. The OS is Windows Server 2003 (32-bit). I start both VMs with a live Knoppix image. I run 'nc -l | dd of=/dev/sda bs=512' on VM2, and I run 'dd if=/dev/sda bs=512 | nc ' on VM1. I previously did this with another Windows VM and it worked fine. VM1 has a disk of size ~70GB (verified with fdisk); however, the amount of data dd reports read/written is ~139GB. Of course, the target machine doesn't work properly: I get a Windows splash screen, then a blue error screen with general 'system not working' information. I'm at a loss as to what could cause this. Any ideas?
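
    (A hedged variant of the same copy that records the source size and checksums both ends, which at least makes the 70GB vs 139GB discrepancy measurable; the port, the receiver address and the traditional 'nc -l -p' flag syntax are assumptions.)

        # on VM1: exact size of the source device in bytes
        blockdev --getsize64 /dev/sda

        # on VM2 (receiver): write the stream and checksum exactly what arrived
        nc -l -p 9000 | tee >(md5sum > /tmp/received.md5) | dd of=/dev/sda bs=1M

        # on VM1 (sender): checksum exactly what was sent
        dd if=/dev/sda bs=1M | tee >(md5sum > /tmp/sent.md5) | nc 192.0.2.20 9000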

    Read the article

  • Apache crashing at random intervals. Can not find a reason in log files

    - by Nick Downton
    We are having an issue with a VPS running Plesk 9.5 on Ubuntu 8.04. At seemingly random intervals Apache will disappear and needs to be started manually. I have checked the Apache error log, /var/log/messages, and the individual virtual hosts' Apache error files, and cannot find anything that coincides with the time of the failure. dmesg is empty, which is a bit odd. We have also had the psa service go down for no apparent reason while Apache stayed up. I'm at a loss to diagnose this, really, because none of the log files I can find point to any issues. Are there any others I can look at? Memory usage sits at about 55% (out of 400MB) and it isn't a particularly high-traffic server. Any pointers as to where else I can find out what is going on would be very much appreciated. Nick
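
    (A hedged next place to look is the kernel/OOM side and the container limits, since a silent disappearance with clean Apache logs often points there; the log paths below are stock Ubuntu/Plesk locations and may differ.)

        # any sign of the OOM killer or of segfaulting Apache children?
        grep -iE 'out of memory|oom|segfault|killed process' /var/log/syslog /var/log/kern.log /var/log/messages 2>/dev/null

        # if this VPS is OpenVZ/Virtuozzo-based, a non-zero failcnt means a container limit was hit
        cat /proc/user_beancounters

        # Plesk keeps its own logs as well
        ls /usr/local/psa/admin/logs/ /var/log/sw-cp-server/ 2>/dev/null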

    Read the article

  • Bandwidth-Hogging Linux Server Causing Trouble

    - by BlairHippo
    We have a Linux server (2.6.28-11-generic #42-Ubuntu) that's misbehaving on a client site, gobbling up an entirely unacceptable percentage of the client's bandwidth, and we're trying to figure out what the heck it's doing. And the guy who had the sysadmin skillset has yet to be replaced. We're at a loss for what could be causing all that network traffic, and need to figure it out SOON. What log files should I be looking at to find this information? What analysis tools would you recommend for this task? Please note that I'm not looking for a tool that will allow me to analyze FUTURE traffic. The client is on the verge of shutting the machine off entirely; I need to figure out what it's been doing with the data I already have, if that's at all possible. My thanks in advance for helping a development monkey play sysadmin.
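
    (If Apache or another logging service is running, its existing access logs can already answer "who was served how many bytes"; a hedged one-liner assuming the common/combined log format with the response size in field 10 and the stock Ubuntu log path. Interface counters give a second, coarser data point.)

        # total bytes served per client IP, biggest consumers last (use zcat for rotated .gz logs)
        awk '{bytes[$1] += $10} END {for (ip in bytes) printf "%12d %s\n", bytes[ip], ip}' \
            /var/log/apache2/access.log | sort -n | tail -20

        # raw per-interface byte counters since boot, as a sanity check on total volume
        ip -s link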

    Read the article

  • Does WordPress auto-update work without a cron job?

    - by perler
    I'm a bit at a loss here. Since WP 3.7 you can update WordPress automatically, but since then it's been more hit and miss on several servers here. I would like to understand whether it is enough to enable automatic updates as described here, and how this would work without a cron job. I'm under the impression right now that the upgrade is only started whenever someone logs into the backend; is this correct? If so, is there a way to automate the updates via a cron job?
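
    (WordPress's background tasks, including the 3.7+ auto-updater, normally ride on wp-cron.php, which only fires when the site gets visits, so on quiet sites a real cron entry is the usual workaround; the URL and schedule below are placeholders.)

        # crontab entry: poke wp-cron.php every 15 minutes regardless of visitors
        */15 * * * * curl -s https://example.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1

        # with a real cron in place, the pseudo-cron can be switched off by adding
        #   define('DISABLE_WP_CRON', true);
        # to wp-config.php, and core auto-updates can be forced on with
        #   define('WP_AUTO_UPDATE_CORE', true);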

    Read the article

  • Django - Moving database from development to production servers

    - by Garfonzo
    I am working on a Django project with a MySQL backend. I'm curious about the best way to update a production server's database to reflect the changes made on the development server's database. When I develop now, I make some changes to a models.py file, then create a schemamigration using South. Sometimes I do several migrations across several apps within the main project folder before it's ready for the production database. This means that there are several migration files in the app/migrations/ folders created by South. So on the production server, how does one update the database to reflect all the changes made in development, without any data loss?
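
    (The usual South workflow is that the migration files themselves are what gets shipped: committed to version control, deployed with the code, then applied against the production database; a hedged sketch with the app name, database name and user as placeholders.)

        # development: record a schema change as a migration file
        python manage.py schemamigration myapp --auto

        # production, after deploying the code (which now includes myapp/migrations/):
        python manage.py migrate myapp        # or plain 'migrate' to run every app

        # South issues ALTERs in place, so existing rows survive anything the migration
        # itself doesn't drop; still worth taking a dump first
        mysqldump -u dbuser -p mydb > pre_migrate_backup.sql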

    Read the article

  • Upgrading from 32 to 64 Bit Windows 7, without losing program installations/games

    - by Fogest
    I recently built a new computer and installed the wrong version of Windows (32-bit), meaning I cannot use all of my RAM. I would like to upgrade to the 64-bit version, but I have already downloaded many programs and games totalling around 30 GB, give or take. My data allowance with my ISP won't let me re-download that much again until next month (and the total will only grow as time goes on). I know there is Windows Easy Transfer, but it is not so much my data itself I'm worried about; it is more having to re-download and install a bunch of games and applications. Is it possible to perform an upgrade from 32-bit to 64-bit without this loss?

    Read the article

  • Booting ubuntu from usb hdd: GRUB menu not shown

    - by emanemos
    Could anyone help me boot Ubuntu 9.04 from a USB hard disk? This disk contains a primary /boot partition. During the Ubuntu installation I used the "Advanced" button and asked for GRUB to be installed to the /boot partition. Later I checked whether the GRUB files are really present on this partition; they are. However, I get stuck while trying to boot: the boot menu ("ubuntu generic version", "ubuntu recovery mode", etc.) is not shown. Instead I am dropped into the GRUB minimal bash-like shell. I am at a loss and have no idea why I end up in this minimal shell. Can anybody tell me what to do?
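
    (Landing in the minimal shell usually means GRUB's first stage loaded but could not find its stage2/menu.lst where it expected them, often because the BIOS numbers the USB disk differently at boot time. A hedged way to get booted from that prompt; (hd1,0) is a guess to adjust, and the path is /grub/menu.lst because /boot is its own partition here.)

        grub> find /grub/menu.lst          # reports which (hdX,Y) actually holds the files
        grub> root (hd1,0)                 # use the device reported above
        grub> configfile /grub/menu.lst    # loads the normal boot menu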

    Read the article

  • Exclude pings from Apache error logs (run from PHP exec)

    - by fooraide
    Now, for a number of reasons, I need to ping several hosts on a regular basis for a dashboard display. I use this PHP function to do it:

        function PingHost($strIpAddr) {
            exec(escapeshellcmd('ping -q -W 1 -c 1 '.$strIpAddr), $dataresult, $returnvar);
            if (substr($dataresult[4],0,3) == "rtt") {
                // We got a ping result, lets parse it.
                $arr = explode("/",$dataresult[4]);
                return ereg_replace(" ms","",$arr[4]);
            } elseif (substr($dataresult[3],35,16) == "100% packet loss") {
                // Host is down!
                return "Down";
            } elseif ($returnvar == "2") {
                return "No DNS";
            }
        }

    The problem is that whenever there is an unknown host, an error gets logged to my Apache error log (/var/log/apache/error.log). How would I go about disabling logging for this particular function? Disabling logs in the vhost is not an option, since logs for that vhost are relevant, just not the pings. Thanks,
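
    (The messages end up in Apache's error log because ping writes its "unknown host" complaint to stderr, and the stderr of commands started with exec() is inherited from Apache. A hedged shell illustration of what happens and of the usual fix, which is appending a stderr redirect inside the string passed to exec(); the exit status, and therefore the "No DNS" branch, is unaffected.)

        # ping's failure message goes to stderr, which under Apache lands in error.log
        ping -q -W 1 -c 1 no.such.host.invalid
        # ping: unknown host no.such.host.invalid    <- this line is stderr

        # the same command with stderr discarded, as it could be built inside exec()
        ping -q -W 1 -c 1 no.such.host.invalid 2>/dev/null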

    Read the article

  • Service or software to auto-sync one server to another so they are identical?

    - by Ryan
    We have a main node in a DC data center and want to set up a backup node in Seattle. The backup node will only be used if the DC node goes down, and we will switch over to it while the DC node is repaired. The question is: what kinds of services out there allow me to sync the data? I suppose we want to do it fairly frequently, so that if something goes down there isn't much data lost between the time of failure and the last backup/sync. Is there any common solution for this? It's Windows Server 2003 running Parallels Virtuozzo.
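
    (For plain file data on Windows Server 2003, one common low-tech option is a scheduled robocopy mirror; robocopy ships in the 2003 Resource Kit. Paths, the share name and the schedule are placeholders, and this covers files only, not databases or the Virtuozzo containers themselves.)

        rem mirror the data share to the Seattle node, retrying briefly on locked files
        robocopy D:\Data \\seattle-node\Data /MIR /Z /R:2 /W:5 /LOG:C:\Logs\sync.log

        rem run the script containing the line above every 30 minutes
        schtasks /Create /SC MINUTE /MO 30 /TN "SyncToSeattle" /TR "C:\Scripts\sync.cmd"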

    Read the article

  • Installing Color-Theme with GNU Emacs 23.2 on OS X Snow Leopard

    - by idclark
    Hi all, I started using Emacs a week ago and I've been unsuccessful in installing color-theme with GNU Emacs 23.2 on OS X. On Ubuntu the whole process took maybe a few minutes with the package manager, but I'm completely at a loss on OS X. What the heck is a "tarball"? I don't have any experience compiling source code. I know Carbon Emacs comes with color-theme packaged; what would I lose by reverting to Emacs 22? I'd prefer staying with GNU Emacs 23 across both systems. Any input is greatly appreciated!
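
    (A "tarball" is just a .tar or .tar.gz archive. color-theme is pure Emacs Lisp, so on OS X it normally only needs to be unpacked and added to the load-path rather than compiled; the version number and paths below are assumptions.)

        # unpack the downloaded archive into ~/.emacs.d
        mkdir -p ~/.emacs.d
        tar xzf color-theme-6.6.0.tar.gz -C ~/.emacs.d

        # then add to ~/.emacs (Emacs Lisp):
        #   (add-to-list 'load-path "~/.emacs.d/color-theme-6.6.0")
        #   (require 'color-theme)
        #   (color-theme-initialize)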

    Read the article
