Search Results

Search found 8144 results on 326 pages for 'thread'.

Page 249/326 | < Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >

  • Running gdb on Ubuntu 9.10 Apache2 Install

    - by AJ
    Hi all, I am trying to run gdb to debug my Ubuntu 9.10 Apache2 install and am having a couple of problems. First, the Apache2 package Ubuntu installs does not seem to include debugging symbols; is there a different version of the package I should be using for development/debugging? Second, when I try to run the binary under gdb, I get an error that looks to be caused by a missing environment variable. Are there additional options I should pass to "run" to get this to work? Here is the output of the debugger session:

    ```
    root@aj-ubuntu:/usr/sbin# gdb apache2
    GNU gdb (GDB) 7.0-ubuntu
    Copyright (C) 2009 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
    This GDB was configured as "x86_64-linux-gnu".
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>...
    Reading symbols from /usr/sbin/apache2...(no debugging symbols found)...done.
    (gdb) run -X
    Starting program: /usr/sbin/apache2 -X
    [Thread debugging using libthread_db enabled]
    apache2: bad user name ${APACHE_RUN_USER}
    Program exited with code 01.
    (gdb)
    ```

    Thanks in advance, -aj
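
    The error itself points at the fix: Apache's init script normally sources /etc/apache2/envvars to define APACHE_RUN_USER and friends, so loading that file into the shell before launching gdb should clear the "bad user name" error, and a symbols package supplies what the stripped binary lacks. A minimal sketch, assuming the apache2-dbg package name (check apt for your release):

    ```
    # Hedged sketch: install debug symbols (package name is an assumption),
    # then load Apache's environment so ${APACHE_RUN_USER} resolves.
    sudo apt-get install apache2-dbg
    . /etc/apache2/envvars             # defines APACHE_RUN_USER, APACHE_RUN_GROUP, ...
    gdb --args /usr/sbin/apache2 -X    # then just type "run" inside gdb
    ```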


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

    ```
    smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...
    ```

    where that PCRE map looks like this:

    ```
    /./   FILTER lmtp:[127.0.0.1]:11124
    ```

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

    ```
    for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
    for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done
    ```

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam: it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected the "dspam" configuration files to offer an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
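
    One hedged approach, assuming "brandon" is declared a trusted user in dspam.conf (a "Trust brandon" line): the dspam client accepts a --user flag, so the corpus can be replayed against the fully-qualified identity without running as root:

    ```
    # Hedged sketch: retrain the fully-qualified user; assumes dspam.conf
    # trusts "brandon" so --user is honored without root.
    for file in Mail/Inbox/*; do
        dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
    done
    for file in Mail/spam/*; do
        dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
    done
    ```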


  • 554 - Sending MTA’s poor reputation

    - by Phil Wilks
    I am running an email server on 77.245.64.44 and have recently started to have problems with remote delivery of emails sent using this server. Only about 5% of recipients are rejecting the emails, but they all share the following common message:

    ```
    Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.
    ```

    As far as I can tell my server is not on any blacklists, and it is set up correctly (the reverse DNS checks out and so on). I'm not even sure what the "sending MTA" is, but I assume it's my server. If anyone could shed any light on this I'd really appreciate it! Here's the full bounce message:

    ```
    Could not deliver message to the following recipient(s):
    Failed Recipient: [email protected]
    Reason: Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means.
    -- The header and top 20 lines of the message follows --
    Received: from 79-79-156-160.dynamic.dsl.as9105.com [79.79.156.160] by mail.fruityemail.com with SMTP; Thu, 3 Sep 2009 18:15:44 +0100
    From: "Phil Wilks"
    To:
    Subject: Test
    Date: Thu, 3 Sep 2009 18:16:10 +0100
    Organization: Fruity Solutions
    Message-ID:
    MIME-Version: 1.0
    Content-Type: multipart/alternative; boundary="----=_NextPart_000_01C2_01CA2CC2.9D9585A0"
    X-Mailer: Microsoft Office Outlook 12.0
    Thread-Index: Acosujo9LId787jBSpS3xifcdmCF5Q==
    Content-Language: en-gb
    x-cr-hashedpuzzle: ADYN AzTI BO8c BsNW Cqg/ D10y E0H4 GYjP HZkV Hc9t ICru JPj7 Jd7O Jo7Q JtF2 KVjt;1;YwBoAGEAcgBsAG8AdAB0AGUALgBoAHUAbgB0AC0AZwByAHUAYgBiAGUAQABzAHUAbgBkAGEAeQAtAHQAaQBtAGUAcwAuAGMAbwAuAHUAawA=;Sosha1_v1;7;{F78BB28B-407A-4F86-A12E-7858EB212295};cABoAGkAbABAAGYAcgB1AGkAdAB5AHMAbwBsAHUAdABpAG8AbgBzAC4AYwBvAG0A;Thu, 03 Sep 2009 17:16:08 GMT;VABlAHMAdAA=
    x-cr-puzzleid: {F78BB28B-407A-4F86-A12E-7858EB212295}

    This is a multipart message in MIME format.
    ------=_NextPart_000_01C2_01CA2CC2.9D9585A0
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: 7bit
    ```
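
    For what it's worth, the "sending MTA" here is indeed the server at 77.245.64.44, and its reputation can be spot-checked from a shell; the Spamhaus zone below is one example list, not the only one worth querying:

    ```
    # Hedged sketch: reverse DNS plus one sample DNSBL lookup.
    # Octets are reversed in the DNSBL query name; NXDOMAIN means "not listed".
    dig -x 77.245.64.44 +short
    host 44.64.245.77.zen.spamhaus.org
    ```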


  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization: our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet compared with spending time to rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions: We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud / AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know, after more optimization, with Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer? What are the key metrics I need to gather from Apache and MySQL, beyond things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer; echoing my previous question, do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and end up finding a way of producing your data/site product more efficiently? Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically - or anybody for that matter)? Thanks very much for your input - I think this thread could be valuable to others as well.
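
    As a starting point on the metrics question, the stock tools below cover most of the baseline numbers worth trending before sizing instances; the 60-second interval is an arbitrary choice:

    ```
    # Hedged sketch: baseline numbers to capture before sizing AWS instances.
    iostat -dxk 60        # per-device disk I/O (await, %util)
    vmstat 60             # memory pressure, swap, run queue
    sar -n DEV 60         # network throughput trends (sysstat package)
    mysqladmin extended-status | grep -E 'Slow_queries|Questions|Threads_running'
    ```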


  • Get details / solve issue with a kernel panic?

    - by Joseph
    I have a Lenovo T430 running Linux Mint 13 (MATE):

    ```
    joseph:~$ uname -a
    Linux joseph-T430-LM 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    ```

    I installed Mint immediately after getting the laptop about two weeks ago, and have noticed that about once a day the computer will completely freeze up: I can't use Ctrl+Alt+Backspace to restart X, I can't use Ctrl+Alt+F1 to get a text-only terminal, I can't move the mouse or type, and if any music was playing it just gets stuck in about a 1-second loop. There is a Windows partition, but I haven't had any issues in Windows. I couldn't find a common thread between the freezes; they were seemingly random (sometimes right after I clicked the mouse, sometimes not; sometimes with Pandora/Flash in use, sometimes not, etc.). I assume they're kernel panics since the machine completely locks up, but the laptop doesn't have a Caps Lock or Scroll Lock LED. It is on a dock and I do have a USB keyboard, but the Scroll Lock/Caps Lock lights do not flash when it happens (I'm not sure whether this indicates it's not a kernel panic, or whether a kernel panic just wouldn't illuminate the LEDs on a USB keyboard attached to a laptop dock). This was annoying but not terrible. However, I've found a way to reproduce it: I have a particular CSV file that, when I open it in LibreOffice Calc and scroll around, triggers the same complete lock-up. I really need to use this file, so I'd like to fix the issue, but at the least it's given me a test case to work with. So, having a case where I can cause this issue, what can I do to better find out what's going on? I've looked in /var/log/syslog but haven't found anything seemingly useful. Any thoughts?
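
    Since the freeze is now reproducible, one way to catch evidence is to watch the logs from a second machine while triggering it; if the kernel dies before anything reaches disk, netconsole is the usual fallback. A sketch, with hostnames and addresses as placeholders:

    ```
    # Hedged sketch: from another machine, watch logs while opening the CSV.
    ssh joseph@t430 'tail -f /var/log/kern.log /var/log/Xorg.0.log'
    # If nothing appears before the hang, stream kernel messages over UDP
    # (addresses are placeholders for your own LAN):
    sudo modprobe netconsole netconsole=@/eth0,6666@192.168.1.20/
    # ...and listen on 192.168.1.20 with:  nc -lu 6666
    ```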


  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5,000 users. By concurrent I mean that 5,000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DL380s, each with two 4-core processors, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with some more high-end hardware? I am particularly curious about:

    - how many servers are needed and how powerful they should be (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disk storage solutions, that is needed

    Another question is how to put everything together, that is, what the optimal architecture is. Clustering with MySQL is rather hard (people complain about MySQL Cluster, even here on Stack Overflow).
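
    A hedged back-of-envelope for the RAM side, using only the numbers already in the question and assuming roughly 10% of the 5,000 users generate a request at any given instant:

    ```
    # Worked estimate (assumptions: 10% instantaneous concurrency, 40 MB/worker):
    #   5000 users x 10% = 500 simultaneous requests
    #   500 workers x 40 MB per Apache thread ~= 20 GB RAM for the web tier alone,
    # before MySQL buffers - which argues for several mid-range web nodes behind
    # a load balancer, or for trimming per-request memory first.
    ```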


  • attach / detach mssql 2008 sql server manager [SOLVED]

    - by Tillebeck
    An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was no longer visible in SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated." Rolling back to the last backup is not the option I want to use. Can anyone give feedback on whether it seems logical and reasonable to assume that a database detached from SQL Server 2008, accessed through SQL Server Manager, cannot be reattached? It was done by right-clicking the database and choosing detach. -- update -- Based on the comments below, I'm updating the question with the server setup. There are two dedicated servers: srv1, a web server with remote desktop and SQL Server Manager; and srv2, the SQL server, which can be accessed through the SQL Server Manager on the web server. -- update2 -- After a restart of the server the DBA could suddenly attach the database, and I guess that after the restart it was a simple task. So all of your answers were right! It seems I can only mark one answer as correct, so I marked the first one, but all are correct answers. Thanks a lot. Without the link to this thread we might have had to suffer through watching our database being restored from a backup :-) Thanks a lot. BR. Anders
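
    For reference, reattaching a detached database is normally a one-liner run against the instance; a sketch via sqlcmd, where the server, database name and file paths are all illustrative:

    ```
    :: Hedged sketch (run from srv1; names and paths are illustrative only).
    sqlcmd -S srv2 -E -Q "CREATE DATABASE MyDb ON (FILENAME='D:\Data\MyDb.mdf'), (FILENAME='D:\Data\MyDb_log.ldf') FOR ATTACH;"
    ```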


  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my Ubuntu 10.04 LTS server requires mcrypt, and is generating this error: Fatal error: Call to undefined function mcrypt_module_open(). I know this is the same question as this one: Installed php-mcrypt but it doesn't show up in phpinfo(), but I tried several things, none of which worked, and I have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of PHP and mcrypt are (both installed via apt-get):

    ```
    php:    5.3.2-1ubuntu4.10
    mcrypt: 5.3.2-0ubuntu
    ```

    Running php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but I still can't get mcrypt to show up in phpinfo(), and calls to mcrypt functions are still broken. Here is some more info:

    ```
    $ php -i | grep "mcrypt"
    /etc/php5/cli/conf.d/mcrypt.ini,
    mcrypt
    mcrypt support => enabled
    mcrypt.algorithms_dir => no value => no value
    mcrypt.modes_dir => no value => no value
    ```

    I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using FastCGI with nginx. I googled around and saw suggestions to restart php5-fpm, but I couldn't find php5-fpm in apt-get, and I'm not sure I still need php5-fpm since I already have FastCGI. Is there anything else I'm missing?
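
    One hedged check: php -i and php -m use the CLI SAPI, while nginx's FastCGI pool uses the CGI SAPI, so it is worth confirming mcrypt under php-cgi specifically and then restarting whatever spawns the pool. The init script name below is a guess; on 10.04 it is often a custom spawn-fcgi job rather than php5-fpm:

    ```
    # Hedged sketch: verify the SAPI nginx actually uses, then bounce it.
    php-cgi -m | grep -i mcrypt           # should print "mcrypt"
    sudo killall php5-cgi                 # stop the FastCGI workers...
    sudo /etc/init.d/php-fastcgi start    # ...and restart them (hypothetical script name)
    ```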


  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR.)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC. We were also sold CommVault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and CommVault both sent respective engineers, who set up the library and the software. CommVault was running fine, insofar as the controller was working fine. The Dell server has Ubuntu 12.04 server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is the theoretical maximum for read performance on the NAS; in theory we should then be able to get that performance out of a) CommVault and b) any other backup agent. Not the case. CommVault only ever gives me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "**$.$ Refunds Plz $.$**".

    #-#-# Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution. (See the sketch after the configuration files below.)
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }
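
    One hedged idea, not taken from the Bacula manual verbatim, so verify it against your release: the storage daemon will happily queue a second job onto an already-mounted drive unless each Device is capped, so capping per-device concurrency in bacula-sd.conf, combined with the Prefer Mounted Volumes = no already set, may force job two onto Drive-2:

    ```
    # Hedged sketch (bacula-sd.conf): cap each drive at one job so a second
    # concurrent job must mount the other drive instead of queueing.
    # "Maximum Concurrent Jobs" on a Device resource is an assumption here -
    # confirm the directive exists in your Bacula version's manual.
    Device {
      Name = Drive-1
      # ...existing directives unchanged...
      Maximum Concurrent Jobs = 1
    }
    Device {
      Name = Drive-2
      # ...existing directives unchanged...
      Maximum Concurrent Jobs = 1
    }
    ```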


  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (lucid)

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (lucid)? The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 and ImageMagick 6.6.2, and everything works fine; on the server I have 6.5.x and I have the problem. Reinstalling Ubuntu as 11.x is not a solution. I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (lucid) list to the one from Ubuntu 11.04 (natty) and updating ImageMagick. After this I had ImageMagick 6.6.2 (I checked phpinfo()), but ImageMagick no longer works; if I try any action I get this error:

    ```
    [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'
    ```

    How do I fix this, or how do I return to the old version of ImageMagick? And there is a problem if I try to install from source:

    ```
    /tmp/image/ImageMagick-6.7.2-7# ./configure
    configuring ImageMagick 6.7.2-7
    checking build system type... i686-pc-linux-gnu
    checking host system type... i686-pc-linux-gnu
    checking target system type... i686-pc-linux-gnu
    checking whether build environment is sane... yes
    checking for a BSD-compatible install... /usr/bin/install -c
    checking for a thread-safe mkdir -p... /bin/mkdir -p
    checking for gawk... no
    checking for mawk... mawk
    checking whether make sets $(MAKE)... yes
    checking for style of include used by make... GNU
    checking for gcc... gcc
    checking whether the C compiler works... no
    configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
    configure: error: C compiler cannot create executables
    See `config.log' for more details
    /tmp/image/ImageMagick-6.7.2-7#
    ```
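
    "C compiler cannot create executables" almost always means there is no working toolchain on the server rather than anything ImageMagick-specific. A hedged sketch; librsvg2-dev as the SVG delegate is an assumption:

    ```
    # Hedged sketch: install a toolchain plus an SVG delegate, then build.
    sudo apt-get install build-essential checkinstall librsvg2-dev
    ./configure && make
    sudo checkinstall    # builds a removable .deb instead of a raw "make install"
    ```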


  • Can I edit the snapshots of websites on the most visited sites page in Chrome?

    - by arik
    I saw the previous chain of discussion, but did not find a way to continue the thread. I have found the "Preferences" file (I have Windows 7), but did not find clearly what to modify in it. I did find a section called 'URLs pinned' or something like that, but it did not fully match the ones I have. I have activated profile sync for Chrome - I don't know if that has any effect. Can you manually edit the icons of the most visited sites on the new tab page in Chrome? When I open a new tab in Chrome, I get the 'new page' tab, where I selected 'most popular', so I have 8 icons for the 8 most popular sites I have visited; I can also pin any of the ones I see on screen, so that they remain 'permanent' there. We are missing one that would point to, say, Hotmail, so I was looking for a way to add Hotmail as one of the 8. (And, by the way, we clicked the 'X' on one of those 8, and now it remains grey / shows nothing in it.) So, my double question: How can I add a URL of my choice to one of those 8 spots? And how can I restore the use of the last one?


  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then, I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then, I proceeded to install the Android SDK Starter Package: installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. Neither solved the problem. It appears I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check 'Contact all update sites during install to find required software'. We'll see how this goes...

    Update (3): It worked! (Going to accept @bubu's answer shortly.) But why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?
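
    For anyone hitting the same wall with the .exe installer, the usual workaround before falling back to the zip is to point the environment at the JDK explicitly; the path below matches a default jdk-6u23 install and is an assumption:

    ```
    :: Hedged sketch (cmd.exe): make the JDK visible to the SDK installer.
    setx JAVA_HOME "C:\Program Files\Java\jdk1.6.0_23"
    setx PATH "%PATH%;%JAVA_HOME%\bin"
    :: Some report that simply clicking Back, then Next again on the
    :: installer's JDK detection page makes it re-detect the JDK.
    ```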


  • Patching Synaptics Driver In Ubuntu

    - by danben
    I've got an HP Envy with the new Synaptics ClickPad - the device is not supported out of the box by Ubuntu, so I downloaded this patch (https://patchwork.kernel.org/patch/67335/), applied it to synaptics.c in the Linux kernel source, rebuilt the kernel, and booted from it. Nothing changed. I'm new to Linux and don't know much about driver configuration; how can I check that the newly produced synaptics.o file is actually being used somewhere? Or, if it is obvious, what step did I leave out? Separately, I tried following the instructions in this thread: http://georgia.ubuntuforums.org/showpost.php?p=8989185&postcount=11 There, someone posted the patch along with a Makefile and instructions for installation. When I followed those, my mouse stopped working altogether. Not desirable, but at least I know it had some effect. The Makefile is posted below; I get the gist of what it's doing, but don't have enough knowledge to figure out why it would cause the device to stop responding entirely.

    ```
    obj-m := psmouse.o
    psmouse-objs := psmouse-base.o synaptics.o
    psmouse-$(CONFIG_MOUSE_PS2_ALPS) += alps.o
    psmouse-$(CONFIG_MOUSE_PS2_ELANTECH) += elantech.o
    psmouse-$(CONFIG_MOUSE_PS2_OLPC) += hgpk.o
    psmouse-$(CONFIG_MOUSE_PS2_LOGIPS2PP) += logips2pp.o
    psmouse-$(CONFIG_MOUSE_PS2_LIFEBOOK) += lifebook.o
    psmouse-$(CONFIG_MOUSE_PS2_TRACKPOINT) += trackpoint.o
    psmouse-$(CONFIG_MOUSE_PS2_TOUCHKIT) += touchkit_ps2.o

    KDIR := /lib/modules/$(shell uname -r)/build
    PWD := $(shell pwd)

    default:
    	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

    install:
    	@cp psmouse.ko /lib/modules/$(shell uname -r)/kernel/drivers/input/mouse/
    ```
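
    To answer the "how can I check" part: the loaded module's file and version are visible via modinfo and lsmod, and a freshly built psmouse.ko can be swapped in live. A sketch, run from the build directory:

    ```
    # Hedged sketch: confirm which psmouse the kernel is using, then swap in
    # the patched build without rebooting.
    modinfo psmouse | grep -E '^(filename|srcversion)'
    lsmod | grep psmouse
    sudo rmmod psmouse && sudo insmod ./psmouse.ko    # revert: sudo modprobe psmouse
    ```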


  • Sophos UTM in Hyper-V

    - by TheD
    So, I had a previous thread about this: Virtualizing Firewalls/UTM. Essentially, I have configured what I think should work, but networking isn't my strong point! There are two virtual adapters, with IP addresses 192.168.0.2 (external) and 192.168.0.3 (internal) respectively. The external adapter looks at 192.168.0.1 (my ZyXEL) for its default gateway. The internal adapter, 192.168.0.3, which is what the Sophos UTM listens on, has its default gateway set to 192.168.0.2, the IP of the external LAN interface. So:

    ```
    PC (192.168.0.11, DHCP) --> (LAN) --> Switch --> 192.168.0.3 (internal LAN interface IP) --> Sophos UTM --> 192.168.0.2 (external LAN interface IP) --> 192.168.0.1 --> Internet
    ```

    Would this be the correct setup, or am I completely out of the game here? Cheers!


  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3. The server runs Profile Manager and VPN services. My Active Directory users are able to authenticate to Profile Manager, but not to the VPN. I have found several threads on other forums from users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619 It appears the issue is related to a CHAP authentication failure. Can anyone suggest what troubleshooting steps I might take next? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com.

    ```
    Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
    Jun 6 15:25:03 --- last message repeated 2 times ---
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
    Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
    Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
    Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up
    ```


  • How do you install .NET 4 on a Server 2008 R2 machine through PSRemoting in PowerShell?

    - by Jake
    I need to write a script that installs .NET 4 remotely, using PowerShell, on a group of Server 2008 R2 machines. I based my script on http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/.

    ```
    enter-pssession -computername localhost
    $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
    $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
    Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru
    ```

    After running the command I get the following log errors (running the same lines locally installs .NET without error):

    ```
    Action: Downloading Item
    Failed to CreateJob : hr= 0x80200014
    Action: Performing actions on all Items
    Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
    Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
    Exe Log File: dd_SetupUtility.txt
    Action complete
    Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
    ServiceControl operation succeeded!
    Action complete
    Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
    ServiceControl operation succeeded!
    Action complete
    Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
    Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
    PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
    Action complete
    OnFailureBehavior for this item is to Rollback.
    Action: Performing actions on all Items
    Action complete
    Action complete
    Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
    WinHttpDetectAutoProxyConfigUrl failed with error: 12180
    Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
    Action complete
    C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
    C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
    Action complete
    Decompression completed with code: 16389
    Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
    Action complete
    Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).
    ```

    Is there some security setting or perhaps something else I've missed?
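
    The 0x5 Access Denied on the embedded .msu is consistent with the restricted token that remote sessions get; a commonly reported workaround, sketched here with a hypothetical server name and path, is to launch the installer through a one-shot scheduled task running as SYSTEM, outside the remoting session:

    ```
    # Hedged sketch (PowerShell): run the setup outside the remoting token.
    Invoke-Command -ComputerName server01 -ScriptBlock {
        schtasks /create /tn "InstallNet4" /ru SYSTEM /sc once /st 23:59 `
            /tr "C:\temp\dotNetFx40_Full_setup.exe /q /norestart"
        schtasks /run /tn "InstallNet4"
    }
    ```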


  • Parental Controls in Ubuntu - per user

    - by Hamish Downer
    I would like to set up parental controls on Ubuntu for a friend of mine. I want the child's user account to have the controls applied, while the parent's user account is unrestricted. To be clear, they are sharing one computer, so a router-based solution won't help. And I would like a set of step-by-step instructions for doing this - just one way of doing it. I'm an experienced Ubuntu user, happy at the command line, and I've spent quite some time googling for this along the way. I hope the GChildCare project will eventually make this easy, but it is not ready yet. In the meantime, the WebContentControl GUI provides a way of managing parental controls, but applies them to every user on the computer (easy WebContentControl install instructions, plus detailed instructions, discussion and related links on ubuntuforums). The ubuntuforums post has a FAQ stating that user-specific configuration is not possible with WebContentControl, and then provides 3 links the author used to do it himself. But they are far from step-by-step instructions: there is this thread, which is notes along the way, linking to this article about squid and dansguardian, and then to two dansguardian articles that are somewhat in depth... So does anyone know of an existing guide to setting up parental controls on Ubuntu with some users not affected? If no one comes up with an answer after a little while, I'll set up a community wiki answer so we can build a guide.


  • Tomcat web application intermittent freeze

    - by tinny
    I have a Grails web application (just a standard war file) deployed on an Ubuntu 10.10 server running Tomcat 6. My database is PostgreSQL. The problem is that every so often (once or twice a day, after inactivity) when I try to log into this web application, it just freezes. I can navigate to the login page, but when I try to log in (the first time the DB is hit - might be a clue?) the application freezes indefinitely: no 500 response code, the browser just waits and waits. I followed the instructions detailed here because the problem described sounded the same as mine. My GC logging showed no long-running GC; all collections were sub-second. When the application freezes, the jmap heap output is:

    ```
    using parallel threads in the new generation.
    using thread-local object allocation.
    Concurrent Mark-Sweep GC

    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 536870912 (512.0MB)
       NewSize          = 21757952 (20.75MB)
       MaxNewSize       = 87228416 (83.1875MB)
       OldSize          = 65404928 (62.375MB)
       NewRatio         = 7
       SurvivorRatio    = 8
       PermSize         = 21757952 (20.75MB)
       MaxPermSize      = 85983232 (82.0MB)

    Heap Usage:
    New Generation (Eden + 1 Survivor Space):
       capacity = 19595264 (18.6875MB)
       used     = 11411976 (10.883308410644531MB)
       free     = 8183288 (7.804191589355469MB)
       58.23843965562291% used
    Eden Space:
       capacity = 17432576 (16.625MB)
       used     = 9249296 (8.820816040039062MB)
       free     = 8183280 (7.8041839599609375MB)
       53.05754009046053% used
    From Space:
       capacity = 2162688 (2.0625MB)
       used     = 2162680 (2.0624923706054688MB)
       free     = 8 (7.62939453125E-6MB)
       99.99963008996212% used
    To Space:
       capacity = 2162688 (2.0625MB)
       used     = 0 (0.0MB)
       free     = 2162688 (2.0625MB)
       0.0% used
    concurrent mark-sweep generation:
       capacity = 101556224 (96.8515625MB)
       used     = 83906080 (80.01907348632812MB)
       free     = 17650144 (16.832489013671875MB)
       82.62032270912317% used
    Perm Generation:
       capacity = 85983232 (82.0MB)
       used     = 62866832 (59.95448303222656MB)
       free     = 23116400 (22.045516967773438MB)
       73.1152232100324% used
    ```

    Does anyone know what "From Space:" is? Any ideas on further fault-finding? I don't have much experience with this type of troubleshooting.
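
    Two hedged fault-finding steps: a thread dump taken while the app is frozen will show whether request threads are parked on a JDBC socket, and the "first DB hit after inactivity" pattern often points at idle pooled connections silently dropped by the database or a firewall:

    ```
    # Hedged sketch: capture state while the app is frozen.
    jstack -l $(pgrep -f catalina) > /tmp/tomcat-threads.txt   # look for threads stuck in socketRead0
    # If they are stuck on JDBC reads, configure the connection pool to
    # validate idle connections (e.g. a validationQuery of "SELECT 1").
    ```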


  • How to set up QoS on ADSL router (terracom) for prioritizing browsing

    - by DBZ_A
    I want to configure the ADSL router which connects 10+ machines to the internet. I want to give maximum priority to browsing (ports 80, 443) and set low priority for BitTorrent etc. (port 42180). I have been experimenting with settings, but with no luck. There are three settings which confuse me; here is my current understanding of each:

    - 802.1 priority: relates to the LAN level; possible values are 0-7, with higher numbers meaning higher priority.
    - 'Mark traffic priority': clueless about this one.
    - IPP/DS (IP Precedence): possible values are 0-7; 6 and 7 are reserved, so set 5 for highest priority. Or, when using DSCP, set 46 for highest priority.

    Please help me get this done... A similar question for another model of router is here, but with fewer confusing options :) How to configure QoS on home router. Update: from discussion on another thread, QoS can control only upstream traffic (from the router to the internet); while this may in turn affect the downstream rate, there is no direct control over data coming into the router.


  • DNS Resolution doesn't work after uninstalling Cisco VPN & Deterministic Network Enhancer in Win 7

    - by Craig M
    I just upgraded my home PC to Windows 7 Ultimate 32-bit. After trying various methods to get the Cisco VPN client to work, I gave up and decided to just run it in XP Mode. The last steps I tried were in this article: http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/d880dfe5-7f44-4955-8620-2a9355d8ea8b/ After that, I uninstalled the Cisco client and rebooted, then uninstalled the Deterministic Network Enhancer and rebooted again. Both uninstalled successfully, but now I'm not able to resolve any DNS. The only way I can resolve DNS is to reinstall the DNE, reboot, and uninstall the DNE; then I am able to resolve DNS lookups until I reboot again. Once I've rebooted, no more DNS. Any ideas? Edit: I completely forgot I'd asked this question until harrymc posted his answer. I've since found that to work around this problem I need to disable my Local Area Connection and re-enable it. Once I do that, I have no trouble making network connections until the next time I reboot, at which point I repeat the process. It's annoying, but manageable, since I reboot very infrequently.
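
    A hedged way to script the workaround, plus a winsock reset, which is the step most often reported to clear leftover DNE filter damage (use whatever interface name appears in ncpa.cpl):

    ```
    :: Hedged sketch (elevated cmd.exe): reset winsock once, then script the
    :: disable/re-enable dance until the root cause is found.
    netsh winsock reset
    netsh interface set interface name="Local Area Connection" admin=disabled
    netsh interface set interface name="Local Area Connection" admin=enabled
    ```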


  • Network share not always available on Windows 2003

    - by JP Hellemons
    Hello everybody, we have a Windows 2003 server with a shared directory/folder. I've seen this thread, but it wasn't any help: http://superuser.com/questions/58890/the-specified-network-name-is-no-longer-available I have a ping -t running from 3 PCs (one Vista and two Windows 7); they all work. The problem occurs when two users enter the network share: the 'network share is no longer available' error appears and the Explorer windows turn white. After F5/refresh, the shared directory is back. This is really strange. There is no antivirus or Kaspersky running on either end, and this is all within the same LAN. The internet connection is really stable, so it's really strange, because a stable internet connection should imply that the local network connection is also stable and that this is a Windows issue. Could it be a router issue? I have checked the event log on the server for disk-failure-related messages, but there are none. EDIT: could this be related to mapping a shared directory to a drive letter, and to there being a router between me and the mapped network drive? Or is it just Windows not working well with two users on the same shared folder? Should I install Samba or something?


  • Which linux distributions offer seamless support for UEFI and an LVM root out of the box?

    - by Jannik Jochem
    My new ultrabook (an Asus UX32VD) requires UEFI in order to boot from the internal hard disk. I use an LVM partition which contains my root fs, and I dual-boot Windows 8. I somehow managed to get this working on Sabayon Linux; however, the overall process was pretty painful, and system upgrades keep breaking my configuration because everything depends on a hand-configured kernel and a hand-crafted GRUB2 configuration. This causes a lot of hassle and distraction for me, so I am considering switching to a different distribution. However, I cannot find any concrete resources that precisely document the state of UEFI support in the popular distributions. As an example, the length of the Ubuntu wiki page on UEFI suggests that installing on UEFI systems is a non-trivial process, and this AskUbuntu thread on encrypted LVM on UEFI systems suggests that LVM might also be a problem. I know this question seems somewhat open-ended, so I'll formulate concrete questions: Are there any Linux distributions with an installer that supports installing to an LVM root in a UEFI boot setting where Windows 8 is dual-booted? And which distributions support UEFI without having to jump through hoops to bootstrap into a UEFI-booted system, or requiring manual configuration of the boot manager?


  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week the VM stopped unexpectedly, and now MySQL will not start on it. These two events may be related, they may not be. When I try to connect:

    ```
    $ mysql
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
    ```

    Then:

    ```
    $ sudo service mysql start
    start: Job failed to start
    ```

    And:

    ```
    $ dmesg
    [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
    [ 1838.358656] init: mysql main process (18477) terminated with status 1
    [ 1838.358695] init: mysql main process ended, respawning
    [ 1839.269303] init: mysql post-start process (18478) terminated with status 1
    ```

    And:

    ```
    $ service mysql status
    mysql stop/waiting
    ```

    I think this means mysql is crashing when it starts:

    ```
    $ sudo mysqld start
    130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
    InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
    InnoDB: about forcing recovery.
    02:51:24 UTC - mysqld got signal 6 ;
    ```

    Per the manual, I went to the data directory (/var/lib/mysql) and ran:

    ```
    myisamchk --silent --force */*.MYI
    ```

    Then:

    ```
    $ sudo mysqld
    ...
    InnoDB: Your database may be corrupt or you may have copied the InnoDB
    InnoDB: tablespace but not the InnoDB log files. See
    InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
    InnoDB: for more information.
    ...
    ```

    Is my database corrupt? What can I do to recover? Re-install MySQL? Something less drastic? I'm fine with losing the database; I just want a working system.
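
    Given the assertion in fut0lst.ic, this is InnoDB corruption, which myisamchk cannot touch; the forcing-recovery procedure from the page the error itself cites is the usual route. A sketch, with the recovery level as a value to tune rather than a recommendation:

    ```
    # Hedged sketch: start InnoDB crippled-but-readable, dump, then rebuild.
    # Add to e.g. /etc/mysql/conf.d/recovery.cnf:
    #   [mysqld]
    #   innodb_force_recovery = 1    # raise (up to 6) only until mysqld starts
    sudo service mysql start
    mysqldump --all-databases > /root/all-databases.sql   # salvage what is readable
    # then remove the option, re-initialize the InnoDB tablespace, and reimport.
    ```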


  • GlusterFS is failing to mount on boot

    - by J. Pablo Fernández
    I'm running the official GlusterFS 3.5 packages on Ubuntu 12.04 and everything seems to be working fine, except mounting the GlusterFS volumes at boot time. This is what I see in the log files:

    ```
    [2014-06-13 08:52:28.139382] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0 (/usr/sbin/glusterfs --volfile-server=koraga --volfile-id=/private_uploads /var/www/shared/private/uploads)
    [2014-06-13 08:52:28.147186] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
    [2014-06-13 08:52:28.147237] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
    [2014-06-13 08:52:28.148183] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 176.58.113.205:24007 failed (Connection refused)
    [2014-06-13 08:52:28.148236] E [glusterfsd-mgmt.c:1601:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: koraga (No data available)
    [2014-06-13 08:52:28.148251] I [glusterfsd-mgmt.c:1607:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
    [2014-06-13 08:52:28.148477] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x27) [0x7fe077f8e0f7] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x1a4) [0x7fe077f91cc4] (-->/usr/sbin/glusterfs(+0xcada) [0x7fe078655ada]))) 0-: received signum (1), shutting down
    [2014-06-13 08:52:28.148513] I [fuse-bridge.c:5444:fini] 0-fuse: Unmounting '/var/www/shared/private/uploads'.
    ```

    My fstab contains:

    ```
    proc /proc proc defaults 0 0
    /dev/xvda / ext4 noatime,errors=remount-ro 0 1
    /dev/xvdb none swap sw 0 0
    /dev/xvdc /var/lib/glusterfs/brick01 ext4 defaults 1 2
    koraga:/private_uploads /var/www/shared/private/uploads glusterfs defaults,_netdev 0 0
    ```

    Any ideas what's going on and/or how to fix it?
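
    The "Connection refused" to port 24007 suggests the mount is racing glusterd at boot. A commonly cited workaround on 12.04, sketched here, is to let boot proceed and retry the mount once the daemon is up; "nobootwait" keeps mountall from blocking:

    ```
    # Hedged sketch (/etc/fstab): don't block boot on the gluster mount...
    koraga:/private_uploads /var/www/shared/private/uploads glusterfs defaults,_netdev,nobootwait 0 0
    # ...and retry it late in boot, e.g. from /etc/rc.local:
    sleep 10 && mount /var/www/shared/private/uploads || true
    ```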


  • Strange ASP.NET Queue Performance Counters Behavior?

    - by LemurTech
    We have an ASP.NET 2.0 site running in classic mode. I am seeing very strange behavior in the performance counter values. Perhaps these are bugs (I've been all over Google trying to verify this, without much luck), or perhaps it is just my inexperience with monitoring these things. This PerfMon graph (http://imgur.com/Jv5io5J) represents a load test where I add up to 350 virtual users to the site, at a rate of about 1/sec, performing relatively simple page browsing. At the end of the test, I gradually taper off the number of users. This is a 4-CPU server, and the relevant machine.config settings are at the defaults. The solid blue line is ASP.NET Apps v2.x\Requests Executing for the application in question. The profile makes perfect sense, with a quick ramp-up to 32 executing requests (minWorkerThreads x 4 CPUs), followed by a slower ramp-up to 48 ((maxWorkerThreads - minWorkerThreads) x 4 CPUs). The solid yellow line is ASP.NET v2.x\Requests Queued. Again, this makes sense: after the initial 32 request threads are activated, the queue begins to build as new thread initialization can't keep pace with incoming requests. But as executing requests reaches its highest possible value of 48, the counter for ASP.NET Apps v2.x\Requests Queued (solid green line) suddenly springs to life and keeps step with the yellow counter. As far as I can tell, and with no other apps running on the server, these two counters should have had the same values from the start. One other odd thing: the counter for ASP.NET v2.x\Request Wait Time (dotted yellow line) also does not spring to life until executing requests reaches 48. Shouldn't I be seeing values here from the moment ASP.NET v2.x\Requests Queued begins to build? And likewise, why would ASP.NET Apps v2.x\Request Execution Time (dotted blue) increase significantly only after that peak of 48 is reached? Shouldn't it ramp up gradually along with queued requests?

