Search Results

Search found 16324 results on 653 pages for 'per thread'.


  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site file to AllowOverride AuthConfig, but I always get the error message "not allowed here" when putting any instructions in the .htaccess file.

    EDIT: default site file:

      <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/
        <Directory />
          Options FollowSymLinks
          Order allow,deny
          Allow from all
          AllowOverride All
        </Directory>
        <Directory /var/www/>
          Options Indexes FollowSymLinks Includes
          #AllowOverride All
          #AllowOverride Indexes AuthConfig Limit FileInfo
          AllowOverride AuthConfig
          Order allow,deny
          Allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
          Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
      </VirtualHost>

    .htaccess:

      #Options +FollowSymlinks
      # Prevent directory listing
      Options -Indexes
      # Prevent direct access to files
      <FilesMatch "\.(tpl|ini)">
        Order deny,allow
        Deny from all
      </FilesMatch>
      # SEO URL Settings
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    PHP info:

      apache2handler
      Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
      Apache API Version = 20051115
      Server Administrator = webmaster@localhost
      Hostname:Port = hw-linux.homework:80
      User/Group = www-data(33)/33
      Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
      Timeouts = Connection: 300 - Keep-Alive: 15
      Virtual Server = Yes
      Server Root = /etc/apache2
      Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
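    In Apache 2.2 the "not allowed here" error usually means a directive group used in .htaccess is not covered by AllowOverride. The .htaccess above uses Options (-Indexes), Order/Deny rules (the Limit group) and mod_rewrite directives (the FileInfo group), so AuthConfig alone would not be enough. A minimal sketch of a /var/www/ block that would permit the directives shown, offered as an illustration rather than a tested fix:

      <Directory /var/www/>
        Options Indexes FollowSymLinks Includes
        # Options  -> allows "Options -Indexes" in .htaccess
        # FileInfo -> allows the mod_rewrite directives
        # Limit    -> allows the Order/Deny rules in <FilesMatch>
        AllowOverride Options FileInfo Limit AuthConfig
        Order allow,deny
        Allow from all
      </Directory>

    After editing the vhost, Apache has to be reloaded (e.g. /etc/init.d/apache2 reload on Lenny) for the change to take effect.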

    Read the article

  • Are Colocation Cross Connects Worthwhile?

    - by SvrGuy
    We currently operate three clusters of colocated machines in different data centers. Recently, I became aware that our newest data center will cross connect us to a bandwidth provider free of charge. In the past, I never really investigated a cross connect for bandwidth because I figured that the rates would be similar to what we are paying the colo now, and that it would reduce our resiliency (because we would only be using one or two carriers for IP, whereas the colo uses, say, 8 different providers). Then I saw an ad for Hurricane Electric internet services (http://he.net/cgi-bin/ip_transit_quote) that quoted IP transit at $1/Mbps, which is much better than the $30/Mbps we pay for blended bandwidth. What are people out there typically paying for bandwidth via cross connect, and how hard is it to set up? Is my understanding correct that you open agreements with two or three ISPs, cross connect to them, and then configure your top-of-rack router on their networks? Can you really get IP transit down to a couple of dollars per megabit per month just by doing the routing yourself? Or is my understanding of cross connection fundamentally wrong?

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this from an academic standpoint rather than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google result for this is this thread, but it merely links to a cryptic slideshow and a general overview of different I/O strategies. All other results simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying aio structure I thought it would compete, yet nginx still wins by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought was keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
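    For what it's worth, the usual explanation is architectural: nginx runs a small, fixed number of worker processes, each multiplexing thousands of sockets over epoll/kqueue, and it serves static files with sendfile() so the data never passes through userspace. A rough sketch of those two ideas in Python, offered purely as an illustration (this is not how nginx is implemented, and error handling and partial I/O are omitted):

      import os, socket, selectors

      sel = socket_selector = selectors.DefaultSelector()   # epoll on Linux
      srv = socket.socket()
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("0.0.0.0", 8080))
      srv.listen(512)
      srv.setblocking(False)
      sel.register(srv, selectors.EVENT_READ)

      DOC = "index.html"                          # hypothetical file to serve

      def serve(conn):
          conn.recv(4096)                         # read (and ignore) the request
          size = os.path.getsize(DOC)
          conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % size)
          with open(DOC, "rb") as f:
              os.sendfile(conn.fileno(), f.fileno(), 0, size)  # kernel-to-kernel copy
          conn.close()

      while True:                                 # one loop handles every connection
          for key, _ in sel.select():
              if key.fileobj is srv:
                  conn, _ = srv.accept()
                  serve(conn)                     # no fork, no thread per request
                  # a real server would register conn non-blocking and
                  # handle partial reads/writes through the same loop

    Per-request process creation, file open/close and userspace copying are exactly the costs this model avoids; on top of that, nginx can also cache open file descriptors and stat() results across requests (open_file_cache).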

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread starts its own postgres backend via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will update a row of these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
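    On the first question, the two settings are not the same kind of memory at all; a short annotated sketch of the relevant postgresql.conf lines (the values shown are the ones mentioned above, not recommendations):

      # postgresql.conf (illustrative)
      shared_buffers       = 1GB     # memory PostgreSQL actually allocates in shared memory
                                     # and manages itself as a page cache
      effective_cache_size = 57GB    # allocates nothing: a planner hint describing how much
                                     # OS filesystem cache is expected to be available

    In other words, the 57 GB "cache" is really the kernel's page cache doing the work; PostgreSQL itself only holds the 1 GB of shared_buffers.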

    Read the article

  • Squirrelmail receiving duplicate emails

    - by Austin
    A client of mine is experiencing issues with his email: it appears that whenever he receives email from a certain domain it arrives in duplicate. Not only are they duplicates, but the duplicated items have a (+) sign next to them, which usually indicates an attachment. Could this be because of a forwarding issue? Here are the headers:

      Return-Path: <[email protected]>
      Received: from bigcat.centralmasswebdesign.com (root@localhost) by tarbellconstruction.com (8.13.1/8.13.1) with ESMTP id o4OFnO23003379 for <[email protected]>; Mon, 24 May 2010 11:49:24 -0400
      X-ClientAddr: 72.249.26.200
      Received: from mf3.spamfiltering.com (mf3.spamfiltering.com [72.249.26.200]) by bigcat.centralmasswebdesign.com (8.13.1/8.13.1) with ESMTP id o4OFnOjF005520 for <[email protected]>; Mon, 24 May 2010 11:49:24 -0400
      X-Envelope-From: [email protected]
      X-Envelope-To: [email protected]
      Received: From 67-132-16-226.dia.static.qwest.net (67.132.16.226) by mf3.spamfiltering.com (MAILFOUNDRY) id 6lzIAmdLEd+oFQAw for [email protected]; Mon, 24 May 2010 15:49:23 -0000 (GMT)
      Received: from mail pickup service by WMA2-EXCH1.NELCO-USA.net with Microsoft SMTPSVC; Mon, 24 May 2010 11:49:18 -0400
      Content-Transfer-Encoding: 7bit
      Importance: normal
      Priority: normal
      X-MimeOLE: Produced By Microsoft MimeOLE V6.00.3790.4325
      Content-Class: urn:content-classes:message
      MIME-Version: 1.0
      Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CAFB58.AAB268D0"
      Subject: weekly activity report for week ending May 22, 2010
      Date: Mon, 24 May 2010 11:49:16 -0400
      Message-ID: <15BCC4D99E8CBF48A2FA37A318CFF5C801209CCC@wma2-exch1.NELCO-USA.net>
      X-MS-Has-Attach: yes
      X-MS-TNEF-Correlator:
      Thread-Topic: weekly activity report for week ending May 22, 2010
      thread-index: Acr7WKpdCelRCiocT1eBY2YN5Ma8DA==
      From: "Mike LeBlanc" <[email protected]>
      To: "Keith Berube" <[email protected]>, "Ken Tarbell" <[email protected]>
      X-OriginalArrivalTime: 24 May 2010 15:49:18.0361 (UTC) FILETIME=[AB546890:01CAFB58]

    Read the article

  • Linux Experts Riddle: Network output of 10 MB/s on a 10 Gb/s NIC

    - by user150324
    I have two CentOS 6 servers and I am trying to transfer files between them. The source server has a 10 Gb/s NIC and the destination server has a 1 Gb/s NIC. Regardless of the command or protocol used, the transfer speed is ~1 megabyte per second. The goal is at least a couple dozen MB per second. I have tried: rsync (also with various encryption settings), scp, wget, aftp, nc. Here are some test results with iperf:

      [root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
      ------------------------------------------------------------
      Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
      TCP window size: 64.0 KByte (default)
      ------------------------------------------------------------
      [  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
      [  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
      [  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
      [  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
      [  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
      [  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
      [  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
      [  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
      [  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
      [  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
      [  3]  0.0-10.0 sec  1.68 MBytes  14.0 Mbits/sec

    I guess the HD is not the bottleneck here.
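    One thing worth ruling out (a suggestion, not a diagnosis): the iperf run above used the default 64 KB TCP window, which can itself cap throughput on higher-latency paths, and a mis-negotiated link would show similar numbers. A few quick checks that separate TCP tuning from link problems (interface names are assumptions):

      # retest with a larger TCP window and several parallel streams
      # (on the server side: iperf -s -w 4M)
      iperf -c XXX.XXX.XXX.XXX -w 4M -P 4 -i 1

      # confirm the negotiated speed and duplex on both NICs
      ethtool eth0 | grep -E 'Speed|Duplex'

      # look for errors or drops on the interfaces
      ip -s link show eth0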

    Read the article

  • Is there a way to force a spam filter to change its policy, or to have it removed as a recognized spam service?

    - by Alvin Caseria
    As per MxToolbox, I have one blacklist listing that has remained active for quite some time now. UCEPROTECT Level 1 runs a 7-day policy since the last spam mail. This is too strict compared to the 98 other spam filters out there, as per MxToolbox (or at least compared to the other 4 that detected the problem). I have no problem with our e-mail since it is hosted locally, but our domain is hosted outside the country and runs on a different IP. I contacted the host, but since it is the spam filter's rule, there is nothing to be done but wait. I do believe services like spam filters should at least be bound by guidelines and standards in this matter; otherwise the problem of delivering valid e-mails (after the fix) will be disastrous. Is there a way to force UCEPROTECT to change their policy, or to have them removed as a recognized spam service, apart from contacting them in case they do not answer? Currently they charge for fast removal if you pay by PayPal. I'm still looking for a guideline/standard on how they should operate regarding this matter. Appreciate the help.

    Read the article

  • Cisco VPN endpoints disconnecting from a VLAN

    - by dunxd
    I have a number of Cisco ASA 5505s and PIX 506Es around the world acting as VPN endpoints. They connect to a Cisco VPN Concentrator 3000 at HQ, and I am using EZVPN to set up the VPN (i.e. most of the config is central on the VPN Concentrator). The majority of endpoints work absolutely fine. However, there are three that do not: 2 ASAs and 1 PIX get disconnected from one of the VLANs on our network. This is the VLAN that my monitoring server runs on, so those endpoints look as if they have gone down. However, I can still ping the endpoints from our user VLAN. If I then SSH onto the endpoint and do a ping to my monitoring server, the connection comes back. Then after about 10 minutes it stops working again. I've looked at the configuration of my endpoints and I can't see any significant differences. One common feature is that the affected endpoints connect to the internet via retail-quality routers. However, I don't see how this could affect traffic within a VPN tunnel. Any ideas or suggestions? I've also got a thread on Cisco's forums at https://supportforums.cisco.com/thread/344638 - one other person has reported the same problem.

    Read the article

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've troubleshooted the heck out of this today, and I can't seem to find any information on how to determine what is happening exactly. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours if I don't restart apache2. strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100. From what I've been reading, this may or may not be an Apache issue at all, just that that's how CLOSE_WAIT works (the server is waiting for a FIN packet to close the connection). I just can't believe that a server would be crippled so easily by not receiving a packet from a remote host telling it to close the connection, especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out.

    Edit: Also, there are no unusual entries in any Apache log files.

    Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this, as most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (PHP) doesn't really lend itself to closing open connections and whatnot. I can run the same code that he is executing, with the same session data, and not end up with a hanging process.
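    A few diagnostics that sometimes help pin down what a stuck, CPU-bound worker is actually doing (general suggestions only; the pid placeholders are whatever process is at 100%):

      # which connections a particular busy worker is holding open
      lsof -i -a -p <pid>

      # with mod_status and "ExtendedStatus On", the scoreboard shows the
      # request each worker is currently serving
      curl http://localhost/server-status

      # a C-level backtrace of the spinning worker (needs gdb installed)
      gdb -batch -ex 'bt' -p <pid>

    A backtrace that loops inside the PHP executor rather than in a write() to the socket would point at the code path the other developer is hitting rather than at the lingering CLOSE_WAIT itself.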

    Read the article

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second.

      # vmstat 3
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
       2  0   7292 249472  82340 2291972   0    0     0     0    0     0  7 13 79  0
       0  0   7292 251808  82344 2291968   0    0     0   184   24 20090  1  1 99  0
       0  0   7292 251876  82344 2291968   0    0     0    83   17 20157  1  0 99  0
       0  0   7292 251876  82344 2291968   0    0     0    73   12 20116  1  0 99  0

    ...but uptime shows a small load: load average: 0.01, 0.02, 0.01, and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

      # pidstat -w 10 1
      12:39:13   PID   cswch/s  nvcswch/s  Command
      12:39:23     1      0.20       0.00  init
      12:39:23     4      0.20       0.00  ksoftirqd/0
      12:39:23     7      1.60       0.00  events/0
      12:39:23     8      1.50       0.00  events/1
      12:39:23    89      0.50       0.00  kblockd/0
      12:39:23    90      0.30       0.00  kblockd/1
      12:39:23   995      0.40       0.00  kirqd
      12:39:23   997      0.60       0.00  kjournald
      12:39:23  1146      0.20       0.00  svscan
      12:39:23  2162      5.00       0.00  kjournald
      12:39:23  2526      0.20       2.00  postgres
      12:39:23  2530      1.00       0.30  postgres
      12:39:23  2534      5.00       3.20  postgres
      12:39:23  2536      1.40       1.70  postgres
      12:39:23 12061     10.59       0.90  postgres
      12:39:23 14442      1.50       2.20  postgres
      12:39:23 15416      0.20       0.00  monitor
      12:39:23 17289      0.10       0.00  syslogd
      12:39:23 21776      0.40       0.30  postgres
      12:39:23 23638      0.10       0.00  screen
      12:39:23 25153      1.00       0.00  sshd
      12:39:23 25185     86.61       0.00  daemon1
      12:39:23 25190     12.19      35.86  postgres
      12:39:23 25295      2.00       0.00  screen
      12:39:23 25743      9.99       0.00  daemon2
      12:39:23 25747      1.10       3.00  postgres
      12:39:23 26968      5.09       0.80  postgres
      12:39:23 26969      5.00       0.00  postgres
      12:39:23 26970      1.10       0.20  postgres
      12:39:23 26971     17.98       1.80  postgres
      12:39:23 27607      0.90       0.40  postgres
      12:39:23 29338      4.30       0.00  screen
      12:39:23 31247      4.10      23.58  postgres
      12:39:23 31249     82.92      34.77  postgres
      12:39:23 31484      0.20       0.00  pdflush
      12:39:23 32097      0.10       0.00  pidstat

    It looks like some PostgreSQL tasks are doing 10 context switches per second, but it doesn't all sum up to 20k anyway. Any idea how to dig a little deeper for an answer?
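    Two things that may account for part of the gap (suggestions, not a definitive answer): vmstat's cs column counts every switch on the box, including per-thread activity that per-process pidstat under-reports, and interrupt-driven kernel work. Some ways to dig deeper:

      # cumulative voluntary/involuntary switch counts for one task
      # (25185 is the "daemon1" PID from the output above)
      grep ctxt /proc/25185/status

      # per-thread statistics, not just per-process
      pidstat -wt 10 1

      # attribute sched:sched_switch events directly, if perf / linux-tools is available
      perf record -e sched:sched_switch -a sleep 10
      perf report --sort comm,pid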

    Read the article

  • Share Point ACL on OSX Lion Server - Posix group always takes over ACLs

    - by Ben
    Trying to configure a share point on a Lion Server machine. The directory is created by the local server admin (serveradmin) and has rwxr-x--- on it. The serveradmin user belongs to the local staff group, so:

      serveradmin    read/write
      staff (group)  read
      Others         none

    We have an OD group for all the employees (Workers). Using the Server tool we've given Full Control on the share point:

      Workers        Full Control
      serveradmin    read/write
      staff (group)  read
      Others         none

    We would assume that Workers could then do what they want on the share, but that doesn't seem to be the case. It appears the POSIX permissions take over the ACL permissions for Workers. If I change the staff permission to read/write, then the Workers can create a file or folder in the share point. I would think the ACL should win, but it doesn't; POSIX always wins, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take the Write permission away from the Workers group, then the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901 - the directory-nesting fix suggested there doesn't work for us. Has anyone had similar issues, and does anyone know how to fix this?

    Edit: in Workgroup Manager the employee users have their primary group set to staff and are given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically).

    Edit 2: OK, this is interesting - adding OD users (rather than the group) to the share's ACL works totally fine.
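    It can be useful to compare what the Server app actually wrote with the POSIX bits from the command line (offered only as a way to inspect and experiment; the path is illustrative, and the ACE shown is an example, not a known fix):

      # show the POSIX mode bits plus every ACE on the share point
      ls -led /Volumes/Data/Shares/Projects

      # add an explicit allow ACE for the OD group by hand to see whether
      # a manually written entry behaves differently from the Server-app one
      chmod +a "Workers allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,file_inherit,directory_inherit" /Volumes/Data/Shares/Projects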

    Read the article

  • Regarding traffic shaping on Juniper SRX550

    - by peilin
    We have implemented the Juniper SRX550 in our company. Now we have one issue: how to restrict internal users' download speed from the internet. For example, I want to restrict the end user with IP 192.168.1.20/32 to a download speed of up to 1M via my external port ge-0/0/6.0. Below is my setting:

      [edit firewall policer p1M]
      root@SRX550# show
      if-exceeding {
          bandwidth-limit 1m;
          burst-size-limit 15k;
      }
      then discard;

      [edit firewall family inet]
      root@SRX550# show filter limit-user
      term 10 {
          from {
              destination-address {
                  192.168.1.20/32;
              }
          }
          then policer p1M;
      }
      term else {
          then accept;
      }

      [edit interfaces ge-0/0/6]
      root@SRX550# show
      per-unit-scheduler;
      unit 0 {
          family inet {
              filter {
                  input limit-user;
              }
              address Hidden Here;
          }
      }

    As per this setting, the end user's download speed should not exceed 1m (125 KB/s in Windows terms), but the download speed for this end user can still reach 400 KB/s via HTTP/HTTPS. Please advise. Thanks.

    Read the article

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750 - 1500 GB. CPU speed and amount of memory are secondary, the server will host large amounts of media files via http and ftp - the basic task is to help people exchange media files. In Germany, there are some good offers, like "Root Server EQ6" at www.hetzner.de. For example, that company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and provides two 1500 GB SATA-II HDDs (Software-RAID 1). In the US, I found (amongst others) Go Daddy and rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month - already twice as much as Hetzner above. However, I found some blog and forum entries that complain about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very "upscale". Their dedicated servers are customizable and start at $419 - thus, about 4.5 times as much as Hetzner. Can anybody recommend a solution / plan that is comparable to the one by Hetzner? Or are prices for dedicated servers in general much higher than in Germany? Regards, Martin

    Read the article

  • Any e-mail client with additional grouping functionality on Mac OS X?

    - by harald
    Hello, I'm very unhappy with my mail experience. I'm receiving a lot of mail from various clients and projects and need a way to better organize it. I would really love to have additional grouping functionality in the e-mail client. Currently I'm using Mac OS X's Mail.app, but I am not bound to it, so I am open to any Mail.app plugin or independent mail application for Mac OS X, commercial or not - it should support IMAP, though, but I think this should not be a problem nowadays. With Mail.app I'm doing the following: group by thread, sort by datetime descending. What I would love to have is not only tagging functionality for e-mails - I know that at least Thunderbird and Postbox support that - but also some grouping functionality for those tags, inside the main mail window. So maybe I can summarize the important points: a "native" Mac OS X mail client (no web-mailer please); automatic tagging functionality (e.g. auto-apply tags by some kind of filter); easy access to tagged mails. On that last point: I would really love to have some additional grouping functionality in the mail folders. The mail application should put all tagged mails in a group, and the groups should be sorted by last received e-mail. Inside a group I would still like to be able to group by thread. Alternatively, it would be OK to have a list of tags (topics) in the left pane of the mail client. Take Postbox, for example: there is the accounts section and there is the folders section - but why is there no topics section? Thanks very much,

    Read the article

  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi, on servers running Suse 11 I'm experiencing hangs in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork. The error is not intermittent - it happens every time. Reinstalling the OS does not help, and I cannot reproduce it on Suse 10. This is what the main thread stack looks like:

      [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000]
      [junit]    java.lang.Thread.State: RUNNABLE
      [junit]    at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method)
      [junit]    at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208)
      [junit]    at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182)
      [junit]    - locked <0x00002b9fed6b8e70 (a java.lang.Object)
      [junit]    at sun.awt.X11.XToolkit.(XToolkit.java:92)
      [junit]    at java.lang.Class.forName0(Native Method)
      [junit]    at java.lang.Class.forName(Class.java:169)
      [junit]    at java.awt.Toolkit$2.run(Toolkit.java:834)
      [junit]    at java.security.AccessController.doPrivileged(Native Method)
      [junit]    at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826)
      [junit]    - locked <0x00002b9f94b8ada0 (a java.lang.Class for java.awt.Toolkit)
      [junit]    at java.awt.Toolkit.getEventQueue(Toolkit.java:1676)
      [junit]    at java.awt.EventQueue.invokeLater(EventQueue.java:954)
      [junit]    at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264)
      ...

    Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)

    Read the article

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command:

      localhost*CLI> timing test
      Attempting to test a timer with 50 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards:

      localhost*CLI> timing test 1024
      Attempting to test a timer with 1024 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 1024 timer ticks

    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks, but I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?

    Read the article

  • Pivot tables: How can I total the subtotal?

    - by Mike
    Person A needs £115, Person D £234 and Person G £789, but how do I SUM those and get the result to show on the same row as the subtotals? The rows are subscription names. The Value field holds the cost per subscription. The Columns hold the name of the person who receives the subscription. I have grouped on YEAR & MONTH, and have a subtotal that shows me how much each person will need to pay each month for all their subscriptions, but I also need a figure showing me the total of all the subscriptions per month. I've tried adding calculated fields, but I want to SUM the subtotals, so I'm struggling to see which field I need to use. I've tried Grand Totals, but that SUMs all rows and I really only want to SUM the Subtotal Total row. I need a nice neat report that my managers won't go white at when looking at it... too many numbers = fear and confusion. Anyway, it got messy, so I've come for help. Cheers, Mike.

    Read the article

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi server fault, Our team is in a spike sprint to choose between ActiveMQ or RabbitMQ. We made 2 little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are ok on our devs machines (messages are well consumed). Then came the benchs. We first noticed that somtimes, on our machines, when we were sending a lot of messages the consumer was sometimes hanging. It was there, but the messsages were accumulating in the queue. When we went on the bench plateform : cluster of 2 rabbitmq machines 4 cores/3.2Ghz, 4Gb RAM, load balanced by a VIP one to 6 consumers running on the rabbitmq machines, saving the messages in a mysql DB (same type of machine for the DB) 12 producers running on 12 AS machines (tomcat), attacked with jmeter running on another machine. The load is about 600 to 700 http request per second, on the servlets that produces the same load of RabbitMQ messages. We noticed that sometimes, consumers hang (well, they are not blocked, but they dont consume messages anymore). We can see that because each consumer save around 100 msg/sec in database, so when one is stopping consumming, the overall messages saved per seconds in DB fall down with the same ratio (if let say 3 consumers stop, we fall around 600 msg/sec to 300 msg/sec). During that time, the producers are ok, and still produce at the jmeter rate (around 600 msg/sec). The messages are in the queues and taken by the consumers still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking if the connexions are ok, then run jmeter. We are sending messages to one direct exchange. All consumers are listening to one persistent queue bounded to the exchange. That point is major for our choice. Have you seen this with rabbitmq, do you have an idea of what is going on ? Thank you for your answers.

    Read the article

  • How do I do or dp just the lines, not the entire block, in Vim diff?

    - by hobbes3
    I'm currently using MacVim (Snapshot 64) and its "Split Diff by..." menu option. The file is my Django settings.py from version 1.3.1, diffed against a fresh file from version 1.4. (Open the image in a separate window/tab to enlarge.) I know the two basic commands: do to "obtain" (and replace) a block from the other side, and dp to "put" (and replace) a block to the other side. But those two commands write the entire block, which in MacVim is the purple highlight. If you look at the 2nd block, you can see that lines 2 and 3 differ only in 2 words: mysite and hobbes3. I just want to replace per line, not the entire block. So is there a command to do do and dp per line, as opposed to an entire block, or do I have to type it out manually?

    Bonus question: I noticed that once I manually edit a block, I lose the purple highlighting. How do I "refresh" the diff to bring the highlights back without reopening the file? Please try to keep the answers Vim-general as opposed to MacVim-specific. Thanks!
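    Not a complete answer, but two built-in facilities that appear relevant here (see :help :diffget and :help diffupdate in any Vim):

      " do/dp are normal-mode commands and always act on the whole hunk, but
      " their Ex equivalents accept a range - visually select the lines, then:
      :'<,'>diffget
      :'<,'>diffput

      " recompute the diff highlighting after a manual edit, without reopening
      :diffupdate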

    Read the article

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla and Drupal to some legacy (read: PHP 4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache with MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64 MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern? I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2 GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

      Server version: Apache/2.2.14 (Ubuntu)
      Server built:   Apr 13 2010 20:22:19
      Server's Module Magic Number: 20051115:23
      Server loaded:  APR 1.3.8, APR-Util 1.3.9
      Compiled using: APR 1.3.8, APR-Util 1.3.9
      Architecture:   64-bit
      Server MPM:     Worker
        threaded:     yes (fixed thread count)
        forked:       yes (variable process count)

    Worker:

      <IfModule mpm_worker_module>
          StartServers          2
          MinSpareThreads      25
          MaxSpareThreads      75
          ThreadLimit          64
          ThreadsPerChild      25
          MaxClients          400
          MaxRequestsPerChild 2000
      </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

      --enable-bcmath \
      --enable-calendar \
      --enable-exif \
      --enable-ftp \
      --enable-mbstring \
      --enable-pcntl \
      --enable-soap \
      --enable-sockets \
      --enable-sqlite-utf8 \
      --enable-wddx \
      --enable-zip \
      --enable-fastcgi \
      --with-zlib \
      --with-gettext \

    Apache php-fastcgi-setup.conf:

      FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
      FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
      FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
      ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

    Read the article

  • Passenger connection reset by peer issue

    - by user887372
    I am new to Ruby on Rails. I am using Passenger 3.0.17 to deploy my Ruby 3.2.6 project. My project is working fine, but I get a 500 internal error when I try to upload files to the server. I checked my Passenger log and found:

      [ pid=20654 thr=140394143790848 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:57.82 ]: Uncaught exception in PassengerServer client thread:
         exception: write() failed: Connection reset by peer (104)
         backtrace:
           in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
           in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
           in 'void Client::threadMain()' (HelperAgent.cpp:952)
      2012/11/01 09:29:27 [crit] 20691#0: *431 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/2" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/jquery.js?body=1 HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
      2012/11/01 09:29:33 [crit] 20691#0: *435 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/3" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/background.png HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
      [ pid=20654 thr=140394115462912 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:33.543 ]: Uncaught exception in PassengerServer client thread:
         exception: write() failed: Connection reset by peer (104)
         backtrace:
           in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
           in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
           in 'void Client::threadMain()' (HelperAgent.cpp:952)

    Please guide me regarding this issue. I am unable to find the reason for the connection reset by peer or for the failed mkdir(). Thanks in advance.

    Read the article

  • Best way to convert episodic DVDs for Windows Media Center?

    - by Roger Lipscombe
    I'm archiving my DVD collection. My goal is to be able to play them back in Windows Media Center. For feature-length DVDs, I'm using AnyDVD and CloneDVD, which is working well. For playing back TV shows (and other episodic content), I'm using Media Browser, which doesn't support a VIDEO_TS folder per episode. It expects the shows to be broken up into one file per episode (e.g. "Willo the Wisp - S01E12.avi"). For this, I'm attempting to use Handbrake, which, for extracting the episodes from DVD (or already-ripped VIDEO_TS folder), is working pretty well. The problem that I have is that the default x264 encoder over-compresses the resulting video stream, which results in hideous artifacts in animated shows. The aforementioned Willo the Wisp is a particularly bad example, because the original DVD is particularly "noisy". If I switch to using the ffmpeg encoder, the artifacts are gone in Windows Media Player, but I can't get the resulting files to play back in Windows Media Center. I see the first frame, and then there's an error message. I've installed the CCCP codec collection, but it doesn't seem to have made any difference. So: what's the best way to convert VIDEO_TS to individual episode files for playback in Windows Media Center?
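    As a point of comparison, the command-line build of HandBrake exposes the same knobs as the GUI; a hedged sketch of one-episode-per-file extraction from an already-ripped VIDEO_TS folder, using a light denoise filter (often a better answer for noisy animation than simply raising bitrate or profile). Paths, the title number and the quality value are illustrative only:

      :: list the titles on the disc first
      HandBrakeCLI -i "D:\Rips\WILLO_S1_D2\VIDEO_TS" -t 0

      :: title 3 -> one episode file, constant quality, weak denoise
      HandBrakeCLI -i "D:\Rips\WILLO_S1_D2\VIDEO_TS" -t 3 -e x264 -q 18 --denoise=weak -o "D:\TV\Willo the Wisp\Willo the Wisp - S01E12.mkv"

    Whether the resulting file plays in Windows Media Center still depends on the installed splitters and decoders, so this addresses the per-episode workflow rather than the playback error.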

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing, with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5 person) dev team. What are the pros and cons of the different approaches, e.g. splitting roles across different machines versus packing everything onto one machine per person? In your experience, what best practices should I follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment; it's more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them the freedom to manage it; or creating a shared environment managed by one person, with a somewhat formalized change request process. What golden mean would you recommend, and why?

    Read the article

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GBytes outgoing and 2 GBytes incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", to use Barracuda's lingo). Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem to be unsure whether they'll be able to handle our planned growth, mostly because of the health checks performed on the servers... which makes total sense when you think of it. So the fine folk at Loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL e-commerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem. Does this idea make sense to y'all? Or would you have other recommendations? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from Loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
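    For what it's worth, the "mangling" described above is ordinary destination NAT, which most router and firewall platforms can do; a hedged Linux/iptables sketch of one such mapping, with all addresses and ports invented for illustration:

      # public HTTPS VIP 203.0.113.10 -> internal cluster member 10.0.0.10:8443
      iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
               -j DNAT --to-destination 10.0.0.10:8443

    Each public IP/port pair needs one such rule, so managing 800 of them becomes a configuration-generation problem; for the built-in failover requirement, VRRP (or keepalived on a Linux-based box) is the usual mechanism for pairing two such routers.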

    Read the article

  • What is the fastest and best way to convert an rmvb video to mp4/mkv without losing any quality?

    - by Eric Leung
    The file will be played on a Popbox 3D. My old method was to convert the video using VidCoder (an offshoot of HandBrake) with the Normal settings, but I've recently confirmed that this significantly reduces video and audio quality. I bumped the conversion quality up to 'High Profile', and this produced a higher quality video but raised the conversion time to about twice the video length (95 minutes to convert a 45-minute video) on a Core 2 Duo laptop. This is less than ideal when a large number of videos need to be converted. I have tried a direct remux using MKVToolNix, but this produced a file that refused to display video on the Popbox 3D, which is consistent with the reported:

    [quote=other old thread] it is possible to put RealMedia A/V in MKV container (used MKVtoolnix) - however, it is awkward to play later. RV40 is only suspected to be based on H.264 - simplify, is not consistent with MPEG-4 AVC specification. [/quote]

    I have read that...

    [quote=from old thread] Under normal circumstances, [ffmpeg] should convert the video to .video.mp4 and the audio to (.wav then to) .audio.mp4, then mux the video and audio into a new .mp4 file and delete the temporary video-only and audio-only files.[/quote]

    ...and I am currently attempting to discover how this is done. Help?

    PS: I download a lot of series from Asia, and for some strange reason RMVB is a really popular format over there. Sometimes it's the only format that's available. Unfortunately, it's a format that is incompatible with the Popbox 3D, so I have to convert the files before I can watch them on my TV.
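    On the ffmpeg route quoted above: RealVideo (RV40) and RealAudio streams cannot simply be remuxed into MP4, so some re-encode is unavoidable; a hedged single-command sketch (filenames and quality settings are illustrative, and on older ffmpeg builds the AAC encoder may additionally require -strict experimental):

      # re-encode RealVideo/RealAudio to H.264 + AAC in an MP4 container
      ffmpeg -i episode.rmvb -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 160k episode.mp4

    Using a constant-rate-factor (-crf) target rather than a fixed bitrate is what keeps quality predictable; lowering the CRF value or slowing the preset trades conversion time for quality, which mirrors the Normal-vs-High-Profile trade-off described above.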

    Read the article
