Search Results

Search found 19090 results on 764 pages for 'greatest n per group'.


  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count the outgoing (relayed) messages on a per-user basis in Postfix. I manage a small commercial SMTP relay and decided it would be nice to have a detailed daily report on how much mail each user has sent (and eventually to enforce some limits), possibly in real time. I've looked almost everywhere and have started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter or whatever)?
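
    A log-based sketch may get you most of the way without a milter, assuming the stock syslog output and that relay clients authenticate (so smtpd logs sasl_username= for each accepted message; the log path is distro-dependent):

        # messages relayed today, per authenticated user
        grep -o 'sasl_username=[^, ]*' /var/log/mail.log |
          cut -d= -f2 | sort | uniq -c | sort -rn

    For realtime limits, a Postfix policy server hooked in via check_policy_service (e.g. policyd, which does per-user quotas) is usually less work than writing a milter.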

    Read the article

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it applies to any iSCSI system: should one create a target for every LUN to be exposed, or is it good practice to have a single target with multiple LUNs? Does either approach have a performance impact, and is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM; a single target containing all the LUNs would greatly simplify management, but we might end up with hundreds of LUNs per target (and then possibly tens of initiator connections to that target).
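
    For what it's worth, the single-target/many-LUN layout in COMSTAR is only a few commands; this is an untested sketch, and the zvol paths and LU GUIDs are placeholders:

        itadm create-target                         # one IQN for all VM disks
        stmfadm create-lu /dev/zvol/rdsk/tank/vm1   # one logical unit per zvol
        stmfadm create-lu /dev/zvol/rdsk/tank/vm2
        stmfadm add-view -n 0 600144F0AAAA...       # -n picks the LUN number
        stmfadm add-view -n 1 600144F0BBBB...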

    Read the article

  • how to offer subdomain hosting with bandwidth calculation per user

    - by Ke
    Hi, I'm a PHP developer and I would like to offer subdomain hosting, but I also need to be able to calculate bandwidth for each subdomain. For creating the subdomains I will use a catch-all with wildcards etc. to set them up easily. The one thing I'm a bit stumped on is how to calculate bandwidth per subdomain: is this possible within PHP, or are there better solutions, perhaps using folders or something? Also, a script will be loaded under each subdomain, and this will vary per user. Is it worth giving each user a new folder and their own copy of the script, or is it easier to manage one script for all users? Any considerations here? Cheers, Ke
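
    It is possible in plain PHP if Apache prefixes each log line with the requested vhost. A minimal sketch, assuming a LogFormat of "%v %h %l %u %t \"%r\" %>s %b" and a hypothetical log path:

        <?php
        // Sum %b (response bytes) per subdomain from a vhost-prefixed access log.
        $bytes = array();
        $fh = fopen('/var/log/apache2/subdomains.log', 'r');
        while (($line = fgets($fh)) !== false) {
            $f = preg_split('/\s+/', trim($line));
            $sub = $f[0];               // %v: the subdomain that served the hit
            $b   = (int) $f[10];        // %b: bytes sent ("-" casts to 0)
            $bytes[$sub] = (isset($bytes[$sub]) ? $bytes[$sub] : 0) + $b;
        }
        fclose($fh);
        arsort($bytes);
        print_r($bytes);                // subdomain => bytes for this log

    Run it from cron and persist the totals, and you have per-subdomain bandwidth accounting without touching the subdomain scripts themselves.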

    Read the article

  • Linux (non-transparent) per-process hugepage accounting

    - by Dan Pritts
    I've recently converted some Java apps to run with Linux manually-configured hugepages. I've got about 10 Tomcats running on a system and I am interested in knowing how much memory each one is using. I can get summary information out of /proc/meminfo as described in Linux Huge Pages Usage Accounting, but I can't find any tools that tell me about actual per-process hugepage usage. I poked around in /proc/pid/numa_maps and found some interesting information that led me to this grossity:

        # sum the dirty anon hugepage counts from a process's numa_maps (pass a pid)
        function pshugepage () {
            HUGEPAGECOUNT=0
            for num in `grep 'anon_hugepage.*dirty=' /proc/$@/numa_maps | awk '{print $6}' | sed 's/dirty=//'`; do
                HUGEPAGECOUNT=$((HUGEPAGECOUNT+num))
            done
            echo process $@ using $HUGEPAGECOUNT huge pages
        }

    The numbers it gives me are plausible, but I'm far from confident this method is correct. The environment is a quad-CPU Dell, 64GB RAM, RHEL 6.3, Oracle JDK 1.7.x (current as of 2013-07-28).

    Read the article

  • Setting per-directory umask using ACLs

    - by Yarin
    We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following: all sub-directories created underneath foo will have 775 permissions, and all files created underneath foo and its subdirectories will have 664 permissions. Both should hold for files/dirs created by all users, including root, and by all daemons. Assuming ACLs are enabled on our partition, this is the command we've come up with:

        setfacl -R -d -m mask:002 foo

    This seems to be working; I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
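
    One caveat, hedged because I may be misreading the intent: mask:002 sets the ACL mask entry itself to 002 (write-only), which is not the same as filtering new permissions the way a umask does. A default-ACL sketch that yields 775 directories and 664 files directly:

        # default entries are inherited by everything created under foo;
        # for an existing tree, apply to directories only (files cannot carry
        # default ACLs): find foo -type d -exec setfacl -d -m ... {} +
        setfacl -d -m u::rwx,g::rwx,o::rX foo
        getfacl foo    # expect default:user::rwx, default:group::rwx, default:other::r-x

    Files are created with mode 666 masked against these entries (giving 664) and directories with 777 (giving 775), which is exactly the umask-002 behaviour described.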

    Read the article

  • Per process I/O accounting on AIX

    - by ipozgaj
    Is there a way of getting per-process I/O statistics on AIX, i.e. the current disk I/O rate of a process? Commands like iostat, nmon and topas can't display such data, and filemon doesn't help either. What I need is essentially an equivalent of the iotop(1) command on Linux. Update: it seems there is no built-in command to do this. I will most probably roll my own using the SPMI API.

    Read the article

  • Alien: .rpm -> .deb | Compatibility / Built per system?

    - by MaddinXx
    At the moment I'm playing a bit with alien (for OpenVZ packages on Debian) and came across a question I couldn't find an answer to anywhere, so I thought it might be smart to ask here :) The question is: if I convert an .rpm to a .deb on one system, how compatible is that .deb package? Will the .deb work on other systems as well, or is it per-system, i.e. will the package come out slightly different on every system it is built on? That i386 and x86_64 are different is clear, so that doesn't need answering :) Examples that would be nice to know: a .deb built on Debian 6 64-bit installed on Ubuntu 12.04 64-bit (compatible?); a .deb built on Debian 6 64-bit installed on Debian 5 64-bit (compatible?); etc. Thanks to anyone reading this / helping me! Regards, Michel

    Read the article

  • Per-user vhost logging

    - by kojiro
    I have a working per-user virtual host configuration with Apache, but I would like each user to have access to the logs for his virtual hosts. Obviously the ErrorLog and CustomLog directives don't accept the wildcard syntax that VirtualDocumentRoot does, but is there a way to achieve logs in each user's directory?

        <VirtualHost *:80>
            ServerName *.example.com
            ServerAdmin [email protected]
            VirtualDocumentRoot /home/%2/projects/%1
            <Directory /home/*/projects/>
                Options FollowSymlinks Indexes
                IndexOptions FancyIndexing FoldersFirst
                AllowOverride All
                Order Allow,Deny
                Allow From All
                Satisfy Any
            </Directory>
            Alias /favicon.ico /var/www/default/favicon.ico
            Alias /robots.txt /var/www/default/robots.txt
            LogLevel warn
            # ErrorLog /home/%2/logs/%1.error.log
            # CustomLog /home/%2/logs/%1.access.log combined
        </VirtualHost>
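
    One workaround (a sketch, untested): log every vhost through one pipe with %v prefixed, and let a small script fan lines out per user. The %2/%1 parsing below mirrors the VirtualDocumentRoot pattern; the script path is hypothetical:

        # httpd.conf
        LogFormat "%v %h %l %u %t \"%r\" %>s %b" vuser
        CustomLog "|/usr/local/bin/per-user-log" vuser

        # /usr/local/bin/per-user-log
        #!/bin/sh
        while read -r vhost rest; do
            user=${vhost#*.}; user=${user%%.*}   # %2: second hostname label
            project=${vhost%%.*}                 # %1: first hostname label
            echo "$rest" >> "/home/$user/logs/$project.access.log"
        done

    Apache also ships a support utility, split-logfile, that does the same first-field splitting if you would rather not maintain a custom script.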

    Read the article

  • iTunes equalizer per-track settings

    - by DA
    I'm a little confused about iTunes' ability to set EQ presets on a per-song basis. I have a song that I open, set to a given EQ preset, then close. When I play this song, however, the audio isn't being adjusted. If I open the EQ and turn it on, though, it then obeys the song's EQ setting. Of course, if I leave the EQ on, songs that do not have their own EQ setting default to whatever the EQ was last set to. Not a huge deal, but it means I have to leave the EQ always turned on, set to flat, so that any song I have given an EQ setting will use it. Am I understanding how that works correctly? Is there a way to have individual songs use their EQ setting without having to turn on the EQ for everything?

    Read the article

  • Retrieve the last 2 posts for each category

    - by Savageman
    Hello. Let's say I have 2 tables: blog_posts and categories. Each blog post belongs to only ONE category, so there is basically a foreign key between the 2 tables here. I would like to retrieve the last 2 posts from each category; is it possible to achieve this in a single query? GROUP BY would leave me with only one row per category, but I want 2 of them. It would be easy to perform 1 + N queries (N = number of categories): first retrieve the categories, then retrieve 2 posts from each. I believe it would also be quite easy to perform M queries (M = number of posts I want from each category): the first query selects the newest post for each category (with a GROUP BY), the second query retrieves the second-newest for each, and so on. I'm just wondering if someone has a better solution. I don't really mind doing 1 + N queries for this, but for curiosity and general SQL knowledge, it would be appreciated! Thanks in advance to whomever can help me.
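
    For the record, the classic single-query answer to this "greatest-n-per-group" problem is a correlated count - keep a post if fewer than 2 posts in its category are newer. A sketch, with assumed column names category_id and created_at:

        SELECT p.*
        FROM   blog_posts p
        WHERE  (SELECT COUNT(*)
                FROM   blog_posts newer
                WHERE  newer.category_id = p.category_id
                  AND  newer.created_at  > p.created_at) < 2
        ORDER  BY p.category_id, p.created_at DESC;

    It is one round-trip, though on large tables the correlated subquery wants an index on (category_id, created_at).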

    Read the article

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table):

        date  value
        -------------
        d1     10.0
        d1     20.0
        d2     60.0
        d2     10.0
        d2    -20.0
        d3     40.0

    I calculate the running total as follows. I use the same subquery twice: first to sum the values for a specific date, and then to compute the running total over those per-date sums. (Joining the table directly to itself, with date not being unique, would produce too many rows.)

        SELECT t1.date, SUM(t2.value) AS total
        FROM (SELECT date, SUM(value) AS value
              FROM test_table GROUP BY date) AS t1
        JOIN (SELECT date, SUM(value) AS value
              FROM test_table GROUP BY date) AS t2
          ON t1.date >= t2.date
        GROUP BY t1.date
        ORDER BY t1.date

    This gives me (which is fine):

        date  total
        ------------
        d1     30.0
        d2     80.0
        d3    120.0

    BUT this query isn't very efficient, and whenever it changes I need to edit conditions in two places. In production test_table is a lot bigger (over 4 million rows), and the query takes too long to complete. Question: how can I avoid using the same subquery twice?
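
    If the database supports window functions (PostgreSQL, SQL Server, Oracle; MySQL only from 8.0), the self-join disappears and the per-date condition is written once - a sketch:

        SELECT   date,
                 SUM(SUM(value)) OVER (ORDER BY date) AS total
        FROM     test_table
        GROUP BY date
        ORDER BY date;

    The inner SUM(value) is the per-date aggregate; the outer SUM(...) OVER accumulates those group results in date order, so the table is scanned once instead of being joined to itself.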

    Read the article

  • Regex optional match in python fails

    - by AaronG
        import re

        tickettypepat = r'MIS Notes:.*(//p//)?.*'
        retype = re.search(tickettypepat, line)
        if retype:
            print retype.group(0)
            print retype.group(1)

    Given the input:

        MIS Notes: //p//

    can anyone tell me why group(0) is 'MIS Notes: //p//' while group(1) is returning None?
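
    The short of it: the first .* is greedy and the group is optional, so the engine lets .* swallow the whole line and satisfies (//p//)? with the empty match - group(1) stays None even though //p// is present. Making the prefix lazy and the group mandatory captures it; a sketch:

        retype = re.search(r'MIS Notes:.*?(//p//)', line)
        if retype:
            print retype.group(1)    # '//p//' when present; no match otherwise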

    Read the article

  • Django User "per project" group assignation

    - by Ben G
    Hi, here's my problem: my site has users, which can create projects and access other users' projects. Each project can assign different rights to users, so I could have project A where user "John" is in group "manager", and project B where user "John" is in group "worker". How could I use the Django User authentication model to do that? From a SQL point of view, what I would like is to be able to add project_id to the primary key of the auth_user_groups table. I don't think the profile mechanism is of any help here. Any advice? UPDATE: "worker" and "manager" are just two examples of the permission groups (or "roles") that my application defines. There will be more in the future, e.g. I will probably also have "admin", "reporting", etc.
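
    The usual pattern is to leave auth's global groups alone and put the role on the user/project relation itself - effectively the project_id-in-the-primary-key you describe. A sketch in Django 1.x style, with hypothetical names:

        from django.contrib.auth.models import User
        from django.db import models

        class Project(models.Model):
            name = models.CharField(max_length=100)

        class Membership(models.Model):
            # one role per (user, project) pair
            user = models.ForeignKey(User)
            project = models.ForeignKey(Project)
            role = models.CharField(max_length=30)   # "manager", "worker", "admin", ...

            class Meta:
                unique_together = ('user', 'project')

    Permission checks then become a lookup: Membership.objects.filter(user=u, project=p, role='manager').exists().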

    Read the article

  • SQL Server MasterClass winner

    - by Testas
    The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine: Steve Hindmarsh

    There is still time to register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years' combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

    Please note: the agenda may be subject to change.

    Session Abstracts

    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.

    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!

    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!

    SESSION FOUR: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years' combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.

    Speaker Biographies

    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Speaker Testimonials

    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process."

    "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."

    "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot-on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can't recall one that didn't go the way it was supposed to."

    "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient."

    "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone."

    "Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website's homegrown CMS and we are experiencing a significant performance increase. WOW... amazing tips delivered in an exciting way! Thanks again"

    Read the article

  • GPO - Setting not applied, although policy is applied

    - by Kenny Bones
    This is rather strange. In our domain we have several terminal servers, and this morning a user reported that no drives are mapped when he logs on to the terminal server. So I checked Group Policy Results and compared two users. Both users have the exact same policies applied, but for this particular user the Scripts section under User Configuration - Policies - Windows Settings is just not there. For the other user, for whom this is working fine, the Scripts section says the winning GPO is Terminal2008, which is the GPO that contains the script, and the Terminal2008 GPO is applied to both users. Also, loopback processing is set to Replace. What could be the cause of this? I've never seen this particular issue before. Both users are in the same OU, they log on to the same terminal server, and the same policies are applied to both. They do not, however, have the exact same group memberships - but should that matter? Nothing states that the script should run only if the user is a member of a certain group, and I'm not sure that could even be done through that specific setting. All I know is that the very same policies are applied to both users, in the same OU, on the same computer, so the same settings should result. Edit: I just ran Group Policy Results on one of the other terminal servers, which is also in the same OU, and the Scripts section is there! This means this particular user fails to get the setting only when logged onto this particular server. What could be the cause of this?

    Read the article

  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit, and I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/user/public_html. I think I have the permissions right: 755 on /home/user, 755 on /home/user/public_html, and 777 on everything inside /home/user/public_html, set recursively. My mod_userdir configuration looks like this:

        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin

            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html
        </IfModule>

    The error that I am seeing in the error log is this:

        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?
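
    A side note for Fedora specifically, hedged because I cannot see the box: the (13)Permission denied often comes from SELinux rather than file modes, and the /~huckphin in the error line is just the request URI, not a filesystem path. Worth checking:

        getenforce                                     # "Enforcing" means SELinux is active
        sudo setsebool -P httpd_enable_homedirs on     # allow Apache into home dirs
        chcon -R -t httpd_user_content_t /home/huckphin/public_html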

    Read the article

  • Per client DNS server assignment using Pfsense

    - by Trix
    I have a network where pfSense is the gateway, and I want two sets of clients: one with some restrictions on the network (for example, IM being blocked) and one with no restrictions. One easy way I thought of was assigning the two groups different DNS servers: one set could use OpenDNS, the other Google Public DNS. The set with OpenDNS would have the filter options enabled (using the OpenDNS dashboard I can block IM wholesale, so I don't need to manually block login.oscar.aol.com, meebo.com, Gmail chat, etc.). The problem is that the DHCP server looks like it will only hand out a single set of DNS servers to all clients. Is there a way to assign them per client, or a better way to achieve what I'm after? This is just a small home network. I do not need anything fancy, but I do need this functionality in one way or another.
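
    Recent pfSense versions let a static DHCP mapping carry its own DNS servers, which is exactly this per-client assignment; underneath it amounts to an ISC dhcpd host block like this sketch (the MAC is a placeholder):

        host filtered-client {
            hardware ethernet 00:11:22:33:44:55;
            option domain-name-servers 208.67.222.222, 208.67.220.220;   # OpenDNS
        }
        # unrestricted clients just inherit the pool-wide default:
        # option domain-name-servers 8.8.8.8, 8.8.4.4;                   # Google Public DNS

    The usual caveat applies: this only steers well-behaved clients, since anyone can hardcode their own resolver unless you also block outbound port 53 to other destinations.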

    Read the article

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two Gmail accounts, and I want to configure my local Postfix server as a client which does SASL authentication with smtp.gmail.com:587 using credentials that depend on the sender address. So, let's say my Gmail accounts are [email protected] and [email protected]. If I send a mail with [email protected] in the From header field, then Postfix should use the credentials [email protected]:passwd1 for SASL authentication with the Gmail SMTP server; similarly, for [email protected] it should use [email protected]:passwd2. Sounds fairly simple. Well, I followed the official Postfix documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configuration:

        # /etc/postfix/main.cf
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
        smtp_tls_security_level = secure
        smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
        smtp_tls_CApath = /etc/ssl/certs
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
        smtp_tls_session_cache_timeout = 3600s
        smtp_tls_loglevel = 1
        tls_random_source = dev:/dev/urandom
        relayhost = smtp.gmail.com:587

        # /etc/postfix/sasl_passwd
        [email protected]     [email protected]:passwd1
        [email protected]     [email protected]:passwd2
        smtp.gmail.com:587     [email protected]:passwd1

        # /etc/postfix/sender_relay
        [email protected]     smtp.gmail.com:587
        [email protected]     smtp.gmail.com:587

    After I was done with the configuration I ran:

        $ postmap /etc/postfix/sasl_passwd
        $ postmap /etc/postfix/sender_relay
        $ /etc/init.d/postfix restart

    The problem is that when I send a mail from [email protected], the message arrives at the destination with sender address [email protected] and NOT [email protected], which means Postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I have checked the configuration multiple times and even compared it against various blog posts addressing the same issue, but found them to be more or less the same as mine. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
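
    Two things worth checking before digging deeper. First, confirm what the maps actually return:

        postmap -q [email protected] hash:/etc/postfix/sender_relay
        postmap -q [email protected] hash:/etc/postfix/sasl_passwd

    Second, note that smtp.gmail.com rewrites the sender to the authenticated account unless the other address is registered under that account's "Send mail as" settings - which produces exactly this symptom no matter what Postfix sends.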

    Read the article

  • RAID-5 performance per-spindle scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles, requires heavy random writes (the corresponding read side is purely sequential), needs every bit of space on my drives (~13TB total in an n-1 RAID-5), and has to go fast - over 2GB/s fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a sub-optimal configuration to be as good as it can be. The array is built on direct-attached SAS-2 10K rpm drives, backed by an ARECA 18xx series controller with 4GB of cache, using 64k array stripes and a 4K stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty for being RAID-5). The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited performance on writes, ~100MB/s per spindle; at 12 spindles that drops to ~80MB/s, and at 24 to ~60MB/s. I would expect that with distributed parity and matched AGs, performance should scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill Edit: Improving RAID performance - the other relevant thread I was able to find discusses some of the same issues in its answers, though it still leaves me without an answer on the performance scaling.
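
    As a sanity check, textbook RAID-5 arithmetic says the aggregate should indeed scale with spindle count (a model, not a diagnosis of this array):

        W_{\text{full stripe}} \approx (N-1)\, w_{\text{disk}},
        \qquad
        \mathrm{IOPS}_{\text{random write}} \approx \frac{N \cdot \mathrm{IOPS}_{\text{disk}}}{4}

    Full-stripe writes lose only one parity disk's worth of bandwidth, and even worst-case read-modify-write costs a constant 4 I/Os per write regardless of N, so both figures grow with N. A per-spindle number that falls as spindles are added therefore points away from the parity maths and toward the controller (cache flush policy, a saturated port or bus) or stripe misalignment.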

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                    # FTPS chroot for user domain.ext
        domains/domain.ext/public                              # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public   # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is REALLY handy during development. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about 3 security issues:

    - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
    - Log DoS: the logs sit on the same partition as all the domains. It has hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice?
    - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
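
    On the log-DoS point, a tight logrotate policy is at least a containment measure; a sketch (retention numbers are guesses):

        /home/*/logs/*.log {
            daily
            rotate 14
            compress
            missingok
            notifempty
            copytruncate    # the tee pipes keep their fd open; truncate in place
        }

    copytruncate matters here: the sudo/tee pipes hold the files open, and tee -a writes with O_APPEND, so truncating in place is safe without bouncing Apache.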

    Read the article

  • Firefox proxy authentication with Kerberos: one service ticket per connection (Linux)

    - by Dari
    I am trying to enable proxy authentication via Kerberos for Firefox. The setup is: an Active Directory domain (for LDAP and Kerberos; this works, and I can log in to the computer and get Kerberos tickets without problems); a Microsoft Windows witness machine (on which Firefox runs fine with no ticket problem); a CentOS 6.3 system with Firefox (tested with both the 10.0.1 ESR found in the CentOS package repositories and the 15.0.1 downloaded from Mozilla's website); and a BlueCoat proxy with Kerberos authentication enabled. At the moment, Firefox requests an element of a website, gets an HTTP "407 Proxy Authentication Required" from the proxy, obtains a ticket-granting service (TGS) ticket for the proxy from the domain, and performs the request again while passing the ticket. That transaction runs fine. However, when more elements are requested (in parallel), Firefox requests one more ticket per proxy connection, and this costs many DNS queries and Kerberos interactions with the domain controllers, and a lot of time (for example, the Adobe home page takes several minutes to load, and at the end I have about 30 valid Kerberos tickets). I have been stuck on this for a while; help would be greatly appreciated. Minor information: the CentOS operating system is virtualized with VMware Player 3.1.3, but I do not think this changes the game.

    Read the article

  • Multiple munin-nodes per machine

    - by Alexander T
    I'm collecting statistics remotely through JMX. The munin JMX plugin allows you to select a URL to connect to when aggregating statistics, which lets me collect statistics from hosts which do not actually have munin-node installed. I find this a desirable property for systems where I am hindered from installing munin-node. The way I work today: if I want to collect JMX stats from machine A without munin-node, I install munin-node on machine B. Machine B then collects data from A via JMX and reports it to the munin server, which runs on machine C. This setup requires multiple B-type machines: one per A-type machine. I would like to avoid this and instead use only one B-type machine to collect the data from all A-type machines and report it to the only munin server (the C-type machine). As far as I understand, this requires running multiple munin-nodes on B, or in some other way telling the munin server that the B-type machine is reporting data from multiple sources. Is this possible? Thank you.
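
    Munin can do this without extra daemons via its "virtual node" mechanism: a plugin on B emits a host_name line in its config output, and munin-node on B then advertises that extra host. A sketch with placeholder hostnames:

        # a plugin's "config" output on B claims the monitored host:
        #   host_name a1.example.com

        # on C, /etc/munin/munin.conf points each virtual node at B:
        [a1.example.com]
            address b.example.com
            use_node_name no
        [a2.example.com]
            address b.example.com
            use_node_name no

    One munin-node on B can carry as many virtual nodes as it has plugin instances.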

    Read the article

  • file system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some less, so some I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at array-creation time, how much space of each type to declare. What I would do if I hadn't heard about ZFS: partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more RAID-6 space, I'd take a 100GB partition from each drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data; then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure reflecting the logical structure of the data rather than that of the storage. But now that I've heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it could help me. If so, what do you think would be the best setup?
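
    If you do go with ZFS, the catch is that redundancy is a property of the pool's vdevs, not of a dataset, so "RAID level per directory" becomes "one pool per RAID level", carved from partitions much as in your md plan. A sketch with hypothetical device names:

        # each disk pre-split into two partitions, sdX1 and sdX2
        zpool create important raidz2 sda1 sdb1 sdc1 sdd1 sde1   # RAID-6-like
        zpool create bulk      raidz1 sda2 sdb2 sdc2 sdd2 sde2   # RAID-5-like
        zfs create important/photos
        zfs create bulk/scratch

    Two caveats: pools sharing spindles compete for seeks, and a raidz pool grows by whole additional raidz vdevs (another set of partitions), not by single disks.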

    Read the article

  • Limited bandwidth and transfer rates per user.

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port and want to give each user his/her fair share of internet access. The first objective is easy: transfer rates (speed) per user. From what I've seen, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300mbit or 650mbit each. The second objective gets complicated. Because of how the boxes are used, most of the traffic will be internal network traffic that does NOT count against the quota. However, I still need to limit the external traffic, and if a user goes over, cut off access (or throttle them to a very low speed, say 10mbit). Let's say each user has a 3TB external traffic limit. The IF part is: if the hostname they are exchanging traffic with does NOT match .ovh. or .kimsufi. (the company owns multiple TLDs), count it against the quota; once the quota exceeds 3TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be manually resettable, on a monthly basis. Thanks ahead of time!
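
    For the rate half, the stock Linux building blocks are an iptables owner match to mark each user's packets plus an HTB class per user; a sketch (uid, marks and device are examples):

        # mark outbound packets belonging to uid 1001
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 11
        # shape mark 11 to 300mbit on eth0
        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1: classid 1:11 htb rate 300mbit ceil 300mbit
        tc filter add dev eth0 parent 1: protocol ip handle 11 fw flowid 1:11

    The quota half is harder: iptables counts per IP/CIDR, not per hostname, so ".ovh." would have to be resolved into the provider's published address ranges and excluded via matching rules, with the per-user byte counters read (and reset monthly) by a cron job.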

    Read the article
