Search Results

Search found 187 results on 8 pages for 'jimmy j'.

Page 8/8

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. * Originally I created /dev/md0 but it has somehow changed itself into /dev/md_d0. # mount /opt mount: wrong fs type, bad option, bad superblock on /dev/md_d0, missing codepage or helper program, or other error (could this be the IDE device where you in fact use ide-scsi so that sr0 or sda or so is needed?) In some cases useful info is found in syslog - try dmesg | tail or so The RAID device appears to be inactive somehow: # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md_d0 : inactive sda4[0](S) 241095104 blocks # mdadm --detail /dev/md_d0 mdadm: md device /dev/md_d0 does not appear to be active. The question is, how do I make the device active again (using mdadm, I presume)? (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab: /dev/md_d0 /opt ext4 defaults 0 0 So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question. Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand. # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR <my mail address> # definitions of existing MD arrays # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200 In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.) Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan: ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...] and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
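    As a sketch only (assuming the device names from the question and an Ubuntu-style layout), the command sequence implied by the resolution above looks roughly like this; flags and paths may need adjusting:

      # Stop the half-assembled, inactive device and reassemble it
      sudo mdadm --stop /dev/md_d0
      sudo mdadm --assemble --scan

      # Record the array in mdadm.conf so it assembles consistently as /dev/md0 at boot
      sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo update-initramfs -u    # refresh the initramfs copy of mdadm.conf on Ubuntu

      # /etc/fstab then refers to the stable name:
      # /dev/md0  /opt  ext4  defaults  0  0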

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who hel

    - by Liam Westley
    MVP in Virtual Machines Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year.   It was an honour to be nominated, and is a great reflection on the vibrancy of the UK user group community which made this possible. Virtualisation for developers, not just IT Pros I consider it a special honour as my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than an IT Pro who manages data centre and network infrastructure.  I’ve been on a minor mission over the past few years to enthuse developers in a topic usually seen as only for network admins, but which can make their life a whole lot easier once understood properly. Continuous learning is fun In 1676, the scientist Isaac Newton, in a letter to Robert Hooke used the phrase (http://www.phrases.org.uk/meanings/268025.html) ‘If I have seen a little further it is by standing on the shoulders of Giants’ I’m a nuclear physicist by education, so I am more than comfortable that any knowledge I have is based on the work of others.  Although far from a science, software development and IT is equally built upon the work of others. It’s one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about.  I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have executed my responsibility to share any knowledge gained as widely as possible. Thanks to all those who helped – a big thanks to the UK user group community I reckon this journey started in 2003 when I started attending a user group called the London .Net Users Group (http://www.dnug.org.uk) started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non professional speakers to take the stage at the user group, and my first ever presentation was on 30th September 2003; SQL Server CE 2.0 and the.NET Compact Framework. In 2005 Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft’s UK HQ in Thames Valley park in Reading.  He encouraged me to take part and so on 14th May 2005 I presented a talk previously given to the London .Net User Group on Simplifying access to multiple DB providers in .NET.  From that point on I definitely had the bug; presenting at DDD2, DDD3, groking at DDD4 and SQLBits I and after a break, DDD7, DDD Scotland and DDD8.  What definitely made me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers; Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay. During the first few DDD events I met the Dave McMahon and Richard Costall from NxtGenUG who made it easy to start presenting at their user groups.  Along the way I’ve met a load of great user group organisers; Guy Smith-Ferrier of the .Net Developer Network, Jimmy Skowronski of GL.Net and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG. Final thanks to those who suggested virtualisation as a topic ... Final thanks have to go the people who inspired me to create my Virtualisation for Developers talk.  
Toby Henderson (@holytshirt) ensured I took notice of Sun’s VirtualBox, Peter Ibbotson for being a fine sounding board at the Kew Railway over quite a few Adnam’s Broadside and to Guy Smith-Ferrier for allowing his user group to be the guinea pigs for the talk before it was seen at DDD7.  Thanks to all of you I now know much more about virtualisation than I would have thought possible and it continues to be great fun. Conclusion If this was an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far.  I’ll be doing my best to do justice to the MVP award and the UK community.  I’m fortunate in having a new employer who considers presenting at user groups as a good thing, so don’t expect me to stop any time soon. If you’ve never seen me in action, then you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here, http://www.craigmurphy.com/blog/?p=1591.  Also thanks to Craig Murphy’s fine video work you can also view my latest DDD8 presentation on Commercial Software Development, here, http://vimeo.com/9216563 P.S. If I’ve missed anyone out, do feel free to lambast me in comments, it’s your duty.

    Read the article

  • Oracle OpenWorld Preview: Let's Get Social and Interactive

    - by Christie Flanagan
    On this blog, we often write about getting social and interactive.  Usually, we’re talking about how to create a social business or how to make the customer experience more social and interactive.  Today’s topic is about getting social and interactive as well. But this time we’re talking about getting social and interactive the old fashioned way, face-to-face at Oracle OpenWorld with fellow Oracle WebCenter customers, partners and experts and the broader Oracle community.  Here are some great ways to get social at OpenWorld outside of the exhibition halls and meeting rooms: Oracle OpenWorld Welcome Reception - Sponsored by Fujitsu. Sunday, September 30, 7:00 p.m.–8:30 p.m. Yerba Buena Gardens & Howard Street Tent. You’ll definitely want to attend the Opening Ceremonies for Oracle OpenWorld 2012 on Sunday, September 30. Centered in Yerba Buena Gardens (YBG) and shimmying out to other venues, the Opening Ceremonies are not to be missed. Join other attendees for great food and drink, energizing music, networking opportunities, and more. While you’re at YBG (home of ORACLE TEAM USA’s America’s Cup Pavilion), be sure to meet the sailors who will be defending the 34th America’s Cup in 2013. Get a good look at the 161-year-old Trophy itself—the oldest trophy still being contested in international sport. And at the AC72 boat display, view a model of the largest wingsail ever built. Oracle WebCenter Customer Appreciation Reception. Tuesday, October 2, 6:30 p.m.—9:30 p.m. The Palace Hotel, Rallston Ballroom. Those Oracle WebCenter customers who’ve RSVP’d to attend the Oracle WebCenter Customer Appreciation Reception shouldn’t miss this private cocktail reception at one of San Francisco’s finest hotels. Sponsored by Oracle WebCenter partners Fishbowl Solutions, Fujitsu, Keste, Mythics, Redstone Content Solutions, TEAM Informatics, and TekStream, this evening will provide plenty of time to interact with other WebCenter customers, partners and employees over hors d'oeuvres and cocktails. Oracle Appreciation Event – Sponsored by CSC, Fujitsu and Intel. Wednesday, October 3, 7:30 p.m.—1:00 a.m. Treasure Island, San Francisco. On Wednesday night October 3, Treasure Island will be engineered to rock as the Oracle Appreciation Event gets revved up and attendees get rolling. As always at the Oracle Appreciation Event, there will be unlimited refreshments, fun and games, the most awesome views of San Francisco from just about anywhere, and top notch entertainment.  Past performers read like a veritable who’s who of the rock and roll elite. Join us—it's our way of saying thanks to you for supporting Oracle and our flagship conference. 
Complimentary shuttle service to and from Treasure Island will be provided, so all you have to worry about is having a rocking night of your own. Oracle OpenWorld Music FestivalSeptember 30-October 4, Check schedule for venues and times.Oracle presents the first annual Oracle OpenWorld Musical Festival, featuring some of today’s breakthrough musicians from around the country and the world including Macy Gray, Joss Stone, Jimmy Cliff and The Hives. It’s five nights of back-to-back performances in the heart of San Francisco. Registered Oracle conference attendees get free admission, so remember your badge when you head to a show. With limited space at some venues, these concerts are first-come, first-served. So mark your calendars and get ready for the music to begin. See you there!I hope this give you an idea of the many opportunities to socialize and interact with the Oracle community at OpenWorld, and if you’re a music lover like me, you’re in for a special treat as we debut our first annual Oracle OpenWorld Music Festival.  Check out the links below for more information on these events and the many featured performers: Reflections from the Young Prisms A Brief Soul Session with Joss Stone Mixing It Up with Blues Mix Red Meat’s Music is Rare and Well Done The English Beat’s Dave Wakeling Gets Philosophical Top Ten Reasons to Attend the Oracle Appreciation Event There’s Magic in the Air, There’ll Be Music Everywhere Looking forward to seeing you at OpenWorld!

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars that is useful, too. I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry average statistics from a credible source. EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer) so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best. Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that on average a software shops spends 5% (making up that number) of their total development budget (salaries being the large portion of the other 95%, for arguments sake), and I'm only spending 3%, it helps in the justification process. So, it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling. Next, let's examine the idea that even considering the relationship between a spend on developer salaries and developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on.". Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers. Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find such a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic. 
Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation. You can know if your budget is grossly off from what others are doing, and this can help you figure out why this is. So, I will concede the point that it would be better to find statistics that align to my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality because large enterprises have very heterogenous IT environments. But if I was going to pick one technology that is our focus that would be it. But, if you were to point me at some statistics that are related to a Linux shop doing embedded Java applications, I would still find it useful as a point of comparison. SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't so much care whether it is expressed as a percentage of total budget or as X dollars per dev or as Y percentage of salary. Any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.

    Read the article

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text, like entering text into a text field or something. I'm only writing my game for Windows. I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes it's the easiest way, but I don't like it. Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, passed it the Game's window handle, and implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN); though WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never arrived. So I had to abandon that idea, and besides, I was also referencing the whole of Windows Forms, which meant unnecessary memory footprint bloat. So my last idea was to create a thread-level, low-level keyboard hook. This has been the most successful so far. I get the WM_KEYDOWN message (I haven't tried WM_CHAR yet) and translate the virtual keycode to a char with the Win32 function MapVirtualKey. And I get my text! (I'm just printing with Debug.Write at the moment.) A couple of problems though. It's as if I have caps lock on, and an unresponsive shift key. (Of course it's not that; it's just that there is only one virtual key code per key, so translating it only has one output.) And it adds overhead, as it attaches itself to the Windows hook list and isn't as fast as I'd like it to be, but the slowness could be more due to Debug.Write. Has anyone else approached this and solved it, without having to resort to an on-screen keyboard? Or does anyone have further ideas for me to try? Thanks in advance. Note: this is cross-posted from the XNA Creators Forums, so if I get an answer there I'll post it here and vice versa. Question asked by Jimmy: Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes? My comment: It's because though you can read key states, you can't get access to typed text as it is typed by the user. So let me further clarify. I want to implement being able to read text input from the user as if they were typing into a textbox in Windows. The Keyboard and KeyboardState classes get the states of all keys, but I'd have to map each key and combination to its character representation. This falls over when the user doesn't use the same keyboard language as I do, especially with symbols (my double quotes is Shift + 2, while American keyboards have theirs somewhere near the return key).
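    For what it's worth, one commonly suggested Windows-only approach (not one of the poster's attempts, and it does pull System.Windows.Forms back in) is to subscribe to the KeyPress event of the WinForms control behind the XNA window; KeyPressEventArgs.KeyChar already reflects the user's keyboard layout and shift state. A rough sketch:

      using System.Text;
      using System.Windows.Forms;               // requires a reference to System.Windows.Forms
      using Microsoft.Xna.Framework;

      public class TextInputGame : Game
      {
          private readonly StringBuilder typedText = new StringBuilder();

          protected override void Initialize()
          {
              base.Initialize();
              // The XNA window is backed by a WinForms control on Windows,
              // so we can receive translated (layout-aware) character input.
              Control control = Control.FromHandle(Window.Handle);
              control.KeyPress += OnKeyPress;
          }

          private void OnKeyPress(object sender, KeyPressEventArgs e)
          {
              if (e.KeyChar == '\b')                    // backspace
              {
                  if (typedText.Length > 0) typedText.Length--;
              }
              else if (!char.IsControl(e.KeyChar))
              {
                  typedText.Append(e.KeyChar);          // '"' arrives correctly on any layout
              }
          }
      }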

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide, but with the development of MVC 2, even it seems outdated. It's a thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on - from all the blogs anyway - and I'd like to get some insight from the community. It's about editing (read: the Edit action). The bulk of material out there, tutorials and blogs, deals with creating and viewing the model. So while this question may not spell out a question, I hope to get some discussion going, contributing to my decision about the path of development I'm to take. My model represents a user with several fields like name, address and email. All the names, in fact, have one field each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time, for instance, your names. The user expands a form while the other fields are still visible above and below. So the form that is posted back contains a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it is to be shunned by serious MVC developers. I've been reading about some patterns and best practices and it seems that this is not in keeping with the paradigm of view model == view. Or have I got it wrong? Anyway, NerdDinner dictates using FormCollection and UpdateModel. All the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 was not discovered: UpdateModel does not work without a complete model in your FormCollection. The view model pattern receiving the most praise seems to be the dedicated view model that contains a custom view model entity, and it is the only one that my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, that may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model. In keeping with these design paradigms, I am to create a view and an associated view model for each of my expanding sets of fields. The view models would each be nearly identical, differing only in the fields which are read-only, and the views would contain much repeated markup. This seems absurd to me. In the future I may want to be able to display two, more, or all sets of fields open simultaneously. I will most attentively read the discussion I hope to spark. Many thanks in advance.
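    To make the trade-off concrete, here is a rough, hypothetical sketch (type and repository names invented) of the dedicated-view-model approach for just the name fields, using the classic static AutoMapper API that was current around the MVC 2 era:

      using System.Web.Mvc;
      using AutoMapper;

      // A per-form view model containing only the editable subset
      public class EditNamesViewModel
      {
          public string FirstName { get; set; }
          public string MiddleName { get; set; }
          public string LastName { get; set; }
      }

      public class UserController : Controller
      {
          private readonly IUserRepository repository;   // hypothetical repository
          public UserController(IUserRepository repository) { this.repository = repository; }

          // e.g. once in Application_Start: Mapper.CreateMap<User, EditNamesViewModel>();
          public ActionResult EditNames(int id)
          {
              var user = repository.GetById(id);
              return View(Mapper.Map<User, EditNamesViewModel>(user));
          }

          [HttpPost]
          public ActionResult EditNames(int id, EditNamesViewModel form)
          {
              if (!ModelState.IsValid) return View(form);
              var user = repository.GetById(id);
              user.FirstName = form.FirstName;    // apply only the posted subset
              user.MiddleName = form.MiddleName;
              user.LastName = form.LastName;
              repository.Save(user);
              return RedirectToAction("Details", new { id });
          }
      }

    Each form posts exactly the fields its view model declares, which keeps UpdateModel and its partial-model pitfalls out of the picture entirely.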

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can submit a more elegant solution. 1) File Based: The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based: The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based: All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since all filesystem checks are slow. Is there any way that the server can auto-detect whether to use the development or production configs?
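    A fourth option worth considering (names below are only illustrative): keep the switch out of the codebase entirely by setting an environment variable per machine, for example SetEnv APP_ENV development in the development vhost only. Unlike the Host header, that value cannot be spoofed by a client, and nothing has to be edited before pushing. A minimal PHP sketch, assuming such a variable exists:

      <?php
      // bootstrap.php - pick the config set from a per-machine environment variable
      $environment = getenv('APP_ENV');
      if ($environment === false) {
          $environment = 'production';   // safe default when the variable is not set
      }

      $configs = array(
          'production'  => array('db_host' => 'db.example.com', 'debug' => false),
          'development' => array('db_host' => '127.0.0.1',      'debug' => true),
      );

      $config = $configs[$environment];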

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
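    For illustration, the lookup described above for the first schema might look like this in MySQL ('ericsson' standing in for the user's query text):

      SELECT i.id, i.price, n.name
      FROM items AS i
      JOIN item_names AS n ON n.id = i.item_name_id
      WHERE n.id IN (
          SELECT id           FROM item_names    WHERE name LIKE '%ericsson%'
          UNION
          SELECT item_name_id FROM item_synonyms WHERE name LIKE '%ericsson%'
      );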

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about. It is an effort with the MySQL Optimizer team that simplifies some common queries' query plans and dramatically shortens the query time. I will describe the issue, our solution and the end result, with some performance numbers to demonstrate our efforts in continuing to enhance the Full-Text Search capability. The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as an inverted index stored in auxiliary tables. The query, once parsed, is reinterpreted into several queries into the related auxiliary tables, and then the results are merged and consolidated to come up with the final result. So at the end of the query, we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had been initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples: Case 1: Query result ordered by rank with only the top N results: mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1; In this query, the user tries to retrieve a single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve and sort is quite unnecessary given that InnoDB already has the answer. In a real-life case, a user could have millions of rows, so in the old scheme, it would retrieve millions of rows' rankings and sort them, even if our FTS had already found there are only 3 matched rows. Apparently, the million-ranking retrieval is done in vain. In the above case, it should just ask for the 3 matched rows' rankings; all other rows' rankings are 0. If it wants the top ranking, then it can just get the first record from our already sorted result. Case 2: Select Count(*) on matching records: mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE); In this case, the InnoDB search can find the matching rows quickly and will have all the matching rows. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and a count was produced later. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records. The only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big the difference can be: mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people')  +----------+ | count(*) | +----------+ | 666877 | +----------+ 1 row in set (16 min 17.37 sec) So the query took almost 16 minutes. Let's see how long it takes InnoDB to come up with the result. 
    In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this will print out extra query info: Error log: keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10 ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10 The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time was wasted on unneeded row fetching. The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include: 1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort to get the top matching results. 2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list have a ranking of 0 and need not be retrieved. And InnoDB has a count of the total matching records on hand; there is no need to recount. 3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself. 4) Narrow the search result early, to reduce user table access. If the user wants to get the top N matching records, we do not need to fetch all matching records from the user table. We should be able to first select the top N matching Doc IDs, and then fetch only the corresponding records with these Doc IDs. Performance Results and comparison with MyISAM: The result of this change is very obvious. I include six test results performed by Alexander Rubin, just to demonstrate how fast the InnoDB query now becomes when compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows, an approximately 16G table. The test was performed on a machine with one dual-core CPU, an SSD drive, 8G of RAM, and innodb_buffer_pool_size set to 8 GB. Table 1: SELECT with LIMIT CLAUSE mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10; InnoDB MyISAM Times Faster Time for the query 1.63 sec 3 min 26.31 sec 127 You can see that for this particular query (retrieve top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM. Table 2: SELECT COUNT QUERY mysql>select count(*) from si where match(si_title, si_text) against('family'); +----------+ | count(*) | +----------+ | 293955 | +----------+ InnoDB MyISAM Times Faster Time for the query 1.35 sec 28 min 59.59 sec 1289 In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour, that is, about 1289 times faster! 
Table 3: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.5 sec 5.05 sec 10.1 family film 0.95 sec 25.39 sec 26.7 Pizza restaurant orange county California 0.93 sec 32.03 sec 34.4 President united states of America 2.5 sec 36.98 sec 14.8 Table 4: SELECT title and text with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.61 sec 41.65 sec 68.3 family film 1.15 sec 47.17 sec 41.0 Pizza restaurant orange county california 1.03 sec 48.2 sec 46.8 President united states of america 2.49 sec 44.61 sec 17.9 Table 5: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel  FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.5 sec 5.05 sec 10.1 family film 0.95 sec 25.39 sec 26.7 Pizza restaurant orange county califormia 0.93 sec 32.03 sec 34.4 President united states of america 2.5 sec 36.98 sec 14.8 Table 6: SELECT COUNT(*) mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.47 sec 82 sec 174.5 family film 0.83 sec 131 sec 157.8 Pizza restaurant orange county califormia 0.74 sec 106 sec 143.2 President united states of america 1.96 sec 220 sec 112.2  Again, table 3 to table 6 all showing InnoDB consistently outperform MyISAM in these queries by a large margin. It becomes obvious the InnoDB has great advantage over MyISAM in handling large data search. Summary: These results demonstrate the great performance we could achieve by making MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases that InnoDB’s result info have not been fully taken advantage of, which means we still have great room to improve. And we will continuously explore the area, and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012
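    As a side note, the diagnostic output quoted above can be reproduced on your own data; the variable mentioned in the post is a global server setting. A minimal sketch against the articles table from the earlier examples, covering the two query shapes this optimizer work targets:

      SET GLOBAL innodb_ft_enable_diag_print = ON;   -- extra FTS diagnostics go to the error log

      SELECT COUNT(*) FROM articles
      WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

      SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS score
      FROM articles ORDER BY score DESC LIMIT 1;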

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) Support for multiple table mapping 2) A background thread to auto-commit long running transactions 3) Enhancements in binlog performance  Let's go over each of these features one by one. And in the last section, we will go over a couple of internally performed performance tests. Support multiple table mapping: In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus being able to support access to different tables at run time, and having different mappings for different connections, becomes a very desirable feature. And in this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature. 1) "mapping name" in the "get" and "set" commands: In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each such registered table will have a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required for signaling a mapped table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name  or  get @@mapping_name set @@mapping_name Here is an example: Let's set up three tables in the "containers" table: The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1": INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");  Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo", and get the value corresponding to "key_a", the user will do: get @@setup_3.key_a (this will also output the value corresponding to key "key_a") or simply: get @@setup_3 Once this is done, this connection will switch to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue to issue regular commands like: get key_b  set key_c 0 0 7 These DMLs will all be directed to the "my_database/my_demo" table. And this also implies that different connections can have different bindings (to different tables). 2) Delimiter: For the delimiter "." 
that separates the "mapping name" and key value, we also added a configure option in the "config_options" system table with name of "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); So if user wants to change to a different delimiter, they can change it in the config_option table. 3) Default mapping: Once we have multiple table mapping, there should be always a "default" map setting. For this, we decided if there exists a mapping name of "default", then this will be chosen as default mapping. Otherwise, the first row of the containers table will chosen as default setting. Please note, user tables can be repeated in the "containers" table (for example, user wants to access different columns of the table in different settings), as long as they are using different mapping/configure names in the first column, which is enforced by a unique index. 4) bind command In addition, we also extend the protocol and added a bind command, its usage is fairly straightforward. To switch to "setup_3" mapping above, you simply issue: bind setup_3 This will switch this connection's InnoDB table to "my_database/my_demo" In summary, with this feature, you now can direct access to difference tables with difference session. And even a single connection, you can query into difference tables. Background thread to auto-commit long running transactions This is a feature related to the “batch” concept we discussed in earlier blogs. This “batch” feature allows us batch the read and write operations, and commit them only after certain calls. The “batch” size is controlled by the configure parameter “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size”. This could significantly boost performance. However, it also comes with some disadvantages, for example, you will not be able to view “uncommitted” operations from SQL end unless you set transaction isolation level to read_uncommitted, and in addition, this will held certain row locks for extend period of time that might reduce the concurrency. To deal with this, we introduce a background thread that “auto-commits” the transaction if they are idle for certain amount of time (default is 5 seconds). The background thread will wake up every second and loop through every “connections” opened by Memcached, and check for idle transactions. And if such transaction is idle longer than certain limit and not being used, it will commit such transactions. This limit is configurable by change “innodb_api_bk_commit_interval”. Its default value is 5 seconds, and minimum is 1 second, and maximum is 1073741824 seconds. With the help of such background thread, you will not need to worry about long running uncommitted transactions when set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long running transactions, and thus further increase the concurrency. Enhancement in binlog performance As you might all know, binlog operation is not done by InnoDB storage engine, rather it is handled in the MySQL layer. In order to support binlog operation through InnoDB Memcached, we would have to artificially create some MySQL constructs in order to access binlog handler APIs. In previous lab release, for simplicity consideration, we open and destroy these MySQL constructs (such as THD) for each operations. 
This required us to set the “batch” size always to 1 when binlog is on, no matter what “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size” are configured to. This put a big restriction on our capability to scale, and also there are quite a bit overhead in creating destroying such constructs that bogs the performance down. With this release, we made necessary change that would keep MySQL constructs as long as they are valid for a particular connection. So there will not be repeated and redundant open and close (table) calls. And now even with binlog option is enabled (with innodb_api_enable_binlog,), we still can batch the transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, thus scale the write/read performance. Although there are still overheads that makes InnoDB Memcached cannot perform as fast as when binlog is turned off. It is much better off comparing to previous release. And we are continuing optimize the solution is this area to improve the performance as much as possible. Performance Study: Amerandra of our System QA team have conducted some performance studies on queries through our InnoDB Memcached connection and plain SQL end. And it shows some interesting results. The test is conducted on a “Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)” machine with 16 GB Memory, Intel Xeon 2.0 GHz CPU X86_64 2 CPUs- 4 Core Each, 2 RAID DISKS (1027 GB,733.9GB). Results are described in following tables: Table 1: Performance comparison on Set operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8*** 5.6.7-RC* X faster Set (QPS) Set** 8 30,000 5,600 5.36 32 59,000 13,000 4.54 128 68,000 8,000 8.50 512 63,000 6.800 9.23 * mysql-5.6.7-rc-linux2.6-x86_64 ** The “set” operation when implemented in InnoDB Memcached involves a couple of DMLs: it first query the table to see whether the “key” exists, if it does not, the new key/value pair will be inserted. If it does exist, the “value” field of matching row (by key) will be updated. So when used in above query, it is a precompiled store procedure, and query will just execute such procedures. *** added “–daemon_memcached_option=-t8” (default is 4 threads) So we can see with this “set” query, InnoDB Memcached can run 4.5 to 9 time faster than MySQL server. Table 2: Performance comparison on Get operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8 5.6.7-RC* X faster Get (QPS) Get 8 42,000 27,000 1.56 32 101,000 55.000 1.83 128 117,000 52,000 2.25 512 109,000 52,000 2.10 With the “get” query (or the select query), memcached performs 1.5 to 2 times faster than normal SQL. Summary: In summary, we added several much-desired features to InnoDB Memcached in this release, allowing user to operate on different tables with this Memcached interface. We also now provide a background commit thread to commit long running idle transactions, thus allow user to configure large batch write/read without worrying about large number of rows held or not being able to see (uncommit) data. We also greatly enhanced the performance when Binlog is enabled. We will continue making efforts in both performance enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
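    For illustration only (reusing the setup_2/setup_3 registrations from the example above, and assuming the plugin is listening on the default memcached port), a client-side session combining the mapping switch and the bind command might look like this, with server responses omitted:

      $ telnet 127.0.0.1 11211
      get @@setup_3.key_a
      get key_b
      bind setup_2
      set key_c 0 0 7
      new_val
      quit

    The first get switches the connection to my_database/my_demo and reads key_a; key_b is then served from the same table; bind setup_2 switches to test/new_demo explicitly; and the set stores the 7-byte value new_val there.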

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    The Rails community has always inspired a lot of great ideas, and I have really come to love that community over time. One of these ideas is "fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and I really made some mistakes, because I made the controllers too fat. That makes a controller really dirty and very hard to maintain in the future, and it seriously violates the SRP principle, and KISS as well. But how can we achieve skinny controllers in ASP.NET MVC? That question became really clear to me after I read "Manning ASP.NET MVC 4 in Action". Simply put, we can separate the work into an ActionResult, and implement the logic and data persistence inside it. I had read about this on Jimmy Bogard's blog in the last 2 years, but at that time I never gave it much consideration. That's enough talking for now. I just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC, here. I used the EF framework to implement the persistence layer, and also used 2 free templates from the internet to build the UI for this sample. In this sample, I try to implement a simple magazine website that manages articles, categories and news. It is not finished at all at this time, but no problem, because I just want to show you how we can make the controller skinny. And I want to hear more about your ideas. First, I abstract the base ActionResult class like this:    public abstract class MyActionResult : ActionResult, IEnsureNotNull     {         public abstract void EnsureAllInjectInstanceNotNull();     }     public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller     {         protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;         protected readonly IExConfigurationManager ConfigurationManager;         protected ActionResultBase (Expression<Func<TController, ActionResult>> expr)             : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)         {         }         protected ActionResultBase(             IExConfigurationManager configurationManager,             Expression<Func<TController, ActionResult>> expr)         {             Guard.ArgumentNotNull(expr, "ViewNameExpression");             Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");             ViewNameExpression = expr;             ConfigurationManager = configurationManager;         }         protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)         {             var m = (MethodCallExpression)ViewNameExpression.Body;             if (m.Method.ReturnType != typeof(ActionResult))             {                 throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");             }             var result = new ViewResult             {                 ViewName = m.Method.Name             };             result.ViewData.Model = viewModel;             return result;         }         public override void ExecuteResult(ControllerContext context)         {             EnsureAllInjectInstanceNotNull();         }     } I also have an interface for validating all injected objects. This interface makes sure all the objects that I inject using the Autofac container are not null. 
The implementation of this as below public interface IEnsureNotNull     {         void EnsureAllInjectInstanceNotNull();     } Afterwards, I am just simple implementing the HomePageViewModelActionResult class like this public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller     {         #region variables & ctors         private readonly ICategoryRepository _categoryRepository;         private readonly IItemRepository _itemRepository;         private readonly int _numOfPage;         public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)             : this(viewNameExpression,                    DependencyResolver.Current.GetService<ICategoryRepository>(),                    DependencyResolver.Current.GetService<IItemRepository>())         {         }         public HomePageViewModelActionResult(             Expression<Func<TController, ActionResult>> viewNameExpression,             ICategoryRepository categoryRepository,             IItemRepository itemRepository)             : base(viewNameExpression)         {             _categoryRepository = categoryRepository;             _itemRepository = itemRepository;             _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();         }         #endregion         #region implementation         public override void ExecuteResult(ControllerContext context)         {             base.ExecuteResult(context);             var cats = _categoryRepository.GetCategories();             var mainViewModel = new HomePageViewModel();             var headerViewModel = new HeaderViewModel();             var footerViewModel = new FooterViewModel();             var mainPageViewModel = new MainPageViewModel();             headerViewModel.SiteTitle = "Magazine Website";             if (cats != null && cats.Any())             {                 headerViewModel.Categories = cats.ToList();                 footerViewModel.Categories = cats.ToList();             }             mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();             mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();             mainViewModel.Header = headerViewModel;             mainViewModel.DashBoard = new DashboardViewModel();             mainViewModel.Footer = footerViewModel;             mainViewModel.MainPage = mainPageViewModel;             GetViewResult(mainViewModel).ExecuteResult(context);         }         public override void EnsureAllInjectInstanceNotNull()         {             Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");             Guard.ArgumentNotNull(_itemRepository, "ItemRepository");             Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");         }         #endregion         #region private functions         private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()         {             var mainPageRightCol = new MainPageRightColumnViewModel();             mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();             mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();             return mainPageRightCol;         }         private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()         {             var mainPageLeftCol = new MainPageLeftColumnViewModel();             var items = _itemRepository.GetNewestItem(_numOfPage);             if (items != null && 
items.Any())             {                 var firstItem = items.First();                 if (firstItem == null)                     throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());                 if (firstItem.ItemContent == null)                     throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());                 mainPageLeftCol.FirstItem = firstItem;                 if (items.Count() > 1)                 {                     mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();                 }             }             return mainPageLeftCol;         }         #endregion     }  As the final step, I go into HomeController and add some lines of code like this: [Authorize]     public class HomeController : BaseController     {         [AllowAnonymous]         public ActionResult Index()         {             return new HomePageViewModelActionResult<HomeController>(x=>x.Index());         }         [AllowAnonymous]         public ActionResult Details(int id)         {             return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);         }         [AllowAnonymous]         public ActionResult Category(int id)         {             return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);         }     } As you can see, the code in the controller is really skinny, and all the logic has moved to the custom ActionResult classes. Some people have said that this just moves the code out of the controller and puts it in another class, so it is still hard to maintain; it looks like it just moves the complicated code from one place to another. But if you look at it and think about it in detail, you will see the split: code that deals with the HttpContext or similar concerns can stay in the controller, while other logic (such as processing business requirements, persisting data, ...) is delegated to the custom ActionResult class. Tell me more of your thinking; I am really willing to hear all of it from you guys. All the source code can be found here.

    Read the article

  • saslauthd + PostFix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to setup PostFix while using SASL (Cyrus variety preferred, I was using dovecot earlier but I'm switching from dovecot to courier so I want to use cyrus instead of dovecot) but I seem to be having issues. Here are the errors I'm receiving: ==> mail.log <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.info <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.warn <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure I tried $testsaslauthd -u xxxx -p xxxx 0: OK "Success." So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here is my files. Here is my main.cf for postfix: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname # This is already done in /etc/mailname #myhostname = crazyinsanoman.xxxxx.com smtpd_banner = $myhostname ESMTP $mail_name #biff = no # appending .domain is the MUA's job. #append_dot_mydomain = no readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # Relay smtp through another server or leave blank to do it yourself #relayhost = smtp.yourisp.com # Network details; Accept connections from anywhere, and only trust this machine mynetworks = 127.0.0.0/8 inet_interfaces = all #mynetworks_style = host #As we will be using virtual domains, these need to be empty local_recipient_maps = mydestination = # how long if undelivered before sending "delayed mail" warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
    smtpd_hard_error_limit = 12

    # Requirements for the HELO statement
    smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit

    # Requirements for the sender details
    smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit

    # Requirements for the connecting server
    smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org

    # Requirement for the recipient address
    smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit

    smtpd_data_restrictions = reject_unauth_pipelining

    # require proper helo at connections
    smtpd_helo_required = yes

    # waste spammers time before rejecting them
    smtpd_delay_reject = yes
    disable_vrfy_command = yes

    # not sure of the difference of the next two
    # but they are needed for local aliasing
    alias_maps = hash:/etc/postfix/aliases
    alias_database = hash:/etc/postfix/aliases

    # this specifies where the virtual mailbox folders will be located
    virtual_mailbox_base = /var/spool/mail/vmail

    # this is for the mailbox location for each user
    virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf

    # and this is for aliases
    virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf

    # and this is for domain lookups
    virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf

    # this is how to connect to the domains (all virtual, but the option is there)
    # not used yet
    # transport_maps = mysql:/etc/postfix/mysql_transport.cf

    # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones
    virtual_uid_maps = static:5000
    virtual_gid_maps = static:5000

    inet_protocols=all

    # Cyrus SASL Support
    smtpd_sasl_path = smtpd
    smtpd_sasl_local_domain = xxxxx.com

    #######################
    ## OLD CONFIGURATION ##
    #######################
    #myorigin = /etc/mailname
    #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain
    #mailbox_size_limit = 0
    #recipient_delimiter = +
    #html_directory = /usr/share/doc/postfix/html
    message_size_limit = 30720000
    #virtual_alias_domains =
    ##virtual_alias_maps = hash:/etc/postfix/virtual
    #virtual_mailbox_base = /home/vmail
    ##luser_relay = webmaster
    #smtpd_sasl_type = dovecot
    #smtpd_sasl_path = private/auth
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
    broken_sasl_auth_clients = yes
    #smtpd_sasl_authenticated_header = yes
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    #virtual_create_maildirsize = yes
    #virtual_maildir_extended = yes
    #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
    #virtual_transport = dovecot
    #dovecot_destination_recipient_limit = 1

    Here is my master.cf:

    #
    # Postfix master process configuration file. For details on the format
    # of the file, see the master(5) manual page (command: "man 5 master").
    #
    # Do not forget to execute "postfix reload" after editing this file.
    #
    # ==========================================================================
    # service type  private unpriv  chroot  wakeup  maxproc command + args
    #               (yes)   (yes)   (yes)   (never) (100)
    # ==========================================================================
    smtp       inet  n       -       -       -       -       smtpd
    submission inet  n       -       -       -       -       smtpd
      -o smtpd_tls_security_level=encrypt
      -o smtpd_sasl_auth_enable=yes
      -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #smtps     inet  n       -       -       -       -       smtpd
    #  -o smtpd_tls_wrappermode=yes
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #628       inet  n       -       -       -       -       qmqpd
    pickup     fifo  n       -       -       60      1       pickup
    cleanup    unix  n       -       -       -       0       cleanup
    qmgr       fifo  n       -       n       300     1       qmgr
    #qmgr      fifo  n       -       -       300     1       oqmgr
    tlsmgr     unix  -       -       -       1000?   1       tlsmgr
    rewrite    unix  -       -       -       -       -       trivial-rewrite
    bounce     unix  -       -       -       -       0       bounce
    defer      unix  -       -       -       -       0       bounce
    trace      unix  -       -       -       -       0       bounce
    verify     unix  -       -       -       -       1       verify
    flush      unix  n       -       -       1000?   0       flush
    proxymap   unix  -       -       n       -       -       proxymap
    proxywrite unix  -       -       n       -       1       proxymap
    smtp       unix  -       -       -       -       -       smtp
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay      unix  -       -       -       -       -       smtp
      -o smtp_fallback_relay=
    #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq      unix  n       -       -       -       -       showq
    error      unix  -       -       -       -       -       error
    retry      unix  -       -       -       -       -       error
    discard    unix  -       -       -       -       -       discard
    local      unix  -       n       n       -       -       local
    virtual    unix  -       n       n       -       -       virtual
    lmtp       unix  -       -       -       -       -       lmtp
    anvil      unix  -       -       -       -       1       anvil
    scache     unix  -       -       -       -       1       scache
    #
    # ====================================================================
    # Interfaces to non-Postfix software. Be sure to examine the manual
    # pages of the non-Postfix software to find out what options it wants.
    #
    # Many of the following services use the Postfix pipe(8) delivery
    # agent. See the pipe(8) man page for information about ${recipient}
    # and other message envelope options.
    # ====================================================================
    #
    # maildrop. See the Postfix MAILDROP_README file for details.
    # Also specify in main.cf: maildrop_destination_recipient_limit=1
    #
    maildrop   unix  -       n       n       -       -       pipe
      flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
    #
    # ====================================================================
    #
    # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
    #
    # Specify in cyrus.conf:
    #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
    #
    # Specify in main.cf one or more of the following:
    #  mailbox_transport = lmtp:inet:localhost
    #  virtual_transport = lmtp:inet:localhost
    #
    # ====================================================================
    #
    # Cyrus 2.1.5 (Amos Gouaux)
    # Also specify in main.cf: cyrus_destination_recipient_limit=1
    #
    cyrus      unix  -       n       n       -       -       pipe
      user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
    #
    # ====================================================================
    # Old example of delivery via Cyrus.
    #
    #old-cyrus unix  -       n       n       -       -       pipe
    #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
    #
    # ====================================================================
    #
    # See the Postfix UUCP_README file for configuration details.
    #
    uucp       unix  -       n       n       -       -       pipe
      flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
    #
    # Other external delivery methods.
    #
    ifmail     unix  -       n       n       -       -       pipe
      flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
    bsmtp      unix  -       n       n       -       -       pipe
      flags=Fq.
      user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
    scalemail-backend unix - n n - 2 pipe
      flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
    mailman    unix  -       n       n       -       -       pipe
      flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
    #dovecot   unix  -       n       n       -       -       pipe
    #  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Here is what I'm using for /etc/postfix/sasl/smtpd.conf:

    log_level: 7
    pwcheck_method: saslauthd
    pwcheck_method: auxprop
    mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5
    allow_plaintext: true
    auxprop_plugin: mysql
    sql_hostnames: 127.0.0.1
    sql_user: xxxxx
    sql_passwd: xxxxx
    sql_database: maildb
    sql_select: select crypt from users where id = '%u'

    As you can see, I'm trying to use MySQL as my authentication method. The password in 'users' is set through the 'ENCRYPT()' function. I also followed the steps in http://www.jimmy.co.at/weblog/?p=52 to rebuild /var/spool/postfix/var/run/saslauthd, as that seems to be a lot of people's problem, but that didn't help at all. Also, here is my /etc/default/saslauthd:

    START=yes
    DESC="SASL Authentication Daemon"
    NAME="saslauthd"

    # Which authentication mechanisms should saslauthd use? (default: pam)
    #
    # Available options in this Debian package:
    # getpwent  -- use the getpwent() library function
    # kerberos5 -- use Kerberos 5
    # pam       -- use PAM
    # rimap     -- use a remote IMAP server
    # shadow    -- use the local shadow password file
    # sasldb    -- use the local sasldb database file
    # ldap      -- use LDAP (configuration is in /etc/saslauthd.conf)
    #
    # Only one option may be used at a time. See the saslauthd man page
    # for more information.
    #
    # Example: MECHANISMS="pam"
    MECHANISMS="pam"

    MECH_OPTIONS=""
    THREADS=5
    OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    I had heard about potentially changing MECHANISMS to MECHANISMS="mysql", but obviously that didn't help, as the options listed above show, and I tried it anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to take to get this working =/ Anyone have any ideas?

    EDIT: Here is the error that is coming from auth.log ...
    I don't know if this will help at all, but here you go:

    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
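    One diagnostic step that sometimes narrows this kind of failure down (not part of the original question; the host name, port, and credentials below are placeholders): hand-craft the same AUTH PLAIN exchange a mail client would send, using the user/password pair that testsaslauthd already accepted, and offer it to the submission listener where smtpd_sasl_auth_enable is switched on in master.cf. The PLAIN token is simply the base64 encoding of NUL + username + NUL + password.

    # encode the token (example credentials only)
    $ printf '\0user@example.com\0secret' | base64
    AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0

    # talk to the submission port over STARTTLS and try the token by hand
    $ openssl s_client -starttls smtp -connect mail.example.com:587 -quiet
    EHLO client.example.com
    AUTH PLAIN AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0

    If Postfix still rejects a token that you know encodes a valid pair, that suggests the problem lies in the smtpd/SASL plumbing (smtpd.conf settings, the chrooted socket path, the mechanism list) rather than in the credentials themselves.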

