Search Results

Search found 249 results on 10 pages for 'jimmy zhang'.


  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of templates. The first are general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second are more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates. Now this is fine if I stick with the first scenario (layout templates) where there's simply a big chunk of editable region for good ole Jim to sprawl into; think of this as the 'body content' area. But I really do need these granular library (or snippet) objects to work in the same way. Unfortunately with my attempts so far they don't work as I'd have thought - perhaps for good reason? When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and have it simply respect the defined editable regions, as happens in the containing page's 'body content' editable region. Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files lemme know and I'll zip some up. Many thanks :) P.S. It is not a requirement that library objects must somehow inject their dependency files into the newly created page - I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

    Read the article

  • DDD: Persisting aggregates

    - by Mosh
    Hello, Let's consider the typical Order and OrderItem example. Assuming that OrderItem is part of the Order Aggregate, it can only be added via Order. So, to add a new OrderItem to an Order, we have to load the entire Aggregate via the Repository, add the new item to the Order object and persist the entire Aggregate again. This seems to have a lot of overhead. What if our Order has 10 OrderItems? Just to add a new OrderItem, not only do we have to read 10 OrderItems, we also have to re-insert all 10 of them again. (This is the approach that Jimmy Nilsson takes in his DDD book. Every time he wants to persist an Aggregate, he clears all the child objects and then re-inserts them. This can cause other issues, as the IDs of the children change every time because of the IDENTITY column in the database.) I know some people may suggest applying the Unit of Work pattern at the Aggregate Root so it keeps track of what has changed and commits only those changes. But this violates the Persistence Ignorance (PI) principle because persistence logic is leaking into the Domain Model. Has anyone thought about this before? Mosh
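    For readers skimming the question, a minimal C# sketch of the pattern being described may help; the Order/OrderItem names follow the question, while the repository interface and member names are illustrative assumptions rather than anything taken from Nilsson's book:

    using System;
    using System.Collections.Generic;

    // Order is the aggregate root: OrderItems can only be created and attached through it.
    public class Order
    {
        private readonly List<OrderItem> _items = new List<OrderItem>();

        public Order(Guid id) { Id = id; }

        public Guid Id { get; private set; }
        public IEnumerable<OrderItem> Items { get { return _items; } }

        public void AddItem(string product, int quantity)
        {
            _items.Add(new OrderItem(product, quantity));
        }
    }

    public class OrderItem
    {
        public OrderItem(string product, int quantity)
        {
            Product = product;
            Quantity = quantity;
        }

        public string Product { get; private set; }
        public int Quantity { get; private set; }
    }

    // Hypothetical repository: the aggregate is loaded and saved as a whole,
    // which is exactly the overhead the question is asking about.
    public interface IOrderRepository
    {
        Order GetById(Guid orderId);
        void Save(Order order);   // re-persists the Order and all of its OrderItems
    }

    Adding one item then looks like: load the Order with GetById, call order.AddItem(...), and hand the whole object back to Save - the 10 existing OrderItems travel along both ways.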

    Read the article

  • adding multiple <asp:Hyperlink>s into a repeater

    - by Colin Pickard
    I have a repeater control, and I want to put an unknown number of <asp:Hyperlink>s into the template, for example if you start with this: <asp:Repeater runat="server" ID="PetsRepeater"> <ItemTemplate> <%#DataBinder.Eval(Container.DataItem, "Owner")%> <%#this.ListPets(Container.DataItem)%> </ItemTemplate> </asp:Repeater> and in code behind: public partial class test1 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { PetOwner p = new PetOwner() { Owner = "Jimmy", PetNames = new List<String>() { "Nemo", "Dory" } }; List<PetOwner> PetOwners = new List<PetOwner>() { p }; PetsRepeater.DataSource = PetOwners; PetsRepeater.DataBind(); } } protected String ListPets(Object PetOwner) { StringBuilder sb = new StringBuilder(); foreach (String Name in ((PetOwner)PetOwner).PetNames) { if (sb.Length > 0) sb.Append(", "); sb.Append(Name); } return sb.ToString(); } } class PetOwner { public String Owner; public List<String> PetNames; } Now suppose instead of having the string "Nemo, Dory" in my repeater, I want something like this: <asp:HyperLink runat=server Text="Nemo" NavigateUrl="Pet.aspx?Name=Nemo" />, <asp:HyperLink runat=server Text="Dory" NavigateUrl="Pet.aspx?Name=Dory" /> How can I do that? I tried putting a foreach inline in the aspx page, but I get the error Invalid expression term 'foreach'.
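    One possible workaround, sketched here using the question's own markup and class names (an assumption for illustration, not necessarily the answer the poster accepted), is to keep the inline binding expression but have the code-behind helper emit the anchor markup itself rather than plain names:

    // Inside the test1 code-behind from the question; System.Text and System.Web are assumed.
    // The .aspx then binds with <%# this.ListPetLinks(Container.DataItem) %> in place of ListPets.
    protected String ListPetLinks(Object petOwner)
    {
        StringBuilder sb = new StringBuilder();
        foreach (String name in ((PetOwner)petOwner).PetNames)
        {
            if (sb.Length > 0) sb.Append(", ");

            // Build "<a href='Pet.aspx?Name=Nemo'>Nemo</a>" style links, encoding the name
            // for both the query string and the link text.
            sb.AppendFormat("<a href=\"Pet.aspx?Name={0}\">{1}</a>",
                HttpUtility.UrlEncode(name), HttpUtility.HtmlEncode(name));
        }
        return sb.ToString();
    }

    The <%# %> binding expression does not HTML-encode its output, so the anchors render as links. If actual server-side <asp:HyperLink> controls are required, a nested Repeater bound to PetNames is the usual alternative.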

    Read the article

  • sorting names in a linked list

    - by sil3nt
    Hi there, I'm trying to sort names into alphabetical order inside a linked list but am getting a runtime error. What have I done wrong here? #include <iostream> #include <string> using namespace std; struct node{ string name; node *next; }; node *A; void addnode(node *&listpointer,string newname){ node *temp; temp = new node; if (listpointer == NULL){ temp->name = newname; temp->next = listpointer; listpointer = temp; }else{ node *add; add = new node; while (true){ if(listpointer->name > newname){ add->name = newname; add->next = listpointer->next; break; } listpointer = listpointer->next; } } } int main(){ A = NULL; string name1 = "bob"; string name2 = "tod"; string name3 = "thomas"; string name4 = "kate"; string name5 = "alex"; string name6 = "jimmy"; addnode(A,name1); addnode(A,name2); addnode(A,name3); addnode(A,name4); addnode(A,name5); addnode(A,name6); while(true){ if(A == NULL){break;} cout<< "name is: " << A->name << endl; A = A->next; } return 0; }

    Read the article

  • Passing array values in an HTTP request in .NET

    - by Zarjay
    What's the standard way of passing and processing an array in an HTTP request in .NET? I have a solution, but I don't know if it's the best approach. Here's my solution: <form action="myhandler.ashx" method="post"> <input type="checkbox" name="user" value="Aaron" /> <input type="checkbox" name="user" value="Bobby" /> <input type="checkbox" name="user" value="Jimmy" /> <input type="checkbox" name="user" value="Kelly" /> <input type="checkbox" name="user" value="Simon" /> <input type="checkbox" name="user" value="TJ" /> <input type="submit" value="Submit" /> </form> The ASHX handler receives the "user" parameter as a comma-delimited string. You can get the values easily by splitting the string: public void ProcessRequest(HttpContext context) { string[] users = context.Request.Form["user"].Split(','); } So, I already have an answer to my problem: assign multiple values to the same parameter name, assume the ASHX handler receives it as a comma-delimited string, and split the string. My question is whether or not this is how it's typically done in .NET. What's the standard practice for this? Is there a simpler way to grab the multiple values than assuming that the value is comma-delimited and calling Split() on it? Is this how arrays are typically passed in .NET, or is XML used instead? Does anyone have any insight on whether or not this is the best approach?
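    For what it's worth, the comma-splitting assumption can be avoided entirely: Request.Form is a NameValueCollection, and its GetValues method returns each submitted "user" value as its own array element, so values that happen to contain commas survive intact. A minimal handler sketch (the class name is made up for illustration):

    using System.Web;

    public class UserHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Each checked "user" checkbox arrives as a separate entry;
            // GetValues returns null when no box was checked.
            string[] users = context.Request.Form.GetValues("user") ?? new string[0];

            context.Response.ContentType = "text/plain";
            context.Response.Write(string.Join(", ", users));
        }

        public bool IsReusable { get { return false; } }
    }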

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can suggest a more elegant solution. 1) File Based. The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone loading site.com could set the host to site.fakeTLD and the system would load my development config files instead of production. 2) Config Based. The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based. All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is NEVER ADDED to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since all filesystem checks are slow.

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. * Originally I created /dev/md0 but it has somehow changed itself into /dev/md_d0. # mount /opt mount: wrong fs type, bad option, bad superblock on /dev/md_d0, missing codepage or helper program, or other error (could this be the IDE device where you in fact use ide-scsi so that sr0 or sda or so is needed?) In some cases useful info is found in syslog - try dmesg | tail or so The RAID device appears to be inactive somehow: # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md_d0 : inactive sda4[0](S) 241095104 blocks # mdadm --detail /dev/md_d0 mdadm: md device /dev/md_d0 does not appear to be active. Question is, how to make the device active again (using mdadm, I presume)? (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab: /dev/md_d0 /opt ext4 defaults 0 0 So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup in this question. Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand. # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR <my mail address> # definitions of existing MD arrays # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200 In /proc/partitions the last entry is md_d0 at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.) Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan: ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...] and added it in /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who hel

    - by Liam Westley
    MVP in Virtual Machines Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year.  It was an honour to be nominated, and is a great reflection on the vibrancy of the UK user group community which made this possible. Virtualisation for developers, not just IT Pros. I consider it a special honour as my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than an IT Pro who manages data centre and network infrastructure.  I've been on a minor mission over the past few years to enthuse developers in a topic usually seen as only for network admins, but which can make their life a whole lot easier once understood properly. Continuous learning is fun. In 1676 the scientist Isaac Newton, in a letter to Robert Hooke, used the phrase (http://www.phrases.org.uk/meanings/268025.html) 'If I have seen a little further it is by standing on the shoulders of Giants'. I'm a nuclear physicist by education, so I am more than comfortable that any knowledge I have is based on the work of others.  Although far from a science, software development and IT are equally built upon the work of others. It's one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about.  I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have executed my responsibility to share any knowledge gained as widely as possible. Thanks to all those who helped – a big thanks to the UK user group community. I reckon this journey started in 2003 when I began attending a user group called the London .Net Users Group (http://www.dnug.org.uk) started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non-professional speakers to take the stage at the user group, and my first ever presentation was on 30th September 2003: SQL Server CE 2.0 and the .NET Compact Framework. In 2005 Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft's UK HQ in Thames Valley Park in Reading.  He encouraged me to take part, and so on 14th May 2005 I presented a talk previously given to the London .Net User Group on Simplifying access to multiple DB providers in .NET.  From that point on I definitely had the bug: presenting at DDD2 and DDD3, grokking at DDD4 and SQLBits I, and after a break, DDD7, DDD Scotland and DDD8.  What definitely made me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers; Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay. During the first few DDD events I met Dave McMahon and Richard Costall from NxtGenUG, who made it easy to start presenting at their user groups.  Along the way I've met a load of great user group organisers: Guy Smith-Ferrier of the .Net Developer Network, Jimmy Skowronski of GL.Net and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG. Final thanks to those who suggested virtualisation as a topic ... Final thanks have to go to the people who inspired me to create my Virtualisation for Developers talk.
Toby Henderson (@holytshirt) ensured I took notice of Sun’s VirtualBox, Peter Ibbotson for being a fine sounding board at the Kew Railway over quite a few Adnam’s Broadside and to Guy Smith-Ferrier for allowing his user group to be the guinea pigs for the talk before it was seen at DDD7.  Thanks to all of you I now know much more about virtualisation than I would have thought possible and it continues to be great fun. Conclusion If this was an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far.  I’ll be doing my best to do justice to the MVP award and the UK community.  I’m fortunate in having a new employer who considers presenting at user groups as a good thing, so don’t expect me to stop any time soon. If you’ve never seen me in action, then you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here, http://www.craigmurphy.com/blog/?p=1591.  Also thanks to Craig Murphy’s fine video work you can also view my latest DDD8 presentation on Commercial Software Development, here, http://vimeo.com/9216563 P.S. If I’ve missed anyone out, do feel free to lambast me in comments, it’s your duty.

    Read the article

  • Oracle Database, your best option for reducing IT costs

    - by Ivan Hassig
    By Victoria Cadavid, Sr. Sales Consultant, Oracle Direct. One of the main challenges in data center administration is reducing operating costs. As companies grow and technology vendors offer ever more robust solutions, keeping the balance between performance, support for the business and management of Total Cost of Ownership becomes an ever greater challenge for IT managers and data center administrators. The most common strategies for reducing data center administration costs, and an organization's technology management costs in general, focus on improving application performance, reducing hardware acquisition and administration costs, cutting storage costs, increasing productivity in database administration, and improving request handling and help desk services. However, cost reduction strategies should also address the costs associated with data loss and theft, regulatory compliance, value generation and business continuity, which are commonly treated as isolated initiatives that are not always undertaken with cost reduction in mind. A comprehensive IT cost reduction initiative must consider every factor that generates cost and can be optimized. In this article we want to approach technology cost reduction through the adoption of what experts consider the #1 database engine on the market. For years, the Oracle Database has been recognized for its speed, reliability, security and ability to support workloads ranging from highly transactional applications to data warehouses and even Big Data analysis, offering high performance and ease of administration. However, when we think about IT cost reduction projects, beyond the ability to support applications (even highly transactional ones) with high performance, we also think about automation, resource optimization, consolidation, virtualization and even more convenient licensing alternatives. The Oracle Database is designed to provide all the capabilities an IT organization needs to reduce costs, adapting to different business scenarios and to the capabilities and characteristics of each organization. Thus, in addition to the database engine, Oracle offers a series of solutions to optimize information management through mechanisms for optimizing storage use, business continuity, infrastructure consolidation, security and automatic administration, which promote better use of technology resources, offer advanced configuration options and reduce the time spent on the most common operational tasks. One of the database options that can be leveraged to reduce hardware costs is Oracle Real Application Clusters.
This clustering solution allows several servers (even low-cost servers) to work together to support Grids or private database clouds, providing the benefits of infrastructure consolidation, high-availability schemes, fast performance and on-demand scalability. It makes provisioning, database maintenance and the addition of new nodes faster and less risky, while leveraging investments in lower-cost servers. Another solution that promotes technology cost reduction is Oracle In-Memory Database Cache, which allows data to be stored and processed in the applications' memory, making the most of middle-tier processing resources, something especially valuable in highly transactional scenarios. In this way, processing resources are used to the fullest, avoiding unnecessary hardware growth. Another way to avoid unnecessary hardware investments while taking advantage of existing resources, even in scenarios of strong growth in data volumes, is data compression. Oracle Advanced Compression can compress different types of data by up to a factor of four, improving storage capacity without compromising application performance. Significant IT cost reductions can also be achieved on the storage side. Here, the Oracle Database's own technology offers Automatic Storage Management capabilities that not only distribute data optimally across physical disks to guarantee maximum performance, but also simplify provisioning and the removal of faulty disks, and provide balancing and mirroring, ensuring maximum use of each device and the availability of the data. Another solution that simplifies storage administration is Oracle Partitioning, a database option that allows large tables to be divided into smaller structures. This approach simplifies information lifecycle management and makes it possible, for example, to separate historical data (which generally becomes read-only and is not queried heavily) and move it to low-cost storage, keeping the active data on faster storage devices. Additionally, Oracle Partitioning simplifies the administration of databases with a very large number of records and improves database performance, thanks to the ability to optimize queries by using only the relevant partitions of a table or index during a search. Other factors that can generate unnecessary costs for IT departments are the loss, corruption or theft of data and the lack of application availability to support the business. To avoid these situations, which can lead to fines and to lost business and money, Oracle offers solutions that protect and audit the database, recover information in case of corruption or actions that compromise data integrity, and ensure that application data is available 24x7.
We already discussed the benefits of Oracle RAC for consolidation and for improving application performance; this solution is also extremely useful in scenarios where organizations want to guarantee high availability of information in the event of server failure or planned outages for maintenance. In addition to Oracle RAC, solutions such as Oracle Data Guard and Active Data Guard automatically replicate databases to a contingency data center, allowing immediate recovery from events that take an entire data center offline. Beyond that, Active Data Guard allows the standby database to be used for queries, improving application performance. On the security side, Oracle offers solutions such as Advanced Security, which encrypts data and the channels over which information is shared; Total Recall, which shows the changes made to the database at a given point in time, to prevent data loss and corruption; Database Vault, which restricts privileged users' access to confidential information; Audit Vault, which verifies who did what and when in an organization's databases; and Oracle Data Masking, which masks data to protect sensitive information and to comply with policies and regulations on confidential data, for example while applications move from development to production. As mentioned at the beginning, IT cost reduction initiatives should rest on strategies that consider the different factors that can generate cost overruns, the risk factors that can lead to unforeseen costs, the use of current resources to avoid unnecessary investments, and the optimization levers that allow existing investments to be exploited to the fullest. As we have seen, all of these initiatives can be addressed using Oracle technology at the database level. The most important thing is to detect the critical risk points, assess how well current resources are being used, and define the priorities of the organization and the IT department, in order to launch the initiatives that will gradually avoid cost overruns and unnecessary investments, providing greater support to the business and a significant impact on the organization's productivity. More information: http://www.oracle.com/lad/products/database/index.html?ssSourceSiteId=otnes 1 Source: Market Share: All Software Markets, Worldwide 2011 by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie Zhang. - March 29, 2012. 2 Big Data: information gathered from non-traditional sources such as blogs, social networks, email, sensors, photographs, video recordings, etc., which is usually unstructured and very large in volume.

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating their next step with regard to Identity Management? While all the stakeholders are well aware that the one-size-fits-all doesn’t apply to identity management, just as true is the fact that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, shopping cart style request catalog and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through. “Should I be considering an upgrade?” If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade: Does the current solution scale and meet your projected identity management needs? Does the current solution have a customer-friendly user interface? Are you completely meeting your compliance objectives? Are you still using spreadsheets? Does the current solution have the features you need? Is your total cost of ownership in line with well-performing similar sized companies in your industry? Can your organization support your existing Identity solution? Is your current product based solution well positioned to support your organization's tactical and strategic direction? Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration: Check the end of product support (for Sun or legacy OIM) schedule There are several new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc) Cost of ownership (for SIM customers)\ Customizations that need to be maintained during the upgrade Time/Cost to migrate now vs. waiting for next version If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details. Existing IDM infrastructure in place: In the past year and half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping cart style access request model and browser based personalization features may come in handy. Additionally, organizations that have a large number of applications that include ecommerce, LDAP stores, databases, UNIX systems, mainframes as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out of the box (OOB) support for multiple authoritative sources. 
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating custom connector using OOB wizard. Similarly, organizations in need of not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home grown or industry specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across the future upgrades which, is viewed as a big time and money saver by most of our clients. Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).   Integrated Deployment: The integrated deployment is typically where a client wants to split the implementation to where their current IDM is continuing to handle the front end workflows and OIM takes over the back office operations incrementally. Once all the back office operations are moved completely to OIM, the front end workflows are migrated to OIM. Parallel Deployment: This deployment is typically done where there can be a distinct line drawn between which functionality the platforms are supporting. For example the current IDM implementation is handling the password reset functionality while OIM takes over the access provisioning and RBAC functions. Cutover Deployment: A cutover deployment is typically recommended where a client has smaller less complex implementations and it makes sense to leverage the migration tools to move them over immediately. What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no ‘easy’ button. Organizations looking to upgrade or considering a new vendor should start by doing a mapping of their requirements with product features. The recommended approach is to take stock of both the short term and long term objectives, understand product features, future roadmap, maturity and level of commitment from the R&D and build the implementation plan accordingly. As we said, in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders and start building your implementation roadmap. In the next post we will discuss the best practices on R2 implementations. We will be covering the Do's and Don't's and share our thoughts on making implementations successful. Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   
Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving. Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Oracle OpenWorld Preview: Let's Get Social and Interactive

    - by Christie Flanagan
    On this blog, we often write about getting social and interactive.  Usually, we're talking about how to create a social business or how to make the customer experience more social and interactive.  Today's topic is about getting social and interactive as well. But this time we're talking about getting social and interactive the old fashioned way, face-to-face at Oracle OpenWorld with fellow Oracle WebCenter customers, partners and experts and the broader Oracle community.  Here are some great ways to get social at OpenWorld outside of the exhibition halls and meeting rooms: Oracle OpenWorld Welcome Reception - Sponsored by Fujitsu. Sunday, September 30, 7:00 p.m.–8:30 p.m., Yerba Buena Gardens & Howard Street Tent. You'll definitely want to attend the Opening Ceremonies for Oracle OpenWorld 2012 on Sunday, September 30. Centered in Yerba Buena Gardens (YBG) and shimmying out to other venues, the Opening Ceremonies are not to be missed. Join other attendees for great food and drink, energizing music, networking opportunities, and more. While you're at YBG (home of ORACLE TEAM USA's America's Cup Pavilion), be sure to meet the sailors who will be defending the 34th America's Cup in 2013. Get a good look at the 161-year old Trophy itself—the oldest trophy still being contested in international sport. And at the AC72 boat display, view a model of the largest wingsail ever built. Oracle WebCenter Customer Appreciation Reception. Tuesday, October 2, 6:30 p.m.–9:30 p.m., The Palace Hotel, Ralston Ballroom. Those Oracle WebCenter customers who've RSVP'd to attend the Oracle WebCenter Customer Appreciation Reception shouldn't miss this private cocktail reception at one of San Francisco's finest hotels. Sponsored by Oracle WebCenter partners Fishbowl Solutions, Fujitsu, Keste, Mythics, Redstone Content Solutions, TEAM Informatics, and TekStream, this evening will provide plenty of time to interact with other WebCenter customers, partners and employees over hors d'oeuvres and cocktails. Oracle Appreciation Event – Sponsored by CSC, Fujitsu and Intel. Wednesday, October 3, 7:30 p.m.–1:00 a.m., Treasure Island, San Francisco. On Wednesday night, October 3, Treasure Island will be engineered to rock as the Oracle Appreciation Event gets revved up and attendees get rolling. As always at the Oracle Appreciation Event, there will be unlimited refreshments, fun and games, the most awesome views of San Francisco from just about anywhere, and top notch entertainment.  Past performers read like a veritable who's who of the rock and roll elite. Join us—it's our way of saying thanks to you for supporting Oracle and our flagship conference.
Complimentary shuttle service to and from Treasure Island will be provided, so all you have to worry about is having a rocking night of your own. Oracle OpenWorld Music Festival. September 30–October 4, check the schedule for venues and times. Oracle presents the first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world including Macy Gray, Joss Stone, Jimmy Cliff and The Hives. It's five nights of back-to-back performances in the heart of San Francisco. Registered Oracle conference attendees get free admission, so remember your badge when you head to a show. With limited space at some venues, these concerts are first-come, first-served. So mark your calendars and get ready for the music to begin. See you there! I hope this gives you an idea of the many opportunities to socialize and interact with the Oracle community at OpenWorld, and if you're a music lover like me, you're in for a special treat as we debut our first annual Oracle OpenWorld Music Festival.  Check out the links below for more information on these events and the many featured performers:
Reflections from the Young Prisms
A Brief Soul Session with Joss Stone
Mixing It Up with Blues Mix
Red Meat's Music is Rare and Well Done
The English Beat's Dave Wakeling Gets Philosophical
Top Ten Reasons to Attend the Oracle Appreciation Event
There's Magic in the Air, There'll Be Music Everywhere
Looking forward to seeing you at OpenWorld!

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn’t a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. . The new age workforce is converging at a rapid pace with ever increasing demand to use diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with their release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct their business in a secure, efficient and effective manner. As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build the cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM’s basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access. Expanding the Enterprise Provisioning Footprint Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third party ticketing systems as “disconnected applications”. It allows for the existing business processes to be seamlessly integrated into the system and tracked throughout its lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out of box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions. Simplifying Access Requests Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes.. So, how do we improve the user experience, make it intuitive and personalized and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user’s job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. 
The ability to customize the user interface to suit your needs helps in exercising the business rules effectively and avoiding access proliferation within the organization. Saving Time with Templates A typical use case that is most beneficial to business users is flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one. Advanced Delegated Administration A key feature of any provisioning solution should be to empower each business unit in managing their own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate the administration overhead. To do so requires a federated services model so that the business units capable of shouldering the onus of user life cycle management of their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through easy to use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes ranging from the type of request or the risk level of the individual items requested. Closed-Loop Remediation Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. To prevent unauthorized access and where it already exists, the ability to detect and remediate it, are key requirements of an enterprise-grade proven solution. But the challenge with most solutions today is that some of this information still exists in silos. And when changes are made to systems directly, not all information is captured. With R2, oracle is offering a comprehensive Identity Governance solution that our customer organizations are leveraging for closed loop remediation that allows for an automated way for administrators to revoke unauthorized access. The change is automatically captured and the action noted for continued management. Conclusion While implementing provisioning solutions, it is important to keep the near term and the long term goals in mind. The provisioning solution should always be a part of a larger security and identity management program but with the ability to seamlessly integrate not only with the company’s infrastructure but also have the ability to leverage the information, business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so a multiple clients’ sites, this is the approach we recommend. 
In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrating from a different solution. Meet the Writers:   Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving. Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first of our partner blog series. November Mondays are all about PricewaterhouseCoopers' perespective on Identity and R2. In this series, we have identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real world identity use cases that helped shape the innovations in the recent R2 release and the implementation strategies that customers are employing today with expertise from PwC. Part 1: Looking at R2 for Customer Organizations In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that some of our customer organizations are implementing today and the business rationale for those. Oracle's R2 Security portfolio represents a solid step forward for a platform that is already market-leading.  Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base.  Oracle has taken their identity platform to the next level in their latest version, R2.  The new features include a customizable UI, a request catalog, flexible security, and enhancements for its connectors, and more. Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI.  Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users.  The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company's preference make Oracle's Identity Manager stand out among its peers.  Customers can also expect to carry UI configurations forward with minimal migration effort to future versions of OIM.  Oracle's flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations. Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM’s ‘Catalog.’  Customers familiar with Oracle Identity Analytics' 'Glossary' feature will be able to relate to the concept.  It will enable Roles, Entitlements, Accounts, and Resources to be requested through the out-of-the-box UI.  This is an industry-changing feature as customers can make the process to request access easier than ever.  For additional ease of use, Oracle has introduced a shopping cart style request interface that further simplifies the experience for end users.  Common requests can be setup as profiles to save time.  All of this is combined with the approval workflow engine introduced in R1 that provides the flexibility customers need to meet their compliance requirements. Enhanced security was also on the list of features Oracle wanted to deliver to its customers.  The new end-user UI provides additional granular access controls.  Common Help Desk use cases can be implemented with ease by updating the application profiles.  Access can be rolled out so that administrators can only manage a certain department or organization.  Further, OIM can be more easily configured to select which fields can be read-only vs. updated.  Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department.  Every customer has a different need for access and OIM now matches this need with a flexible security model. 
One of the important considerations when selecting an Identity Management platform is compatibility.  The number of supported platform connectors and how well it can integrate with non-supported platforms is a key consideration for selecting an identity suite.  Oracle has a long list of supported connectors.  When a customer has a requirement for a platform not on that list, Oracle has a solution too.  Oracle is introducing a simplified architecture called Identity Connector Framework (ICF), which holds the potential to simplify custom connectors.  Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser.  This is a useful feature that enables administrators to profile applications quickly as well as empowering the application owner to fulfill requests from their web browser.  Support will still be available for connectors based on previous versions in R2. Oracle Identity Manager's new R2 version has delivered many new features customers have been asking for.  Oracle has matured their platform with R2, making it a truly distinctive platform among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new Enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving those. Meet the Writers Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years. Praveen Krishna is a Manager in the Advisory  Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving.

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars that is useful, too. I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry average statistics from a credible source. EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer) so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best. Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that on average a software shops spends 5% (making up that number) of their total development budget (salaries being the large portion of the other 95%, for arguments sake), and I'm only spending 3%, it helps in the justification process. So, it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling. Next, let's examine the idea that even considering the relationship between a spend on developer salaries and developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on.". Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers. Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find such a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic. 
Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation. You can know if your budget is grossly off from what others are doing, and this can help you figure out why this is. So, I will concede the point that it would be better to find statistics that align to my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality because large enterprises have very heterogenous IT environments. But if I was going to pick one technology that is our focus that would be it. But, if you were to point me at some statistics that are related to a Linux shop doing embedded Java applications, I would still find it useful as a point of comparison. SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't so much care whether it is expressed as a percentage of total budget or as X dollars per dev or as Y percentage of salary. Any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.

    Read the article

  • CodePlex Daily Summary for Monday, December 20, 2010

    CodePlex Daily Summary for Monday, December 20, 2010Popular ReleasesLINQ to Twitter: LINQ to Twitter Beta v2.0.18: Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: Improvements for confirm, popup, popup form RenderView controller extension the user experience for crud in live demo has been substantially improved + added search all the features are shown in the live demoSQL Monitor: SQL Monitor 3.0 alpha 6: 1. add alert target with regexp support 2. fix actitivites not getting current server 3. allow to change connection in query 4. fix problem with open data tableGanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 include GanttPlanner.dll and also a Demo application.EnhSim: EnhSim 2.2.5 ALPHA: 2.2.5 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the new...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.2: A Couple of small changes required to add support for XBMC which requires -trailer be appended to trailer filenames 2. Changed Version Number to 1.2. Leaving it a beta just in case. 3. added assembly.directives.dll (needed that to successfully compile the application. Not sure if it has to be in the distribution package. 4. Leaving Version 1.1N2 CMS: 2.1 release candidate 3: * Web platform installer support available N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) A bunch of bugs were fixed File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the templ...TweetSharp: TweetSharp v2.0.0.0 - Preview 6: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numer...Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.Hacker Passwords: HackerPasswords.zip: Source code, executable and documentationSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. 
- Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. This is a minor release which contains fixe...LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010, ASP.NET MVC 3 (using Razor View Engine). this is still provider based (with 1 provider Sql) and this is still using WebService and Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. You can also install it via nuget (Install-Package lcsk) Let me know your feedbackOrchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New Projects.NET Micro Framework Binary Parsing and Processing Extensions: Prototype library and test cases for parsing binary data in the .NET Micro Framework. The goal is to improve processing speed, reduce GC work and simplify code.CarolLib Framework: A common library by Lance Zhang and Carol Xiong Include Helpers, Extensions, Configurations and Windows Service...Copy Document URL to Clipboard: This is a SharePoint 2010 Feature that adds a button in the Ribbon, allowing users to copy document urls (links) to clipboard.exTWEET download: <exTWEET> Download link for application to be downloaded & used from here unless it becomes publicly available. Then, this link will be removed. HandleSpy: Get Object information by its Handle.ibuy: helloLoA: PL: Podstawa gry bez tekstur, modeli, map i skryptów. EN: ---MP3Tunes Locker Windows Phone API: A simple REST client for connecting to your mp3tunes.com locker on your windows mobile phone. You can browse by artist and album and shuffle your songs.Netduino Library: Various useful classes to help develop for netduino.NReports: NReports is a reporting library using RDL file format, aiming at interoperatibility with SQL Reporting Services. It has began as a fork of the latest version 4.1 of fyiReporting (now defunct). Supported platforms are .NET 3.5 (Windows) and Mono 2.6 (Linux).qlink Framework: Qlink FrameWork development application that auto-generates a data layer object model based on your database schema and ...Rapid Image Editor: This is image editor which allows to edit images very rapidly and comfortably. Therefore it's named as Rapid.Serial-test with SHLCN: ?????????????????????????SharePoint Selbstbedienung: This is a project for all Microsoft SharePoint users. Whether you are a developer, administrator or end user, I want to provide you, complete NO-Deployment-solutions to create nice application for SharePoint 2010 or Office 365 plattform easily. TestRepo: test projectUmbraco Live At Education Calendar: Umbraco Live At Education CalendarUSS: Urban Statistical SystemVLBA Web Services: This is our seminar project to demonstrate the implementation and use of web services.Windows Phone 7 Isolated Storage Explorer: Isolated Storage Explorer for Windows Phone 7 emulator or device.Windows Phone 7 Logging Framework: Logging Framework for Windows Phone 7

    Read the article

  • CodePlex Daily Summary for Sunday, July 14, 2013

    CodePlex Daily Summary for Sunday, July 14, 2013Popular ReleasesVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...JSLint.NET: JSLint.NET 0.9.1 Beta: Version 0.9.1 Beta includes: JSLint.NET Console: Console help available using the /? switch (or no arguments). See the Console Options page for more. JSLint.NET for Visual Studio: Support linked JSLintNet.json file. More about this file on the JSLint.NET Settings page. Hide un-implemented menu items. Prefix JSLint errors in the task list. JSLint.NET Core: Allow ignoring of individual files in JSLintNet.json.TypePipe: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of the TypePipe for .NET 4.5. Find the complete release notes for the build here: Release Notes.re-linq: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of re-linq for .NET 4.5. Find the complete release notes for the build here: Release Notes To use re-linq with .NET 3.5, use a 1.13.x build.Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsSearch for Team Foundation Server workitems changes: Release 1.2: - Issue 1184 fixed, - Changeset's comboboxes sorted by Id (From : Ascending - To : Descending) - Application window iconImpulse Media Player: Impulse Media Player 3.5.0.1: Fixed a crash that occurs when copying data from lastfm to file panelPhoneGuitarTab: Release 1.1: Improved UX. Simplified navigation. More performance improvements coming soon.The GLMET Project: Get OS Version: --DataDevelop: Beta 0.6.5: Hotfix bug in Python Table.ImportAll method Updated External Libraries Fixes in Excel Exportation Modify ConnectionString refreshes the Properties Window correctlyUser Group Labs: User Group Data: 01.00.00: This release has the following updates and new features: Initial release with a minimal feature set Easy to use (just add to the social group details page) Edit common user group properties System Requirements DNN v07.00.02 or newer .Net Framework v4.0 or newerCarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130709): Product documentation and additional templates for this version is included in the zip archive, or if you want individual files, visit the http://www.carrotware.com website. Templates, in addition to those found in the download, can be downloaded individually from the website as well. If you are coming from earlier versions, make a precautionary backup of your existing website files and database. 
When installing the update, the database update engine will create the new schema items (if you...Dalmatian Build Script: Dalmatian Build 0.1.3.0: -Minor bug fixes -Added Choose<T> and ChooseYesNo to Console objectPushover.NET: Pushover.NET - Stable Release 10 July 2013: This is the first stable release of Pushover.NET. It includes 14 overloads of the SendNotification method, giving you total flexibility of sending Pushover notifications from your client. Assembly is built to .NET 2.x so it can be called from .NET 2.x, 3.x and 4.x projects. Also available is the Test Harness. This is a small GUI that you can use to test calls to Pushover.NET's main DLL. It's almost fully functional--the sound effects haven't been fully configured so no matter what you pick ...MCEBuddy 2.x: MCEBuddy 2.3.14: 2.3.14 BETA is available through the Early Access Program.Click here https://mcebuddy2x.codeplex.com/discussions/439439 for details and to get access to Early Access Program to download latest releases. Changelog for 2.3.14 (32bit and 64bit) NEW FEATURES: 1. ENHANCEMENTS: 2. Improved eMail notifications 3. Improved metrics details 4. Support for larger history (INI) file (about 45,000 sections, each section can have about 1500 entries) BUG FIXES: 5. Fix for extracting Movie release year from...Azure Depot: Flask: Flask Version 01LINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneModern UI for WPF: Modern UI 1.0.5: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE This version is backwards incompatible. ModernDialog.ShowMessage returns MessageBoxResult instead of bool? Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extensi...New ProjectsA Domain-Driven Design Framework for .Net: A .Net framework library for applying the domain-driven design approach to develop business software.a Linq to Workitem provider: Wilinq is a linq to workitem provider. It also contains WIQL to expression tree parser. 
Wilinq is based on the the fissum project source codeApprentice for WP: Apprentice for WPArgo New Deal: Data Type DBL DAL UI ToolsC# Practice: C# PracticeDardemEvo: summaryDavid.A.Zhang: Personal class LibrayEnglish Practice Helper: English Practice Helper is a C# window form application for everyone want to practice writing,speaking,listening and reading skill with your OWN computerFinancialManagement: FinancialManagementGoAgent GUI: GoAgent??????。GoAgent: https://code.google.com/p/goagentIndustrial Programming: Industrial Programming approaches tips (it's old and in russian language)ISS.IR.RRN-MS: Summary Tany :PLifeDataManager: Web project to manage some dataMixERP - ERP Solution That Sucks Less: A humble ERP solution that does not scare the users, MixERP is a purely mult-establishment and multi-currency solution.Nokia Portal: Install Nokia, HTC and LG apps on any WP8 devicePenn State SWENG 581 Team 5 Su13.2: This project is an academic extension of the NClass project found at http://nclass.sourceforge.net for the purposes of software testing and quality assurance.Pomp: testProfessor Oak's Pokemon Library DotNet: The Professor Oak's Pokemon Library is a .NET class library that aims to help programmers, by providing different tools to modify the game memory.Pure Music Player: Pure Music PlayerRandomly Balanced Trees: C# Implementations of Treap and Skiplist data structures. Which are representations of randomly balanced binary trees.ReoScript: JavaScript-like script language engine for .NET application. Easy to plug in .NET program and make API extension for script. SQL Queries: This is for all developers help.SqlSetup: This project create SQL server database automatically. Truco Pythons: Truco Argentino (Argentine truc), is a card game developed in python by Argentine programmers of the UNGS (General Sarmiento National University). WebServer .NET: Projekt zawiera oprogramowanie i zestaw narzedzi do zarzadzania serwerem http. Posiada wiele funkcjonalnosci ulatwiajacych korzystanie i konfigurowanie serwera.workspaces: solr exampleWP8NativeAccess: Win32 API wrappers for Windows Phone 8. Intended to be used in WP8 WinRT apps. Includes FileSystem project.

    Read the article

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text, like entering text into a text field or something. I'm only writing my game for Windows. I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self-contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes it's the easiest way, but I don't like it. Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, passed it the Game's window handle, and implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN). Though the WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never did. So I had to abandon that idea, and besides, I was also referencing the whole of Windows Forms, which meant unnecessary memory footprint bloat. So my last idea was to create a thread-level, low-level keyboard hook. This has been the most successful so far. I get the WM_KEYDOWN message (I haven't tried WM_CHAR yet) and translate the virtual keycode to a char with the Win32 function MapVirtualKey. And I get my text! (I'm just printing with Debug.Write at the moment.) A couple of problems though: it's as if I have caps lock on, and an unresponsive shift key. (Of course it's not; it's just that there is only one virtual key code per key, so translating it only has one output.) It also adds overhead as it attaches itself to the Windows hook list and isn't as fast as I'd like it to be, but the slowness could be more due to Debug.Write. Has anyone else approached this and solved it, without having to resort to an on-screen keyboard? Or does anyone have further ideas for me to try? Thanks in advance. Note: this is cross-posted from the XNA Creators Forums, so if I get an answer there I'll post it here and vice versa.
    Question asked by Jimmy: Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes?
    My comment: It's because though you can read key states, you can't get access to typed text as it is typed by the user. So let me further clarify: I want to be able to read text input from the user as if they were typing into a textbox in Windows. The Keyboard and KeyboardState classes get the states of all keys, but I'd have to map each key and combination to its character representation. This falls over when the user doesn't use the same keyboard language as I do, especially with symbols (my double quote is Shift + 2, while American keyboards have theirs somewhere near the return key).
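
    For what it's worth, here is a rough sketch of the window-subclassing route discussed above: instead of a global keyboard hook, replace the game window's WndProc via SetWindowLong and read WM_CHAR, which already carries the layout- and shift-aware character. The class name GameTextInput is mine, it assumes a 32-bit process (a 64-bit build would need SetWindowLongPtr), and you would construct it with Game.Window.Handle; treat it as a starting point rather than a drop-in solution.

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    // Windows-only text capture for an XNA game: subclass the game window's WndProc
    // and read WM_CHAR, which already reflects shift state and the active keyboard layout.
    // Assumes a 32-bit process; a 64-bit build would need SetWindowLongPtr instead.
    public class GameTextInput : IDisposable
    {
        private delegate IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        private const int GWL_WNDPROC = -4;
        private const uint WM_CHAR = 0x0102;

        [DllImport("user32.dll", EntryPoint = "SetWindowLong")]
        private static extern IntPtr SetWindowLong(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

        [DllImport("user32.dll")]
        private static extern IntPtr CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        private readonly IntPtr _hWnd;
        private readonly IntPtr _oldWndProc;
        private readonly WndProc _hook;                     // kept alive so the delegate is not collected
        private readonly StringBuilder _buffer = new StringBuilder();

        public GameTextInput(IntPtr gameWindowHandle)       // pass Game.Window.Handle here
        {
            _hWnd = gameWindowHandle;
            _hook = HookProc;
            _oldWndProc = SetWindowLong(_hWnd, GWL_WNDPROC, Marshal.GetFunctionPointerForDelegate(_hook));
        }

        public string Text { get { return _buffer.ToString(); } }

        private IntPtr HookProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam)
        {
            if (msg == WM_CHAR)
            {
                var c = (char)wParam.ToInt32();
                if (c == '\b' && _buffer.Length > 0)
                    _buffer.Remove(_buffer.Length - 1, 1);  // backspace
                else if (!char.IsControl(c))
                    _buffer.Append(c);
            }
            // Always hand the message back to the original window procedure.
            return CallWindowProc(_oldWndProc, hWnd, msg, wParam, lParam);
        }

        public void Dispose()
        {
            SetWindowLong(_hWnd, GWL_WNDPROC, _oldWndProc); // restore the original WndProc
        }
    }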

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide, but with the development of MVC 2, even it seems outdated. It's a thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on - from all the blogs anyway - and I'd like to get some insight from the community. It's about Editing (read: Edit action). The bulk of material out there, tutorials and blogs, deals with creating and viewing the model. So while this question may not spell out a question, I hope to get some discussion going, contributing to my decision about the path of development I'm to take.
    My model represents a user with several fields like name, address and email. The names are, in fact, one field each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time, for instance, your names. The user expands a form while the other fields are still visible above and below. So the form that is posted back contains a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it is to be shunned by serious MVC developers. I've been reading about some patterns and best practices and it seems that this is not in keeping with the paradigm of viewmodel == view. Or have I got it wrong?
    Anyway, NerdDinner dictates using FormCollection and UpdateModel. All the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 was not discovered: UpdateModel does not work without a complete model in your form collection. The view model pattern receiving the most praise seems to be the dedicated view model, which contains a custom view model entity and is the only one my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, that may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model. In keeping with these design paradigms, I am to create a view and an associated view model for each of my expanding sets of fields. The view models would each be nearly identical, differing only in the fields which are read-only, the views also containing much repeated markup. This seems absurd to me. In the future I may want to be able to display two, more, or all sets of fields open simultaneously. I will most attentively read the discussion I hope to spark. Many thanks in advance.
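
    To make the "dedicated view model" option concrete, here is a minimal sketch of how the names-only form might look with the static AutoMapper API of that era. The User entity, EditNamesViewModel, and IUserRepository are placeholder names of mine, not taken from any particular sample; the point is that the posted subset is mapped onto a freshly loaded entity, so the untouched fields keep their current values.

    using System.Web.Mvc;
    using AutoMapper;

    // Hypothetical domain entity; only the names are editable in this particular form.
    public class User
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string MiddleName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public string Address { get; set; }
    }

    // Hypothetical repository abstraction.
    public interface IUserRepository
    {
        User Get(int id);
        void Save(User user);
    }

    // Dedicated view model for the "names" form only.
    public class EditNamesViewModel
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string MiddleName { get; set; }
        public string LastName { get; set; }
    }

    // Registered once at application start (static Mapper API of the MVC 2 era).
    public static class MappingConfig
    {
        public static void Register()
        {
            Mapper.CreateMap<User, EditNamesViewModel>();
            Mapper.CreateMap<EditNamesViewModel, User>();
        }
    }

    public class UserController : Controller
    {
        private readonly IUserRepository _users;

        public UserController(IUserRepository users) { _users = users; }

        public ActionResult EditNames(int id)
        {
            return View(Mapper.Map<User, EditNamesViewModel>(_users.Get(id)));
        }

        [HttpPost]
        public ActionResult EditNames(EditNamesViewModel model)
        {
            // Load the full entity, then overwrite only the posted subset;
            // email, address and the rest keep their existing values.
            var user = _users.Get(model.Id);
            Mapper.Map(model, user);
            _users.Save(user);
            return RedirectToAction("Details", new { id = user.Id });
        }
    }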

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can submit a more elegant solution.
    1) File Based: The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. Someone loading site.com could set the Host header to site.fakeTLD and then the system would load my development config files instead of production.
    2) Config Based: The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting.
    3) File System Check Based: All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since all filesystem checks are slow.
    Is there any way that the server can auto-detect whether to use the development or production configs?
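
    One more option worth weighing, sketched below in C# since the idea is language-neutral (in PHP the equivalent would be reading an environment variable with getenv): set an APP_ENV variable on each machine or in the vhost definition, and have the code fall back to production when the variable is missing, so a misconfigured box can never expose debug settings. The variable name and config keys here are made up for illustration.

    using System;
    using System.Collections.Generic;

    // Pick the config set from an APP_ENV environment variable defined per machine
    // (or per vhost / app pool), so nothing environment-specific is committed to the repo.
    public static class AppConfig
    {
        private static readonly IDictionary<string, string> Production = new Dictionary<string, string>
        {
            { "db.host", "db.internal.example.com" },   // placeholder values
            { "debug",   "false" }
        };

        private static readonly IDictionary<string, string> Development = new Dictionary<string, string>
        {
            { "db.host", "localhost" },
            { "debug",   "true" }
        };

        public static IDictionary<string, string> Current
        {
            get
            {
                // Default to production so a missing variable never enables debug settings.
                var env = Environment.GetEnvironmentVariable("APP_ENV") ?? "production";
                return env.Equals("development", StringComparison.OrdinalIgnoreCase)
                    ? Development
                    : Production;
            }
        }
    }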

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
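
    As a rough sketch of the search path the first schema enables, the lookup below resolves a user query against both item_names and item_synonyms and returns the matching item ids in one round trip. The connection type, connection string, and LIKE-based matching are placeholders; a real implementation would likely use the database you actually run and a full-text index rather than LIKE.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class ItemSearch
    {
        // Resolve a query against both canonical names and synonyms, then return the
        // matching item ids. The point is the single lookup spanning item_names and
        // item_synonyms, which also covers misspellings flagged with error = 1.
        public static IList<int> FindItemIds(string connectionString, string query)
        {
            const string sql = @"
                SELECT DISTINCT i.id
                FROM items i
                JOIN item_names n ON n.id = i.item_name_id
                LEFT JOIN item_synonyms s ON s.item_name_id = n.id
                WHERE n.name LIKE @q OR s.name LIKE @q";

            var ids = new List<int>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@q", "%" + query + "%");
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        ids.Add(reader.GetInt32(0));
                }
            }
            return ids;
        }
    }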

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we are continuing consolidating and developing InnoDB Full-Text Search feature. There is one recent improvement that worth blogging about. It is an effort with MySQL Optimizer team that simplifies some common queries’ Query Plans and dramatically shorted the query time. I will describe the issue, our solution and the end result by some performance numbers to demonstrate our efforts in continuing enhancement the Full-Text Search capability. The Issue: As we had discussed in previous Blogs, InnoDB implements Full-Text index as reversed auxiliary tables. The query once parsed will be reinterpreted into several queries into related auxiliary tables and then results are merged and consolidated to come up with the final result. So at the end of the query, we’ll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL’s optimizer and query processing had been initially designed for MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple examples: Case 1: Query result ordered by Rank with only top N results: mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1; In this query, user tries to retrieve a single record with highest ranking. It should have a quick answer once we have all the matching documents on hand, especially if there are ranked. However, before this change, MySQL would almost retrieve rankings for almost every row in the table, sort them and them come with the top rank result. This whole retrieve and sort is quite unnecessary given the InnoDB already have the answer. In a real life case, user could have millions of rows, so in the old scheme, it would retrieve millions of rows' ranking and sort them, even if our FTS already found there are two 3 matched rows. Apparently, the million ranking retrieve is done in vain. In above case, it should just ask for 3 matched rows' ranking, all other rows' ranking are 0. If it want the top ranking, then it can just get the first record from our already sorted result. Case 2: Select Count(*) on matching records: mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE); In this case, InnoDB search can find matching rows quickly and will have all matching rows. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and later comes up a count. In fact, there is no need for MySQL to fetch all rows, instead InnoDB already had all the matching records. The only thing need is to call an InnoDB API to retrieve the count The difference can be huge. Following query output shows how big the difference can be: mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people')  +----------+ | count(*) | +----------+ | 666877 | +----------+ 1 row in set (16 min 17.37 sec) So the query took almost 16 minutes. Let’s see how long the InnoDB can come up the result. 
In InnoDB, you can obtain extra diagnostic printout by turning on “innodb_ft_enable_diag_print”, this will print out extra query info: Error log: keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10 ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10 Output shows it only took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So large amount of time has been wasted on the un-needed row fetching. The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of savings from doing this include: 1) Avoid redundant sorting. Since InnoDB already sorted the result according to ranking. MySQL Query Processing layer does not need to sort to get top matching results. 2) Avoid row by row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list should all have ranking of 0, and no need to be retrieved. And InnoDB has a count of total matching records on hand. No need to recount. 3) Covered index scan. InnoDB results always contains the matching records' Document ID and their ranking. So if only the Document ID and ranking is needed, there is no need to go to user table to fetch the record itself. 4) Narrow the search result early, reduce the user table access. If the user wants to get top N matching records, we do not need to fetch all matching records from user table. We should be able to first select TOP N matching DOC IDs, and then only fetch corresponding records with these Doc IDs. Performance Results and comparison with MyISAM The result by this change is very obvious. I includes six testing result performed by Alexander Rubin just to demonstrate how fast the InnoDB query now becomes when comparing MyISAM Full-Text Search. These tests are base on the English Wikipedia data of 5.4 Million rows and approximately 16G table. The test was performed on a machine with 1 CPU Dual Core, SSD drive, 8G of RAM and InnoDB_buffer_pool is set to 8 GB. Table 1: SELECT with LIMIT CLAUSE mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10; InnoDB MyISAM Times Faster Time for the query 1.63 sec 3 min 26.31 sec 127 You can see for this particular query (retrieve top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM. Table 2: SELECT COUNT QUERY mysql>select count(*) from si where match(si_title, si_text) against('family‘); +----------+ | count(*) | +----------+ | 293955 | +----------+ InnoDB MyISAM Times Faster Time for the query 1.35 sec 28 min 59.59 sec 1289 In this particular case, where there are 293k matching results, InnoDB took only 1.35 second to get all of them, while take MyISAM almost half an hour, that is about 1289 times faster!. 
Table 3: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.5 sec 5.05 sec 10.1 family film 0.95 sec 25.39 sec 26.7 Pizza restaurant orange county California 0.93 sec 32.03 sec 34.4 President united states of America 2.5 sec 36.98 sec 14.8 Table 4: SELECT title and text with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.61 sec 41.65 sec 68.3 family film 1.15 sec 47.17 sec 41.0 Pizza restaurant orange county california 1.03 sec 48.2 sec 46.8 President united states of america 2.49 sec 44.61 sec 17.9 Table 5: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel  FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.5 sec 5.05 sec 10.1 family film 0.95 sec 25.39 sec 26.7 Pizza restaurant orange county califormia 0.93 sec 32.03 sec 34.4 President united states of america 2.5 sec 36.98 sec 14.8 Table 6: SELECT COUNT(*) mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10; Term InnoDB (time to execute) MyISAM(time to execute) Times Faster family 0.47 sec 82 sec 174.5 family film 0.83 sec 131 sec 157.8 Pizza restaurant orange county califormia 0.74 sec 106 sec 143.2 President united states of america 1.96 sec 220 sec 112.2  Again, table 3 to table 6 all showing InnoDB consistently outperform MyISAM in these queries by a large margin. It becomes obvious the InnoDB has great advantage over MyISAM in handling large data search. Summary: These results demonstrate the great performance we could achieve by making MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases that InnoDB’s result info have not been fully taken advantage of, which means we still have great room to improve. And we will continuously explore the area, and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desirable features that make InnoDB Memcached a competitive feature in more scenario. Notablely, they are 1) Support multiple table mapping 2) Added background thread to auto-commit long running transactions 3) Enhancement in binlog performance  Let’s go over each of these features one by one. And in the last section, we will go over a couple of internally performed performance tests. Support multiple table mapping In our earlier release, all InnoDB Memcached operations are mapped to a single InnoDB table. In the real life, user might want to use this InnoDB Memcached features on different tables. Thus being able to support access to different table at run time, and having different mapping for different connections becomes a very desirable feature. And in this GA release, we allow user just be able to do both. We will discuss the key concepts and key steps in using this feature. 1) "mapping name" in the "get" and "set" command In order to allow InnoDB Memcached map to a new table, the user (DBA) would still require to "pre-register" table(s) in InnoDB Memcached “containers” table (there is security consideration for this requirement). If you would like to know about “containers” table, please refer to my earlier blogs in blogs.innodb.com. Once registered, the InnoDB Memcached will then be able to look for such table when they are referred. Each of such registered table will have a unique "registration name" (or mapping_name) corresponding to the “name” field in the “containers” table.. To access these tables, user will include such "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key", prefix "@@" is required for signaling a mapped table change. The key and the "mapping name" are separated by a configurable delimiter, by default, it is ".". So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name  or  get @@mapping_name set @@mapping_name Here is an example: Let's set up three tables in the "containers" table: The first is a map to InnoDB table "test/demo_test" table with mapping name "setup_1" INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");  Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and that to table "mydatabase/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo", and get the value corresponding to “key_a”, user will do: get @@setup_3.key_a (this will also output the value that corresponding to key "key_a" or simply get @@setup_3 Once this is done, this connection will switch to "my_database/my_demo" table until another table mapping switch is requested. so it can continue issue regular command like: get key_b  set key_c 0 0 7 These DMLs will all be directed to "my_database/my_demo" table. And this also implies that different connections can have different bindings (to different table). 2) Delimiter: For the delimiter "." 
that separates the "mapping name" and key value, we also added a configure option in the "config_options" system table with name of "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); So if user wants to change to a different delimiter, they can change it in the config_option table. 3) Default mapping: Once we have multiple table mapping, there should be always a "default" map setting. For this, we decided if there exists a mapping name of "default", then this will be chosen as default mapping. Otherwise, the first row of the containers table will chosen as default setting. Please note, user tables can be repeated in the "containers" table (for example, user wants to access different columns of the table in different settings), as long as they are using different mapping/configure names in the first column, which is enforced by a unique index. 4) bind command In addition, we also extend the protocol and added a bind command, its usage is fairly straightforward. To switch to "setup_3" mapping above, you simply issue: bind setup_3 This will switch this connection's InnoDB table to "my_database/my_demo" In summary, with this feature, you now can direct access to difference tables with difference session. And even a single connection, you can query into difference tables. Background thread to auto-commit long running transactions This is a feature related to the “batch” concept we discussed in earlier blogs. This “batch” feature allows us batch the read and write operations, and commit them only after certain calls. The “batch” size is controlled by the configure parameter “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size”. This could significantly boost performance. However, it also comes with some disadvantages, for example, you will not be able to view “uncommitted” operations from SQL end unless you set transaction isolation level to read_uncommitted, and in addition, this will held certain row locks for extend period of time that might reduce the concurrency. To deal with this, we introduce a background thread that “auto-commits” the transaction if they are idle for certain amount of time (default is 5 seconds). The background thread will wake up every second and loop through every “connections” opened by Memcached, and check for idle transactions. And if such transaction is idle longer than certain limit and not being used, it will commit such transactions. This limit is configurable by change “innodb_api_bk_commit_interval”. Its default value is 5 seconds, and minimum is 1 second, and maximum is 1073741824 seconds. With the help of such background thread, you will not need to worry about long running uncommitted transactions when set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long running transactions, and thus further increase the concurrency. Enhancement in binlog performance As you might all know, binlog operation is not done by InnoDB storage engine, rather it is handled in the MySQL layer. In order to support binlog operation through InnoDB Memcached, we would have to artificially create some MySQL constructs in order to access binlog handler APIs. In previous lab release, for simplicity consideration, we open and destroy these MySQL constructs (such as THD) for each operations. 
This required us to set the “batch” size always to 1 when binlog is on, no matter what “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size” are configured to. This put a big restriction on our capability to scale, and also there are quite a bit overhead in creating destroying such constructs that bogs the performance down. With this release, we made necessary change that would keep MySQL constructs as long as they are valid for a particular connection. So there will not be repeated and redundant open and close (table) calls. And now even with binlog option is enabled (with innodb_api_enable_binlog,), we still can batch the transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, thus scale the write/read performance. Although there are still overheads that makes InnoDB Memcached cannot perform as fast as when binlog is turned off. It is much better off comparing to previous release. And we are continuing optimize the solution is this area to improve the performance as much as possible. Performance Study: Amerandra of our System QA team have conducted some performance studies on queries through our InnoDB Memcached connection and plain SQL end. And it shows some interesting results. The test is conducted on a “Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)” machine with 16 GB Memory, Intel Xeon 2.0 GHz CPU X86_64 2 CPUs- 4 Core Each, 2 RAID DISKS (1027 GB,733.9GB). Results are described in following tables: Table 1: Performance comparison on Set operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8*** 5.6.7-RC* X faster Set (QPS) Set** 8 30,000 5,600 5.36 32 59,000 13,000 4.54 128 68,000 8,000 8.50 512 63,000 6.800 9.23 * mysql-5.6.7-rc-linux2.6-x86_64 ** The “set” operation when implemented in InnoDB Memcached involves a couple of DMLs: it first query the table to see whether the “key” exists, if it does not, the new key/value pair will be inserted. If it does exist, the “value” field of matching row (by key) will be updated. So when used in above query, it is a precompiled store procedure, and query will just execute such procedures. *** added “–daemon_memcached_option=-t8” (default is 4 threads) So we can see with this “set” query, InnoDB Memcached can run 4.5 to 9 time faster than MySQL server. Table 2: Performance comparison on Get operations Connections 5.6.7-RC-Memcached-plugin ( TPS / Qps) with memcached-threads=8 5.6.7-RC* X faster Get (QPS) Get 8 42,000 27,000 1.56 32 101,000 55.000 1.83 128 117,000 52,000 2.25 512 109,000 52,000 2.10 With the “get” query (or the select query), memcached performs 1.5 to 2 times faster than normal SQL. Summary: In summary, we added several much-desired features to InnoDB Memcached in this release, allowing user to operate on different tables with this Memcached interface. We also now provide a background commit thread to commit long running idle transactions, thus allow user to configure large batch write/read without worrying about large number of rows held or not being able to see (uncommit) data. We also greatly enhanced the performance when Binlog is enabled. We will continue making efforts in both performance enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
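
    To illustrate how the multiple-table-mapping syntax above might be used from client code, here is a hedged sketch with the Enyim memcached client for .NET (any client that lets you pass raw key strings would do). It assumes the setup_3 mapping has already been registered in the containers table as shown earlier and that the plugin is listening on its default port; depending on the client's transcoder you may need to store raw bytes rather than serialized .NET strings.

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    class InnoDbMemcachedDemo
    {
        static void Main()
        {
            // The parameterless MemcachedClient() reads the enyim.com/memcached section
            // from app.config, which should point at the MySQL memcached plugin
            // (port 11211 by default) - an assumption, adjust to your setup.
            using (var client = new MemcachedClient())
            {
                // Prefix the key with @@mapping_name. to address the table registered
                // under "setup_3" in the containers table (my_database/my_demo).
                client.Store(StoreMode.Set, "@@setup_3.key_a", "value_a");

                // With a pooled client it is safest to keep prefixing every key,
                // since the mapping switch is tracked per server connection.
                Console.WriteLine(client.Get<string>("@@setup_3.key_a"));
            }
        }
    }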

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    Rails community are always inspire a lot of best ideas. I really love this community by the time. One of these is "Fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and really I did some miss-takes, because I made the controller so fat. That make controller is really dirty and very hard to maintain in the future. It is violate seriously SRP principle and KISS as well. But how can we achieve that in ASP.NET MVC? That question is really clear after I read "Manning ASP.NET MVC 4 in Action". It is simple that we can separate it into ActionResult, and try to implementing logic and persistence data inside this. In last 2 years, I have read this from Jimmy Bogard blog, but in that time I never had a consideration about it. That's enough for talking now. I just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC at here. I used EF framework at here for implementing persistence layer, and also use 2 free templates from internet to make the UI for this sample. In this sample, I try to implementing the simple magazine website that managing all articles, categories and news. It is not finished at all in this time, but no problems, because I just show you about how can we make the controller skinny. And I wanna hear more about your ideas. The first thing, I am abstract the base ActionResult class like this:    public abstract class MyActionResult : ActionResult, IEnsureNotNull     {         public abstract void EnsureAllInjectInstanceNotNull();     }     public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller     {         protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;         protected readonly IExConfigurationManager ConfigurationManager;         protected ActionResultBase (Expression<Func<TController, ActionResult>> expr)             : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)         {         }         protected ActionResultBase(             IExConfigurationManager configurationManager,             Expression<Func<TController, ActionResult>> expr)         {             Guard.ArgumentNotNull(expr, "ViewNameExpression");             Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");             ViewNameExpression = expr;             ConfigurationManager = configurationManager;         }         protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)         {             var m = (MethodCallExpression)ViewNameExpression.Body;             if (m.Method.ReturnType != typeof(ActionResult))             {                 throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");             }             var result = new ViewResult             {                 ViewName = m.Method.Name             };             result.ViewData.Model = viewModel;             return result;         }         public override void ExecuteResult(ControllerContext context)         {             EnsureAllInjectInstanceNotNull();         }     } I also have an interface for validation all inject objects. This interface make sure all inject objects that I inject using Autofac container are not null. 
The implementation of this as below public interface IEnsureNotNull     {         void EnsureAllInjectInstanceNotNull();     } Afterwards, I am just simple implementing the HomePageViewModelActionResult class like this public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller     {         #region variables & ctors         private readonly ICategoryRepository _categoryRepository;         private readonly IItemRepository _itemRepository;         private readonly int _numOfPage;         public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)             : this(viewNameExpression,                    DependencyResolver.Current.GetService<ICategoryRepository>(),                    DependencyResolver.Current.GetService<IItemRepository>())         {         }         public HomePageViewModelActionResult(             Expression<Func<TController, ActionResult>> viewNameExpression,             ICategoryRepository categoryRepository,             IItemRepository itemRepository)             : base(viewNameExpression)         {             _categoryRepository = categoryRepository;             _itemRepository = itemRepository;             _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();         }         #endregion         #region implementation         public override void ExecuteResult(ControllerContext context)         {             base.ExecuteResult(context);             var cats = _categoryRepository.GetCategories();             var mainViewModel = new HomePageViewModel();             var headerViewModel = new HeaderViewModel();             var footerViewModel = new FooterViewModel();             var mainPageViewModel = new MainPageViewModel();             headerViewModel.SiteTitle = "Magazine Website";             if (cats != null && cats.Any())             {                 headerViewModel.Categories = cats.ToList();                 footerViewModel.Categories = cats.ToList();             }             mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();             mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();             mainViewModel.Header = headerViewModel;             mainViewModel.DashBoard = new DashboardViewModel();             mainViewModel.Footer = footerViewModel;             mainViewModel.MainPage = mainPageViewModel;             GetViewResult(mainViewModel).ExecuteResult(context);         }         public override void EnsureAllInjectInstanceNotNull()         {             Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");             Guard.ArgumentNotNull(_itemRepository, "ItemRepository");             Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");         }         #endregion         #region private functions         private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()         {             var mainPageRightCol = new MainPageRightColumnViewModel();             mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();             mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();             return mainPageRightCol;         }         private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()         {             var mainPageLeftCol = new MainPageLeftColumnViewModel();             var items = _itemRepository.GetNewestItem(_numOfPage);             if (items != null && 
items.Any())             {                 var firstItem = items.First();                 if (firstItem == null)                     throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());                 if (firstItem.ItemContent == null)                     throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());                 mainPageLeftCol.FirstItem = firstItem;                 if (items.Count() > 1)                 {                     mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();                 }             }             return mainPageLeftCol;         }         #endregion     }  The final step is to go into HomeController and add a few lines of code like this [Authorize]     public class HomeController : BaseController     {         [AllowAnonymous]         public ActionResult Index()         {             return new HomePageViewModelActionResult<HomeController>(x=>x.Index());         }         [AllowAnonymous]         public ActionResult Details(int id)         {             return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);         }         [AllowAnonymous]         public ActionResult Category(int id)         {             return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);         }     } As you can see, the code in the controller is really skinny, and all the logic has moved to the custom ActionResult classes. Some people have said this just moves the code out of the controller and into another class, so it is still hard to maintain; it looks like it only moves the complicated code from one place to another. But if you look at it in detail, the split becomes clear: keep the code that deals with HttpContext and similar concerns in the controller, and delegate the other logic (such as business rules and persistence) to the custom ActionResult classes. Tell me your thinking; I am really willing to hear all of it from you. All source code can be found here.
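
    For completeness, the ActionResult classes above resolve their dependencies through DependencyResolver.Current, so the composition root has to register them somewhere. Below is a guess at what that Autofac wiring might look like; the concrete types CategoryRepository, ItemRepository and ExConfigurationManager are assumptions based on the interfaces used in the post, not taken from the linked source.

    using System.Reflection;
    using System.Web.Mvc;
    using Autofac;
    using Autofac.Integration.Mvc;

    public static class IocConfig
    {
        // Called once from Application_Start in Global.asax: IocConfig.Register();
        public static void Register()
        {
            var builder = new ContainerBuilder();

            // MVC controllers from this assembly.
            builder.RegisterControllers(Assembly.GetExecutingAssembly());

            // The services the custom ActionResults resolve through
            // DependencyResolver.Current.GetService<T>(). The concrete class
            // names here are assumptions, not taken from the linked source.
            builder.RegisterType<CategoryRepository>().As<ICategoryRepository>().InstancePerHttpRequest();
            builder.RegisterType<ItemRepository>().As<IItemRepository>().InstancePerHttpRequest();
            builder.RegisterType<ExConfigurationManager>().As<IExConfigurationManager>().SingleInstance();

            var container = builder.Build();
            DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
        }
    }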

    Read the article

  • saslauthd + PostFix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to set up Postfix with SASL (Cyrus variety preferred; I was using Dovecot earlier, but I'm switching from Dovecot to Courier, so I want to use Cyrus instead of Dovecot's SASL), but I seem to be having issues. Here are the errors I'm receiving:

==> mail.log <==
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

==> mail.info <==
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

==> mail.warn <==
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

I tried

$testsaslauthd -u xxxx -p xxxx
0: OK "Success."

So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but I can't seem to find where. Here are my files. First, my main.cf for Postfix:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
myorigin = /etc/mailname
# This is already done in /etc/mailname
#myhostname = crazyinsanoman.xxxxx.com
smtpd_banner = $myhostname ESMTP $mail_name
#biff = no
# appending .domain is the MUA's job.
#append_dot_mydomain = no
readme_directory = /usr/share/doc/postfix
# TLS parameters
smtpd_tls_cert_file = /etc/postfix/smtpd.cert
smtpd_tls_key_file = /etc/postfix/smtpd.key
smtpd_use_tls = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# Relay smtp through another server or leave blank to do it yourself
#relayhost = smtp.yourisp.com
# Network details; Accept connections from anywhere, and only trust this machine
mynetworks = 127.0.0.0/8
inet_interfaces = all
#mynetworks_style = host
#As we will be using virtual domains, these need to be empty
local_recipient_maps =
mydestination =
# how long if undelivered before sending "delayed mail" warning update to sender
delay_warning_time = 4h
# will it be a permanent error or temporary
unknown_local_recipient_reject_code = 450
# how long to keep message on queue before return as failed.
# some have 3 days, I have 16 days as I am backup server for some people
# whom go on holiday with their server switched off.
maximal_queue_lifetime = 7d
# max and min time in seconds between retries if connection failed
minimal_backoff_time = 1000s
maximal_backoff_time = 8000s
# how long to wait when servers connect before receiving rest of data
smtp_helo_timeout = 60s
# how many address can be used in one message.
# effective stopper to mass spammers, accidental copy in whole address list
# but may restrict intentional mail shots.
smtpd_recipient_limit = 16
# how many error before back off.
smtpd_soft_error_limit = 3
# how many max errors before blocking it.
smtpd_hard_error_limit = 12
# Requirements for the HELO statement
smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
# Requirements for the sender details
smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
# Requirements for the connecting server
smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
# Requirement for the recipient address
smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
smtpd_data_restrictions = reject_unauth_pipelining
# require proper helo at connections
smtpd_helo_required = yes
# waste spammers time before rejecting them
smtpd_delay_reject = yes
disable_vrfy_command = yes
# not sure of the difference of the next two
# but they are needed for local aliasing
alias_maps = hash:/etc/postfix/aliases
alias_database = hash:/etc/postfix/aliases
# this specifies where the virtual mailbox folders will be located
virtual_mailbox_base = /var/spool/mail/vmail
# this is for the mailbox location for each user
virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
# and this is for aliases
virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
# and this is for domain lookups
virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
# this is how to connect to the domains (all virtual, but the option is there)
# not used yet
# transport_maps = mysql:/etc/postfix/mysql_transport.cf
# Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
inet_protocols=all
# Cyrus SASL Support
smtpd_sasl_path = smtpd
smtpd_sasl_local_domain = xxxxx.com
#######################
## OLD CONFIGURATION ##
#######################
#myorigin = /etc/mailname
#mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain
#mailbox_size_limit = 0
#recipient_delimiter = +
#html_directory = /usr/share/doc/postfix/html
message_size_limit = 30720000
#virtual_alias_domains =
##virtual_alias_maps = hash:/etc/postfix/virtual
#virtual_mailbox_base = /home/vmail
##luser_relay = webmaster
#smtpd_sasl_type = dovecot
#smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes
#smtpd_sasl_authenticated_header = yes
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
#virtual_create_maildirsize = yes
#virtual_maildir_extended = yes
#proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
#virtual_transport = dovecot
#dovecot_destination_recipient_limit = 1

Here is my master.cf:

#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
smtp      inet  n       -       -       -       -       smtpd
submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#smtps    inet  n       -       -       -       -       smtpd
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628      inet  n       -       -       -       -       qmqpd
pickup    fifo  n       -       -       60      1       pickup
cleanup   unix  n       -       -       -       0       cleanup
qmgr      fifo  n       -       n       300     1       qmgr
#qmgr     fifo  n       -       -       300     1       oqmgr
tlsmgr    unix  -       -       -       1000?   1       tlsmgr
rewrite   unix  -       -       -       -       -       trivial-rewrite
bounce    unix  -       -       -       -       0       bounce
defer     unix  -       -       -       -       0       bounce
trace     unix  -       -       -       -       0       bounce
verify    unix  -       -       -       -       1       verify
flush     unix  n       -       -       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       -       -       -       smtp
# When relaying mail as backup MX, disable fallback_relay to avoid MX loops
relay     unix  -       -       -       -       -       smtp
  -o smtp_fallback_relay=
#  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       -       -       -       showq
error     unix  -       -       -       -       -       error
retry     unix  -       -       -       -       -       error
discard   unix  -       -       -       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       -       -       -       lmtp
anvil     unix  -       -       -       -       1       anvil
scache    unix  -       -       -       -       1       scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#   mailbox_transport = lmtp:inet:localhost
#   virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
cyrus     unix  -       n       n       -       -       pipe
  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix  -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp      unix  -       n       n       -       -       pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail    unix  -       n       n       -       -       pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -       n       n       -       -       pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n      n       -       2       pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -       n       n       -       -       pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
#dovecot  unix  -       n       n       -       -       pipe
#  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

Here is what I'm using for /etc/postfix/sasl/smtpd.conf:

log_level: 7
pwcheck_method: saslauthd
pwcheck_method: auxprop
mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5
allow_plaintext: true
auxprop_plugin: mysql
sql_hostnames: 127.0.0.1
sql_user: xxxxx
sql_passwd: xxxxx
sql_database: maildb
sql_select: select crypt from users where id = '%u'

As you can see, I'm trying to use MySQL as my authentication method. The password in 'users' is set through the 'ENCRYPT()' function. I also followed the method described at http://www.jimmy.co.at/weblog/?p=52 to recreate /var/spool/postfix/var/run/saslauthd, since that seems to be a lot of people's problem, but that didn't help at all.

Also, here is my /etc/default/saslauthd:

START=yes
DESC="SASL Authentication Daemon"
NAME="saslauthd"
# Which authentication mechanisms should saslauthd use? (default: pam)
#
# Available options in this Debian package:
# getpwent  -- use the getpwent() library function
# kerberos5 -- use Kerberos 5
# pam       -- use PAM
# rimap     -- use a remote IMAP server
# shadow    -- use the local shadow password file
# sasldb    -- use the local sasldb database file
# ldap      -- use LDAP (configuration is in /etc/saslauthd.conf)
#
# Only one option may be used at a time. See the saslauthd man page
# for more information.
#
# Example: MECHANISMS="pam"
MECHANISMS="pam"
MECH_OPTIONS=""
THREADS=5
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

I had heard that changing MECHANISMS to MECHANISMS="mysql" might help, but mysql is not one of the options listed above, and trying it anyway (in case the documentation was outdated) made no difference. So, I'm now at a loss... I have no idea where to go from here or what steps I need to take to get this working =/ Anyone have any ideas?

EDIT: Here is the error that is coming from auth.log...
    I don't know if this will help at all, but here you go:

Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
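    The question above already shows testsaslauthd succeeding, so a useful next step is often to exercise the SMTP AUTH path directly (for example with openssl s_client -starttls smtp -connect yourserver:587 and a hand-built AUTH PLAIN token) to see which layer rejects the credentials. The snippet below is only a sketch of the standard RFC 4616 encoding, written in C# to match the other examples on this page; the user name and password are placeholders, not values from the poster's setup.

using System;
using System.Text;

class AuthPlainToken
{
    static void Main()
    {
        // Placeholder credentials; substitute a real virtual-mailbox user and password.
        var user = "info@example.com";
        var password = "secret";

        // AUTH PLAIN expects base64 of: authzid NUL authcid NUL password (authzid left empty here).
        var raw = "\0" + user + "\0" + password;
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes(raw));

        Console.WriteLine("AUTH PLAIN " + token);
    }
}

Pasting the printed line into the openssl session after EHLO should return 235 if Postfix and SASL accept the credentials, or 535 if it reproduces the failure logged above.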

    Read the article
