Search Results

Search found 28517 results on 1141 pages for 'sql und pl sql'.


  • How to retrieve the size of a database in restoring mode

    - by Marc Wittke
    With pure SQL, how do you do it? I doubt it is possible, since even SQL Server Management Studio fails to show the size of such a database in the UI. Already tried: exec sp_helpdb 'DbInRecMode' ... won't show anything; exec sys.sp_helpfile 'DbInRecMode' ... fails with something like "file not found" (Msg 15325). The main pitfall seems to be that select * from DbInRecMode.dbo.sysfiles won't work while the database is in restoring mode. Any ideas?
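
    A sketch of one possible workaround (not from the original question): sys.master_files is a server-wide catalog view, so it can be queried without entering the database, even while that database is in RESTORING state. File sizes are stored as counts of 8 KB pages.

        -- Sum the file sizes recorded in the server-wide catalog view;
        -- size is a page count, so multiply by 8 and divide by 1024 for MB.
        SELECT DB_NAME(database_id) AS database_name,
               SUM(size) * 8 / 1024 AS size_mb
        FROM sys.master_files
        WHERE database_id = DB_ID('DbInRecMode')
        GROUP BY database_id;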

    Read the article

  • VMware CPU allocation for a spiking database server

    - by user1552172
    I have a database server with many poorly written queries that cause the SQL Server CPU to spike and then drop constantly (a massive rewrite from scratch is underway). I need to know whether letting the VM's CPU allocation expand as needed is best practice for a case like this. I am wondering if the ESXi platform can't expand as fast as the spikes happen. I am curious what best practice is for VM CPU allocation on a SQL Server (with horribly written queries).

    Read the article

  • How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via script/tool, to automatically generate many delete statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE that I need to delete from; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary delete statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
            OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO
        )
        AS
            SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME,
                   '(' || LTRIM (EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
              FROM (SELECT CON1.OWNER OWNER_1,
                           CON1.TABLE_NAME TABLE_NAME_1,
                           CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                           CON1.DELETE_RULE,
                           CON1.STATUS,
                           CON.TABLE_NAME,
                           CON.CONSTRAINT_NAME,
                           COL.POSITION,
                           COL.COLUMN_NAME
                      FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                     WHERE CON.OWNER = 'TABLE_OWNER'
                       AND CON.TABLE_NAME = 'TABLE_OWNED'
                       AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                       AND COL.TABLE_NAME = CON1.TABLE_NAME
                       AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                       --AND CON1.OWNER = CON.OWNER
                       AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                       AND CON1.CONSTRAINT_TYPE = 'R'
                     GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME,
                              CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME,
                              CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
             GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS. Please pay attention to this: http://stackoverflow.com/questions/485581/generate-delete-statement-from-foreign-key-relationships-in-sql-2008/2677145#2677145 . Another user has written it in SQL Server 2008; is anyone able to convert it to Oracle 10g PL/SQL? I am not able to... :-( This is the code written by that user in SQL Server 2008:

        DECLARE @COLUMN_NAME AS sysname
        DECLARE @TABLE_NAME AS sysname
        DECLARE @IDValue AS int

        SET @COLUMN_NAME = '<Your COLUMN_NAME here>'
        SET @TABLE_NAME = '<Your TABLE_NAME here>'
        SET @IDValue = 123456789

        DECLARE @sql AS varchar(max)

        ; WITH RELATED_COLUMNS AS
        (
            SELECT QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME) AS [OBJECT_NAME],
                   c.COLUMN_NAME
              FROM PBANKDW.INFORMATION_SCHEMA.COLUMNS AS c WITH (NOLOCK)
             INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLES AS t WITH (NOLOCK)
                ON c.TABLE_CATALOG = t.TABLE_CATALOG
               AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
               AND c.TABLE_NAME = t.TABLE_NAME
               AND t.TABLE_TYPE = 'BASE TABLE'
             INNER JOIN
             (
                 SELECT rc.CONSTRAINT_CATALOG, rc.CONSTRAINT_SCHEMA, lkc.TABLE_NAME, lkc.COLUMN_NAME
                   FROM PBANKDW.INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc WITH (NOLOCK)
                  INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE lkc WITH (NOLOCK)
                     ON lkc.CONSTRAINT_CATALOG = rc.CONSTRAINT_CATALOG
                    AND lkc.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
                    AND lkc.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
                  INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc WITH (NOLOCK)
                     ON rc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                    AND rc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                    AND rc.UNIQUE_CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                  INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE rkc WITH (NOLOCK)
                     ON rkc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                    AND rkc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                    AND rkc.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                  WHERE rkc.COLUMN_NAME = @COLUMN_NAME
                    AND rkc.TABLE_NAME = @TABLE_NAME
             ) AS j
                ON j.CONSTRAINT_CATALOG = c.TABLE_CATALOG
               AND j.CONSTRAINT_SCHEMA = c.TABLE_SCHEMA
               AND j.TABLE_NAME = c.TABLE_NAME
               AND j.COLUMN_NAME = c.COLUMN_NAME
        )
        SELECT @sql = COALESCE(@sql, '') + 'DELETE FROM ' + [OBJECT_NAME]
                    + ' WHERE ' + [COLUMN_NAME] + ' = ' + CONVERT(varchar, @IDValue)
                    + CHAR(13) + CHAR(10)
          FROM RELATED_COLUMNS

        PRINT @sql

    Thanks to Charles, this is the latest (not working) release of the software; I have added a parameter with the OWNER, because the referential integrity constraints propagate through about 5 other Oracle users (!!!):

        CREATE OR REPLACE PROCEDURE delete_cascade (
            parent_table         VARCHAR2,
            parent_table_owner   VARCHAR2)
        IS
            cons_name        VARCHAR2 (30);
            tab_name         VARCHAR2 (30);
            tab_name_owner   VARCHAR2 (30);
            parent_cons      VARCHAR2 (30);
            parent_col       VARCHAR2 (30);
            delete1          VARCHAR (500);
            delete2          VARCHAR (500);
            delete_command   VARCHAR (4000);

            CURSOR cons_cursor IS
                SELECT constraint_name, r_constraint_name, table_name, constraint_type
                  FROM all_constraints
                 WHERE constraint_type = 'R'
                   AND r_constraint_name IN (SELECT constraint_name
                                               FROM all_constraints
                                              WHERE constraint_type IN ('P', 'U')
                                                AND table_name = parent_table
                                                AND owner = parent_table_owner)
                   AND delete_rule = 'NO ACTION';

            CURSOR tabs_cursor IS
                SELECT DISTINCT table_name
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name;

            CURSOR child_cols_cursor IS
                SELECT column_name, position
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name
                   AND table_name = tab_name;
        BEGIN
            FOR cons IN cons_cursor
            LOOP
                cons_name := cons.constraint_name;
                parent_cons := cons.r_constraint_name;

                SELECT DISTINCT table_name, owner
                  INTO tab_name, tab_name_owner
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name;

                delete_cascade (tab_name, tab_name_owner);

                delete_command := '';
                delete1 := '';
                delete2 := '';

                FOR col IN child_cols_cursor
                LOOP
                    SELECT DISTINCT column_name
                      INTO parent_col
                      FROM all_cons_columns
                     WHERE constraint_name = parent_cons
                       AND position = col.position;

                    IF delete1 IS NULL THEN
                        delete1 := col.column_name;
                    ELSE
                        delete1 := delete1 || ', ' || col.column_name;
                    END IF;

                    IF delete2 IS NULL THEN
                        delete2 := parent_col;
                    ELSE
                        delete2 := delete2 || ', ' || parent_col;
                    END IF;
                END LOOP;

                delete_command :=
                       'delete from ' || tab_name_owner || '.' || tab_name
                    || ' where (' || delete1 || ') in (select ' || delete2
                    || ' from ' || parent_table_owner || '.' || parent_table || ');';

                INSERT INTO ris
                     VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);

                COMMIT;
            END LOOP;
        END;
        /

    In the cursor CONS_CURSOR I have added the condition AND delete_rule = 'NO ACTION' in order to avoid deletions in the case of referential integrity constraints with DELETE_RULE = 'CASCADE' or DELETE_RULE = 'SET NULL'. Now I have tried to turn the stored procedure into a stored function, but the delete statements are not correct:

        CREATE OR REPLACE FUNCTION deletecascade (
            parent_table         VARCHAR2,
            parent_table_owner   VARCHAR2)
            RETURN VARCHAR2
        IS
            cons_name                VARCHAR2 (30);
            tab_name                 VARCHAR2 (30);
            tab_name_owner           VARCHAR2 (30);
            parent_cons              VARCHAR2 (30);
            parent_col               VARCHAR2 (30);
            delete1                  VARCHAR (500);
            delete2                  VARCHAR (500);
            delete_command           VARCHAR (4000);
            AT_LEAST_ONE_ITERATION   NUMBER DEFAULT 0;

            CURSOR cons_cursor IS
                SELECT constraint_name, r_constraint_name, table_name, constraint_type
                  FROM all_constraints
                 WHERE constraint_type = 'R'
                   AND r_constraint_name IN (SELECT constraint_name
                                               FROM all_constraints
                                              WHERE constraint_type IN ('P', 'U')
                                                AND table_name = parent_table
                                                AND owner = parent_table_owner)
                   AND delete_rule = 'NO ACTION';

            CURSOR tabs_cursor IS
                SELECT DISTINCT table_name
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name;

            CURSOR child_cols_cursor IS
                SELECT column_name, position
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name
                   AND table_name = tab_name;
        BEGIN
            FOR cons IN cons_cursor
            LOOP
                AT_LEAST_ONE_ITERATION := 1;
                cons_name := cons.constraint_name;
                parent_cons := cons.r_constraint_name;

                SELECT DISTINCT table_name, owner
                  INTO tab_name, tab_name_owner
                  FROM all_cons_columns
                 WHERE constraint_name = cons_name;

                delete1 := '';
                delete2 := '';

                FOR col IN child_cols_cursor
                LOOP
                    SELECT DISTINCT column_name
                      INTO parent_col
                      FROM all_cons_columns
                     WHERE constraint_name = parent_cons
                       AND position = col.position;

                    IF delete1 IS NULL THEN
                        delete1 := col.column_name;
                    ELSE
                        delete1 := delete1 || ', ' || col.column_name;
                    END IF;

                    IF delete2 IS NULL THEN
                        delete2 := parent_col;
                    ELSE
                        delete2 := delete2 || ', ' || parent_col;
                    END IF;
                END LOOP;

                delete_command :=
                       'delete from ' || tab_name_owner || '.' || tab_name
                    || ' where (' || delete1 || ') in (select ' || delete2
                    || ' from ' || parent_table_owner || '.' || parent_table || ');'
                    || deletecascade (tab_name, tab_name_owner);

                INSERT INTO ris
                     VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);

                COMMIT;
            END LOOP;

            IF AT_LEAST_ONE_ITERATION = 1 THEN
                RETURN ' where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION;';
            ELSE
                RETURN NULL;
            END IF;
        END;
        /

    Please assume that V_CHICKEN and V_NATION are the criteria used to select the CHICKEN to delete from the root table; the condition on the root table is: where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION.
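
    One possible direction for making the generation recursive (a sketch only, not tested against the poster's schema): Oracle's hierarchical queries can walk the FK graph in ALL_CONSTRAINTS directly, alternating between P/U constraints and the R constraints that reference them, so the child tables come out in dependency order without hand-rolled recursion. CHICKEN is the root table from the question; the rest are standard data dictionary columns.

        SELECT LEVEL AS depth,
               owner,
               table_name,
               constraint_name
          FROM all_constraints
         WHERE constraint_type = 'R'          -- keep only the FK side: one row per child-table hop
         START WITH constraint_type IN ('P', 'U')
               AND table_name = 'CHICKEN'
        CONNECT BY NOCYCLE
               (constraint_type = 'R'
                AND r_owner = PRIOR owner
                AND r_constraint_name = PRIOR constraint_name)
            OR (constraint_type IN ('P', 'U')
                AND PRIOR constraint_type = 'R'
                AND owner = PRIOR owner
                AND table_name = PRIOR table_name)
         ORDER BY depth DESC;                 -- deepest children first = safe delete order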

    Read the article

  • Why isn't the backup file created when running sqlcmd from remote machine?

    - by Ed Gl
    I tried running sqlcmd from a remote host to do a simple backup of a SQL Server 2008 database. The command goes something like this: sqlcmd -S xxx.xxx.xxx.xx -U username -P some_password -Q "BACKUP DATABASE [db] TO DISK = 'c:\test_backup.bak' WITH FORMAT". I get a successful message, but the file isn't created. When I run this in Management Studio on the same machine, it works. I thought it was a permissions problem, but I'm using the same username in both cases. Any thoughts?
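
    A note beyond the question as asked: in a BACKUP DATABASE ... TO DISK statement, the path is resolved by the SQL Server service on the target server, not by the machine running sqlcmd, so 'c:\test_backup.bak' would be written on the server's own C: drive. A hedged sketch of backing up to a share on the client instead (the UNC path is hypothetical, and the SQL Server service account needs write access to it):

        BACKUP DATABASE [db]
        TO DISK = '\\clientmachine\backups\test_backup.bak'
        WITH FORMAT;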

    Read the article

  • Remote access to Microsoft Dynamics NAV (C/Side) with native non-SQL database

    - by Joannes Vermorel
    I am facing a company that has a fairly recent Microsoft Dynamics NAV (C/Side) setup that comes with a non-SQL storage system called the native database server. I need to connect remotely to this database and perform what would equate to SQL queries, with very modest needs (no joins, no complex filtering). I am rather ignorant of this technology; does someone know how to make remote queries to this ERP?

    Read the article

  • MOSS 2007 SP2 DB index maintenance

    - by Mike H
    I've read in the "About Service Pack 2 for SharePoint Products and Technologies" paper that SP2 includes an update to the Update Statistics timer job that causes SharePoint to run SQL Server's online index rebuild feature (p. 4). I'm uncertain of the terminology here, but is this the rebuild that SQL Server uses for minor fragmentation (up to around 40%) and that leaves the DB online? I'm also guessing that this will therefore not rebuild severely fragmented indexes, as I think that requires the DB to come offline. Can someone please confirm my understanding here?
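
    For reference, a sketch of what the two index maintenance options look like in T-SQL (the index and table names are made up; ONLINE = ON requires Enterprise Edition):

        -- Online rebuild: the index stays available while it is rebuilt.
        ALTER INDEX IX_Docs_SiteId ON dbo.AllDocs
        REBUILD WITH (ONLINE = ON);

        -- Reorganize: always online, lighter weight, often used for low fragmentation.
        ALTER INDEX IX_Docs_SiteId ON dbo.AllDocs
        REORGANIZE;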

    Read the article

  • Script to create a table and fields in SQL won't work

    - by Jacksta
    Warning, this is lengthy! Attack it if you're knowledgeable, or at least more than a newbie beginner like me. This script uses three files, as detailed below. It is supposed to create the table and fields from the form input. It gets to the end and shows "my_contacts has been created!", but when I go into phpMyAdmin the table has not been created.

    I have a file named show_createtable.html which is used to create a table in MySQL:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>

        <body>
        <h1>Step 1: Name and Number</h1>
        <form method="post" action="do_showfielddef.php" />
        <p><strong>Table Name:</strong><br />
        <input type="text" name="table_name" size="30" /></p>
        <p><strong>Number of fields:</strong><br />
        <input type="text" name="num_fields" size="30" /></p>
        <p><input type="submit" name="submit" value="go to step2" /></p>
        </form>
        </body>
        </html>

    This form posts to do_showfielddef.php:

        <?php
        //validate important input
        if ((!$_POST[table_name]) || (!$_POST[num_fields])) {
            header("location: show_createtable.html");
            exit;
        }

        //begin creating form for display
        $form_block = "
        <form action=\"do_createtable.php\" method=\"post\">
        <input name=\"table_name\" type=\"hidden\" value=\"$_POST[table_name]\">
        <table cellspacing=\"5\" cellpadding=\"5\">
        <tr>
        <th>Field Name</th><th>Field Type</th><th>Table Length</th><th>Primary Key?</th><th>Auto-Increment?</th>
        </tr>";

        //count from 0 until you reach the number of fields
        for ($i = 0; $i < $_POST[num_fields]; $i++) {
            $form_block .= "
            <tr>
            <td align=center><input type=\"texr\" name=\"field name[]\" size=\"30\"></td>
            <td align=center>
            <select name=\"field_type[]\">
            <option value=\"char\">char</option>
            <option value=\"date\">date</option>
            <option value=\"float\">float</option>
            <option value=\"int\">int</option>
            <option value=\"text\">text</option>
            <option value=\"varchar\">varchar</option>
            </select>
            </td>
            <td align=center><input type=\"text\" name=\"field_length[]\" size=\"5\"></td>
            <td aligh=center><input type=\"checkbox\" name=\"primary[]\" value=\"Y\"></td>
            <td aligh=center><input type=\"checkbox\" name=\"auto_increment[]\" value=\"Y\"></td>
            </tr>";
        }

        //finish up the form
        $form_block .= "
        <tr>
        <td align=center colspan=3><input type=\"submit\" value=\"create table\"></td>
        </tr>
        </table>
        </form>";
        ?>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Create a database table: Step 2</title>
        </head>
        <body>
        <h1>define fields for <? echo "$_POST[table_name]"; ?></h1>
        <? echo "$form_block"; ?>
        </body>
        </html>

    Which in turn creates the table and fields with this file, do_createtable.php:

        <?php
        //connect to database
        $connection = @mysql_connect("localhost", "user", "pass") or die(mysql_error());
        $db = @mysql_select_db($db_name, $connection) or die(mysql_error());

        //start creating the SQL statement
        $sql = "CREATE TABLE $_POST[table_name](";

        //continue the SQL statement for each new field
        for ($i = 0; $i < count($_POST[field_name]); $i++) {
            $sql .= $_POST[field_name][$i]." ".$_POST[field_type][$i];
            if ($_POST[auto_increment][$i] == "Y") {
                $additional = "NOT NULL auto_increment";
            } else {
                $additional = "";
            }
            if ($_POST[primary][$i] == "Y") {
                $additional .= ", primary key (".$_POST[field_name][$i].")";
            } else {
                $additional = "";
            }
            if ($_POST[field_length][$i] != "") {
                $sql .= " (".$_POST[field_length][$i].") $additional ,";
            } else {
                $sql .= " $additional ,";
            }
        }

        //clean up the end of the string
        $sql = substr($sql, 0, -1);
        $sql .= ")";

        //execute the query
        $result = mysql_query($sql, $connection) or die(mysql_error());

        //get a good message for display upon success
        if ($result) {
            $msg = "<p>".$_POST[table_name]." has been created!</p>";
        }
        ?>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Create A Database Table: Step 3</title>
        </head>
        <body>
        <h1>Adding table to <? echo "$db_name"; ?>...</h1>
        <? echo "$msg"; ?>
        </body>
        </html>

    Read the article

  • Emptying site collection recycle bin doesn’t make content DB smaller?

    - by Mike
    I deleted everything from a site collection recycle bin, remoted into the SQL Server the content database is located on, and went to view WSS_Content, and the sucker didn't get smaller. I had a good 2 or 3 GB of folders with files in the recycle bin. I just want to make sure that it is getting deleted. Is there something I am missing? Or does the SQL Server not update file sizes properly? MOSS 2007, IIS 6, Windows Server 2003.
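
    An aside that may explain the observation (an assumption, not from the question): SQL Server does not shrink data files when rows are deleted; the file keeps its allocated size and the freed pages become internal free space. One way to see how much of the file is actually in use:

        -- Reports reserved vs. unallocated space inside the content database:
        USE WSS_Content;
        EXEC sp_spaceused;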

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn't know exactly where to post it.) We have an ASP.NET Web API single-page application: a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment; that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While developing and testing on development boxes, using IIS Express on a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment (and we're having problems, or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users log in normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. So, I guess what they want is: corp domain (end users) <- firewall (open port 80) <- DMZ (web server running IIS) <- firewall (open port 80 or 1433????) <- process domain (IIS for Web API and SQL Server). We're developers and don't really understand all the networking aspects, so we're wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitems"? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deployment (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)? I'm sure there are other questions, but we'll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)

    Read the article

  • Data Sources (ODBC) hangs when trying to create a new database connection

    - by FredrikD
    When I try to create a new database connection, the Data Sources (ODBC) program hangs or takes a very long time to find the list of available SQL Servers. This only happens when there are other computers on the network; when my machine (a standard Windows 7 laptop) is alone, it works just fine. My question is: what should I look for in terms of SQL Server or ODBC configuration that would take away this random behaviour?

    Read the article

  • Maintenance backup in SQL Server 2005

    - by Visharth Asthana
    I changed my administrator and SQL passwords for the server, but when I try to create the backup in a maintenance plan I get this error message: "Exception has been thrown by the target of invocation (MSCORLIB)". A SQL statement was issued and failed. Please get back to me, because I applied the same thing to another server and it worked properly. Regards, Visharth

    Read the article

  • How to ensure dbs are all in sync when restored?

    - by blade
    Hi, in large server environments, how do you handle the issue of backing up SQL Server DBs which may not be in sync with other DBs they rely on? If I back up DB1 from a server, and it uses another DB which is not backed up, couldn't doing a restore when the DBs are in differing states cause problems? It seems like all dependent DBs should be backed up, regardless of size etc., but in my current job (where we're a datacentre company and I'm a .NET developer), I only back up some of several dependent DBs on a SQL Server instance. Thanks
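
    Related, as a sketch: SQL Server's marked transactions are the documented way to restore several related databases to a mutually consistent point; the mark is written to the log of every database the transaction touches. The table names below are hypothetical.

        -- Create a named mark in each database's log:
        BEGIN TRANSACTION SyncPoint WITH MARK 'Nightly sync point';
        UPDATE DB1.dbo.Heartbeat SET touched = GETDATE();
        UPDATE DB2.dbo.Heartbeat SET touched = GETDATE();
        COMMIT TRANSACTION SyncPoint;

        -- Later, restore each database to the same mark:
        RESTORE DATABASE DB1 FROM DISK = 'c:\backups\db1.bak' WITH NORECOVERY;
        RESTORE LOG DB1 FROM DISK = 'c:\backups\db1.trn'
            WITH STOPATMARK = 'SyncPoint', RECOVERY;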

    Read the article

  • How to write this function as a PL/pgSQL function?

    - by morpheous
    I am trying to implement some business logic in a PL/pgSQL function. I have hacked together some pseudocode that explains the type of business logic I want to include in the function. Note: this function returns a table, so I can use it in a query like:

        SELECT A.col1, B.col1
        FROM (SELECT * FROM some_table_returning_func(1, 1, 2, 3)) AS A, tbl2 AS B;

    The pseudocode of the PL/pgSQL function is below:

        CREATE FUNCTION some_table_returning_func(uid int, type_id int, filter_type_id int, filter_id int)
        RETURNS TABLE AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' + uid;
            ret TABLE;
        BEGIN
            switch (filter_type_id) {
                case 1:
                    switch (filter_id) {
                        case 1:
                            where_clause += ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = filter_id';
                            break;
                        //other cases follow ...
                    }
                    break;
                //other cases follow ...
            }

            // where clause has been built, now run query based on the type
            ret = SELECT [COL1, ... COLN] WHERE where_clause;

            IF (type_id <> 1) THEN
                return ret;
            ELSE
                return select * from another_table_returning_func(ret, 123);
            ENDIF;
        END;
        $$ LANGUAGE plpgsql;

    I have the following questions:

    1. How can I write the function correctly, i.e. EXECUTE the query with the generated WHERE clause, and return a table?
    2. How can I write a PL/pgSQL function that accepts a table and an integer and returns a table (another_table_returning_func)?
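
    For reference, a minimal sketch of the dynamic-query part (the table and column names are made up, and RETURN QUERY EXECUTE needs PostgreSQL 8.4+); in real code the filter values should go through quote_literal() or EXECUTE ... USING rather than plain string concatenation:

        CREATE OR REPLACE FUNCTION some_table_returning_func(uid int, filter_type_id int, filter_id int)
        RETURNS TABLE (col1 int, col2 text) AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' || uid;
        BEGIN
            -- Build the WHERE clause from the filter arguments
            IF filter_type_id = 1 AND filter_id = 1 THEN
                where_clause := where_clause || ' AND tbl1.type_id = ' || filter_id;
            END IF;

            -- Execute the dynamically built query and return its rows
            RETURN QUERY EXECUTE
                'SELECT t.col1, t.col2 FROM tbl1 t WHERE ' || where_clause;
        END;
        $$ LANGUAGE plpgsql;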

    Read the article

  • Complex SQL query... 3 tables and need the most popular in the last 24 hours using timestamps!

    - by Stefan
    Hey guys, I have 3 tables, each with a column which relates to one ID per row. I am looking for an SQL query which will check all 3 tables for any rows in the last 24 hours (86,400 seconds); I have stored timestamps in each table under the column time. After I get this query, I will be able to do the next step, which is to check how many of the IDs are recurring, so I can then sort by most popular in the array and limit it to the top 5... Any ideas welcome! :) Thanks in advance. Stefan
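
    A sketch of one way to do both steps at once (assuming the three tables are named t1, t2 and t3, each with columns id and time, where time is a Unix timestamp):

        SELECT id, COUNT(*) AS hits
        FROM (
            SELECT id, time FROM t1
            UNION ALL
            SELECT id, time FROM t2
            UNION ALL
            SELECT id, time FROM t3
        ) AS recent
        WHERE time > UNIX_TIMESTAMP() - 86400   -- rows from the last 24 hours
        GROUP BY id
        ORDER BY hits DESC                      -- most popular first
        LIMIT 5;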

    Read the article

  • In SQL, we can use "Union" to merge two tables. What are different ways to do "Intersection"?

    - by Jian Lin
    In SQL, there is an operator to "Union" two tables. In an interview, I was told: say one table has just 1 field with 1, 2, 7, 8 in it, and another table also has just 1 field with 2 and 7 in it; how do I get the intersection? I was stunned at first, because I never saw it that way. Later on, I found that it is actually a "Join" (inner join), which is just select * from t1, t2 where t1.number = t2.number (although the name "join" feels more like "union" rather than "intersect"). Another solution seems to be select * from t1 INTERSECT select * from t2, but it is not supported in MySQL. Are there different ways to get the intersection besides these two methods?
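
    Two more ways that work in MySQL (a sketch, using the one-column tables from the question):

        -- Subquery with IN:
        SELECT DISTINCT number FROM t1
        WHERE number IN (SELECT number FROM t2);

        -- Correlated EXISTS:
        SELECT DISTINCT t1.number FROM t1
        WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.number = t1.number);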

    Read the article

  • In SQL, a Join is actually an Intersection? And it is also a linkage or a "Sideway Union"?

    - by Jian Lin
    I always thought of a Join in SQL as some kind of linkage between two tables. For example: select e.name, d.name from employees e, departments d where e.deptID = d.deptID. In this case, it is linking two tables to show each employee with a department name instead of a department ID, kind of like a "linkage", or a "union sideways". But after learning about inner joins vs. outer joins, it seems that a Join (inner join) is actually an intersection. For example, when one table has the IDs 1, 2, 7, 8, while another table has the IDs 7 and 8 only, the way we get the intersection is select * from t1, t2 where t1.ID = t2.ID, which returns the two records 7 and 8. So it is actually an intersection: we have the "intersection" of two tables. Compare this with the "Union" operation on two tables. Can a Join be thought of as an "Intersection"? But what about the "linking" or "sideways union" aspect of it?

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given the requirement to add to the report we created earlier, the Northwind Product report, a column that will contain hyperlinks which are unique per row. The URLs will be RESTful, with the ProductID at the end. Clicking on the URL will take the user to a website like http://localhost/products/3, where 3 is the primary key of the product row clicked on.

    To start off, open the RDLC and add a new column to the product table. Add text to the header (Details) and row (Product Website). Right click on the row (not the header) and select "TextBox Properties". Select Action, then "Go to URL". You could hard code a URL here, but what we need is a URL that changes based on the ProductID. Click on the expression button (fx). The expression builder gives you access to several functions and constants, including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression:

        = "http://localhost/products/" & Fields!ProductID.Value

    Click OK to exit the Expression Builder. The report will not render yet, because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                rvProducts.LocalReport.EnableHyperlinks = true;
            }
        }

    We want our links to open in a new window, so set the HyperLinkTarget property of the ReportViewer control to "_blank". We are done adding hyperlinks to our report. Clicking on the link for each product pops open a new window, with the ProductID added at the end of the URL. Enjoy!

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts, but I found the following podcast on the topic (linked at the end of this post) and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture:

      - Business best practices and industry knowledge
      - Pre-built advanced analytics intended to predict future events before they happen (like the churn model shown below)
      - Oracle technology best practices to ensure optimal performance of the model

    All of this typically comes with a reduced time to implementation, or, as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM; OTN pages for OCDM and ORDM.

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating a report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report, and add a ReportViewer web control to render the report in an ASP.NET web page. The report will have a chart control, and different results will be generated by changing the filter criteria. At the end of the walkthrough, you should have a UI like the following. From the UI below, a user is able to view the product list and can see a chart with the sum of unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed. This demo uses Visual Studio 2010 RTM.

    This post is split into three parts. The last part has the sample code attached.

    Creating an ASP.NET report using Visual Studio 2010 - Part 2
    Creating an ASP.NET report using Visual Studio 2010 - Part 3

    Let's start by creating a new ASP.NET empty web application called "NorthwindReports".

    Creating the Data Access Layer (DAL)

    Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting "Add item..". Create a folder called "DAL". We will store all our data access methods and any data transfer objects in here. Right click on the DAL folder and add an ADO.NET Entity Data Model called Northwind. Select "Generate from database" and click Next. Create a connection to your database containing the Northwind sample database and click Next. From the table list, select Categories, Products and Suppliers and click Next. Our entity data model gets created and looks like this:

    Adding data transfer objects

    Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains the properties we need to render our report.

        public class ProductViewModel
        {
            public int? ProductID { get; set; }
            public string ProductName { get; set; }
            public System.Nullable<decimal> UnitPrice { get; set; }
            public string CategoryName { get; set; }
            public int? CategoryID { get; set; }
            public int? SupplierID { get; set; }
            public bool Discontinued { get; set; }
        }

    Add a SupplierViewModel class. This will be used to render the supplier DropDownList.

        public class SupplierViewModel
        {
            public string CompanyName { get; set; }
            public int SupplierID { get; set; }
        }

    Add a CategoryViewModel class.

        public class CategoryViewModel
        {
            public string CategoryName { get; set; }
            public int CategoryID { get; set; }
        }

    Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model. This step is not strictly needed, but it follows the repository pattern.

        interface IProductRepository
        {
            IQueryable<Product> GetProducts();
            IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID);
            IQueryable<SupplierViewModel> GetSuppliers();
            IQueryable<CategoryViewModel> GetCategories();
        }

    Create a ProductRepository class that implements the IProductRepository above. The methods available in this class are as follows:

      - GetProducts – returns an IQueryable of all products.
      - GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierID and CategoryID, if any, and then projects the result into the ProductViewModel.
      - GetSuppliers – returns an IQueryable of all suppliers projected into a SupplierViewModel.
      - GetCategories – returns an IQueryable of all categories projected into a CategoryViewModel.

        public class ProductRepository : IProductRepository
        {
            /// <summary>
            /// IQueryable of all Products
            /// </summary>
            /// <returns></returns>
            public IQueryable<Product> GetProducts()
            {
                var dataContext = new NorthwindEntities();
                var products = from p in dataContext.Products
                               select p;
                return products;
            }

            /// <summary>
            /// IQueryable of Products projected
            /// into the ProductViewModel class
            /// </summary>
            /// <returns></returns>
            public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID)
            {
                var projectedProducts = from p in GetProducts()
                                        select new ProductViewModel
                                        {
                                            ProductID = p.ProductID,
                                            ProductName = p.ProductName,
                                            UnitPrice = p.UnitPrice,
                                            CategoryName = p.Category.CategoryName,
                                            CategoryID = p.CategoryID,
                                            SupplierID = p.SupplierID,
                                            Discontinued = p.Discontinued
                                        };

                // Filter on SupplierID
                if (supplierID.HasValue)
                {
                    projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID);
                }

                // Filter on CategoryID
                if (categoryID.HasValue)
                {
                    projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID);
                }

                return projectedProducts;
            }

            public IQueryable<SupplierViewModel> GetSuppliers()
            {
                var dataContext = new NorthwindEntities();
                var suppliers = from s in dataContext.Suppliers
                                select new SupplierViewModel
                                {
                                    SupplierID = s.SupplierID,
                                    CompanyName = s.CompanyName
                                };
                return suppliers;
            }

            public IQueryable<CategoryViewModel> GetCategories()
            {
                var dataContext = new NorthwindEntities();
                var categories = from c in dataContext.Categories
                                 select new CategoryViewModel
                                 {
                                     CategoryID = c.CategoryID,
                                     CategoryName = c.CategoryName
                                 };
                return categories;
            }
        }

    Your solution explorer should look like the following. Build your project and make sure you don't get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard.

    Creating an ASP.NET report using Visual Studio 2010 - Part 2

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets way more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 FB likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, versus the tech blog with a greater number of likes but just average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. post 1234:

        after 15 min:  10 fb likes,  4 tweets,  6 g+
        after 30 min:  15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hours after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should the averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

    Schema 1

        DB Popular Posts
          Table: Post
            post_id (primary key (pk))
            url
            title
          Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but with the social activity broken down into 2 tables)

        DB Popular Posts
          Table: Post
            post_id (pk)
            url
            title
          Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp
          Table: Social Stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time, i.e. when making a measurement at 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3

        DB Popular Posts
          Table: Post
            post_id (pk)
            url
            title
          Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value
          Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value
          Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value
          Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
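
    For concreteness, a sketch of Schema 2 as portable SQL DDL (the column types are assumptions; the table and column names come from the question):

        CREATE TABLE post (
            post_id INT PRIMARY KEY,
            url     VARCHAR(2048),
            title   VARCHAR(255)
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT NOT NULL REFERENCES post (post_id),
            measured_at    TIMESTAMP NOT NULL   -- when the sample was taken
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT NOT NULL REFERENCES social_measurement (measurement_id),
            stat_type      VARCHAR(20) NOT NULL,  -- e.g. 'fb_like', 'tweet', 'g_plus'
            value          INT NOT NULL
        );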

    Read the article

  • Tellago is still hiring….

    - by gsusx
    Tellago's SOA practice is rapidly growing and we are still hiring. In that sense, we are looking for Connected Systems (WCF, BizTalk, WF) experts who are passionate about building game-changing solutions with the latest Microsoft technologies. You will be working alongside technology gurus like DonXml, Pablo Cibraro or Dwight Goins. If you are interested and not afraid of working with a bunch of crazy people ;) please drop me a line at jesus dot rodriguez at tellago dot com. Hope to hear from...(read more)

    Read the article

  • saslauthd + PostFix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to set up Postfix with SASL (Cyrus variety preferred; I was using Dovecot earlier, but I'm switching from Dovecot to Courier, so I want to use Cyrus instead of Dovecot), but I seem to be having issues. Here are the errors I'm receiving:

        ==> mail.log <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

        ==> mail.info <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

        ==> mail.warn <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

    I tried:

        $ testsaslauthd -u xxxx -p xxxx
        0: OK "Success."

    So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but I can't seem to find where. Here are my files. First, my main.cf for Postfix:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname

        # This is already done in /etc/mailname
        #myhostname = crazyinsanoman.xxxxx.com
        smtpd_banner = $myhostname ESMTP $mail_name
        #biff = no

        # appending .domain is the MUA's job.
        #append_dot_mydomain = no

        readme_directory = /usr/share/doc/postfix

        # TLS parameters
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # Relay smtp through another server or leave blank to do it yourself
        #relayhost = smtp.yourisp.com

        # Network details; Accept connections from anywhere, and only trust this machine
        mynetworks = 127.0.0.0/8
        inet_interfaces = all
        #mynetworks_style = host

        # As we will be using virtual domains, these need to be empty
        local_recipient_maps =
        mydestination =

        # how long if undelivered before sending "delayed mail" warning update to sender
        delay_warning_time = 4h

        # will it be a permanent error or temporary
        unknown_local_recipient_reject_code = 450

        # how long to keep message on queue before return as failed.
        # some have 3 days, I have 16 days as I am backup server for some people
        # whom go on holiday with their server switched off.
        maximal_queue_lifetime = 7d

        # max and min time in seconds between retries if connection failed
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s

        # how long to wait when servers connect before receiving rest of data
        smtp_helo_timeout = 60s

        # how many address can be used in one message.
        # effective stopper to mass spammers, accidental copy in whole address list
        # but may restrict intentional mail shots.
        smtpd_recipient_limit = 16

        # how many error before back off.
        smtpd_soft_error_limit = 3

        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12

        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit

        # Requirements for the sender details
        smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit

        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org

        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
        smtpd_data_restrictions = reject_unauth_pipelining

        # require proper helo at connections
        smtpd_helo_required = yes

        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes

        # not sure of the difference of the next two
        # but they are needed for local aliasing
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases

        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/vmail

        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf

        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf

        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf

        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf

        # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000

        inet_protocols = all

        # Cyrus SASL Support
        smtpd_sasl_path = smtpd
        smtpd_sasl_local_domain = xxxxx.com

        #######################
        ## OLD CONFIGURATION ##
        #######################
        #myorigin = /etc/mailname
        #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain
        #mailbox_size_limit = 0
        #recipient_delimiter = +
        #html_directory = /usr/share/doc/postfix/html
        message_size_limit = 30720000
        #virtual_alias_domains =
        ##virtual_alias_maps = hash:/etc/postfix/virtual
        #virtual_mailbox_base = /home/vmail
        ##luser_relay = webmaster
        #smtpd_sasl_type = dovecot
        #smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        #smtpd_sasl_authenticated_header = yes
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        #virtual_create_maildirsize = yes
        #virtual_maildir_extended = yes
        #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
        #virtual_transport = dovecot
        #dovecot_destination_recipient_limit = 1

    Here is my master.cf:

        #
        # Postfix master process configuration file. For details on the format
        # of the file, see the master(5) manual page (command: "man 5 master").
        #
        # Do not forget to execute "postfix reload" after editing this file.
        #
        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n       -       -       -       -       smtpd
        submission inet  n       -       -       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps     inet  n       -       -       -       -       smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n       -       -       -       -       qmqpd
        pickup     fifo  n       -       -       60      1       pickup
        cleanup    unix  n       -       -       -       0       cleanup
        qmgr       fifo  n       -       n       300     1       qmgr
        #qmgr      fifo  n       -       -       300     1       oqmgr
        tlsmgr     unix  -       -       -       1000?   1       tlsmgr
        rewrite    unix  -       -       -       -       -       trivial-rewrite
        bounce     unix  -       -       -       -       0       bounce
        defer      unix  -       -       -       -       0       bounce
        trace      unix  -       -       -       -       0       bounce
        verify     unix  -       -       -       -       1       verify
        flush      unix  n       -       -       1000?   0       flush
        proxymap   unix  -       -       n       -       -       proxymap
        proxywrite unix  -       -       n       -       1       proxymap
        smtp       unix  -       -       -       -       -       smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  -       -       -       -       -       smtp
          -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n       -       -       -       -       showq
        error      unix  -       -       -       -       -       error
        retry      unix  -       -       -       -       -       error
        discard    unix  -       -       -       -       -       discard
        local      unix  -       n       n       -       -       local
        virtual    unix  -       n       n       -       -       virtual
        lmtp       unix  -       -       -       -       -       lmtp
        anvil      unix  -       -       -       -       1       anvil
        scache     unix  -       -       -       -       1       scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        #
        # Many of the following services use the Postfix pipe(8) delivery
        # agent. See the pipe(8) man page for information about ${recipient}
        # and other message envelope options.
        # ====================================================================
        #
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        #
        maildrop   unix  -       n       n       -       -       pipe
          flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # ====================================================================
        #
        # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
        #
        # Specify in cyrus.conf:
        #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
        #
        # Specify in main.cf one or more of the following:
        #  mailbox_transport = lmtp:inet:localhost
        #  virtual_transport = lmtp:inet:localhost
        #
        # ====================================================================
        #
        # Cyrus 2.1.5 (Amos Gouaux)
        # Also specify in main.cf: cyrus_destination_recipient_limit=1
        #
        cyrus      unix  -       n       n       -       -       pipe
          user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
        #
        # ====================================================================
        # Old example of delivery via Cyrus.
        #
        #old-cyrus unix  -       n       n       -       -       pipe
        #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
        #
        # ====================================================================
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp       unix  -       n       n       -       -       pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail     unix  -       n       n       -       -       pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp      unix  -       n       n       -       -       pipe
          flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n      n       -       2       pipe
          flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman    unix  -       n       n       -       -       pipe
          flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
        #dovecot   unix  -       n       n       -       -       pipe
        #  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Here is what I'm using for /etc/postfix/sasl/smtpd.conf:

        log_level: 7
        pwcheck_method: saslauthd
        pwcheck_method: auxprop
        mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5
        allow_plaintext: true
        auxprop_plugin: mysql
        sql_hostnames: 127.0.0.1
        sql_user: xxxxx
        sql_passwd: xxxxx
        sql_database: maildb
        sql_select: select crypt from users where id = '%u'

    As you can see, I'm trying to use MySQL as my authentication method. The password in 'users' is set through the ENCRYPT() function. I also followed the method found in http://www.jimmy.co.at/weblog/?p=52 in order to redo /var/spool/postfix/var/run/saslauthd, as that seems to be a lot of people's problem, but that didn't help at all. Also, here is my /etc/default/saslauthd:

        START=yes
        DESC="SASL Authentication Daemon"
        NAME="saslauthd"

        # Which authentication mechanisms should saslauthd use? (default: pam)
        #
        # Available options in this Debian package:
        # getpwent  -- use the getpwent() library function
        # kerberos5 -- use Kerberos 5
        # pam       -- use PAM
        # rimap     -- use a remote IMAP server
        # shadow    -- use the local shadow password file
        # sasldb    -- use the local sasldb database file
        # ldap      -- use LDAP (configuration is in /etc/saslauthd.conf)
        #
        # Only one option may be used at a time. See the saslauthd man page
        # for more information.
        #
        # Example: MECHANISMS="pam"
        MECHANISMS="pam"

        MECH_OPTIONS=""
        THREADS=5
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    I had heard that changing MECHANISMS to MECHANISMS="mysql" might help, but obviously it didn't, as shown by the options listed above and also by trying it out anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to do to get this working =/ Anyone have any ideas?

    EDIT: Here is the error that is coming from auth.log... I don't know if this will help at all, but here you go:

        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'

    Read the article

  • Hosting StreamInsight applications using WCF

    - by gsusx
    One of the fundamental differentiators of Microsoft's StreamInsight compared to other Complex Event Processing (CEP) technologies is its flexible deployment model. In that sense, a StreamInsight solution can be hosted within an application or as a server component. This duality contrasts with most of the popular CEP frameworks in the current market, which are almost exclusively server based. Undoubtedly, the ability to embed a CEP engine in your applications opens new possibilities...(read more)

    Read the article
