Search Results

Search found 12714 results on 509 pages for 'db schema'.

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am re-building a news portal which already has a large number of visits every day. One of the major concerns when re-building this site was to maximize performance and speed. That said, we have done many things, from caching to all sorts of other measures, to ensure speed. Now, towards the end of the project, I am having a dilemma about where to store my site settings so as to least affect performance. The site settings will include things such as: Domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display settings such as the background color of specific DIVs and the default color for links, etc. As far as I know, I have 4 choices for storing all this info. Database: storing general settings in the DB and caching them may be a solution; however, I want to limit access to the database to only the necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles, etc. XML: I can store these settings in an XML file, but I have not done this sort of thing before, so I don't know what kind of problems -if any- I might face in the future. CONFIG: I can also store these settings in web.config. CLASS FILE: I can hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution. Currently, I am closest to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g. if somehow I miss a validation for something and it breaks the web.config, the whole site will go down. My concern basically is that I cannot foresee all the possible consequences of using any of the methods above (or is there any other?), so I was hoping to put this question to the more experienced people out here who can hopefully help me make my decision.
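    A minimal sketch of the web.config route with caching, assuming a standard ASP.NET appSettings section (the key names here are hypothetical):

        // Sketch: read settings from web.config once per app domain and cache them.
        // ASP.NET recycles the application when web.config changes, so the cached
        // values repopulate automatically after an edit.
        using System.Configuration;

        public static class SiteSettings
        {
            public static readonly string DefaultImgPath =
                ConfigurationManager.AppSettings["DefaultImgPath"];
            public static readonly string GoogleAnalyticsCode =
                ConfigurationManager.AppSettings["GoogleAnalyticsCode"];
        }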

    Read the article

  • Do entity collections and object sets implement IQueryable<T>?

    - by Chevex
    I am using Entity Framework for the first time and noticed that the entities object returns entity collections. DBEntities db = new DBEntities(); db.Users; //Users is an ObjectSet<User> User user = db.Users.Where(x => x.Username == "test").First(); //Is this getting executed in the SQL or in memory? user.Posts; //Posts is an EntityCollection<Post> Post post = user.Posts.Where(x => x.PostID == "123").First(); //Is this getting executed in the SQL or in memory? Do both ObjectSet and EntityCollection implement IQueryable? I am hoping they do so that I know the queries are getting executed at the data source and not in memory. EDIT: So apparently EntityCollection does not while ObjectSet does. Does that mean I would be better off using this code? DBEntities db = new DBEntities(); User user = db.Users.Where(x => x.Username == "test").First(); //Is this getting executed in the SQL or in memory? Post post = db.Posts.Where(x => (x.PostID == "123")&&(x.Username == user.Username)).First(); // Querying the object set instead of the entity collection. Also, what is the difference between ObjectSet and EntityCollection? Shouldn't they be the same? Thanks in advance!
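    For reference, a sketch of the difference (assuming, as in the question, that Posts carries the Username and PostID columns): ObjectSet<T> implements IQueryable<T>, so its Where/First compose into the store query, while EntityCollection<T> is only IEnumerable<T>, so filtering it runs in memory after the whole collection has loaded.

        DBEntities db = new DBEntities();

        // Executed in SQL: ObjectSet<User> is IQueryable<User>.
        User user = db.Users.Where(x => x.Username == "test").First();

        // Executed in SQL as a single query against the Posts set:
        Post post = db.Posts
                      .Where(x => x.PostID == "123" && x.Username == user.Username)
                      .First();

        // Alternative: EntityCollection exposes CreateSourceQuery(), which returns
        // an ObjectQuery (IQueryable) so the filter can still run at the source:
        Post post2 = user.Posts.CreateSourceQuery()
                               .Where(x => x.PostID == "123")
                               .First();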

    Read the article

  • Problem with LINQ query

    - by Niels Bosma
    The following works fine:

        (from e in db.EnquiryAreas
         from w in db.WorkTypes
         where w.HumanId != null && w.SeoPriority > 0
            && e.HumanId != null && e.SeoPriority > 0
            && db.Enquiries.Where(f => f.WhereId == e.Id
                && f.WhatId == w.Id
                && f.EnquiryPublished != null
                && f.StatusId != EnquiryMethods.STATUS_INACTIVE
                && f.StatusId != EnquiryMethods.STATUS_REMOVED
                && f.StatusId != EnquiryMethods.STATUS_REJECTED
                && f.StatusId != EnquiryMethods.STATUS_ATTEND).Any()
         select new { EnquiryArea = e, WorkType = w });

    But this:

        (from e in db.EnquiryAreas
         from w in db.WorkTypes
         where w.HumanId != null && w.SeoPriority > 0
            && e.HumanId != null && e.SeoPriority > 0
            && EnquiryMethods.BlockOnSite(
                   db.Enquiries.Where(f => f.WhereId == e.Id && f.WhatId == w.Id)).Any()
         select new { EnquiryArea = e, WorkType = w });

    together with:

        public static IQueryable<Enquiry> BlockOnSite(IQueryable<Enquiry> linq)
        {
            return linq.Where(e => e.EnquiryPublished != null
                && e.StatusId != STATUS_INACTIVE
                && e.StatusId != STATUS_REMOVED
                && e.StatusId != STATUS_REJECTED
                && e.StatusId != STATUS_ATTEND);
        }

    gives me the following error: base {System.SystemException}: {"Method 'System.Linq.IQueryable`1[X.Enquiry] BlockOnSite(System.Linq.IQueryable`1[X.Enquiry])' has no supported translation to SQL."}
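    A common workaround (a sketch, not from the original thread) is to expose the filter as an expression tree rather than a method call, so the provider can inline it. This pattern generally works in LINQ to SQL, which evaluates locally captured expressions; providers that cannot usually need a helper library such as LINQKit:

        // Sketch: return the predicate as an Expression<Func<...>> so it can be
        // composed into the query tree instead of appearing as an opaque call.
        public static Expression<Func<Enquiry, bool>> BlockOnSitePredicate()
        {
            return e => e.EnquiryPublished != null
                     && e.StatusId != STATUS_INACTIVE
                     && e.StatusId != STATUS_REMOVED
                     && e.StatusId != STATUS_REJECTED
                     && e.StatusId != STATUS_ATTEND;
        }

        // Usage: capture the predicate in a local first, then compose with Where.
        var blockOnSite = EnquiryMethods.BlockOnSitePredicate();
        var q = from e in db.EnquiryAreas
                from w in db.WorkTypes
                where w.HumanId != null && w.SeoPriority > 0
                   && e.HumanId != null && e.SeoPriority > 0
                   && db.Enquiries.Where(f => f.WhereId == e.Id && f.WhatId == w.Id)
                                  .Where(blockOnSite)
                                  .Any()
                select new { EnquiryArea = e, WorkType = w };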

    Read the article

  • how to filter files from the root "classes" and "test-classes" folders in Eclipse?

    - by Kidburla
    I am using ClearCase with my application, which generates a whole load of ".copyarea.db" files (one in every folder). These cause conflicts when publishing to Tomcat, as Eclipse will bundle the "classes" and "test-classes" folders into one JAR (not sure why it does this - there is no need to have test classes available on the application server). Any folders with the same names will have a separate .copyarea.db in the classes and test-classes branches. I managed to get around this problem in general by adding ".copyarea.db" to the filtered resources on the Java->Compiler->Building->Output Folder preference page. This stops the file appearing in the compiled output (package/class folders), which covers the vast majority of cases. However, there remains the problem of the root folders, i.e. "target/classes/.copyarea.db" and "target/test-classes/.copyarea.db". These files are not filtered, as they are not part of the compile task. Just deleting the files manually doesn't help either, as Eclipse expects to find them and complains when it can't. How can I exclude these ".copyarea.db" files from the root "classes" and "test-classes" folders?

    Read the article

  • Database doesn't update using TransactionScope

    - by Dissonant
    I have a client trying to communicate with a WCF service in a transactional manner. The client passes some data to the service and the service adds the data to its database accordingly. For some reason, the new data the service submits to its database isn't being persisted. When I have a look at the table data in the Server Explorer no new rows are added... Relevant code snippets are below: Client static void Main() { MyServiceClient client = new MyServiceClient(); Console.WriteLine("Please enter your name:"); string name = Console.ReadLine(); Console.WriteLine("Please enter the amount:"); int amount = int.Parse(Console.ReadLine()); using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required)) { client.SubmitData(amount, name); transaction.Complete(); } client.Close(); } Service Note: I'm using Entity Framework to persist objects to the database. [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)] public void SubmitData(int amount, string name) { DatabaseEntities db = new DatabaseEntities(); Payment payment = new Payment(); payment.Amount = amount; payment.Name = name; db.AddToPayment(payment); //add to Payment table db.SaveChanges(); db.Dispose(); } I'm guessing it has something to do with the TransactionScope being used in the client. I've tried all combinations of db.SaveChanges() and db.AcceptAllChanges() as well, but the new payment data just doesn't get added to the database!
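    One thing worth checking (an inference, since the binding configuration isn't shown in the question): for the client's TransactionScope to flow into the service, transaction flow must be enabled on both the operation contract and the binding; otherwise the service silently runs in its own transaction. A sketch:

        // Sketch: the ambient client transaction only reaches the service when the
        // operation allows flow AND the binding propagates it.
        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            [TransactionFlow(TransactionFlowOption.Allowed)] // or Mandatory
            void SubmitData(int amount, string name);
        }

        // ...and in config, a binding that supports transaction flow, e.g.:
        // <wsHttpBinding>
        //   <binding name="txBinding" transactionFlow="true" />
        // </wsHttpBinding>
        // (basicHttpBinding cannot flow transactions at all.)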

    Read the article

  • Convert charset in mysql query

    - by Yousf
    Hi, I have a question about converting charsets from inside a MySQL query. I have two databases: one for the website (Joomla), the other for the forum (IPB). I am querying from inside Joomla, which by default has "SET NAMES UTF8". I want to query a table inside the forum database called "ibf_topics". This table has latin1 encoding. I do the following to select anything from the non-UTF-8 table:

        //convert connection to handle latin1.
        $query = "SET NAMES latin1";
        $db->setQuery($query);
        $db->query();

        $query = "select id, title from other_database.ibf_topics";
        $db->setQuery($query);
        $db->query();
        //read result into an array.

        //return connection to handle UTF8.
        $query = "SET NAMES UTF8";
        $db->setQuery($query);
        $db->query();

    After that, when I want to use the selected title, I use the following:

        echo iconv("CP1256", "UTF-8", $topic['title'])

    The question is: is there any way to avoid all this hassle? For now, I can't change the forum database to UTF8 and I can't change the Joomla database to latin1 :S
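    One possible shortcut (a sketch; it assumes the latin1-labelled column actually holds CP1256 bytes, as the iconv() call suggests): let MySQL reinterpret and convert the text inside the SELECT itself, so the connection can stay in UTF8 and both the SET NAMES round trips and the iconv() call go away:

        -- Sketch: strip the (wrong) latin1 label, reinterpret the bytes as cp1256,
        -- and let the utf8 connection translate them on the way out.
        SELECT id,
               CONVERT(CAST(title AS BINARY) USING cp1256) AS title
        FROM other_database.ibf_topics;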

    Read the article

  • Linq Scope Problem + Reduce Repeated Code

    - by Tom Gullen
    If the parameter is -1, it needs to run a different query than if an ID was specified... how do I do this? I've tried initialising var q; outside the if block, but no luck!

        // Loads by Entry ID, or if -1, by latest entry
        private void LoadEntryByID(int EntryID)
        {
            IEnumerable<tblBlogEntry> q;
            if (EntryID == -1)
            {
                q = (from Blog in db.tblBlogEntries
                     orderby Blog.date descending
                     select new
                     {
                         Blog.ID, Blog.title, Blog.entry, Blog.date, Blog.userID,
                         Comments = (from BlogComments in db.tblBlogComments
                                     where BlogComments.blogID == Blog.ID
                                     select BlogComments).Count(),
                         Username = (from Users in db.yaf_Users
                                     where Users.UserID == Blog.userID
                                     select new { Users.DisplayName })
                     }).FirstOrDefault();
            }
            else
            {
                q = (from Blog in db.tblBlogEntries
                     where Blog.ID == EntryID
                     select new
                     {
                         Blog.ID, Blog.title, Blog.entry, Blog.date, Blog.userID,
                         Comments = (from BlogComments in db.tblBlogComments
                                     where BlogComments.blogID == Blog.ID
                                     select BlogComments).Count(),
                         Username = (from Users in db.yaf_Users
                                     where Users.UserID == Blog.userID
                                     select new { Users.DisplayName })
                     }).SingleOrDefault();
            }
            if (q == null)
            {
                this.Loaded = false;
            }
            else
            {
                this.ID = q.ID;
                this.Title = q.title;
                this.Entry = q.entry;
                this.Date = (DateTime)q.date;
                this.UserID = (int)q.userID;
                this.Loaded = true;
                this.AuthorUsername = q.Username;
            }
        }

    My main aim is to reduce repeated code.
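    One way out (a sketch using a hypothetical BlogEntryView class, since an anonymous type cannot be declared ahead of the if block): project into a named type and fold the two cases into a single query:

        // Hypothetical DTO so the projection has a type that can be declared up front.
        class BlogEntryView
        {
            public int ID; public string Title; public string Entry;
            public DateTime? Date; public int? UserID;
            public int Comments; public string Username;
        }

        private void LoadEntryByID(int entryID)
        {
            // -1 means "latest entry"; otherwise match the ID. One query, no repetition.
            var q = (from blog in db.tblBlogEntries
                     where entryID == -1 || blog.ID == entryID
                     orderby blog.date descending
                     select new BlogEntryView
                     {
                         ID = blog.ID,
                         Title = blog.title,
                         Entry = blog.entry,
                         Date = blog.date,
                         UserID = blog.userID,
                         Comments = db.tblBlogComments.Count(c => c.blogID == blog.ID),
                         Username = db.yaf_Users.Where(u => u.UserID == blog.userID)
                                                .Select(u => u.DisplayName)
                                                .FirstOrDefault()
                     }).FirstOrDefault();

            this.Loaded = (q != null);
            if (q != null)
            {
                this.ID = q.ID; this.Title = q.Title; this.Entry = q.Entry;
                this.Date = (DateTime)q.Date; this.UserID = (int)q.UserID;
                this.AuthorUsername = q.Username;
            }
        }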

    Read the article

  • Could this be considered a well-written PHP5 class?

    - by Ben Dauphinee
    I have been learning OOP principles on my own for a while, and have taken a few cracks at writing classes. What I really need to know now is if I am actually using what I have learned correctly, or if I could improve as far as OOP is concerned. I have chopped a massive portion of code out of a class that I have been working on for a while now, and pasted it here. To all you skilled and knowledgeable programmers here I ask: am I doing it wrong?

        class acl extends genericAPI{
            // -- Copied from genericAPI class
            protected final function sanityCheck($what, $check, $vars){
                switch($check){
                    case 'set':
                        if(isset($vars[$what])){return(1);}else{return(0);}
                        break;
                }
            }
            // ---------------------------------

            protected $db = null;
            protected $dataQuery = null;

            public function __construct(Zend_Db_Adapter_Abstract $db, $config = array()){
                $this->db = $db;
                if(!empty($config)){$this->config = $config;}
            }

            protected function _buildQuery($selectType = null, $vars = array()){
                // Removed switches for simplicity sake
                $this->dataQuery = $this->db->select(
                )->from(
                    $this->config['table_users'],
                    array('tf' => '(CASE WHEN count(*) > 0 THEN 1 ELSE 0 END)')
                )->where(
                    $this->config['uidcol'] . ' = ?', $vars['uid']
                );
            }

            protected function _sanityRun_acl($sanitycheck, &$vars){
                switch($sanitycheck){
                    case 'uid_set':
                        if(!$this->sanityCheck('uid', 'set', $vars)){
                            throw new Exception(ERR_ACL_NOUID);
                        }
                        $vars['uid'] = settype($vars['uid'], 'integer');
                        break;
                }
            }

            private function user($action = null, $vars = array()){
                switch($action){
                    case 'exists':
                        $this->_sanityRun_acl('uid_set', $vars);
                        $this->_buildQuery('user_exists_idcheck', $vars);
                        return($this->db->fetchOne($this->dataQuery->__toString()));
                        break;
                }
            }

            public function user_exists($uid){
                return($this->user('exists', array('uid' => $uid)));
            }
        }

        $return = $acl_test->user_exists(1);

    Read the article

  • how to fetch more than 1000 entities, non-key-based?

    - by user291071
    If I should be approaching this problem through a different method, please suggest so. I am creating an item-based collaborative filter. I populate the db with the LinkRating2 class, and for each link there are more than 1000 users whose ratings I need to fetch to perform calculations, which I then use to create another table. So I need to fetch more than 1000 entities for a given link. For instance, let's say over 1000 users rated 'link1'; there will be over 1000 instances of this class for the given link property that I need to fetch. How would I complete this example?

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        query = LinkRating2.all()
        link1 = 'link string name'
        a = query.filter('link = ', link1)
        aa = a.fetch(1000)  ## how would I get more than 1000 for a given link1 as shown?

    The key-based example below (from another post) fetches over 1000, but I need a method that works for a filtered subset, not keys:

        class MyModel(db.Expando):
            @classmethod
            def count_all(cls):
                """ Count *all* of the rows (without maxing out at 1000) """
                count = 0
                query = cls.all().order('__key__')
                while count % 1000 == 0:
                    current_count = query.count()
                    if current_count == 0:
                        break
                    count += current_count
                    if current_count == 1000:
                        last_key = query.fetch(1, 999)[0].key()
                        query = query.filter('__key__ > ', last_key)
                return count
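    A sketch using query cursors (available in the App Engine Python SDK since 1.3.1), which page through an arbitrary filtered query without any key arithmetic:

        # Sketch: page through every LinkRating2 row for one link using cursors.
        from google.appengine.ext import db

        def fetch_all_for_link(link_value, batch_size=1000):
            results = []
            query = LinkRating2.all().filter('link = ', link_value)
            while True:
                batch = query.fetch(batch_size)
                results.extend(batch)
                if len(batch) < batch_size:
                    break          # last page reached
                # Resume exactly where the previous batch ended.
                query.with_cursor(query.cursor())
            return results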

    Read the article

  • if statement inside array : codeigniter

    - by ahmad
    Hello, I have this function to edit all fields that come from the form, and it works fine:

        function editRow($tableName,$id)
        {
            $fieldsData = $this->db->field_data($tableName);
            $data = array();
            foreach ($fieldsData as $key => $field) {
                $data[ $field->name ] = $this->input->post($field->name);
            }
            $this->db->where('id', $id);
            $this->db->update($tableName, $data);
        }

    Now I want to add a condition for the password field: if the field is empty, keep the old password. I did something like this:

        function editRow($tableName,$id)
        {
            $fieldsData = $this->db->field_data($tableName);
            $data = array();
            foreach ($fieldsData as $key => $field) {
                if ($data[ $field->name ] == 'password' && $this->input->post('password') == '' ) {
                    $data[ 'password' ] => $this->input->post('hide_password'),
                    //'password' => $this->input->post('hide_password'),
                } else {
                    $data[ $field->name ] => $this->input->post($field->name)
                }
            }
            $this->db->where('id', $id);
            $this->db->update($tableName, $data);
        }

    But I get this error: ( Parse error: syntax error, unexpected T_DOUBLE_ARROW in ... ). The HTML is something like this:

        <input type="text" name="password" value="">
        <input type="hidden" name="hide_password" value="$row->$password" />

    Umm, any help? Thanks.
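    The parse error comes from using the array-literal arrow (=>) where an ordinary assignment is needed; => is only valid inside array() literals. Note also that the condition should test $field->name, not $data[...]. A sketch of the corrected loop:

        // Sketch: plain assignments with =, and the field-name test fixed.
        foreach ($fieldsData as $key => $field) {
            if ($field->name == 'password' && $this->input->post('password') == '') {
                $data['password'] = $this->input->post('hide_password');
            } else {
                $data[$field->name] = $this->input->post($field->name);
            }
        }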

    Read the article

  • Recommended approach for error handling with PHP and MYSQL

    - by iama
    I am trying to capture database (MySQL) errors in my PHP web application. Currently, I see that there are functions like mysqli_error() and mysqli_errno() for capturing the last error that occurred. However, this still requires me to check for error occurrence using repeated if/else statements in my PHP code. You may check my code below to see what I mean. Is there a better approach to doing this? Or should I write my own code to raise exceptions and catch them in one single place? What is the recommended approach? Also, does PDO raise exceptions? Thanks.

        function db_userexists($name, $pwd, &$dbErr)
        {
            $bUserExists = false;
            $uid = 0;
            $dbErr = '';
            $db = new mysqli(SERVER, USER, PASSWORD, DB);
            if (!mysqli_connect_errno())
            {
                $query = "select uid from user where uname = ? and pwd = ?";
                $stmt = $db->prepare($query);
                if ($stmt)
                {
                    if ($stmt->bind_param("ss", $name, $pwd))
                    {
                        if ($stmt->bind_result($uid))
                        {
                            if ($stmt->execute())
                            {
                                if ($stmt->fetch())
                                {
                                    if ($uid)
                                        $bUserExists = true;
                                }
                            }
                        }
                    }
                    if (!$bUserExists)
                        $dbErr = $db->error();
                    $stmt->close();
                }
                if (!$bUserExists)
                    $dbErr = $db->error();
                $db->close();
            }
            else
            {
                $dbErr = mysqli_connect_error();
            }
            return $bUserExists;
        }
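    Yes - PDO can throw exceptions when configured with PDO::ERRMODE_EXCEPTION, which collapses the nested if/else checks into a single try/catch. A minimal sketch of the same function:

        // Sketch: with ERRMODE_EXCEPTION, every failing call throws a PDOException,
        // so error handling lives in exactly one place.
        function db_userexists($name, $pwd, &$dbErr)
        {
            $dbErr = '';
            try {
                $db = new PDO('mysql:host=' . SERVER . ';dbname=' . DB, USER, PASSWORD);
                $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                $stmt = $db->prepare('SELECT uid FROM user WHERE uname = ? AND pwd = ?');
                $stmt->execute(array($name, $pwd));
                return $stmt->fetchColumn() !== false;
            } catch (PDOException $e) {
                $dbErr = $e->getMessage();
                return false;
            }
        }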

    Read the article

  • Codeigniter - Active record - sql - complex join

    - by Jack
    I have a function that retrieves all tags from a table: function global_popular_tags() { $this->db->select('tags.*, COUNT(tags.id) AS count'); $this->db->from('tags'); $this->db->join('tags_to_work', 'tags.id = tags_to_work.tag_id'); $this->db->group_by('tags.id'); $this->db->order_by('count', 'desc'); $query = $this->db->get()->result_array(); return $query; } I have another table called 'work'. The 'work' table has a 'draft' column with values of either 1 or 0. I want the COUNT(tags.id) to take into account whether the work with the specific tag is in draft mode (1) or not. Say there are 10 pieces of work tagged with, for example, 'design'. The COUNT will be 10. But 2 of these pieces of work are in draft mode, so the COUNT should really be 8. How do I manage this?
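    A sketch of one way to handle it (the join columns tags_to_work.work_id, work.id and work.draft are assumptions about the schema): join through to the work table and exclude drafts before counting:

        // Sketch: count only tags attached to published (non-draft) work,
        // so 10 tagged pieces with 2 drafts yields a count of 8.
        function global_popular_tags()
        {
            $this->db->select('tags.*, COUNT(tags.id) AS count');
            $this->db->from('tags');
            $this->db->join('tags_to_work', 'tags.id = tags_to_work.tag_id');
            $this->db->join('work', 'work.id = tags_to_work.work_id');
            $this->db->where('work.draft', 0);
            $this->db->group_by('tags.id');
            $this->db->order_by('count', 'desc');
            return $this->db->get()->result_array();
        }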

    Read the article

  • how do I copy values from one table and insert them into another in the same database?

    - by mathew
    I am having a tough time doing this... I have created two tables, say table-1 and table-2, in the same database. What I want is to copy some values from table-1 and insert them into table-2. I have tried many ways, but it does not seem to work. Below is my code; can anyone tell me where I went wrong?

        $db = mysql_connect("localhost", "user", "pass") or die("Could not connect.");
        mysql_select_db("comdata",$db)or die(mysql_error());
        $resultb = mysql_query("SELECT * FROM table-2")or die(mysql_error());
        $row = mysql_fetch_array($resultb);
        $days = (strtotime(date("Y-m-d")) - strtotime($row['regtime'])) / (60 * 60 * 24);
        if($row > 0 && $days < 1){
            $person = $row['person'];
            $catogr = $row['catog'];
            $position = $row['position'];
            $location = $row['location'];
            $rank = $row['rank'];
            mysql_close($db);
        }else{
            $db = mysql_connect("localhost", "user", "pass") or die("Could not connect.");
            mysql_select_db("comdata",$db)or die(mysql_error());
            $result = mysql_query("SELECT * FROM table-1 WHERE regtime = DATE(NOW()) ORDER BY rank ASC LIMIT 1;")or die(mysql_error());
            $row = mysql_fetch_array($result);
            $person = $row['person'];
            $catogr = $row['catog'];
            $position = $row['position'];
            $location = $row['location'];
            $rank = $row['rank'];
            mysql_query("INSERT INTO table-2 (regtime,person,catog,position,location,rank) VALUES(NOW(),'$person','$catogr','$position','$location','$rank')");
            mysql_close($db);
        }
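    If the goal is simply to copy the matching row across, a single INSERT ... SELECT can do it in one statement (a sketch; note that MySQL identifiers containing hyphens, such as table-1, must be quoted with backticks - unquoted, "table-1" parses as a subtraction):

        -- Sketch: copy today's top-ranked row from table-1 into table-2 directly.
        INSERT INTO `table-2` (regtime, person, catog, position, location, rank)
        SELECT NOW(), person, catog, position, location, rank
        FROM `table-1`
        WHERE regtime = DATE(NOW())
        ORDER BY rank ASC
        LIMIT 1;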

    Read the article

  • how to insert Excel 2003 values into a SQL Server 2005 database?

    - by vas
    Are there any rules/guidelines for data from XLS sheets to be inserted into a SQL DB? I have a group of Excel templates in 2005. Each concerned cell in an Excel template is named. When Excel sheets are filled, saved and submitted, the values are transferred to the database. Excel sheets have names for the various cells that are to be filled by the user. Ex: for the total amount of milk at the beginning of a given month, there is an Excel cell named "mtsBpiPTR180"; for the total amount of milk at the end of a given month, there is an Excel cell named "mtsEpiPTR180". I have added 2 new cells, named "mtsBpiPTR180PA" and "mtsEpiPTR180PA". Now I try to upload the Excel file, but I am unable to see my filled data from "mtsBpiPTR180PA" and "mtsEpiPTR180PA" in the related DB table. The above 2 are empty in the DB table, even though I have filled them in and successfully filed the Excel sheets. Now, no matter how much I search in the DB/stored procs, I am unable to find the actual stored proc, or how the data from the Excel sheet is inserted into the tables. So I was wondering: are there any rules/guidelines for data from XLS sheets to be inserted into a SQL DB?
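    For reference, one common way to pull Excel 2003 data into SQL Server 2005 server-side is OPENROWSET with the Jet provider (a sketch; the file path and sheet name are hypothetical, and 'Ad Hoc Distributed Queries' must be enabled on the server - the app in question may well do this differently):

        -- Sketch: read a worksheet from an .xls file as if it were a table.
        SELECT *
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        'Excel 8.0;Database=C:\templates\report.xls;HDR=YES',
                        'SELECT * FROM [Sheet1$]');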

    Read the article

  • Why won't the following PDO transaction work in PHP?

    - by jfizz
    I am using PHP version 5.4.4 and a MySQL database using InnoDB. I had been using PDO for a while without utilizing transactions, and everything was working flawlessly. Then I decided to try to implement transactions, and I keep getting Internal Server Error 500. The following code worked for me (it doesn't contain transactions):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    Then I attempted to utilize transactions with the following code (which doesn't work):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->beginTransaction();
            $dbh->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
            $dbh->commit();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    When I run the previous code, I get an Internal Server Error 500. Any help would be greatly appreciated! Thanks!
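    The likely culprit (an inference, not from the original post): PDO::beginTransaction() returns a boolean, not a statement object, so the subsequent $dbh->prepare() is a fatal call on a non-object - hence the 500. A sketch of the corrected version:

        // Sketch: begin/commit/rollBack belong to the PDO connection;
        // prepare() returns the statement to bind and execute.
        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $DB->beginTransaction();
            $stmt = $DB->prepare("SELECT * FROM user WHERE username = :test");
            $stmt->bindValue(':test', $test, PDO::PARAM_STR);
            $stmt->execute();
            $DB->commit();
        } catch (Exception $e) {
            $DB->rollBack();   // roll back on the connection, not the statement
            echo "an error has occurred";
        }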

    Read the article

  • How can I add file locations to a database after they are uploaded using a Perl CGI script?

    - by Paul K
    I have a CGI program I have written using Perl. One of its functions is to upload pics to the server. All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db? I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases. Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db. I am using: a Perl CGI script, a MySQL db, a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
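    A sketch of recording the location at upload time with DBI (the table and column names are hypothetical):

        # Sketch: after the uploaded file has been written to disk,
        # store its name and path in MySQL.
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                               'user', 'password', { RaiseError => 1 });

        sub record_upload {
            my ($filename, $directory) = @_;
            my $sth = $dbh->prepare(
                'INSERT INTO pictures (file_name, file_path) VALUES (?, ?)');
            $sth->execute($filename, "$directory/$filename");
        }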

    Read the article

  • SQLite3 database doesn't actually insert data - iPhone

    - by user334934
    I'm trying to add a new entry into my database, but it's not working. There are no errors thrown, and the code that is supposed to be executed after the insertion runs, meaning there are no errors with the query. But still, nothing is added to the database. I've tried both prepared statements and the simpler sqlite3_exec, and it's the same result. I know my database is being loaded because the info for the tableview (and subsequent tableviews) is loaded from the database. The connection isn't the problem. Also, logging sqlite3_last_insert_rowid(db) returns the correct number for the next row. But still, the information is not saved. Here's my code:

        db = [Database openDatabase];
        NSString *query = [NSString stringWithFormat:@"INSERT INTO lists (name) VALUES('%@')", newField.text];
        NSLog(@"Query: %@",query);
        sqlite3_stmt *statement;
        if (sqlite3_prepare_v2(db, [query UTF8String], -1, &statement, nil) == SQLITE_OK) {
            if(sqlite3_step(statement) == SQLITE_DONE){
                NSLog(@"You created a new list!");
                int newListId = sqlite3_last_insert_rowid(db);
                MyList *newList = [[MyList alloc] initWithName:newField.text idNumber:[NSNumber numberWithInt:newListId]];
                [self.listArray addObject:newList];
                [newList release];
                [self.tableView reloadData];
                sqlite3_finalize(statement);
            } else {
                NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(db));
            }
        }
        [Database closeDatabase:db];

    Again, no errors have been thrown. The prepare and step statements return SQLITE_OK and SQLITE_DONE respectively, yet nothing happens. Any help is appreciated!
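    A frequent cause of exactly these symptoms on the iPhone (an assumption here, since the openDatabase code isn't shown): the app writes to one copy of the database - typically a working copy in the simulator/device Documents directory - while you inspect a different copy, such as the original in your project folder, which never changes. A sketch of managing a single writable copy in Documents (the "app.sqlite" filename is hypothetical):

        // Sketch: copy the bundled database into Documents on first run,
        // then always open (and inspect) the writable copy.
        + (sqlite3 *)openDatabase {
            NSString *docs = [NSSearchPathForDirectoriesInDomains(
                NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
            NSString *dbPath = [docs stringByAppendingPathComponent:@"app.sqlite"];

            NSFileManager *fm = [NSFileManager defaultManager];
            if (![fm fileExistsAtPath:dbPath]) {
                NSString *bundled = [[NSBundle mainBundle] pathForResource:@"app"
                                                                    ofType:@"sqlite"];
                [fm copyItemAtPath:bundled toPath:dbPath error:NULL];
            }

            sqlite3 *db = NULL;
            sqlite3_open([dbPath UTF8String], &db);
            return db;
        }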

    Read the article

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this as I cannot find documentation on how to do it anywhere else; I hope others find it useful.

    For starters, this was not an easy task for me to figure out. I really would like to know why it is so cumbersome to do something that seems like a lot of people would need to do, but that's for another day :)

    The dropdown I wanted to add was for 'Account Status', which would display whether the account is 'Enabled' or 'Disabled' in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface.

    The first thing I had to do was create the attribute itself. This is done by going to Administration -> Schema Management from the FIM 2010 portal. Once here, click on All Attributes; what is listed here are all attributes and their associated Resource Types in FIM. To create the 'AccountStatus' attribute, click on New. As shown below, enter 'AccountStatus' with no spaces for the System Name and 'Account Status' for the Display Name. The Data Type is going to be 'Indexed String'. Click Next. Leave everything on the Localization tab at the defaults and click Next. On the Validation tab, enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values 'Enabled' and 'Disabled'. Click Finish and then Submit to complete adding the attribute.

    The next step involves associating the attribute with a resource type. This is called 'binding' the attribute. From the Schema Management page, click on All Bindings. On the page that comes up, click on New. Enter 'User' for the Resource Type and 'Account Status' for the Attribute Type; this essentially binds the Account Status attribute to the 'User' Resource Type. Click Next. On the 'Attribute Override' tab, type 'Account Status' in the Display Name field and click Next. On the 'Localization' tab, click Next. On the 'Validation' tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete.

    Now that the attribute and the binding are complete, you have to give users permission to see the attribute on the user edit page. Go to Administration -> Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the 'Target Resources' tab and look at the section named Resource Attributes. Type in the 'Account Status' attribute at the end and check it with the validator. Once done, click OK to save the changes.

    Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for user editing. Go to Administration -> Resource Control Display Configuration. From here, navigate until you find the RCDC named Configuration for User Editing RCDC and click on it. The first step is to export the configuration data file: click the Export configuration link and save the file to your desktop or another folder.

    Find the file you just exported and open it in your XML editor of choice (I use Notepad, but anything will work). Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. I used EmployeeType as my example. Copy the control from the beginning tag <my:Control...> to the ending tag </my:Control>. Now paste what you copied wherever you want it within the form, between two other controls; I chose to place the 'Account Status' field after the 'Account Name' field. After you paste the control, modify it so it matches the XML at the end of this post - notice where AccountStatus appears in the XML, which is how you specify the attribute the control is bound to. Once you are done modifying it, save the file and make sure it keeps the .xml extension.

    Now go back to the Configuration for User Editing screen and look at the section named 'Configuration Data'. Click the 'Browse' button, find the XML file you just modified, and choose it. Click OK at the bottom of the window and you are done! Now when you click on a user's name in the FIM Portal, you should see the newly added dropdown box.

    Later I will post more about this dropdown, specifically on how to automate actually 'disabling' the account in the data source through the FIM Workflows and MAs.

        <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList"
            my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}"
            my:Description="{Binding Source=schema, Path=AccountStatus.Description}"
            my:RightsLevel="{Binding Source=rights, Path=AccountStatus}">
          <my:Properties>
            <my:Property my:Name="ValuePath" my:Value="Value"/>
            <my:Property my:Name="CaptionPath" my:Value="Caption"/>
            <my:Property my:Name="HintPath" my:Value="Hint"/>
            <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/>
            <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/>
          </my:Properties>
        </my:Control>

    Read the article

  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow-on from my previous post, where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 wanting to upgrade to 11.1.1.4. In this example I'm just upgrading SOA Suite on OEL 64-bit, but the steps will be the same; some of the downloads may be different based on your environment. To upgrade to 11.1.1.4 you need access to http://support.oracle.com, as this is where the downloads reside. Oracle provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html). The high-level steps to upgrade are: download the software, shut down your SOA environment, upgrade WLS to 11.1.1.4, upgrade SOA Suite to 11.1.1.4, upgrade OSB to 11.1.1.4, and upgrade the MDS schemas.

    Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads. If you are using 64-bit, use the generic version. The downloads are found at the following location - http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC. For the purpose of this post I downloaded the following patches: 11060985 - WLS Server Generic; 11060960 - SOA Suite; 11061005 - OSB Suite. You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have provided the link from Oracle Support: 11060956 - RCU.

    Make sure you have set the Java executable in your PATH, e.g. export PATH=$JAVA_HOME/bin:$PATH, and make sure your whole WebLogic environment has been shut down before performing the upgrade.

    Extract the WLS patch 11060985 to a temporary directory and start the installer: java -jar wls1034_upgrade_generic.jar. Please note: if you are not running 64-bit, the upgrade executable will be just a bin file which you can execute directly. Choose the right Oracle home for your WebLogic Server install. On the Register for Security Updates screen you can enter your details or just click Next; if you do not enter details, confirm that you don't want to receive these updates. Select the products you want to upgrade and select Next. It is recommended that you accept the defaults. Confirm the directories that will be upgraded. The upgrade of WLS has now been completed.

    Extract both your SOA downloads to a temporary directory and run the installer found in Disk1: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre. Please note that the Java location and version may be different in your environment. Skip the Software Updates. Ensure your system meets the prerequisites. Set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server - since you are upgrading from 11.1.1.x, you will be on WebLogic. Start the install. Once the upgrade of SOA Suite has completed, accept the defaults to finish.

    In my environment I have OSB installed, so I need to upgrade this next. If you don't have OSB, you can go straight to the DB schema updates below. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre. Skip the software updates. Select the Oracle home for your environment. Accept the warning to continue the upgrade. Point to the location of your WebLogic Server installation. Install the OSB upgrade. Once the upgrade has completed, accept the defaults.

    Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed. Execute the following command to update the MDS schema. Please note: for my examples I have the context set to DEV - yours may be different. This means that all my schemas are prefixed by DEV.

        ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS

    You will be asked for the passwords for sys and the schema:

        Enter the database administrator password for "sys":
        Enter the schema password for schema user "DEV_MDS":

    Change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant for SOA Suite is installed, and execute the following command to update the SOA and BAM schemas:

        ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA

    To check that the upgrade has installed correctly, run the following SQL as sysdba:

        SELECT owner, version, status FROM schema_version_registry;

        OWNER                          VERSION                        STATUS
        ------------------------------ ------------------------------ -----------
        DEV_MDS                        11.1.1.4.0                     VALID
        DEV_SOAINFRA                   11.1.1.4.0                     VALID

    Don't stress if the versions are not all sitting at 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB. The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in Figure 1 (Memcached API Implementation for InnoDB), Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. With the Memcached daemon running in the same process space, users get very low latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web/application servers can remotely access the Memcached/InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second (Figure 2: Over 9x Faster INSERT Operations). The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees. You can check out the latest Memcached/InnoDB developments and benchmarks here, and learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster. Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, ensuring complete synchronization between the database and cache (Figure 3: Memcached API Implementation with MySQL Cluster). Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached driver for NDB (which is part of the same process). 3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care - it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data. By default, every key/value is written to the same table, with each key/value pair stored in a single row - thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion. Download the Guide to MySQL and NoSQL to learn more about the NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API in our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
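    As a rough illustration (a sketch, not from the article): because the plugin speaks the standard memcached text protocol, even telnet works as a client. With MySQL's sample InnoDB plugin configuration, the pair below would land in the demo container table test.demo_test (key column c1, value column c2):

        $ telnet localhost 11211
        set mykey 0 0 5
        hello
        STORED
        get mykey
        VALUE mykey 0 5
        hello
        END

        -- and the same row is then visible through SQL:
        -- SELECT c1, c2 FROM test.demo_test WHERE c1 = 'mykey';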

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator.

    Outline of the problem: trailing delimiters are empty values at the end of an object in an HL7 ER7-formatted message. Examples - empty field: NTE|P| or NTE|P||; empty component: ORC|1|725^; empty subcomponent: ORC|1|||||27&; empty repeat: OBR|1||||||||027~. Trailing delimiters indicate that the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes -> "". The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow them. NOTE: all schemas always allow trailing delimiters in the MSH segment.

    1. Using party identifiers. MSH3.1 - receive/inbound processing: using this value as a party allows you to configure the system to allow inbound trailing delimiters. MSH5.1 - send/outbound processing: using this value as a party allows you to configure the system to allow outbound trailing delimiters. Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters, you also need to configure the send side to allow them. Add the appropriate parties to the BizTalk parties list from these two fields in your message stream, then open the BizTalk HL7 Configuration tool and, for each party, check the "Allow trailing delimiters (separators)" check box on the Validation tab. Disadvantage - each MSH3.1 and MSH5.1 value must be represented in the parties list and configured. Advantage - granular control over system behavior for each inbound/outbound system.

    2. Using instance properties of a pipeline used in a send port or receive location. Open the BizTalk Server Administration console and locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline. Open the properties; to the right of the selected pipeline, locate the [...] ellipsis button. In the property list, locate the "TrailingDelimiterAllowed" property and set it to True. Advantage - all messages through a particular send port or receive location will allow trailing delimiters. Disadvantage - you must configure each send port or receive location, and there is no granular control over which remote parties will send or receive messages with trailing delimiters.

    3. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component. Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or disassembler, set the component property "TrailingDelimitersAllowed" to True, compile and deploy the custom pipeline, and use it instead of the standard pipeline for all HL7 message processing. Advantage - all messages using the custom pipeline will automatically allow trailing delimiters. Disadvantage - requires custom coding and development to create and deploy the custom pipeline, and there is no granular control over which remote parties will send or receive messages with trailing delimiters.

    What does a trailing delimiter do to the XML schema? Allowing trailing delimiters does not have the impact often expected on the actual XML schema. The schema reproduces the message with no data loss, so the message, when represented in XML, must contain the extra fields in order to reproduce the outbound message. Thus, a trailing delimiter results in an empty XML field; trailing delimiters are not stripped from the inbound message. Example: <PID_21>44172</PID_21><PID_21>9257</PID_21> -> the original maximum number of repeats; <PID_21></PID_21> -> the empty repeated field. Allowing trailing delimiters does not remove them from the message; it simply suppresses the check that would otherwise cause a message with trailing delimiters to fail parsing.

    When can you not fix the problem by enabling trailing delimiters? Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem; the schema must be extended to accommodate the empty message content. Examples - extra field: NTE|P|||| (only 4 fields in the NTE segment; the 4th field exists, but is empty). Extra component: PID|1|1523|47^^^^^^^ (only 5 components in a CX data type; the 5th component exists, but is empty). Extra subcomponent: ORC|1|||||27&& (only 2 subcomponents in a CQ data type; the 3rd subcomponent is empty, but exists). Extra repeat: PID|1||||||||||||||||||||4419~5217~ (only 2 repeats allowed for the field "Mother's identifier"; the repeat is empty, but exists).

    In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type. Field: add a field of type ST, with a suitable name, to the end of the segment in segments_nnn.xsd. Component: create a new custom CX data type (i.e. CX_XtraComp) in datatypes_nnn.xsd, add a new component to the custom CX data type, and update the field in segments_nnn.xsd to use the custom data type instead of the standard one. Subcomponent: create a new custom CQ data type that accepts an additional TS value at the end of the data type, create a custom TQ data type that uses the new custom CQ data type as the first subcomponent, and modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type. Repeat: modify the field definition for PID.21 in segments_nnn.xsd to allow more repeats in the field.

    Read the article

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL. The precision of SmallDateTime is one concept that confuses a lot of people, proven by the many messages I receive every day relating to this subject. Let me start with the question: what is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. Now, if you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER - Precision of SMALLDATETIME.

    A Social Media Question. Since the rise of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and restating the questions he asked me. Let us see if we can help him with his question:

        CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    When the above script is run, we do not get what was expected: the row which was inserted last was expected to be returned as the first row in the result set, since the ORDER BY is descending. Side note: because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY.

    My Initial Reaction. My initial reaction was as follows. 1) DataType DateTime2: if finer precision is expected from the column which stores date and time, it should not be smalldatetime. The precision of smalldatetime is one minute (Read Here); for finer precision, use the DateTime or DateTime2 data type. Here is the code which includes the above suggestion:

        CREATE TABLE #temp (name VARCHAR(100), registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    2) Tie Breaker Identity: there is always the possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as the tie breaker.

        CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100),registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY ID DESC
        GO
        DROP TABLE #temp
        GO

    Those two were the quick suggestions I provided. It is not necessary to use both pieces of advice: one could use only the DATETIME datatype, or the identity column could have a datatype of BIGINT, or another tie breaker entirely.

    An Alternate NO Solution. In the Facebook thread, this was also discussed as one of the solutions:

        CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT name, registered,
            ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    However, I believe this is not the solution, and it can be further misleading if used on a production server. Here is an example of why it is not a good solution:

        CREATE TABLE #temp (name VARCHAR(100) NOT NULL,registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        -- Before Index
        SELECT name, registered,
            ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        -- Create Index
        ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC)
        GO
        -- After Index
        SELECT name, registered,
            ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    Now let us examine the result set. You will notice that an index created on the base table is (indeed) a schema change, and it can affect the result set. As you can see, an index can change the result set, so this method is not a reliable way to get the latest inserted rows.

    No Schema Change Requirement. After giving these two suggestions, I was waiting for feedback from the asker. However, the asker's requirement is that there can't be any schema change, because the application is used by many other applications. I validated again, and of course, the requirement is no schema change at all: no adding columns, no changing the datatypes of any other columns, and no other leeway. This is indeed an interesting question. I personally can't think of any solution I could provide him given the requirement of no schema change. Can you think of any other solution to this?

    Need of a Database Designer. This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has tables with unnecessary columns and performance problems. While working as a developer lead in my earlier jobs, I saw developers adding columns to tables without anybody's consent and retrieving them as SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first. Do you have any suggestions for the above question?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Recipient address rejected: User unknown in local recipient table;

    - by Thufir
    I've gone through the guide for mailman with some difficulty, but seem to be nearly there. I'm able to navigate to the mailman web GUI, create lists and subscribe. I just subscribe my local FQDN, so [email protected] for testing purposes. This FQDN only works on localhost. However, e-mails to the list address, in this case [email protected], are rejected:

        root@dur:~# tail /var/log/mail.log
        Aug 28 08:28:43 dur postfix/master[12208]: terminating on signal 15
        Aug 28 08:28:44 dur postfix/postfix-script[12322]: starting the Postfix mail system
        Aug 28 08:28:44 dur postfix/master[12323]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:28:46 dur postfix/postfix-script[12332]: stopping the Postfix mail system
        Aug 28 08:28:46 dur postfix/master[12323]: terminating on signal 15
        Aug 28 08:28:47 dur postfix/postfix-script[12437]: starting the Postfix mail system
        Aug 28 08:28:47 dur postfix/master[12438]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:29:29 dur postfix/smtpd[12460]: connect from localhost[127.0.0.1]
        Aug 28 08:29:30 dur postfix/smtpd[12460]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur.bounceme.net>
        Aug 28 08:29:33 dur postfix/smtpd[12460]: disconnect from localhost[127.0.0.1]

        root@dur:~# ll /var/lib/mailman/data/
        total 56
        drwxrwsr-x 2 root list  4096 Aug 28 08:28 ./
        drwxrwsr-x 8 root list  4096 Aug 27 19:58 ../
        -rw-r--r-- 1 root list     0 Aug 28 04:36 aliases
        -rw-r--r-- 1 root list 12288 Aug 28 04:36 aliases.db
        -rw-r--r-- 1 root list 12288 Aug 28 08:28 aliases.db.db
        -rw-r----- 1 root list    41 Aug 27 21:04 creator.pw
        -rw-rw-r-- 1 root list    10 Aug 27 19:58 last_mailman_version
        -rw-r--r-- 1 root list 14100 Oct 19  2011 sitelist.cfg

        root@dur:~# grep alias /etc/postfix/main.cf
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        alias_database = hash:/var/lib/mailman/data/aliases.db
        #alias_database = hash:/etc/aliases

        root@dur:~# postconf -n
        alias_database = hash:/var/lib/mailman/data/aliases.db
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = $myhostname localhost.$mydomain localhost $mydomain
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.example.com
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    Why is this e-mail rejected? It seems it may be related to the alias_maps and alias_database settings in postfix.
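    One detail worth checking (an inference from the listing above, not a confirmed fix): /var/lib/mailman/data/aliases is zero bytes, so the alias map Postfix consults for list addresses is empty - which would produce exactly this "User unknown in local recipient table" rejection. Mailman can regenerate and compile the aliases, e.g. (a sketch assuming the Debian/Ubuntu mailman layout):

        /usr/lib/mailman/bin/genaliases
        postalias /var/lib/mailman/data/aliases
        postfix reload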

    Read the article

  • [JAVA] How to make my Oracle update/insert actions through Java faster?

    - by gunbuster363
    Hi everyone, I am facing a problem in my company: our program's speed is not fast enough. To be more specific, we are a telecommunications company, and this program handles the call/internet-surfing transactions made by every mobile phone user in our city. Because the amount of download activity by iPhone users is just too much, our program cannot handle it fast enough. The situation is, the number of transactions made by users is double the number of transactions processed by our program. Most of the running time of the program is dominated by DB transactions. I've searched the internet and browsed some sites (for example: http://www.javaperformancetuning.com/tips/rawtips.shtml) talking about Java performance with DBs, but I cannot find a suggestion suitable for us. These pieces of advice are either not applicable or already used, for instance: 1) Use prepared statements; use parametrized SQL. Already using prepared statements; each call uses different parameters by clearing and setting parameters. 2) Tune the SQL to minimize the data returned (e.g. not 'SELECT *'). Sure, already done. 3) Use connection pooling. We hold a single connection during the program's execution, and I doubt that pooling can solve the problem, because our program acts as one user, so there is no problem with concurrent access to the DB. If any of you think pooling is good, please tell me why. Thanks. 4) Try to combine queries and batch updates. Cannot do it: every query/insert/update depends on the database's information. For example, we look up the client's information in the DB; if we cannot find his usage, we insert the usage into the DB, otherwise we do an update. 5) Close resources (Connections, Statements, ResultSets) when finished. Sure. 6) Select the fastest JDBC driver. I don't know. I've searched on the internet about the types of driver available and I am very confused. We use oracle.jdbc.driver.OracleDriver, and we use thin instead of oci; that's all I know. In addition, our program is two-tier (Java <-> Oracle). 7) Turn off auto-commit. Already done that. Looking forward to any help, thank you very much.
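    One option the list above doesn't cover (a sketch with hypothetical table and column names): Oracle's MERGE statement collapses the look-up-then-insert-or-update pattern from point 4 into a single statement, so each record costs one round trip instead of two, and the statements can still be batched:

        // Sketch: one MERGE replaces SELECT + (INSERT or UPDATE), halving the
        // round trips per record. Table and column names are hypothetical.
        String sql =
            "MERGE INTO usage_data u " +
            "USING (SELECT ? AS client_id, ? AS amount FROM dual) s " +
            "ON (u.client_id = s.client_id) " +
            "WHEN MATCHED THEN UPDATE SET u.total = u.total + s.amount " +
            "WHEN NOT MATCHED THEN INSERT (client_id, total) " +
            "VALUES (s.client_id, s.amount)";

        PreparedStatement ps = connection.prepareStatement(sql);
        try {
            ps.setString(1, clientId);
            ps.setLong(2, amount);
            ps.executeUpdate();   // or addBatch()/executeBatch() across many records
        } finally {
            ps.close();
        }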

    Read the article
